Article

Abstract

In the context of precision viticulture, remote sensing in the optical domain offers a potential way to map crop structure characteristics, such as vegetation cover fraction, row orientation or leaf area index, that are later used in decision support tools. A method based on RGB imagery acquired with an unmanned aerial vehicle (UAV) is proposed to describe the vineyard 3D macro-structure. The dense point cloud is first extracted from the overlapping RGB images acquired over the vineyard using the Structure from Motion algorithm implemented in the Agisoft PhotoScan software. The terrain altitude extracted from the dense point cloud is then used to derive the 2D distribution of vineyard height. By applying a threshold on the height, the rows are separated from the row spacing. Row height, width and spacing are then estimated, as well as the vineyard cover fraction and the percentage of missing segments along the rows. Results are compared with ground measurements, yielding a root mean square error (RMSE) of 9.8 cm for row height, 8.7 cm for row width and 7 cm for row spacing. The row width, cover fraction and percentage of missing row segments appear to be sensitive to the quality of the dense point cloud. An optimal flight configuration and camera settings are therefore required to retrieve these characteristics with good accuracy.
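To make the processing chain concrete, the minimal sketch below (an illustration, not the authors' implementation) assumes the dense point cloud has already been rasterized into a canopy height map (DSM minus terrain altitude) with rows roughly aligned to the raster columns; the 0.5 m threshold, the 5 cm cell size and the function name `row_metrics` are placeholder assumptions.

```python
# Minimal sketch (not the authors' code): simple row metrics from a canopy
# height raster assumed to be DSM minus terrain altitude, rows along columns.
import numpy as np

def row_metrics(height, cell_size=0.05, h_threshold=0.5):
    """height: 2D canopy height array (m); cell_size: pixel size (m);
    h_threshold: height (m) separating rows from the row spacing."""
    vine_mask = height > h_threshold                    # rows vs. inter-row
    cover_fraction = vine_mask.mean()                   # vineyard cover fraction
    mean_row_height = height[vine_mask].mean() if vine_mask.any() else 0.0

    # Fraction of vine pixels per column: a crude across-row profile used as a
    # proxy to measure row width and inter-row gap along the raster x axis.
    row_cols = vine_mask.mean(axis=0) > 0.5
    changes = np.flatnonzero(np.diff(np.r_[0, row_cols.astype(np.int8), 0]))
    runs = np.diff(changes)                             # alternating row / gap lengths
    widths = runs[::2] * cell_size                      # row (vine) runs
    gaps = runs[1::2] * cell_size                       # inter-row runs

    return {
        "cover_fraction": float(cover_fraction),
        "mean_row_height_m": float(mean_row_height),
        "mean_row_width_m": float(widths.mean()) if widths.size else 0.0,
        # centre-to-centre spacing is approximately row width plus inter-row gap
        "mean_row_spacing_m": float(widths.mean() + gaps.mean()) if gaps.size else float("nan"),
    }
```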

... In this context, remote sensing offers considerable opportunities, with visible (VIS), long-wave infrared (LWIR) and near-infrared (NIR) hyperspectral sensors all providing windows into the complex surface chemistry present in vineyards. Although potentially promising, satellite-based observations are not routinely exploited: imagery from several satellite systems is commercially available, but the spatial resolution of their VIS sensors (5–30 m) is inadequate for mapping small vineyards (Weiss and Baret, 2017). Moreover, the superposition of signal returns from vines and inter-row material within a single pixel makes extraction of desired signatures difficult. ...
... To overcome such shortfalls, a photogrammetric technique known as Structure from Motion (SfM) (Longuet-Higgins, 1981; Furukawa and Ponce, 2010; Hartley, 1997) is often used to obtain dense point clouds (DPC) from the overlapping nature of multiple VIS/RGB images obtained by a UAV. This has been applied to vineyards (Mathews and Jensen, 2013) and other crops (Bendig et al., 2013) to obtain 3D models of terrain and extract fractions of vegetation cover, and to estimate vineyard structure (Weiss and Baret, 2017). Unfortunately, while these approaches are able to discriminate between vines and grass between rows, existing algorithms are sensitive to even slight illumination variations common in real-world settings. ...
... Unfortunately, while these approaches are able to discriminate between vines and grass between rows, existing algorithms are sensitive to even slight illumination variations common in real-world settings. Additionally, in order to obtain good performance, current techniques need flight configurations and camera settings that are optimised (Weiss and Baret, 2017). ...
Article
Full-text available
Remote sensing techniques can be used to identify and classify vine properties such as row width, height, cover-fraction, missing segments and leaf area density, providing opportunities to visualise plant vigour as a spatial function of vineyard geography. This information may then be integrated into decision support tools to improve vineyard management practices. An algorithm for identifying vines from a sequence of overlapping aerial images and then estimating their properties is described. The image stacks were obtained from visible and long wave infrared cameras carried by an unmanned aerial vehicle (UAV). Structure from motion (SfM) was used to create 3D thermal and colourised point clouds, from which the underlying topography of the surface terrain was extracted. The surface topographic model was obtained using bounded data query nearest neighbour calculations, which were reduced to computationally manageable levels using Kd-trees that recursively partitioned the point clouds by progressively separating them into binary trees. This allowed the point clouds to be classified in terms of their hue, saturation, surface temperature and height relative to surface topography using Lloyd’s unsupervised k-means clustering. Individual samples were then associated using Gaussian probability density functions normalised by cluster statistics. The algorithm was evaluated, in terms of its accuracy and robustness, against ground truth obtained from aerial data under a combination of real-world conditions that included high shadowing, poor contrast, and UAV flight paths and camera settings that delivered sub-optimal SfM performance.
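The attribute-clustering stage described here can be approximated with off-the-shelf tools. The sketch below is an illustrative simplification, not the published algorithm: it uses a SciPy cKDTree to estimate local terrain height from the lowest of the k nearest neighbours in the horizontal plane, then clusters hue, saturation, temperature and relative height with scikit-learn's k-means; the attribute names, neighbour count and cluster count are assumptions.

```python
# Illustrative simplification of the clustering stage: KD-tree neighbourhood
# queries for a local terrain proxy, then k-means on per-point attributes.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import KMeans

def classify_points(xyz, hue, sat, temp, n_clusters=4, k_neighbours=50):
    """xyz: (N, 3) point coordinates; hue, sat, temp: (N,) per-point attributes."""
    # Local terrain altitude proxy: lowest z among the k nearest horizontal neighbours.
    tree = cKDTree(xyz[:, :2])
    _, idx = tree.query(xyz[:, :2], k=k_neighbours)
    height = xyz[:, 2] - xyz[idx, 2].min(axis=1)        # height relative to terrain

    # Standardize the attributes so no single one dominates the clustering.
    feats = np.column_stack([hue, sat, temp, height])
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-9)

    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats)
    return labels, height
```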
... Within the PV context, aerial remote sensing in the optical domain offers a potential way to map crop structure, such as vegetation cover fraction, row orientation, or leaf area index. This information can be registered in a non-destructive way and can be later used in decision support tools [8]. Among the remote sensing platforms, Unmanned Aerial Vehicles (UAVs) stand out because of their unprecedented high spatial resolution and flexibility of flight scheduling, which are essential for the accurate and timely monitoring of the crop. ...
... Among the remote sensing platforms, Unmanned Aerial Vehicles (UAVs) stand out because of their unprecedented high spatial resolution and flexibility of flight scheduling, which are essential for the accurate and timely monitoring of the crop. To date, UAVs have been used for a wide range of purposes in PV, such as the assessment of water status [9], disease detection [10], vine canopy characterization [5,8,11,12], and the study of spatial variability in yield and berry composition [13]. The development of new techniques based on UAV imagery is therefore a required target for PV, since UAVs are rapidly replacing other platforms for vineyard monitoring [12]. ...
... To the best of our knowledge, OBIA-based technology has not yet been applied to automatically estimate vine height, and subsequently, to validate the procedure using individual ground truth data. In this context, some authors have estimated vine height by using photogrammetric point clouds from UAV imagery, such as Weiss and Baret [8] and Ballesteros et al. [5]. However, manual intervention was needed in both approaches, which would make the process less time-efficient and less accurate due to errors from a subjective manual process [19]. ...
... Using this technology, spatial information linked both directly and indirectly with canopy characteristics, or information about designated areas, can be recorded in a practical and efficient way. Examples of this information include water status [17], disease detection [18] and canopy characterization [19][20][21][22]. De Castro et al. developed a fully automatic process for vineyard canopy characterization [14], which self-adapts to varying crop conditions. ...
... In order to build the vigor map at each canopy stage, an orthophoto map with a ground sample distance (GSD) of 6.33 cm pixel⁻¹ was obtained from spectral images acquired with the camera. The orthophoto map was radiometrically calibrated using four grayscale standards (22, 32, 44 and 51% grayscale reflectance), placed in the field during the flight, to transform grayscale 12-bit digital numbers to reflectance values. These data were then used to calculate the normalized difference vegetation index (NDVI) [31] (Equation (1)). ...
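For reference, the NDVI referred to as Equation (1) in this excerpt is the standard normalized difference of near-infrared and red reflectance; a minimal sketch, assuming calibrated reflectance arrays:

```python
# Standard NDVI from calibrated near-infrared and red reflectance arrays.
import numpy as np

def ndvi(nir, red):
    """nir, red: reflectance arrays of equal shape, values in [0, 1]."""
    return (nir - red) / (nir + red + 1e-9)   # epsilon avoids division by zero
```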
Article
Full-text available
Canopy characteristics are crucial for accurately and safely determining the pesticide quantity and volume of water used for spray applications in vineyards. The inevitably high degree of intraplot variability makes it difficult to develop a global solution for the optimal volume application rate. Here, the design procedure of, and the results obtained from, a variable rate application (VRA) sprayer are presented. Prescription maps were generated after detailed canopy characterization, using a multispectral camera embedded on an unmanned aerial vehicle, throughout the entire growing season in Torrelavit (Barcelona) in four vineyard plots of Chardonnay (2.35 ha), Merlot (2.97 ha), and Cabernet Sauvignon (4.67 ha). The maps were obtained by merging multispectral images with information provided by DOSAVIÑA®, a decision support system, to determine the optimal volume rate. They were then uploaded to the VRA prototype, obtaining actual variable application maps after the application processes were complete. The prototype had an adequate spray distribution quality, with coverage values in the range of 20–40%, and exhibited similar results in terms of biological efficacy on powdery mildew compared to conventional (and constant) application volumes. The VRA results demonstrated an accurate and reasonable pesticide distribution, with potential for reduced disease damage even in cases with reduced amounts of plant protection products and water.
... Thermal-based sensors capture the temperature of the crop's surface, which indicates the plant's stress (both biotic and abiotic) [53]. Generally, digital RGB cameras and LiDAR can be used to quantify 3D metrics, such as plant size and shape, via 3D point clouds with sufficient accuracy for canopy-level assessment [111][112][113][114][115][116][117][118]. ...
... The evolution of canopy structure within and between seasons can be useful to understand the spatial variability within the field and corresponding water requirements. The macro-structure of horticultural crops, such as row height, width, spacing, crop count, the fraction of ground cover, and missing plants, can be identified remotely, which can aid in the allocation of resources [113,122]. The terrain configuration in the form of a digital elevation model (DEM) generated from a digital camera can also enable understanding of the water status in relation to the aspect and slope configuration of the terrain. ...
... Commonly used SfM software packages to process the remote sensing images are Agisoft PhotoScan and Pix4D. The commonly retrieved outputs from the SfM software for assessment of horticultural crops include the orthomosaic, digital surface model (DSM), DEM, and 3D point cloud [113,126,163]. This technique of georeferencing can be applied to any sensor that produces images, e.g., RGB, thermal, or multispectral cameras [126,164,165]. ...
Article
Full-text available
With increasingly advanced remote sensing systems, more accurate retrievals of crop water status are being made at the individual crop level to aid in precision irrigation. This paper summarises the use of remote sensing for the estimation of water status in horticultural crops. The remote measurements of the water potential, soil moisture, evapotranspiration, canopy 3D structure, and vigour for water status estimation are presented in this comprehensive review. These parameters directly or indirectly provide estimates of crop water status, which is critically important for irrigation management in farms. The review is organised into four main sections: (i) remote sensing platforms; (ii) the remote sensor suite; (iii) techniques adopted for horticultural applications and indicators of water status; and, (iv) case studies of the use of remote sensing in horticultural crops. Finally, the authors’ view is presented with regard to future prospects and research gaps in the estimation of the crop water status for precision irrigation.
... Two algorithms (local maxima extraction and threshold selection) were developed ad hoc for the purposes of the project, starting from [28], who proposed a method for crop-row extraction using the 3D point cloud as input. Both methods were implemented in MATLAB R2017b [29] and are based on the premise that high pixel values generally correspond to crop rows. ...
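The sketch below illustrates the general idea of local-maxima row detection under simplifying assumptions (rows aligned with raster columns, placeholder parameters); it is not the cited MATLAB implementation.

```python
# Sketch of crop-row detection by local maxima: rows appear as peaks in the
# across-row mean profile of a height (or vegetation-index) raster.
import numpy as np
from scipy.signal import find_peaks

def detect_row_columns(raster, cell_size=0.05, min_row_spacing=1.5):
    """raster: 2D array with rows roughly aligned to columns."""
    profile = raster.mean(axis=0)                        # across-row profile
    min_distance = int(min_row_spacing / cell_size)      # peaks at least one row apart
    peaks, _ = find_peaks(profile, distance=min_distance,
                          prominence=0.1 * float(profile.max()))
    return peaks                                         # column indices of row centres
```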
... Only Bayesian segmentation on the pear orchard site and the classification algorithms on the tomato site required NDVI, or NDVI jointly with SAVI, to obtain the best results. Therefore, NIR information does not provide any particular additional value in crop row detection, and RGB sensors can perform accurate canopy extraction, as already demonstrated by other authors [15,28], saving time and cost in UAV surveys and processing. ...
... Precision viticulture is already widespread in the world, and recent articles have demonstrated the added value that remote sensing from UAV platforms can give to this sector [36]. Hence, numerous studies can be found in the literature dealing with vine canopy extraction [15,28,37,38]. The results presented in this work had accuracy values similar to those available in the literature. ...
Article
Full-text available
Climate change and competition among water users are increasingly leading to a reduction of water availability for irrigation; at the same time, traditionally non-irrigated crops require irrigation to achieve high quality standards. In the context of precision agriculture, particular attention is given to the optimization of on-farm irrigation management, based on the knowledge of within-field variability of crop and soil properties, to increase crop yield quality and ensure an efficient water use. Unmanned Aerial Vehicle (UAV) imagery is used in precision agriculture to monitor crop variability, but in the case of row-crops, image post-processing is required to separate crop rows from soil background and weeds. This study focuses on the crop row detection and extraction from images acquired through a UAV during the cropping season of 2018. Thresholding algorithms, classification algorithms, and Bayesian segmentation are tested and compared on three different crop types, namely grapevine, pear, and tomato, for analyzing the suitability of these methods with respect to the characteristics of each crop. The obtained results are promising, with overall accuracy greater than 90% and producer’s accuracy over 85% for the class “crop canopy”. The methods’ performances vary according to the crop types, input data, and parameters used. Some important outcomes can be pointed out from our study: NIR information does not give any particular added value, and RGB sensors should be preferred to identify crop rows; the presence of shadows in the inter-row distances may affect crop detection on vineyards. Finally, the best methodologies to be adopted for practical applications are discussed.
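For clarity, the overall and producer's accuracies quoted in this abstract follow from a standard confusion matrix; the sketch below shows the generic definitions (the counts are made up for illustration, not the paper's data).

```python
# Overall accuracy and producer's accuracy from a confusion matrix whose rows
# are reference (ground-truth) classes and whose columns are predicted classes.
import numpy as np

def accuracies(confusion):
    confusion = np.asarray(confusion, dtype=float)
    overall = np.trace(confusion) / confusion.sum()
    producers = np.diag(confusion) / confusion.sum(axis=1)   # per reference class
    return overall, producers

# Made-up example with two classes: "crop canopy" and "background"
oa, pa = accuracies([[850, 150],
                     [ 60, 940]])
```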
... In addition, three-dimensional digital models, including vineyard point clouds and digital canopy models (DCM), can be created from overlapping images captured by the UAV (Comba et al., 2018). From vineyard digital models, parameters such as canopy height, projected area and volume can be calculated, providing detailed information regarding the canopy structure (Matese & Di Gennaro, 2018; Weiss & Baret, 2017). Compared with manned aircraft and satellite-based remote sensing, UAVs also offer convenience in simple flight preparation and flexible operation options (Khaliq et al., 2019) and are more cost-effective for small and medium-size vineyards (Andújar et al., 2019; Matese et al., 2015). ...
... Canopy volume measurements can also be obtained immediately after canopy management practices are applied without the interference of the trimmed shoots or leaves on the ground, unlike NDVI and projected canopy volume measures. In addition, although no ground vegetation (weed or cover crop) grew in the study vineyard, it has been found that DCM can also filter out the interference from ground vegetation as only the partial volume of the DCM that was above the cordon height was included (Vanegas et al., 2018;Weiss & Baret, 2017). With these advantages, canopy volume calculated from DCM displayed potential as a suitable tool for monitoring canopy management outcomes. ...
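A minimal sketch of the volume calculation described in this excerpt, under the assumption that the digital canopy model (DCM) is a per-pixel height-above-ground raster and that only the part above the cordon height contributes; the 1 m cordon height and 5 cm pixel size are placeholders.

```python
# Canopy volume from a digital canopy model (DCM): integrate only the height
# above the cordon so that ground vegetation is excluded.
import numpy as np

def canopy_volume(dcm, cell_size=0.05, cordon_height=1.0):
    """dcm: 2D array of canopy height above ground (m)."""
    above = np.clip(dcm - cordon_height, 0.0, None)   # keep only height above the cordon
    return float(above.sum()) * cell_size ** 2        # volume in cubic metres
```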
Article
Full-text available
Aim: To analyse unmanned aerial vehicle (UAV)-based imagery to assess canopy structural changes after the application of different canopy management practices in the vineyard. Methods and results: Four different canopy management practices: i–ii) leaf removal within the bunch zone (eastern side/both eastern and western sides), iii) bunch thinning and iv) shoot trimming were applied to grapevines at veraison, in a commercial Cabernet-Sauvignon vineyard in McLaren Vale, South Australia. UAV-based imagery captures were taken: i) before the canopy treatments, ii) after the treatments and iii) at harvest to assess the treatment outcomes. Canopy volume, projected canopy area and normalized difference vegetation index (NDVI) were derived from the analysis of RGB and multispectral imagery collected using the UAV. Plant area index (PAI) was calculated using the smartphone app VitiCanopy as a ground-based measurement for comparison with UAV-derived measurements. Results showed that all three types of UAV-based measurements detected changes in the canopy structure after the application of canopy management practices, except for the bunch thinning treatment. As expected, ground-based PAI was the only technique to effectively detect internal canopy structure changes caused by bunch thinning. Canopy volume and PAI were found to better detect variations in canopy structure compared to NDVI and projected canopy area. The latter were negatively affected by the interference of the trimmed shoots left on the ground. Conclusions: UAV-based tools can provide accurate assessments to some canopy management outcomes at the vineyard scale. Among different UAV-based measurements, canopy volume was more sensitive to changes in canopy structure, compared to NDVI and projected canopy area, and demonstrated a greater potential to assess the outcomes of a range of canopy management practices. Significance and impact of the study: Canopy management practices are widely applied to regulate canopy growth, improve grape quality and reduce disease pressure in the bunch zone. Being able to detect major changes in canopy structure, with some limitations when the practice affects the internal structure (i.e., bunch thinning), UAV-based imagery analysis can be used to measure the outcome of common canopy management practices and it can improve the efficiency of vineyard management.
... A point cloud is a large dataset of points, referred to a geodetic reference frame, representing spots on the external surface of visible objects where light is reflected. Data for 3D crop modelling can be provided directly by a laser scanner (such as light detection and ranging systems, LiDAR) [19] or derived from multispectral and thermal imagery [20][21][22] by photogrammetry and computer vision algorithms, such as Structure from Motion (SfM). Several published studies have demonstrated the potential of this new type of crop model for monitoring and task assessment in agriculture by developing reliable algorithms that exploit 3D data to describe vineyard macro-structure [22], optimize irrigation [23] and estimate yield [24]. ...
Article
Addressing the intrinsic variability within vineyards is a key factor in performing precision viticulture management. To this aim, new and more reliable methods for vineyard monitoring must be defined. The introduction of Unmanned Aerial Vehicle (UAV) airborne sensors makes available a considerable amount of data at very high resolution, in both the spatial and temporal dimensions. In this work, a data fusion approach for vigour characterization in vineyards is presented, which exploits the information provided by 2D multispectral aerial imagery, 3D point cloud crop models and aerial thermal imagery. A crucial phase of the procedure is the proper management of data provided by several sources, to achieve high consistency of the obtained huge dataset. The enhanced effectiveness of the proposed method in classifying vines into different vigour classes by exploiting multi-source data was proved by an experimental campaign considering 30 portions of vine rows, each made up of 8 vines. Results showed that the discriminant analysis using data fusion achieved an improvement in error ranging from 67% to 90% with respect to a single data source, with a misclassification error rate of 3%.
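As a generic illustration of discriminant analysis on fused features (not the published pipeline), per-section descriptors from the multispectral, 3D and thermal sources can be concatenated and classified with scikit-learn's linear discriminant analysis; the feature names and cross-validation scheme are assumptions.

```python
# Generic sketch: vigour classification by discriminant analysis on fused
# multispectral (NDVI), 3D (canopy volume) and thermal (canopy temperature)
# features, one feature vector per vine-row section.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def classify_vigour(ndvi, canopy_volume, canopy_temp, vigour_labels):
    """Each argument is a 1D array with one value (or label) per section."""
    features = np.column_stack([ndvi, canopy_volume, canopy_temp])   # data fusion
    lda = LinearDiscriminantAnalysis()
    accuracy = cross_val_score(lda, features, vigour_labels, cv=5).mean()
    return lda.fit(features, vigour_labels), 1.0 - accuracy          # model, error rate
```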
... The DTM can also be estimated by the DEM [29]. The DEM model gives consistent information about the physical characteristics of the field, such as biomass [30][31][32][33]. Also, from this model, it is possible to correctly identify vines when the bare soil elevation is known, which is sometimes trivial to retrieve but, in other cases, requires considerable supervised work. ...
... Weiss and Baret [33] have used the terrain altitude extracted from the dense point cloud to get the 2D distribution of height of the vineyard. By applying a threshold on the height, the rows were separated from the row spacing. ...
Article
Full-text available
Technical resources are currently supporting and enhancing the ability of precision agriculture techniques in crop management. The accuracy of prescription maps is a key aspect to ensure a fast and targeted intervention. In this context, remote sensing acquisition by unmanned aerial vehicles (UAV) is one of the most advanced platforms to collect imagery of the field. Besides the imagery acquisition, canopy segmentation among soil, plants and shadows is another practical and technical aspect that must be fast and precise to ensure a targeted intervention. In this paper, algorithms to be applied to UAV imagery are proposed according to the sensor used, which can be either visible-spectrum or multispectral. These algorithms, called HSV-based (Hue, Saturation, Value), DEM (Digital Elevation Model) and K-means, are unsupervised, i.e., they perform canopy segmentation without human support. They were tested and compared in three different scenarios obtained from two vineyards over two years, 2017 and 2018, for RGB (Red-Green-Blue) and NRG (Near Infrared-Red-Green) imagery. Particular attention is given to the unsupervised ability of these algorithms to identify vines in these different acquisition conditions. This ability is quantified by the introduction of over- and under-estimation indexes, which measure the algorithm’s tendency to over-estimate or under-estimate vine canopies. For RGB imagery, the HSV-based algorithms consistently over-estimate vines and never under-estimate them. The k-means and DEM methods have a similar trend of under-estimation, while for NRG imagery, the HSV is the more stable algorithm and the DEM model slightly over-estimates the vines. HSV-based algorithms and the DEM algorithm have comparable computation time. The k-means algorithm increases computational demand as the quality of the DEM decreases. The algorithms developed can isolate canopy vegetation data, which provides useful information about the current vineyard state and can be used as a tool to be efficiently applied in crop management procedures within precision viticulture applications.
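A minimal sketch of an HSV-based canopy mask of the kind described in this abstract; the hue, saturation and value thresholds are placeholder assumptions, not the published settings.

```python
# Sketch of HSV-based canopy segmentation on an RGB orthomosaic: convert to
# HSV and keep pixels that are green enough, saturated enough and bright enough.
import numpy as np
from matplotlib.colors import rgb_to_hsv

def hsv_canopy_mask(rgb, hue_range=(0.20, 0.45), min_sat=0.15, min_val=0.15):
    """rgb: (H, W, 3) array with values in [0, 1]."""
    hsv = rgb_to_hsv(rgb)
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return (h >= hue_range[0]) & (h <= hue_range[1]) & (s >= min_sat) & (v >= min_val)
```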
Article
Full-text available
The identification of Chinese medicinal plants has relied in the past on ampelographic manual assessment by experts. However, more recently, machine learning algorithms for pattern recognition have been successfully applied to leaf recognition in other plant species. These new tools make the classification of Chinese medicinal plants easier, more efficient and cost-effective. This paper showed comparative results between machine learning models obtained from two methods: i) a morpho-colorimetric method and ii) a visible (VIS)/near-infrared (NIR) spectral analysis from sampled leaves of 20 different Chinese medicinal plants. Specifically, the automated image analysis and VIS/NIR spectral-based parameters obtained from leaves were used separately as inputs to construct customized artificial neural network (ANN) models. Results showed that the ANN model developed using the morpho-colorimetric parameters as inputs (Model A) had an accuracy of 98.3% in the classification of leaves for the 20 medicinal plants studied. In the case of the model based on spectral data from leaves (Model B), the ANN model obtained using the averaged VIS/NIR spectra per leaf as inputs showed 92.5% accuracy for the classification of all medicinal plants used. Model A has the advantage of being cost-effective, requiring only a normal document scanner as the measuring instrument. This method can be adapted for non-destructive assessment of leaves in situ by using portable wireless scanners. Model B combines the fast, non-destructive advantages of VIS/NIR spectroscopy, which can be used for rapid and non-invasive identification of Chinese medicinal plants and other applications by analyzing specific light spectral overtones from leaves to assess concentrations of pigments such as chlorophyll, anthocyanins and others that are related to active compounds in the medicinal plants.
... Some studies have retrieved winter wheat height and vineyard structure using UAV-based point cloud data [29], [30]. However, to the best of our knowledge, there has been no reported application of UAV-based 3D crop structural information for winter wheat LAIe estimation. ...
... This method can be successfully applied to LAIe monitoring and estimation between the leaf development and stem elongation stages, as shown in this study (BBCH 20-39). LAI information from these stages is valuable for winter wheat growth modeling and final grain yield forecasting [30]. Information on LAIe could help end-users identify the growth status of crops and make early decisions on agricultural management strategies. ...
Article
Full-text available
Within-field variation of leaf area index (LAI) plays an essential role in field crop monitoring and yield forecasting. Although unmanned aerial vehicle (UAV)-based optical remote sensing methods can overcome the spatial and temporal resolution limitations associated with satellite imagery for fine-scale within-field LAI estimation of field crops, image correction and calibration of UAV data are very challenging. In this study, a physical-based method was proposed to automatically calculate crop effective LAI (LAIe) using UAV-based 3D point cloud data. Regular high-spatial-resolution RGB images were used to generate point cloud data for the study area. The proposed method, Simulated Observation of Point Cloud (SOPC), was designed to obtain the 3D spatial distribution of vegetation and bare ground points and calculate the gap fraction and LAIe from a UAV-based 3D point cloud dataset at vertical, 57.5°, and multi-view angles over a winter wheat field in London, Ontario, Canada. Results revealed that the LAIe derived using the SOPC multi-view-angle (SOPC-M) method correlates well with the LAIe derived from ground digital hemispherical photography (DHP), R2 = 0.76. The root mean square error (RMSE) and mean absolute error (MAE) for the entire experiment period from May 11 to May 27 were 0.19 and 0.14, respectively. The newly proposed method performs well for LAIe estimation during the main leaf development stages (BBCH 20-39) of the growth cycle. This method has the potential to become an alternative approach for crop LAIe estimation without the need for ground-based reference measurements, hence saving time and money.
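The gap-fraction inversion behind this kind of LAIe estimate can be written compactly using the textbook Beer-Lambert relation (not necessarily the exact SOPC formulation): at a 57.5° view zenith angle the leaf projection function G is close to 0.5 for any leaf angle distribution, which is why that angle is favoured.

```python
# Textbook gap-fraction inversion: P(theta) = exp(-G * LAI / cos(theta)),
# hence LAIe = -ln(P) * cos(theta) / G, with G ~ 0.5 at theta = 57.5 degrees.
import math

def laie_from_gap_fraction(gap_fraction, view_zenith_deg=57.5, G=0.5):
    theta = math.radians(view_zenith_deg)
    return -math.log(gap_fraction) * math.cos(theta) / G

# Example: a gap fraction of 0.30 at 57.5 degrees gives an LAIe of about 1.29.
```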
... A photogrammetric dense point cloud has a point density that depends on the image spatial resolution and overlap level, but at a consistently lower cost than a LiDAR one. These advantages have led to an increasing interest in this technology, and in the last few years several studies have utilized dense point clouds from SfM in vineyards for different applications, e.g. Ballesteros et al. [3,14,30]. ...
... False negatives that indicated gaps wrongly classified as vines only occurred in one field, with 3.2%. Weiss and Baret [30] sampled 20 sites called elementary sampling units (ESU) covering a 10 m square area. The percentage of missing segments of rows for each ESU was computed and results showed that when the percentage of missing row segments and percentage of missing pixels are low (ESUs 1 to 4, 13, 16, 17), a very good consistency between the two methods and ground measurements is observed. ...
Article
Full-text available
Background: The knowledge of vine vegetative status within a vineyard plays a key role in canopy management in order to achieve a correct vine balance and reach the final desired yield/quality. Detailed information about canopy architecture and missing plant distribution provides useful support for farmers/winegrowers to optimize canopy management practices and the replanting process, respectively. In the last decade, there has been a progressive diffusion of UAV (Unmanned Aerial Vehicle) technologies for Precision Viticulture purposes, as fast and accurate methodologies for mapping the spatial variability of geometric plant parameters. The aim of this study was to implement an unsupervised and integrated procedure of biomass estimation and missing plant detection, using both the 2.5D-surface and 3D-alphashape methods. Results: Both methods showed good overall accuracy with respect to ground truth biomass measurements, with high values of R2 (0.71 and 0.80 for 2.5D and 3D, respectively). The 2.5D method led to an overestimation, since it is derived by considering the vine as a rectangular cuboid. On the contrary, the 3D method provided more accurate results as a consequence of the alphashape algorithm, which is capable of detecting each single shoot and holes within the canopy. Regarding missing plant detection, the 3D approach confirmed better performance in cases where plants were hidden by shoots of adjacent plants or where the canopy was sparse with empty spaces along the row; in these situations, the 2.5D method, based on the length of row sections with thickness lower than the threshold used (0.10 m), tended to return false negatives and false positives, respectively. Conclusions: This paper describes a rapid and objective tool for the farmer to promptly identify canopy management strategies and drive replanting decisions. The 3D approach provided results closer to the real canopy volume and higher performance in missing plant detection. However, the dense-cloud-based analysis required more processing time. In a future perspective, given the continuous technological evolution in terms of computing performance, overcoming the current limit represented by the pre- and post-processing phases of the large image dataset should mainstream this methodology.
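A minimal sketch of the 2.5D missing-plant criterion mentioned above (not the published code): scan the canopy thickness profile along a row and flag sections that stay below the 0.10 m threshold for longer than an assumed minimum gap length.

```python
# Sketch of 2.5D missing-plant detection along a row: flag row sections whose
# canopy thickness stays below a threshold for at least a minimum gap length.
# The 0.5 m minimum gap and 5 cm cell size are placeholder assumptions.
import numpy as np

def missing_segments(thickness, cell_size=0.05, thr=0.10, min_gap_m=0.5):
    """thickness: 1D profile of canopy thickness (m) sampled along the row."""
    below = np.r_[0, (thickness < thr).astype(np.int8), 0]
    edges = np.flatnonzero(np.diff(below))               # starts and ends of thin runs
    starts, ends = edges[::2], edges[1::2]
    lengths_m = (ends - starts) * cell_size
    keep = lengths_m >= min_gap_m                        # ignore very short dips
    return list(zip(starts[keep] * cell_size, ends[keep] * cell_size))  # gap extents (m)
```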
... One of the main issues in row-planted crops is individual plant characterization, which requires the isolation of pixels belonging to canopies from those belonging to soil or any other vegetation material such as weeds or cover crops. One common approach is to differentiate the spectral properties of rows and inter-rows, which requires multispectral cameras; it was first explored by applying a threshold on vegetation indices computed from multispectral images [9][10][11] and, more recently, with machine learning algorithms [12,13]. The analysis based on spectral bands can suffer from several problems: the same plant material presenting different spectra (different growth levels), the separation of plant material from shade, different plant material with similar spectra (such as weeds or cover crops in the inter-row), and mixed soil/plant information per pixel, depending on the specific spatial resolution of the imagery [14]. ...
... Grigorijs et al. [25] used point clouds derived from SfM matching techniques obtained from UAS to detect individual trees, measure tree heights, and provide RGB estimates in Australian tropical savannas. Marie and Frédéric [11] developed an algorithm for vineyard structural characteristic estimation based on dense point clouds derived from RGB images acquired with a UAV. Finally, Su et al. [22] used DSMs acquired from UAV RGB images to assess damaged and missing grapevine canopies affected by frost. ...
Article
Full-text available
Information about canopy vigor and growth is critical to assess the potential impacts of biotic or abiotic stresses on plant development. By applying a Digital Surface Model (DSM) to imagery obtained using Unmanned Aerial Vehicles (UAV), it is possible to filter canopy information effectively based on height, which provides an efficient method to discriminate the canopy from soil and lower vegetation such as weeds or cover crops. This paper describes a method based on the DSM to assess CG as well as missing plants in a kiwifruit orchard on a plant-by-plant scale. The DSM was initially extracted from the overlapping RGB aerial imagery acquired over the kiwifruit orchard using the Structure from Motion (SfM) algorithm. An adaptive threshold algorithm was implemented using the height difference between soil/lower plants and kiwifruit canopies to identify plants and extract canopy information on a non-regular surface. Furthermore, a customized algorithm was developed to discriminate single kiwifruit plants automatically, which allowed the estimation of individual canopy cover fractions (fc). By applying differential fc thresholding, four categories of CG were determined automatically: i) missing plants, ii) low vigor, iii) moderate vigor and iv) vigorous. Results were validated by a detailed visual inspection on the ground, which rendered an overall accuracy of 89.5% for the proposed method of assessing CG at the plant-by-plant level. Specifically, the accuracy was 94.1% for CG category i), 85.1% for ii), and 86.7% and 88.0% for iii) and iv), respectively. The proposed method also proved appropriate for filtering out weeds and other smaller non-plant materials, which is extremely difficult to do using common color thresholding or edge identification methods. This method can be applied to a number of agricultural operations.
... This information can subsequently be used in decision support tools, such as guiding a harvesting robot in the field. Image processing techniques have been applied to crop row images in order to define the directrix of a vehicle [1], [2], [6]–[8]. In [6], the captured color image is clustered by a mean-shift algorithm. ...
... However, this method reports large deviations due to missing and/or not fully developed plants. Another recent method [8] is also insensitive to weeds. First, a pre-processing step derives the binary image, and then a main step characterizes the macro-structure of the crop rows. ...
... There is no need to use more predictors for a light model (i.e., as in [25]) if the backbone network has sufficient representational power, since more predictors bring more computational complexity. In our experiment, we stick to this argument and run k-means over our datasets to pick prior boxes for our model. As in YOLOv3-tiny, we first set k = 6 and obtain a box array of (23,23), (35,36), (48,49), (64,66), (90,91), (147,157), referred to as "def-anchors" in the later sections, with all images normalized to (416,416). Since the smallest box is bigger than the high-resolution cell grid (16,16) in both width and height, we use another box array of (10,14), (27,23), (37,58), (75,64), (93,104), (187,163), referred to as "cust-anchors", to see what happens if there is a prior box smaller than the smaller cell grid. ...
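The sketch below shows a plain k-means over normalized box widths and heights, which is one simple way to pick such prior boxes; it is not the authors' exact procedure (YOLO implementations typically use an IoU-based distance rather than the Euclidean distance used here).

```python
# Generic anchor selection: k-means over (width, height) of the training boxes,
# already scaled to the 416x416 network input, with anchors sorted by area.
import numpy as np
from sklearn.cluster import KMeans

def pick_anchors(box_wh, k=6):
    """box_wh: (N, 2) array of box widths and heights in input-image pixels."""
    centres = KMeans(n_clusters=k, n_init=10, random_state=0).fit(box_wh).cluster_centers_
    return centres[np.argsort(centres.prod(axis=1))]     # small to large anchors
```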
... Gao, P. et al. [44] applied a machine learning approach to recognize spray areas from unmanned aerial vehicles. Weiss, M. and Baret, F. [49] used 3D point clouds from RGB images to describe vineyard structure. Hobart, M. et al. [50] estimated the growth height of tree walls in apple orchards. ...
Preprint
Full-text available
In precision crop protection, (target-orientated) object detection in image processing can help navigate Unmanned Aerial Vehicles (UAV, crop protection drones) to the right place to apply the pesticide, so that unnecessary application to non-target areas can be avoided. Deep learning algorithms dominate modern computer vision tasks but require high computing time, memory footprint, and power consumption. Based on Edge Artificial Intelligence, we investigate the three main paths to dealing with this problem: hardware accelerators, efficient algorithms, and model compression. Finally, we integrate them and propose a solution based on a light deep neural network (DNN), called Ag-YOLO, which gives the crop protection UAV the ability to perform target detection and operate autonomously. This solution is restricted in size and cost, and is flexible, fast, and energy-efficient. The hardware weighs only 18 grams and consumes 1.5 watts, and the developed DNN model needs only 838 kilobytes of disc space. We tested the developed hardware and software against the tiny version of the state-of-the-art YOLOv3 framework, known as YOLOv3-Tiny, to detect individual palms in a plantation. An average F1 score of 0.9205 at a speed of 36.5 frames per second was reached (compared to similar accuracy at 18 frames per second and 8.66 megabytes for the YOLOv3-Tiny algorithm). This detection system can easily be plugged into any machines already purchased, as long as the machines have USB ports and run the Linux operating system.
... Thus, providing this type of data throughout the different grapevine phenological stages would benefit winegrowers in assessing a grapevine's canopy spatial variation [46,47]. Weiss and Baret [48] used dense point clouds, generated through photogrammetric processing of UAV-based RGB imagery, to characterize a vineyard's properties, such as grapevine row height, width, spacing, and cover fraction. Matese et al. [21] used a multispectral sensor mounted on a UAV to assess the photogrammetric processing of multispectral imagery. ...
... Besides grapevine vegetation, inter-row vegetation and shadows cast by grapevine canopies are two examples of elements usually present in vineyard aerial imagery [58]. To automatically separate grapevine vegetation in aerial high-resolution imagery acquired by UAVs, different approaches have been proposed in the literature: digital image processing-based techniques [59,60], supervised and unsupervised machine learning classification techniques [61], filtering of point clouds obtained from SfM methods [48], and the use of DEMs [20,58]. ...
Article
Full-text available
This study aimed to characterize vineyard vegetation through multi-temporal monitoring using a commercial low-cost rotary-wing unmanned aerial vehicle (UAV) equipped with a consumer-grade red/green/blue (RGB) sensor. Ground-truth data and UAV-based imagery were acquired on nine distinct dates, covering the most significant vegetative growing cycle until harvesting season, over two selected vineyard plots. The acquired UAV-based imagery underwent photogrammetric processing resulting, per flight, in an orthophoto mosaic used for vegetation estimation. Digital elevation models were used to compute crop surface models. By filtering vegetation within a given height range, it was possible to separate grapevine vegetation from other vegetation present in a specific vineyard plot, enabling the estimation of grapevine area and volume. The results showed high accuracy in grapevine detection (94.40%) and low error in grapevine volume estimation (root mean square error of 0.13 m and correlation coefficient of 0.78 for height estimation). The accuracy assessment showed that the proposed method based on UAV-based RGB imagery is effective and has potential to become an operational technique. The proposed method also allows the estimation of grapevine areas that can potentially benefit from canopy management operations.
... Some publications may be incorrectly eliminated as a result of manual selection. Finally, we want to express our gratitude to the VOSviewer tool and SciMAT for offering the data visualization functionalities utilized in this study. ...
... Growers must consider vine density, row spacing, and direction while creating a new vineyard so that remote sensing would be simple [111]. Canopy management is one of the important measures to be taken by growers. ...
Article
Full-text available
This review focuses on the use of unmanned aerial vehicles (UAVs) in precision agriculture, and specifically in precision viticulture (PV), and is intended to present a bibliometric analysis of their developments in the field. To this aim, a bibliometric analysis of research papers published in the last 15 years is presented, based on the Scopus database. The analysis shows that researchers from the United States, China, Italy and Spain lead precision agriculture through UAV applications. In terms of employing UAVs in PV, researchers from Italy are rapidly extending their work, followed by Spain and, finally, the United States. Additionally, the paper provides a comprehensive study of popular journals for academics to submit their work to, accessible funding organizations, and popular nations, institutions, and authors conducting research on utilizing UAVs for precision agriculture. Finally, this study emphasizes the necessity of using UAVs in PV as well as future possibilities.
... In general, water resources for the more abundant crop are expected to be available quickly, resulting in a shortage of agricultural crop water. The crop structure, such as width, height, spacing, missing plants, and ground cover fraction, can be identified remotely (Castro et al. 2018; Weiss and Baret 2017). ...
... Pix4D and Agisoft PhotoScan were used to process the UAV images. The standard outputs of SfM software include the digital surface model (DSM), orthomosaic, and 3D point cloud (Weiss and Baret 2017; Turner et al. 2011). Different spectral and thermal indicators can be calculated from the equation below (Jones 2013; Costa et al. 2013). ...
Article
Water management is becoming a critical issue for sustainable agriculture, especially in semi-arid regions, where problems with water scarcity are rising. More accurate retrieval of crop water status is required for precise irrigation through remote sensing technologies. These technologies have a lot of potential in intelligent irrigation because they allow for real-time environmental data collection. Nowadays, digital practices such as unmanned aerial vehicles (UAV) play an essential role in various applications related to crop management. Drones offer an exciting opportunity to track crop fields with high spatial and temporal resolution remote sensing to enhance water stress management in irrigation. Farmers have historically depended on soil moisture measurements and weather conditions to detect crop water status for irrigation scheduling. This review paper summarizes the use of UAV remote sensing data in crops for estimating water status and gives a detailed summary of the potential capacity of UAV remote sensing for water stress applications. The remote sensing techniques help modify agricultural practices to meet this significant challenge by providing repeated information on crop status at different scales throughout the season. The successful implementation of UAVs in water stress estimation depends on UAV features, such as flexibility of use in flight planning, low cost, reliability, autonomy, and the capability of timely provision of high-resolution data. A UAV with a thermal sensor is considered the most effective technique for detecting water stress using specific indices. Thermal imaging can identify water status variations through the crop water stress index (CWSI). The CWSI acquired through UAV thermal imagery can be acceptable for managing real-time irrigation to achieve optimum crop water efficiency.
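For reference, the CWSI mentioned in this abstract is conventionally computed from canopy temperature against wet (non-stressed) and dry (non-transpiring) reference temperatures; the sketch below shows that standard definition, not a sensor-specific workflow.

```python
# Standard CWSI: 0 for a fully transpiring (unstressed) canopy, 1 for a
# non-transpiring (fully stressed) canopy.
import numpy as np

def cwsi(t_canopy, t_wet, t_dry):
    """Temperatures in the same units (e.g., deg C); arrays or scalars."""
    return np.clip((t_canopy - t_wet) / (t_dry - t_wet), 0.0, 1.0)
```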
... A typical application of RPA technology is to acquire high-resolution visible, spectral, infrared or thermal imagery at a low altitude to achieve large-scope farmland monitoring and hazard prediction (GAŠPAROVIĆ et al., 2020; PANDAY et al., 2020). RPA imaging has been used to detect weeds (DE CASTRO et al., 2020; TANUT; RIYAMONGKOL, 2020) and plant diseases (MATESE; DI GENNARO, 2018), perform plant count estimations (KOH et al., 2019), and characterize plant dimensions (DE CASTRO et al., 2020; WEISS; BARET, 2017). Another use of RPA imagery is the automation of locations for soil sampling, based on a soil map created from RPA imaging after plowing, and a wearable augmented reality smart glass to assist the user in collecting soil samples (HUUSKONEN; OKSANEN, 2018). ...
Article
Full-text available
Site-specific management practices have been possible due to the wide range of solutions for data acquisition and interventions at the field level. Different approaches have to be considered for data collection, like dedicated soil and plant sensors, or even associated with the capacity of the agricultural machinery to generate valuable data that allows the farmer or the manager to infer the spatial variability of the fields. However, high computational resources are needed to convert extensive databases into useful information for site-specific management. Thus, technologies from industry, such as the Internet of Things and Artificial Intelligence, applied to agricultural production, have supported the decision-making process of precision agriculture practices. The interpretation and the integration of information from different sources of data allow enhancement of agricultural management due to its capacity to predict attributes of the crop and soil using advanced data-driven tools. Some examples are crop monitoring, local applications of inputs, and disease detection using cloud-based systems in digital platforms, previously elaborated for decision-support systems. In this review, we discuss the different approaches and technological resources, popularly named as Agriculture 4.0 or digital farming, inserted in the context of the management of spatial variability of the fields considering different sources of crop and soil data.
... With the increased availability of 3D sensing approaches, researchers are starting to frequently use 3D imaging techniques for measuring size-related traits 14,19. Previous studies intensively investigated size-related traits at the plant and canopy levels for tree and shrub crops such as apples 20, pears 20, grapes [20][21][22], hickories 23, olives 24, almonds 25, peaches 26, and blueberries 27. These studies showed a general trend that the accuracy of crop size measurement mostly depended on point cloud quality, which is determined by sensing range and imaging approaches. ...
Article
Full-text available
Crop breeding: 3D measurement of blueberry bush shape Researchers in the United States have developed a 3D imaging technique to quantitatively measure the shape of blueberry bushes in order to evaluate their suitability for mechanical harvesting. The team, led by Changying Li of the University of Georgia, used handheld LiDAR scanners to collect 3D data from bushes in blueberry fields. Their analysis pipeline converted these data into a description of the bushes, including height, width, volume, crown size, and parameters describing the shape. While some traits matched manual measurements better than others, the analysis described bush shape sufficiently to distinguish varieties. The team also created a tool to visualize key parameters related to suitability for mechanical harvesting. This study provides a promising tool to evaluate and manage varieties of blueberries and similar crops, though further work is needed to speed up data collection.
... In this way, the application of digital image processing methods emerges as an alternative to traditional methods for the identification of grapevines. With this purpose, several methods have been proposed by various authors, based on imagery acquired by UAV or on products derived from it through photogrammetric processing, for vineyard segmentation: using digital image processing techniques (Comba et al. 2015; Nolan et al. 2015), using supervised and unsupervised machine learning methods (Poblete-Echeverría et al. 2017), using point clouds derived from photogrammetric processing (Weiss & Baret, 2017), and using DEMs (Baofeng et al., 2016; Burgos et al., 2015; Kalisperakis et al., 2015). ...
Conference Paper
Full-text available
Keywords: unmanned aerial vehicles, photogrammetric processing, Vitis vinifera, digital image processing, digital elevation models, vegetation indices. Abstract: Grapevine quality and development are related to spatial heterogeneity, which depends on several factors directly associated with the vineyard and has a direct impact on production and its quality. These parameters can lead to phytosanitary problems which, depending on their severity, may result in a significant decrease in production, causing considerable economic losses. Recent technological developments have allowed farmers and winegrowers to improve the decision-making process. Unmanned aerial vehicles are capable of acquiring georeferenced data with high spatial resolution, using different types of sensors, which enable a variety of products to be obtained. However, a plot is occupied by several objects other than grapevines. Digital image processing techniques emerge as a viable methodology for the automatic detection of the plot areas occupied by grapevines. This study presents a method for the automatic segmentation of grapevines and for obtaining vineyard parameters by combining digital elevation models with vegetation indices. Among these parameters, grapevine area, canopy volume, vegetative vigour and plant height stand out. This type of analysis is fundamental, significantly assisting the decision-making process in viticulture.
... For example, because of UASs' flexibility in the temporal dimensions of data acquisition, UASs can be used to study phenological variability due to climate change at broad scales. While data collection from aircraft is costly, it overcomes the limitations of traditional satellite sensors where data acquisition is bounded by the orbital characteristics of the platform (Weiss and Baret 2017). Therefore, the capabilities of UASs can help in planning flights very close to phenological stages important for detection of plant stress, impacts of climate change, or preharvest characterization (Burkart et al. 2017). ...
Chapter
Full-text available
Volumetric estimation using traditional methods is resource-intensive and often limited by project timelines and resource availability. Unmanned aerial systems (UAS) and the miniaturization of remote sensing devices have proven to be cost-effective tools that allow for highly accurate three-dimensional (3D) mapping. In this study, we collected UAS photographs of wood chip piles at a fiberboard plant in August and September 2016 and processed them using 3D volume models generated from multiple structure from motion (SfM) programs. We then compared these models to those collected using a more traditional ground survey method. Of the software programs explored, the Pix4D Mapper Pro program outperformed the Agisoft and Maps Made Easy programs in terms of accuracy. This program provided the lowest standard errors while closely matching the volume estimates produced from the ground survey. In terms of simplicity and ease of use, the Maps Made Easy application outperformed all others. Across all applications, we found that the standard error of the wood chip volume decreased as the number of photos used in the image processing increased. This chapter provides a review of different platforms for processing UAS data and suggests best practices for creating the most accurate outputs. https://www.crcpress.com/High-Spatial-Resolution-Remote-Sensing-Data-Analysis-and-Applications/He-Weng/p/book/9781498767682
... Unmanned Aerial Vehicles (UAVs) have been widely used to carry remote sensing devices due to their flexibility of flight scheduling, versatility and affordable management. Spatial information directly or indirectly linked with canopy characteristics, or information about designated areas, such as water status (Baluja et al. 2012), disease detection (Albetis et al. 2017) and canopy characterization (Ballesteros et al. 2015; Weiss and Baret 2017; Mathews and Jensen 2013; Poblete-Echeverría et al. 2017), can be recorded in a practical and efficient way. De Castro et al. (2018) developed a fully automatic process for vineyard canopy characterization that self-adapts to different crop conditions, representing an important improvement in the canopy characterization process and generating a time-efficient, reliable and accurate method that avoids potential errors inherent to the manual process. ...
Article
Full-text available
Site-specific management of crops represents an important improvement in terms of the efficiency and efficacy of different operations, and its implementation has experienced a large development in the last decades, especially for field crops. The particular case of the spray application process for what are called “specialty crops” (vineyard, orchard fruits, citrus, olive trees, etc.) represents one of the most controversial and influential actions, directly related to economic, technical, and environmental aspects. This study was conducted with the main objective of finding possible correlations between data obtained from remote sensing technology and the actual canopy characteristics. The potential correlation would be the starting point to develop a variable rate application technology based on previously developed prescription maps. An unmanned aerial vehicle (UAV) equipped with a multispectral camera was used to obtain data to build a canopy vigour map of an entire parcel. By applying the specific software DOSAVIÑA®, the canopy map was then transformed into a practical prescription map, which was uploaded into the dedicated software embedded in the sprayer. Adding precise georeferenced placement of the sprayer to this information, the system was able to modify the working parameters (pressure) in real time in order to follow the prescription map. The results indicate that site-specific management for spray application in vineyards resulted in a 45% reduction of the application rate compared with conventional spray application. This leads to an equivalent reduction in the amount of pesticide when the concentration is kept constant, showing once more that new technologies can help to achieve the goal of the European legislative framework for the safe use of pesticides.
... Our results are in line with those obtained in other research on woody crops, although accuracy varies according to the type of crop and the structure of the plantation [26]. For example, [23,27] in vineyards, [15,28] in olive orchards and [29] in palm trees modeled the irregular architecture of the tree crowns with UAV technology and derived the tree heights, and in some cases the tree volume, with low errors in the range of a few centimeters (RMSE between 9.8 cm and 59.0 cm). Additionally, the DSM data, alone or in combination with spectral information (generally the NDVI), were also used to retrieve crop biomass. ...
Article
Full-text available
Poplar is considered one of the forest crops with the greatest potential for lignocellulose production, so rapid and non-destructive measurements of tree growth (in terms of height and biomass) are essential to estimate the productivity of poplar plantations. As an alternative to tedious and costly manual sampling of poplar trees, this study evaluated the ability of UAV technology to monitor a one-year-old poplar plantation (with trees 4.3 meters high, on average), and specifically, to assess tree height and estimate dry biomass from spectral information (based on the Normalized Difference Vegetation Index, NDVI) and 3D Digital Surface Models (DSM). We used a UAV flying at 100 m altitude over an experimental poplar plantation of 95 × 60 m2 (approx. 3,350 trees), and collected remote images with a conventional visible-light camera for the generation of the DSM and a multi-spectral camera for the calculation of the NDVI. Prior to the DSM generation, several image enhancement adjustments were tested, which improved DSM accuracy by 19-21%. Next, UAV-based data (i.e., tree height, NDVI, and the result of fusing these variables) were evaluated with a validation set of 48 tree rows by applying correlation and linear regression analysis. Correlation between actual and DSM-based tree heights was acceptable (R2 = 0.599 and RMSE = 0.21 cm), although the DSM did not detect the narrow apexes at the top of the poplar trees (1 m length, on average), which led to notable underestimates. Linear regression equations for tree dry biomass showed the highest correlation with NDVI × tree height (R2 = 0.540 and RMSE = 0.23 kg/m2) and the lowest correlation with NDVI alone (R2 = 0.247 and RMSE = 0.29 kg/m2). The best results were used to determine the distribution of the trees according to their dry biomass, providing information about the potential productivity of the entire poplar plantation by applying a fast and non-destructive procedure.
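A minimal sketch of the kind of regression reported here (dry biomass against the fused NDVI × tree height predictor), using SciPy's linregress on invented per-row samples; the numbers are purely illustrative and not taken from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical per-row samples: DSM-based tree height (m), mean NDVI, dry biomass (kg/m2)
height = np.array([3.8, 4.1, 4.5, 4.0, 4.7, 3.6, 4.3, 4.9])
ndvi = np.array([0.62, 0.66, 0.71, 0.64, 0.74, 0.58, 0.69, 0.76])
dry_biomass = np.array([0.55, 0.63, 0.78, 0.60, 0.85, 0.48, 0.72, 0.92])

# Fuse the two variables (NDVI x tree height) and fit a straight line
predictor = ndvi * height
slope, intercept, r, _, _ = stats.linregress(predictor, dry_biomass)
print(f"biomass ~ {slope:.2f} * (NDVI*height) + {intercept:.2f},  R2 = {r ** 2:.2f}")
```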
... A number of UAV-based SfM studies have focused on a single UAV campaign to monitor crop height. Such studies examined a variety of agricultural crops, including maize [47], sorghum [47,48], alfalfa [49], vineyards [50,51], sugarcane [52] and olive tree plantations [53]. Despite positive results, a single crop height extraction during the growing season is generally not able to provide the information needed to inform precision agricultural management. ...
Article
Full-text available
Monitoring the development of vegetation height through time provides a key indicator of crop health and overall condition. Traditional manual approaches for monitoring crop height are generally time consuming, labor intensive and impractical for large-scale operations. Dynamic crop heights collected through the season allow for the identification of within-field problems at critical stages of the growth cycle, providing a mechanism for remedial action to be taken against end of season yield losses. With advances in unmanned aerial vehicle (UAV) technologies, routine monitoring of height is now feasible at any time throughout the growth cycle. To demonstrate this capability, five digital surface models (DSM) were reconstructed from high-resolution RGB imagery collected over a field of maize during the course of a single growing season. The UAV retrievals were compared against LiDAR scans for the purpose of evaluating the derived point clouds' capacity to capture ground surface variability and spatially variable crop height. A strong correlation was observed between structure-from-motion (SfM) derived heights and a pixel-to-pixel comparison against LiDAR scan data for the intra-season bare-ground surface (R2 = 0.77 − 0.99, rRMSE = 0.44% − 0.85%), while there was reasonable agreement between canopy comparisons (R2 = 0.57 − 0.65, rRMSE = 37% − 50%). To examine the effect of resolution on retrieval accuracy and processing time, an evaluation of several ground sampling distances (GSD) was also performed. Our results indicate that a 10 cm resolution retrieval delivers a reliable product that provides a compromise between computational cost and spatial fidelity. Overall, UAV retrievals were able to accurately reproduce the observed spatial variability of crop heights within the maize field through the growing season and provide a valuable source of information with which to inform precision agricultural management in an operational context.
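The core height retrieval and the agreement statistics used in such comparisons can be sketched as follows. This is an illustrative NumPy example with synthetic DSM/DTM rasters standing in for one acquisition date, not the processing chain used in the study.

```python
import numpy as np

def crop_height_model(dsm, dtm):
    """Canopy height = surface model minus bare-ground model, floored at zero."""
    return np.clip(dsm - dtm, 0.0, None)

def agreement(sfm_heights, lidar_heights):
    """Pixel-to-pixel R2 and relative RMSE between SfM and LiDAR heights."""
    sfm, lidar = sfm_heights.ravel(), lidar_heights.ravel()
    rmse = np.sqrt(np.mean((sfm - lidar) ** 2))
    r2 = np.corrcoef(sfm, lidar)[0, 1] ** 2
    rrmse = 100.0 * rmse / lidar.mean()
    return r2, rrmse

# Synthetic example standing in for one acquisition date over a maize field
rng = np.random.default_rng(1)
dtm = rng.normal(100.0, 0.05, (200, 200))        # bare-ground elevation (m)
dsm = dtm + rng.uniform(1.5, 2.5, dtm.shape)     # canopy surface (m)
chm = crop_height_model(dsm, dtm)
print(agreement(chm, chm + rng.normal(0.0, 0.1, chm.shape)))
```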
... These advantages have led to an increasing interest in this technology. As a result, photogrammetric point clouds from UAV images have been utilised in forest characterization studies (Chen, McDermid, Castilla, & Linke, 2017; Dandois & Ellis, 2010; Wallace, Lucieer, Malenovský, Turner, & Vopěnka, 2016), and in agricultural scenarios such as poppy fields (Iqbal, Lucieer, Barry, & Wells, 2017) or vineyards (Ballesteros, Ortega, Hernández, & Moreno, 2015; Mathews & Jensen, 2013; Weiss & Baret, 2017). The above-mentioned works needed user intervention during image processing. ...
Article
The geometric features, such as canopy area, tree height and crown volume, of agricultural trees provide useful information to elucidate plantation status and to design input prescription maps adjusted to real crop needs. This work presents an innovative procedure for computing the 3-dimensional (3D) geometric features of almond trees in two phases: 1) generation of photogrammetric point clouds with unmanned aerial vehicle (UAV) technology, and 2) analysis of the point clouds using object-based image analysis (OBIA) techniques. To test this approach, a UAV with a visible-RGB (low-cost) sensor was flown over three experimental almond groves at different phenological stages for two years, and the field validation method consisted of registering the height of a total of 325 trees in the fields. The OBIA algorithm developed in this study achieved successful results: i) the individual and overall similarity measures between manually delineated and automatically detected almond tree crowns were above 0.9, and ii) the validation assessment conducted to estimate tree height with the UAV-derived algorithm produced an R2 = 0.94 and an overall root mean square error (RMSE) of 0.39 m. The information derived from the OBIA algorithm was used to generate 3D maps of every tree's volume and volume growth, which would be useful to understand the linkages between tree and crop management operations in the context of precision agriculture, with relevant agro-environmental implications. Our findings show that the dense 3D point clouds provided by a low-cost RGB sensor on-board a UAV can be used to accurately characterise almond tree architecture.
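A simple way to turn the points falling inside one delineated crown into a tree height estimate, in the spirit of (though not identical to) the OBIA workflow described above, is to difference robust upper and lower height quantiles. The quantile levels and the synthetic crown below are assumptions, not values from the paper.

```python
import numpy as np

def tree_height(points_xyz, ground_quantile=0.02, top_quantile=0.98):
    """Estimate tree height from the photogrammetric points inside one crown.

    points_xyz : (N, 3) array of x, y, z for the points within the crown polygon.
    The ground level is taken as a low z-quantile and the crown top as a high
    z-quantile, which damps the effect of isolated noisy points."""
    z = points_xyz[:, 2]
    return np.quantile(z, top_quantile) - np.quantile(z, ground_quantile)

# Illustrative call on a fake crown of 5,000 points around a 3.5 m tall tree
rng = np.random.default_rng(2)
z = np.concatenate([rng.normal(0.0, 0.03, 1000),    # ground / low points
                    rng.uniform(0.3, 3.5, 4000)])   # crown points
pts = np.column_stack([rng.random(5000), rng.random(5000), z])
print(f"estimated height: {tree_height(pts):.2f} m")
```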
... Crop damage has already been estimated by analyzing the relations between vegetation canopy height and NDVI values [33]. Discrimination of vegetation types can also be achieved with close-range aerial photogrammetry [34]. In terms of mapping products, the present study faced the challenge of developing an approach that is consistent with the official monitoring method in terms of extent, density, and status determination. ...
Article
Full-text available
Quantification of reed coverage and vegetation status is fundamental for monitoring and developing lake conservation strategies. The applicability of Unmanned Aerial Vehicle (UAV) three-dimensional data (point clouds) for status evaluation was investigated. This study focused on mapping the extent, density, and vegetation status of aquatic reed beds. Point clouds were calculated with Structure from Motion (SfM) algorithms from aerial imagery recorded with Rotary Wing (RW) and Fixed Wing (FW) UAVs. Extent was quantified by measuring the surface between the frontline and the shoreline. Density classification was based on point geometry (height and height variance) in the point clouds. Spectral information per point was used for calculating a vegetation index, which served as an indicator of vegetation vitality. Status classification was achieved by combining the density, vitality, and frontline shape outputs. Field observations in areas of interest (AOI) and optical imagery were used for reference and validation purposes. A root mean square error (RMSE) of 1.58 m to 3.62 m between cross sections from field measurements and the classification was achieved for the extent map. The overall accuracy (OA) acquired for the density classification was 88.6% (Kappa = 0.8). An OA of 83.3% (Kappa = 0.7) for the status classification was reached by comparison with field measurements complemented by visual assessments of secondary Red, Green, Blue (RGB) data. The research shows that complex transitional zones (water–vegetation–land) can be assessed and supports the suitability of the applied method, providing new strategies for monitoring aquatic reed beds using low-cost UAV imagery.
... In particular, much attention has been paid to low-cost UAV systems consisting of RGB or modified CIR cameras and light-weight drones. Low-cost UAV systems have been widely used due to their significant advantages in affordability, ease of operation, and simplicity of image processing [11,17,18,19]. The visible or CIR images collected with these low-cost UAV systems are often used to generate orthophotos and point cloud data for crop growth monitoring. ...
Article
Full-text available
Background: Aboveground biomass (AGB) is a widely used agronomic parameter for characterizing crop growth status and predicting grain yield. The rapid and accurate estimation of AGB in a non-destructive way is useful for making informed decisions on precision crop management. Previous studies have investigated vegetation indices (VIs) and canopy height metrics derived from Unmanned Aerial Vehicle (UAV) data to estimate the AGB of various crops. However, the input variables were derived either from one type of data or from different sensors on board UAVs. Whether the combination of VIs and canopy height metrics derived from a single low-cost UAV system can improve the AGB estimation accuracy remains unclear. This study used a low-cost UAV system to acquire imagery at 30 m flight altitude at critical growth stages of wheat in Rugao of eastern China. The experiments were conducted in 2016 and 2017 and involved 36 field plots representing variations in cultivar, nitrogen fertilization level and sowing density. We evaluated the performance of VIs, canopy height metrics and their combination for AGB estimation in wheat with the stepwise multiple linear regression (SMLR) and three types of machine learning algorithms (support vector regression, SVR; extreme learning machine, ELM; random forest, RF). Results: Our results demonstrated that the combination of VIs and canopy height metrics improved the estimation accuracy for AGB of wheat over the use of VIs or canopy height metrics alone. Specifically, RF performed the best among the SMLR and three machine learning algorithms regardless of using all the original variables or selected variables by the SMLR. The best accuracy (R² = 0.78, RMSE = 1.34 t/ha, rRMSE = 28.98%) was obtained when applying RF to the combination of VIs and canopy height metrics. Conclusions: Our findings implied that an inexpensive approach consisting of the RF algorithm and the combination of RGB imagery and point cloud data derived from a low-cost UAV system at the consumer-grade level can be used to improve the accuracy of AGB estimation and have potential in the practical applications in the rapid estimation of other growth parameters.
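A hedged sketch of the best-performing configuration described here (random forest regression on the combination of vegetation indices and canopy height metrics), using scikit-learn on synthetic plot-level data; the feature names and values are invented and the cross-validation setup is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical plot-level features: two vegetation indices and two height metrics
rng = np.random.default_rng(3)
n = 72
ndvi = rng.uniform(0.3, 0.9, n)
osavi = ndvi * rng.uniform(0.9, 1.1, n)
h_mean = rng.uniform(0.2, 1.0, n)
h_p90 = h_mean + rng.uniform(0.0, 0.3, n)
agb = 5.0 * ndvi + 3.0 * h_p90 + rng.normal(0.0, 0.3, n)     # synthetic AGB (t/ha)

X = np.column_stack([ndvi, osavi, h_mean, h_p90])            # VIs + height metrics
model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, agb, cv=5, scoring="r2")
print(f"cross-validated R2: {scores.mean():.2f}")
```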
... Thus, from an operational perspective, future work should focus on improving vine vegetation masking. This could be done by using the surface elevation extracted from UAV images to separate the rows from the row spacing, as suggested by [64]. Another alternative would be the use of an image processing algorithm based on dynamic segmentation, Hough Space Clustering and Total Least Squares techniques, as proposed by [65]. ...
Article
Full-text available
Among the grapevine diseases affecting European vineyards, Flavescence dorée (FD) and Grapevine Trunk Diseases (GTD) are considered the most relevant challenges for viticulture because of the damage they cause to vineyards. Unmanned Aerial Vehicle (UAV) multispectral imagery could be a powerful tool for the automatic detection of symptomatic vines. However, one major difficulty is to discriminate between different kinds of diseases leading to similar leaf discoloration, as is the case with FD and GTD for red vine cultivars. The objective of this paper is to evaluate the potential of UAV multispectral imagery to separate: symptomatic vines, including FD and GTD (Esca and black dead arm), from asymptomatic vines (Case 1), and FD vines from GTD ones (Case 2). The study sites are located in the Gaillac and Minervois wine production regions (south of France). A set of seven vineyards covering five different red cultivars was studied. Field work was carried out between August and September 2016. In total, 218 asymptomatic vines, 502 FD vines and 199 GTD vines were located with a centimetric-precision GPS. UAV multispectral images were acquired with a MicaSense RedEdge® sensor and were processed to ultimately obtain surface reflectance mosaics at 0.10 m ground spatial resolution. In this study, the potential of 24 variables (5 spectral bands, 15 vegetation indices and 4 biophysical parameters) is tested. The vegetation indices are selected for their potential to detect abnormal vegetation behavior in relation to stress or diseases. Among the biophysical parameters selected, three are directly linked to leaf pigment content (chlorophyll, carotenoid and anthocyanin). The first step consisted in evaluating the performance of the 24 variables to separate symptomatic vine vegetation (FD and/or GTD) from asymptomatic vine vegetation using the performance indicators from the Receiver Operating Characteristic (ROC) curve method (i.e., Area Under the Curve or AUC, sensitivity and specificity). The second step consisted in mapping the symptomatic vines (FD and/or GTD) at the scale of the field using the optimal threshold resulting from the ROC curve. Ultimately, the error between the level of infection predicted by the selected variables (proportion of symptomatic pixels per vine) and observed in the field (proportion of symptomatic leaves per vine) is calculated. The same methodology is applied at the three levels of analysis: by vineyard, by cultivar (Gamay, Fer Servadou) and by berry color (all red cultivars). At the vineyard and cultivar levels, the best variables selected vary. The AUC of the best vegetation indices and biophysical parameters varies from 0.84 to 0.95 for Case 1 and 0.74 to 0.90 for Case 2. At the berry color level, no variable is effective in discriminating FD vines from GTD ones (Case 2). For Case 1, the best vegetation indices and biophysical parameter are the Red Green Index (RGI)/Green-Red Vegetation Index (GRVI) (based on the green and red spectral bands) and Car (linked to carotenoid content). These variables are more effective in mapping vines with a level of infection greater than 50%. However, at the scale of the field, we observe misclassified pixels linked to the presence of mixed pixels (shade, bare soil, inter-row vegetation and vine vegetation) and other factors of abnormal coloration (e.g., apoplectic vines).
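The ROC-based selection of an optimal threshold for one candidate variable can be sketched as follows with scikit-learn. The index values are simulated (only the sample sizes echo the abstract), and Youden's J is used as the optimality criterion, which is one common choice rather than necessarily the authors' exact procedure.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Simulated per-vine values of one candidate variable (e.g., a red/green index);
# 1 = symptomatic, 0 = asymptomatic (sample sizes only echo the abstract)
rng = np.random.default_rng(4)
asympt = rng.normal(0.45, 0.05, 218)
sympt = rng.normal(0.60, 0.07, 502)
values = np.concatenate([asympt, sympt])
labels = np.concatenate([np.zeros(asympt.size), np.ones(sympt.size)])

fpr, tpr, thresholds = roc_curve(labels, values)
best = np.argmax(tpr - fpr)                     # Youden's J criterion
print(f"AUC = {roc_auc_score(labels, values):.2f}, "
      f"threshold = {thresholds[best]:.3f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```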
... Indeed, several methods have been proposed to deal with UAV-based aerial imagery or with the resulting digital products from the photogrammetric processing, for example grapevine segmentation [41,42], supervised and unsupervised machine learning [43], point clouds derived from photogrammetric processing [44,45], and DEMs [16,33,40]. Regarding VIs, they are one of the most common segmentation techniques applied in remote sensing [46], mainly to segment a given image into two classes: vegetation or non-vegetation [47]. ...
Article
Full-text available
Climate change is projected to be a key influence on crop yields across the globe. Regarding viticulture, primary climate vectors with a significant impact include temperature, moisture stress, and radiation. Within this context, it is of foremost importance to monitor soils' moisture levels, as well as to detect pests, diseases, and possible problems with irrigation equipment. Regular monitoring activities will enable timely measures that may trigger field interventions that are used to preserve grapevines' phytosanitary state, saving both time and money, while assuring a more sustainable activity. This study employs unmanned aerial vehicles (UAVs) to acquire aerial imagery, using RGB, multispectral and thermal infrared sensors in a vineyard located in the Portuguese Douro wine region. Data acquired enabled the multi-temporal characterization of the vineyard development throughout a season through the computation of the normalized difference vegetation index, crop surface models, and the crop water stress index. Moreover, vigour maps were computed in three classes (high, medium, and low) with different approaches: (1) considering the whole vineyard, including inter-row vegetation and bare soil; (2) considering only automatically detected grapevine vegetation; and (3) also considering grapevine vegetation by only applying a normalization process before creating the vigour maps. Results showed that vigour maps considering only grapevine vegetation provided an accurate representation of the vineyard variability. Furthermore, significant spatial associations can be gathered through (i) a multi-temporal analysis of vigour maps, and (ii) by comparing vigour maps with both height and water stress estimation. This type of analysis can assist, in a significant way, the decision-making processes in viticulture.
... The method used by Oliveira et al. for coffee failure detection was adopted herein, as it relies on the Hough transform to detect the location of rows, which is the most commonly used approach (Pérez-Ortiz et al., 2015; Weiss and Baret, 2017). The Hough transform was proposed by Hough for the machine recognition of complex lines in photographs or other pictorial representations (Hough, 1962). ...
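A minimal illustration of row detection with the standard Hough transform, using OpenCV (assumed available as cv2) on a binary vegetation mask; the mask, the vote threshold and the absence of any post-processing are placeholders rather than the cited implementation.

```python
import numpy as np
import cv2

def detect_row_lines(vegetation_mask, min_votes=200):
    """Detect dominant line directions in a binary vegetation mask with the
    standard Hough transform (mask: uint8 array with vegetation = 255)."""
    # rho resolution = 1 px, theta resolution = 1 degree
    lines = cv2.HoughLines(vegetation_mask, 1, np.pi / 180.0, min_votes)
    if lines is None:
        return []
    # Each entry is (rho, theta); theta gives the row orientation
    return [(float(r), float(t)) for r, t in lines[:, 0, :]]

# Illustrative use on a synthetic mask with three parallel "rows"
mask = np.zeros((400, 400), dtype=np.uint8)
for col in (100, 200, 300):
    mask[:, col - 3:col + 3] = 255
print(detect_row_lines(mask)[:3])
```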
Article
Crop failure detection using UAV images is helpful for precision agriculture, enabling the precision management of failure areas to reduce crop loss. For wheat failure area detection at the seedling stage using UAV images, the commonly used methods are not sufficiently accurate. Thus, herein, a new tool for precision wheat management at the seedling stage is designed. For this purpose, field experiments with two wheat cultivars and four nitrogen (N) treatments were conducted to create different scenarios for the failure area, and multispectral UAV images were acquired at the seedling growth stage. Based on the above data, a new failure detection method was designed by assimilating prior knowledge and a filter analysis strategy and compared with classical filter-based methods and Hough transform-based methods for wheat failure area detection. The results showed that the newly proposed assimilation method had a detection accuracy between 83.86% and 97.67% for different N levels and cultivars. In contrast, the filter-based methods and Hough transform-based methods had detection accuracies between 53.73% and 83.95% and between 20.71% and 75.79%, respectively. Thus, the assimilation method demonstrated the best failure detection performance.
... However, monitoring tree volume using 3D point clouds rather than DSMs has become a relevant advancement, both because photogrammetric techniques applied to UAV images can yield a 3D point cloud similar to that produced by LiDAR (light detection and ranging) systems [21], and because 3D point clouds allow the mapping of complex and highly variable structures, such as tree orchards, vineyards [19,22] and lychee trees [23]. Recently, Torres-Sánchez et al. [24] accurately mapped almond tree height using 3D point clouds (R2 = 0.94 and RMSE = 0.39 m in comparison with field data). ...
Article
Full-text available
Background: Almond is an emerging crop due to the health benefits of almond consumption, including its nutritional, anti-inflammatory, and hypocholesterolaemic properties. Traditional almond producers were concentrated in California, Australia, and Mediterranean countries. However, almond is currently grown in more than 50 countries because breeding programs have modernized almond orchards by developing new varieties with improved traits related to late flowering (to reduce the risk of damage caused by late frosts) and tree architecture. Almond tree architecture and flowering are acquired and evaluated through intensive field labour by breeders. Flowering detection has traditionally been a very challenging objective. To our knowledge, there is no published information about monitoring the tree flowering dynamics of a crop at the field scale by using color information from photogrammetric 3D point clouds and OBIA. As an alternative, a procedure based on the generation of colored photogrammetric point clouds using a low-cost (RGB) camera on-board an unmanned aerial vehicle (UAV), together with a semi-automatic object-based image analysis (OBIA) algorithm, was created for monitoring the flower density and flowering period of every almond tree in the framework of two almond phenotypic trials with different planting dates. Results: Our method was useful for detecting the phenotypic variability of every almond variety by mapping and quantifying every tree height and volume as well as the flowering dynamics and flower density. There was a high level of agreement between the tree height, flower density, and blooming calendar derived from our procedure in both fields and those created from on-ground measured data. Some of the almond varieties showed a significant linear fit between their crown volume and their yield. Conclusions: Our findings could help breeders and researchers to reduce the gap between phenomics and genomics by generating accurate almond tree information in an efficient, non-destructive, and inexpensive way. The method described is also useful for data mining to select the most promising accessions, making it possible to assess specific multi-criteria rankings of varieties, which are one of the main tools for breeders.
... The concept of precision viticulture [1] has rapidly developed and extended to a variety of unmanned aerial vehicle (UAV) applications in recent years. These include digital 3D vineyard structure reconstruction from UAV imagery for precise row monitoring [2,3] and crop status quantification, such as evaluating growth conditions with hyperspectral sensors [4], RGB, multispectral, thermal sensors combined together [5], and utilizing machine-learning techniques for multispectral sensors to schedule the irrigation [6]. All of the mentioned quantification works have used optical sensors, which heavily rely upon the accurate derivation of canopy reflectance. ...
Article
Full-text available
For grape canopy pixels captured by an unmanned aerial vehicle (UAV) tilt-mounted RedEdge-M multispectral sensor in a sloped vineyard, an in situ Walthall model can be established with purely image-based methods. This was derived from RedEdge-M directional reflectance and a vineyard 3D surface model generated from the same imagery. The model was used to correct the angular effects in the reflectance images to form normalized difference vegetation index (NDVI) orthomosaics of different view angles. The results showed that the effect could be corrected to a certain extent, but not completely. There are three drawbacks that might restrict a successful angular model construction and correction: (1) the observable micro shadow variation on the canopy enabled by the high resolution; (2) the complexity of vine canopies, which causes an inconsistency between reflectance and canopy geometry, including effects such as micro shadows and near-infrared (NIR) additive effects; and (3) the resolution limit of a 3D model to represent the real-world optical geometry accurately. The conclusion is that grape canopies might be too inhomogeneous for the tested method to perform the angular correction with high quality.
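For reference, one common form of the Walthall model for a fixed solar position is rho = a*tv^2 + b*tv*cos(phi) + c. The sketch below fits it by linear least squares and normalizes observations to the modelled nadir value; it is an illustrative reading of the general approach under that assumed form, not the authors' exact formulation or correction chain.

```python
import numpy as np

def fit_walthall(view_zenith, rel_azimuth, reflectance):
    """Least-squares fit of the empirical Walthall model (fixed sun position):
        rho = a * tv**2 + b * tv * cos(phi) + c
    with tv the view zenith angle (rad) and phi the relative azimuth (rad).
    Returns the coefficients (a, b, c)."""
    tv, phi = np.asarray(view_zenith), np.asarray(rel_azimuth)
    A = np.column_stack([tv ** 2, tv * np.cos(phi), np.ones_like(tv)])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(reflectance), rcond=None)
    return coeffs

def normalize_to_nadir(reflectance, view_zenith, rel_azimuth, coeffs):
    """Scale an observation to its modelled nadir (tv = 0) value."""
    a, b, c = coeffs
    modelled = a * view_zenith ** 2 + b * view_zenith * np.cos(rel_azimuth) + c
    return reflectance * c / modelled

# Illustrative fit on synthetic observations
rng = np.random.default_rng(8)
tv = np.radians(rng.uniform(0.0, 30.0, 200))
phi = rng.uniform(0.0, 2.0 * np.pi, 200)
rho = 0.02 * tv ** 2 + 0.05 * tv * np.cos(phi) + 0.4
print(np.round(fit_walthall(tv, phi, rho), 3))    # ~ [0.02, 0.05, 0.4]
```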
... Our procedure retrieved growth information through a single flight and a few hours of computation. Several studies have shown that the approach based on RGB-derived dense point clouds, obtained with inexpensive devices, can provide accurate estimations of growth traits in natural forests and fruit orchards, comparable to those obtained with more expensive technologies such as light detection and ranging (LiDAR) (Zarco-Tejada et al., 2014; Wallace et al., 2016; Weiss & Baret, 2017). In this regard, the continuous development and optimization of algorithms for automatic crown identification can facilitate the implementation of this methodology in forest genetic trials. ...
Article
Progress in high-throughput phenotyping and genomics provides the potential to understand the genetic basis of plant functional differentiation. We developed a semi-automatic methodology based on Unmanned Aerial Vehicle (UAV) imagery for deriving tree-level phenotypes followed by a genome-wide association study (GWAS). An RGB-based point cloud was used for tree crown identification in a common garden of Pinus halepensis in Spain. Crowns were combined with multispectral and thermal orthomosaics to retrieve growth traits, vegetation indices and canopy temperature. Thereafter, GWAS was performed to analyse the association between phenotypes and genomic variation at 235 Single Nucleotide Polymorphisms (SNPs). Growth traits were associated with 12 SNPs involved in cellulose and carbohydrate metabolism. Indices related to transpiration and leaf water content were associated with six SNPs involved in stomata dynamics. Indices related to leaf pigments and leaf area were associated with 11 SNPs involved in signalling and peroxisome metabolism. About 16% to 20% of trait variance was explained by combinations of several SNPs, indicating polygenic control of morpho-physiological traits. Despite the limited availability of markers and individuals, this study is a successful proof-of-concept for the combination of high-throughput UAV-based phenotyping with cost-effective genotyping to disentangle the genetic architecture of phenotypic variation in a widespread conifer.
... In addition to taking zenith photographs throughout the crop cycle, measurements of diameter, plant height, root height and weight have to be determined in situ in order to correlate these values with water consumption (González-Esquiva et al., 2017). In various crops, these techniques have been combined with LiDAR and UAV technology and three-dimensional photogrammetry to determine the use of growing areas (Weiss and Baret, 2017; Ruiz et al., 2013; Mesas-Carrascosa et al., 2012). This communication shows the simultaneous use of several of the previously mentioned techniques for irrigation monitoring in lettuce. ...
Conference Paper
An experiment was carried out in an experimental plot of the Polytechnic School of Orihuela (EPSO) at the Miguel Hernández University of Elche (UMH), Spain. In this plot, an irrigation strategy with four treatments was applied to maintain soil tension between 10 and 20 mbar. The differences between treatments depend on the addition of different types of amendments and fertilization to the soil, and a control treatment was added. Lettuce (Lactuca sativa L.) was chosen as it has a short cycle and results can be obtained relatively quickly and easily. The most appropriate cultural strategies for managing the crop in the experimental plot were evaluated. The experimental data were provided by a nearby meteorological station and by the soil moisture sensors and low-cost digital cameras installed in the four treatments mentioned. In one of the treatments, the irrigation was monitored by installing a compact buried weighing lysimeter. Additionally, periodic analyses of the texture and chemical composition of the soil, chromatographic analyses of the soil, the chemical composition of the leaves and the main harvest parameters (weight, production per unit area, size, among others) were carried out. The study was completed with the chemical analysis of the irrigation water. Some initial experimental results are shown in this communication, as well as the simultaneous use of compact weighing lysimetry, digital image processing and low-cost soil moisture sensors. The sustainable use of water for irrigation of agricultural crops is a challenge for the circular economy at both local and global scales. The data obtained from the experiment can be used at various socioeconomic levels by different stakeholders (farmer associations, farmer advisors, NGOs, local authorities, among others). This study has been funded by the company FULSAN, S.A. and the Spanish Ministry of Economy.
... UAVs may be effective for plant phenotyping, especially in crops grown in large fields, because the natural performance of different lines can be observed. Furthermore, a series of digital images from UAVs can be combined into orthomosaic images by reference to ground control points (Jin et al., 2017;Weiss and Baret, 2017), enabling time-course monitoring. For high-throughput phenotyping, high-resolution imagery is not typically captured with UAVs because imaging at higher resolution not only requires many more images at low altitude and longer flight times, but also greater processing power and data storage capacity. ...
Article
Full-text available
Unmanned aerial vehicles (UAVs) are popular tools for high-throughput phenotyping of crops in the field. However, their use for evaluation of individual lines is limited in crop breeding because research on what the UAV image data represent is still developing. Here, we investigated the connection between shoot biomass of rice plants and the vegetation fraction (VF) estimated from high-resolution orthomosaic images taken by a UAV 10 m above a field during the vegetative stage. Haplotype-based genome-wide association studies of multi-parental advanced generation inter-cross (MAGIC) lines revealed four QTL for VF. VF was correlated with shoot biomass, but the haplotype effect on VF was better correlated with that on shoot biomass at these QTL. Further genetic characterization revealed the relationships between these QTL and plant spreading habit, final shoot biomass and panicle weight. Thus, genetic analysis using high-throughput phenotyping data derived from low-altitude, high-resolution UAV images during early stage of rice in the field provides insight into plant growth, architecture, final biomass and yield.
... Unmanned aerial vehicles (UAVs) provide an effective platform for quickly and cheaply obtaining the parameters of vegetation canopies [22]. This technique is expected to become increasingly common in forest studies as more efficient data processing software becomes available [23,24]. ...
Article
Full-text available
Taking a typical forest's underlying surface as our research area, in this study we employed unmanned aerial vehicle (UAV) photogrammetry to derive more accurate canopy parameters, including tree height and canopy radius, which were used to improve the Noah-MP land surface model run at the Dinghushan Forest Ecosystem Research Station (CN-Din). While the canopy radius was fitted as a Burr distribution, the canopy height of the CN-Din forest followed a Weibull distribution. The canopy parameter distributions were then obtained and used to improve the look-up table values of the Noah-MP land surface model. It was found that the influence on the simulation of the energy fluxes was not negligible; the main influence of these canopy parameters was on the latent heat flux, which could decrease by up to 11% at midday while increasing by up to 15% at night. Additionally, this work indicated that the description of the canopy characteristics in the land surface model should be improved to accurately represent the heterogeneity of the underlying surface.
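Fitting the reported distribution families to UAV-derived samples is straightforward with SciPy; the sketch below uses synthetic height and radius samples and fixes the location parameter at zero, both of which are assumptions rather than details from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical UAV-derived samples of canopy height (m) and crown radius (m)
canopy_height = stats.weibull_min.rvs(c=2.5, scale=12.0, size=500, random_state=5)
crown_radius = stats.burr.rvs(c=3.0, d=1.2, scale=2.5, size=500, random_state=6)

# Fit the distribution families reported in the abstract (location fixed at 0)
height_params = stats.weibull_min.fit(canopy_height, floc=0)   # (shape c, loc, scale)
radius_params = stats.burr.fit(crown_radius, floc=0)           # (c, d, loc, scale)
print("Weibull (c, loc, scale):", np.round(height_params, 2))
print("Burr (c, d, loc, scale):", np.round(radius_params, 2))
```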
... The case studies concern i) UAV-based systems alone, ii) a UGV alone and iii) a UAV and UGV working symbiotically and synergistically. A dedicated method was developed in [12] to estimate several vineyard characteristics using RGB imagery acquired from an unmanned aerial vehicle (UAV) platform. The features included row orientation, height, width and row spacing, as well as the canopy cover fraction and the percentage of missing row segments. ...
... For the calculation of atmospheric dynamic processes, the current land surface models that have been widely used in climate and hydrology research, such as the Simple Biosphere model (SiB4) and the Biosphere-Atmosphere Transfer Scheme (BATS), are based on measured empirical wind speed profiles in the canopy to provide an empirical solution for calculating the turbulent exchange within the canopy [18,19]. Unmanned aerial vehicles (UAVs) provide an effective platform for quickly and cheaply obtaining the parameters of vegetation canopies [20]. This technique is expected to become increasingly common in forest studies as more efficient data processing software becomes available [21,22]. ...
Preprint
Full-text available
Taking a typical forest underlying surface as the research area, this study employed unmanned aerial vehicle (UAV) photogrammetry to explore more accurate canopy parameters, including tree height and canopy radius, which were used to improve the Noah-MP land surface model conducted at the Dinghushan Forest Ecosystem Research Station (CN-Din). While the canopy radius was fitted as a Burr distribution, the canopy height of the CN-Din forest followed a Weibull distribution. The default parameters of the Noah-MP model were then replaced by these UAV-observed values. It was found that the influence on the simulation of the energy fluxes could not be negligible, and the main influence of these canopy parameters was on the latent heat flux, which could decrease by up to 11% in the midday while increasing by up to 15% in the nighttime. Additionally, this work indicated that the description of the canopy characteristics for the land surface model should be improved to accurately deliver the heterogeneity of the underlying surface.
... The DSM can provide much information about the parcel, such as the land variation and the objects on its surface. Several research works have shown the ability to extract vine rows by generating a depth map from the DSM [41]-[43]. These solutions have been proposed to address the vine row misextraction that results from the NDVI vegetation index. ...
Preprint
Full-text available
Early detection of vine diseases is important to avoid the spread of viruses or fungi. Disease propagation can lead to a huge loss of grape production and disastrous economic consequences; therefore, the problem represents a challenge for precision farming. In this paper, we present a new system for vine disease detection. The article contains two contributions: the first is an automatic orthophoto registration method for multispectral images acquired with an unmanned aerial vehicle (UAV). The second is a new deep learning architecture called VddNet (Vine Disease Detection Network). The proposed architecture is assessed by comparing it with the most widely known architectures: SegNet, U-Net, DeepLabv3+ and PSPNet. The deep learning architectures were trained on multispectral data and depth map information. The results show that the VddNet architecture achieves higher scores than the baseline methods. Moreover, this study demonstrates that the proposed system has many advantages compared to methods that directly use the UAV images.
... The main difference between them is that, whereas DSMs only store one height value for every (x, y) point, point clouds allow more than one height value, which enhances the 3D representation of the crops. While DSMs have been used in vineyards for vine volume and height calculation [13][14][15][16], point clouds have been reported for vineyard detection and estimation of its macro-structure [17,18], for measuring vine height [19], and for adding geometric traits to the spectral information for disease detection [20]. Another application of the UAV-generated 3D models of orchards is their use as a source of information for simulating and evaluating routes for agricultural robots [21]. ...
Article
Full-text available
Canopy management operations, such as shoot thinning, leaf removal, and shoot trimming, are among the most relevant agricultural practices in viticulture. However, the supervision of these tasks demands a visual inspection of the whole vineyard, which is time-consuming and laborious. The application of photogrammetric techniques to images acquired with an Unmanned Aerial Vehicle (UAV) has proved to be an efficient way to measure woody crop canopies. Consequently, the objective of this work was to determine whether the use of UAV photogrammetry allows the detection of canopy management operations. A UAV equipped with an RGB digital camera was used to acquire images with high overlap over different canopy management experiments in four vineyards with the aim of characterizing vine dimensions before and after shoot thinning, leaf removal, and shoot trimming operations. The images were processed to generate photogrammetric point clouds of every vine, which were analyzed using a fully automated object-based image analysis algorithm. Two approaches were tested in the analysis of the UAV-derived data: 1) to determine whether the comparison of the vine dimensions before and after the treatments allowed the detection of the canopy management operations; and 2) to study the vine dimensions after the operations and assess the possibility of detecting these operations using only the data from the flight after them. The first approach successfully detected the canopy management operations. Regarding the second approach, significant differences in the vine dimensions after the treatments were detected in all the experiments, and the vines under the shoot trimming treatment could be easily and accurately detected based on a fixed threshold.
... Although the results obtained show a slightly higher RMSE than the on-ground LiDAR measurements, this does not affect decision making in the management of the crop [80]. In addition, using point clouds generated from UAV flights, other authors have obtained an RMSE in row height determination of between 3 cm and 29 cm [81], with manual intervention required in the classification process, which could make the process less time-efficient. To solve this problem, other authors have developed unsupervised methods without manual intervention [82], which required the establishment of a series of parameters in the classification process, meaning that the quality of the process depends directly on the values selected for these parameters. ...
Article
Full-text available
Remote sensing applied in the digital transformation of agriculture and, more particularly, in precision viticulture offers methods to map field spatial variability to support site-specific management strategies; these can be based on crop canopy characteristics such as the row height or vegetation cover fraction, requiring accurate three-dimensional (3D) information. To derive canopy information, a set of dense 3D point clouds was generated using photogrammetric techniques on images acquired by an RGB sensor onboard an unmanned aerial vehicle (UAV) in two testing vineyards on two different dates. In addition to the geometry, each point also stores information from the RGB color model, which was used to discriminate between vegetation and bare soil. To the best of our knowledge, the new methodology presented herein, consisting of linking point clouds with their spectral information, had not previously been applied to automatically estimate vine height. Therefore, the novelty of this work is based on the application of color vegetation indices to point clouds for the automatic detection and classification of points representing vegetation, and the subsequent ability to determine the height of vines using the heights of the points classified as soil as a reference. Results from on-ground measurements of the heights of individual grapevines were compared with the estimated heights from the UAV point cloud, showing high determination coefficients (R2 > 0.87) and a low root-mean-square error (0.070 m). This methodology offers new capabilities for the use of RGB sensors onboard UAV platforms as a tool for precision viticulture and digitizing applications.
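A compact sketch of the general idea (classifying colored points with a color vegetation index and referencing vine tops to the soil points): here the Excess Green index with a fixed threshold is used purely for illustration, whereas the paper evaluates its own set of color indices and classification procedure.

```python
import numpy as np

def vine_height_from_cloud(xyz, rgb, exg_threshold=0.05, top_quantile=0.95):
    """Classify points as vegetation or soil with the Excess Green index and
    estimate vine height as the upper quantile of the vegetation elevations
    minus the median elevation of the soil points.

    xyz : (N, 3) point coordinates, rgb : (N, 3) colors scaled to [0, 1]."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    total = r + g + b + 1e-9
    exg = 2 * g / total - r / total - b / total      # ExG on chromatic coordinates
    vegetation = exg > exg_threshold
    soil_z = np.median(xyz[~vegetation, 2])
    top_z = np.quantile(xyz[vegetation, 2], top_quantile)
    return top_z - soil_z

# Synthetic check: brownish soil points near z = 0, green canopy points near z = 1.8 m
rng = np.random.default_rng(10)
soil = np.column_stack([rng.random((500, 2)), rng.normal(0.0, 0.02, 500)])
vine = np.column_stack([rng.random((400, 2)), rng.normal(1.8, 0.1, 400)])
xyz = np.vstack([soil, vine])
rgb = np.vstack([np.tile([0.5, 0.4, 0.3], (500, 1)), np.tile([0.2, 0.6, 0.2], (400, 1))])
print(f"estimated vine height: {vine_height_from_cloud(xyz, rgb):.2f} m")
```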
... Its usage also enabled the estimation of grapevine volume [22,24,37,38]. Regarding vineyard vegetation detection, several methods have already been proposed based on different approaches using the photogrammetric outcomes from UAV-based imagery, by applying image processing techniques and machine learning methods, and by filtering dense point clouds and DEMs [41][42][43][44][45][46][47][48][49]. Those methods are capable of distinguishing grapevine from non-grapevine vegetation and of extracting different vineyard macro properties such as the number of vine rows, row spacing, width and height, potential missing plants and vineyard vigour maps. ...
Article
Full-text available
The use of unmanned aerial vehicles (UAVs) for remote sensing applications in precision viticulture significantly increased in the last years. UAVs' capability to acquire high spatiotemporal resolution and georeferenced imagery from different sensors makes them a powerful tool for a better understanding of vineyard spatial and multitemporal heterogeneity, allowing the estimation of parameters directly impacting plants' health status. In this way, the decision support process in precision viticulture can be greatly improved. However, despite the proliferation of these innovative technologies in viticulture, most of the published studies rely only on data from a single sensor in order to achieve a specific goal and/or on a single or small period of the vineyard's development. In order to address these limitations and fully exploit the advantages offered by the use of UAVs, this study explores the multi-temporal analysis of vineyard plots at the grapevine scale using different imagery sensors. Individual grapevine detection enables the estimation of biophysical and geometrical parameters, as well as missing grapevine plants. A validation procedure was carried out in six vineyard plots focusing on the detected number of grapevines and missing grapevines. A high overall agreement was obtained concerning the number of grapevines present in each row (99.8%), as well as in the individual grapevine identification (mean overall accuracy of 97.5%). Aerial surveys were conducted in two vineyard plots at different growth stages, with RGB, multispectral and thermal imagery being acquired. Moreover, the extracted individual grapevine parameters enabled us to assess the vineyard variability in a given epoch and to monitor its multi-temporal evolution. This type of analysis is critical for precision viticulture, constituting a tool that significantly supports the decision-making process.
... In recent years, with the application of UAV technology in field crop research, synchronous monitoring of large-scale crops can be achieved, and high-precision, repeatable crop height data can be obtained [17]. As the main method for obtaining crop plant height, the UAV platform has mainly been focused on the growth information and phenotypic traits of high-stalk crops cultivated in the field [18][19][20][21][22]; however, there have been relatively few studies on the growth information and phenotypic parameters of new dwarf crop varieties. It could be seen from the experimental results that there was a good correlation between the plant density and coverage of winter wheat breeding material at the seedling stage, with R2 reaching 0.8205. ...
Preprint
Full-text available
Aims: Obtaining the plant density and plant height information of winter wheat breeding material quickly and accurately is of great practical significance for the growth monitoring and yield prediction of winter wheat breeding material. Method: In this study, a drone equipped with a digital camera was used to obtain digital orthophoto maps (DOM) and digital surface models (DSM) of winter wheat breeding material at the seedling, jointing, booting, flowering and grain filling stages. The coverage of winter wheat was extracted from the drone images, and the relationship between coverage and plant density was established; plant height estimation models of the winter wheat breeding material were established for the jointing, booting, flowering and grain filling stages. Based on the ground-measured winter wheat breeding material height (H), the accuracy of the plant height extracted from the DSM was verified. Results: The results showed that the winter wheat coverage extracted from the UAV images at the seedling stage had a high correlation with the measured plant density, with an R² of 0.8205. The winter wheat H extracted from the DSM was significantly correlated with the measured H, and the fitted R² and RMSE between the predicted and measured plant height were 0.9554 and 6.3233 cm, respectively. Conclusion: The results indicated that the use of UAV aerial imagery to predict the plant density and plant height of winter wheat breeding material has good applicability and can provide a technical reference for future crop phenotype monitoring.
... As a result of these advantages, UAVs are becoming the most suitable remote sensing platform for PV purposes, thereby making the development of new techniques based on UAV imagery a required target for PV [8]. Moreover, their capacity to transport different kinds of sensors has broadened their use to different vineyard applications, such as 3D vineyard characterization using RGB cameras [5,9,10], detection of vine diseases and pests with conventional and multispectral sensors [11][12][13], assessments of the spatial variability of yield and berry composition using multispectral sensors [14][15][16], trimming and leaf removal employing a modified camera [17], water status with thermal and multispectral sensors [18][19][20], and the building of prescription maps using RGB, modified and multispectral cameras [21,22]. In spite of its wide use, UAV imagery has not been employed to identify weed infestations in vineyard cover crop systems. ...
Article
Full-text available
The establishment and management of cover crops are common practices widely used in irrigated viticulture around the world, as they bring great benefits not only to protect and improve the soil, but also to control vine vigor and improve the yield quality, among others. However, these benefits are often reduced when cover crops are infested by Cynodon dactylon (bermudagrass), which impacts crop production due to its competition for water and nutrients and causes important economic losses for the winegrowers. Therefore, the discrimination of Cynodon dactylon in cover crops would enable site-specific control to be applied and thus drastically mitigate damage to the vineyard. In this context, this research proposes a novel, automatic and robust image analysis algorithm for the quick and accurate mapping of Cynodon dactylon growing in vineyard cover crops. The algorithm was developed using aerial images taken with an Unmanned Aerial Vehicle (UAV) and combined decision tree (DT) and object-based image analysis (OBIA) approaches. The relevance of this work consisted in dealing with the constraint caused by the spectral similarity of these complex scenarios formed by vines, cover crops, Cynodon dactylon, and bare soil. The incorporation of height information from the Digital Surface Model and several features selected by machine learning tools in the DT-OBIA algorithm solved this spectral similarity limitation and allowed the precise design of Cynodon dactylon maps. Another contribution of this work is the short time needed to apply the full process from UAV flights to image analysis, which can enable useful maps to be created on demand (within two days of the farmer's request) and is thus timely for controlling Cynodon dactylon in the herbicide application window. Therefore, this combination of UAV imagery and a DT-OBIA algorithm would allow winegrowers to apply site-specific control of Cynodon dactylon and maintain cover crop-based management systems and their consequent benefits in the vineyards, and also comply with the European legal framework for the sustainable use of agricultural inputs and implementation of integrated crop management.
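A toy version of the object-level classification step, combining spectral features with DSM-derived height in a scikit-learn decision tree; the features, labels and simplified class scheme are stand-ins invented for illustration and do not reproduce the DT-OBIA algorithm described above.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical object-level features exported from an OBIA segmentation:
# mean NDVI, mean green reflectance and mean height above ground (from the DSM)
rng = np.random.default_rng(6)
n = 300
ndvi = rng.uniform(0.1, 0.9, n)
green = rng.uniform(0.05, 0.25, n)
height = rng.uniform(0.0, 2.0, n)
X = np.column_stack([ndvi, green, height])

# Simplified labels: 0 = bare soil, 1 = low ground vegetation, 2 = vine
# (generated from simple rules purely so the example runs end to end)
y = np.where(ndvi < 0.3, 0, np.where(height > 1.0, 2, 1))

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```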
... Using this technology, spatial information linked, both directly and indirectly, with canopy characteristics, or information about designated areas, can be recorded in a practical and efficient way. Examples of this information include water status [16], disease detection [17] and canopy characterization [18][19][20][21]. De Castro et al. developed a fully automatic process for vineyard canopy characterization [13], which self-adapts to varying crop conditions. ...
Preprint
Full-text available
Canopy characteristics are crucial for accurately and safely determining the pesticide quantity and volume of water used for spray applications in vineyards. The inevitably high degree of intra-plot variability makes it difficult to develop a global solution for the optimal volume application rate. Here, the design procedure of, and the results obtained from, a variable rate application (VRA) sprayer are presented. Prescription maps were generated after detailed canopy characterization, using a multispectral camera embedded on an unmanned aerial vehicle, throughout the entire growing season in Torrelavit (Barcelona) in four vineyard plots of Chardonnay (2.35 ha), Merlot (2.97 ha), and Cabernet Sauvignon (4.67 ha). The maps were obtained by merging multispectral images with information provided by DOSAVIÑA®, a decision support system, to determine the optimal volume rate. They were then uploaded to the VRA prototype, obtaining actual variable application maps after the application processes were complete. The prototype had an adequate spray distribution quality and exhibited similar results in terms of biological efficacy on powdery mildew compared to conventional (and constant) application volumes. The VRA results demonstrated an accurate and reasonable pesticide distribution, with potential for reduced disease damage even in cases with reduced amounts of plant protection products and water.
... The choice of one platform over another is always based on a compromise, which depends on the objectives of monitoring spatial variability and the economic and human resources available to end-users such as agricultural companies. It is undeniable that the factor that has exponentially encouraged the spread of UAV application in agriculture is the continuous advance in sensor technologies, providing higher resolution, lower weight and dimensions, and cost reduction [23,[25][26][27][28]. Several authors describe a wide range of UAV applications for PV purposes: vigor and biomass [29][30][31][32][33][34], yield and quality monitoring [35,36], water stress [37][38][39][40][41], canopy management [42], diseases [43][44][45][46], weeds [47][48][49], and missing plants [50][51][52][53]. ...
Article
Full-text available
Several remote sensing technologies have been tested in precision viticulture to characterize vineyard spatial variability, from traditional aircraft and satellite platforms to recent unmanned aerial vehicles (UAVs). Imagery processing is still a challenge due to the traditional row-based architecture, where the inter-row soil provides a high to full presence of mixed pixels. In this case, UAV images combined with filtering techniques represent the solution to analyze pure canopy pixels and were used to benchmark the effectiveness of Sentinel-2 (S2) performance in overhead training systems. At harvest time, UAV filtered and unfiltered images and ground sampling data were used to validate the correlation between the S2 normalized difference vegetation indices (NDVIs) and vegetative and productive parameters in two vineyards (V1 and V2). Regarding the UAV vs. S2 NDVI comparison, in both vineyards, satellite data showed a high correlation with both UAV unfiltered and filtered images (V1 R2 = 0.80 and V2 R2 = 0.60 mean values). Correlations between ground data and the NDVIs from both remote sensing platforms were strong for yield and biomass in both vineyards (R2 from 0.60 to 0.95). These results demonstrate the effectiveness of the spatial resolution provided by S2 on overhead trellis system viticulture, promoting precision viticulture also within areas that are currently managed without the support of innovative technologies.
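The benchmarking described here essentially reduces to averaging UAV NDVI (optionally restricted to pure canopy pixels) within each Sentinel-2 footprint and correlating the result with the S2 NDVI. A minimal NumPy sketch is given below; the resolutions, grid alignment and random canopy mask are invented assumptions.

```python
import numpy as np

def aggregate_to_s2(uav_ndvi, canopy_mask, factor):
    """Average UAV NDVI within each Sentinel-2 sized block (factor x factor UAV
    pixels), optionally restricted to the pure-canopy pixels in canopy_mask."""
    h = (uav_ndvi.shape[0] // factor) * factor
    w = (uav_ndvi.shape[1] // factor) * factor
    ndvi = np.where(canopy_mask[:h, :w], uav_ndvi[:h, :w], np.nan)
    blocks = ndvi.reshape(h // factor, factor, w // factor, factor)
    return np.nanmean(blocks, axis=(1, 3))

# Synthetic example: 5 cm UAV pixels aggregated to 10 m blocks (factor = 200)
rng = np.random.default_rng(7)
uav = rng.uniform(0.2, 0.9, (1000, 1000))
mask = rng.random(uav.shape) > 0.5              # keep "canopy" pixels only
s2_like = aggregate_to_s2(uav, mask, factor=200)
print(s2_like.shape)                            # (5, 5) coarse grid
# R2 against real S2 data would then be np.corrcoef(s2_like.ravel(), s2_ndvi.ravel())[0, 1] ** 2
```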
... Ultrasonic sensors are considered a low-cost and user-friendly approach, but are often limited by their spatial resolution and their susceptibility to wind [14]. RGB image-based detection of crop height is the most recently evolving approach for many different crops, including barley [12], maize [15], vineyards [16], wheat [17], sorghum [1], and alfalfa [18]. In particular, 3D point clouds generated from UAV-borne RGB images using SfM (structure from motion) techniques offer new options for deriving crop height information [1]. ...
Article
Full-text available
3D point cloud analysis of imagery collected by unmanned aerial vehicles (UAV) has been shown to be a valuable tool for the estimation of crop phenotypic traits, such as plant height, in several species. Spatial information about these phenotypic traits can be used to derive information about other important crop characteristics, like fresh biomass yield, which could not be derived directly from the point clouds. Previous approaches have often only considered single-date measurements using a single point cloud derived metric for the respective trait. Furthermore, most of the studies focused on plant species with a homogenous canopy surface. The aim of this study was to assess the applicability of UAV imagery for capturing crop height information of three vegetable crops (eggplant, tomato, and cabbage) with a complex vegetation canopy surface during a complete crop growth cycle to infer biomass. Additionally, the effect of crop development stage on the relationship between estimated crop height and field measured crop height was examined. Our study was conducted in an experimental layout at the University of Agricultural Science in Bengaluru, India. For all the crops, the crop height and the biomass were measured at five dates during one crop growth cycle between February and May 2017 (the average crop height was 42.5, 35.5, and 16.0 cm for eggplant, tomato, and cabbage). Using a structure from motion approach, a 3D point cloud was created for each crop and sampling date. In total, 14 crop height metrics were extracted from the point clouds. Machine learning methods were used to create prediction models for vegetable crop height. The study demonstrates that the monitoring of crop height using a UAV during an entire growing period results in detailed and precise estimates of crop height and biomass for all three crops (R2 ranging from 0.87 to 0.97, bias ranging from −0.66 to 0.45 cm). The effect of crop development stage on the predicted crop height was found to be substantial (e.g., the median deviation increased from 1% to 20% for eggplant), influencing the strength and consistency of the relationship between point cloud metrics and crop height estimates and, thus, should be further investigated. Altogether, the results of the study demonstrate that point clouds generated from UAV-based RGB imagery can be used to effectively measure vegetable crop biomass in larger areas (relative error = 17.6%, 19.7%, and 15.2% for eggplant, tomato, and cabbage, respectively) with a similar accuracy as biomass prediction models based on measured crop height (relative error = 21.6%, 18.8%, and 15.2% for eggplant, tomato, and cabbage).
Article
Full-text available
Leaf area index (LAI) is a critical vegetation structural variable and is essential in the feedback of vegetation to the climate system. Advances in global Earth observation have enabled the development of global LAI products and boosted global Earth system modeling studies. This overview provides a comprehensive analysis of LAI field measurements and remote sensing estimation methods, product validation methods and product uncertainties, and the application of LAI in global studies. First, the paper clarifies some definitions related to LAI and introduces methods to determine LAI from field measurements and remote sensing observations. After introducing some major global LAI products, progress made in temporal compositing and prospects for future LAI estimation are analyzed. Subsequently, the overview discusses various LAI product validation schemes, uncertainties in global moderate-resolution LAI products, and high-resolution reference data. Finally, applications of LAI in global vegetation change, land surface modeling, and agricultural studies are presented. It is recommended that (1) continued efforts be taken to advance LAI estimation algorithms and provide high temporal and spatial resolution products from current and forthcoming missions; (2) further validation studies be conducted to address the inadequacy of current validation studies, especially for underrepresented regions and seasons; and (3) new research frontiers, such as machine learning algorithms, light detection and ranging technology, and unmanned aerial vehicles, be pursued to broaden the production and application of LAI.
Article
Full-text available
For monitoring purposes and in the context of geomorphological research, Unmanned Aerial Vehicles (UAV) appear to be a promising solution to provide multi-temporal Digital Surface Models (DSMs) and orthophotographs. A variety of photogrammetric software tools are available for UAV-based data. The objective of this study is to investigate the level of accuracy that can be achieved using two of these software tools: Agisoft PhotoScan® Pro and an open-source alternative, IGN© MicMac®, in sub-optimal survey conditions (rugged terrain, with a large variety of morphological features covering a range of roughness sizes, poor GPS reception). A set of UAV images was taken by a hexacopter drone above the Rivière des Remparts, a river on Reunion Island. This site was chosen for its challenging survey conditions: the topography of the study area (i) involved constraints on the flight plan; (ii) implied errors on some GPS measurements; (iii) prevented an optimal distribution of the Ground Control Points (GCPs); and (iv) was very complex to reconstruct. Several image processing tests were performed with different scenarios in order to analyze the sensitivity of each software package to different parameters (image quality, number of GCPs, etc.). When computing the horizontal and vertical errors within a control region on a set of ground reference targets, both methods provide rather similar results. A precision of up to 3–4 cm is achievable with these software packages. The DSM quality is also assessed over the entire study area by comparing the PhotoScan DSM and the MicMac DSM with a Terrestrial Laser Scanner (TLS) point cloud. The PhotoScan and MicMac DSMs are also compared at the scale of particular features. Both software packages provide satisfying results: PhotoScan is more straightforward to use but its source code is not open; MicMac is recommended for experienced users as it is more flexible.
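A minimal sketch of the kind of accuracy check described (horizontal and vertical errors on ground reference targets), assuming paired (N, 3) arrays of reconstructed and surveyed easting/northing/height coordinates; the function name and data layout are illustrative.

import numpy as np

def checkpoint_errors(recon, surveyed):
    """RMSE of planimetric and vertical errors on check points."""
    d = recon - surveyed
    horiz = np.hypot(d[:, 0], d[:, 1])            # planimetric error per target
    rmse_xy = np.sqrt(np.mean(horiz ** 2))
    rmse_z = np.sqrt(np.mean(d[:, 2] ** 2))
    return rmse_xy, rmse_z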
Article
Full-text available
Unmanned Aerial Vehicle (UAV)-based remote sensing offers great possibilities to acquire field data for precision agriculture applications in a fast and easy way. This field of study is growing rapidly due to its benefits for farm resource management, particularly for studying crop health. This paper reports some experiences related to the analysis of cultivations (vineyards and tomatoes) with Tetracam multispectral data. The Tetracam camera was mounted on a multi-rotor hexacopter. The multispectral data were processed with a photogrammetric pipeline to create triband orthoimages of the surveyed sites. These orthoimages were employed to extract several Vegetation Indices (VI), such as the Normalized Difference Vegetation Index (NDVI), the Green Normalized Difference Vegetation Index (GNDVI), and the Soil Adjusted Vegetation Index (SAVI), examining the vegetation vigor of each crop. The paper demonstrates the great potential of high-resolution UAV data and photogrammetric techniques applied in the agriculture framework to collect multispectral images and evaluate different VIs, suggesting that these instruments represent a fast, reliable, and cost-effective resource in crop assessment for precision farming applications.
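The three indices named above have standard published forms; a minimal sketch applying them to per-band reflectance arrays is given below (the small epsilon and the SAVI soil factor L = 0.5 are common illustrative choices, not values from this study).

import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def gndvi(nir, green):
    return (nir - green) / (nir + green + 1e-9)

def savi(nir, red, L=0.5):
    # Soil Adjusted Vegetation Index with soil brightness factor L.
    return (1.0 + L) * (nir - red) / (nir + red + L)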
Conference Paper
Full-text available
UAV systems represent a flexible technology able to collect a large amount of high-resolution information, both for metric and interpretation uses. In the frame of experimental tests carried out at Dept. ICA of Politecnico di Milano to validate vector-sensor systems and to assess the metric accuracy of images acquired by UAVs, a block of photos taken by a fixed-wing system was triangulated with several software packages. The test field is a rural area included in an Italian park ("Parco Adda Nord"), useful to study flight and imagery performances on buildings, roads, and cultivated and uncultivated vegetation. The UAV SenseFly, equipped with a Canon Ixus 220HS camera, flew autonomously over the area at a height of 130 m, yielding a block of 49 images divided into 5 strips. Sixteen pre-signalized Ground Control Points, surveyed in the area through GPS (NRTK survey), allowed the referencing of the block and accuracy analyses. Approximate values for exterior orientation parameters (positions and attitudes) were recorded by the flight control system. The block was processed with several software packages: Erdas-LPS, EyeDEA (Univ. of Parma), Agisoft Photoscan, Pix4UAV, in assisted or automatic mode. Comparisons of results are given in terms of differences among digital surface models, differences in orientation parameters, and accuracies, when available. Moreover, image and ground point coordinates obtained by the various software packages were independently used as initial values in a comparative adjustment made with scientific in-house software, which can apply constraints to evaluate the effectiveness of different methods of point extraction and accuracies on ground check points.
Article
Full-text available
Precision Viticulture is experiencing substantial growth thanks to the availability of improved and cost-effective instruments and methodologies for data acquisition and analysis, such as Unmanned Aerial Vehicles (UAVs), which have been shown to compete with traditional acquisition platforms, such as satellite and aircraft, due to low operational costs, high operational flexibility and high spatial resolution of imagery. In order to optimize the use of these technologies for precision viticulture, their technical, scientific and economic performances need to be assessed. The aim of this work is to compare NDVI surveys performed with UAV, aircraft and satellite, to assess the capability of each platform to represent the intra-vineyard vegetation spatial variability. NDVI images of two Italian vineyards were acquired simultaneously from different multi-spectral sensors onboard the three platforms, and a spatial statistical framework was used to assess their degree of similarity. Moreover, the pros and cons of each technique were assessed by performing a cost analysis as a function of the scale of application. Results indicate that the different platforms provide comparable results in vineyards characterized by coarse vegetation gradients and large vegetation clusters. On the contrary, in more heterogeneous vineyards, low-resolution images fail to represent part of the intra-vineyard variability. The cost analysis showed that the adoption of the UAV platform is advantageous for small areas and that a break-even point exists above five hectares; above this threshold, airborne and then satellite platforms have lower imagery costs.
Article
Full-text available
The miniaturization of electronics, computers and sensors has created new opportunities for remote sensing applications. Despite current regulatory restrictions, the use of unmanned aerial vehicles equipped with small thermal, laser or spectral sensors has emerged as a promising alternative for assisting modeling, mapping and monitoring applications in rangelands, forests and agricultural environments. This review provides an overview of recent research that has reported UAV flight experiments on the remote sensing of vegetated areas. To set it apart from other reviews, this paper is not limited to crops and precision agriculture applications, but also includes forest and rangeland applications. The work follows a top-down categorization strategy and attempts to fill the gap between application requirements and the characteristics of selected tools, payloads and platforms. Furthermore, correlations between common requirements and the most frequently used solutions are highlighted.
Article
Full-text available
No single sensor can acquire complete information for cultural object reconstruction, even over one or several surveys. For instance, a terrestrial laser scanner (TLS) usually obtains information on building facades, whereas aerial photogrammetry is capable of providing the perspective for building roofs. In this study, a camera-equipped unmanned aerial vehicle (UAV) system and a TLS were used in an integrated design to capture 3D point clouds and thus facilitate the acquisition of complete information on an object of interest for cultural heritage. A camera network is proposed to modify the image-based 3D reconstruction or structure from motion (SfM) method by taking full advantage of the flight control data acquired by the UAV platform. The camera network improves SfM performance in terms of image matching efficiency and the reduction of mismatches. This camera-network-modified SfM is then employed to process the overlapping UAV image sets and to recover the scene geometry. The SfM output covers most information on building roofs, but has sparse resolution. A dense multi-view 3D reconstruction algorithm is then applied to improve in-depth detail. The two groups of point clouds, from image reconstruction and TLS scanning, are registered from coarse to fine with the use of an iterative method. This methodology has been tested on a historical monument in Fujian Province, China. Results show a final point cloud with complete coverage and in-depth details. Moreover, the findings demonstrate that these two platforms, which integrate the scanning principle and image reconstruction methods, can complement each other in terms of coverage, sensing resolution, and model accuracy to create high-quality 3D recordings and presentations.
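The coarse-to-fine registration mentioned above is typically finished with an iterative-closest-point (ICP) step; the sketch below is a minimal rigid ICP using a SciPy KD-tree and the Kabsch solution, assuming the two clouds are already roughly aligned. It illustrates the principle only and is not the authors' implementation.

import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Rigidly align src (N, 3) to dst (M, 3); both already coarsely registered."""
    tree = cKDTree(dst)
    src = src.copy()
    for _ in range(iters):
        _, idx = tree.query(src)               # nearest neighbour in dst for each src point
        matched = dst[idx]
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)  # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t                    # apply the incremental transform
    return src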
Article
Full-text available
In viticulture, knowledge of vineyard vigour represents a useful tool for management. Over large areas, grapevine vigour is mapped by remote sensing, usually with vegetation indices like NDVI. To achieve good correlations between NDVI and other vine parameters, the rows of a vineyard must first be identified. This paper presents an unsupervised classification method for the identification of grapevine rows. Only the red channel of an RGB aerial image is considered as input data. The image is first masked, preserving only the considered vineyard, and then pre-processed with a high-pass filter. The pixel population is split into “row” and “inter-row” subsets through a modified Ward’s technique. The proposed methodology is compared with a standard object-oriented procedure on six vineyards located in Tuscany, using manually digitized vine rows as reference.
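As a rough stand-in for the red-channel workflow described above, the sketch below applies a Gaussian high-pass filter and a two-class k-means split (a simpler substitute for the modified Ward's technique) to separate row from inter-row pixels; the sigma value, the clustering choice and the darker-is-row convention are assumptions made for illustration.

import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans

def split_rows(red, sigma=5.0):
    """red: 2-D array (red channel of the masked vineyard image); returns a row mask."""
    highpass = red - gaussian_filter(red.astype(float), sigma)   # remove low-frequency background
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        highpass.reshape(-1, 1))
    # Convention assumed here: the cluster that is darker in the red band is "row" (vegetation).
    row_label = np.argmin([red.reshape(-1)[labels == k].mean() for k in (0, 1)])
    return (labels == row_label).reshape(red.shape)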
Conference Paper
Full-text available
Unmanned aerial vehicles (UAV) are a novel and flexible tool in precision viticulture, as they can be used to acquire useful information to evaluate the spatial variation of vegetative growth, yield components and grape quality. In this work, the capability of multispectral imagery acquired by a UAV, and of the derived spectral information, to assess the spatial variability of a Tempranillo (Vitis vinifera L.) vineyard has been explored. The study was conducted in a vertical shoot positioned (VSP) Tempranillo vineyard of 3.5 ha located in Navarra, Spain. With the aim of establishing relationships between field variables and the remotely sensed information, a grid of 74 experimental blocks at 20 m intervals was built. Several variables related to vegetative growth and yield were measured prior to harvest in each block. Multispectral imagery was acquired at 17 cm spatial resolution with a Mini-MCA 6 camera (Tetracam Inc., USA), which captured six spectral bands in the visible and near-infrared regions. After geometric and radiometric corrections, the images were combined to generate a mosaic of the whole vineyard, and pixel values across bands were extracted for the corresponding experimental blocks in the field. Spectral bands and indices derived from the aerial images were correlated with vine vigor and yield variables, and their statistical significance was tested. Overall, the study has shown that, while spectral information and derived indices were effectively correlated with vine vigor and yield parameters, the correlation was lower than expected. We hypothesize that a specific post-processing chain will overcome the inherent systematic and random noise and distortions of the original images, thus improving correlation with field data.
Article
Full-text available
This study explores the use of structure from motion (SfM), a computer vision technique, to model vine canopy structure at a study vineyard in the Texas Hill Country. Using an unmanned aerial vehicle (UAV) and a digital camera, 201 aerial images (nadir and oblique) were collected and used to create an SfM point cloud. All points were classified as ground or non-ground points. Non-ground points, presumably representing vegetation and other above-ground objects, were used to create visualizations of the study vineyard blocks. Further, the relationship between non-ground points in close proximity to 67 sample vines and leaf area index (LAI) measurements collected for those same vines was also explored. Points near sampled vines were extracted, from which several metrics were calculated and input into a stepwise regression model to attempt to predict LAI. This analysis resulted in a moderate R2 value of 0.567, accounting for 57 percent of the variation in square-root-transformed LAI (LAISQRT) using six predictor variables. These results provide further justification for SfM datasets as a source of the three-dimensional data necessary for vegetation structure visualization and biophysical modeling over areas of smaller extent. Additionally, SfM datasets can provide an increased temporal resolution compared to traditional three-dimensional datasets like those captured by light detection and ranging (lidar).
Article
Full-text available
A new aerial platform has recently risen for image acquisition: the Unmanned Aerial Vehicle (UAV). This article describes the technical specifications and configuration of a UAV used to capture remote images for early season site-specific weed management (ESSWM). The image spatial and spectral properties required for weed seedling discrimination were also evaluated. Two different sensors, a still visible-light camera and a six-band multispectral camera, and three flight altitudes (30, 60 and 100 m) were tested over a naturally infested sunflower field. The main phases of the UAV workflow were the following: 1) mission planning, 2) UAV flight and image acquisition, and 3) image pre-processing. Three different aspects were needed to plan the route: flight area, camera specifications and UAV tasks. The pre-processing phase included the correct alignment of the six bands of the multispectral imagery and the orthorectification and mosaicking of the individual images captured in each flight. The image pixel size, the area covered by each image and the flight timing were very sensitive to flight altitude. At a lower altitude, the UAV captured images of finer spatial resolution, although the number of images needed to cover the whole field may be a limiting factor due to the energy required for a longer flight and the computational requirements of the subsequent mosaicking process. Spectral differences between weeds, crop and bare soil were significant in the vegetation indices studied (Excess Green Index, Normalised Green-Red Difference Index and Normalised Difference Vegetation Index), mainly at the 30 m altitude. However, greater spectral separability was obtained between vegetation and bare soil with the NDVI. These results suggest that a trade-off between spectral and spatial resolution is needed to optimise the flight mission according to each agronomic objective, as determined by the size of the smallest object to be discriminated (weed plants or weed patches).
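Two elements of this workflow lend themselves to short formulas: the pinhole relation linking ground sample distance to flight altitude, and the RGB indices named in the abstract (Excess Green and the Normalised Green-Red Difference Index). The helpers below are illustrative sketches; NDVI requires a NIR band and follows the form shown in an earlier sketch.

import numpy as np

def gsd(altitude_m, pixel_pitch_um, focal_length_mm):
    """Ground sample distance in metres per pixel (pinhole-camera relation)."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

def excess_green(r, g, b):
    # ExG = 2g - r - b computed on chromatic coordinates.
    total = r + g + b + 1e-9
    return 2 * g / total - r / total - b / total

def ngrdi(r, g):
    # Normalised Green-Red Difference Index.
    return (g - r) / (g + r + 1e-9)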
Article
Full-text available
Aerial image analysis was utilised to predict dormant pruning weights between two growing seasons. We utilised an existing in-row spacing trial in order to examine the relationship between dormant pruning weights and remotely sensed data. The experimental vineyard had a constant between-row spacing (2.44 m) and five different in-row spacings (0.91, 1.52, 2.13, 2.74 and 3.35 m), resulting in spatial variation in canopy volume and dormant pruning weights (kg/metre of row). It was shown that the ratio vegetation index (NIR/R) was linearly correlated with field-wide measurements of pruning weight density (dormant pruning weight per metre of canopy) for both the 1998 and 1999 growing seasons (r2 = 0.68 and 0.88, respectively). Additionally, it was shown that the regression parameters remained consistent between the two growing seasons, allowing for an inter-annual comparison such that the vegetation index vs canopy parameter relationship determined for the 1998 growing season was used to predict field-wide pruning weight densities in the 1999 growing season prior to harvest.
Article
Full-text available
Steep hill and mountain slopes severely affect remote sensing of vegetation. The irradiation on a slope varies strongly with slope azimuth relative to the sun, and the reflectance of the slope varies with the angles of incidence and exitance relative to the slope normal. Topographic correction involves standardizing imagery for these two effects. We use an atmospheric model with a Digital Elevation Model (DEM) to calculate direct and diffuse illumination, and a simple function of incidence and exitance angles to calculate vegetation-canopy reflectance on terrain slope. The reflectance correction has been derived from the physics of visible direct radiation on a vegetation canopy, but has proved applicable to infrared wavelengths and only requires solar position, slope and aspect. We applied the reflectance and illumination correction to a SPOT 4 image of New Zealand to remove topographic variation. In all spectral bands, the algorithm markedly reduced the coefficients of variation of vegetation groups on rugged terrain. This produced clean spectral signatures, improving the capacity for automated classification. If illumination correction is performed alone, the coefficients of variation can be increased, and so should not be applied without a reflectance correction. The algorithm output is reflectance on a level surface, enabling the monitoring of vegetation in hilly and mountainous areas.
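The illumination part of such corrections rests on the cosine of the solar incidence angle on a sloped facet, computed from solar position, slope and aspect; the sketch below shows that standard relation and, as a comment, the simplest cosine correction built on it. The full reflectance model of the paper is not reproduced here.

import numpy as np

def cos_incidence(solar_zenith, solar_azimuth, slope, aspect):
    """Cosine of the solar incidence angle on a slope; all angles in degrees."""
    sz, sa = np.radians(solar_zenith), np.radians(solar_azimuth)
    sl, asp = np.radians(slope), np.radians(aspect)
    return np.cos(sz) * np.cos(sl) + np.sin(sz) * np.sin(sl) * np.cos(sa - asp)

# A simple cosine correction (one of several published variants) would then be:
#   corrected = observed * np.cos(np.radians(solar_zenith)) / cos_incidence(...)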
Article
Full-text available
High-spatial-resolution multispectral imagery was acquired at mid-season 1997 by an airborne digital camera system and used to establish management zones within a 3-ha commercial wine vineyard in California's Napa Valley. Image processing included off-axis brightness correction, band-to-band alignment, ground registration and conversion to a Vegetation Index to enhance sensitivity to canopy density. The image was then stratified by Vegetation Index and color-coded for visual discrimination. An output image was generated in TIFF-World format for input to mapping software on the grower's laptop computer. The imagery was used to delineate low-, moderate-, and high-vigor zones within the study block. Supporting field measurements per zone then included canopy structure (woody biomass, canopy transmittance), vine physiology (leaf water potential, chlorophyll content), and fruit biochemistry. Grapes from each zone were fermented separately and the resulting wines were formally evaluated for difference and quality. The low- and high-vigor zones were clearly distinct from one another with respect to most measurements. Block subdivision enabled the production of a "reserve" (highest) quality wine for the first time ever from this particular block.
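A minimal sketch of the zoning step, stratifying a vegetation-index raster into low-, moderate- and high-vigour classes by tercile thresholds; the threshold rule is an illustrative assumption rather than the one used in this study.

import numpy as np

def vigour_zones(vi):
    """vi: 2-D vegetation-index raster; returns 0 (low), 1 (moderate), 2 (high), -1 (no data)."""
    valid = vi[np.isfinite(vi)]
    t1, t2 = np.percentile(valid, [33.3, 66.7])
    zones = np.full(vi.shape, -1, dtype=int)
    zones[vi <= t1] = 0
    zones[(vi > t1) & (vi <= t2)] = 1
    zones[vi > t2] = 2
    return zones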
Article
Full-text available
Leaf Area Index (LAI) retrieval performance from gap fraction measurements is investigated over vertically trained vineyards. A 3D vineyard model was constructed to analyze the influence of canopy architecture characteristics and light direction on LAI estimation. Results show that for specific directions – close to zenith and parallel to the rows – the gap fraction (Po) is mainly driven by vineyard architectural characteristics, with a small effect of LAI due to the clumped foliage distribution. The sensitivity of Po to LAI is enhanced for directions far from zenith and perpendicular to the rows, resulting in lower uncertainties in LAI retrieval. The 3D vineyard model was used to simulate a range of cases, from which two neural networks were calibrated to retrieve LAI from Po measured in several directional configurations. These relationships were tested in independent field experiments conducted on commercial vineyards using either hemispherical photographs or a ceptometer. Both experiments highlighted the importance of selecting an appropriate geometrical configuration and introducing information on canopy architecture (canopy height and vegetation width on the row relative to inter-row spacing) to reduce LAI uncertainties associated with the vineyard spatial arrangement. For the optimal configurations, estimates of LAI with hemispherical photographs (RMSE = 0.389) and the ceptometer (RMSE = 0.27) were obtained and compared to destructive LAI measurements.
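For context, the classical turbid-medium inversion for a single view direction is LAI = −cos(θ)·ln(Po)/G; the sketch below implements only this relation and ignores the row clumping that the study models explicitly, so it illustrates the underlying physics rather than the paper's retrieval.

import numpy as np

def lai_from_gap_fraction(p0, view_zenith_deg, G=0.5):
    """p0: measured gap fraction; G: leaf projection function (0.5 = spherical leaf angles)."""
    theta = np.radians(view_zenith_deg)
    return -np.cos(theta) * np.log(p0) / G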
Conference Paper
Full-text available
We analyze the capabilities of CASI data for the discrimination of vine varieties in hyperspectral images. To analyze the discrimination capabilities of the CASI data, principal components analysis and linear discriminant analysis methods are used. We assess the performance of various classification techniques: Multi-layer perceptrons, radial basis function neural networks, and support vector machines. We also discuss the trade-off between spatial and spectral resolutions in the framework of precision viticulture.
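A minimal sketch of the analysis chain named in this abstract, combining PCA with LDA and SVM classifiers on per-pixel spectra using scikit-learn; the number of components, kernel and cross-validation settings are illustrative assumptions, and the multi-layer perceptron and RBF-network classifiers are omitted.

from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def variety_classifiers(X, y, n_components=10):
    """X: (n_pixels, n_bands) spectra; y: variety labels. Returns mean CV accuracy per classifier."""
    lda = make_pipeline(PCA(n_components=n_components),
                        LinearDiscriminantAnalysis())
    svm = make_pipeline(PCA(n_components=n_components),
                        SVC(kernel="rbf", C=10.0, gamma="scale"))
    return {name: cross_val_score(clf, X, y, cv=5).mean()
            for name, clf in {"LDA": lda, "SVM": svm}.items()}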
Article
Full-text available
This paper presents a study of precision agriculture in the wine industry. While precision viticulture mostly aims to maximise yields by delivering the right inputs to appropriate places on a farm in the correct doses and at the right time, the objective of this study was rather to assess vine biomass differences. The solution proposed in this paper uses aerial imagery as the primary source of data for vine analysis. The first objective of the solution is to automatically identify vineyard blocks, vine rows, and individual vines within rows. This is made possible through a series of enhancements and hierarchical segmentations of the aerial images. The second objective is to determine the correlation of image data with the biophysical data (yield and pruning mass) of each vine. A multispectral aerial image is used to compute vegetation indices, which serve as indicators of biophysical measures. The results of the automatic detection are compared against a test field to verify both vine location and vegetation index correlation with relevant vine parameters. The advantage of this technique is that it functions in environments where active cover crop growth between vines is evident and where variable vine canopy conditions are present within a vineyard block.
Article
Full-text available
Special issue: Quantitative retrieval of surface properties from optical remote sensing: advancing applications with physical models – In honour of the retirement of Professor John R. Miller.
Methods for chlorophyll a + b (Cab) estimation in row-structured crops that account for row orientation and sun geometry are presented in this research. Airborne campaigns provided imagery over a total of 72 study sites from 14 Vitis vinifera L. fields with the Compact Airborne Spectrographic Imager (CASI) hyperspectral sensor in different sun geometries and a wide range of row orientations. Two different CASI acquisition modes were used, comprising 1 and 4 m spatial resolutions with 8 and 72 bands, respectively, in the visible and near-infrared spectral regions. Airborne campaigns were acquired over the same sites in the morning and in the afternoon to assess the bidirectional reflectance distribution function (BRDF) effects on the imagery owing to the different fractions of shadow as a function of the sun viewing geometries and the row orientation. Narrow-band indices sensitive to chlorophyll content (TCARI/OSAVI (transformed chlorophyll absorption in reflectance index / optimized soil-adjusted vegetation index)) and canopy structure (normalized difference vegetation index (NDVI)) were calculated from the CASI imagery. The effects on the canopy reflectance of different sun viewing geometries and row orientation were …
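The narrow-band indices named above have standard published forms; assuming reflectance arrays at roughly 550, 670, 700 and 800 nm, a minimal sketch is given below (NDVI follows the form shown in an earlier sketch and is omitted).

import numpy as np

def tcari(r550, r670, r700):
    # Transformed Chlorophyll Absorption in Reflectance Index.
    return 3.0 * ((r700 - r670) - 0.2 * (r700 - r550) * (r700 / r670))

def osavi(r670, r800):
    # Optimized Soil-Adjusted Vegetation Index.
    return (1.0 + 0.16) * (r800 - r670) / (r800 + r670 + 0.16)

def tcari_osavi(r550, r670, r700, r800):
    # Combined index sensitive to chlorophyll content, reduced soil influence.
    return tcari(r550, r670, r700) / osavi(r670, r800)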