Article

Abstract

Plant density is a useful variable that determines the fate of the wheat crop. The most commonly used method for quantifying plant density is visual counting at ground level. The objective of this study is to develop and evaluate a method for estimating wheat plant density at the emergence stage from high-resolution imagery taken by a UAV at very low altitude, with application to high-throughput phenotyping in field conditions. A Sony ILCE α5100L RGB camera with 24 Mpixels, equipped with a 60 mm focal length lens, was flown aboard a hexacopter at 3 to 7 m altitude and about 1 m/s speed. This provided a ground resolution between 0.20 mm and 0.45 mm, while ensuring 59–77% overlap between images. The camera was oriented at a 45° zenith angle, in a compass direction perpendicular to the row direction, to maximize the viewed cross-section of the plants and to minimize the effect of the wind created by the rotors. Agisoft PhotoScan software was then used to derive the camera position for each image. Images were then projected onto the ground surface, and subsamples were extracted to estimate the plant density. The extracted images were first classified to separate the green pixels from the background, and the rows were then identified and extracted. Finally, image objects (groups of connected green pixels) were identified on each row, and the number of plants each object contained was estimated using a Support Vector Machine whose training was optimized with a Particle Swarm Optimization algorithm.
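As a rough illustration of the processing chain summarized in this abstract, the hedged Python sketch below segments green pixels with an excess-green index and Otsu threshold, describes each connected object by simple shape features, and maps objects to plant counts with an SVM. It is not the authors' code: the Particle Swarm Optimization of the SVM training is replaced here by a plain grid search, and the feature set and thresholds are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the abstract's pipeline on one
# extracted image patch: excess-green segmentation, connected-component
# extraction, and an SVM that maps object features to a plant count.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def object_features(rgb):
    """Segment green pixels and return one feature row per connected object."""
    r, g, b = [rgb[..., i].astype(float) / 255.0 for i in range(3)]
    exg = 2 * g - r - b                      # excess-green index
    mask = exg > threshold_otsu(exg)         # green vs. background
    labels = label(mask)
    feats = [[p.area, p.major_axis_length, p.minor_axis_length,
              p.solidity, p.eccentricity] for p in regionprops(labels)]
    return np.array(feats)

# X: per-object feature rows from training patches; y: visually counted
# plants per object (1, 2, 3, ...), both from ground annotation.
def train_counter(X, y):
    # The paper tunes the SVM with PSO; a grid search is substituted here.
    grid = {"C": [1, 10, 100], "gamma": ["scale", 0.1, 0.01]}
    return GridSearchCV(SVC(kernel="rbf"), grid, cv=5).fit(X, y)

# Plant density = sum of predicted counts over all objects / sampled area.
```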

... Unmanned aerial vehicles (UAV) have become more widely used for autonomous mission planning in precision agriculture (Duan et al., 2017;Li et al., 2020) as this method is completely nondestructive at all growth stages (Portz et al., 2012) and can be used under adverse field conditions with adjustable speeds (Chapman et al., 2014). Vegetation indices (VIs) calculated from UAV images demonstrated promising results in ground cover estimation and plant emergence for a variety of row crops (Chu et al., 2016;Duan et al., 2017;Jin et al., 2017;Koh et al., 2019;Li et al., 2019;Liu et al., 2017;Zhao et al., 2018). ...
... However, due to low spectral resolution, it was challenging to identify and segment plants from the background (Zhao et al., 2018). Jin et al. (2017) and Liu et al. (2017) advocated high-resolution multispectral images for improved accuracy under overlapped seedling conditions. ...
... One of the major limitations of the quality of the UAV image analysis was the ground resolution that might affect segmentation accuracy and repeatability (Jin et al., 2017;Liu et al., 2017;Weber et al., 2006). The RMSE and MAE in our data could have been significantly reduced if the spatial resolution was higher. ...
Full-text available
Article
Abstract Plant density and canopy cover are key agronomic traits for cotton (Gossypium hirsutum L.) and sorghum [Sorghum bicolor (L.) Moench] phenotypic evaluation. The objective of this study was to evaluate utility of broadband red–green–blue (RGB) and narrowband green, red, red‐edge, and near‐infrared spectral data taken by an unmanned aerial vehicle (UAV), and RGB taken by a digital single‐lens reflex camera for assessing the cotton and sorghum stands. Support Vector Machine was used to analyze UAV images, whereas ImageJ was used for RGB images. Fifteen vegetation indices (VIs) were evaluated for their accuracy, predictability, and residual yield. All VIs had Cohen's k > .65, F score > .63, and User and Producer accuracy of more than 71 and 69%, respectively. Soil‐adjusted vegetation indices (SAVIs) among narrowband VIs and excess green minus excess red (ExG–ExR) among broadband VIs provided more agreeable estimates of cotton and sorghum density than the remaining VIs with R2 and index of agreement (IoA) up to .79 and .92, respectively. The estimated canopy cover explained up to 83 and 82% variability in leaf area index (LAI) of cotton and sorghum, respectively. The ImageJ produced R2 from .79 to .90 and .83 to .86 and IoA .89 to .97 and ∼.91 between estimated and observed cotton and sorghum density, respectively. ImageJ explained up to 82 and 79% variability in cotton and sorghum LAI, respectively. Although ImageJ can give close estimates of crop density and cover, UAV‐based narrowband VIs still can provide an agreeable, reliable, and time‐efficient estimate of these attributes.
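For readers unfamiliar with the two index families compared in this abstract, the short sketch below computes a broadband ExG−ExR from RGB digital numbers and a narrowband SAVI from red/NIR reflectance. The band arrays, the chromaticity normalization, and the soil factor L = 0.5 are common conventions assumed here, not details taken from the study.

```python
# Hedged sketch of the two index families compared above.
import numpy as np

def exg_minus_exr(r, g, b):
    """Excess green minus excess red from normalized RGB chromaticities."""
    total = r + g + b + 1e-9
    rn, gn, bn = r / total, g / total, b / total
    exg = 2 * gn - rn - bn
    exr = 1.4 * rn - gn
    return exg - exr

def savi(nir, red, L=0.5):
    """Soil-adjusted vegetation index with the common L = 0.5."""
    return (1 + L) * (nir - red) / (nir + red + L + 1e-9)

# Thresholding either index (e.g., ExG-ExR > 0) gives a canopy mask whose
# pixel fraction approximates ground cover.
```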
... Images obtained from UAV were used to calculate the seedling number (Feng et al., 2020;Osco et al., 2020), seedling distance (Gnädinger & Schmidhalter, 2017), plant density (Thorp et al., 2008), plant detection and classification (Guo et al., 2021), plant height (Hu et al., 2018), tassel number (Liu et al., 2020), biomass (Togeiro de Alckmin et al., 2021;Yu et al., 2016), LAI (Lei et al., 2019) and yield (Herrmann et al., 2020). Different machine vision-learning approaches like the random forest classifier (Li, Xu, et al., 2019), support vector machine (Jin et al., 2017) and deep learning (Fan et al., 2018;Feng et al., 2020) were applied for plant counting. Deep learning can obtain higher estimation accuracy, but needs a larger quantity of sampled data, longer training time and higher cost in order to classify all of the plants (Finizola et al., 2019;Kamath et al., 2018;Shinde & Shah, 2018). ...
... Excess green index (EXG; Meyer and Neto, 2008), Lab colour space , excess green minus excess red index (EXGR; Upendar et al., 2021), green leaf index (GLI; Blancon et al., 2019) and support vector machine (SVM; Jin et al., 2017) were proposed in previous research for well identification of the green pixels. EXG (Eq. ...
... Large-scale experiments also incurred increased costs. UAVs have the merits of fast image acquisition, automation and easy operation (Feng et al., 2020;Jin et al., 2017;Koh et al., 2019). Varela et al. (2018) used UAVs to automatically scout fields with an overall crop classification accuracy of 0.96. ...
Full-text available
Article
Accurate maize plant counting plays an essential role in prediction of leaf area index (LAI), aboveground biomass (AGB) and yield. Plant counting of maize inbred lines at early growth stage will result in counting bias caused by death and growth of small seedlings. Therefore, the estimation of LAI and AGB might be negatively affected by plant counting bias at early growth stage. In this study, morphologic discrimination model (MDM) and interpolation discriminant model (IDM) were proposed for plant counting of maize inbred lines at second to fourth (V2–V4) leaf and fourth to sixth (V4–V6) leaf stages with different uncrewed aerial vehicles (UAV) flight heights. Automatic optimum angle calculation of each row, location-based plant cluster segmentation and mosaic method were presented to improve the estimation accuracy of plant counting. Then, the impact of accurate plant counting was evaluated in LAI and AGB prediction at the two growth stages. The results indicated that germination rate difference of some inbred lines could reach up to 38% between V2–V4 and V4–V6 leaf stages. The proposed method accurately estimated the plant counting in the UAV images during V2–V4 leaf stage (R² = 0.98, RMSE = 7.7, rRMSE = 2.6%) and V4–V6 leaf stage (R² = 0.86, RMSE = 2.0, rRMSE = 5.5%). The estimated LAI and AGB with plant numbers calculated at V4–V6 leaf stage correlated better with the field measurements (R² = 0.85 and R² = 0.9, respectively) compared with those estimated at V2–V4 leaf stage (R² = 0.8 and R² = 0.86, respectively). This research indicates that better estimation of LAI and AGB in the field were obtained by accurate plant counting in the late growth stage using UAV images and provides valuable insight for more accurate prediction of yield and crop management and breeding.
... Methods have been developed to accurately estimate the number of plants for various crops. For example, the stand count of maize (Jiang et al., 2019), wheat (Jin et al., 2017; Liu et al., 2015, 2017c, 2018), rice (Wu et al., 2019), safflower (Koh et al., 2019), and potatoes were successfully estimated using high-resolution UAV images. Deep learning frameworks can automatically extract the feature information in remote sensing images to achieve plant counting. ...
... However, previous studies that utilized machine learning only focused on the number of plants of a single type of crop. These supervised models may not work well if they were directly migrated to another crop because plant stand counting was affected by many factors (crop type, plant size, leaf overlap, variable spacing, etc.) (Csillik et al., 2018; Jin et al., 2017; Liu et al., 2017a, 2017b; Zhao et al., 2018). When these factors change, the model needs to be re-trained, which requires extensive training data, considerable time, and space (Machefer, 2020). ...
... Besides, Shuai, Martinez-Feria, et al. (2019) proposed an algorithm to compare the variation of plant-spacing intervals to extract the number of maize seedlings. However, in the late seedling stage, the loss of precision caused by overlapping is still a challenge for plant counting (Jin et al., 2017). ...
Full-text available
Article
Acquiring the crop plant count is critical for enhancing field decision-making at the seedling stage. Remote sensing using unmanned aerial vehicles (UAVs) provide an accurate and efficient way to estimate plant count. However, there is a lack of a fast and robust method for counting plants in crops with equal spacing and overlapping. Moreover, previous studies only focused on the plant count of a single crop type. Therefore, this study developed a method to fast and non-destructively count plant numbers using high-resolution UAV images. A computer vision-based peak detection algorithm was applied to locate the crop rows and plant seedlings. To test the method’s robustness, it was used to estimate the plant count of two different crop types (maize and sunflower), in three different regions, at two different growth stages, and on images with various resolutions. Maize and sunflower were chosen to represent equidistant crops with distinct leaf shapes and morphological characteristics. For the maize dataset (with different regions and growth stages), the proposed method attained R2 of 0.76 and relative root mean square error (RRMSE) of 4.44%. For the sunflower dataset, the method resulted in R2 and RRMSE of 0.89 and 4.29%, respectively. These results showed that the proposed method outperformed the watershed method (maize: R2 of 0.48, sunflower: R2 of 0.82) and better estimated the plant numbers of high-overlap plants at the seedling stage. Meanwhile, the method achieved higher accuracy than watershed method during the seedling stage (2–4 leaves) of maize in both study sites, with R2 up to 0.78 and 0.91, respectively, and RRMSE of 2.69% and 4.17%, respectively. The RMSE of plant count increased significantly when the image resolution was lower than 1.16 cm and 3.84 cm for maize and sunflower, respectively. Overall, the proposed method can accurately count the plant numbers for in-field crops based on UAV remote sensing images.
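A minimal sketch of a peak-detection count in the spirit of the method this abstract describes (not the authors' implementation): a binary greenness mask is projected across the rows to locate row positions, and each row strip is then projected along the row to find seedling peaks. The distance thresholds are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def count_plants(mask, row_axis=0, min_row_gap=50, min_plant_gap=10):
    """mask: 2-D boolean array with crop rows running parallel to `row_axis`."""
    # 1. Locate rows: sum green pixels perpendicular to the row direction.
    row_profile = mask.sum(axis=row_axis)
    rows, _ = find_peaks(row_profile, distance=min_row_gap)
    counts, half = [], min_row_gap // 2
    for c in rows:
        # 2. Within each row strip, peaks along the row are plant candidates.
        if row_axis == 0:
            strip = mask[:, max(0, c - half):c + half]
        else:
            strip = mask[max(0, c - half):c + half, :]
        plant_profile = strip.sum(axis=1 - row_axis)
        peaks, _ = find_peaks(plant_profile, distance=min_plant_gap)
        counts.append(len(peaks))
    return rows, counts
```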
... High-throughput crop phenotyping methods have received increasing attention for their potential for using genomic resources for the genetic improvement of crop yield. They provide powerful tools for measuring physiological and agronomic trait phenotypes, quantifying and monitoring large genetically defined populations in field experiments and breeding nurseries on multiple temporal and spatial scales [4][5][6][7][8]. To do this, they apply advanced robotics, high-tech sensors, data processing systems, and images. ...
... Crop phenotyping platforms and sensors are expensive in most crop breeding studies, but the rapid development of mobile and miniaturized technologies will offer powerful and affordable micro-sensors for monitoring crop phenotypes via multi-temporal high-resolution images. Smaller and lighter sensors have been combined with phenotyping platforms to conduct the study of crop phenotyping [4,5,8]. Various optical sensors have been used to estimate crop traits under multiple stress conditions. ...
... Various algorithms and models have been applied to predict wheat LAI and CC through collecting images with UAV and multispectral sensor technologies [12,35,36]. Gao et al. [36] used the UAV hyperspectral vegetation index to model 103 LAIs of multiple wheat single-growth periods, and the verification R 2 of the model was 0.783. Other reports of UAV-based multispectral estimated vegetation indices in wheat, as well as other crops, under drought conditions had similar coefficients [11]. ...
Full-text available
Article
High-throughput phenotypic identification is a prerequisite for large-scale identification and gene mining of important traits. However, existing work has rarely leveraged high-throughput phenotypic identification into quantitative trait locus (QTL) acquisition in wheat crops. Clarifying the feasibility and effectiveness of high-throughput phenotypic data obtained from UAV multispectral images in gene mining of important traits is an urgent problem to be solved in wheat. In this paper, 309 lines of the spring wheat Worrakatta × Berkut recombinant inbred line (RIL) were taken as materials. First, we obtained the leaf area index (LAI) including flowering, filling, and mature stages, as well as the flag leaf chlorophyll content (CC) including heading, flowering, and filling stages, from multispectral images under normal irrigation and drought stress, respectively. Then, on the basis of the normalized difference vegetation index (NDVI) and green normalized difference vegetation index (GNDVI), which were determined by multispectral imagery, the LAI and CC were comprehensively estimated through the classification and regression tree (CART) and cross-validation algorithms. Finally, we identified the QTLs by analyzing the predicted and measured values. The results show that the predicted values of determination coefficient (R2) ranged from 0.79 to 0.93, the root-mean-square error (RMSE) ranged from 0.30 to 1.05, and the relative error (RE) ranged from 0.01 to 0.18. Furthermore, the correlation coefficients of predicted and measured values ranged from 0.93 to 0.94 for CC and from 0.80 to 0.92 for LAI at different wheat growth stages under normal irrigation and drought stress. Additionally, a linkage map of this RIL population was constructed by 11,375 SNPs; eight QTLs were detected for LAI on wheat chromosomes 1BL, 2BL (four QTLs), 3BL, 5BS, and 5DL, and three QTLs were detected for CC on chromosomes 1DS (two QTLs) and 3AL. The closely linked QTLs formed two regions on chromosome 2BL (from 54 to 56 cM and from 96 to 101 cM, respectively) and one region on 1DS (from 26 to 27 cM). Each QTL explained phenotypic variation for LAI from 2.5% to 13.8% and for CC from 2.5% to 5.8%. For LAI, two QTLs were identified at the flowering stage, two QTLs were identified at the filling stage, and three QTLs were identified at the maturity stage, among which QLAI.xjau-5DL-pre was detected at both filling and maturity stages. For CC, two QTLs were detected at the heading stage and one QTL was identified at the flowering stage, among which QCC.xjau-1DS was detected at both stages. Three QTLs (QLAI.xjau-2BL-pre.2, QLAI.xjau-2BL.2, and QLAI.xjau-3BL-pre) for LAI were identified under drought stress conditions. Five QTLs for LAI and two QTLs for CC were detected by imagery-predicted values, while four QTLs for LAI and two QTLs for CC were identified by manual measurement values. Lastly, investigations of these QTLs on the wheat reference genome identified 10 candidate genes associated with LAI and three genes associated with CC, belonging to F-box family proteins, peroxidase, GATA transcription factor, C2H2 zinc finger structural protein, etc., which are involved in the regulation of crop growth and development, signal transduction, and response to drought stress. These findings reveal that UAV sensing technology has relatively high reliability for phenotyping wheat LAI and CC, which can play an important role in crop genetic improvement.
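The estimation step described here, a regression tree (CART) predicting LAI or CC from NDVI and GNDVI assessed with cross-validation, can be sketched as follows. The arrays and hyperparameters are placeholders rather than the study's configuration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_predict, KFold

def cart_estimate(ndvi, gndvi, measured):
    """Cross-validated CART prediction of a trait (LAI or CC) from two VIs."""
    X = np.column_stack([ndvi, gndvi])
    model = DecisionTreeRegressor(max_depth=5, min_samples_leaf=5)
    pred = cross_val_predict(model, X, measured,
                             cv=KFold(5, shuffle=True, random_state=0))
    rmse = float(np.sqrt(np.mean((pred - measured) ** 2)))
    r2 = float(np.corrcoef(pred, measured)[0, 1] ** 2)
    return pred, r2, rmse
```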
... It can establish a relationship between the acquired crop canopy spectral information and growth parameters, which allows for the analysis of periodic quantitative variations in crop parameters, which has become a hot spot in agricultural quantitative remote-sensing research [9,10]. In recent years, remote-sensing platforms based on the ultralow-altitude unmanned aerial vehicles (UAVs) carrying various microsensors have been widely implemented in precision agricultural management because of their high mobility, low cost, high operational efficiency, and capacity to acquire crop canopy images with higher temporal and spatial resolution beneath cloud cover [11][12][13]. Currently, hyperspectral imaging spectrometers carried by UAVs can capture copious detailed spectral information of crop canopies due to their vast number of wavebands, allowing them to better monitor crop-growth metrics [14,15]. ...
... This study involves 32 modeling-set data and 16 validation-set data, which are small data sets. Second, the spectral variables selected in this study are multicollinear, and the RF algorithm is not sensitive to multicollinearity, so models built on the RF method produce less accurate AGB estimates [11,29]. ...
Full-text available
Article
Aboveground biomass (AGB) is an important indicator to evaluate crop growth, which is closely related to yield and plays an important role in guiding fine agricultural management. Compared with traditional AGB measurements, unmanned aerial vehicle (UAV) hyperspectral remote sensing technology has the advantages of being non-destructive, highly mobile, and highly efficient in precision agriculture. Therefore, this study uses a hyperspectral sensor carried by a UAV to obtain hyperspectral images of potatoes in stages of tuber formation, tuber growth, starch storage, and maturity. Linear regression, partial least squares regression (PLSR), and random forest (RF) based on vegetation indices (Vis), green-edge parameters (GEPs), and combinations thereof are used to evaluate the accuracy of potato AGB estimates in the four growth stages. The results show that (i) the selected VIs and optimal GEPs correlate significantly with AGB. Overall, VIs correlate more strongly with AGB than do GEPs. (ii) AGB estimates made by linear regression based on the optimal VIs, optimal GEPs, and combinations thereof gradually improve in going from the tuber-formation to the tuber-growth stage and then gradually worsen in going from the starch-storage to the maturity stage. Combining the optimal GEPs with the optimal VIs produces the best estimates, followed by using the optimal VIs alone, and using the optimal GEPs produces the worst estimates. (iii) Compared with the single-parameter model, which uses the PLSR and RF methods based on VIs, the combination of VIs with the optimal GEPs significantly improves the estimation accuracy, which gradually improves in going from the tuber-formation to the tuber-growth stage, and then gradually deteriorates in going from the starch-storage to the maturity stage. The combination of VIs with the optimal GEPs produces the most accurate estimates. (iv) The PLSR method is better than the RF method for estimating AGB in each growth period. Therefore, combining the optimal GEPs and VIs and using the PLSR method improves the accuracy of AGB estimates, thereby allowing for non-destructive dynamic monitoring of potato growth.
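A hedged sketch of the best-performing setup reported in this abstract, PLSR of AGB on vegetation indices combined with green-edge parameters; the column layout, the component count, and the cross-validation scheme are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def plsr_agb(vis, geps, agb, n_components=3):
    """Cross-validated PLSR of AGB on VIs plus optimal green-edge parameters."""
    X = np.column_stack([vis, geps])          # predictors: VIs + optimal GEPs
    model = PLSRegression(n_components=n_components)
    pred = cross_val_predict(model, X, agb, cv=5).ravel()
    rmse = float(np.sqrt(np.mean((pred - agb) ** 2)))
    return pred, rmse
```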
... Previous research has also used UAV-derived image features, including size and shape (e.g., area, diameter, major axis length, minor axis length, solidity, and eccentricity) to estimate wheat density (Jin et al., 2017) and detect corn at an early growth stage (Varela et al., 2018). Jin et al. (2017) used these features in a support vector machine to estimate the wheat density and achieved R 2 values from 0.80 to 0.91 at different experiment sites. Varela et al. (2018) used image features in a decision tree to classify corn and non-corn objects (weeds) and found that aspect ratio, axis-diameter ratio, convex area, thinness, and solidity were significant image features in the classification. ...
Full-text available
Article
Assessing corn (Zea mays L.) emergence uniformity soon after planting is important for relating it to grain production and for making replanting decisions. Unmanned aerial vehicle (UAV) imagery has been used for determining corn densities at vegetative growth stage 2 (V2) and later, but not as a tool for detecting emergence date. The objective of this study was to estimate days after corn emergence (DAE) using UAV imagery. A field experiment was designed with four planting depths to obtain a range of corn emergence dates. UAV imagery was collected during the first, second and third weeks after emergence. Acquisition height was approximately 5 m above ground level, resulting in a ground sampling distance of 1.5 mm pixel-1. Seedling size and shape features derived from UAV imagery were used for DAE classification based on the Random Forest machine learning model. Results showed image features were distinguishable for different DAE (single day) within the first week after initial corn emergence with a moderate overall classification accuracy of 0.49. However, for the second week and beyond the overall classification accuracy diminished (0.20 to 0.35). When estimating DAE within a three-day window (± 1 DAE), overall 3-day classification accuracies ranged from 0.54 to 0.88. Diameter, area, and major axis length/area were important image features to predict corn DAE. Findings demonstrated that UAV imagery can detect newly-emerged corn plants and estimate their emergence date to assist in establishing emergence uniformity. Additional studies are needed for fine-tuning image collection procedures and image feature identification in order to improve accuracy.
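The classification step in this abstract, a Random Forest mapping seedling size and shape features to days after emergence (DAE) scored both exactly and within a ±1-day window, might look roughly like the sketch below; the split strategy and forest size are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def classify_dae(features, dae_labels):
    """features: per-seedling shape descriptors; dae_labels: integer DAE."""
    Xtr, Xte, ytr, yte = train_test_split(features, dae_labels,
                                          test_size=0.3, random_state=0)
    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)
    pred = rf.predict(Xte)
    exact = float(np.mean(pred == yte))                # single-day accuracy
    window = float(np.mean(np.abs(pred - yte) <= 1))   # 3-day (+/- 1 DAE) window
    return exact, window, rf.feature_importances_
```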
... (B. Huang et al., 2013; Khanal et al., 2017; López-Granados, 2011; Manfreda et al., 2018; Pádua et al., 2017; Peña et al., 2013; Pérez-Ortiz et al., 2015; Rasmussen et al., 2013, 2016; Torres-Sánchez et al., 2014; Verger et al., 2014; Von Bueren et al., 2015; C. Zhang & Kovacs, 2012). (2) Remote phenotyping, yield estimation, crop surface model, counting of plants (Bendig et al., 2013, 2014; Geipel et al., 2014; Gnädinger & Schmidhalter, 2017; Haghighattalab et al., 2016; Holman et al., 2016; Jin et al., 2017; W. Li et al., 2016; Maimaitijiang et al., 2017; Sankaran et al., 2015; Schirrmann et al., 2016; Shi et al., 2016; Yue et al., 2017; X. ...
... As yield estimation is an incredibly vital piece of information, particularly when being available on time, there is a potential for UAVs to provide all field measurements and efficiently acquire high-quality data (Daakir et al., 2017;Demir et al., 2018;Enciso et al., 2019;Kulbacki et al., 2018;Pudelko et al., 2012). In this regard, Jin et al. (2017) took advantage of the high resolution imagery obtained by UAVs at very low altitudes to develop and assess a method for estimating wheat plant density at the emergence stage. According to the authors, UAVs overcome the limitations of rover systems equipped with cameras and represent a non-invasive method to estimate plant density in crops, allowing farmers to achieve the high throughput necessary for field phenotyping independent of the trafficability of the soil. ...
Full-text available
Article
Drones, also called Unmanned Aerial Vehicles (UAV), have witnessed a remarkable development in recent decades. In agriculture, they have changed farming practices by offering farmers substantial cost savings, increased operational efficiency, and better profitability. Over the past decades, the topic of agricultural drones has attracted remarkable academic attention. We therefore conduct a comprehensive review based on bibliometrics to summarize and structure existing academic literature and reveal current research trends and hotspots. We apply bibliometric techniques and analyze the literature surrounding agricultural drones to summarize and assess previous research. Our analysis indicates that remote sensing, precision agriculture, deep learning, machine learning, and the Internet of Things are critical topics related to agricultural drones. The co-citation analysis reveals six broad research clusters in the literature. This study is one of the first attempts to summarize drone research in agriculture and suggest future research directions.
... This study presented a deep learning model for wheat plant density estimation. Unlike traditional research that estimated wheat plant density before tillering [1,13], this study tackled wheat density estimation after tillering, when wheat plants are highly clustered. Specifically, this study's main research contributions are as follows. ...
... The model-predicted heatmaps already presented an excellent distribution of the plant rows, and extracting rows from them is a promising direction for future work. Fourth, it is essential to apply the model at the field scale, as implemented in [13]. With a predefined flight path, UAVs can obtain orthomosaic images [53]. ...
Full-text available
Article
Plant density is a significant variable in crop growth. Plant density estimation by combining unmanned aerial vehicles (UAVs) and deep learning algorithms is a well-established procedure. However, flight campaigns for wheat density estimation are typically executed at early development stages. Further exploration is required to estimate the wheat plant density after the tillering stage, which is crucial to the following growth stages. This study proposed a plant density estimation model, DeNet, for highly accurate wheat plant density estimation after tillering. The validation results showed that (1) the DeNet with global-scale attention is superior in plant density estimation, outperforming the typical deep learning models SegNet and U-Net; (2) a sigma value of 16 is optimal for generating heatmaps for the plant density estimation model; (3) the normalized inverse distance weighted technique is robust for assembling heatmaps. The model test on field-sampled datasets revealed that the model was feasible for estimating the plant density in the field, wherein a higher density level or lower zenith angle would degrade the model performance. This study demonstrates the potential of deep learning algorithms to capture plant density from high-resolution UAV imagery for wheat plants including tillers.
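To make the heatmap idea concrete, the sketch below generates the Gaussian density target that such counting models regress against (using the abstract's sigma of 16) and recovers the count by integrating the map; the network itself is omitted and the rendering details are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_target(points, shape, sigma=16.0):
    """points: (N, 2) array of (row, col) plant centres; shape: image shape."""
    impulses = np.zeros(shape, dtype=float)
    for r, c in np.round(points).astype(int):
        if 0 <= r < shape[0] and 0 <= c < shape[1]:
            impulses[r, c] += 1.0
    return gaussian_filter(impulses, sigma=sigma)   # integral stays close to N

# Count estimate = density_map.sum(); density = count / ground area imaged.
```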
... Previous studies using UAS data have looked into the mapping of biophysical parameters such as leaf area index (LAI) (Verger et al., 2014;Yao et al., 2017), chlorophyll (Jay et al., 2017), biomass Viljanen et al., 2018), plant density (Jin et al., 2017), and canopy height (Song and Wang, 2019;Ziliani et al., 2018) as well as combinations of these parameters (Jay et al., 2019). However, most UAS studies investigate the mapping of plant traits in monocultural crop stands, while multispecies systems such as natural or cultivated permanent grassland ecosystems like in pre-Alpine regions have been studied less often. ...
Full-text available
Article
Grasslands are an important part of pre-Alpine and Alpine landscapes. Despite the economic value and the significant role of grasslands in carbon and nitrogen (N) cycling, spatially explicit information on grassland biomass and quality is rarely available. Remotely sensed data from unmanned aircraft systems (UASs) and satellites might be an option to overcome this gap. Our study aims to investigate the potential of low-cost UAS-based multispectral sensors for estimating above-ground biomass (dry matter, DM) and plant N concentration. In our analysis, we compared two different sensors (Parrot Sequoia, SEQ; MicaSense RedEdge-M, REM), three statistical models (linear model; random forests, RFs; gradient-boosting machines, GBMs), and six predictor sets (i.e. different combinations of raw reflectance, vegetation indices, and canopy height). Canopy height information can be derived from UAS sensors but was not available in our study. Therefore, we tested the added value of this structural information with in situ measured bulk canopy height data. A combined field sampling and flight campaign was conducted in April 2018 at different grassland sites in southern Germany to obtain in situ and the corresponding spectral data. The hyper-parameters of the two machine learning (ML) approaches (RF, GBM) were optimized, and all model setups were run with a 6-fold cross-validation. Linear models were characterized by very low statistical performance measures, thus were not suitable to estimate DM and plant N concentration using UAS data. The non-linear ML algorithms showed an acceptable regression performance for all sensor–predictor set combinations with average (avg; cross-validated, cv) Rcv2 of 0.48, RMSEcv,avg of 53.0 g m2, and rRMSEcv,avg (relative) of 15.9 % for DM and with Rcv,avg2 of 0.40, RMSEcv,avg of 0.48 wt %, and rRMSEcv, avg of 15.2 % for plant N concentration estimation. The optimal combination of sensors, ML algorithms, and predictor sets notably improved the model performance. The best model performance for the estimation of DM (Rcv2=0.67, RMSEcv=41.9 g m2, rRMSEcv=12.6 %) was achieved with an RF model that utilizes all possible predictors and REM sensor data. The best model for plant N concentration was a combination of an RF model with all predictors and SEQ sensor data (Rcv2=0.47, RMSEcv=0.45 wt %, rRMSEcv=14.2 %). DM models with the spectral input of REM performed significantly better than those with SEQ data, while for N concentration models, it was the other way round. The choice of predictors was most influential on model performance, while the effect of the chosen ML algorithm was generally lower. The addition of canopy height to the spectral data in the predictor set significantly improved the DM models. In our study, calibrating the ML algorithm improved the model performance substantially, which shows the importance of this step.
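The evaluation protocol reported above, a Random Forest regressor run in 6-fold cross-validation and scored with RMSE and relative RMSE, can be sketched as follows; the predictor matrix (raw reflectance, vegetation indices, canopy height) is a placeholder.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict, KFold

def cv_scores(X, y, folds=6):
    """Cross-validated R2, RMSE, and relative RMSE for one target (DM or N)."""
    model = RandomForestRegressor(n_estimators=500, random_state=0)
    pred = cross_val_predict(model, X, y,
                             cv=KFold(folds, shuffle=True, random_state=0))
    rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
    rrmse = 100.0 * rmse / float(np.mean(y))        # relative RMSE in percent
    r2 = float(np.corrcoef(pred, y)[0, 1] ** 2)
    return r2, rmse, rrmse
```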
... These relationships are generally nonlinear (Weiss et al., 2019). Machine-learning algorithms are well suited for dealing with nonlinear heteroscedastic problems and can be used for efficient data processing and data mining (Holloway and Mengersen, 2018;Jin et al., 2017b). Machine learning algorithms, such as support vector regression (Maimaitijiang et al., 2020), random forest regression (Aghighi et al., 2018) and artificial neural networks (Maimaitijiang et al., 2020), have already been widely used to analyze agricultural remotely sensed data, with excellent results. ...
Full-text available
Article
The accurate and timely prediction of crop yield at a large scale is important for food security and the development of agricultural policy. An adaptable and robust method for estimating maize yield for the entire territory of China, however, is currently not available. The inherent trade-off between early estimates of yield and the accuracy of yield prediction also remains a confounding issue. To explore these challenges, we employ indicators such as GPP, ET, surface temperature (Ts), LAI, soil properties and maize phenological information with random forest regression (RFR) and gradient boosting decision tree (GBDT) machine learning approaches to provide maize yield estimates within China. The aims were to: (1) evaluate the accuracy of maize yield prediction obtained from multimodal data analysis using machine-learning; (2) identify the optimal period for estimating yield; and (3) determine the spatial robustness and adaptability of the proposed method. The results can be summarized as: (1) RFR estimated maize yield more accurately than GBDT; (2) Ts was the best single indicator for estimating yield, while the combination of GPP, Ts, ET and LAI proved best when multi-indicators were used (R 2 = 0.77 and rRMSE = 16.15% for the RFR); (3) the prediction accuracy was lower with earlier lead time but remained relatively high within at least 24 days before maturity (R 2 > 0.77 and rRMSE <16.92%); and (4) combining machine-learning algorithms with multi-indicators demonstrated a capacity to cope with the spatial heterogeneity. Overall, this study provides a reliable reference for managing agricultural production.
... Since the advent of computer vision and machine learning, agriculture has experienced exponential growth in new technological applications (Lu and Young, 2020;Chang and Lin, 2018). Plant detection techniques for various vegetation types have been developed, including segmentation (Junior et al, 2021;Shirzadifar et al, 2020), specifically using the excess green index (ExG) and Otsu's thresholding method (Li et al, 2019;Shrestha et al, 2004), deep learning with convolutional neural networks (CNN) (Fan et al, 2018;Zamboni et al, 2021;Li et al, 2016, Li et al, 2019), or a combination of these methods (Valente et al, 2020;Neupane et al, 2019;Jin et al, 2017). For corn fields, RGB data for early-stage plant stand validation is derived from manually transported sensors (Carreira et al, 2022;Easton, 1996), ground-based vehicle-mounted sensors (Jiang et al, 2015;Montalvo et al, 2012;Tang and Tian, 2008;Shrestha and Steward, 2003), and UAV-mounted sensors (Kumar et al, 2020). ...
Full-text available
Article
Uniform plant spacing along crop rows is a primary concern in maximising yield in precision agriculture, and research has shown that variation in this spacing uniformity has a detrimental effect on productive potential. This irregularity needs to be evaluated as early and efficiently as possible to facilitate effective decision-making. Traditionally, variation in seedling spacing is sampled manually on site; however, recent technological developments have made it possible to refine, scale and automate this process. Using machine-learning (ML) object detection techniques, plants can be detected in very high-resolution RGB (red-green-blue) imagery acquired by an unmanned aerial vehicle (UAV), and after processing and geometric analysis of the results a measurement of the variability in intra-row plant distances can be obtained. This proposed technique is superior to traditional methods since the sampling can be made over more area in less time, and the results are more representative and objective. The main benefits are speed, accuracy and cost reduction. This work aims to demonstrate the feasibility of automatically assessing sowing quality in any number of images, using ML object detection and the Shapely Python library for geometrical analysis. The prototype model can detect 99.35% of corn plants in test data from the same field, but also detects 1.89% false positives. Our geometric analysis algorithm has been shown to be robust in finding planting row orientations and interplant lines in test cases. The result is a coefficient of variation (CV) calculated per sample image, which can be visualised geographically to support decision-making.
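The geometric step described in this abstract can be approximated with plain NumPy in place of the Shapely-based analysis used in the paper: detected plant centres are projected onto the fitted row direction, consecutive intra-row gaps are taken, and their coefficient of variation summarises spacing uniformity. Input points are assumed to belong to a single row.

```python
import numpy as np

def spacing_cv(points):
    """points: (N, 2) array of plant centres (x, y) along one crop row."""
    pts = np.asarray(points, dtype=float)
    # Fit the row direction as the principal axis of the point cloud.
    centred = pts - pts.mean(axis=0)
    direction = np.linalg.svd(centred, full_matrices=False)[2][0]
    # Project onto the row axis and sort to get along-row positions.
    positions = np.sort(centred @ direction)
    gaps = np.diff(positions)                  # intra-row plant distances
    return float(gaps.std() / gaps.mean())     # CV of plant spacing
```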
... The results of Khozaei et al. (2020) showed that in high plant density the transplanting method is more efficient than direct seeding method in sugar beet production. Plant density is an important agronomic parameter due to its effects on the light interception of leaves during the photosynthesis process and also influencing many aspects of cultural practices including susceptibility to pathogens, water and fertilizer requirements (Dai et al., 2014;Haghverdi et al., 2017;Jin et al., 2017;Khozaei et al., 2020;Thapa et al., 2019). ...
Article
Under water shortage conditions, farmers are faced with risk and uncertainty in decision making. The main objectives of this study are to select a well-managed deficit irrigation regime, the optimum plant density and an appropriate planting method in order to decrease the sugar beet yield risk and water consumption risk. The gross margin (GM) and economic water productivity (EWP) are two main economic criteria, which are affected by yield and water consumption; therefore, they can be considered as risky parameters. This study uses Stochastic Dominance with Respect to a Function (SDRF) and stoplight graphs to evaluate the GM and EWP in three levels of irrigation regimes (100% (I1), 75% (I2), and 50% (I3) of full irrigation), four plant densities (180,000 (P1), 135,000 (P2), 90,000 (P3), and 45,000 (P4) plants ha−1) and two planting methods (direct seeding (D) and transplanting (T)) in a sugar beet cultivation experiment, conducted in a split-split plot arrangement. The results from the SDRF method for GM and EWP analysis show that for lower risk aversion farmers, the I2 irrigation level and for upper-risk aversion farmers, the I1 irrigation level are the most preferred options. However, for both upper and lower risk aversion farmers, the P3 plant density and transplanting method are the most preferred options. By considering the interaction effects of irrigation regimes, plant densities, and planting methods on EWP, the I2P3T treatment is the most preferred treatment, followed by I2P2T and I3P3T, for both lower and upper-risk aversion farmers based on the SDRF method. The results showed that by using the deficit irrigation strategy, at the same level of probability, higher GM and EWP can be achieved as compared to the full irrigation strategy. The stoplight graphs also indicated that the irrigation level of I2, plant density of P3, and the transplanting method can increase the value of EWP to a higher level as compared to other treatments. Therefore, under conditions of water deficit, combining these treatments can be helpful for obtaining the optimum GM and EWP in sugar beet cultivation.
... Moreover, there are also multispectral images from UAVs that serve as a good reference for the determination of seedling emergence, as well as the rise of spring wheat. Recently, some scholars have judged the maturity of wheat, as well as sorghum under drought conditions by UAV-based multispectral indices (Guillen-Climent et al., 2012;Verger et al., 2014;Jin et al., 2017). Hunt et al. (2013) constructed the Green Normalized Difference Vegetation Index (GNDVI) from multispectral images obtained by UAV and inversed the leaf area index of wheat through the vegetation index. ...
Full-text available
Article
To obtain the canopy chlorophyll content of winter wheat in a rapid and non-destructive high-throughput manner, the study was conducted on winter wheat at the Xinjiang Manas Experimental Base in 2021, and multispectral images of two water treatments, normal irrigation (NI) and drought stress (DS), at three key fertility stages (heading, flowering, and filling) of winter wheat were obtained by a DJI P4M unmanned aerial vehicle (UAV). The flag leaf chlorophyll content (CC) data of different genotypes in the field were obtained by a SPAD-502 Plus chlorophyll meter. Firstly, the CC distribution of different genotypes was studied; then, 13 vegetation indices, combined with the Random Forest algorithm and correlation evaluation of CC, and 14 vegetation indices were used for vegetation index preference. Finally, the preferential vegetation indices and nine machine learning algorithms, Ridge regression with cross-validation (RidgeCV), Ridge, Adaboost Regression, Bagging_Regressor, K_Neighbor, Gradient_Boosting_Regressor, Random Forest, Support Vector Machine (SVM), and Least absolute shrinkage and selection operator (Lasso), were used to construct the CC estimation models under two water treatments at three different fertility stages, which were evaluated by the correlation coefficient (r), root mean square error (RMSE) and normalized root mean square error (NRMSE) to select the optimal estimation model. The results showed that the CC values under normal irrigation were higher than those under the water limitation treatment at different fertility stages; several vegetation indices and CC values showed a highly significant correlation, with the highest correlation reaching 0.51; in the prediction model construction of CC values, different models under normal irrigation and water limitation treatment had high estimation accuracy, among which the model with the highest prediction accuracy under normal irrigation was at the heading stage. The highest precision of the model prediction under normal irrigation was in the RidgeCV model (r = 0.63, RMSE = 3.28, NRMSE = 16.2%) and the highest precision of the model prediction under water limitation treatment was in the SVM model (r = 0.63, RMSE = 3.47, NRMSE = 19.2%).
... Yao et al. (2017), for example, were able to estimate wheat leaf area index (LAI) effectively with UAV narrowband multispectral images (400-800 nm spectral region, 10 cm resolution) under varying growth conditions during five critical growth stages, and provide potential technical support for nitrogen fertilization optimization. Satellite data with a wider spectral band and multispectral imagery can help differentiate crop characteristics (i.e., leaves, area) at a stand level (Gnädinger and Schmidhalter, 2017;Jin et al., 2017;Varela et al., 2018), estimate crop yield (Fernandez-Ordonez and Soria-Ruiz, 2017; Yadav and Geli, 2021;Rigden et al., 2022), and assess crop health. ...
Full-text available
Article
Systematic tools and approaches for measuring climate change adaptation at multiple scales of spatial resolution are lacking, limiting measurement of progress toward the adaptation goals of the Paris Agreement. In particular, there is a lack of adaptation measurement or tracking systems that are coherent (measuring adaptation itself), comparable (allowing comparisons across geographies and systems), and comprehensive (are supported by the necessary data). In addition, most adaptation measurement efforts lack an appropriate counterfactual baseline to assess the effectiveness of adaptation-related interventions. To address this, we are developing a “Biomass Climate Adaptation Index” (Biomass CAI) for agricultural systems, where climate adaptation progress across multiple scales can be measured by satellite remote sensing. The Biomass CAI can be used at global, national, landscape and farm-level to remotely monitor agri-biomass productivity associated with adaptation interventions, and to facilitate more tailored “precision adaptation”. The Biomass CAI places focus on decision-support for end-users to ensure that the most effective climate change adaptation investments and interventions can be made in agricultural and food systems.
... A supervised ML algorithm, a support vector machine with radial function and tuned with particle swarm optimization algorithm, was used to count the number of wheat plants present in RGB field imagery (Jin et al., 2017). For counting rapeseed plants at two different stages, two different regression models were developed, and R 2 values of 0.845 and 0.867 were reported. ...
Thesis
Evaluating plant stand count or classifying weeds by manual scouting is time-consuming, laborious, and subject to human errors. Proximal remote sensed imagery used in conjunction with machine vision algorithms can be used for these purposes. Despite its great potential, the rate of using these technologies is still slow due to their subscription cost and data privacy issues. Therefore, in this research, open-source image processing software, ImageJ and Python that support in-house processing, was used to develop algorithms to evaluate stand count, develop spatial distribution maps, and classify the four common weeds of North Dakota. A novel sliding and shifting region of interest method was developed for plant stand count. Handcrafted simple image processing and machine learning approaches with shape features were successfully employed for weed species classification. Such tools and methodologies using open-source platforms can be extended to other scenarios and are expected to be impactful and helpful to stakeholders.
... A supervised ML algorithm, a support vector machine with radial function and tuned with particle swarm optimization algorithm, was used to count the number of wheat plants present in RGB field imagery (Jin et al., 2017). For counting rapeseed plants at two different stages, two different regression models were developed, and R 2 values of 0.845 and 0.867 were reported. ...
Article
Plant stand count helps in estimating the yield and evaluating the planter’s efficiency and seed quality. Traditional methods of counting the plants by manual measurement are time-consuming, laborious, and error-prone. In contrast, the ground-based sensing methods are limited to smaller spaces. High spatial resolution images obtained from unmanned aerial vehicles (UAV) can be used in conjunction with computer vision algorithms to evaluate plant stand count, as it directly influences the yield. In spite of the importance of high-throughput plant stand count in row crop agriculture, no synthesized information in this specific subject matter is available. Therefore, the objective of this paper was to perform a systematic literature review of the current studies that focus on evaluating plant stand count using UAV imagery to provide well-synthesized information, identify research gaps, and provide suitable recommendations. In this study, a comprehensive literature search was performed on three academic databases (Agricola, Web of Science, and Scopus), and a total of 29 articles were found based on search terms and selection criteria for review. From the systematic review, it can be concluded that: an appropriate stage after plant emergence without canopy overlap is necessary for image acquisition; optimal flying height should be selected to balance the field coverage and accuracy; L*a*b* color space can provide better segmentation; hyperspectral camera imagery can provide good discrimination; deep learning with data augmentation and transfer learning models can be used to reduce the computational time and resources; the stand count methodology that is successful with corn and cotton could be extended to other row crops and horticultural crops; and application of direct image processing and use of open-source platforms is required for stakeholder participation. The review will be helpful to the farmers, producers, and researchers in selecting and employing the UAV-based algorithms for evaluating plant stand count.
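One of the review's recommendations, segmenting plants in the L*a*b* colour space, can be sketched as below; thresholding the a* channel with Otsu's method is an assumption on our part rather than a procedure taken from the review.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.filters import threshold_otsu

def lab_plant_mask(rgb):
    """rgb: H x W x 3 uint8 image; returns a boolean vegetation mask."""
    a_channel = rgb2lab(rgb)[..., 1]           # a*: green (-) to red (+)
    return a_channel < threshold_otsu(a_channel)

# Plant stand count can then be taken from connected components of the mask.
```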
... The correlation coefficients in the subplots were slightly higher than those from the whole plots, similar to the VI-based CV approach. For the two flying altitudes, the results were not significantly different, contrasting with the results reported in other studies [73,74]. This could be associated with the speed of UAV during data acquisition. ...
Full-text available
Article
Forage and field peas provide essential nutrients for livestock diets, and high-quality field peas can influence livestock health and reduce greenhouse gas emissions. Above-ground biomass (AGBM) is one of the vital traits and the primary component of yield in forage pea breeding programs. However, a standard method of AGBM measurement is a destructive and labor-intensive process. This study utilized an unmanned aerial vehicle (UAV) equipped with a true-color RGB and a five-band multispectral camera to estimate the AGBM of winter pea in three breeding trials (two seed yields and one cover crop). Three processing techniques—vegetation index (VI), digital surface model (DSM), and 3D reconstruction model from point clouds—were used to extract the digital traits (height and volume) associated with AGBM. The digital traits were compared with the ground reference data (measured plant height and harvested AGBM). The results showed that the canopy volume estimated from the 3D model (alpha shape, α = 1.5) developed from UAV-based RGB imagery’s point clouds provided consistent and high correlation with fresh AGBM (r = 0.78–0.81, p < 0.001) and dry AGBM (r = 0.70–0.81, p < 0.001), compared with other techniques across the three trials. The DSM-based approach (height at 95th percentile) had consistent and high correlation (r = 0.71–0.95, p < 0.001) with canopy height estimation. Using the UAV imagery, the proposed approaches demonstrated the potential for estimating the crop AGBM across winter pea breeding trials.
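A hedged sketch of the DSM-based trait used above: per-plot canopy height taken as the 95th percentile of the DSM minus the ground elevation, then correlated with harvested biomass. Plot extraction and the ground model are assumed inputs.

```python
import numpy as np
from scipy.stats import pearsonr

def plot_height_p95(dsm_plot, ground_plot):
    """95th-percentile canopy height (m) for one plot's raster cells."""
    return float(np.percentile(dsm_plot - ground_plot, 95))

def height_vs_agbm(heights, agbm):
    """Pearson correlation between per-plot heights and measured AGBM."""
    r, p = pearsonr(heights, agbm)
    return r, p
```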
... Monitoring in the latter case will tend to be on the scale of small groups of plants, rather than that of individual plants. Jin et al. [44] showed in a study on the counting of wheat plants by UAV with flights at very low altitudes (less than 7 m) the importance of having sufficiently fine image resolution to properly detect and locate individual plants, with the RMSE of the number of plants per m 2 they obtained being directly related to the resolution of the images used. The remote controller supplied with the P4-RTK and its software (GS RTK) does not allow flights to be carried out at altitudes lower than 25 m, which limits the resolution that can be achieved (7 mm). ...
Full-text available
Article
To implement agricultural practices that are more respectful of the environment, precision agriculture methods for monitoring crop heterogeneity are becoming more and more spatially detailed. The objective of this study was to evaluate the potential of Ultra-High-Resolution UAV images with centimeter GNSS positioning for plant-scale monitoring. A Dji Phantom 4 RTK UAV with a 20 MPixel RGB camera was used, flying at an altitude of 25 m (0.7 cm resolution). This study was conducted on an experimental plot sown with maize. A centimeter-precision Trimble Geo7x GNSS receiver was used for the field measurements. After evaluating the precision of the UAV’s RTK antenna in static mode on the ground, the positions of 17 artificial targets and 70 maize plants were measured during a series of flights in different RTK modes. Agisoft Metashape software was used. The error in position of the UAV RTK antenna in static mode on the ground was less than one centimeter, in terms of both planimetry and elevation. The horizontal position error measured in flight on the 17 targets was less than 1.5 cm, while it was 2.9 cm in terms of elevation. Finally, according to the RTK modes, at least 81% of the corn plants were localized to within 5 cm of their position, and 95% to within 10 cm.
... Recent studies have shown the usage of high-resolution red, green, and blue (RGB) UAV imagery for evaluating emergence in corn (Shirzadifar et al., 2020;Shuai et al., 2019;Vong, Conway, et al., 2021a), wheat (Liu et al., 2017), cotton (Feng et al., 2020), and potato. The UAV imagery data can be used to evaluate stand count or plant density (Feng et al., 2020;Jin et al., 2017;Shirzadifar et al., 2020;Shuai et al., 2019;Vong, Conway, et al., 2021a), spacing uniformity (Shirzadifar et al., 2020), emergence rate, coefficient of variation (CV) of an emergence region, and canopy cover (Feng et al., 2020;Li et al., 2019), with relatively high coefficients of determination (R 2 > 0.80). In addition, UAV imagery has been used to map and visualize crop emergence and growth status in a large field (Shirzadifar et al., 2020;Sona et al., 2016;Torres-Sánchez et al., 2014). ...
Article
Assessment of corn (Zea Mays L.) emergence uniformity is important to evaluate crop yield potential. Previous studies have shown the potential of unmanned aerial vehicle (UAV) imagery and deep learning (DL) models in estimating early stand count and plant spacing uniformity, but few have extended further to field-scale mapping. Additionally, estimation of plant emergence date using UAV imagery in field-scale studies has not been achieved. This study aimed to estimate and map corn emergence uniformity using UAV imagery and DL modeling. Corn emergence uniformity was quantified with plant density, plant spacing standard deviation (PSstd), and mean days to imaging after emergence (DAEmean). Corn was planted at four depths (3.8, 5.1, 6.4, and 7.6 cm). A UAV imaging system equipped with a red, green, and blue (RGB) camera was used to acquire images at 10 m above ground level at 32 days after planting (20 days after emergence at V2-V4 growth stage). A pre-trained convolutional neural network, ResNet18, was used to estimate the three emergence parameters. Results showed the estimation accuracies in the testing dataset for plant density, PSstd, and DAEmean were 0.97, 0.73, and 0.95, respectively. The developed method had higher accuracy and lower root-mean-square-error for plant density and DAEmean, indicating better performance than previous studies. A case study was conducted to assess the emergence uniformity of corn at different planting depths using the developed estimation models at the field scale. From this, field maps were produced. Results showed that the average plant density and DAEmean decreased and the average PSstd increased with increasing depths, indicating deeper planting depths caused less and later emergence and less spatial uniformity in this field. These emergence uniformity field maps could be used for future field-scale agronomic studies on temporal and spatial crop emergence uniformity and for making planting decisions in commercial production.
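The estimation model described above, a ResNet18 backbone with its last layer replaced by a three-output regression head for plant density, PSstd, and DAEmean, can be sketched as follows; the weight initialisation and training loop are simplified assumptions rather than the study's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_emergence_model():
    net = models.resnet18(weights=None)        # the study used a pre-trained net
    net.fc = nn.Linear(net.fc.in_features, 3)  # density, PSstd, DAEmean
    return net

model = build_emergence_model()
criterion = nn.MSELoss()                       # regression loss on the 3 targets
dummy = torch.randn(2, 3, 224, 224)            # batch of image tiles
print(model(dummy).shape)                      # torch.Size([2, 3])
```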
... However, satellite remote sensing images can be affected by temporal and spatial resolution as well as cloud cover, so they are usually not enough to accurately estimate crop yields on the field scale. In contrast, unmanned aerial vehicles (UAVs) have quickly become ideal tools for precise crop monitoring due to their flexibility and low-altitude flight capability [7,8]. UAV-based low-altitude remote sensing platform can obtain high spatial-temporal resolution images free from atmospheric interference [9][10][11]. ...
Full-text available
Article
Background China has a unique cotton planting pattern. Cotton is densely planted in alternating wide and narrow rows to increase yield in Xinjiang, China, causing the difficulty in the accurate estimation of cotton yield using remote sensing in such field with branches occluded and overlapped. Results In this study, unmanned aerial vehicle (UAV) imaging and deep convolutional neural networks (DCNN) were used to estimate densely planted cotton yield. Images of cotton fields were acquired by the UAV at an altitude of 5 m. Cotton bolls were manually harvested and weighed afterwards. Then, a modified DCNN model (CD-SegNet) was constructed for pixel-level segmentation of cotton boll images by reorganizing the encoder-decoder and adding dilated convolutions. Besides, linear regression analysis was employed to build up the relationship between cotton boll pixels ratio and cotton yield. Finally, the estimated yield for four cotton fields were verified by weighing harvested cotton. The results showed that CD-SegNet outperformed the other tested models, including SegNet, support vector machine (SVM), and random forest (RF). The average error in yield estimates of the cotton fields was as low as 6.2%. Conclusions Overall, the estimation of densely planted cotton yields based on low-altitude UAV imaging is feasible. This study provides a methodological reference for cotton yield estimation in China.
... UAV images are also applied in precision agriculture due to high resolution and low cost. For example, Jin et al. (2017) introduced a method to estimate wheat density using UAV images obtained at very low altitudes. Pádua et al. (2018) conducted multitemporal monitoring of vineyard vegetation using UAVs equipped with consumer-grade RGB sensors. ...
Full-text available
Article
Estimating the crop leaf area index (LAI) accurately is very critical in agricultural remote sensing, especially in monitoring crop growth and yield prediction. The development of unmanned aerial vehicles (UAVs) has been significant in recent years and has been extensively applied in agricultural remote sensing (RS). The vegetation index (VI), which reflects spectral information, is a commonly used RS method for estimating LAI. Texture features can reflect the differences in the canopy structure of rice at different growth stages. In this research, a method was developed to improve the accuracy of rice LAI estimation during the whole growing season by combining texture information based on wavelet transform and spectral information derived from the VI. During the whole growth period, we obtained UAV images of two study areas using a 12-band Mini-MCA system and performed corresponding ground measurements. Several VI values were calculated, and the texture analysis was carried out. New indices were constructed by mathematically combining the wavelet texture and spectral information. Compared with the corresponding VIs, the new indices reduced the saturation effect and were less sensitive to the emergence of panicles. The determination coefficient (R2) increased for most VIs used in this study throughout the whole growth period. The results indicated that the estimation accuracy of LAI by combining spectral information and texture information was higher than that of VIs. The method proposed in this study used the spectral and wavelet texture features extracted from UAV images to establish a model of the whole growth period of rice, which was easy to operate and had great potential for large-scale auxiliary rice breeding and field management research.
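To illustrate the wavelet-texture idea, the sketch below takes a single-level 2-D discrete wavelet transform of one band and uses the detail-coefficient energy as a texture feature that can be combined with a vegetation index; the 'haar' wavelet and the multiplicative combination are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
import pywt

def wavelet_texture_energy(band):
    """band: 2-D array (e.g., one spectral band clipped to a plot)."""
    _, (cH, cV, cD) = pywt.dwt2(band.astype(float), "haar")
    return float(np.mean(cH**2 + cV**2 + cD**2))   # detail-coefficient energy

def texture_adjusted_index(vi_value, texture_energy):
    # One simple way to merge spectral and textural information.
    return vi_value * np.log1p(texture_energy)
```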
... UAVs have greater maneuverability and timeliness than satellite platforms, and can effectively compensate for the low temporal and spatial resolution of satellite remote sensing data in precision agriculture (Deng et al., 2018;Yue et al., 2019;Jin et al., 2020). Owing to the high resolution of UAV images, UAV data have been widely used to monitor crop phenotypic traits, including emergence rate (Jin et al., 2017), leaf area index (LAI) , aboveground biomass (Che et al., 2020), plant height (Liu et al., 2021), chlorophyll content , and nitrogen content . ...
Full-text available
Article
The estimation of the water status of maize is important for evaluating crop growth and conducting precision irrigation. The development of unmanned aerial vehicles (UAVs) equipped with sensor technologies provides high-quality data for estimating maize water status. Only a few studies have been conducted on the estimation of maize equivalent water thickness (EWT) and fuel moisture content (FMC) using UAV hyperspectral images. This study aimed to estimate the leaf and canopy water status of maize inbred lines using UAV digital and hyperspectral data. The leaf area index (LAI) is required to obtain the canopy water indicators, canopy equivalent water thickness (EWTc) and canopy fuel moisture content (FMCc). However, obtaining the LAI from remote sensing images requires the support of samples or prior knowledge. The LAI is positively correlated with canopy coverage (CC), which can be extracted accurately from UAV images. Therefore, this study aimed to use CC instead of LAI to construct and test new canopy water indicators for EWTc and FMCc, in order to improve the convenience of UAV imaging technology in monitoring maize water status. The results showed that, after the introduction of CC, the two revised indicators, revised canopy equivalent water thickness (r-EWTc) and revised canopy fuel moisture content (r-FMCc), were both sensitive to the difference vegetation index (DVI) derived from UAV hyperspectral images. The r-FMCc was the most effective of the six water indicators used in this study, and its Pearson correlation coefficient (r) with DVI was 0.93. The results indicate that the CC extracted directly from UAV digital images is suitable to replace LAI and helps improve the data availability for estimating canopy water status. UAV hyperspectral data can accurately estimate the water status of maize inbred lines, which is helpful for further application of UAV data in breeding.
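Two of the computational ingredients mentioned above, canopy coverage (CC) extracted from image pixels and the difference vegetation index (DVI) from spectral bands, can be sketched as follows. The vegetation masks, band values, and water-status numbers are hypothetical, and the revised indicators themselves are not reproduced here.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical per-plot data: vegetation masks, NIR/red reflectance, and a measured water indicator.
n_plots = 30
veg_masks = [rng.random((100, 100)) > 0.4 for _ in range(n_plots)]   # True = vegetation pixel
nir = rng.uniform(0.3, 0.6, n_plots)
red = rng.uniform(0.05, 0.2, n_plots)
water_indicator = rng.uniform(0.1, 0.5, n_plots)                     # e.g., a canopy water proxy

# Canopy coverage: fraction of vegetation pixels in each plot image.
cc = np.array([m.mean() for m in veg_masks])

# Difference vegetation index per plot.
dvi = nir - red

# Correlate DVI with a CC-weighted water indicator (illustration of the kind of test reported).
r, p = pearsonr(dvi, water_indicator * cc)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```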
... Rice is one of the most important cereal crops in the world, especially in Asia. The yield of cereal crops is related to the number of panicles per square meter, grains per panicle and grain size (Slafer et al., 2014;Lu et al., 2015;Ferrante et al., 2017;Jin et al., 2017). Thus, counting panicles is an appropriate approach for predicting rice yield. ...
Full-text available
Article
Panicle number is directly related to rice yield, so panicle detection and counting has always been one of the most important research topics. Panicle counting is a challenging task due to many factors such as high density, high occlusion, and large variations in size, shape, and posture. Deep learning provides state-of-the-art performance in object detection and counting. Generally, large images need to be resized to fit in video memory; however, small panicles would be missed if the original field rice image is extremely large. In this paper, we proposed a rice panicle detection and counting method based on deep learning that was especially designed for detecting rice panicles in field images of very large size. Different object detectors were compared, and YOLOv5 was selected with a MAPE of 3.44% and an accuracy of 92.77%. Specifically, we proposed a new method for removing repeated detections and showed that it outperformed existing NMS methods. The proposed method proved to be robust and accurate for counting panicles in field rice images with different illumination, rice accessions, and image input sizes. The proposed method also performed well on UAV images. In addition, an open-access and user-friendly web portal was developed for rice researchers to use the proposed method conveniently.
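When a large field image is split into overlapping tiles for detection, the same panicle can be detected twice. A common baseline (not the authors' proposed duplicate-removal method) is to merge detections by IoU after mapping boxes back to global image coordinates, as in this sketch with hypothetical boxes.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def merge_tile_detections(boxes, scores, iou_thr=0.5):
    """Greedy IoU suppression of duplicate boxes coming from overlapping tiles."""
    order = np.argsort(scores)[::-1]
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in kept):
            kept.append(i)
    return [boxes[i] for i in kept]

# Hypothetical detections in global coordinates from two overlapping tiles.
boxes = [(10, 10, 50, 60), (12, 11, 51, 62), (200, 180, 240, 230)]
scores = [0.9, 0.85, 0.8]
print(f"{len(merge_tile_detections(boxes, scores))} panicles after duplicate removal")
```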
... UAVs have shown great potential in the fields of mobile communications, Internet of Things (IoT), agriculture, environmental monitoring, urban transportation and disaster relief. UAVs are applied in predicting crop yield [1], estimating wheat plant density [2], and monitoring aboveground biomass over large areas in agriculture [3]. The application of UAVs in environmental monitoring provides great convenience in field surveying and mapping [4]. ...
Full-text available
Article
Many new challenges are encountered in public security because of the rapid development of unmanned aerial vehicle (UAV) technology. In order to meet the challenges brought by UAVs, it is necessary to realize timely and accurate UAV detection. In this paper, an acoustic UAV detection method based on a blind source separation (BSS) framework is introduced to solve the UAV sound detection problem under multi-source interference. In the proposed framework, source number estimation is used to determine the type of BSS problem, and three methods are applied to separate the UAV sound from the mixed signal in the different BSS situations. In the experiments, a variety of machine learning algorithms are used to detect UAVs. The robustness of the proposed method is tested on different UAVs and different sound features. Experimental results show that the UAV sound detection method based on the BSS framework realizes effective detection of UAVs in the different tests and achieves excellent robustness. Compared with the mixed and filtered signals, the detection rate of our scheme is more than 90%, whether in the overdetermined, determined, or underdetermined case.
... The relationship between VIs and canopy status was found to be strongly influenced by the phenological stage of maize plants (Burkart et al., 2018). During the initial stage when the maize leaves did not cover the whole plots, the VI values calculated from UAV images may be influenced by the background such as soil and other disturbances even though methods had been adopted to eliminate the impacts (Meyer and Neto, 2008;Jin et al., 2017;Liu et al., 2017). Therefore, it was suggested to integrate the VI acquired at important growth stages of maize which can reflect more precisely the temporal dynamic changes of growth conditions and achieve the highest precision for yield prediction (Guo et al., 2020). ...
Full-text available
Article
Recovery of biobased fertilizers derived from manure to replace synthetic fertilizers is considered a key strategy to close the nutrient loop for a more sustainable agricultural system. This study evaluated the nitrogen (N) fertilizer value of five biobased fertilizers [i.e., raw pig manure (PM), digestate (DIG), the liquid fraction of digestate (LFD), evaporator concentrate (EVA) and ammonia water (AW)] recovered from an integrated anaerobic digestion–centrifugation–evaporation process. The shoot and root growth of maize (Zea mays L.) under biobased fertilization was compared with the application of synthetic mineral N fertilizer, i.e., calcium ammonium nitrate (CAN). Non-invasive technologies, i.e., minirhizotron and unmanned aerial vehicle (UAV) based spectral sensing, were integrated with classic plant and soil sampling to enhance the in-season monitoring of the crop and soil status. Results showed no significant difference in canopy status, biomass yield or crop N uptake under biobased fertilization as compared to CAN, except a lower crop N uptake in the DIG treatment. The total root length detected by the minirhizotron revealed a higher early-stage N availability in the rooting zone under biobased fertilization as compared to CAN, probably because the liquid form of N supplied by biobased fertilizers has higher mobility in soil under dry conditions than the solid form of CAN. Given a high soil N supply (on average 70–232 kg ha−1) in the later part of the growing season of this study, the higher N availability in the early growing season seemed to promote a luxury N uptake in maize plants, resulting in significantly (p < 0.05) higher N concentrations in the harvested biomass of PM, LFD and AW than in the no-N fertilized control. Therefore, the biobased fertilizers PM, LFD, EVA and AW have a high potential as substitutes for synthetic mineral N fertilizers, with the additional value of providing more easily accessible N for crops during dry seasons, especially under global warming, which is expected to cause more frequent droughts worldwide.
... In this work, features from the segmented area were correlated with the actual plant number. Jin et al. (2017) separated the green pixels from the background using the excess green minus excess red (ExG − ExR) vegetation index (Meyer and Neto, 2008), and estimated the number of wheat plants using a support vector machine (SVM) tuned by a particle swarm optimization algorithm. Zhao et al. (2018) segmented rapeseed plants from vegetation index images using the classical Otsu thresholding method and extracted shape features to develop a multiple stepwise regression model. ...
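The ExG − ExR segmentation mentioned in this snippet can be written down directly; the sketch below follows the standard Meyer and Neto formulation on normalized chromatic coordinates (ExG = 2g − r − b, ExR = 1.4r − g, vegetation where ExG − ExR > 0), applied to a placeholder image patch.

```python
import numpy as np

def exg_minus_exr_mask(rgb):
    """Vegetation mask via excess green minus excess red (ExG - ExR > 0)."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-9
    r, g, b = (rgb[..., i] / total for i in range(3))   # chromatic coordinates
    exg = 2.0 * g - r - b
    exr = 1.4 * r - g
    return (exg - exr) > 0

# Hypothetical 8-bit RGB image patch.
patch = np.random.randint(0, 256, size=(120, 160, 3), dtype=np.uint8)
mask = exg_minus_exr_mask(patch)
print(f"green fraction: {mask.mean():.2%}")
```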
Article
Accurate plant density information is important for crop yield and quality. In general, plant density has to be estimated by humans either in the field or with accessory equipment, which is time-consuming and inaccurate. In this work, a multi-object tracking method based on a tracking-by-detection strategy was developed to automatically count cotton seedlings. Videos were collected 0.5 m above cotton seedlings and analyzed to train an object detection model and evaluate counting accuracy with a separate dataset (TAMU2015-ID). An advanced anchor-free object detection model was developed using CenterNet to detect cotton seedlings and extract their identity embeddings. The localization and identity information were fused based on Deep SORT for data association. The object detection model outperformed a Faster R-CNN model with F1 scores of 0.982 (IoU 0.5) and 0.937 (IoU 0.8), and average precisions of 0.9901 (IoU 0.5) and 0.8998 (IoU 0.8). The counting results were fitted to the ground truth with an R² of 0.967 and an RMSE of 0.394. We evaluated the method on TAMU2015-ID and obtained an R² of 0.99 and an RMSE of 0.8.
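The counting principle in a tracking-by-detection pipeline is that each seedling keeps one identity across frames, so the count is simply the number of distinct track IDs. The sketch below uses a far simpler greedy IoU association than Deep SORT, purely to illustrate that idea; the detections are hypothetical.

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def count_by_tracking(frames, iou_thr=0.3):
    """Assign track IDs across frames by greedy IoU matching and count unique IDs."""
    next_id, tracks = 0, {}          # track id -> last seen box
    for boxes in frames:
        updated = {}
        for box in boxes:
            # Match to the closest existing track, otherwise start a new one.
            best = max(tracks, key=lambda t: iou(tracks[t], box), default=None)
            if best is not None and iou(tracks[best], box) >= iou_thr:
                updated[best] = box
                tracks.pop(best)
            else:
                updated[next_id] = box
                next_id += 1
        tracks = {**tracks, **updated}
    return next_id

# Hypothetical detections for three consecutive video frames (second seedling appears later).
frames = [[(10, 10, 30, 40)],
          [(12, 12, 32, 42), (100, 50, 120, 90)],
          [(14, 14, 34, 44), (101, 51, 121, 91)]]
print(f"seedling count: {count_by_tracking(frames)}")
```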
... The segmentation accuracy of the U-Net with Vgg16 model decreased with the coarsening of image resolution. This finding is consistent with previous studies (Jin et al., 2017;Madec et al., 2019). The degradation of segmentation accuracy could be explained by the loss of textural information in coarser resolution images. ...
Full-text available
Research
The dynamics of the maize tassel area reflect the growth and development of maize plants, and monitoring them facilitates crop breeding and management. At present, the monitoring of maize tassels mainly depends on manual work, which is very labor intensive and may be biased by human errors. The U-Net model has proved effective for crop segmentation using RGB imagery. However, there has not been a systematic study testing how the accuracy of the U-Net model varies when applied to different maize varieties, at different tasseling stages, and on images of different spatial resolutions. Moreover, the capability of the U-Net model for monitoring the dynamics of tassel area has not been explored. In this study, the potential of the U-Net model to provide an accurate segmentation of tassels in complex situations from near-ground RGB images and UAV images was comprehensively studied. The results showed that the segmentation accuracy of the U-Net model with Vgg16 as the feature extraction network (IoU = 0.71) for tassels over the whole tasseling period was better than that of the U-Net model with MobileNet (IoU = 0.63). The U-Net model with Vgg16 as the feature extraction network maintained a good segmentation accuracy for maize tassels at different tasseling stages (IoU = 0.63-0.76), for different varieties (IoU = 0.65-0.79), and at different resolutions (IoU = 0.57-0.71), which proved the robustness of the model. Changes in the segmented area of tassels from images were basically consistent with the trends observed in the actual tassel area measured manually. UAV RGB images with a resolution of 3.06 mm showed a good segmentation accuracy (IoU = 0.54). In summary, the results showed that the U-Net model has a good segmentation accuracy for maize tassels under various complex situations. This study provides an effective method to monitor maize tassel status in crop phenotyping experiments in the future.
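The IoU values quoted above are a straightforward pixel-level metric; for readers unfamiliar with it, a minimal computation between a predicted and a reference tassel mask looks like this (the masks are placeholders).

```python
import numpy as np

def mask_iou(pred, truth):
    """Pixel-level intersection-over-union of two boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

# Hypothetical predicted vs. manually labelled tassel masks.
truth = np.zeros((100, 100), dtype=bool)
truth[20:60, 30:70] = True
pred = np.zeros_like(truth)
pred[25:65, 28:68] = True
print(f"IoU = {mask_iou(pred, truth):.2f}")
```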
... Unmanned aerial vehicles (UAVs) equipped with different types of sensors have become a promising tool in various fields of agricultural research, e.g., biomass and grain yield estimation, planting density assessment, pest and disease detection, and crop stress monitoring [25][26][27][28][29][30]. Among them, thermal infrared cameras onboard UAVs have been increasingly used for crop drought stress detection [28,31,32]. ...
Full-text available
Article
Deficit irrigation is a common approach in water-scarce regions to balance productivity and water use, whereas drought stress still occurs to various extents, leading to reduced physiological performance and a decrease in yield. Therefore, seeking a rapid and reliable method to identify wheat varieties with drought resistance can help reduce yield loss under water deficit. In this study, we compared ten wheat varieties under three deficit irrigation systems (W0, no irrigation during the growing season; W1, irrigation at jointing; W2, irrigation at jointing and anthesis). UAV thermal imagery, plant physiological traits [leaf area index (LAI), SPAD, photosynthesis (Pn), transpiration (Tr), stomatal conductance (Cn)], biomass and yield were acquired at different growth stages. Wheat drought resistance performance was evaluated using the canopy temperature extracted from UAV thermal imagery (CT-UAV) in combination with hierarchical cluster analysis (HCA). The CT-UAV of the W0 and W1 treatments was significantly higher than that of the W2 treatment, with ranges of 24.8–33.3 °C, 24.3–31.6 °C, and 24.1–28.9 °C in W0, W1 and W2, respectively. We found negative correlations between CT-UAV and LAI, SPAD, Pn, Tr, Cn and biomass under the W0 (R2 = 0.41–0.79) and W1 treatments (R2 = 0.22–0.72), but little correlation under the W2 treatment. Under the deficit irrigation treatments (W0 and W1), UAV thermal imagery was less effective before the grain-filling stage in evaluating drought resistance. This study demonstrates the potential of ensuring yield and saving irrigation water by identifying suitable wheat varieties for different water-scarce irrigation scenarios.
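The grouping step described above combines canopy temperature with hierarchical cluster analysis; a minimal sketch of that clustering, using SciPy and made-up per-variety temperatures (the variety names and values are hypothetical), is shown below.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical mean canopy temperatures (deg C) of ten varieties under a deficit treatment.
varieties = [f"V{i + 1}" for i in range(10)]
ct_uav = np.array([27.1, 29.4, 25.8, 30.2, 26.5, 28.9, 31.0, 26.0, 29.8, 27.7]).reshape(-1, 1)

# Ward-linkage hierarchical clustering into three resistance groups.
Z = linkage(ct_uav, method="ward")
groups = fcluster(Z, t=3, criterion="maxclust")
for v, g in zip(varieties, groups):
    print(f"{v}: group {g}")
```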
... multispectral/hyperspectral cameras, and other sensors. Disease diagnosis (Sugiura et al., 2016), yield estimation (Chang et al., 2021), growth state monitoring and evaluation (Jin et al., 2017;Hu et al., 2018), and analysis of critical phenotypic features have been performed using this type of platform (Ding et al., 2019). Trevisan et al. (2020) used a 3D point cloud derived from UAV images to develop a method for detecting sorghum spikes. ...
Full-text available
Article
The genetic information and functional properties of plants have been further identified with the completion of whole-genome sequencing of numerous crop species and the rapid development of high-throughput phenotyping technologies, laying a suitable foundation for advanced precision agriculture and enhanced genetic gains. Collecting phenotypic data from dicotyledonous crops in the field has been identified as a key step in the collection of large-scale crop phenotypic data. Dicotyledonous plants account for four fifths of all angiosperm species and play a critical role in agriculture; however, their morphology is complex, and an abundance of dicot phenotypic information is available, which is critical for the analysis of high-throughput phenotypic data in the field. As a result, the focus of this paper is on the major advancements in ground-based, air-based, and space-based field phenotyping platforms over the last few decades and the research progress in the high-throughput phenotyping of dicotyledonous field crop plants in terms of morphological indicators, physiological and biochemical indicators, biotic/abiotic stress indicators, and yield indicators. Finally, the future development of dicot phenotyping in the field is explored from the perspectives of identifying new unified phenotypic criteria, developing a high-performance infrastructure platform, creating a phenotypic big data knowledge map, and merging the data with those of multiomic techniques.
... Early yield estimation is more beneficial to producers; for example, it allows producers to adjust fertilization and watering according to specific field conditions, thus obtaining the maximum harvest at minimum cost. Owing to its high nutritional value and ease of cultivation, winter wheat has become one of the most widely consumed food crops in the world [5,6]. Therefore, studying the yield of winter wheat has become an important issue. ...
Full-text available
Article
Crop yields are important for food security and people's living standards, and it is therefore very important to predict yield in a timely manner. This study used different vegetation indices and red-edge parameters calculated from the canopy reflectance obtained from near-surface hyperspectral data and UAV hyperspectral data, together with partial least squares regression (PLSR) and artificial neural network (ANN) methods, to estimate the yield of winter wheat at different growth stages. For both types of hyperspectral remote sensing data, the yield was estimated using the vegetation indices alone and using a combination of vegetation indices and red-edge parameters as the modeling independent variables, with PLSR and ANN regression, respectively. The results showed that, for the same data source, the optimal vegetation index for estimating yield was the same in all of the studied growth stages, whereas the optimal red-edge parameters differed between growth stages. Compared with using only the vegetation indices as modeling factors, the combination of vegetation indices and red-edge parameters gave superior estimation results. Additionally, the accuracy of yield estimation was improved by using the PLSR and ANN methods, with the model constructed using the PLSR method having the better prediction performance. Moreover, the yield prediction model obtained using the near-surface hyperspectral sensors had a better fit and higher accuracy than the model obtained using the UAV hyperspectral remote sensing data (these results were obtained under the specific growth stressors of N and water supply). This study shows that a combination of vegetation indices and red-edge parameters achieved an improved yield estimation compared to vegetation indices alone. In the future, the selection of suitable sensors and methods needs to be considered when constructing models to estimate crop yield.
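The modelling step described above, regressing yield on vegetation indices plus red-edge parameters with PLSR, can be sketched with scikit-learn. The features and yields below are random placeholders, not the study's data, and the number of PLS components is an arbitrary choice.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical plot-level predictors: a few vegetation indices plus red-edge parameters.
n_plots = 80
vis = rng.random((n_plots, 4))          # e.g., NDVI-like indices
red_edge = rng.random((n_plots, 2))     # e.g., red-edge position / amplitude
X = np.hstack([vis, red_edge])
y = 4.0 + X @ rng.random(6) + rng.normal(0, 0.2, n_plots)   # synthetic yield (t/ha)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()
print(f"R2 = {r2_score(y_te, y_hat):.2f}, RMSE = {mean_squared_error(y_te, y_hat) ** 0.5:.2f}")
```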
... In recent years, unmanned aerial vehicle (UAV) imaging technology has provided an effective means to obtain crop phenotypic traits at the plot scale [9,10]. UAV imaging technology has been widely used in research on phenotypic trait estimation for crop breeding, including emergence rate [11], LAI [12,13], plant height [14], biomass [15], and lodging [16]. ...
Full-text available
Article
High-throughput estimation of phenotypic traits from UAV (unmanned aerial vehicle) images is helpful to improve the screening efficiency of breeding maize. Accurately estimating the phenotypic traits of breeding maize at the plot scale helps to promote gene mining for specific traits and provides a guarantee for accelerating the breeding of superior varieties. Constructing an efficient and accurate estimation model is the key to the application of UAV-based multi-sensor data. This study aims to apply ensemble learning models to improve the feasibility and accuracy of estimating maize phenotypic traits using UAV-based red-green-blue (RGB) and multispectral sensors. UAV images were obtained at four growth stages. The reflectance of the visible light bands, canopy coverage, plant height (PH), and texture information were extracted from the RGB images, and vegetation indices were calculated from the multispectral images. We compared and analyzed the estimation accuracy of single-type features and multiple features for the LAI (leaf area index), fresh weight (FW), and dry weight (DW) of maize. The basic models included ridge regression (RR), support vector machine (SVM), random forest (RF), Gaussian process (GP), and K-nearest neighbors (K-NN). The ensemble learning models included stacking and Bayesian model averaging (BMA). The results showed that the ensemble learning models improved the accuracy and stability of maize phenotypic trait estimation. Among the features extracted from the UAV RGB images, the highest accuracy was obtained by the combination of spectral, structural, and texture features, and the most accurate model was constructed using all features from both sensors. The estimation accuracies of the ensemble learning models, including stacking and BMA, were higher than those of the basic models. The coefficients of determination (R 2) of the optimal validation results were 0.852, 0.888, and 0.929 for LAI, FW, and DW, respectively. Therefore, the combination of UAV-based multisource data and ensemble learning models can accurately estimate the phenotypic traits of breeding maize at the plot scale.
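A stacking ensemble of the basic learners listed above can be assembled directly in scikit-learn; the sketch below uses random placeholder features and LAI values, an arbitrary meta-learner, and omits the BMA variant.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.random((120, 10))                             # hypothetical spectral/structural/texture features
y = X[:, :3].sum(axis=1) + rng.normal(0, 0.1, 120)    # synthetic LAI

base_learners = [
    ("rr", Ridge()),
    ("svm", SVR()),
    ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
    ("gp", GaussianProcessRegressor()),
    ("knn", KNeighborsRegressor()),
]
stack = StackingRegressor(estimators=base_learners, final_estimator=Ridge())
scores = cross_val_score(stack, X, y, cv=5, scoring="r2")
print(f"stacked R2 = {scores.mean():.2f} +/- {scores.std():.2f}")
```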
... Multi-spectral digital images obtained by UAVs are used for evaluating vegetation indices (VIs) and multi-temporal VIs to predict grain yield in wheat [152]. The indices, including the normalized difference vegetation index (NDVI), spectral vegetation index (SVI), and green area index (GAI), are evaluated in wheat crops to predict grain yield [153], monitor breeding [154], detect plant stress caused by yellow rust disease [155], and quantify plant density [156]. The usage of pesticides in agriculture is crucial for crop yields and the environment, and efforts have been made to develop and evaluate an algorithm to self-adjust UAV routes during chemical spraying in a crop field to reduce the waste of pesticides and fertilizers [157]. ...
Full-text available
Article
Smart farming is a development that emphasizes the use of information and communication technology in machinery, equipment, and sensors within network-based, high-tech farm supervision cycles. Innovative technologies, the Internet of Things (IoT), and cloud computing are anticipated to inspire growth and initiate the use of robots and artificial intelligence in farming. Such ground-breaking changes are disrupting current agricultural approaches, while also presenting a range of challenges. This paper investigates the tools and equipment used in applications of wireless sensors in IoT agriculture, and the anticipated challenges faced when merging technology with conventional farming activities. Furthermore, this technical knowledge is helpful to growers throughout the crop cycle from sowing to harvest; applications in both packing and transport are also investigated.
... Such autocalibration techniques include different optimization algorithms. Examples of such combinations are AquaCrop-PSO (Jin et al., 2017a), WOFOST-PSO (Bai et al., 2019), DSSAT with generalized likelihood uncertainty estimation (GLUE) (DSSAT-GLUE), stochastic crop modeling with a Bayesian Monte Carlo approach (Marin et al., 2017), and a genetic algorithm (GA) combination (DSSAT-HYDRUS 2D-GA) (Roy et al., 2019). ...
Article
Under a changing climate and burgeoning food production demands, climate-smart agriculture (CSA) practices are the need of the hour. Physically based crop models have been designed that require large sets of input variables, besides being time-consuming, tedious, and not user-friendly in the current situation. There has been rapid advancement in modeling approaches for agricultural practices, but there is still a shortage of comprehensive information on recent technological advancements in modeling applications for CSA and their prospects. This paper reviews, critically assesses, and discusses the present state-of-the-art modeling technologies related to CSA. Special emphasis is placed on highlighting current research trends in the different crop simulation models and their CSA applications. The assessment and analysis of the combined implications of crop simulation and hydrological models form another vital area of our review. Subsequently, the suitability and future scope of AI in crop simulation models are thoroughly assessed. Finally, the ways to address future challenges for CSA through models and AI-based approaches are evaluated. We have selected and critically reviewed the literature to deliberate on the various aspects of modeling and their impact on CSA. Overall, we attempt to distinctly bring out the different latest and advanced technologies and their potential roles in modeling exercises for CSA. The review also collates and presents new and innovative modeling approaches for CSA with technically feasible solutions. It covers the simulation-optimization framework for coupling hydrological and climate models with crop models, which presently has the utmost importance for CSA, and concurrently highlights the overarching significance of different models and suggests refined solutions that enable better crop and environment estimation, field management, and decision-making.
... Fernandez-Gallego et al. (2019) used zenithal/nadir thermal images to count the number of wheat spikes. Jin et al. (2017) adopted unmanned aerial vehicles (UAVs) to obtain high-resolution imagery for estimating wheat plant density. In these traditional machine learning studies, image texture, geometry, and color intensity are primarily used to discriminate spikes. ...
Full-text available
Article
Wheat spike detection has important research significance for production estimation and crop field management. With the development of deep-learning-based algorithms, researchers tend to solve the detection task with convolutional neural networks (CNNs). However, traditional CNNs are equipped with the inductive biases of locality and scale-invariance, which makes it hard for them to extract global and long-range dependencies. In this paper, we propose a Transformer-based network named Multi-Window Swin Transformer (MW-Swin Transformer). Technically, MW-Swin Transformer introduces the ability of a feature pyramid network to extract multi-scale features and inherits the characteristic of the Swin Transformer of performing the self-attention mechanism with a window strategy. Moreover, bounding box regression is a crucial step in detection. We propose a Wheat Intersection over Union loss that incorporates the Euclidean distance, area overlap, and aspect ratio, thereby leading to better detection accuracy. We merge the proposed network and regression loss into a popular detection architecture, fully convolutional one-stage object detection, and name the unified model WheatFormer. Finally, we construct a wheat spike detection dataset (WSD-2022) to evaluate the performance of the proposed methods. The experimental results show that the proposed network outperforms state-of-the-art algorithms with 0.459 mAP (mean average precision) and 0.918 AP50. This proves that our Transformer-based method is effective in handling wheat spike detection under complex field conditions.
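The regression loss described above combines IoU with a centre-distance term and an aspect-ratio term; its exact form is not given here, but the widely used complete-IoU (CIoU) loss combines the same three ingredients, so the following sketch conveys the structure. It is not the paper's Wheat Intersection over Union loss.

```python
import math

def ciou_loss(box_p, box_g):
    """CIoU-style loss for boxes (x1, y1, x2, y2): 1 - IoU + centre-distance + aspect-ratio terms."""
    # IoU term.
    xi1, yi1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    xi2, yi2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, xi2 - xi1) * max(0.0, yi2 - yi1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    iou = inter / (area_p + area_g - inter + 1e-9)
    # Normalized squared distance between box centres (relative to the enclosing box diagonal).
    cxp, cyp = (box_p[0] + box_p[2]) / 2, (box_p[1] + box_p[3]) / 2
    cxg, cyg = (box_g[0] + box_g[2]) / 2, (box_g[1] + box_g[3]) / 2
    cw = max(box_p[2], box_g[2]) - min(box_p[0], box_g[0])
    ch = max(box_p[3], box_g[3]) - min(box_p[1], box_g[1])
    rho2 = (cxp - cxg) ** 2 + (cyp - cyg) ** 2
    c2 = cw ** 2 + ch ** 2 + 1e-9
    # Aspect-ratio consistency term.
    wp, hp = box_p[2] - box_p[0], box_p[3] - box_p[1]
    wg, hg = box_g[2] - box_g[0], box_g[3] - box_g[1]
    v = (4 / math.pi ** 2) * (math.atan(wg / hg) - math.atan(wp / hp)) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v

print(f"loss = {ciou_loss((10, 10, 40, 60), (12, 8, 42, 58)):.3f}")
```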
... In comparison to satellite remote sensing, the spatial and temporal resolution of UAVs can be adjusted flexibly according to requirements. This is particularly beneficial for crop monitoring with a short observation interval and cm-level spatial resolution 8,9 . Different sensors have been mounted on UAVs for crop growth monitoring, of which multispectral and hyperspectral sensors have shown great capability 10,11 . ...
Full-text available
Article
Leaf area index (LAI) is a fundamental indicator of crop growth status, and timely, non-destructive estimation of LAI is of significant importance for precision agriculture. In this study, a multi-rotor UAV platform equipped with CMOS image sensors was used to capture maize canopy information; simultaneously, a total of 264 ground-measured LAI values were collected during a 2-year field experiment. Linear regression (LR), backpropagation neural network (BPNN), and random forest (RF) algorithms were used to establish LAI estimation models, and their performances were evaluated through 500 repetitions of random sub-sampling, training, and testing. The results showed that RGB-based VIs derived from UAV digital images were strongly related to LAI, and the grain-filling stage (GS) of maize was identified as the optimal period for LAI estimation. The RF model performed best over both the whole period and the individual growth stages, with the highest R2 (0.71–0.88) and the lowest RMSE (0.12–0.25) on the test datasets, followed by the BPNN and LR models. In addition, a smaller 5–95% interval range of R2 and RMSE was observed for the RF model, which indicated that the RF model has good generalization ability and is able to produce reliable estimation results.
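The evaluation protocol reported above, repeated random train/test splitting of a random-forest LAI model with an interval summary of the resulting scores, can be sketched as follows. The data are synthetic and the repetition count is reduced for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.random((200, 5))                                      # hypothetical RGB vegetation indices
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 0.1, 200)   # synthetic LAI

r2s, rmses = [], []
for seed in range(50):                                        # the study used 500 repetitions
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    rf = RandomForestRegressor(n_estimators=200, random_state=seed).fit(X_tr, y_tr)
    pred = rf.predict(X_te)
    r2s.append(r2_score(y_te, pred))
    rmses.append(mean_squared_error(y_te, pred) ** 0.5)

lo, hi = np.percentile(r2s, [5, 95])
print(f"R2: median={np.median(r2s):.2f}, 5-95% interval=({lo:.2f}, {hi:.2f}); "
      f"RMSE median={np.median(rmses):.2f}")
```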
Full-text available
Conference Paper
Stripe rust or yellow rust, caused by the fungus Puccinia striiformis (Pst), is one of the most destructive diseases of wheat (Triticum aestivum L.). It leads to significant economic damage, with losses of up to 70-80% under epidemic conditions and about 5.47 million tonnes of grain lost annually. Sustainable management of wheat stripe rust is achievable by identifying, introducing and subsequently selecting rust resistance genes during the breeding cycles. A total of 83 stripe rust resistance genes (Yr) have already been identified in wheat. Virulent Pst races against most single Yr genes have emerged so far; therefore, continuous efforts to characterize new resistance genes with tightly linked molecular markers are needed. The selection and combination of effective genes already present in the advanced breeding pool is also essential to enhance durable stripe rust resistance and to develop new wheat cultivars.
Full-text available
Article
Accurate and high-resolution crop yield and crop water productivity (CWP) datasets are required to understand and predict spatiotemporal variation in agricultural production capacity; however, datasets for maize and wheat, two key staple dryland crops in China, are currently lacking. In this study, we generated and evaluated a long-term data series, at 1-km resolution, of crop yield and CWP for maize and wheat across China, based on multiple remotely sensed indicators and a random forest algorithm. Results showed that MOD16 products are an accurate alternative to eddy covariance flux tower data for describing crop evapotranspiration (maize and wheat RMSE: 4.42 and 3.81 mm/8d, respectively), and the proposed yield estimation model showed good accuracy at local (maize and wheat rRMSE: 26.81 and 21.80%, respectively) and regional (maize and wheat rRMSE: 15.36 and 17.17%, respectively) scales. Our analyses, which showed the spatiotemporal patterns of maize and wheat yields and CWP across China, can be used to optimize agricultural production strategies in the context of maintaining food security.
Article
Urbanization, agricultural production, natural resource extraction and climate change are global drivers of terrestrial ecosystem degradation and decline in ecosystem health. Assessing vegetation structure provides valuable direct and indirect information on biodiversity and ecosystem health, since plant communities govern terrestrial ecosystem functions and overall biodiversity. A growing body of literature supports the hypothesis that acoustic profiling and characterization of soundscapes may reveal natural patterns and ecosystem responses to environmental disturbances. Although assessments of ecosystem soundscapes are a promising tool for ecological monitoring, the influence of vegetation structure on acoustic indices remains largely unaddressed. An effective non-invasive approach to monitoring ecosystem health includes the use of active acoustic transducers, which convert mechanical/acoustic energy into electrical energy and vice versa. This article reviews and discusses possible applications (and also constraints) of active acoustic transducers in the monitoring and assessment of terrestrial ecosystem health. Specifically, this article includes a brief introduction to the basic principles of sound and the types of active acoustic transducers. Moreover, we review common uses of active acoustic transducers in assessing plant structures and plant functional traits. We emphasize that active acoustic transducers can be used to analyse plant characteristics in a remote, scalable, non-invasive and cost-effective manner to characterize overall terrestrial ecosystem health. Suggestions and recommendations for future research and management directions that may facilitate the application of acoustic transducers to natural terrestrial landscapes are provided.
Article
Crop failure detection using UAV images is helpful for precision agriculture, enabling the precision management of failure areas to reduce crop loss. For wheat failure area detection at the seedling stage using UAV images, the commonly used methods are not sufficiently accurate. Thus, herein, a new tool for precision wheat management at the seedling stage is designed. For this purpose, field experiments with two wheat cultivars and four nitrogen (N) treatments were conducted to create different scenarios for the failure area, and multispectral UAV images were acquired at the seedling growth stage. Based on the above data, a new failure detection method was designed by assimilating prior knowledge and a filter analysis strategy and compared with classical filter-based methods and Hough transform-based methods for wheat failure area detection. The results showed that the newly proposed assimilation method had a detection accuracy between 83.86% and 97.67% for different N levels and cultivars. In contrast, the filter-based methods and Hough transform-based methods had detection accuracies between 53.73% and 83.95% and between 20.71% and 75.79%, respectively. Thus, the assimilation method demonstrated the best failure detection performance.
Article
Estimating plant nitrogen concentrations (PNCs) with remote sensing technology is critical for precise field nitrogen (N) management. Compared with other remote sensing platforms, low-altitude unmanned aerial vehicles (UAVs) produce images with high spatial resolutions that can be used to clearly identify soil and vegetation. Previously, many spectral indices were designed to remove soil effects and obtain optimal PNC predictions. Here, we attempt to enhance the PNC prediction accuracy simply by removing soil pixels from high-resolution images. We therefore collected a dataset containing different crops and image types to investigate whether removing soil pixels to purify the crop spectra can improve PNC estimation. For this purpose, N fertilizer experiments were conducted on cotton (Gossypium hirsutum L.), wheat (Triticum aestivum L.) and maize (Zea mays L.), and multispectral and hyperspectral UAV images and PNCs were collected at different growth stages. The multispectral images had actual high spatial resolutions, while the hyperspectral images had virtual high spatial resolutions constructed by fusing high-resolution panchromatic images and coarse-resolution hyperspectral images; these represent two typical UAV image types. First, for each crop, the relative changes and driving forces associated with the purified and non-purified spectra were analyzed under different growth stages and N treatments. Then, three commonly used methods, the spectral index (SI), partial least squares regression (PLSR) and artificial neural network (ANN) methods, were used to design PNC prediction models using the purified and non-purified spectra, respectively. The results showed that the differences between purified and non-purified spectra were affected by the proportions of crop pixels, sunlit soil pixels and shaded soil pixels in the image. This influence had various trends and magnitudes among different N treatments, growth stages and crop types. It is better to remove soil pixels from the imagery when designing PNC prediction models for plants across growth stages and crop types, or even within a single growth stage. The results from the actual high spatial resolution images demonstrated this point, with the best PNC prediction model obtained from the purified spectra. For the virtual high spatial resolution images, as the spectrum obtained for each vegetation pixel still represented a mixed vegetation and soil spectrum, removing soil pixels showed no improvement in PNC estimation. These results provide a reference for reasonably choosing an optimal data-processing method when constructing PNC prediction models.
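The central preprocessing idea above, removing soil pixels so that only crop pixels contribute to the plot-mean spectrum, can be written as a short masking step. The vegetation test used here (an excess-green threshold on the visible bands) and the image cube are only illustrative stand-ins for the paper's classification and data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical multispectral plot image: (rows, cols, bands) with bands [blue, green, red, red-edge, NIR].
cube = rng.random((80, 80, 5))

# Illustrative vegetation mask from an excess-green test on the visible bands.
b, g, r = cube[..., 0], cube[..., 1], cube[..., 2]
veg = (2 * g - r - b) > 0.1

# Plot-mean spectrum with and without soil pixels.
mean_all = cube.reshape(-1, cube.shape[-1]).mean(axis=0)
mean_purified = cube[veg].mean(axis=0)
print("non-purified:", np.round(mean_all, 3))
print("purified:    ", np.round(mean_purified, 3))
```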
Full-text available
Article
Canopy coverage and plant height are the main crop canopy parameters, which clearly reflect the growth status of crops in the field. The ability to determine canopy coverage and plant height quickly is critical for farmers and breeders when arranging their working schedules. In precision agriculture, choosing the timing and amount of farm inputs is the critical part, which improves yield and decreases cost. The potato canopy coverage and plant height were quickly extracted and used to estimate the spraying volume with an evaluation model obtained from indoor tests. A vegetation index approach was used to extract potato canopy coverage, and a color point cloud method at different height rates was used to estimate the plant height of potato at different growth stages. The original data were collected using a low-cost UAV carrying a high-resolution RGB camera. The Structure from Motion (SFM) algorithm was then used to extract a 3D point cloud from ordered images, from which a digital orthophoto model (DOM) and a sparse point cloud were formed. The results show that the vegetation index-based method could accurately estimate canopy coverage. Among EXG, EXR, RGBVI, GLI, and CIVE, EXG achieved the best adaptability in the different test plots. The point cloud data could be used to estimate plant height, but when the potato coverage rate was low, the potato canopy point cloud became sparse; in the vigorous growth period, the estimated values were strongly correlated with the measured values (R2 = 0.94). The relationship between the area of the potato canopy covered by spraying and the canopy coverage was measured indoors to form the model. The results revealed that the model could estimate the dose accurately (R2 = 0.878). Therefore, combining agronomic factors with data extracted from UAV RGB images makes it possible to predict the field spraying volume.
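A common way to derive plant height from an SfM point cloud is to take a high quantile of canopy point heights above the ground surface; the sketch below illustrates that idea with NumPy, using a made-up point cloud and arbitrary percentile choices rather than the paper's specific height-rate method.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical SfM point cloud for one plot: columns x, y, z (metres).
ground_z = 100.0
canopy = np.column_stack([rng.random((500, 2)), ground_z + rng.uniform(0.0, 0.6, 500)])
soil = np.column_stack([rng.random((300, 2)), ground_z + rng.normal(0.0, 0.01, 300)])
cloud = np.vstack([canopy, soil])

# Ground level from the lowest points, canopy level from a high percentile of all points.
z = cloud[:, 2]
ground = np.percentile(z, 5)
canopy_top = np.percentile(z, 95)
print(f"estimated plant height: {canopy_top - ground:.2f} m")
```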
Full-text available
Article
Although crop-growth monitoring is important for agricultural managers, it has always been a difficult research topic. However, unmanned aerial vehicles (UAVs) equipped with RGB and hyperspectral cameras can now acquire high-resolution remote-sensing images, which facilitates and accelerates such monitoring. To explore the effect of monitoring a single crop-growth indicator and multiple indicators, this study combines six growth indicators (plant nitrogen content, above-ground biomass, plant water content, chlorophyll, leaf area index, and plant height) into the new comprehensive growth index (CGI). We investigate the performance of RGB imagery and hyperspectral data for monitoring crop growth based on multi-time estimation of the CGI. The CGI is estimated from the vegetation indices based on UAV hyperspectral data treated by linear, nonlinear, and multiple linear regression (MLR), partial least squares (PLSR), and random forest (RF). The results are as follows: (1) The RGB-imagery indices red reflectance (r), the excess-red index (EXR), the vegetation atmospherically resistant index (VARI), and the modified green-red vegetation index (MGRVI), as well as the spectral indices consisting of the linear combination index (LCI), the modified simple ratio index (MSR), the simple ratio vegetation index (SR), and the normalized difference vegetation index (NDVI), are more strongly correlated with the CGI than a single growth-monitoring indicator. (2) The CGI estimation model is constructed by comparing a single RGB-imagery index and a spectral index, and the optimal RGB-imagery index corresponding to each of the four growth stages in order is r, r, r, EXR; the optimal spectral index is LCI for all four growth stages. (3) The MLR, PLSR, and RF methods are used to estimate the CGI. The MLR method produces the best estimates. (4) Finally, the CGI is more accurately estimated using the UAV hyperspectral indices than using the RGB-image indices.
Article
Detecting and characterizing spikes from wheat field images is essential in wheat growth monitoring for precision farming. Along with various technological developments, deep-learning-based methods have remarkably improved wheat spike detection performance. However, detecting small and overlapping wheat spikes in UAV images is still challenging, because heavy spike occlusion and complex backgrounds can cause false detections and missed detections. This paper proposes a deep learning method for oriented and small wheat spike detection (OSWSDet). Unlike classical wheat spike detection methods, OSWSDet introduces the orientation of wheat spikes into the YOLO framework by integrating a circle smooth label (CSL) and a micro-scale detection layer. These improvements enhance the ability to detect small-sized wheat spikes and prevent wheat spike detection errors. The experimental results show that OSWSDet outperforms classical wheat spike detection methods, with an average precision (AP) of 90.5%. OSWSDet can accurately detect spikes in UAV images with complex field backgrounds and provides a technical reference for future field wheat phenotype monitoring.
Article
The heading date and effective tiller percentage are important traits in rice, and they directly affect plant architecture and yield. Both traits are related to the ratio of the panicle number to the maximum tiller number, or simply called panicle ratio (PR). In this study, an automatic PR estimation model (PRNet) based on a deep convolutional neural network was developed. Ultra-high-definition unmanned aerial vehicle (UAV) images were collected from cultivated rice varieties planted in 2384 experimental plots in 2019 and 2020 and in a large field in 2021. The determination coefficient between estimated PR and ground measured PR reached 0.935, and the root mean square error (RMSE) values for the estimations of the heading date and effective tiller percentage were 0.687 days and 4.84%, respectively. Based on the result analysis, various factors affecting PR estimation and strategies for improving PR estimation accuracy were investigated. The satisfactory results obtained in this study demonstrate the feasibility of using UAV and deep learning techniques to replace ground-based manual methods to accurately extract phenotypic information of crop micro targets (such as grains per panicle, panicle flowering, etc.) for rice and potentially for other cereal crops with future research.
Full-text available
Article
The number and growth of new shoots provide very important information for bamboo forest cultivation and management. At present, there is no real-time, efficient and accurate monitoring method. In this study, a fixed webcam was used for image capture, an optimized YOLOv4 model was used to detect moso bamboo shoots, and a sorting and screening strategy was proposed to track each moso bamboo shoot. The changes in the number and height of moso bamboo shoots were obtained from the number and height of the detection boxes. The experimental results show that the system can remotely and automatically obtain the number of moso bamboo shoots and the pixel height of each bamboo shoot at any given time. The average relative error and variance in the number of moso bamboo shoots were 1.28% and 0.016%, respectively, and those for the corresponding pixel height results were −0.39% and 0.02%. This system can be applied to a series of monitoring purposes, such as tracking the daily or weekly growth rate of moso bamboo shoots at monitoring stations and trends in the height of selected bamboo shoots.
Full-text available
Article
Wheat (Triticum aestivum L.) is an essential crop that is widely consumed globally. Tiller density is an important factor affecting wheat yield. Therefore, it is necessary to measure the number of tillers during wheat cultivation and breeding, which requires considerable labor and material resources. At present, there is no effective high-throughput measurement method for tiller number estimation, and the conventional tiller survey method cannot accurately reflect the spatial variation of wheat tiller density within a whole field. Therefore, to meet the demand for field-scale thematic maps of wheat tiller density for variable-rate nitrogen fertilization, multispectral images of wheat at Feekes growth stages 2–3 were obtained by unmanned aerial vehicle (UAV), and characteristic parameters related to tiller number were used to construct a model that could accurately estimate the number of tillers. Building on vegetation indices (VIs), this work proposed a gradual change features (GCFs) approach, which greatly mitigates the disadvantages of using VIs alone to estimate tiller number, better reflects the tiller status of the wheat population, and gives good results for tiller estimation with common models. A Lasso + VIs + GCFs method was constructed for accurate estimation of tiller number across multiple growth periods and fertilizer treatments, with an average RMSE of fewer than 9 tillers per square meter, an average MAE of less than 8 tillers per square meter, and an R² above 0.7. The results of the study not only provide a high-throughput measurement method for tiller number but also offer a reference for the estimation of tiller number and other agronomic parameters.
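The final regression step, a Lasso model over vegetation indices plus the proposed gradual-change features, can be sketched with scikit-learn; the GCFs themselves are not reproduced here, so the second feature block below is just a placeholder, as are all the values.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

# Hypothetical plot features: vegetation indices plus placeholder gradual-change features (GCFs).
n = 150
vis = rng.random((n, 5))
gcfs = rng.random((n, 4))
X = np.hstack([vis, gcfs])
y = 400 + X @ rng.uniform(10, 60, 9) + rng.normal(0, 5, n)   # synthetic tillers per square metre

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LassoCV(cv=5).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R2={r2_score(y_te, pred):.2f}, MAE={mean_absolute_error(y_te, pred):.1f}, "
      f"RMSE={mean_squared_error(y_te, pred) ** 0.5:.1f} tillers/m2")
```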
Article
Accurate crop distribution mapping is required for crop yield prediction and field management. Due to rapid progress in remote sensing technology, fine spatial resolution (FSR) remotely sensed imagery now offers great opportunities for mapping crop types in great detail. However, within-class variance can hamper attempts to discriminate crop classes at fine resolutions. Multi-temporal FSR remotely sensed imagery provides a means of increasing crop classification accuracy, although current methods do not exploit the available information fully. In this research, a novel Temporal Sequence Object-based Convolutional Neural Network (TS-OCNN) was proposed to classify agricultural crop type from FSR image time-series. An object-based CNN (OCNN) model was adopted in the TS-OCNN to classify images at the object level (i.e., segmented objects or crop parcels), thus maintaining the precise boundary information of crop parcels. The combination of image time-series was first utilized as the input to the OCNN model to produce an 'original' or baseline classification. Then the single-date images were fed automatically into the deep learning model scene-by-scene in order of image acquisition date to successively increase the crop classification accuracy. By doing so, the joint information in the FSR multi-temporal observations and the unique individual information from the single-date images were exploited comprehensively for crop classification. The effectiveness of the proposed approach was investigated using multitemporal SAR and optical imagery, respectively, over two heterogeneous agricultural areas. The experimental results demonstrated that the newly proposed TS-OCNN approach consistently increased crop classification accuracy, and achieved the greatest accuracies (82.68% and 87.40%) in comparison with state-of-the-art benchmark methods, including the object-based CNN (OCNN) (81.63% and 85.88%), object-based image analysis (OBIA) (78.21% and 84.83%), and a standard pixel-wise CNN (79.18% and 82.90%). The proposed approach is the first known attempt to explore simultaneously the joint information from image time-series and the unique information from single-date images for crop classification using a deep learning framework. The TS-OCNN therefore represents a new approach for agricultural landscape classification from multi-temporal FSR imagery. Moreover, it is readily generalizable to other landscapes (e.g., forest landscapes), with wide application prospects.
Full-text available
Article
Crop density is a key agronomical trait used to manage wheat crops and estimate yield. Visual counting of plants in the field is currently the most common method used. However, it is tedious and time consuming. The main objective of this work is to develop a machine-vision-based method to automate the density survey of wheat at early stages. RGB images taken with a high-resolution RGB camera are classified to identify the green pixels corresponding to the plants. Crop rows are extracted and the connected components (objects) are identified. A neural network is then trained to estimate the number of plants in the objects using the object features. The method was evaluated over 3 experiments with contrasting conditions and sowing densities ranging from 100 to 600 seeds·m-2. Results demonstrate that the density is accurately estimated with an average relative error of 12%. The pipeline developed here provides an efficient and accurate estimate of wheat plant density at early stages.
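The object-level step of this pipeline, labelling groups of connected green pixels within an extracted row and describing each object with simple features before a learned model predicts how many plants it contains, can be sketched with SciPy. The mask, the feature, and the simple area-based counting rule below are illustrative only; the paper instead feeds object features to a trained neural network.

```python
import numpy as np
from scipy import ndimage

# Hypothetical binary row image: True = green (plant) pixel after classification and row extraction.
row_mask = np.zeros((40, 200), dtype=bool)
row_mask[15:25, 20:35] = True     # a single isolated plant
row_mask[14:26, 60:90] = True     # a larger object formed by two merged plants

# Label connected components (objects) and extract a per-object feature (area in pixels).
labels, n_objects = ndimage.label(row_mask)
areas = ndimage.sum(row_mask, labels, index=range(1, n_objects + 1))

# Illustrative fallback rule: assume one plant per ~150 px of object area.
plants = sum(max(1, round(a / 150)) for a in areas)
print(f"{n_objects} objects, estimated {plants} plants in this row segment")
```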
Full-text available
Article
Small unmanned aerial vehicles (SUAVs) can change the way researchers collect field data, monitor field equipment, and even operate agricultural machinery. Small unmanned aerial vehicles are a category of UAVs that weigh less than 10 kg (22 lbs) and can be operated remotely, with or without fully autonomous flying capabilities. Potential applications of this technology in agriculture include crop scouting, pest distribution mapping, bare soil imagery, and irrigation and drainage planning. Since SUAVs fly at low altitudes, they can collect very high resolution images. Using thermal and visible/near-infrared images of individual healthy and stressed citrus trees, an SUAV imaging system can distinguish stressed trees from healthy trees with higher accuracy than can be obtained from conventional aerial imaging.
Full-text available
Article
The development of reliable methods for the estimation of crown architecture parameters is a key issue for the quantitative evaluation of tree crop adaptation to environmental conditions and/or growing systems. In the present work, we developed and tested the performance of a method based on low-cost unmanned aerial vehicle (UAV) imagery for the estimation of olive crown parameters (tree height and crown diameter) in the framework of olive tree breeding programs, for both discontinuous and continuous canopy cropping systems. The workflow involved image acquisition with consumer-grade cameras on board a UAV, and orthomosaic and digital surface model generation using structure-from-motion image reconstruction (without ground point information). Finally, geographical information system analyses and object-based classification were used for the calculation of tree parameters. Results showed a high agreement between the remote sensing estimates and field measurements of crown parameters. This was observed both at the individual tree/hedgerow level (relative RMSE from 6% to 20%, depending on the particular case) and when average values for different genotypes were considered for phenotyping purposes (relative RMSE from 3% to 16%), pointing out the interest and applicability of these data and techniques in the selection scheme of breeding programs.
Full-text available
Article
Plant phenotyping refers to a quantitative description of the plant's anatomical, ontogenetical, physiological and biochemical properties. Today, rapid developments are taking place in the field of non-destructive, image-analysis-based phenotyping that allow a characterization of plant traits at high throughput. During the last decade, the field of image-based phenotyping has broadened its focus from the initial characterization of single-plant traits in controlled conditions towards 'real-life' applications of robust field techniques in plant plots and canopies. An important component of successful phenotyping approaches is the holistic characterization of plant performance that can be achieved with several methodologies, ranging from multispectral image analyses via thermographical analyses to growth measurements, also taking root phenotypes into account.