Article

Detection of Carolina Geranium ( Geranium carolinianum ) Growing in Competition with Strawberry Using Convolutional Neural Networks


Abstract

Weed interference during crop establishment is a serious concern for Florida strawberry [Fragaria × ananassa (Weston) Duchesne ex Rozier (pro sp.) [chiloensis × virginiana]] producers. In situ remote detection for precision herbicide application reduces both the risk of crop injury and herbicide inputs. Carolina geranium (Geranium carolinianum L.) is a widespread broadleaf weed within Florida strawberry production with sensitivity to clopyralid, the only available POST broadleaf herbicide. Geranium carolinianum leaf structure is distinct from that of the strawberry plant, which makes it an ideal candidate for pattern recognition in digital images via convolutional neural networks (CNNs). The study objective was to assess the precision of three CNNs in detecting G. carolinianum. Images of G. carolinianum growing in competition with strawberry were gathered at four sites in Hillsborough County, FL. Three CNNs were compared: object detection-based DetectNet and image classification-based VGGNet and GoogLeNet. Two DetectNet networks were trained to detect either leaves or canopies of G. carolinianum. Image classification using GoogLeNet and VGGNet was largely unsuccessful during validation with whole images (F-score < 0.02). CNN training using cropped images increased G. carolinianum detection during validation for VGGNet (F-score = 0.77) and GoogLeNet (F-score = 0.62). The G. carolinianum leaf-trained DetectNet achieved the highest F-score (0.94) for plant detection during validation. Leaf-based detection led to more consistent detection of G. carolinianum within the strawberry canopy and reduced the recall-related errors encountered with canopy-based training. The smaller target of leaf-based DetectNet did increase false positives, but such errors can be overcome with additional training images for network desensitization. DetectNet was the most viable CNN tested for image-based remote sensing of G. carolinianum in competition with strawberry. Future research will identify the optimal approach for in situ detection and integrate the detection technology with a precision sprayer.
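For readers unfamiliar with the F-scores reported above, the short Python sketch below shows how an F-score combines precision and recall for a detection validation run. It is a minimal illustration only, not the authors' evaluation code, and the example counts are hypothetical.

# Minimal sketch: precision, recall, and F-score from detection counts.
# The counts below are hypothetical examples, not values from the study.

def f_score(true_positives, false_positives, false_negatives):
    """Return (precision, recall, F-score) for one validation run."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

if __name__ == "__main__":
    # Example: 94 correct detections, 8 false positives, 4 missed plants.
    p, r, f1 = f_score(94, 8, 4)
    print(f"precision={p:.2f} recall={r:.2f} F-score={f1:.2f}")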


... Though all the models performed well, DetectNet exhibited a slightly higher F1 score of ≥0.99. In contrast, Sharpe et al. (2019) evaluated the performance of the VGGNet, GoogLeNet, and DetectNet architectures using two variations of images (i.e., whole and cropped images). ...
... Adhikari et al., 2019; Espejo-Garcia et al., 2020; Gao et al., 2020; Knoll et al., 2019; Ma et al., 2019; Sarvini et al., 2019; Sharpe et al., 2020; Tang et al., 2017; Teimouri et al., 2018; Yan et al., 2020; Yu et al., 2019a; Yu et al., 2019b). Sharpe et al. (2019) collected their data by maintaining a certain height (130 cm) from the soil surface. A Brimrose VA210 filter and JAI BM-141 cameras have been used to collect hyperspectral images of weeds and crops without using any platform (Farooq et al., 2018a, 2018b; Farooq et al., 2019). Andrea et al. (2017) manually focused a camera on t ...
... They split the images into small patches of 1000×1000 pixels. In the study of Sharpe et al. (2019), the images were resized to 1280×720 pixels and then cropped into four sub-images. Osorio et al. (2020) used 1280×960 pixel images with four spectral bands. By applying a union operation on the red, green, and near-infrared bands, they generated a false-green image in order to highlight the vegetation. Sharpe et al. (2020) resized the collected images to 1280×853 pixels and then cropped them to 1280×720 pixels. Background removal: H. Huang et al. (2020) collected images using a UAV and applied image mosaicing to generate an orthophoto. Bah et al. (2018) applied a Hough transform to highlight the aligned pixels and used the Otsu adaptive thresholding method to differentiate the background from green crops or weeds. ...
Article
Full-text available
The rapid advances in Deep Learning (DL) techniques have enabled rapid detection, localisation, and recognition of objects from images or videos. DL techniques are now being used in many applications related to agriculture and farming. Automatic detection and classification of weeds can play an important role in weed management and so contribute to higher yields. Weed detection in crops from imagery is inherently a challenging problem because both weeds and crops have similar colours (‘green-on-green’), and their shapes and texture can be very similar at the growth phase. Also, a crop in one setting can be considered a weed in another. In addition to their detection, the recognition of specific weed species is essential so that targeted controlling mechanisms (e.g. appropriate herbicides and correct doses) can be applied. In this paper, we review existing deep learning-based weed detection and classification techniques. We cover the detailed literature on four main procedures, i.e., data acquisition, dataset preparation, DL techniques employed for detection, location and classification of weeds in crops, and evaluation metrics. We found that most studies applied supervised learning techniques, that high classification accuracy was achieved by fine-tuning pre-trained models on plant datasets, and that past experiments have already achieved high accuracy when a large amount of labelled data is available.
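The review's observation that high accuracy typically comes from fine-tuning pre-trained models can be illustrated with a short, hedged sketch of that workflow using torchvision (requires a recent version supporting the weights enum). The dataset path, class layout, and hyperparameters below are placeholders for illustration, not settings from any of the reviewed studies.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical crop/weed dataset laid out for ImageFolder:
# data/train/<class_name>/*.jpg  (path and classes are placeholders)
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)

# Start from ImageNet weights and replace the final layer for the new classes.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, len(train_set.classes))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for epoch in range(5):  # illustrative epoch count
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()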
... This technology has been used in wild blueberry production for detecting fruit ripeness stages and estimating potential fruit yield [30]. In other cropping systems, CNNs have been effective for detecting weeds in strawberry fields [31], potato fields [32], turfgrasses [33,34], ryegrasses [35] and Florida vegetables [36]. CNNs have also been used for detecting diseases on tomato [37,38], apple, strawberry and various other plants [38]. ...
... The aforementioned CNNs used between 1472 [31] and 40,800 [35] labelled images for training and validation. Given that there are more than 100 unique weed species in Nova Scotia wild blueberry fields, it would be best to train CNNs to identify more than just fescue and sheep sorrel. ...
... The results from the other networks indicate that datasets of approximately 500 images can be used to train all tested CNNs except Darknet Reference for hair fescue detection in wild blueberry fields. The number of images needed to effectively train these CNNs is much lower than the 1472 [31] to 40,800 [35] images used in other agricultural applications of CNNs. This test should be performed with other datasets to confirm whether this is always the case for other weeds. ...
Article
Full-text available
Deep learning convolutional neural networks (CNNs) are an emerging technology that provide an opportunity to increase agricultural efficiency through remote sensing and automatic inferencing of field conditions. This paper examined the novel use of CNNs to identify two weeds, hair fescue and sheep sorrel, in images of wild blueberry fields. Commercial herbicide sprayers provide a uniform application of agrochemicals to manage patches of these weeds. Three object-detection and three image-classification CNNs were trained to identify hair fescue and sheep sorrel using images from 58 wild blueberry fields. The CNNs were trained using 1280x720 images and were tested at four different internal resolutions. The CNNs were retrained with progressively smaller training datasets ranging from 3780 to 472 images to determine the effect of dataset size on accuracy. YOLOv3-Tiny was the best object-detection CNN, detecting at least one target weed per image with F1-scores of 0.97 for hair fescue and 0.90 for sheep sorrel at 1280x736 resolution. Darknet Reference was the most accurate image-classification CNN, classifying images containing hair fescue and sheep sorrel with F1-scores of 0.96 and 0.95, respectively, at 1280x736. MobileNetV2 achieved comparable results at the lowest resolution, 864x480, with F1-scores of 0.95 for both weeds. Training dataset size had minimal effect on accuracy for all CNNs except Darknet Reference. This technology can be used in a smart sprayer to control target-specific spray applications, reducing herbicide use. Future work will involve testing the CNNs for use on a smart sprayer and the development of an application to provide growers with field-specific information. Using CNNs to improve agricultural efficiency will create major cost-savings for wild blueberry producers.
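One methodological detail worth highlighting is the retraining with progressively smaller training datasets (3780 down to 472 images) to gauge the effect of dataset size. The Python sketch below shows one simple way to build such nested subsets; the file names and intermediate subset sizes are placeholders, not the authors' pipeline.

import random

# Hypothetical pool of annotated training images (placeholder names).
all_images = [f"field_{i:04d}.jpg" for i in range(3780)]

# Nested subsets from 3780 down to 472 images, mirroring the idea of
# retraining with progressively smaller datasets.
random.seed(42)
random.shuffle(all_images)
for size in (3780, 2835, 1890, 945, 472):  # illustrative sizes
    subset = all_images[:size]
    with open(f"train_{size}.txt", "w") as f:
        f.write("\n".join(subset))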
... In all cases, the authors have demonstrated the potential use of Neural Networks also in the agricultural discipline. Nevertheless, choosing a suitable network requires careful planning as it must fit the task at hand [39]. Furthermore, the robustness of the trained network, along with the robustness of training similar networks has not been examined. ...
... Furthermore, the robustness of the trained network, along with the robustness of training similar networks has not been examined. In the context of weed and crop classification, supervised training with a prelabeled dataset is widely used to cope with the high variability in the morphology of the plants based on the development stages and environmental influence, which can lead to poor classification accuracy [39]. Yet, the difficulty of acquiring multiple labeled instances of each plant in different development stages still poses an important academic and practical challenge [20]. ...
... ResNet-50 and Xception performed better than VGG16, achieving a performance of 97% and 98%, respectively. Recent publications like dos Santos Ferreira et al. [15], Potena et al. [18], Tang et al. [4], Sharpe et al. [39] and Elnemr [19] have also achieved classification results of over 90%. Yet in the majority of these cases, a low number of classes (two to four) was used, or the datasets were only sufficient to prove the researched hypothesis but not sufficient to transfer the results into the complexity of the real world. ...
Article
Full-text available
The increasing public concern about food security and the stricter rules applied worldwide concerning herbicide use in the agri-food chain reduce consumer acceptance of chemical plant protection. Site-Specific Weed Management can be achieved by applying a treatment only on the weed patches. Crop and weed identification is a necessary component of various aspects of precision farming in order to perform on-the-spot herbicide spraying, robotic weeding, and precision mechanical weed control. During the last years, a lot of different methods have been proposed, yet more improvements need to be made on this problem concerning the speed, robustness, and accuracy of the algorithms and the recognition systems. Digital cameras and Artificial Neural Networks (ANNs) have been rapidly developed in the past few years, providing new methods and tools also in agriculture and weed management. In the current work, images of Zea mays, Helianthus annuus, Solanum tuberosum, Alopecurus myosuroides, Amaranthus retroflexus, Avena fatua, Chenopodium album, Lamium purpureum, Matricaria chamomilla, Setaria spp., Solanum nigrum and Stellaria media gathered by an RGB camera were used to train Convolutional Neural Networks (CNNs). Three different CNNs, namely VGG16, ResNet-50, and Xception, were adapted and trained on a pool of 93,000 images. The training images consisted of plant material with only one species per image. A Top-1 accuracy between 77% and 98% was obtained in plant detection and weed species discrimination on the test images.
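Since performance here is reported as Top-1 accuracy, the snippet below sketches how that metric is computed from classifier outputs. The scores and labels are synthetic placeholders, not data from the study.

import numpy as np

# Synthetic example: softmax scores for 4 images over 3 species classes.
scores = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.3, 0.3, 0.4],
    [0.6, 0.3, 0.1],
])
labels = np.array([0, 1, 2, 1])  # ground-truth class indices

# Top-1 accuracy: fraction of images whose highest-scoring class
# matches the ground truth.
top1 = (scores.argmax(axis=1) == labels).mean()
print(f"Top-1 accuracy: {top1:.2%}")  # 75% for this toy example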
... The system relies on pattern recognition via filters within the convolutional layers for detection and classification 15 . Convolutional neural networks for weed detection have been employed in several crops including turfgrass 16,17 , wheat 18 , and strawberry 19 . For horticultural plasticulture row middles, a convolutional neural network has been developed to detect grasses among broadleaves and sedges 20 . ...
... c LB = Leaf-blade annotation method. This refers to using multiple, small square boxes placed along leaf blades and inflorescence to identify goosegrass within digital images. in 99% precision and 78% recall 19 . Current results for goosegrass detection in strawberry obtained a relatively similar overall accuracy compared to similar studies using convolutional neural networks alone, but detection in tomatoes may require further sampling. ...
... Training data (Training 1, Table 4) were acquired during the strawberry growing season at GCREC and SGA. Images were taken in tandem with a previous study 19 . Strawberry plants were transplanted on October 10, 2017, and October 16, 2017, at the GCREC and SGA, respectively. ...
Article
Full-text available
Goosegrass is a problematic weed species in Florida vegetable plasticulture production. To reduce costs associated with goosegrass control, a post-emergence precision applicator is under development for use atop the planting beds. To facilitate in situ goosegrass detection and spraying, tiny You Only Look Once 3 (YOLOv3-tiny) was evaluated as a potential detector. Two annotation techniques were evaluated: (1) annotation of the entire plant (EP) and (2) annotation of partial sections of the leaf blade (LB). For goosegrass detection in strawberry, the F-score was 0.75 and 0.85 for the EP- and LB-derived networks, respectively. For goosegrass detection in tomato, the F-score was 0.56 and 0.65 for the EP- and LB-derived networks, respectively. The LB-derived networks increased recall at the cost of precision, compared to the EP-derived networks. The LB annotation method demonstrated superior results within the context of production and precision spraying, ensuring more targets were sprayed with some over-spraying on false targets. The developed network provides online, real-time, and in situ detection capability for weed management field applications such as precision spraying and autonomous scouts.
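The comparison above hinges on how the training boxes are drawn (entire plant vs. partial leaf-blade sections). The sketch below shows, under assumed pixel coordinates, how either style of bounding box would be written out in the normalized text format that YOLO-family detectors train on; the image size and boxes are hypothetical examples, not the study's annotations.

# Minimal sketch: converting pixel bounding boxes to YOLO-style
# normalized labels (class x_center y_center width height).
IMG_W, IMG_H = 1280, 720
GOOSEGRASS = 0  # single-class example

def to_yolo(cls, x_min, y_min, x_max, y_max):
    xc = (x_min + x_max) / 2 / IMG_W
    yc = (y_min + y_max) / 2 / IMG_H
    w = (x_max - x_min) / IMG_W
    h = (y_max - y_min) / IMG_H
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# EP style: one large box around the whole plant.
entire_plant = [(300, 200, 700, 560)]
# LB style: several small boxes along individual leaf blades.
leaf_blades = [(320, 220, 380, 280), (500, 300, 560, 360), (610, 480, 670, 540)]

for box in leaf_blades:  # swap in entire_plant for the EP style
    print(to_yolo(GOOSEGRASS, *box))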
... Precision technology for spot-application of herbicides predominantly relies on machine vision-linked detectors for autonomous weed control applications (Fennimore et al. 2016). The most common sensor technologies are multispectral cameras (Vrindts et al. 2002), hyperspectral cameras (Zhang et al. 2012) and RGB cameras (dos Santos Ferreira et al. 2017, Sharpe et al. 2018). Consumer RGB cameras are a viable low-cost alternative sensor compared to hyperspectral technology (Fennimore et al. 2016). ...
... Object detection-based CNNs have been applied to detect weeds within digital images from wheat (Triticum aestivum L.) (Dyrmann et al. 2017) and strawberry cropping systems (Sharpe et al. 2018). A segmentation-based CNN has been used to discriminate broadleaf and grass weeds from soybean [Glycine max (L.) Merr.] and bare-ground (dos Santos Ferreira et al. 2017). ...
... us-ide.org/). Previous research demonstrated that smaller labels on the most prevalent, visible part of the plant were more effective than labeling the presence of whole plants (Sharpe et al. 2018), and a similar approach was undertaken. ...
Article
Full-text available
Weed control between plastic covered, raised beds in Florida vegetable crops relies predominantly on herbicides. Broadcast applications of post-emergence herbicides are unnecessary due to the generally patchy distribution of weed populations. Development of precision herbicide sprayers to apply herbicides where weeds occur would result in input reductions. The objective of the study was to test a state-of-the-art object detection convolutional neural network, You Only Look Once 3 (YOLOv3), to detect vegetation both indiscriminately (1-class network) and to detect and discriminate three classes of vegetation commonly found within Florida vegetable plasticulture row-middles (3-class network). Vegetation was discriminated into three categories: broadleaves, sedges and grasses. The 3-class network (F-score = 0.95) outperformed the 1-class network (F-score = 0.93) in overall vegetation detection. Combining classes increased target variability, which potentially negated the benefits of pooling classes into a single target (and of increasing the available data per class). The 3-class network F-scores for grasses, sedges and broadleaves were 0.96, 0.96 and 0.93, respectively. Recall was the limiting factor for all classes. With consideration to how much of the plant was identified (broadleaves and grasses), the 3-class network (F-score = 0.93) outperformed the 1-class network (F-score = 0.79). The 1-class network struggled to detect grassy weed species (recall = 0.59). Use of YOLOv3 as an object detector for discrimination of vegetation classes is a feasible option for incorporation into precision applicators.
... 16 Notably, deep learning, specifically deep convolutional neural networks (DCNNs), has demonstrated impressive object detection and image classification capabilities and is being utilized for real-time weed detection. 9,17 Recent studies have indicated that DCNNs are capable of detecting weeds in various cropping systems, such as turfgrass landscapes, 9,17 soybean [Glycine max (L.) Merr.], 18,19 peas (Pisum sativum L.), 20 strawberry (Fragaria × ananassa Duch), 20,21 and plastic-mulched tomatoes (Lycopersicon esculentum L.). 21 A review of the literature indicates that all preceding studies designed DCNNs for classifying a single or limited species of weeds. ...
... 9,17 Recent studies have indicated that DCNNs are capable of detecting weeds in various cropping systems, such as turfgrass landscapes, 9,17 soybean [Glycine max (L.) Merr.], 18,19 peas (Pisum sativum L.), 20 strawberry (Fragaria × ananassa Duch), 20,21 and plastic-mulched tomatoes (Lycopersicon esculentum L.). 21 A review of the literature indicates that all preceding studies designed DCNNs for classifying a single or limited species of weeds. 22,23 For instance, Sharpe et al. 22 investigated You Only Look Once 3 (YOLOv3-tiny) for detecting a single grass weed species, goosegrass [Eleusine indica (L.) Gaertn.], in plastic mulched tomato and strawberry crops. ...
Article
Full-text available
BACKGROUND Reliable, fast, and accurate weed detection in farmland is crucial for precision weed management but remains challenging due to the diverse weed species present across different fields. While deep learning models for direct weed detection have been developed in previous studies, creating a training dataset that encompasses all possible weed species, ecotypes, and growth stages is practically unfeasible. This study proposes a novel approach to detect weeds by integrating semantic segmentation with image processing. The primary aim is to simplify the weed detection process by segmenting crop pixels and identifying all vegetation outside the crop mask as weeds. RESULTS The proposed method employs a semantic segmentation model to generate a mask of corn (Zea mays L.) crops, identifying all green plant pixels outside the mask as weeds. This indirect segmentation approach reduces model complexity by avoiding the need for direct detection of diverse weed species. To enhance real‐time performance, the semantic segmentation model was optimized through knowledge distillation, resulting in a faster, lighter‐weight inference. Experimental results demonstrated that the DeepLabV3+ model, after applying knowledge distillation, achieved an average accuracy (aAcc) exceeding 99.5% and a mean intersection over union (mIoU) across all categories above 95.5%. Furthermore, the model's operating speed surpassed 34 frames per second (FPS). CONCLUSION This study introduces a novel method that accurately segments crop pixels to form a mask, identifying vegetation outside this mask as weeds. By focusing on crop segmentation, the method avoids the complexity associated with diverse weed species, varying densities, and different growth stages. This approach offers a practical and efficient solution to facilitate the training of effective computer vision models for precision weed detection and control. © 2024 Society of Chemical Industry.
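The indirect approach described above (segment the crop and treat any remaining green vegetation as weeds) can be expressed with standard image operations. Below is a minimal Python/OpenCV sketch that assumes an excess-green vegetation index; the file names, threshold value, and the crop mask standing in for the trained segmentation model's output are all placeholders, not the authors' implementation.

import cv2
import numpy as np

# Placeholder inputs: an RGB field image and a binary crop mask that a
# trained segmentation model (e.g., DeepLabV3+) would produce.
image = cv2.imread("field.jpg")                              # BGR, uint8
crop_mask = cv2.imread("crop_mask.png", cv2.IMREAD_GRAYSCALE) > 0

# Excess-green index (2G - R - B) as a simple vegetation indicator.
b, g, r = cv2.split(image.astype(np.float32))
exg = 2 * g - r - b
vegetation = exg > 20.0          # illustrative threshold

# Weeds: vegetation pixels that fall outside the crop mask.
weed_mask = vegetation & ~crop_mask
cv2.imwrite("weed_mask.png", (weed_mask * 255).astype(np.uint8))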
... Precision herbicide application, based on an accurate, reliable, and automatic weed detection technology, can substantially reduce herbicide input and weed control costs. 7,8 Previous researchers explored a variety of sensing methods, such as fluorescence, [9][10][11] visible or near-infrared spectroscopy, 12,13 hyper-or multi-spectral imaging, 14,15 and machine vision, [16][17][18] for weed detection. Nevertheless, the introduction of smart sprayers into practical farming, particularly for wheat, is still lacking. ...
... This is undesirable because weeds would be missed in field applications, resulting in poor herbicide coverage and poor weed control. In previous research, Sharpe et al. 16,51 reported that annotation method can affect the performance of neural networks for weed detection. The authors noted that the overall accuracy of YOLOv3 for detection of goosegrass [Eleusine indica (L.) Gaertn.] in plastic-mulched strawberry and tomato (Solanum lycopersicum L.) significantly improved when the partial sections of leaf blade were annotated rather than the entire weed plant. ...
Article
Full-text available
BACKGROUND In‐field weed detection in wheat (Triticum aestivum L.) is challenging due to the occurrence of weeds in close proximity with the crop. The objective of this research was to evaluate the feasibility of using deep convolutional neural networks for detecting broadleaf weed seedlings growing in wheat. RESULTS The object detection neural networks, including CenterNet, Faster R‐CNN, TridenNet, VFNet, and You Only Look Once Version 3 (YOLOv3) were insufficient for weed detection in wheat because the recall never exceeded 0.58 in the testing dataset. The image classification neural networks including AlexNet, DenseNet, ResNet, and VGGNet were trained with small (5500 negative and 5500 positive images) or large training datasets (11 000 negative and 11 000 positive images) and three training image sizes (200 × 200, 300 × 300, and 400 × 400 pixels). For the small training dataset, increasing image sizes decreased the F1 scores of AlexNet and VGGNet but generally increased the F1 scores of DenseNet and ResNet. For the large training dataset, no obvious difference was detected between the training image sizes since all neural networks exhibited remarkable classification accuracies with high F1 scores (≥0.96). All image classification neural networks exhibited high F1 scores (≥0.99) when trained with the large training dataset and the training images of 200 × 200 pixels. CONCLUSION CenterNet, Faster R‐CNN, TridentNet, VFNet, and YOLOv3 were insufficient, while AlexNet, DenseNet, ResNet, and VGGNet trained with a large training dataset were highly effective for detection of broadleaf weed seedlings in wheat. © 2021 Society of Chemical Industry.
... Previous studies mostly focused on detecting weeds directly. 30,31,55 However, vegetable fields from different locations, climates, and management practices (e.g., cover crops, herbicide usage, and irrigation) may be infested with a diverse spectrum of weed species. Moreover, because of phenotypic plasticity, significant morphological variations exist between the weed ecotypes. ...
Article
Full-text available
BACKGROUND Machine vision‐based precision weed management is a promising solution to substantially reduce herbicide input and weed control cost. The objective of this research was to compare two different deep learning‐based approaches for detecting weeds in cabbage: (1) detecting weeds directly, and (2) detecting crops by generating the bounding boxes covering the crops and any green pixels outside the bounding boxes were deemed as weeds. RESULTS The precision, recall, F1‐score, mAP0.5, mAP0.5:0.95 of You Only Look Once (YOLO) v5 for detecting cabbage were 0.986, 0.979, 0.982, 0.995, and 0.851, respectively, while these metrics were 0.973, 0.985, 0.979, 0.993, and 0.906 for YOLOv8, respectively. However, none of these metrics exceeded 0.891 when detecting weeds. The reduced performances for directly detecting weeds could be attributed to the diverse weed species at varying densities and growth stages with different plant morphologies. A segmentation procedure demonstrated its effectiveness for extracting weeds outside the bounding boxes covering the crops, and thereby realizing effective indirect weed detection. CONCLUSION The indirect weed detection approach demands less manpower as the need for constructing a large training dataset containing a variety of weed species is unnecessary. However, in a certain case, weeds are likely to remain undetected due to their growth in close proximity with crops and being situated within the predicted bounding boxes that encompass the crops. The models generated in this research can be used in conjunction with the machine vision subsystem of a smart sprayer or mechanical weeder. © 2024 Society of Chemical Industry.
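A box-based variant of the same indirect idea, in which green pixels inside the predicted crop bounding boxes are ignored and any green pixels outside them are flagged as weeds, is sketched below. The image path, detections, and HSV colour ranges are hypothetical placeholders rather than actual YOLOv5/YOLOv8 output.

import cv2
import numpy as np

image = cv2.imread("cabbage_row.jpg")            # placeholder image
# Hypothetical crop detections as (x_min, y_min, x_max, y_max) boxes,
# standing in for the detector's output.
crop_boxes = [(100, 80, 320, 300), (420, 90, 640, 310)]

# Simple green-pixel segmentation in HSV space (illustrative ranges).
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
green = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255)) > 0

# Mask out green pixels that fall inside any crop bounding box.
inside_crop = np.zeros(green.shape, dtype=bool)
for x1, y1, x2, y2 in crop_boxes:
    inside_crop[y1:y2, x1:x2] = True

weed_pixels = green & ~inside_crop
print("weed pixel count:", int(weed_pixels.sum()))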
... In addition, the large extent of plantings and mixed crop-weed scenes poses computational-time problems for image processing methods. The recent Deep Learning (DL) technique has proven to overcome the limitations of the classical image processing model [5]. ...
Article
Full-text available
One of the most damaging obstacles to crop production is weeds, which pose a serious risk to agricultural output. Due to the homogeneous morphological properties of weeds, farmers are unable to identify and classify weed leaves. This study can aid farmers in identifying, categorizing, and quantifying the true extent of crop yield reduction. Computer vision is a sophisticated technique widely used for weed and crop leaf identification and detection in the agricultural field. This work used three different datasets, namely ‘Deep Weed’, the ‘Crop Weed Field Image Dataset’ (CWFID), and the ‘Multi-view Image Dataset for Weed Detection in Wheat Field’ (MMIDDWF), and collected 5090 images for training the model. This work uses segmentation techniques for vegetation and semantic segmentation for weed object detection. Furthermore, the masked image is distributed as small tiles; often the patches are square tiles, as in 25 × 25 (px), 50 × 50 (px), and 100 × 100 (px). This work proposes a Deep Learning segmentation model named ‘Pyramid Scene Parsing Network-USegNet’ (PSPUSegNet) for data classification and compares its accuracy with existing segmentation models such as UNet, SegNet, and USegNet. The suggested model, PSPUSegNet, obtained 96.98% precision, 97.98% recall, and 98.96% data accuracy on the Deep Weed dataset. The proposed model is self-supervised in terms of its deep learning mechanism. Our findings demonstrate that the Deep Weed dataset achieved greater data accuracy compared to the CWFID and MMIDDWF datasets. The findings support the effectiveness of the suggested approach for weed species recognition.
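The tiling step mentioned above (distributing the masked image into small square patches such as 25 × 25, 50 × 50, or 100 × 100 px) is straightforward to sketch. The image path and tile size below are placeholders, not the paper's pipeline.

import cv2

def tile_image(path, tile=50):
    """Split an image into non-overlapping square tiles (partial edge
    tiles are dropped). Tile size in pixels is illustrative."""
    img = cv2.imread(path)
    h, w = img.shape[:2]
    tiles = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            tiles.append(img[y:y + tile, x:x + tile])
    return tiles

patches = tile_image("masked_field.jpg", tile=50)  # placeholder path
print(f"{len(patches)} patches of 50x50 px")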
... Carolina geranium (Geranium carolinianum) is a broadleaf weed that is widespread in Florida strawberries. The training data comprised 705 sample images, from which 88 positive images and 109 negative images were obtained [44]. Goosegrass is a weed species that grows among tomato and strawberry plants. ...
Article
Full-text available
Artificial Neural Networks use high-performance computing and big data technology, creating new opportunities for science in agriculture. The purpose of this article is to analyze the use of artificial neural networks for (a) plant diseases, based on plant leaves, (b) plant pests, (c) growth or quality, and (d) agricultural products. The method used is a literature study of the research that has been done. The keywords used in the search for references include ANN, plant, diseases, pests, growth or quality, and agricultural products. The publishers for the references in this article are ScienceDirect and IEEE, and the years of publication of the references are restricted from 2015 to 2022. Based on the results of the literature study, it was concluded that Artificial Neural Networks' deep learning models are accurate for detecting and classifying leaf diseases and pests, detecting growth, and for applications to agricultural plant products.
... Deep learning, a subset of machine learning technology, has emerged as successful applications in various scientific domains, including computer vision [22][23][24]. Deep convolutional neural networks (DCNNs) demonstrated extraordinary capability to extract complex features from images [25] and are utilized as a tool to detect weeds and perform precision herbicide spraying [26][27][28][29][30][31]. For example, See & Spray ® , an autonomous smart sprayer utilizing DCNNs for weed detection, has been developed for precision herbicide application in agronomic crops [32]. ...
Article
Full-text available
Precision spraying can significantly reduce herbicide input for turf weed management. A major challenge for autonomous precision herbicide spraying is to accurately and reliably detect weeds growing in turf. Deep convolutional neural networks (DCNNs), an important artificial intelligent tool, demonstrated extraordinary capability to learn complex features from images. The feasibility of using DCNNs, including various image classification or object detection neural networks, has been investigated to detect weeds growing in turf. Due to the high level of performance of weed detection, DCNNs are suitable for the ground-based detection and discrimination of weeds growing in turf. However, reliable weed detection may be subject to the influence of weeds (e.g., biotypes, species, densities, and growth stages) and turf factors (e.g., turf quality, mowing height, and dormancy vs. non-dormancy). The present review article summarizes the previous research findings using DCNNs as the machine vision decision system of smart sprayers for precision herbicide spraying, with the aim of providing insights into future research.
... Controlling invasive species in rangelands may only require whole-image classification (Olsen et al. 2019) if the control treatment is coarse (e.g., spot spraying), whereas the application of laser weed control treatments requires the knowledge of plant morphology to enable targeting of growing points and other critical plant parts (Champ et al. 2020). Exploring how different architectures affect performance, Sharpe et al. (2019a) found that DetectNet detected all Carolina geranium (Geranium carolinianum L.) growing among plasticulture strawberry plants, compared to just 21% for the image classification architectures tested. In contrast, Zhuang et al. (2022) found that image classification algorithms outperformed object detection algorithms for broadleaf weed seedlings in wheat. ...
Article
Full-text available
The past 50 years of advances in weed recognition technologies have poised site-specific weed control (SSWC) on the cusp of requisite performance for large-scale production systems. The technology offers improved management of diverse weed morphology over highly variable background environments. SSWC enables the use of non-selective weed control options, such as lasers and electrical weeding, as feasible in-crop selective alternatives to herbicides by targeting individual weeds. This review looks at the progress made over this half-century of research and its implications for future weed recognition and control efforts; summarizing advances in computer vision techniques and the most recent deep convolutional neural network (CNN) approaches to weed recognition. The first use of CNNs for plant identification in 2015 began an era of rapid improvement in algorithm performance on larger and more diverse datasets. These performance gains and subsequent research have shown that the variability of large-scale cropping systems is best managed by deep learning for in-crop weed recognition. The benefits of deep learning and improved accessibility to open-source software and hardware tools has been evident in the adoption of these tools by weed researchers and the increased popularity of CNN-based weed recognition research. The field of machine learning holds substantial promise for weed control, especially the implementation of truly integrated weed management strategies. While previous approaches sought to reduce environmental variability or manage it with advanced algorithms, research in deep learning architectures suggests that large-scale, multi-modal approaches are the future for weed recognition.
... Previous studies suggest that deep learning-based weed detection methods generally outperform other weed detection techniques (Fennimore et al., 2016; Grinblat et al., 2016; Peteinatos et al., 2014; Sharpe et al., 2018, 2019; Teimouri et al., 2018; Wang et al., 2019). The VD or TD contained a total of 450 positive images (150 images for each of the three stress conditions: severe stress, moderate stress, and no stress) and 450 negative images containing bahiagrass without weeds (the number of negative images for each water stress condition was equal to the number of positive images). ...
Article
Machine vision-based weed detection relies on features such as plant colour, leaf texture, shape, and patterns. Drought stress in plants can alter leaf colour and morphological features, which may in turn affect the reliability of machine vision-based weed detection. The objective of this research was to evaluate the feasibility of using deep convolutional neural networks for the detection of Florida pusley (Richardia scabra L.) growing in drought-stressed and unstressed bahiagrass (Paspalum notatum Flugge). The object detection neural networks You Only Look Once (YOLO)v3, faster region-based convolutional network (Faster R-CNN), and variable filter net (VFNet) failed to effectively detect Florida pusley growing in drought-stressed or unstressed bahiagrass, with F1 scores ≤0.54 in the testing dataset. Nevertheless, the use of the image classification neural networks AlexNet, GoogLeNet, and Visual Geometry Group-Network (VGGNet) was highly effective and achieved high (≥0.97) F1 scores and recall values (≥0.98) in detecting images containing Florida pusley growing in drought-stressed or unstressed bahiagrass. Overall, these results demonstrated the effectiveness of using an image classification convolutional neural network for detecting Florida pusley in drought-stressed or unstressed bahiagrass. These findings illustrate the broad applicability of these neural networks for weed detection.
... In the event of irregularly shaped plants or patches of plants, multiple bounding boxes were drawn to encompass the entirety of the plant features. Labeling partial sections of irregularly shaped plants has been shown to be beneficial to object detectors (Sharpe et al. 2018, 2020a; Zhuang et al. 2022), so these irregular features were not ignored. In any given image, both plant species could be present, so they were labeled accordingly. ...
Article
Full-text available
Site-specific weed management using open-sourced object detection algorithms could accurately detect weeds in cropping systems. We investigated the use of object detection algorithms to detect Palmer amaranth (Amaranthus palmeri S. Watson) in soybean [Glycine max (L.) Merr.]. The objectives were to 1) develop an annotated image database of A. palmeri and soybean to fine-tune object detection algorithms, 2) compare the effectiveness of multiple open-sourced algorithms in detecting A. palmeri, and 3) evaluate the relationship between A. palmeri growth features and A. palmeri detection ability. Soybean field sites were established in Manhattan, KS, and Gypsum, KS, with natural populations of A. palmeri. A total of 1108 and 392 images were taken aerially and at ground level, respectively, between May 27 and July 27, 2021. After image annotation, a total of 4492 images were selected. Annotated images were used to fine-tune the open-source Faster Region-based Convolutional Neural Network (Faster R-CNN) and Single Shot Detector (SSD) algorithms using a ResNet backbone, as well as the You Only Look Once (YOLO) series algorithms. Results demonstrated that YOLO version 5 achieved the highest mean average precision score of 0.77. For both A. palmeri and soybean detections within this algorithm, the highest F1 score was 0.72 when using a confidence threshold of 0.298. A lower confidence threshold of 0.15 increased the likelihood of species detection, but also increased the likelihood of false positive detections. The trained YOLOv5 dataset was used to identify A. palmeri in a dataset paired with measured growth features. Linear regression models predicted that as A. palmeri densities increased and as A. palmeri height increased, the precision, recall, and F1 scores of the algorithms decreased. We conclude that open-sourced algorithms such as YOLOv5 show great potential for detecting A. palmeri in soybean cropping systems.
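The trade-off reported above between a 0.298 and a 0.15 confidence threshold comes down to how raw detections are filtered before scoring. A minimal sketch of that filtering step follows; the detection list is a hypothetical stand-in for detector output, not data from the study.

# Hypothetical raw detections: (class_name, confidence, box).
detections = [
    ("palmer_amaranth", 0.91, (120, 60, 220, 180)),
    ("soybean",         0.42, (300, 40, 410, 150)),
    ("palmer_amaranth", 0.18, (500, 200, 560, 260)),
]

def filter_by_confidence(dets, threshold):
    """Keep only detections at or above the confidence threshold."""
    return [d for d in dets if d[1] >= threshold]

# A higher threshold (0.298) drops the weak third detection; a lower
# threshold (0.15) keeps it, raising recall but risking false positives.
for t in (0.298, 0.15):
    kept = filter_by_confidence(detections, t)
    print(f"threshold={t}: {len(kept)} detections kept")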
... 13,22,23 Recent studies documented that DCNNs can effectively detect weeds in various cropping systems, such as corn (Zea mays L.), 24 soybean (Glycine max L. Merrill), 24,25 and plastic-mulched small fruiting and vegetable crops. 26,27 Sharpe et al. performed goosegrass (Eleusine indica L.) detection in strawberry (Trifolium fragiferum L.) and tomato (Solanum lycopersicum L.) with tiny YOLO-v3. 28 Sivakumar et al. reported that Faster R-CNN reliably identified late-season weeds in soybean fields. ...
Article
Full-text available
BACKGROUND Precision spraying of synthetic herbicides can reduce herbicide input. Previous research demonstrated the effectiveness of using image classification neural networks for detecting weeds growing in turfgrass, but did not attempt to discriminate weed species and locate the weeds on the input images. The objectives of this research were to: (i) investigate the feasibility of training deep learning models using grid cells (subimages) to detect the location of weeds on the image by identifying whether or not the grid cells contain weeds; and (ii) evaluate DenseNet, EfficientNetV2, ResNet, RegNet and VGGNet to detect and discriminate multiple weed species growing in turfgrass (multi‐classifier) and detect and discriminate weeds (regardless of weed species) and turfgrass (two‐classifier). RESULTS The VGGNet multi‐classifier exhibited an F1 score of 0.950 when used to detect common dandelion and achieved high F1 scores of ≥0.983 to detect and discriminate the subimages containing dallisgrass, purple nutsedge and white clover growing in bermudagrass turf. DenseNet, EfficientNetV2 and RegNet multi‐classifiers exhibited high F1 scores of ≥0.984 for detecting dallisgrass and purple nutsedge. Among the evaluated neural networks, EfficientNetV2 two‐classifier exhibited the highest F1 scores (≥0.981) for exclusively detecting and discriminating subimages containing weeds and turfgrass. CONCLUSION The proposed method can accurately identify the grid cells containing weeds and thus precisely locate the weeds on the input images. Overall, we conclude that the proposed method can be used in the machine vision subsystem of smart sprayers to locate weeds and make the decision for precision spraying herbicides onto the individual map cells. © 2022 Society of Chemical Industry.
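The grid-cell idea described above (classify each subimage, then map positive cells back to image locations) can be sketched as follows. The classifier call is a stub standing in for a trained network such as VGGNet or EfficientNetV2, and the grid size and demo image are illustrative placeholders.

import numpy as np

def classify_cell(cell):
    """Stub for a trained image classifier; here it simply flags cells
    with a high fraction of 'vegetation' pixels as containing weeds."""
    return float(cell.mean()) > 0.3   # placeholder decision rule

def locate_weeds(image, rows=4, cols=6):
    """Split the image into a rows x cols grid and return the indices
    of cells the classifier labels as weed-containing."""
    h, w = image.shape[:2]
    ch, cw = h // rows, w // cols
    hits = []
    for r in range(rows):
        for c in range(cols):
            cell = image[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            if classify_cell(cell):
                hits.append((r, c))
    return hits

# Synthetic single-channel "vegetation" image for demonstration.
demo = np.zeros((400, 600))
demo[100:200, 300:450] = 1.0     # a weed patch in the upper-middle area
print("weed cells:", locate_weeds(demo))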
... CNNs have had success in a broad range of agricultural applications [31] such as weed detection [28,48,61], disease detection [1,21,47], tracking animal behaviour [56,59], and detecting fruit ripeness stages [22,54]. YOLOv3 was trained to detect three types of unwanted vegetation in 1280 × 720 images of plastic-covered, raised vegetable beds in Florida [49]. ...
Article
This study looked at the development of six deep learning artificial neural network models for detecting ripeness stage in wild blueberries, along with developing models for yield estimation. The six networks used were YOLOv3, YOLOv3-SPP, YOLOv3-Tiny, YOLOv4, YOLOv4-Small and YOLOv4-Tiny. Both 3-class (green berries, red berries, blue berries) and 2-class (unripe berries, ripe berries) models were developed with YOLOv4 performing the best with mean average precisions of 79.79% and 88.12% respectively. This result was further supported by YOLOv4 achieving the highest F1 score of 0.82. YOLOv4-Tiny performed the best from a computational load perspective having a mean inference time of 7.8 ms and a mean memory usage of 1.63 GB for single 1280 × 736 pixel images. Only minor differences in the accuracy of the nonlinear regression yield prediction models were detected, with YOLOv4-Small performing the best with a mean absolute error of 24.1%. Despite this error, the results are encouraging, and this novel approach to yield estimation in wild blueberries will aid growers in making better, more localized, management decisions, improving yields and ultimately increasing profits by better understanding their fields ripening characteristics.
... The main objective is to create effective models for the identification and classification of overlapping weed and crop leaves, uneven weed patch densities, and varying sizes across multiple images, and for discriminating the similar morphological properties of weed and crop leaves [4]. In addition, the large extent of plantings and mixed crop-weed scenes poses computational-time problems for image processing methods. The recent Deep Learning technique has proven to overcome the limitations of the classical image processing model [5]. ...
Preprint
Full-text available
Weeds are unwanted plants that compete with target crops and absorb the required nutrients from the soil, sunlight, air, etc. Farmers struggle with weed identification and detection due to the homogeneous morphological features of weed and crop leaves. Computer vision is a sophisticated technique which is widely used for weed and crop leaf identification and detection in the agricultural field. This work used three different datasets, namely “Deep Weed”, the “Crop Weed Field Image Dataset” (CWFID), and the “Leaf Segmentation Challenge” (LSC), and collected 5090 images for training the model. In this work we used three grass species, Setaria verticillata, Digitaria sanguinalis, and Echinochloa crus-galli, and three broadleaf species, Cerastium vulgatum L., Chenopodium album, and Amaranthus retroflexus, in Vigna mungo crops. The first dataset includes 2000 images, the second 1720 images, and the third 1370 images. We have proposed a deep learning segmentation model, PSPNet-U-SegNet, for data classification and compared its accuracy with existing segmentation models such as U-Net, SegNet, and U-SegNet. The results show that the Deep Weed dataset achieved 98.97% data accuracy and 8.9 mIoU. Our findings demonstrate that adding more datasets to the actual field picture collection improves network performance while requiring less manual annotation work.
... In other cropping systems, Sharpe et al. [63] reported that the leaf-trained DetectNet showed a high F1 score (0.94) for detecting Carolina geranium growing in competition with strawberry (Fragaria × ananassa (Weston) Duchesne ex Rozier (pro sp.) (chiloensis × virginiana)). Recently, to detect broadleaf weed seedlings growing in wheat (Triticum aestivum L.), Zhuang et al. [64] reported that object detection neural networks, including CenterNet, Faster R-CNN, TridentNet, VFNet, and YOLOv3, were ineffective (F1 scores ≤ 0.68). ...
Article
Full-text available
Alfalfa (Medicago sativa L.) is used as a high-nutrient feed for animals. Weeds are a significant challenge that affects alfalfa production. Although weeds are unevenly distributed, herbicides are broadcast-applied in alfalfa fields. In this research, object detection convolutional neural networks, including Faster R-CNN, VarifocalNet (VFNet), and You Only Look Once Version 3 (YOLOv3), were used to indiscriminately detect all weed species (1-class) and discriminately detect between broadleaves and grasses (2-class). YOLOv3 outperformed other object detection networks in detecting grass weeds. The performances of using image classification networks (GoogLeNet and VGGNet) and object detection networks (Faster R-CNN and YOLOv3) for detecting broadleaves and grasses were compared. GoogLeNet and VGGNet (F1 scores ≥ 0.98) outperformed Faster R-CNN and YOLOv3 (F1 scores ≤ 0.92). Classifying and training various broadleaf and grass weeds did not improve the performance of the neural networks for weed detection. VGGNet was the most effective neural network (F1 scores ≥ 0.99) tested to detect broadleaf and grass weeds growing in alfalfa. Future research will integrate the VGGNet into the machine vision subsystem of smart sprayers for site-specific herbicide applications.
... The authors, based on the attained results, concluded that deep convolutional neural networks are effective for the weed detection problem. In another study, Sharpe et al. [60] evaluated three CNNs (DetectNet, VGGNet, and GoogLeNet) for the detection of weeds in strawberry fields. It was observed that the object detection-based DetectNet model produced the best results for image-based remote sensing of weeds. ...
Article
Full-text available
Selective agrochemical spraying is a highly intricate task in precision agriculture. It requires spraying equipment to distinguish between crops and weeds and to perform spray operations in real time accordingly. The study presented in this paper entails the development of two convolutional neural network (CNN)-based vision frameworks, i.e., Faster R-CNN and YOLOv5, for the detection and classification of tobacco crops and weeds in real time. An essential requirement for a CNN is to pre-train it well on a large dataset to distinguish crops from weeds; the trained network can later be utilized in real fields. We present an open access image dataset (TobSet) of tobacco plants and weeds acquired from local fields at different growth stages and varying lighting conditions. The TobSet comprises 7000 images of tobacco plants and 1000 images of weeds and bare soil, taken manually with digital cameras periodically over two months. Both vision frameworks are trained and then tested using this dataset. The Faster R-CNN-based vision framework manifested supremacy over the YOLOv5-based vision framework in terms of accuracy and robustness, whereas the YOLOv5-based vision framework demonstrated faster inference. Experimental evaluation of the system is performed in tobacco fields via a four-wheeled mobile robot sprayer controlled using a computer equipped with an NVIDIA GTX 1650 GPU. The results demonstrate that the Faster R-CNN and YOLOv5-based vision systems can analyze plants at 10 and 16 frames per second (fps) with classification accuracies of 98% and 94%, respectively. Moreover, the precise smart application of pesticides with the proposed system offered a 52% reduction in pesticide usage by spotting the targets only, i.e., tobacco plants.
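The fps figures quoted above depend on how inference time is measured. A minimal timing sketch is shown below; the inference call is a stub standing in for a Faster R-CNN or YOLOv5 forward pass, and the numbers will vary with hardware (the study used an NVIDIA GTX 1650).

import time

def run_inference(frame):
    """Stub standing in for a detector forward pass."""
    time.sleep(0.05)   # pretend the network takes ~50 ms per frame
    return []

frames = range(100)                      # placeholder frame source
start = time.perf_counter()
for frame in frames:
    run_inference(frame)
elapsed = time.perf_counter() - start
print(f"mean inference time: {elapsed / 100 * 1000:.1f} ms "
      f"({100 / elapsed:.1f} fps)")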
... Images are typically processed at resolutions from 224 × 224 [31][32][33] to 608 × 608 [34,35], but this can be increased to improve the clarity of visual features [36,33]. CNNs have been used in agriculture for detecting weeds [37][38][39][40][41], detecting plant diseases [42,43], monitoring plant growth and ripeness [44,36,45], and monitoring livestock [46,47]. ...
Article
Agricultural herbicide application efficiency can be improved using smart sprayers which provide site-specific, rather than broadcast, applications of agrochemicals. The YOLOv3-Tiny convolutional neural network (CNN) was trained to detect two weeds, hair fescue and sheep sorrel, in images captured from wild blueberry fields throughout Nova Scotia, Canada. An evaluation was performed in three commercial wild blueberry fields in Nova Scotia to examine the effects of camera selection and target distance on detection accuracy. A Canon T6 DSLR camera, an LG G6 smartphone, and a Logitech c920 webcam were used to capture RGB images at varying distances from target weeds. Mean F1-scores for each combination of camera and image height were analyzed in a 3 × 3 factorial arrangement for hair fescue and a 3 × 2 factorial arrangement for sheep sorrel. Images captured from 0.98 m with the LG G6 and Canon T6 produced F1-scores of up to 0.97 for detection of at least one hair fescue tuft. Images captured with the LG G6 and Canon T6 DSLR from 0.57 m achieved F1-scores of 0.94 and 0.93, respectively, for detection of at least one sheep sorrel plant per image. Sheep sorrel was undetectable in images from the Logitech c920 under 19 of 27 parameter combinations. Future work will involve using the CNN to control herbicide applications with a real-time smart sprayer. Additionally, the CNN will be used in a web-based application to detect target weeds and provide site-specific information to aid management decisions. Using a CNN to detect weeds will create improvements in management techniques, resulting in cost-savings and greater sustainability for the wild blueberry industry.
... CNNs provide an opportunity for high-speed, automatic inferencing of field conditions using images from RGB cameras [18], [19]. In the wild blueberry cropping system, Schumann et al. [20] trained four CNNs to detect and classify fruit ripeness stages and provide yield prediction. CNNs have been used to detect weeds in other cropping systems such as potatoes [21], strawberries [22], and various Florida vegetables [23]. Hennessy et al. [24] trained six CNNs for detecting hair fescue and sheep sorrel using images from 58 wild blueberry fields in Nova Scotia, Canada. ...
Conference Paper
Full-text available
Broadcast applications of liquid herbicides are used to manage weeds such as hair fescue (Festuca filiformis Pourr.) and sheep sorrel (Rumex acetosella L.) in wild blueberry (Vaccinium angustifolium Ait.) fields. Weeds typically grow in patches of the fields, consequently, herbicide is wasted on areas of the field without weed cover. Application efficiency can be optimized by employing a smart sprayer which uses machine vision to identify areas of the field containing the target weeds in real-time. The YOLOv3-Tiny convolutional neural network (CNN) was trained to detect hair fescue and sheep sorrel using 1280x720 resolution images captured in 58 wild blueberry fields throughout Nova Scotia, Canada. The trained CNN detected at least one target weed per validation image with F1-scores of 0.97 for hair fescue and 0.90 for sheep sorrel at a network resolution of 1280x736. An evaluation was performed at a commercial wild blueberry field in Debert, Nova Scotia, to examine the effects of camera selection and target distance on detection accuracy. A Logitech c920 webcam, an LG G6 smartphone, and a Canon T6 DSLR camera were used to capture colour images at distances of 0.57 m, 0.98 m, and 1.29 m from target weeds. Test plots were selected at randomly spaced intervals along an inverted "W" pattern in the field. Mean F1-scores for each combination of camera and image height were analyzed in a 3x3 factorial arrangement for hair fescue and a 3x2 factorial arrangement for sheep sorrel. The peak F1-score for detection of at least one hair fescue plant, 0.97, was achieved with images captured with the LG G6 smartphone at a height of 0.98 m. Images captured with the LG G6 smartphone and Canon T6 DSLR camera at 0.98 m each achieved an F1-score of 0.82 for detection of at least one sheep sorrel plant per image. Sheep sorrel was only detected by the CNN in images from the Logitech c920 camera using 3 of 9 parameter combinations in the analysis. Future work will examine images from two additional fields tested under similar conditions. Additionally, the CNN will be used to control herbicide applications after integration with a real-time smart sprayer. A web-based application will be developed to detect target weeds using the CNN and provide wild blueberry growers with site-specific information to aid management decisions. Using a CNN to detect weeds will improve traditional management techniques and create cost-savings and greater sustainability for wild blueberry growers.
... Images are typically processed at resolutions from 224x224 (Redmon, 2016; Sandler et al., 2018; Tan & Le, 2019) to 608x608 (Redmon & Farhadi, 2018), but this can be increased to improve clarity of visual features (Tan & Le, 2019). CNNs have been used in agriculture for detecting weeds (Blue River Technologies, 2018; Sharpe et al., 2019; Yu, Schumann, et al., 2019; Yu, Sharpe, et al., 2019a, 2019b), detecting plant diseases (Fuentes et al., 2017; Venkataramanan et al., 2019), monitoring plant growth and ripeness (Tian et al., 2019), and monitoring livestock (Yang et al., 2018). Chapter 2 trained six CNNs using the Darknet framework (Redmon et al., 2020) to identify hair fescue and sheep sorrel in images of wild blueberry fields. ...
... Image processing using CNNs has been used in various aspects of agriculture since 2015 [20]. Innovative uses of this technology in agriculture have included livestock monitoring [21]-[23], plant disease detection [24]-[26], wild blueberry ripeness detection [27], and weed detection for strawberries [28], Florida vegetables [29], turfgrasses [30], [31], and ryegrass [32]. Reference [33] was the first to use CNNs for detecting weeds in wild blueberry fields. ...
... There has been limited research evaluating the feasibility of using DCNN as a method of real-time weed detection in the machine vision subsystem of smart sprayers. Sharpe et al. (2019) reported that DetectNet, an object-detection DCNN, was effective at detecting Carolina geranium (Geranium carolinianum L.) in plastic-mulched strawberry (Fragaria × ananassa D.). Recently, we reported that DCNN achieved excellent performance in the detection of multiple broadleaf weed species in bermudagrass [Cynodon dactylon (L.) Pers.], bahiagrass (Paspalum notatum Flueggé), and perennial ryegrass (Lolium perenne L. ssp. ...
Article
Full-text available
Spot-spraying POST herbicides is an effective approach to reduce herbicide input and weed control cost. Machine vision detection of grass or grass-like weeds in turfgrass systems is a challenging task due to the similarity in plant morphology. In this work, we explored the feasibility of using image classification with deep convolutional neural networks (DCNN), including AlexNet, GoogLeNet, and VGGNet, for detection of crabgrass species (Digitaria spp.), doveweed [Murdannia nudiflora (L.) Brenan], dallisgrass (Paspalum dilatatum Poir.), and tropical signalgrass [Urochloa distachya (L.) T.Q. Nguyen] in bermudagrass [Cynodon dactylon (L.) Pers.]. VGGNet generally outperformed AlexNet and GoogLeNet in detecting selected grassy weeds. For detection of P. dilatatum, VGGNet achieved high F1 scores (≥0.97) and recall values (≥0.99). A single VGGNet model exhibited high F1 scores (≥0.93) and recall values (1.00) that reliably detected Digitaria spp., M. nudiflora, P. dilatatum, and U. distachya. Low weed density reduced the recall values of AlexNet at detecting all weed species and GoogLeNet at detecting Digitaria spp. In comparison, VGGNet achieved excellent performances (overall accuracy = 1.00) at detecting all weed species in both high and low weed density scenarios. These results demonstrated the feasibility of using DCNN for detection of grass or grass-like weeds in turfgrass systems.
... In previous research, DetectNet trained to detect Carolina geranium (Geranium carolinianum L.) in plastic-mulched strawberry crops was successfully desensitized to black medic (Medicago lupulina L.) leaves in a similar circumstance (Sharpe et al., 2018). ...
Article
Full-text available
Precision herbicide application can substantially reduce herbicide input and weed control cost in turfgrass management systems. Intelligent spot-spraying systems predominantly rely on machine vision-based detectors for autonomous weed control. In this work, several deep convolutional neural networks (DCNN) were constructed for detection of dandelion (Taraxacum officinale Web.), ground ivy (Glechoma hederacea L.), and spotted spurge (Euphorbia maculata L.) growing in perennial ryegrass. When the networks were trained using a dataset containing a total of 15,486 negative images (perennial ryegrass with no target weeds) and 17,600 positive images (containing target weeds), VGGNet achieved high F1 scores (≥0.9278), with high recall values (≥0.9952), for detection of E. maculata, G. hederacea, and T. officinale growing in perennial ryegrass. The F1 scores of AlexNet ranged from 0.8437 to 0.9418 and were generally lower than those of VGGNet at detecting E. maculata, G. hederacea, and T. officinale. GoogLeNet was not an effective DCNN at detecting these weed species, mainly due to low precision values. DetectNet was an effective DCNN and achieved high F1 scores (≥0.9843) in the testing datasets for detection of T. officinale growing in perennial ryegrass. Moreover, VGGNet had the highest Matthews correlation coefficient (MCC) values, while GoogLeNet had the lowest MCC values. Overall, the approach of training DCNN, particularly VGGNet and DetectNet, presents a clear path toward developing a machine vision-based decision system in smart sprayers for precision weed control in perennial ryegrass.
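The abstract above compares networks by the Matthews correlation coefficient (MCC) in addition to F1. For readers unfamiliar with the metric, the sketch below shows the standard MCC calculation from a binary confusion matrix; the counts are hypothetical and not taken from the study.

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient for a binary confusion matrix.

    Ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect).
    Unlike F1, it uses all four cells, including true negatives.
    """
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical counts for images with/without a target weed:
print(round(matthews_corrcoef(tp=450, tn=430, fp=20, fn=40), 3))
```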
Article
Full-text available
An experiment was conducted in a private nursery to study the effect of fertilization and biofertilization methods on the vegetative and flowering growth of two varieties of geranium plants. Three factors were examined: the first was the geranium variety (white or red), the second was the concentration of Biohealth biofertilizer (0.0, 5.00, and 10.00 mg L-1), and the third was the type of irrigation water (tap, distilled, well, or drainage water). A Completely Randomized Design (C.R.D.) was used with three replicates per treatment; each treatment contained three experimental units of 3 pots each. Least Significant Difference (L.S.D.) at the 0.05 level was applied in the data analysis. The results showed no significant differences between the two varieties in root number or plant height, while the varieties differed significantly in all other characteristics. The biofertilizer concentration treatment was superior in all of the studied traits except root length and plant height. Irrigation with distilled water produced a significant improvement in all characteristics except root length, and was superior to irrigation with well water.
Article
Full-text available
Weeds are a significant threat to agricultural productivity and the environment. The increasing demand for sustainable weed control practices has driven innovative developments in alternative weed control technologies aimed at reducing the reliance on herbicides. The barrier to adoption of these technologies for selective in-crop use is the availability of suitably effective weed recognition. With the great success of deep learning in various vision tasks, many promising image-based weed detection algorithms have been developed. This paper reviews recent developments of deep learning techniques in the field of image-based weed detection. The review begins with an introduction to the fundamentals of deep learning related to weed detection. Next, recent advancements in deep weed detection are reviewed with the discussion of the research materials including public weed datasets. Finally, the challenges of developing practically deployable weed detection methods are summarized, together with the discussions of the opportunities for future research. We hope that this review will provide a timely survey of the field and attract more researchers to address this inter-disciplinary research problem.
Chapter
The yield gaps between conventional and progressive farms have widened considerably worldwide. Precision farming, the accurate use of site-specific agricultural inputs through decision support mechanisms, has the ability to narrow the potential crop yield gap. This chapter outlines the development of precision agriculture technologies (PATs) and their current adoption status based on published literature from the past two decades. The focus of this chapter is mainly on the transformation of agriculture from mechanized to precision agriculture (PA). The role of PA, variability management zones, resource variability, sensor innovation in agricultural engineering, and the development of precision agricultural machinery to maximize farm output are discussed. Moreover, recent developments in remote sensing and GIS applications in agriculture, the use of unmanned aerial vehicles (UAVs) for agriculture, global application of PA, adoption tendencies for PATs, and the potential for adopting precision agricultural technologies in developing countries (i.e., Pakistani farm settings) to modernize conventional farming and maximize farm yield are also discussed. A number of advanced countries, including the United States and countries in Europe, adopted PATs in the 1980s and 1990s. In developing countries, however, farmers remain reluctant to use these technologies, either because of high technology costs or low readiness to adopt new technologies. In this chapter, we examine these problems and recommend possible solutions for developing countries to support wider adoption of PATs. Researchers and farmers can follow these recommendations to adopt PATs and maximize farm income, which will contribute to global food security.
Article
Full-text available
There are different types of berries; one of the best known, most nutritious, and most important is the blueberry. Modern processing of these fruits guarantees high quality, better marketing of the product, and an estimate of its useful life. The aim of this review was to provide scientific information on the physicochemical characteristics of different berries assessed using hyperspectral imaging and digital imaging technology. These technologies have shown satisfactory results in various technological and research fields. The findings show that hyperspectral imaging and digital imaging have attracted great interest in recent years because they are non-destructive technologies that allow good predictions in the detection of anomalies in berries, making them robust, reliable tools with high potential for industrial use in evaluating berry quality and offering products better suited to the consumer. With the advancement of technology, new possibilities for future studies arise to obtain models that are faster to process and have greater statistical precision.
Article
Full-text available
Based on machine vision technology, specifically convolutional neural networks, a solution was proposed to recognize ripe peach fruits and to identify damaged fruits. The goal is to obtain fruits with a quality level suitable for commercialization. To achieve this, images of peaches were acquired in an uncontrolled environment. The digital images were cropped to the area of interest. Three datasets were configured: the first of ripe and unripe peaches; the second also of ripe and unripe peaches but focused on a textural area; and the third of healthy and damaged peaches. A convolutional neural network was applied, programmed in Python using the Keras and TensorFlow libraries. During testing, a precision of 95.31% was obtained when distinguishing ripe from unripe fruit, while classifying healthy and damaged peaches achieved 92.18% precision. Finally, classifying the three categories (damaged, unripe, and ripe) achieved 83.33% precision. These results indicate that peach fruit classification can be performed with artificial intelligence embedded in a physical device.
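The abstract above states that the network was implemented in Python with Keras and TensorFlow, but the architecture itself is not reproduced here. The following is only a minimal illustrative sketch of a small Keras CNN for a two-class task such as ripe versus unripe; the layer sizes, input shape, and training data are assumptions rather than the study's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_small_cnn(input_shape=(128, 128, 3), num_classes=2) -> tf.keras.Model:
    """Small illustrative CNN; not the architecture used in the cited study."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_small_cnn()
model.summary()
# Training would then use model.fit(train_images, train_labels, epochs=..., batch_size=...)
```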
Article
Full-text available
The impacts of training image sizes and optimizers on deep convolutional neural networks for weed detection in alfalfa have not been well explored. In this research, AlexNet, GoogLeNet, VGGNet, and ResNet were trained with various sizes of input images, including 200 × 200, 400 × 400, 600 × 600, and 800 × 800 pixels, and deep learning optimizers including Adagrad, AdaDelta, Adaptive Moment Estimation (Adam) and Stochastic Gradient Descent (SGD). Increasing input image sizes reduced the classification accuracy of all neural networks. The neural networks trained with the input images of 200 × 200 pixels resulted in better classification accuracy than the other image sizes investigated here. The optimizers affected the performance of the neural networks for weed detection. AlexNet and GoogLeNet trained with AdaDelta and SGD outperformed Adagrad and Adam; VGGNet trained with AdaDelta outperformed Adagrad, Adam, and SGD; and ResNet trained with AdaDelta and Adagrad outperformed the Adam and SGD. When the neural networks were trained with the best-performed input image size (200 × 200 pixels) and the deep learning optimizer, VGGNet was the most effective neural network with high precision and recall values (≥0.99) in the validation and testing datasets. At the same time, ResNet was the least effective neural network for classifying images containing weeds. However, the detection accuracy did not differ between broadleaf and grass weeds for the different neural networks studied here. The developed neural networks can be used for scouting weed infestations in alfalfa and further integrated into the machine vision subsystem of smart sprayers for site-specific weed control.
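The study above compares Adagrad, AdaDelta, Adam, and SGD. In a framework such as Keras, switching the training optimizer is a one-line change, as sketched below; the learning rates shown are common illustrative defaults, not the values used in the study.

```python
from tensorflow.keras import optimizers

# Candidate optimizers of the kind compared above; settings are illustrative.
optimizer_choices = {
    "sgd": optimizers.SGD(learning_rate=0.01, momentum=0.9),
    "adagrad": optimizers.Adagrad(learning_rate=0.01),
    "adadelta": optimizers.Adadelta(learning_rate=1.0),
    "adam": optimizers.Adam(learning_rate=0.001),
}

# A model built elsewhere would then be compiled with one of them, e.g.:
# model.compile(optimizer=optimizer_choices["adadelta"],
#               loss="sparse_categorical_crossentropy",
#               metrics=["accuracy"])
```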
Article
Full-text available
Conventional weed management approaches are inefficient and unsuitable for integration with smart agricultural machinery. Automatic detection and classification of weeds can play a vital role in weed management, contributing to better crop yields. The efficiency of intelligent spot-spraying systems relies on the accuracy of computer vision-based detectors for autonomous weed control. In the present study, the feasibility of deep learning-based techniques (AlexNet, GoogLeNet, Inception V3, Xception) was evaluated for weed identification from RGB images of a bell pepper field. The models were trained with different numbers of epochs (10, 20, 30) and batch sizes (16, 32), and hyperparameters were tuned to obtain optimal performance. The overall accuracy of the selected models varied from 94.5 to 97.7%. Among the models, Inception V3 exhibited superior performance with 30 epochs and a batch size of 16, achieving 97.7% accuracy, 98.5% precision, and 97.8% recall. For this Inception V3 model, the type I error was 1.4% and the type II error was 0.9%. The effectiveness of the deep learning models presents a clear path toward integrating them with image-based herbicide applicators for precise weed management.
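Architectures such as Inception V3 are commonly applied to tasks like this through transfer learning from ImageNet weights. The sketch below shows one such setup with tf.keras; whether the cited study froze the backbone or fine-tuned it end to end is not stated here, so the head, input size, and training settings are assumptions for illustration only.

```python
import tensorflow as tf

def build_inception_classifier(num_classes: int = 2) -> tf.keras.Model:
    """Illustrative transfer-learning setup; hyperparameters are assumptions."""
    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, input_shape=(299, 299, 3))
    base.trainable = False  # freeze ImageNet features; fine-tuning is optional
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_inception_classifier()
# model.fit(train_ds, validation_data=val_ds, epochs=30)  # batch size (e.g., 16) set when building train_ds
```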
Article
Full-text available
Farms face various risks such as uncertainties in the natural growth process, obtaining adequate financing, volatile input and output prices, unpredictable changes in farm-related policy and regulations, and farmers' personal health problems. Accordingly, farmers have to make decisions to be prepared for such situations under risk or mitigate their impacts to maintain essential functions. Increasingly, a data-driven perspective is warranted where machine learning (ML) has become an essential tool for automatic extraction of useful information to support decision-making in farm management as well as risk management. ML's role in farm risk management (FRM) has recently increased with advances in technology and digitalization. This paper provides a literature review in the form of a systematic mapping study to identify the publications, trends, active research communities, and detailed reviews on the use of ML methods for FRM. Accordingly, nine research/mapping questions are designed to extract the required information. In total, we retrieved 1819 papers, of which 746 papers were selected based on the defined exclusion criteria for a detailed review. We categorized the studies based on the addressed risk types (e.g., production risk), assessments that addressed risk components (e.g., resilience), used ML types (e.g., supervised learning) and algorithms ranging from regression modeling to deep learning, addressed ML tasks (e.g., classification), data types (e.g., images), and farm types (e.g., crop-based farm). The results reveal that there is a significant increase in employing ML methods including deep learning and convolutional neural networks for FRM in recent years. The production risk and impact/damage assessment are the most frequently addressed risk type and assessment that addressed risk components in ML-FRM, respectively. In addition, research gaps and open problems are identified and accordingly insights and recommendations from risk management and machine learning perspectives are provided for future studies including the need for ML methods for different risk types (e.g., financial risk), assessments addressing different risk components (e.g., resilience assessment), and developing more advanced ML methods (e.g., reinforcement learning) for FRM.
Article
Full-text available
The Philippine government's effort to transcend agriculture as an industry requires precision agriculture. Remote- and proximal-sensing technologies help to identify what is needed, when, and where it is needed on the farm. This paper proposes the use of vision-based indicators captured using a low-altitude unmanned aerial vehicle (UAV) to estimate weed and pest damages. Coverage path planning is employed for automated data acquisition via UAV. The gathered data are processed in a ground workstation employing the proposed methods in estimating vegetation fraction, weed presence, and pest damages. The data processing includes techniques on sub-image level classification using a hybrid ResNet-SVM model and normalized triangular greenness index. Sub-image level classification for isolating crops from the rest of the image achieved an F1-score of 97.73% while pest damage detection was performed with an average accuracy of 86.37%. Also, the weed estimate achieved a true error of 5.72%.
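The pipeline above classifies sub-images and uses a normalized triangular greenness index to estimate vegetation fraction. The exact index formulation is not reproduced here; the sketch below instead uses a generic normalized excess-green index on RGB tiles purely to illustrate how a per-tile vegetation fraction can be estimated, and the threshold and tile size are assumptions.

```python
import numpy as np

def vegetation_fraction(rgb_tile: np.ndarray, threshold: float = 0.1) -> float:
    """Estimate the fraction of vegetation pixels in an RGB tile.

    Uses a normalized excess-green index (2G - R - B) / (R + G + B) as a
    simple stand-in for the greenness index used in the cited work.
    """
    tile = rgb_tile.astype(np.float32) / 255.0
    r, g, b = tile[..., 0], tile[..., 1], tile[..., 2]
    index = (2 * g - r - b) / (r + g + b + 1e-6)
    return float((index > threshold).mean())

# Example on a random tile (real use would iterate over sub-images of the UAV orthophoto):
tile = (np.random.rand(100, 100, 3) * 255).astype(np.uint8)
print(f"vegetation fraction: {vegetation_fraction(tile):.2f}")
```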
Article
Full-text available
The digital transformation of agriculture has evolved various aspects of management into artificial intelligent systems for the sake of making value from the ever-increasing data originated from numerous sources. A subset of artificial intelligence, namely machine learning, has a considerable potential to handle numerous challenges in the establishment of knowledge-based farming systems. The present study aims at shedding light on machine learning in agriculture by thoroughly reviewing the recent scholarly literature based on keywords' combinations of "machine learning" along with "crop management", "water management", "soil management", and "livestock manage-ment", and in accordance with PRISMA guidelines. Only journal papers were considered eligible that were published within 2018-2020. The results indicated that this topic pertains to different disciplines that favour convergence research at the international level. Furthermore, crop management was observed to be at the centre of attention. A plethora of machine learning algorithms were used, with those belonging to Artificial Neural Networks being more efficient. In addition, maize and wheat as well as cattle and sheep were the most investigated crops and animals, respectively. Finally , a variety of sensors, attached on satellites and unmanned ground and aerial vehicles, have been utilized as a means of getting reliable input data for the data analyses. It is anticipated that this study will constitute a beneficial guide to all stakeholders towards enhancing awareness of the potential advantages of using machine learning in agriculture and contributing to a more systematic research on this topic.
Conference Paper
Full-text available
Information about the presence of weeds in fields is important to decide on a weed control strategy. This is especially crucial in precision weed management, where the position of each plant is essential for conducting mechanical weed control or patch spraying. For detecting weeds, this study proposes a fully convolutional neural network, which detects weeds in images and classifies each one as either a monocot or dicot. The network has been trained on over 13 000 weed annotations in high-resolution RGB images from Danish wheat and rye fields. Due to occlusion in cereal fields, weeds can be partially hidden behind or touching the crops or other weeds, which the network handles. The network can detect weeds with an average precision (AP) of 0.76. The weed detection network has been evaluated on an Nvidia Titan X, on which it is able to process a 5 MPx image in 0.02 s, making the method suitable for real-time field operation.
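Average precision (AP), reported as 0.76 above, summarizes the precision-recall curve over ranked detections. Below is a compact, illustrative computation of AP from scored detections; the matching of detections to annotations (e.g., by overlap) is assumed to have been done already, and the numbers are made up.

```python
import numpy as np

def average_precision(scores: np.ndarray, is_true_positive: np.ndarray,
                      n_ground_truth: int) -> float:
    """AP computed by integrating precision over recall for ranked detections."""
    order = np.argsort(-scores)                 # sort detections by confidence
    tp = is_true_positive[order].astype(float)
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(1.0 - tp)
    recall = cum_tp / n_ground_truth
    precision = cum_tp / (cum_tp + cum_fp)
    # Integrate precision d(recall) with a simple rectangular rule.
    recall_steps = np.diff(np.concatenate(([0.0], recall)))
    return float(np.sum(precision * recall_steps))

# Hypothetical ranked detections: 4 correct, 2 incorrect, 5 annotated weeds in total.
scores = np.array([0.95, 0.90, 0.80, 0.70, 0.60, 0.40])
is_tp = np.array([1, 1, 0, 1, 1, 0])
print(round(average_precision(scores, is_tp, n_ground_truth=5), 3))
```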
Article
Full-text available
This study outlines a new method of automatically estimating weed species and growth stages (from cotyledon until eight leaves are visible) of in situ images covering 18 weed species or families. Images of weeds growing within a variety of crops were gathered across variable environmental conditions with regard to soil types, resolution and light settings. Then, 9649 of these images were used for training the computer, which automatically divided the weeds into nine growth classes. The performance of this proposed convolutional neural network approach was evaluated on a further set of 2516 images, which also varied in terms of crop, soil type, image resolution and light conditions. The overall performance of this approach achieved a maximum accuracy of 78% for identifying Polygonum spp. and a minimum accuracy of 46% for blackgrass. In addition, it achieved an average 70% accuracy rate in estimating the number of leaves and 96% accuracy when accepting a deviation of two leaves. These results show that this new method of using deep convolutional neural networks has a relatively high ability to estimate early growth stages across a wide variety of weed species.
Article
Full-text available
This paper presents a method for automating weed detection in colour images despite heavy leaf occlusion. A fully convolutional neural network is used to detect the weeds. The network is trained and validated on a total of more than 17,000 annotations of weeds in images from winter wheat fields, which have been collected using a camera mounted on an all-terrain vehicle. Hereby, the network is able to automatically detect single weed instances in cereal fields despite heavy leaf occlusion.
Article
Full-text available
Specialty crops, like flowers, herbs, and vegetables, generally do not have an adequate spectrum of herbicide chemistries to control weeds and have been dependent on hand weeding to achieve commercially acceptable weed control. However, labor shortages have led to higher costs for hand weeding. There is a need to develop labor-saving technologies for weed control in specialty crops if production costs are to be contained. Machine vision technology, together with data processors, have been developed to enable commercial machines to recognize crop row patterns and control automated devices that perform tasks such as removal of intrarow weeds, as well as to thin crops to desired stands. The commercial machine vision systems depend upon a size difference between the crops and weeds and/or the regular crop row pattern to enable the system to recognize crop plants and control surrounding weeds. However, where weeds are large or the weed population is very dense, then current machine vision systems cannot effectively differentiate weeds from crops. Commercially available automated weeders and thinners today depend upon cultivators or directed sprayers to control weeds. Weed control actuators on future models may use abrasion with sand blown in an air stream or heating with flaming devices to kill weeds. Future weed control strategies will likely require adaptation of the crops to automated weed removal equipment. One example would be changes in crop row patterns and spacing to facilitate cultivation in two directions. Chemical company consolidation continues to reduce the number of companies searching for new herbicides; increasing costs to develop new herbicides and price competition from existing products suggest that the downward trend in new herbicide development will continue. In contrast, automated weed removal equipment continues to improve and become more effective. Available at: http://www.wssajournals.org/doi/10.1614/WT-D-16-00070.1
Article
Full-text available
In the last few years, deep learning has led to very good performance on a variety of problems, such as object recognition, speech recognition and natural language processing. Among different types of deep neural networks, convolutional neural networks have been most extensively studied. Due to the lack of training data and computing power in the early days, it was hard to train a large, high-capacity convolutional neural network without overfitting. Recently, with the rapid growth of data size and the increasing power of graphics processing units, many researchers have improved the convolutional neural networks and achieved state-of-the-art results on various tasks. In this paper, we provide a broad survey of the recent advances in convolutional neural networks. Besides, we also introduce some applications of convolutional neural networks in computer vision.
Article
Full-text available
We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of bounding box priors over different aspect ratios and scales per feature map location. At prediction time, the network generates confidences that each prior corresponds to objects of interest and produces adjustments to the prior to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. Our SSD model is simple relative to methods that require object proposals, such as R-CNN and MultiBox, because it completely discards the proposal generation step and encapsulates all the computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the ILSVRC DET and PASCAL VOC datasets confirm that SSD has comparable performance with methods that utilize an additional object proposal step and yet is 100-1000x faster. Compared to other single-stage methods, SSD has similar or better performance, while providing a unified framework for both training and inference.
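SSD predicts adjustments relative to a fixed set of prior (default) boxes tiled over each feature map at several aspect ratios and scales. The sketch below generates such priors for one feature map; the scales and aspect ratios are illustrative and not the exact values from the paper.

```python
import itertools
import numpy as np

def make_prior_boxes(feature_size: int, scale: float,
                     aspect_ratios=(1.0, 2.0, 0.5)) -> np.ndarray:
    """Generate SSD-style prior boxes (cx, cy, w, h) in normalized coordinates
    for one square feature map. Illustrative only."""
    boxes = []
    for i, j in itertools.product(range(feature_size), repeat=2):
        cx = (j + 0.5) / feature_size   # box centre at the cell centre
        cy = (i + 0.5) / feature_size
        for ar in aspect_ratios:
            w = scale * np.sqrt(ar)
            h = scale / np.sqrt(ar)
            boxes.append((cx, cy, w, h))
    return np.array(boxes)

# e.g., an 8x8 feature map with priors sized at 20% of the image:
priors = make_prior_boxes(feature_size=8, scale=0.2)
print(priors.shape)  # (8 * 8 * 3, 4) -> 192 priors
```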
Article
Full-text available
Hard-seeded, broadleaf, winter annual weeds in strawberry plasticulture production in Florida emerge in the crop holes in the plastic mulch and reduce berry yield and quality. Clopyralid is registered for POST control of broadleaf weeds, but herbicide damage has been observed in commercial fields, and preliminary observations suggest that effects vary with time of application. To address this issue, an experiment was conducted in 2012 to 2013 and 2013 to 2014 to evaluate clopyralid rate (0, 140, 280, and 560 g ae ha-1) and application time on strawberry vegetative and reproductive growth. Clopyralid applications at 280 and 560 g ae ha-1 on January 2 and 16, 2013 (yr 1) reduced leaf number per plant by 33 to 44% and increased the number of deformed leaves per plant compared with the nontreated control. This pattern was not observed in yr 2. In yr 1 and 2, two times the label rate of clopyralid (560 g ae ha-1) tended to reduce the total number of floral buds compared with the nontreated control by 12 to 17%. None of the herbicide rates or application times reduced the number of flowers per plant, marketable berries per plant, yield over time, or total yield. We conclude that clopyralid applications at the rates and application times tested in this study may cause leaf damage and may reduce leaf number in some situations but do not affect yield. Nomenclature: Clopyralid; strawberry, Fragaria × ananassa Duchesne.
Article
Full-text available
Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs by CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU (approx 2 ms per image). By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments. Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia.
Article
Full-text available
We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10–100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.
Article
Full-text available
We describe the approach that won the preliminary phase of the German traffic sign recognition benchmark with a better-than-human recognition rate of 98.98%. We obtain an even better recognition rate of 99.15% by further training the nets. Our fast, fully parameterizable GPU implementation of a Convolutional Neural Network does not require careful design of pre-wired feature extractors, which are rather learned in a supervised way. A CNN/MLP committee further boosts recognition performance.
Article
Full-text available
A machine vision system to detect and locate tomato seedlings and weed plants in a commercial agricultural environment was developed and tested. Images acquired in agricultural tomato fields under natural illumination were studied extensively, and an environmentally adaptive image segmentation algorithm was developed to improve machine recognition of plants under these conditions. The system was able to identify the majority of non-occluded target plant cotyledons, and to locate plant centers even when the plant was partially occluded. Of all the individual target crop plants 65% to 78% were correctly identified and less than 5% of the
Article
Full-text available
Multispectral images of leaf reflectance in the visible and near infrared region from 384 to 810 nm were used to establish the feasibility of developing a site-specific classifier to distinguish lettuce plants from weeds in California direct-seeded lettuce fields. An average crop vs. weed classification accuracy of 90.3% was obtained in a study of over 7,000 individual spectra representing 150 plants. The classifier utilized reflectance values from a small spatial area (3 mm diameter) of the leaf in order to allow the method to be robust to occlusion and to eliminate the need to identify leaf boundaries for shape-based machine vision recognition. Reflectance spectra were collected in the field using equipment suitable for real-time operation as a weed sensor in an autonomous system for automated weed control. Nomenclature: Lettuce, Lactuca sativa L. ‘Capitata’ and ‘Crispa’
Article
Full-text available
For site-specific application of herbicides, automatic detection and evaluation of weeds is desirable. Since reflectance of crop, weeds and soil differs in the visual and near infrared wavelengths, there is potential for using reflection measurements at different wavelengths to distinguish between them. Reflectance spectra of crop and weed canopies were used to evaluate the possibilities of weed detection with reflection measurements in laboratory circumstances. Sugarbeet and maize and 7 weed species were included in the measurements. Classification into crop and weeds was possible in laboratory tests, using a limited number of wavelength band ratios. Crop and weed spectra could be separated with more than 97% correct classification. Field measurements of crop and weed reflection were conducted for testing spectral weed detection. Canopy reflection was measured with a line spectrograph in the wavelength range from 480 to 820 nm (visual to near infrared) with ambient light. The discriminant model uses a limited number of narrow wavelength bands. Over 90% of crop and weed spectra can be identified correctly, when the discriminant model is specific to the prevailing light conditions.
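The study above classifies crop versus weed spectra with a discriminant model built on a few wavelength band ratios. A minimal, generic version of that idea is sketched below using linear discriminant analysis from scikit-learn; the band indices and the random data are placeholders, not the wavelengths or spectra used in the cited work.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def band_ratio_features(spectra: np.ndarray, band_pairs) -> np.ndarray:
    """Build ratio features R[a]/R[b] from reflectance spectra (n_samples, n_bands)."""
    return np.column_stack([spectra[:, a] / (spectra[:, b] + 1e-6) for a, b in band_pairs])

# Placeholder data: 100 spectra with 200 bands, labels 0 = crop, 1 = weed.
rng = np.random.default_rng(0)
spectra = rng.random((100, 200)) + 0.1
labels = rng.integers(0, 2, size=100)

# Hypothetical band-index pairs standing in for the selected wavelength ratios.
features = band_ratio_features(spectra, band_pairs=[(10, 60), (120, 60), (180, 120)])
clf = LinearDiscriminantAnalysis().fit(features, labels)
print(f"training accuracy: {clf.score(features, labels):.2f}")
```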
Article
Full-text available
This paper presents a systematic analysis of twenty four performance measures used in the complete spectrum of Machine Learning classification tasks, i.e., binary, multi-class, multi-labelled, and hierarchical. For each classification task, the study relates a set of changes in a confusion matrix to specific characteristics of data. Then the analysis concentrates on the type of changes to a confusion matrix that do not change a measure, therefore, preserve a classifier’s evaluation (measure invariance). The result is the measure invariance taxonomy with respect to all relevant label distribution changes in a classification problem. This formal analysis is supported by examples of applications where invariance properties of measures lead to a more reliable evaluation of classifiers. Text classification supplements the discussion with several case studies.
Conference Paper
Full-text available
The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.
Article
Broadleaf species escape current integrated weed management strategies in strawberry [ Fragaria×ananassa (Weston) Duchesne ex Rozier (pro sp.) [ chiloensis×virginiana ]] production. Clopyralid is a registered POST control option, but current application timings provide suppression of only some species. Earlier clopyralid application timings may increase spray coverage to weeds at the planting hole, but strawberry plant tolerance to applications shortly after transplant is unknown. The objectives of the study were to determine the degree of clopyralid tolerance when applied to mature strawberry plants according to current management strategies, whether clopyralid absorption and translocation were involved in the tolerance response demonstrated by mature strawberry plants, and whether clopyralid could be safely applied to immature strawberry plants shortly after transplant. Clopyralid caused no damage when applied to mature strawberry plants and did not affect crop height, number of crowns, flowers, immature berries, or yield. Maximal strawberry absorption of radiolabeled clopyralid was 82% of the recovered radioactivity and reached peak (90%) absorption at 15 h. Maximal total translocation of radioactivity from the treated leaf was 17% and reached peak translocation at 52 h. Translocation was primarily to the new leaves and reproductive structures. In the early-application experiment, damage induced by clopyralid for all application timings reached 0 by 8 wk after treatment. Across all timings, maximal damage at 140 g ha⁻¹ was 17% when applied 14 d after transplant (DATr) and 56% at 28 g ha⁻¹ when applied at 21 DATr. Clopyralid dose did not affect the number of crowns, aboveground biomass, or yield. There was some stunting of plant height (3%) at the high labeled dose of clopyralid. Labeled-dose clopyralid applications appear safe for application timings closer to strawberry transplant, though leaf cupping should be taken into consideration for label changes.
Article
Strawberry is an important horticultural crop in Florida. The long growing season and escapes from fumigation and PRE herbicides necessitate POST weed management to maximize harvest potential and efficiency. Alternatives to hand-weeding are desirable, but clopyralid is the only broadleaf herbicide registered for use. Weed control may be improved by early-season clopyralid applications, but at risk of high temperature and increased strawberry injury. The effect of temperature on clopyralid safety on strawberry is unknown. We undertook a growth chamber experiment using a completely randomized design to determine crop safety under various temperature conditions across acclimation, herbicide application, and post-application periods. There was no effect of clopyralid on the number of strawberry leaves across all temperatures. Damage to the strawberry manifested as leaf malformations. Acclimation temperatures affected clopyralid-associated injury ( p =0.0309), with increased leaf malformations at higher temperatures (27 C) compared to lower (18 C) temperatures. Pre-treatment temperatures did not affect clopyralid injury. Post-application temperature also affected clopyralid injury ( p =0.0161), with increased leaf malformations at higher temperatures compared to lower ones. Clopyralid application did not reduce flowering or biomass production in the growth chamber. If leaf malformations are to be avoided, consideration to growing conditions prior to application is advisable, especially if applying clopyralid early in the season.
Article
Aiming at the problem of unstable identification results and weak generalization ability when manually designed features are used for weed identification, this paper takes soybean seedlings and their associated weeds as the research object and constructs a weed identification model based on K-means feature learning combined with a convolutional neural network. Combining the advantages of the multilayer structure and parameter fine-tuning of the convolutional neural network, this paper uses K-means unsupervised feature learning as a pre-training process to replace the random initialization of weights in a traditional CNN. This allows the parameters to take more reasonable values before optimization and yields higher weed identification accuracy. The experimental results show that the method with K-means pre-training achieved 92.89% accuracy, 1.82% higher than a convolutional neural network with random initialization and 6.01% higher than the two-layer network without fine-tuning. Our results suggest that identification accuracy can be improved by fine-tuning of parameters.
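The paper above replaces random initialization of the first convolutional layer with filters learned by K-means on image patches. A bare-bones sketch of that pre-training idea is shown below; the patch size, number of filters, and data are assumptions, and the cited model's details may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_conv_filters(images: np.ndarray, patch_size: int = 5, n_filters: int = 32) -> np.ndarray:
    """Learn first-layer conv filters as K-means centroids of random image patches.

    images: array of shape (n_images, H, W) with grayscale values in [0, 1].
    Returns filters of shape (n_filters, patch_size, patch_size).
    """
    rng = np.random.default_rng(0)
    patches = []
    for _ in range(5000):  # sample random patches across the dataset
        img = images[rng.integers(len(images))]
        y = rng.integers(img.shape[0] - patch_size)
        x = rng.integers(img.shape[1] - patch_size)
        patch = img[y:y + patch_size, x:x + patch_size].ravel()
        patches.append((patch - patch.mean()) / (patch.std() + 1e-6))  # normalize each patch
    centroids = KMeans(n_clusters=n_filters, n_init=10, random_state=0) \
        .fit(np.array(patches)).cluster_centers_
    return centroids.reshape(n_filters, patch_size, patch_size)

# filters = kmeans_conv_filters(train_images)  # then copied into the CNN's first conv layer
```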
Conference Paper
Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we are exploring ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set and demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error.
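One of the key ideas summarized above is factorizing large convolutions into smaller ones to spend computation more efficiently. The short worked example below compares parameter counts for a single 5x5 convolution versus two stacked 3x3 convolutions with the same channel width; the channel count is arbitrary and chosen only for illustration.

```python
def conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weights in a k x k convolution (bias terms ignored)."""
    return k * k * c_in * c_out

c = 256  # arbitrary channel width
single_5x5 = conv_params(5, c, c)
stacked_3x3 = conv_params(3, c, c) * 2  # two 3x3 layers cover the same 5x5 receptive field
print(single_5x5, stacked_3x3, round(stacked_3x3 / single_5x5, 2))
# 1638400 1179648 0.72 -> the factorized form uses roughly 28% fewer weights
```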
Article
Experiments were conducted in North Carolina in winter/spring 1992, 1993, and 1994 to determine crop tolerance, weed response, and clopyralid residue levels in fruit of 'Chandler' strawberry plants treated with clopyralid. Clopyralid at 0.07, 0.14, 0.20, or 0.28 kg ai ha-1 applied POST over strawberry plants and vetch resulted in 100% control of vetch species, 49 to 83% control of black medic, and less than 6% crop injury. As a comparison, 2,4-D at 0.84 kg ai ha-1 applied POST at 5 to 10% strawberry bloom resulted in 50% control of black medic and 48 to 73% crop injury, while 2,4-D at 0.84 kg ai ha-1 applied POST to 7- to 9- and 9- to 10-leaf strawberries resulted in 5 to 22% crop injury and no adverse effect on strawberry yield. Single applications of clopyralid at 0.07, 0.14, or 0.28 kg ai ha-1 applied POST to weed-free strawberries at the 5- to 6-, 6- to 7-, 9- to 10-, or 12- to 14-leaf stage caused less than 6% injury and did not adversely affect strawberry yield. In 1993, with preharvest intervals (PHI) of 39, 66, and 101 d after treatment, all clopyralid residue levels in strawberry fruit were below the detectable level of 0.3 parts per billion (ppb). In 1994, with PHI of 30, 59, and 87 d after treatment, trace clopyralid residues were found in strawberry fruit with a range from 0.25 to 1.9 ppb, with a level of detection of 0.2 ppb.
Conference Paper
This paper shows how to analyze the influences of object characteristics on detection performance and the frequency and impact of different types of false positives. In particular, we examine effects of occlusion, size, aspect ratio, visibility of parts, viewpoint, localization error, and confusion with semantically similar objects, other labeled objects, and background. We analyze two classes of detectors: the Vedaldi et al. multiple kernel learning detector and different versions of the Felzenszwalb et al. detector. Our study shows that sensitivity to size, localization error, and confusion with similar objects are the most impactful forms of error. Our analysis also reveals that many different kinds of improvement are necessary to achieve large gains, making more detailed analysis essential for the progress of recognition research. By making our software and annotations available, we make it effortless for future researchers to perform similar analysis.
Article
In recent years, deep neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.
Article
A hyperspectral imaging system was coupled to a precision, pulsed-jet, micro-dosing system to selectively deliver high-temperature, organic, food-grade oil for intra-row weed control in early growth tomatoes. The imaging system, based upon a multispectral Bayesian classifier, successfully discriminated the species of 95.9%, on average, of plant canopy for tomato, Solanum nigrum L. and Amaranthus retroflexus L. using canopy reflectance in the 384–810 nm range. Food-grade oil, heated to approximately 160 °C, was applied to weeds using a pressurized micro-dosing pulsed-jet. The target application rate was approximately 0.85 mg/cm2 in 10-ms-pulsed doses while traveling at a ground speed of 0.04 m/s. In an outdoor test, approximately 95.8% of S. nigrum and 93.8% of A. retroflexus were controlled 15 days post thermal treatment, while only 2.4% of the tomato plants received significant damage. Application coverage assessments of leaf surfaces immediately after the heated oil application found that tomato viability was retained if 50% or less of the leaf surface was inadvertently dosed, while 100% mortality was achieved for S. nigrum and A. retroflexus if more than 90% of their respective leaf surfaces were covered with heated oil.
Article
We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.
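ADADELTA adapts a per-dimension step size from running averages of squared gradients and squared updates, so no learning rate needs to be set by hand. A compact numpy sketch of the update rule, following the description above, is shown below; rho and epsilon are typical values and the toy problem is an illustration rather than a reference implementation.

```python
import numpy as np

class Adadelta:
    """Per-dimension adaptive update: dx = -(RMS[dx]_{t-1} / RMS[g]_t) * g."""

    def __init__(self, shape, rho: float = 0.95, eps: float = 1e-6):
        self.rho, self.eps = rho, eps
        self.eg2 = np.zeros(shape)   # running average of squared gradients
        self.edx2 = np.zeros(shape)  # running average of squared updates

    def step(self, params: np.ndarray, grad: np.ndarray) -> np.ndarray:
        self.eg2 = self.rho * self.eg2 + (1 - self.rho) * grad ** 2
        dx = -np.sqrt(self.edx2 + self.eps) / np.sqrt(self.eg2 + self.eps) * grad
        self.edx2 = self.rho * self.edx2 + (1 - self.rho) * dx ** 2
        return params + dx

# Minimize f(x) = x^2 as a toy example; the gradient is 2x.
x = np.array([5.0])
opt = Adadelta(x.shape)
for _ in range(500):
    x = opt.step(x, 2 * x)
print(x)  # moves toward 0 without a hand-tuned learning rate
```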
Article
Autonomous robotic weed control systems hold promise toward the automation of one of agriculture's few remaining unmechanized and drudging tasks, hand weed control. Robotic technology may also provide a means of reducing agriculture's current dependency on herbicides, improving its sustainability and reducing its environmental impact. This review describes the current status of the four core technologies (guidance, detection and identification, precision in-row weed control, and mapping) required for the successful development of a general-purpose robotic system for weed control. Of the four, detection and identification of weeds under the wide range of conditions common to agricultural fields remains the greatest challenge. A few complete robotic weed control systems have demonstrated the potential of the technology in the field. Additional research and development is needed to fully realize this potential.
Article
This study investigated the robustness of hyperspectral image-based plant recognition to seasonal variability in a natural farming environment in the context of automated in-row weed control. A machine vision system was developed and equipped with a CCD camera integrated with a line-imaging spectrograph for close-range weed sensing and mapping. Three canonical Bayesian classifiers were developed using canopy reflectance (400-795 nm) collected over three seasons for tomato and weeds. The performance of the three season-specific classifiers was tested by changing environmental conditions, resulting in an increase in total error rate of up to 36%. Global calibration across the complete span of the three seasons produced overall classification accuracies of 85.0%, 90.0% and 92.7%, respectively, for 2005, 2006 and 2008. To improve the stability of global classifier over multiple seasons, a multiclassifier system was constructed with three canonical Bayesian classifiers optimized for the three seasons individually. This system was tested on a data set simulating an upcoming season with field conditions similar to that in 2005. The system increased the total discrimination accuracy to 95.8% for the tested season under simulation. This method provided an innovative direction for achieving robust plant recognition over multiple seasons by integrating expert knowledge from historical data that most closely matched the new field environment.
Article
Field experiments were conducted in 2001 through 2003 in Wooster, OH, to determine strawberry ( Fragaria × ananassa ) plant response to clopyralid applied after plant renovation in established plantings. Clopyralid applied at a rate of 200 g ae/ha or greater controlled at least 82% of common groundsel ( Senecio vulgaris ) 6 wk after treatment (WAT). Maximum total fruit yield (marketable plus unmarketable fruits) occurred at a clopyralid rate of 200 g/ha, and higher or lower rates resulted in reduced yield. Application of clopyralid at 400 g/ha tended to reduce the canopy of the strawberry crop, especially in comparison to rates lower than 200 g/ha. Overall, clopyralid applied POST at 200 g/ha did not reduce fruit yield when applied after strawberry renovation, and effectively controlled common groundsel plants that were entering the reproductive stage.
Article
Environmental and commercial pressures are pushing vegetable and salad growers away from a reliance on herbicides. Whilst inter-row cultivation provides a relatively efficient method of removing weeds between crop rows, hand labour is often required to remove weeds within rows. Machine vision guidance has been used to address the problem of mechanically removing weeds within rows of transplanted vegetables and salads. The experimental machine was based on a commercially available steerage hoe equipped with conventional inter-row cultivation blades. It was also fitted with two novel shallow cultivation modules acting within crop rows. Each module featured a hydraulically driven disc rotating about a substantially vertical axis. Each disc had an interior section cut away to allow crop plants to pass undamaged. A vision system detected the phase of approaching plants and that information was combined with measured disc rotation to calculate a phase error between the next plant and the disc cut-out. This phase error was corrected by advancing or retarding the hydraulic drive, enabling synchronisation of the mechanism even in the presence of crop spacing variability. Field trials in transplanted cabbage indicated that under normal commercial growing conditions crop damage levels were low, with weed reductions in the range 62–87% measured within a 240 mm radius zone around crop plants.
Article
The possibility of combining novel monitoring techniques and precision spraying for crop protection in the future is discussed. A generic model for an innovative crop protection system has been used as a framework. This system will be able to monitor the entire cropping system and identify the presence of relevant pests, diseases and weeds online, and will be location specific. The system will offer prevention, monitoring, interpretation and action which will be performed in a continuous way. The monitoring is divided into several parts. Planting material, seeds and soil should be monitored for prevention purposes before the growing period to avoid, for example, the introduction of disease into the field and to ensure optimal growth conditions. Data from previous growing seasons, such as the location of weeds and previous diseases, should also be included. During the growing season, the crop will be monitored at a macroscale level until a location that needs special attention is identified. If relevant, this area will be monitored more intensively at a microscale level. A decision engine will analyse the data and offer advice on how to control the detected diseases, pests and weeds, using precision spray techniques or alternative measures. The goal is to provide tools that are able to produce high-quality products with the minimal use of conventional plant protection products. This review describes the technologies that can be used or that need further development in order to achieve this goal.
Potential for Hyperspectral Technology in Wild Blueberry (Vaccinium angustifolium Ait.) Production. MS thesis
  • S M Sharpe
DetectNet: Deep Neural Network for Object Detection in DIGITS
  • A Tao
  • J Barker
  • S Sarathy