An example UAV trajectory covering a 1300 m² sugar beet field (RedEdge-M 002 from Table 1). Each yellow frustum indicates the position where an image was taken, and the green lines are rays connecting a 3D point to the multiple views in which it is co-visible. Qualitatively, the 2D feature points in the right subplots are properly extracted and matched for generating a precise orthomosaic map. A similar coverage-type flight path was used for the collection of our datasets.


Source publication
Article
Full-text available
The ability to automatically monitor agricultural fields is an important capability in precision farming, enabling steps towards more sustainable agriculture. Precise, high-resolution monitoring is a key prerequisite for targeted intervention and the selective application of agro-chemicals. The main goal of this paper is developing a novel crop/weed...

Similar publications

Article
Full-text available
This paper presents the development of a remote robotic platform to monitor and take care of urban crops. The agricultural robot performs activities such as sowing, irrigation, fumigation, and pruning on a small scalable structured crop. The generalities of the design of an anthropomorphic robot with five degrees of freedom, the descript...

Citations

... Innovations in machine learning, robotic weed eaters, and cutting-edge methods like GLCM and nanoherbicides are revolutionizing methods for sustainability and precision, which are essential for realizing the full potential of the world's agricultural resources [23]. AI-powered systems can optimize weed management tactics with previously unheard-of efficacy by sifting through enormous datasets and identifying complex patterns in crop-weed interactions [24]. These intelligent systems continuously improve their predictive models to maximize weed suppression while reducing the unintended consequences of herbicide use. ...
... In places with dense vegetation, AI-based approaches, as opposed to shape-based machine vision, may successfully identify weeds at this stage and tell them apart from agricultural crops [77]. The adoption of AI-based weed control, intelligent systems, and robots is now restricted to early adopters because the range of plants that can be recognized based on spectral features is limited or because the precision based on species or geocoordinates is insufficient to safeguard the crop plant [24]. AI is employed in a robotic system for weed control in a variety of ways, including operation control, trajectory planning, data management, weed dispersion prediction models, data sharing, and more [23]. ...
... WeedMap [155] (weed management): The dataset comprises imagery from sugar beet fields in Switzerland and Germany, captured by two UAVs equipped with RedEdge and Sequoia multispectral sensors, generating orthomosaics and tiles of 480 × 360 pixels, with RedEdge images from Germany (000-004) and Sequoia images from Switzerland (005-007). ...
... Commercial drone platforms are used to map agricultural landscapes with multispectral and light detection and ranging (LiDAR) sensors (Tsouros et al., 2019). Aerial photography captured using low altitude drone flights is orthorectified into sub-meter resolution scenes of entire agricultural landscapes (Sa et al., 2018). For example, drone orthomosaics, which are orthorectified imagery datasets that are geometrically corrected for perspective and terrain effects, are used as the input to machine learning classifiers to automate crop classification and weed detection (Chen et al., 2019; Sa et al., 2018). ...
... This approach has been applied to automate the mapping of flowering plants (de Sá et al., 2018; Hicks et al., 2021; Hill et al., 2017). In de Sá et al. (2018), the authors used a random forest binary classifier paired with drone orthomosaic imagery to map the floral cover of Acacia longifolia (Andrews) Willd. (Fabaceae), a mass flowering shrub that is invasive in Portuguese dunes. ...
Article
Full-text available
The abundance and diversity of flowering plant species are important indicators of pollinator habitat quality, but traditional field‐based surveying techniques are time‐intensive. Therefore, they are often biased due to under‐sampling and are difficult to scale. Aerial photography was collected across 10 sites located in and around Rouge National Urban Park, Toronto, Canada using a consumer‐grade drone. A convolutional neural network (CNN) was trained to semantically segment, or identify and categorize, pixel clusters which represent flowers in the collected aerial imagery. Specifically, flowers of the dominant taxa found in the depauperate fall flowering plant community were surveyed. This included yellow flowering Solidago spp., white Symphyotrichum ericoides/lanceolatum and purple Symphyotrichum novae‐angliae. The CNN was trained using 930 m² of manually annotated data, ~1% of the mapped landscape. The trained CNN was tested on 20% of the manually annotated data concealed during training. In addition, it was externally validated by comparing the predicted drone‐derived floral abundance metrics (i.e. floral area (m²) and the number of floral patches) to the field‐based count of floral units estimated for 34 plots of 4 m² each. The CNN returned accurate multiclassification when evaluated against the testing data. It obtained a precision score of 0.769, a recall of 0.849, and an F1 score of 0.807. The automated floral abundance counting yielded estimates that were strongly correlated with field‐based manual counting. In addition, flower segmentation using the trained CNN was time‐efficient. On average, it took roughly the same amount of time to segment the flowers occurring in an entire drone scene as it took to complete the abundance count of a single quadrat. However, the training process, particularly manual data annotation, was the most time‐consuming component of the study.

Practical implication: Overall, the analysis provided valuable insights into automated flower classification and abundance estimation using drone imagery and machine learning. The results demonstrate that these tools can be used to provide accurate and scalable estimates of pollinator habitat quality. Further research should consider diverse wildflower systems to develop the generalizability of the methods.
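As a quick arithmetic check, the reported F1 score follows directly from the stated precision and recall, since F1 is their harmonic mean:

\[
F_1 = \frac{2PR}{P + R} = \frac{2 \times 0.769 \times 0.849}{0.769 + 0.849} \approx 0.807
\]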
... The model is based on the SegFormer encoder architecture [36], coupled with an ad-hoc decoder that fuses the features extracted by the encoder. Evaluation is conducted on the WeedMap dataset [29], which contains multispectral images of sugar beet fields captured by drones. ...
... WeedNet, based on SegNet and trained on the WeedMap dataset, is another example of successful application in this domain [28]. The WeedMap dataset, containing multispectral images from sugar beet fields in Germany and Switzerland, has become a benchmark for weed mapping studies [29]. Recently, we have explored the benefits of RGB pre-training and fine-tuning on multispectral images, as well as the application of knowledge distillation to improve the performance of lightweight models [7,8]. ...
... We assessed RoWeeder on the WeedMap dataset [29], which includes multispectral images from sugar beet fields in Germany (Rheinbach) and Switzerland (Eschikon). These images were captured using UAVs equipped with RedEdge-M and Sequoia cameras, respectively (see [29] for further details). ...
Preprint
Full-text available
Precision agriculture relies heavily on effective weed management to ensure robust crop yields. This study presents RoWeeder, an innovative framework for unsupervised weed mapping that combines crop-row detection with a noise-resilient deep learning model. By leveraging crop-row information to create a pseudo-ground truth, our method trains a lightweight deep learning model capable of distinguishing between crops and weeds, even in the presence of noisy data. Evaluated on the WeedMap dataset, RoWeeder achieves an F1 score of 75.3, outperforming several baselines. Comprehensive ablation studies further validated the model's performance. By integrating RoWeeder with drone technology, farmers can conduct real-time aerial surveys, enabling precise weed management across large fields. The code is available at: https://github.com/pasqualedem/RoWeeder
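The abstract does not spell out the pseudo-labelling step, but the idea it describes can be sketched as follows. This is a minimal illustration, not RoWeeder's actual implementation; the vegetation-index threshold and the Hough-transform parameters are assumptions chosen for clarity.

```python
import cv2
import numpy as np

def pseudo_labels_from_rows(rgb: np.ndarray) -> np.ndarray:
    """Row-based pseudo-labelling sketch: vegetation lying on detected
    crop-row lines is labelled crop; vegetation off the rows is labelled weed."""
    r, g, b = [rgb[..., i].astype(np.float32) for i in range(3)]
    exg = 2.0 * g - r - b                                  # Excess Green vegetation index
    veg = (exg > np.percentile(exg, 80)).astype(np.uint8)  # top 20% as vegetation (assumed threshold)
    # Detect dominant row lines on the vegetation mask (parameters illustrative).
    lines = cv2.HoughLinesP((veg * 255).astype(np.uint8), 1, np.pi / 180,
                            threshold=100, minLineLength=200, maxLineGap=50)
    row_mask = np.zeros_like(veg)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(row_mask, (int(x1), int(y1)), (int(x2), int(y2)), 1, thickness=25)
    labels = np.zeros(veg.shape, dtype=np.uint8)  # 0 = soil/background
    labels[(veg == 1) & (row_mask == 1)] = 1      # 1 = crop (on-row vegetation)
    labels[(veg == 1) & (row_mask == 0)] = 2      # 2 = weed (off-row vegetation)
    return labels
```

Labels produced this way would then serve as the noisy pseudo-ground truth for training the lightweight segmentation model the abstract describes.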
... S. marianum was usually found in fields with a random mix of crop species [1,44,52], while Amaranth was found in the crops of sorghum, sugar beet, and soybean [5,36,37]. ...
... Evaluation metrics tallied across the surveyed studies:

Metric                         | Studies | References
…                              | 3       | [2], [4], [3]
AUC                            | 3       | [4], [36], [9]
User's Accuracy                | 3       | [5], [44], [52]
Producer's Accuracy            | 3       | [5], [44], [52]
Kappa Statistic                | 3       | [4], [44], [52]
Precision                      | 2       | [2], [41]
Mean Absolute Error            | 2       | [29], [28]
Recall                         | 2       | [2], [41]
Mean Square Error              | 1       | [26]
KHAT Statistic                 | 1       | [5]
Intersection over Union (IoU)  | 1       | [17]
Mean Entropy                   | 1       | [17]
...
Preprint
Full-text available
The growth of weeds poses a significant challenge to agricultural productivity, necessitating efficient and accurate weed detection and management strategies. The combination of multispectral imaging and drone technology has emerged as a promising approach to tackle this issue, enabling rapid and cost-effective monitoring of large agricultural fields. This systematic review surveys and evaluates the state-of-the-art in machine learning interventions for weed detection that utilize multispectral images captured by unmanned aerial vehicles. The study describes the various models used for training, the features extracted from multispectral data, their efficiency and effect on the results, the performance analysis parameters, and the current challenges faced by researchers in this domain. The review was conducted in accordance with the PRISMA guidelines. Three sources were used to obtain the relevant material, and the screening and data extraction were done on the COVIDENCE platform. The search string resulted in 600 papers from all sources. The review also provides insights into potential research directions and opportunities for further advancements in the field. These insights would serve as a valuable guide for researchers, agricultural scientists, and practitioners in developing precise and sustainable weed management strategies to enhance agricultural productivity and minimize ecological impact.
... vegetation. In the existing literature [33][34][35][36][37][38], various semantic segmentation models based on deep learning have been applied to identify vegetation in aerial images. However, these studies are generally limited to smaller images and have yet to address the segmentation of large orthomosaics. ...
Article
Full-text available
This study focuses on semantic segmentation of Opuntia spp. crops in orthomosaics, a significant challenge due to the inherent variability in the captured images. Manual measurement of Opuntia spp. vegetation areas can be slow and inefficient, highlighting the need for more advanced and accurate methods. For this reason, we propose to use deep learning techniques to provide a more precise and efficient measurement of the vegetation area. Our research focuses on the unique difficulties posed by segmenting high-resolution images exceeding 2000 pixels, a common problem in generating orthomosaics for agricultural monitoring. The research was carried out on an Opuntia spp. cultivation located in the agricultural region of Tulancingo, Hidalgo, Mexico. The images used in this study were obtained by drones and processed using advanced semantic segmentation architectures, including DeepLabV3+, UNet, and UNet Style Xception. The results offer a comparative analysis of the performance of these architectures in the semantic segmentation of Opuntia spp., thus contributing to the development and improvement of crop analysis techniques based on deep learning. This work sets a precedent for future research applying deep learning techniques in agriculture.
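The large-orthomosaic problem raised above is commonly handled by tiled inference. The sketch below shows the general pattern under stated assumptions: `model` is any callable returning a per-pixel probability map the same size as its input patch, and the tile and stride values are illustrative, not taken from the paper.

```python
import numpy as np

def segment_orthomosaic(image: np.ndarray, model, tile: int = 512, stride: int = 384):
    """Slide a window over a large orthomosaic, run the segmentation model on
    each tile, and average overlapping predictions into one probability map."""
    h, w, _ = image.shape
    probs = np.zeros((h, w), dtype=np.float32)
    votes = np.zeros((h, w), dtype=np.float32)
    for y in range(0, max(h - tile, 0) + 1, stride):
        for x in range(0, max(w - tile, 0) + 1, stride):
            patch = image[y:y + tile, x:x + tile]
            probs[y:y + tile, x:x + tile] += model(patch)  # assumed (tile, tile) output
            votes[y:y + tile, x:x + tile] += 1.0
    # Note: a real pipeline would pad the mosaic so edge strips are also covered.
    return probs / np.maximum(votes, 1.0)
```

Overlapping strides (here 512-pixel tiles with a 384-pixel step) reduce the seam artifacts that per-tile prediction would otherwise leave in the stitched map.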
... Some studies have acquired and published multispectral data for weed-related research. For example, there exists a dataset capturing nine weed species through UAV-mounted multispectral sensors in a sugar beet field [14] and a sugar beet/weed dataset from a controlled field experiment alongside pixel-wise labeled data [15]. However, the availability of open-access weed datasets remains limited. ...
... We designed and implemented a One-Dimensional Convolutional Neural Network (1D-CNN) with the following architecture (Figure 4): The network begins with a sequence input layer tailored to the number of input channels, followed by a causal convolutional layer with a kernel size of [1,15], 32 filters, and causal padding, keeping the size of the vector unchanged. The convolutional layer is succeeded by a Rectified Linear Unit (ReLU) activation function to introduce non-linearity and a normalization layer to stabilize and accelerate the training process. ...
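That layer sequence can be mirrored in a few lines. The block below is a hedged PyTorch reconstruction, not the authors' code (their wording suggests a different framework), and the choice of BatchNorm for the unspecified normalization layer is an assumption.

```python
import torch
import torch.nn as nn

class CausalConvBlock(nn.Module):
    """1D causal convolution block: 32 filters, kernel size 15, left-only
    padding so the sequence length is unchanged, then ReLU and normalization."""
    def __init__(self, in_channels: int, num_filters: int = 32, kernel_size: int = 15):
        super().__init__()
        self.pad = nn.ConstantPad1d((kernel_size - 1, 0), 0.0)  # causal: pad the left side only
        self.conv = nn.Conv1d(in_channels, num_filters, kernel_size)
        self.relu = nn.ReLU()
        self.norm = nn.BatchNorm1d(num_filters)  # normalization layer (type assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, sequence_length)
        return self.norm(self.relu(self.conv(self.pad(x))))

# A batch of 8 single-channel spectra of length 204 keeps its length after the block.
out = CausalConvBlock(in_channels=1)(torch.randn(8, 1, 204))
print(out.shape)  # torch.Size([8, 32, 204])
```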
Article
Full-text available
Site-specific weed management employs image data to generate maps through various methodologies that classify pixels corresponding to crop, soil, and weed. Further, many studies have focused on identifying specific weed species using spectral data. Nonetheless, the availability of open-access weed datasets remains limited. Remarkably, despite the extensive research employing hyperspectral imaging data to classify species under varying conditions, to the best of our knowledge, there are no open-access hyperspectral weed datasets. Consequently, accessible spectral weed datasets are primarily RGB or multispectral and mostly lack the temporal aspect, i.e., they contain a single measurement day. This paper introduces an open dataset for training and evaluating machine-learning methods and spectral features to classify weeds based on various biological traits. The dataset comprises 30 hyperspectral images, each containing thousands of pixels with 204 unique visible and near-infrared bands captured in a controlled environment. In addition, each scene includes a corresponding RGB image with a higher spatial resolution. We included three weed species in this dataset, representing different botanical groups and photosynthetic mechanisms. In addition, the dataset contains meticulously sampled labeled data for training and testing. The images represent a time series of the weeds' growth through their early stages, which are critical for precise herbicide application. We conducted an experimental evaluation to test the performance of a machine-learning approach, a deep-learning approach, and Spectral Mixture Analysis (SMA) to identify the different weed traits. In addition, we analyzed the importance of features using the random forest algorithm and evaluated the performance of the selected algorithms while using different sets of features.
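The feature-importance analysis mentioned at the end of the abstract maps onto a few lines of scikit-learn. This sketch uses stand-in random data with the dataset's stated dimensions (204 VNIR bands, three species); the actual labeled pixel samples live in the released dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 204))        # 500 labelled pixels x 204 spectral bands (stand-in data)
y = rng.integers(0, 3, size=500)  # three weed species

# Fit a random forest and rank bands by impurity-based feature importance.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top_bands = np.argsort(rf.feature_importances_)[::-1][:10]
print("Ten most informative band indices:", top_bands)
```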
... In recent years, there has been an increasing interest in semantic interpretation of images in the agricultural context, both on arable crops [19], [34], [28] and horticulture [12], [29], [2]. These works rely on large image datasets with pixel-accurate labels designed for various tasks: crop/weed segmentation [6], [30], leaf segmentation [15], [33], fruit counting [32], [27], and fruit size estimation [9], [11]. We refer to Lu et al. [20] for an overview of image datasets in agricultural environments. ...
Preprint
As the population is expected to reach 10 billion by 2050, our agricultural production system needs to double its productivity despite a decline of human workforce in the agricultural sector. Autonomous robotic systems are one promising pathway to increase productivity by taking over labor-intensive manual tasks like fruit picking. To be effective, such systems need to monitor and interact with plants and fruits precisely, which is challenging due to the cluttered nature of agricultural environments causing, for example, strong occlusions. Thus, being able to estimate the complete 3D shapes of objects in presence of occlusions is crucial for automating operations such as fruit harvesting. In this paper, we propose the first publicly available 3D shape completion dataset for agricultural vision systems. We provide an RGB-D dataset for estimating the 3D shape of fruits. Specifically, our dataset contains RGB-D frames of single sweet peppers in lab conditions but also in a commercial greenhouse. For each fruit, we additionally collected high-precision point clouds that we use as ground truth. For acquiring the ground truth shape, we developed a measuring process that allows us to record data of real sweet pepper plants, both in the lab and in the greenhouse with high precision, and determine the shape of the sensed fruits. We release our dataset, consisting of almost 7000 RGB-D frames belonging to more than 100 different fruits. We provide segmented RGB-D frames, with camera intrinsics to easily obtain colored point clouds, together with the corresponding high-precision, occlusion-free point clouds obtained with a high-precision laser scanner. We additionally enable evaluation of shape completion approaches on a hidden test set through a public challenge on a benchmark server.
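Since the dataset ships segmented RGB-D frames with camera intrinsics precisely so that colored point clouds are easy to obtain, the standard pinhole back-projection suffices. The sketch below is generic and assumes metric depth and illustrative intrinsics rather than the dataset's calibration values.

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, rgb: np.ndarray,
                         fx: float, fy: float, cx: float, cy: float):
    """Back-project a depth image (metres) to a colored point cloud using
    pinhole intrinsics; pixels without depth (z <= 0) are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0
    return points[valid], colors[valid]
```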
... This speed is crucial as technology becomes more prevalent in agriculture. Additionally, it is particularly important for many farms that cannot invest in the latest, most expensive equipment; even when they can, they often face challenges due to image processing difficulties caused by hardware constraints and computational limitations [54]. The experiment's outcomes are highly reliable as they were conducted in three locations across Europe: Spain, Serbia, and Finland (Fig. 1), involving diverse UAVs and sensors, various crop types, and varied weather conditions. ...
Article
Innovations in precision agriculture enhance complex tasks, reduce environmental impact, and increase food production and cost efficiency. One of the main challenges is ensuring rapid information availability for autonomous vehicles and standardizing processes across platforms to maximize interoperability. The lack of drone technology standardization, communication barriers, high costs, and post-processing requirements sometimes hinder their widespread use in agriculture. This research introduces a standardized data fusion framework for creating real-time spatial variability maps using images from different Unmanned Aerial Vehicles (UAVs) for Site-Specific Crop Management (SSM). Two spatial interpolation methods were used, Inverse Distance Weighting (IDW) and Triangulated Irregular Networks (TIN), selected for their computational efficiency and input flexibility. The proposed framework can use different UAV image sources and offers versatility, speed, and efficiency, consuming up to 98 % less time, energy, and computing requirements than standard photogrammetry techniques, providing rapid field information and allowing edge computing to be incorporated into the UAV data acquisition phase. Experiments conducted in Spain, Serbia, and Finland in 2022 under the H2020 FlexiGroBots project demonstrated a strong correlation between results from this method and those from standard photogrammetry techniques (up to r = 0.93). In addition, the correlation with Sentinel-2 satellite images was as strong as that obtained with photogrammetry-based orthomosaics (up to r = 0.8). The proposed approach could support irrigation leak detection, soil parameter estimation, weed management, and satellite integration for agriculture.
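Of the two interpolators named in the abstract, IDW is the simpler to illustrate. The sketch below is a generic formulation with an assumed power parameter of 2, not the implementation used in the framework.

```python
import numpy as np

def idw(xy_known: np.ndarray, values: np.ndarray,
        xy_query: np.ndarray, power: float = 2.0) -> np.ndarray:
    """Inverse Distance Weighting: each query point receives the distance-
    weighted average of the known samples (weights fall off as 1 / d**power)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=-1)
    d = np.maximum(d, 1e-12)              # avoid division by zero at sample points
    w = 1.0 / d ** power
    return (w @ values) / w.sum(axis=1)
```

Its appeal for real-time variability maps is that it needs no mesh construction or fitting step: each query is a weighted average, so the cost scales directly with the number of samples and query points.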
... In terms of herbicide application, precision agriculture technologies allow for precise application on target areas, considerably reducing herbicide use [22][23][24]. Precision agriculture studies have proposed strategies such as weed mapping [25][26][27] and the real-time targeted spraying or mechanical removal of weeds [22,28,29]. Despite the reported decrease in herbicide application using precision agriculture [24], few studies have explored weed detection under field conditions. ...
... A systematic review has reported that just 34 weed species have been targeted using weed detection models, frequently focusing on a single weed-crop species association [30], which does not always reflect the conditions observed at a commercial farm. Datasets enabling precision agriculture technologies are typically collected using handheld devices [31], vehicle-mounted camera systems, or UAVs [25,26,[32][33][34]. Red Green Blue (RGB) images are the most commonly used data format for weed detection due to the low cost and ease of collection; however, researchers are also using multispectral image datasets that can identify unique plant spectral signatures to differentiate between crop and weed species [16,25,26,31,32,[35][36][37][38]. ...
Article
Full-text available
Weeds pose a significant threat to agricultural production, leading to substantial yield losses and increased herbicide usage, with severe economic and environmental implications. This paper uses deep learning to explore a novel approach via targeted segmentation mapping of crop plants rather than weeds, focusing on canola (Brassica napus) as the target crop. Multiple deep learning architectures (ResNet-18, ResNet-34, and VGG-16) were trained for the pixel-wise segmentation of canola plants in the presence of other plant species, assuming all non-canola plants are weeds. Three distinct datasets (T1_miling, T2_miling, and YC) containing 3799 images of canola plants in varying field conditions alongside other plant species were collected with handheld devices at 1.5 m. The top performing model, ResNet-34, achieved an average precision of 0.84, a recall of 0.87, a Jaccard index (IoU) of 0.77, and a Macro F1 score of 0.85, with some variations between datasets. This approach offers increased feature variety for model learning, making it applicable to the identification of a wide range of weed species growing among canola plants, without the need for separate weed datasets. Furthermore, it highlights the importance of accounting for the growth stage and positioning of plants in field conditions when developing weed detection models. The study contributes to the growing field of precision agriculture and offers a promising alternative strategy for weed detection in diverse field environments, with implications for the development of innovative weed control techniques.