Figure - available from: Precision Agriculture
a Crop and soil lines are represented by the green and yellow lines, respectively; the cyan dotted lines around the yellow lines mark the regions from which soil patterns were obtained; b selected soil samples (Color figure online)

Source publication
Article
Full-text available
Precision agriculture aims to apply selective treatments and tasks to localized areas of crop fields. Robotized and autonomous tractors, equipped with perception, decision-making, and actuation systems, can apply specific treatments as required. Correct plant identification through the perception system, including crops and weeds, is...

Similar publications

Article
Ethnopharmacological relevance: Plants have provided medicine to humans for thousands of years, and in most parts of the world people still use traditional plant-derived medicine. Knowledge related to traditional use provides an important alternative to unavailable or expensive western medicine in many rural communities. At the same time, ethnomed...
Article
Full-text available
The control of plant diseases is a major challenge to ensure global food security and sustainable agriculture. Several recent studies have proposed to improve existing procedures for early detection of plant diseases through modern automatic image recognition systems based on deep learning. In this article, we study these methods in detail, especia...
Article
Full-text available
The genus Echinops (Asteraceae family, Echinopeae tribe) comprises ca. 120 species and is native to Africa, the Middle East, Europe, and Asia. In Algeria, this genus is represented by the very common species Echinops spinosus L., also known as “Tesskra”, which is used as a diuretic and hypoglycemic, for liver disorders, for post-partum care, and for its st...
Conference Paper
Full-text available
This work describes the plant identification system that we submitted to the ExpertLifeCLEF plant identification campaign in 2018. We fine-tuned two pre-trained deep learning architectures (SENet and DenseNet) using images shared by the CLEF organizers in 2017. Our main runs are 4 ensembles obtained with different weighted combinations of the 4...
Article
Full-text available
Human error has been statistically proven to be the primary cause of road accidents. This is undoubtedly a contributing factor to the rising popularity of autonomous vehicles, as they are presumably able to maneuver appropriately/optimally on the roads while diminishing the likelihood of human error and its repercussions. However, autonomous vehicles...

Citations

... These tasks are developed using robotic manipulators and specific sensors to identify plants and their specific needs. Examples of employed sensors are RGB cameras (Manasa et al., 2019; Magalhães et al., 2021; Campos et al., 2017), multi-spectral (MS) cameras (Paoletti et al., 2019), and lasers. Lately, machine learning algorithms are gaining ground in processing the data these sensors produce, e.g. in (Kulkarni et al., 2020). ...
Article
Full-text available
To meet the increased demand for organic vegetables and improve their product quality, the Sureveg CORE Organic Cofund ERA-Net project focuses on the benefits and best practices of growing different crops in alternate rows. A prototype of a robotic platform was developed to address the specific needs of this field type at an individual plant level rather than per strip or field section. This work describes a novel method to develop robotic fertilization tasks in crop rows, based on automatic vegetable Detection and Characterization (D.a.C) through an algorithm based on artificial vision and Convolutional Neural Networks (CNN). This network was trained with a dataset acquired from the project's experimental fields at ETSIAAB-UPM. The data acquisition, processing, and actuation are carried out in the Robot Operating System (ROS). The CNN's precision, recall, and IoU values as well as characterization errors were evaluated in field trials. Main results show a neural network with an accuracy of 90.5% and low error percentages (<3%) during vegetable characterization. This method's main contribution focuses on developing an alternative system for vegetable D.a.C for individual plant treatments using CNNs and low-cost RGB sensors.
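Since the abstract reports precision, recall, and IoU for the detection stage, a minimal sketch of how such bounding-box metrics are commonly computed is given below; the greedy matching strategy and the 0.5 IoU threshold are assumptions for illustration, not the paper's evaluation protocol.

```python
# Sketch: bounding-box IoU plus precision/recall at a fixed IoU threshold.
# Boxes are (x_min, y_min, x_max, y_max); matching is greedy and illustrative.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def precision_recall(pred_boxes, gt_boxes, iou_thr=0.5):
    matched, tp = set(), 0
    for p in pred_boxes:
        best = max(range(len(gt_boxes)),
                   key=lambda i: iou(p, gt_boxes[i]), default=None)
        if best is not None and best not in matched and iou(p, gt_boxes[best]) >= iou_thr:
            matched.add(best)
            tp += 1
    precision = tp / max(len(pred_boxes), 1)
    recall = tp / max(len(gt_boxes), 1)
    return precision, recall
```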
... The majority of them are about autonomous harvesting, where the fruit or vegetable must be detected and/or segmented prior to its picking [101][102][103][104][105][106][107][108][109]. The detection of vegetables or fruits is also important for counting them and estimating the production yield [110][111][112][113]. Similarly to forestry contexts, some works are about disease detection and monitoring [114,115], and others are focused on detecting woody trunks, weeds, and general obstacles in crops for navigation [116][117][118][119], operation purposes [120,121], and cleaning tasks [122,123]. Another application of perception systems in precision agriculture is characterising, monitoring, and phenotyping vegetative cultures using stereo vision [124,125], point clouds [126,127], satellite imagery [128], low-altitude aerial images [129], or multispectral imagery [130]. ...
Article
Full-text available
Robotic navigation and perception for forest management are challenging due to the existence of many obstacles to detect and avoid and the sharp illumination changes. Advanced perception systems are needed because they can enable the development of robotic and machinery solutions to accomplish smarter, more precise, and more sustainable forestry. This article presents a state-of-the-art review of unimodal and multimodal perception in forests, detailing current work on perception using a single type of sensor (unimodal) and on combining data from different kinds of sensors (multimodal). This work also compares existing perception datasets in the literature and presents a new multimodal dataset, composed of images and laser scanning data, as a contribution to this research field. Lastly, a critical analysis of the collected works is conducted by identifying strengths and research trends in this domain.
... Recently, the challenges of obstacle avoidance in an agricultural environment have gained attention for different applications such as autonomous navigation of mobile robots (Campos et al., 2016, 2017) and manipulators (Botterill et al., 2017; You et al., 2020). A few studies describe collision-free path planning for agricultural manipulators (Table 7), mostly using the RRT algorithm as it has higher efficiency and low processing time, and could be applied successfully to multi-DoF (up to 12 DoF) manipulators (Cao et al., 2019; LaValle, 1998). ...
Article
Automation and robotics have been widely applied in many agricultural operations; however, the production operations of apple trees are usually performed manually. Pruning is one of the most labor-intensive operations, accounting for about 20% of the total labor costs. Robotic pruning is a potential long-term solution to the issue of labor shortages and the associated high costs. In recent years, researchers have advanced technologies that could lead to the development of a robotic pruning system. However, numerous challenges are involved in the successful adoption of robotic pruning technologies. This review highlights the technological progress in the core components of a robotic pruner, including machine vision, the manipulator, the end-effector, path planning, and obstacle avoidance; the challenges associated with the existing technologies; and potential solutions for the development of an automated pruning system. The review covers both scientific/research and commercial technologies developed for robotic tree pruning. The available literature related to all technological components of a robotic pruning system was reviewed in detail, and useful information was synthesized for presentation in the paper. Finally, the paper scrutinizes the challenges and potential future opportunities for developing a robotic pruning system for a sustainable apple production system.
... Spectral features such as the average of the third component in the YIQ colour space are discussed by Sabzi et al. (2018). Campos et al. (2017) used statistical visual texture features such as the autocorrelation of the degree of similarity of the elements in an image. A dynamic programming technique that uses prior knowledge of the geometric structure for crop row detection was developed by García-Santillán et al. (2018) and Vidović et al. (2016). ...
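As a rough illustration of the GLCM-style statistical texture features mentioned in this snippet (not the exact feature set of Campos et al. (2017); the offsets, angles, and the autocorrelation formula below are generic choices), a grey-level co-occurrence matrix and a few statistics derived from it can be computed with scikit-image:

```python
# Sketch: GLCM-based texture statistics for an 8-bit grayscale image patch.
# Offsets/angles are illustrative; autocorrelation is computed from the matrix.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_features(patch_u8, distances=(1,), angles=(0, np.pi / 2)):
    """patch_u8: 2-D uint8 array -> dict of texture statistics."""
    glcm = graycomatrix(patch_u8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    feats = {name: graycoprops(glcm, name).mean()
             for name in ("contrast", "correlation", "energy", "homogeneity")}

    # Autocorrelation-style statistic: sum_{i,j} i * j * p(i, j),
    # averaged over the requested distances and angles.
    i, j = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
    p = glcm.astype(float)  # shape: (levels, levels, n_distances, n_angles)
    feats["autocorrelation"] = (
        (i[..., None, None] * j[..., None, None] * p).sum(axis=(0, 1)).mean()
    )
    return feats
```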
Article
In the Indian agricultural industry, weedicides are sprayed on crops collectively, without taking into consideration whether weeds are present. More intelligent methods should be adopted to guarantee that the soil and crops obtain exactly what they need for optimum health and productivity in smart agriculture. In the smart farming industry, the use of camera-equipped robotic systems for case-specific treatments is on the rise. In this paper, crops and weeds have been efficiently differentiated by first applying feature extraction methods followed by machine learning algorithms. The features of the weed and crop are extracted using speeded-up robust features (SURF) and the histogram of gradients (HoG). Logistic regression and support vector machine algorithms are used for classification of weed and crop. The method that uses the histogram of gradients for feature extraction and a support vector machine for classification shows better results than the other methods. This model is deployed on a field robot as a weed detection system. The system helps spray weedicide only where it is required, thereby eliminating manual engagement with harmful chemicals and also reducing the amount of toxic chemicals that enters the food supply. This automated system ultimately helps sustainable smart farming for agricultural growth.
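A minimal sketch of a HOG-plus-SVM pipeline of the kind this abstract describes, assuming fixed-size grayscale patches that have already been cropped from field images; the HOG parameters, the linear kernel, and the train/test split are illustrative assumptions, not the authors' settings.

```python
# Sketch: classify crop vs. weed patches with HOG features and an SVM.
# Assumes each patch is a grayscale array of the same fixed shape.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def hog_features(patches):
    """patches: iterable of HxW grayscale arrays -> 2-D feature matrix."""
    return np.array([
        hog(p, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for p in patches
    ])

def train_crop_weed_classifier(patches, labels):
    X = hog_features(patches)
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels,
                                              test_size=0.2, random_state=0)
    clf = SVC(kernel="linear").fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))  # per-class metrics
    return clf
```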
... With the ever-growing availability of VHR remote-sensing images, conventional pixel-based classification approaches proved sensitive in these studies. Detecting and counting objects are effective pixel-based feature extraction methods commonly used in computer vision and image processing applications (Campos et al., 2017). The core task for object recognition from UAV data is to provide relevant data sets for crop and orchard management. ...
Article
Manual inspection has been a common approach for counting trees and plants in orchards in precision agriculture processes. However, it is a time-consuming, labour-intensive, and expensive task. Recent remote sensing tools and methods provide a revolutionizing alternative to manual detection for monitoring individual trees and recognizing crops, useful for long-term agricultural management. Our study adopted a Connected Components Labeling (CCL) algorithm to detect and count citrus trees in two agricultural patches based on high-resolution Unmanned Aerial Vehicle (UAV) images. The workflow consisted of applying morphological image operation algorithms on multi-spectral, 5-band orthophoto imagery (derived from 1560 scenes) with 3.57 cm spatial resolution. Our approach was able to count 1462 out of 1506 trees, resulting in accuracy and precision higher than 95% (average Recall: 0.97, Precision: 0.95) in heterogeneous agricultural patches (multiple trees and tree sizes). To our understanding, this is the first time a CCL algorithm has been used with UAV multi-spectral images for detecting citrus trees. It performed well for geolocating and counting the trees individually in a heterogeneous orchard. We concluded that our methodology provided satisfactory performance in predicting the number of trees (in the citrus case study) in dense patches. Therefore it could be a promising replacement for conventional tree detection techniques for detecting orchard trees in complex agricultural regions.
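The connected-component step at the heart of this kind of tree-counting workflow might look like the sketch below (SciPy-based; the vegetation-index threshold, the morphological cleanup, and the minimum-area filter are illustrative assumptions rather than the parameters used in the study):

```python
# Sketch: count tree crowns by thresholding a vegetation index and
# labeling connected components. All parameter values are illustrative.
import numpy as np
from scipy import ndimage

def count_trees(red, nir, ndvi_threshold=0.4, min_area_px=200):
    """red, nir: 2-D float arrays from a multispectral orthophoto."""
    ndvi = (nir - red) / (nir + red + 1e-9)            # vegetation index
    mask = ndvi > ndvi_threshold                        # candidate canopy pixels
    mask = ndimage.binary_opening(mask, iterations=2)   # remove speckle

    labels, n = ndimage.label(mask)                     # connected components
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(areas >= min_area_px) + 1     # drop tiny blobs
    return len(keep), labels * np.isin(labels, keep)
```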
... Guijarro et al. reduced texture to colour information for segmenting pictures from an optical camera into plant, soil, and sky regions [10], while Campos et al. aimed at the discrimination of plants, soil, and disturbing objects based on a camera mounted on a tractor. They used statistical features, including those derived from the GLCM, and SVM classification [11]. GLCMs are also investigated by [12], where they performed worse than histograms of gradients (HoG). ...
... Many researchers have applied computer vision technology to autonomous weeding systems and explored various methods to detect crops or weeds in digital images since the 1980s. Traditional methods usually separate crops from weeds and soil based on features including: color [3], [7], [8]; shape [9], [10], [11]; texture [12], [13], [14]; depth [15], [16]; spatial distribution [4], [17], [18]; and combinations of these features. However, these methods use manually designed features based on prior knowledge of existing datasets, which limits their generalization ability. ...
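As one concrete example of the colour-feature family cited in this snippet (a generic hand-crafted baseline, not the method of any specific referenced work), vegetation can be separated from soil with the excess-green index followed by Otsu thresholding:

```python
# Sketch: vegetation/soil separation with the excess-green (ExG) colour index,
# shown only to illustrate the hand-crafted colour-feature family.
import numpy as np
from skimage.filters import threshold_otsu

def excess_green_mask(rgb):
    """rgb: HxWx3 float array in [0, 1]; returns a boolean vegetation mask."""
    s = rgb.sum(axis=2) + 1e-9
    r, g, b = (rgb[..., i] / s for i in range(3))   # chromatic coordinates
    exg = 2.0 * g - r - b                           # excess-green index
    return exg > threshold_otsu(exg)                # adaptive global threshold
```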
Article
Full-text available
Crop recognition is one of the key processes for robotic weeding in precision agriculture, and it remains an open problem due to the unstructured field environment and the wide variety of plant species. It becomes especially challenging when weeds are prominent and overlap with the crop plants. This paper presents a novel method for recognizing crop plants in field images with a high weed presence. The method segments crop plants from overlapping weeds based on the visual attention mechanism of the human visual system using a convolutional neural network. The network uses ResNet-10 as the backbone, while introducing side outputs and short connections for multi-scale feature fusion. The Adaptive Affinity Fields method is adopted to improve the segmentation at object boundaries and for fine structures. To train and test the network, a field image dataset was created consisting of 788 color images with manually segmented annotations. The images were captured under challenging conditions with extremely high weed pressure. The experimental results show that the proposed method can accurately segment crops from weeds and soil, with mean absolute errors less than 0.005 and F-measure scores exceeding 97%. In terms of efficiency, the proposed method can process up to 169 images per second when accelerated by an NVIDIA RTX 2080Ti graphics processing unit (GPU), and operates at approximately 5.6 Hz on a Jetson TX2 embedded computer. The results indicate that the proposed method has the potential to provide an efficient solution for recognizing crop plants, even in the presence of severe weed growth. The code and the dataset are available at https://github.com/ZhangXG001/Real-Time-Crop-Recognition.
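A minimal PyTorch sketch of the multi-scale side-output fusion idea mentioned in this abstract is shown below; it is not the authors' network, and the backbone interface (a module returning one feature map per stage) and channel counts are assumptions.

```python
# Sketch: project multi-scale backbone features to side outputs, upsample
# them to input resolution, and fuse them into one segmentation map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SideOutputFusion(nn.Module):
    """Assumes `backbone(x)` returns a list of feature maps, one per stage."""

    def __init__(self, backbone, stage_channels=(64, 128, 256, 512)):
        super().__init__()
        self.backbone = backbone
        self.side_heads = nn.ModuleList(
            nn.Conv2d(c, 1, kernel_size=1) for c in stage_channels
        )
        self.fuse = nn.Conv2d(len(stage_channels), 1, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        stages = self.backbone(x)
        sides = [
            F.interpolate(head(f), size=(h, w),
                          mode="bilinear", align_corners=False)
            for head, f in zip(self.side_heads, stages)
        ]
        return torch.sigmoid(self.fuse(torch.cat(sides, dim=1)))
```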
Chapter
Seeking to improve classification, we propose a hybrid algorithm to identify corn plants and weeds, with the aim of improving the fertilization and herbicide application processes. An efficient process can avoid wasted fertilizer and decrease subsoil contamination. The purpose is to identify the corn plant so that fertilizer can be applied in an automated and precise way, whereas identifying the weed allows herbicide to be applied directly. In this work we propose a hybrid method with Convolutional Neural Networks (CNN) to extract features from images and Support Vector Machines (SVM) for classification. We obtained an effectiveness of 98%, higher than results reported in the state of the art.
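A minimal sketch of the CNN-features-plus-SVM hybrid described in this chapter, using a generic pretrained torchvision backbone as the feature extractor; the backbone choice, preprocessing, and SVM kernel are assumptions, not the authors' configuration.

```python
# Sketch: hybrid pipeline, pretrained CNN features -> SVM classifier.
# Backbone and preprocessing are illustrative assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

# Pretrained backbone with the classification head removed (outputs 512-D).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.ToTensor(),                       # HxWx3 uint8 -> 3xHxW float in [0, 1]
    T.Resize((224, 224)),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def cnn_features(images):
    """images: list of HxWx3 uint8 RGB arrays -> (N, 512) feature matrix."""
    batch = torch.stack([preprocess(img) for img in images])
    return backbone(batch).numpy()

def train_hybrid_classifier(images, labels):
    X = cnn_features(images)
    return SVC(kernel="rbf").fit(X, labels)   # SVM on deep features
```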
Article
Fast and accurate perception of crop plants is critical for robotic weeding. However, due to their similar appearance, it remains challenging to precisely and efficiently segment crop plants from weeds, especially when the weed density is high. To tackle this problem, we devise an ultra-fast bi-phase advanced network (BA-Net) architecture that advances both the feature extraction phase and the model training phase. A bi-path fusion tree structure and a bi-polar distributed loss are introduced for the two phases, respectively. The bi-path fusion tree merges features from two adjacent convolutional stages at each step and efficiently models the scene through a tree-like structure. The bi-polar distributed loss strengthens the training at ambiguous areas, which can greatly enhance the distinctness of detailed structures like crop plant boundaries. Experiments demonstrate the superiority of the proposed BA-Net over other state-of-the-art methods in terms of both accuracy and efficiency. BA-Net can accurately segment crop plants from the weedy background, achieving an intersection over union score of 0.963, an F-measure score of 0.981, and a mean absolute error of 0.00376. Also, BA-Net is lightweight, with only 0.55 M trainable parameters, and achieves ultra-fast speed, i.e., over 340 FPS on a desktop and over 26 FPS on a Jetson TX2 embedded computer. The code of BA-Net is available at https://github.com/ZhangXG001/BA-Net.
Article
Full-text available
Population growth and rising standards of living have led to the use of new methods to produce healthy, high-quality food. Owing to the remarkable development of image processing and machine vision methods, much research has been presented to provide effective image-processing-based solutions in various fields of precision agriculture. Researchers now widely use image processing in smart farming and gardening, including planting, weed control, irrigation, spraying, fertilization, plant growth monitoring, and crop harvesting. In addition to reducing production costs and agricultural inputs, these methods also have significant effects on the conservation of water resources and the protection of the environment. In this review study, image processing-based methods introduced so far for smart weed control, local spraying, and irrigation management are investigated. The performance of each method is also evaluated, along with its advantages and disadvantages. Finally, future perspectives for image processing applications in precision agriculture are presented.