Figure A1. Reference systems and relations.

Source publication
Article
Full-text available
Machine vision systems are becoming increasingly common onboard agricultural vehicles (autonomous and non-autonomous) for different tasks. This paper provides guidelines for selecting machine-vision systems for optimum performance, considering the adverse conditions of these outdoor environments, with high variability in illumination, irregular...

Citations

... Machine vision is increasingly used in agriculture, especially on agricultural vehicles (autonomous and non-autonomous), where it serves various operations, including row detection (Fig. 11), special applications, identification, and monitoring. As the field progresses, machine vision is becoming imperative in autonomous vehicles [39,40,41,42]. The same authors [38] state that image sensors are used for numerous tasks in agriculture, such as guidance, weed detection, or phenotyping analysis. ...
Article
This paper analyses new types of autonomous systems used in agriculture. It presents self-guiding systems such as AGVs with varying degrees of operational autonomy, and explains both internal transport systems and autonomous vehicles for outdoor agriculture. New outdoor autonomous systems, such as specialized navigation systems and their agricultural applications, are also covered. Navigation systems based on GPS with RTK technology, camera-based vehicle guidance, and AI machine vision for manipulation are described. Light and laser technologies for fully autonomous robots, such as vehicle-mounted LiDAR systems for detecting the presence of pests and diseases, are presented. The paper emphasizes the advantages of AGVs arising from their autonomy and clean power sources with no harmful impact on the environment. Indoor navigation using the LTE Direct protocol is explained, including the Wi-Fi ceiling antenna and a wireless app for the horizontal movement of AGVs. Finally, the use of UAVs for warehouse inventory via web applications with an advanced AI-guided navigation system is described.
... It has been shown that data in the red part of the visible spectrum are sensitive to low chlorophyll content (a shoulder near 650 nm for Chl b absorption) and are therefore suitable for examining apples' ripeness and the ripening process (Nagy et al., 2016). The use of near-infrared is well known in remote sensing applications (Pajares et al., 2016; Tucker, 1979), where it is a useful band for plant identification and phenotyping, because plant tissue microstructure strongly scatters NIR light and no pigments absorb in this wavelength range (Ollinger, 2011). Moreover, it has been shown that using IR light reduces the variability of the acquired measurements due to lower pigment sensitivity and the greater penetration of IR light into samples (Contado et al., 2023). ...
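The red/NIR contrast described in this excerpt underlies the classic normalized difference vegetation index attributed to Tucker (1979). As a minimal illustration (not code from any of the cited works; the toy reflectance arrays are invented), the following sketch computes NDVI from two bands:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red).

    Plant tissue scatters NIR strongly while chlorophyll absorbs red,
    so vegetated pixels push the index toward +1.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # Guard against division by zero on dark pixels.
    return np.where(denom > 0, (nir - red) / denom, 0.0)

# Toy 2x2 reflectances: left column leafy canopy, right column bare soil.
nir_band = np.array([[0.60, 0.30], [0.55, 0.28]])
red_band = np.array([[0.08, 0.25], [0.10, 0.24]])
print(ndvi(nir_band, red_band))
```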
... Although both methods can detect row guidance lines, different environmental conditions, such as variations in color, brightness, saturation, contrast, reflection, shadow, occlusion, and noise within the same scene, can cause guidance line extraction to fail [15,16]. In addition, variations in wind intensity in the field cause plant movement, which blurs the plant image and leads to inaccurate crop center-point detection [17][18][19]. ...
Article
Full-text available
Complex farmland backgrounds and varying light intensities make the detection of guidance paths difficult, even with computer vision technology. In this study, a robust line extraction approach for vision-guided farming robot navigation is proposed. Crops, drip irrigation belts, and ridges are extracted using a deep learning method to form multiple navigation feature points, which are then fitted to a regression line using the least squares method. Deep-learning-driven methods are also used to detect weeds and unhealthy crops. Programmed proportional–integral–derivative (PID) speed control and fuzzy-logic-based steering control are embedded in a low-cost hardware system, helping a highly maneuverable farming robot maintain forward movement at a constant speed and perform selective spraying operations efficiently. The experimental results show that, under different weather conditions, the farming robot can maintain a deviation angle of 1 degree at a speed of 12.5 cm/s while spraying selectively. The effective weed coverage (EWC) and ineffective weed coverage (IWC) reached 83% and 8%, respectively, and the pesticide reduction reached 53%. A detailed analysis and evaluation of the proposed scheme are also presented in this paper.
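The least-squares step in this abstract is straightforward to sketch. Below is a minimal, hypothetical example (the point coordinates and the choice of regressing image column on row are assumptions, not the paper's implementation): detected feature-point centers are fitted to a single guidance line with numpy.

```python
import numpy as np

def fit_guidance_line(points: np.ndarray) -> tuple[float, float]:
    """Least-squares fit of a guidance line through feature points.

    Guidance lines in a forward-facing image run roughly vertically,
    so we regress column x on row y (x = a*y + b), which stays
    well-conditioned even for near-vertical lines.
    """
    x, y = points[:, 0], points[:, 1]
    a, b = np.polyfit(y, x, deg=1)
    return float(a), float(b)

# Hypothetical (x, y) pixel centers of crop / drip-belt detections.
pts = np.array([[312, 80], [318, 160], [305, 240], [310, 320], [300, 400]],
               dtype=float)
a, b = fit_guidance_line(pts)
print(f"x = {a:.3f} * y + {b:.1f}")
```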
... At present, traditional methods and deep-learning-based methods are the two main research directions in the field of crop row centerline recognition [8]. However, traditional crop row detection schemes are vulnerable to environmental factors such as light and shadow [9]. Deep learning algorithms can effectively overcome some of these limitations and can detect crop rows under changing illumination, object size and position, weed shading, and background [10,11]. ...
Article
Full-text available
Field crops are usually planted in rows, and accurate identification and extraction of crop row centerlines is key to autonomous navigation and safe operation of agricultural machinery. However, the diversity of crop species and morphology, as well as field noise such as weeds and light, often leads to poor crop detection in complex farming environments. In addition, the curvature of crop rows poses a challenge to the safety of farm machinery during travel. In this study, a combined multi-crop row centerline extraction algorithm is proposed based on an improved YOLOv8 (You Only Look Once v8) model, threshold DBSCAN (Density-Based Spatial Clustering of Applications with Noise) clustering, the least squares method, and B-spline curves. For the detection of multiple crops, a DCGA-YOLOv8 model is developed by introducing deformable convolution and a global attention mechanism (GAM) into the original YOLOv8 model. Deformable convolution captures more fine-grained spatial information and adapts to crops of different sizes and shapes, while the GAM focuses attention on the important feature areas of crops. The experimental results show that the F1-scores and mAP values of the DCGA-YOLOv8 model for Cabbage, Kohlrabi, and Rice are 96.4%, 97.1%, and 95.9% and 98.9%, 99.2%, and 99.1%, respectively, indicating good generalization and robustness. A threshold-DBSCAN algorithm is proposed to cluster each row of crops, with correct clustering rates for Cabbage, Kohlrabi, and Rice of 98.9%, 97.9%, and 100%, respectively. The least squares method and cubic B-spline curves are applied to fit straight and curved crop rows, respectively. In addition, this study constructs a risk optimization function for the wheel model to further improve the safety of agricultural machines operating between crop rows. The results indicate that the proposed method can effectively recognize and extract navigation lines for different crops in complex farmland environments and improve the safety and stability of visual navigation and field operation of agricultural machines.
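As a toy illustration of the row-clustering idea (the paper's threshold-DBSCAN variant is not reproduced here; this sketch uses scikit-learn's standard DBSCAN, and the detection centers are invented), plants can be grouped into rows by clustering on their image column alone:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical bounding-box centers (x, y) of crop detections, in pixels.
centers = np.array([
    [100, 50], [105, 150], [ 98, 250],   # row 1
    [300, 60], [295, 170], [305, 260],   # row 2
    [498, 55], [502, 160], [495, 255],   # row 3
], dtype=float)

# Plants in one row share a similar image column, so cluster on x only;
# eps acts as the row-spacing threshold.
labels = DBSCAN(eps=30.0, min_samples=2).fit_predict(centers[:, :1])

for row_id in sorted(set(labels) - {-1}):   # -1 marks DBSCAN noise
    row = centers[labels == row_id]
    print(f"row {row_id}: {len(row)} plants, mean column = {row[:, 0].mean():.0f}")
```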
... Such variability may hinder the accurate extraction of guiding lines. Moreover, fluctuating wind conditions in fields can induce plant motion, resulting in blurred images and potentially misidentifying crop center points [14][15][16][17]. With the advancements in high-speed computing technology, employing deep learning for navigation line extraction has gained traction [18,19]. ...
Preprint
Full-text available
This paper presents a deep-learning-based multi-guidance-line detection approach, enabling an autonomous four-wheeled agricultural robot to navigate strip farmland. The integration of Proportional-Integral-Derivative (PID) and Fuzzy Logic Control (FLC) systems optimizes the velocity and heading angle of the robot, facilitating smooth navigation along the identified guidance line. Cornering maneuverability is enhanced with real-time kinematic (RTK)-assisted Global Navigation Satellite System (RTK-GNSS) positioning. Additionally, the spraying system, combined with deep learning, can effectively identify weeds and crop nutrient deficiencies to achieve precise spraying. The guidance system prioritizes irrigation lines for navigation, with additional guidance from crop and furrow lines. Trigonometric analysis determines the angular deviation between the identified guidance line and the vertical line of the top-view image. Experiments under diverse weather conditions demonstrate stable navigation of the robot at 12.5 cm/s, achieving up to 90% accuracy in weed and nutrient deficiency identification. The spraying accuracy for weeds and deficiencies averages 87% and 73%, respectively, underscoring the system's contribution to sustainable and precision horticulture practices.
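The angular-deviation step is simple trigonometry. A minimal sketch follows (the two-point line representation and the sign convention are assumptions, not taken from the preprint):

```python
import math

def deviation_angle(p1: tuple[float, float], p2: tuple[float, float]) -> float:
    """Signed angle (degrees) between a guidance line and the vertical
    axis of a top-view image.

    p1, p2: two (x, y) pixel points on the line, y growing downward.
    A positive angle means the line leans right, so the controller
    should steer left to realign.
    """
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    # atan2(dx, dy) measures from the vertical (y) axis, not the x axis.
    return math.degrees(math.atan2(dx, dy))

print(f"{deviation_angle((320, 0), (350, 480)):.1f} deg")  # ~3.6 deg right
```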
... However, environmental factors such as lighting and shading have a profound impact on the accuracy and reliability of traditional crop row detection methods in practical applications. In recent years, the development of machine learning and deep learning has provided strong theoretical support for vision-based crop row detection [16]. In image segmentation, feature detection, target recognition, and other visual information processing tasks, machine learning can replace traditional methods, reducing the interference of environmental noise and vegetation overlap and improving the accuracy of crop row detection [17,18]. ...
Article
Full-text available
Crop row detection is one of the foundational and pivotal technologies for the navigation, guidance, path planning, and automated farming of agricultural robots and autonomous vehicles in row crop fields. However, due to the complex and dynamic agricultural environment, crop row detection remains a challenging task. The surrounding background, such as weeds, trees, and stones, can interfere with crop appearance and increase the difficulty of detection. Detection accuracy is also impacted by different growth stages, environmental conditions, curves, and occlusion. Therefore, appropriate sensors and multiple adaptable models are required to achieve high-precision crop row detection. This paper presents a comprehensive review of the methods and applications related to crop row detection for agricultural machinery navigation. Particular attention is paid to the sensors and systems used for crop row detection to improve perception and detection capabilities. The advantages and disadvantages of current mainstream crop row detection methods, including various traditional methods and deep learning frameworks, are also discussed and summarized. Additionally, applications to different crop row detection tasks, including irrigation, harvesting, weeding, and spraying, in various agricultural scenarios, such as dryland, paddy fields, orchards, and greenhouses, are reported.
... To set the searching-radius parameter and make it more consistent across different point cloud environments, automatic estimation of the searching radius ε was proposed, based on the average of each point's maximum nearest-neighbor distance [16]. For line detection in 2D point clouds returned by LiDAR, several algorithms are used, such as random sample consensus (RANSAC) [17,18], the Hough transform [19,20], the least squares method [21], and the line segment detector (LSD) [22]. Within simultaneous localization and mapping (SLAM), the GNSS-independent VineSlam localization and mapping method and the vineyard-specific path planner "AgRobPP" have been reported. ...
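One plausible reading of that ε heuristic (the exact formula in [16] is not reproduced here, and k is an assumed tuning parameter) is to average, over all points, the distance to the farthest of each point's k nearest neighbors:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def estimate_eps(points: np.ndarray, k: int = 4) -> float:
    """Data-driven DBSCAN radius: mean distance from each point to the
    farthest of its k nearest neighbors."""
    # k + 1 because each point's nearest neighbor is itself.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(points)
    dists, _ = nn.kneighbors(points)
    return float(dists[:, -1].mean())

rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 2))        # stand-in for a 2D LiDAR scan
print(f"estimated eps = {estimate_eps(cloud):.3f}")
```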
Article
Full-text available
Traditional Japanese orchards control the growth height of fruit trees for the convenience of farmers, which is unfavorable to the operation of medium- and large-sized machinery. A compact, safe, and stable spraying system could offer a solution for orchard automation. In the complex orchard environment, the dense tree canopy not only obstructs the GNSS signal but also creates low-light conditions that may impair object recognition by ordinary RGB cameras. To overcome these disadvantages, this study selected LiDAR as the single sensor for a prototype robot navigation system. Density-based spatial clustering of applications with noise (DBSCAN), K-means, and random sample consensus (RANSAC) machine learning algorithms were used to plan the robot's navigation path in a facilitated artificial-tree-based orchard system. Pure pursuit tracking and an incremental proportional–integral–derivative (PID) strategy were used to calculate the vehicle steering angle. In field tests on a concrete road, a grass field, and the facilitated artificial-tree-based orchard, with several formations of left and right turns evaluated separately, the position root mean square error (RMSE) of the vehicle was: 12.0 cm (right turn) and 11.6 cm (left turn) on the concrete road; 12.6 cm (right turn) and 15.5 cm (left turn) on grass; and 13.8 cm (right turn) and 11.4 cm (left turn) in the orchard. The vehicle was able to calculate the path in real time based on the position of the objects, operate safely, and complete the task of pesticide spraying.
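Pure pursuit itself reduces to one formula. The sketch below shows the textbook bicycle-model version (the wheelbase and lookahead values are invented, and the paper's incremental PID layer is omitted):

```python
import math

def pure_pursuit_steering(alpha: float, wheelbase: float, lookahead: float) -> float:
    """Textbook pure pursuit: delta = atan(2 * L * sin(alpha) / Ld).

    alpha:     angle between vehicle heading and the line to the
               lookahead point on the path (radians).
    wheelbase: distance between front and rear axles, L (m).
    lookahead: distance to the target point on the path, Ld (m).
    """
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

# Hypothetical: target point 5 deg off heading, 2 m ahead, 1.2 m wheelbase.
delta = pure_pursuit_steering(math.radians(5.0), wheelbase=1.2, lookahead=2.0)
print(f"steering angle = {math.degrees(delta):.2f} deg")
```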
... This complexity may explain why such approaches have not yet been commercially deployed, but with the rapid advance of high-performance computing systems, this may not present a barrier to adoption in the future. Nevertheless, the development stage seems to be inversely correlated with complexity, which may be explained by the difficulty of generating reliable data with a complex system operating in an uncontrolled environment with variable conditions such as lighting, soil surface, pests, and temperature (Pajares et al., 2016). Another aspect to consider is whether the yield monitoring approach is on board the harvester or used during the harvesting process, and the possibility of yield mapping prior to harvest. ...
Article
Full-text available
Yield maps provide a detailed account of the crop production and potential revenue of a farm. This level of detail enables a range of possibilities, from improving input management and conducting on-farm experimentation to generating profitability maps, thus creating value for farmers. While this technology is widely available for field crops such as maize, soybean, and grain, few yield sensing systems exist for horticultural crops such as berries, field vegetables, or orchard fruit. Nevertheless, a wide range of techniques and technologies have been investigated as potential means of sensing crop yield in horticultural crops. This paper reviews yield monitoring approaches, which can be divided into proximal (either direct or indirect) and remote measurement principles, and reviews remote sensing as a way to estimate and forecast yield prior to harvest. For each approach, the basic principles are explained, along with examples of application in horticultural crops and success rates. The different approaches provide either a deterministic result (direct measurement of weight, for instance) or an empirical one (capacitance measurements correlated with weight, for instance), which may impact transferability. The discussion also covers the level of precision required for different tasks, as well as trends and future perspectives. This review demonstrates the need for more commercial solutions to map the yield of horticultural crops. It also shows that several approaches have demonstrated high success rates and that combining technologies may be the best way to provide sufficient accuracy and robustness for future commercial systems.
... In addition, neural networks, support vector machines, and deep learning [36][37][38][39] are also common image segmentation methods. Other specialized machine vision methods include the use of a variable field of view [40], adaptive thresholds [41], and spectral filters for automatic vehicle guidance and obstacle detection [42]. Image segmentation based on the Otsu method [23] is very favorable when there is a large variance between the foreground and background of the image, and many fast image segmentation methods with high real-time performance have also been proposed [29][30][31]. ...
Article
Full-text available
Due to the complex environment in the field, using machine vision technology to enable a robot to travel autonomously is a challenging task. This study investigates a method based on mathematical morphology and the Hough transform for drip tape following by a two-wheeled robot trailer. First, an image processing pipeline was used to extract the drip tape from the image, including selection of the region of interest (ROI), Red-Green-Blue (RGB) to Hue-Saturation-Value (HSV) color space conversion, color channel selection, Otsu binarization, and morphological operations. Line segments were then obtained from the extracted drip tape image by a Hough line transform. Next, the deviation angle between the line segment and the vertical line at the center of the image was estimated through the two-dimensional law of cosines. The steering control system adjusts the rotation speeds of the left and right wheels of the robot to reduce the deviation angle, so that the robot travels stably along the drip tape, including through turns. The guiding performance was evaluated on a test path formed by a drip tape in the field. The experimental results show that the proposed method achieves an average line detection rate of 97.3% and an average lateral error of 2.6 ± 1.1 cm, which is superior to other drip-tape-following methods combined with edge detection, such as Canny and Laplacian.
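The extraction pipeline maps naturally onto OpenCV primitives. The sketch below is a hedged reconstruction, not the authors' code: the ROI bounds, the saturation-channel choice, the kernel size, and the Hough parameters are all illustrative guesses, and the deviation angle is computed with atan2 rather than the paper's law-of-cosines formulation.

```python
import cv2
import numpy as np

def drip_tape_deviation(frame_bgr: np.ndarray):
    """Return the deviation angle (deg) of the drip tape from vertical,
    or None if no line is found."""
    h = frame_bgr.shape[0]
    roi = frame_bgr[h // 2:, :]                          # lower half as ROI
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    sat = hsv[:, :, 1]                                   # channel selection
    _, mask = cv2.threshold(sat, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle
    lines = cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    if lines is None:
        return None
    # Take the longest segment and measure its angle from vertical.
    x1, y1, x2, y2 = max(lines[:, 0],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    return float(np.degrees(np.arctan2(x2 - x1, y2 - y1)))
```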
... Sensors record physical data from the environment and convert them into digital measurements that can be processed by the system. Determining the exact position of sensors on a tractor presupposes knowledge of both the operation of each sensor (field of view, resolution, range, etc.) and the geometry of the tractor, so that, placed in the appropriate position onboard the vehicle, each sensor can perform to its maximum [58]. Navigation sensors can be either object sensors or pose sensors. ...
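As a back-of-the-envelope aid to that placement reasoning (purely illustrative; the mounting numbers are invented, and flat ground plus a pinhole camera are assumed), the ground coverage of a forward-tilted camera follows from its height, tilt, and field of view:

```python
import math

def ground_footprint(height: float, tilt_deg: float,
                     vfov_deg: float, hfov_deg: float):
    """Flat-ground coverage of a camera tilted below the horizontal.

    Returns (near, far) ground distances along the driving direction
    and the approximate swath width at the far edge, all in metres.
    Valid only while the upper FOV edge still hits the ground, i.e.
    tilt_deg > vfov_deg / 2.
    """
    near = height / math.tan(math.radians(tilt_deg + vfov_deg / 2))
    far = height / math.tan(math.radians(tilt_deg - vfov_deg / 2))
    slant = math.hypot(height, far)                 # range to the far edge
    width = 2 * slant * math.tan(math.radians(hfov_deg / 2))
    return near, far, width

# Hypothetical rig: camera 2 m up, tilted 25 deg down, 40 x 60 deg FOV.
near, far, width = ground_footprint(2.0, 25.0, 40.0, 60.0)
print(f"sees {near:.1f}-{far:.1f} m ahead, ~{width:.1f} m wide at the far edge")
```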
Article
Full-text available
Automatic navigation of agricultural machinery is an important aspect of Smart Farming. Intelligent agricultural machinery applications increasingly rely on machine vision algorithms to guarantee enhanced in-field navigation accuracy by precisely locating the crop lines and mapping the navigation routes of vehicles in real time. This work presents an overview of vision-based tractor systems. More specifically, it deals with (1) the system architecture, (2) the safety of usage, (3) the most commonly faced navigation errors, and (4) the navigation control system of tractors, and presents (5) state-of-the-art image processing algorithms for in-field navigation route mapping. In recent research, stereovision systems emerge as superior to monocular systems for real-time in-field navigation, demonstrating higher stability and control accuracy, especially in extensive crops such as cotton, sunflower, and maize. A detailed overview is provided for each topic, with illustrative examples that focus on specific agricultural applications. Several computer vision algorithms based on different optical sensors have been developed for autonomous navigation in structured or semi-structured environments, such as orchards, yet they are affected by illumination variations. The use of multispectral imaging can overcome these noise limitations and successfully extract navigation paths in orchards by exploiting the contrast between the trees' foliage and the sky background. Concisely, this work reviews the current status of self-steering agricultural vehicles and presents basic guidelines for adapting computer vision to autonomous in-field navigation.
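The claimed advantage of stereovision comes down to recovering metric depth. A minimal sketch of the underlying relation follows (the focal length and baseline are invented example values, not taken from the review):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px: float, baseline_m: float):
    """Classic stereo relation Z = f * B / d.

    Depth equals focal length (pixels) times baseline (metres) divided
    by disparity (pixels). Small disparity errors on distant crop rows
    inflate Z, which is why calibration and mounting geometry matter.
    """
    d = np.asarray(disparity_px, dtype=np.float64)
    return np.where(d > 0, focal_px * baseline_m / d, np.inf)

# Hypothetical rig: 700 px focal length, 12 cm baseline.
print(depth_from_disparity([35.0, 7.0], focal_px=700.0, baseline_m=0.12))
# -> [ 2.4 12. ]  (metres)
```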