Chapter

Pest Detection in Olive Groves Using YOLOv7 and YOLOv8 Models


Abstract

Modern agriculture faces major challenges in feeding a fast-growing world population in a sustainable way. One of the most important of these challenges is the increasing destruction caused by pests to important crops. Controlling and managing pests is essential to reduce the losses they cause; however, pest detection and monitoring are very resource-consuming tasks. Recent developments in computer vision-based technology have made it possible to automate pest detection efficiently. In Mediterranean olive groves, the olive fly (Bactrocera oleae Rossi) is considered the key pest of the crop. This paper presents olive fly detection using the lightweight YOLO-based models of versions 7 and 8, respectively YOLOv7-tiny and YOLOv8n. The proposed object detection models were trained, validated, and tested using two different image datasets collected at various locations in Portugal and Greece. The images consist of yellow sticky trap photos and McPhail trap photos with olive fly exemplars. The performance of the models was evaluated using precision, recall, mAP.50, and mAP.95. The best performance of the YOLOv7-tiny model is 88.3% precision, 85% recall, 90% mAP.50, and 53% mAP.95. The best performance of the YOLOv8n model is 85% precision, 85% recall, 90% mAP.50, and 55% mAP.95. The YOLOv8n model achieved worse results than YOLOv7-tiny for a dataset without negative images (images without olive fly exemplars). Aiming at installing an experimental prototype in the olive grove, the YOLOv8n model was implemented on a Raspberry Pi 3 microcomputer running Ubuntu Server 23.04.
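For readers who want to reproduce this kind of pipeline, a minimal sketch using the Ultralytics API is shown below. This is not the authors' code: the dataset configuration file name is a placeholder and the training settings are illustrative.

```python
# Minimal sketch (not the authors' code): fine-tune and validate YOLOv8n
# with the Ultralytics API, then export it in a lighter format for edge
# deployment on a device such as a Raspberry Pi.
# "olive_fly.yaml" is a hypothetical dataset config pointing at the trap images.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                       # pretrained nano model
model.train(data="olive_fly.yaml", epochs=100, imgsz=640)

metrics = model.val()                            # precision, recall, mAP values
print(metrics.box.map50, metrics.box.map)        # mAP.50 and mAP.50:.95

model.export(format="onnx")                      # portable format for the Pi
```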


Article
Full-text available
Traditional camera sensors rely on human eyes for observation. However, human eyes are prone to fatigue when observing objects of different sizes for a long time in complex scenes, and human cognition is limited, which often leads to judgment errors and greatly reduces efficiency. Object recognition technology is an important technology used to judge an object's category on a camera sensor. To address this problem, a small-size object detection algorithm for special scenarios is proposed in this paper. Its advantage is that it not only achieves higher precision for small-size object detection but also ensures that the detection accuracy for each size is not lower than that of existing algorithms. There are three main innovations in this paper: (1) a new downsampling method that better preserves context feature information; (2) an improved feature fusion network that effectively combines shallow and deep information; (3) a new network structure that effectively improves the detection accuracy of the model. In terms of detection accuracy, it is better than YOLOX, YOLOR, YOLOv3, scaled YOLOv5, YOLOv7-Tiny, and YOLOv8. Three authoritative public datasets are used in the experiments: (a) on the VisDrone dataset (small-size objects), the mAP, precision, and recall of DC-YOLOv8 are 2.5%, 1.9%, and 2.1% higher than those of YOLOv8s, respectively; (b) on the TinyPerson dataset (minimal-size objects), the mAP, precision, and recall of DC-YOLOv8 are 1%, 0.2%, and 1.2% higher than those of YOLOv8s, respectively; (c) on the PASCAL VOC2007 dataset (normal-size objects), the mAP, precision, and recall of DC-YOLOv8 are 0.5%, 0.3%, and 0.4% higher than those of YOLOv8s, respectively.
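The abstract does not specify the new downsampling method. One common way to downsample without discarding context in this family of detectors is space-to-depth followed by a pointwise convolution; the sketch below illustrates that general idea only and is not the DC-YOLOv8 module itself.

```python
# Illustrative sketch only: space-to-depth downsampling preserves every input
# pixel by moving 2x2 spatial blocks into channels, unlike stride-2
# convolution or pooling, which discard information.
import torch
import torch.nn as nn

class SpaceToDepthDown(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        # 4*c_in channels after rearranging each 2x2 patch into channels
        self.proj = nn.Conv2d(4 * c_in, c_out, kernel_size=1)

    def forward(self, x):
        x = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                       x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
        return self.proj(x)

feat = torch.randn(1, 64, 80, 80)
print(SpaceToDepthDown(64, 128)(feat).shape)  # torch.Size([1, 128, 40, 40])
```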
Article
Full-text available
The most incredible diversity, abundance, spread, and adaptability in biology are found in insects. The foundation of insect study and pest management is insect recognition. However, most current insect recognition research depends on a small number of insect taxonomy experts. Thanks to the rapid advancement of computer technology, computers can be used instead of professionals to differentiate insects accurately. In this insect recognition and classification investigation, the YOLOv5 model, with five different state-of-the-art object detection techniques, has been used to identify insects with subtle differences between subcategories. To enhance the critical information in the feature map and weaken the supporting information, both channel and spatial attention modules are introduced, improving the network's capacity for recognition. The experimental findings show that the F1 score approaches 0.90 and the mAP value reaches 93% through learning on the self-made pest dataset. The F1 score increased by 0.02 and the mAP increased by 1% compared to other YOLOv5 models, demonstrating the success of the upgraded YOLOv5-based insect detection system.
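The channel-plus-spatial attention pattern described above is commonly realized in the style sketched below (a CBAM-like module); the paper's exact modules may differ, so treat this as a generic illustration.

```python
# Hedged sketch of a generic channel + spatial attention module of the kind
# the abstract describes: channel attention re-weights feature maps, then
# spatial attention re-weights positions.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(          # shared MLP for channel attention
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = x.mean(dim=(2, 3))           # global average pool -> (b, c)
        mx = x.amax(dim=(2, 3))            # global max pool -> (b, c)
        ca = torch.sigmoid(self.mlp(avg) + self.mlp(mx)).view(b, c, 1, 1)
        x = x * ca                          # emphasize informative channels
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)))
        return x * sa                       # emphasize informative positions
```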
Article
Full-text available
Pest detection in plants is essential for ensuring high productivity. Convolutional neural networks (CNN)-based deep learning advancements recently have made it possible for researchers to increase object detection accuracy. In this study, pest detection in plants with higher accuracy is proposed by an improved YOLOv5m-based method. First, the SWin Transformer (SWinTR) and Transformer (C3TR) mechanisms are introduced into the YOLOv5m network so that they can capture more global features and can increase the receptive field. Then, in the backbone, ResSPP is considered to make the network extract more features. Furthermore, the global features of the feature map are extracted in the feature fusion phase and forwarded to the detection phase via a modification of the three output necks C3 into SWinTR. Finally, WConcat is added to the fusion feature, which increases the feature fusion capability of the network. Experimental results demonstrate that the improved YOLOv5m achieved 95.7% precision rate, 93.1% recall rate, 94.38% F1 score, and 96.4% Mean Average Precision (mAP). Meanwhile, the proposed model is significantly better than the original YOLOv3, YOLOv4, and YOLOv5m models. The improved YOLOv5m model shows greater robustness and effectiveness in detecting pests, and it could more precisely detect different pests from the dataset.
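As a quick consistency check, the reported F1 score follows directly from the stated precision and recall via the standard formula:

```latex
F_1 = \frac{2PR}{P+R}
    = \frac{2 \times 0.957 \times 0.931}{0.957 + 0.931}
    \approx 0.9438
```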
Article
Full-text available
Insect pests are a major element influencing agricultural production. According to the Food and Agriculture Organization (FAO), pests cause an estimated 20–40% loss of global crop production each year, which makes them a major challenge for crop production. These insect pests cause sooty mold disease by sucking the sap from the crop's organs, especially leaves, fruits, stems, and roots. To control these pests, pesticides are frequently used because they are fast-acting and scalable. Due to environmental pollution and health awareness, reduced use of pesticides is recommended. One salient approach is to limit the wide use of pesticides by spraying on demand; to perform spot spraying, the location of the pest must first be determined. The growing population and increasing food demand therefore emphasize the development of novel methods and systems for agricultural production that address environmental concerns and ensure efficiency and sustainability. To accurately identify these insect pests at an early stage, insect pest detection and classification have recently come into high demand. Thus, this study aims to develop an object recognition system for the detection and classification of crop-damaging insect pests. The current work proposes an automatic system, in the form of a smartphone IP-camera, to detect insect pests from digital images/videos and reduce farmers' reliance on pesticides. The proposed approach is based on YOLO object detection architectures, including YOLOv5 (n, s, m, l, and x), YOLOv3, YOLO-Lite, and YOLOR. For this purpose, we collected 7046 images in the wild under different illumination and background conditions to train the underlying object detection approaches. We trained and tested the object recognition system with different parameters from scratch, and the eight models were compared and analyzed. The experimental results show that the average precision (AP@0.5) of the eight models, including YOLO-Lite, YOLOv3, YOLOR, and YOLOv5 with five different scales (n, s, m, l, and x), reaches 51.7%, 97.6%, 96.80%, 83.85%, 94.61%, 97.18%, 97.04%, and 98.3%, respectively. The larger the model, the higher the average accuracy of the detection validation results. We observed that the YOLOv5x model is fully functional and can correctly identify the twenty-three species of insect pests in 40.5 milliseconds (ms). The developed YOLOv5x model achieves state-of-the-art performance, with an average precision (mAP@0.5) of 98.3%, an mAP@0.5:0.95 of 79.8%, a precision of 94.5%, a recall of 97.8%, and an F1-score of 96% on our IP-23 dataset. The results show that the system works efficiently and was able to correctly detect and identify insect pests, which can be employed for realistic application while farming.
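A comparison across the public YOLOv5 scales can be set up roughly as sketched below; this is not the authors' pipeline, the dummy-tensor timing is only illustrative, and real measurements would use the actual trap/field images.

```python
# Sketch (assumed setup): load the public YOLOv5 scales via torch.hub and
# time a single forward pass each, to compare model size vs. latency.
import time
import torch

for scale in ["yolov5n", "yolov5s", "yolov5m", "yolov5l", "yolov5x"]:
    model = torch.hub.load("ultralytics/yolov5", scale, pretrained=True)
    img = torch.zeros(1, 3, 640, 640)   # dummy input; real use passes images
    t0 = time.time()
    _ = model(img)
    print(f"{scale}: {(time.time() - t0) * 1000:.1f} ms")
```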
Article
Full-text available
Modern agriculture is facing unique challenges in building a sustainable future for food production, in which the reliable detection of plantation threats is of critical importance. The breadth of existing information sources, and their equivalent sensors, can provide a wealth of data which, to be useful, must be transformed into actionable knowledge. Approaches based on Information and Communication Technologies (ICT) have been shown to help farmers and related stakeholders make decisions on problems by examining large volumes of data while assessing multiple criteria. In this paper, we address the automated identification (and instance counting) of the major threat to olive trees and their fruit, the Bactrocera oleae (a.k.a. Dacus), based on images of the contents of the commonly used McPhail trap. Accordingly, we introduce the "Dacus Image Recognition Toolkit" (DIRT), a collection of publicly available data, programming code samples, and web services focused on supporting research into the management of the Dacus, together with extensive experimentation on the capability of the proposed dataset to identify Dacuses using deep learning methods. Experimental results indicated a performance accuracy (mAP) of 91.52% in identifying Dacuses in trap images featuring various pests. Moreover, the results indicated a trade-off between image attributes affecting detail, file size, and complexity of approaches on the one hand and mAP performance on the other, which can be selectively exploited to better address the needs of each usage scenario.
Article
Full-text available
In this paper, a new object-based framework to detect shadow areas in high-resolution satellite images is proposed. To produce a pixel-level shadow map, state-of-the-art supervised machine learning algorithms are employed. Automatic ground-truth generation based on Otsu thresholding of shadow and non-shadow indices is used to train the classifiers. This is followed by segmenting the image scene to create image objects. To detect shadow objects, a majority vote over the pixel-based shadow detection result is applied. A GeoEye-1 multispectral image over an urban area in the city of Qom, Iran, is used in the experiments. Results show the superiority of the proposed method over traditional pixel-based ones, both visually and quantitatively.
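The two core steps described above (Otsu thresholding on a shadow index, then per-segment majority voting) can be sketched compactly as follows; the shadow index and the segmentation are assumed to be computed elsewhere, and this is not the paper's implementation.

```python
# Hedged sketch: Otsu-threshold a shadow index image into pixel labels,
# then let each segment take the majority label of its pixels.
import numpy as np
import cv2

def shadow_map(index_img, segments):
    """index_img: float shadow index; segments: integer label image."""
    u8 = cv2.normalize(index_img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, pixel_shadow = cv2.threshold(u8, 0, 1, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    out = np.zeros_like(pixel_shadow)
    for lbl in np.unique(segments):
        mask = segments == lbl
        # A segment becomes a shadow object if most of its pixels vote "shadow".
        out[mask] = 1 if pixel_shadow[mask].mean() > 0.5 else 0
    return out
```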
Article
Fruit usually plays a vital role in people's daily life. Many kinds of fruit are rich in vitamins and trace elements and have high edible value. Pests and diseases are a considerable problem in the process of fruit planting, and the quality and quantity of fruit can be effectively improved by detecting and preventing them. However, if pests and diseases must always be identified and detected manually during fruit growth, a great deal of labor and material resources is inevitably consumed. It is therefore advisable to have an automated system to save unnecessary time and effort. This article introduces a Raspberry Pi-based detection and identification system for the pests and diseases of fruits such as longan and lychee. First, we constructed a knowledge graph of pests and diseases related to lychee and longan. Then, we used the Raspberry Pi to control a camera to capture images of pests and diseases. Next, the system processed and recognized the images captured by the camera. Finally, a Bluetooth speaker broadcast the results in real time. The knowledge graph was constructed through data collection, information extraction, knowledge fusion, and storage. We trained a VGG-16 model, which achieves 94.9% accuracy on the pest identification task, and deployed it on a Raspberry Pi.
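A minimal sketch of what such an on-device VGG-16 inference step could look like is shown below; the class names, weight file, and image path are hypothetical placeholders, and camera capture and speaker output are omitted.

```python
# Minimal sketch, assuming a torchvision VGG-16 fine-tuned for pest classes.
import torch
from torchvision import models, transforms
from PIL import Image

CLASSES = ["healthy", "pest_a", "pest_b"]        # hypothetical label set

model = models.vgg16()
model.classifier[6] = torch.nn.Linear(4096, len(CLASSES))   # replace final layer
model.load_state_dict(torch.load("pest_vgg16.pt", map_location="cpu"))  # assumed weights
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

with torch.no_grad():
    x = preprocess(Image.open("capture.jpg")).unsqueeze(0)  # frame from the Pi camera
    pred = CLASSES[model(x).argmax(1).item()]
print(pred)  # in the described system this result is spoken over Bluetooth
```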
Li, B., Chen, Y., Xu, H., Zhong, F.: Fast vehicle detection algorithm based on lightweight YOLOv7-tiny (2023)
Pereira, J.A.: Yellow sticky traps dataset olive fly (Bactrocera oleae) (2023)
Nieuwenhuizen, A.T., et al.: Raw data from yellow sticky traps with insects for training of deep learning convolutional neural network for object detection (2019)
Belde, S.: Noise removal in images using deep learning models (2021)
Tatar, N., Saadatseresht, M., Arefi, H., Hadavand, A.: A new object-based framework to detect shadows in high-resolution satellite imagery over urban areas. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1/W5, 713–717 (2015)