Recognition and separation of fresh and rotten fruits using YOLO algorithm

Prem Bahadur Rana*
Department of Electronics and
Communication
Khwopa Engineering College
Bhaktapur, Nepal
prem2056rana@gmail.com
Kushal Shrestha*
Department of Electronics and
Communication
Khwopa Engineering College
Bhaktapur, Nepal
kushals2075@gmail.com
Chet Narayan Mandal*
Department of Electronics and
Communication
Khwopa Engineering College
Bhaktapur, Nepal
mandalchetnarayan@gmail.com
Ganesh Ram Dhonju
Department of Electronics and
Communication
Khwopa Engineering College
Bhaktapur, Nepal
gr.dhonju@khec.edu.np
Ujjwal Dahal*
Department of Electronics and
Communication
Khwopa Engineering College
Bhaktapur, Nepal
ujjwaldahal57@gmail.com
Abstract—Fruit quality evaluation is crucial in today's food processing and distribution systems to assure consumer safety and minimize food waste. Fruits are often sorted manually by visual examination, which is time-consuming, labor-intensive, and prone to error. To address these issues, we can use the capabilities of computer vision and deep learning to build a powerful, real-time fruit quality evaluation system. This paper presents a novel approach for the automated recognition and separation of fresh and rotten fruits on conveyor belts using the YOLO algorithm and a Raspberry Pi. For our project, we collected and annotated images of two fruits, apple and tomato, in different orientations and shades. The dataset is divided into four classes and is used to train a YOLO model. The integration of computer vision and deep learning techniques empowers the system to perform real-time fruit quality assessment, offering significant benefits in terms of accuracy, speed, and waste reduction. This research bridges the gap between technology and agriculture, showcasing the potential of AI-powered solutions to revolutionize food quality inspection processes.
Keywords—Fruits Detection, Dataset, You Only Look Once (YOLO), Deep Learning, Conveyor Belt, Computer Vision.
*Authors marked with * contributed equally.
I. INTRODUCTION
Fruits and vegetables are essential components of our
everyday diet. There are many varieties of edible fruits and
vegetables in the natural world. Fresh fruits are not only
wonderful to eat, but they also contain many vital vitamins
and minerals. In the food processing industry, fresh fruits
are used to make delicious and healthy food products [1].
Automatic fruit classification is an intriguing issue in
the fruit growing and retailing industry chain because it can
assist supermarkets and fruit growers in identifying various
fruits and their status from the stock or containers in order
to increase production efficiency and, ultimately, business
profit. As a result, in the past ten years, research into
intelligent systems combining computer vision and
machine learning techniques for fruit defect recognition,
ripeness grading, and categorization has been conducted
[2].
It becomes difficult to manage agriculture in a
sustainable way as the human population grows. By 2050,
the world's population is projected to increase by 40% to
9.7 billion people, necessitating a doubling of fruit output.
It is anticipated that the number of people employed in
agriculture will decline by half by 2050, leading to a
shortage of 5 million harvesters. Hence, more than 10% of
fruits in the world cannot be picked; this is equivalent to the
annual consumption of the European Union [3]. The
classification of fresh and rotten fruits is generally done by
humans, which is inefficient for fruit growers. We can use
robots for this purpose, as they do not get tired from
performing the same task repeatedly like humans do.
Fruit growers are facing a growing labor shortage as the
workforce's interest in agriculture has declined. The
problem is worsened by international travel restrictions
recently imposed during the pandemic, which have sharply
limited agricultural productivity due to a lack of migrant
workers. As a result, tons of fresh produce went
unharvested and rotted in the fields, where farms had long
depended on foreign seasonal workers [4].
Robotic harvesting can offer a possible solution to this
problem by reducing labor costs (longer durability and
good repeatability) and improving fruit quality assessment.
For these reasons, interest in the use of agricultural robots
in fruit and vegetable harvesting has grown over the last
three decades [5].
The use of intelligent harvesting robots to replace or
assist manual harvesting of fresh fruit is significant in terms
of reducing production costs and improving economic
returns. As a typical representative of an agricultural robot,
the fruit harvesting robot is considered to have good
potential for the future of intelligent agriculture and has
received widespread attention worldwide [6].
Identification of defective fruit and classification of
fresh and rotten fruit is one of the most important
challenges in agriculture. Rotten fruits can damage other
fresh fruits if they are not properly graded, and this grading
is generally done by hand, which is an inefficient and
lengthy process for farmers [7].
International Conference on Technologies for Computer, Electrical, Electronics & Communication (ICT-CEEL 2023)
Therefore, it is necessary to develop a new classification
model that reduces human effort, cost, and production time
in the agricultural industry by identifying fruit defect [8].
The continued development of machine learning (ML)
has led to significant advances in agricultural tasks. Deep
learning (DL) is widely used in fruit recognition and
automatic harvesting because it can extract high-
dimensional features from fruit images. In particular,
convolutional neural networks (CNN) have been shown to
achieve accuracy and speed comparable to humans in some
areas of fruit recognition and automatic harvesting.
Compared with digital image processing and traditional
ML techniques, fruit detection and recognition methods
based on CNN have great advantages in terms of accuracy
[9].
In recent years, deep learning has achieved great success
in image classification, object detection, face identification,
and many other computer vision tasks. Experimental data
show that deep learning is an effective tool for replacing
hand-crafted, experience-driven features with data-driven
learned representations [10].
Through our project, we have focused on automating the
recognition and separation of fresh and rotten fruits using
the YOLO algorithm. YOLO is a fast, accurate object
detector, making it ideal for computer vision applications
[11]. We connect YOLO to a webcam and verify that it
maintains real-time performance, including the time to
fetch images from the camera and display the detections.
We use dual web cameras to capture a maximal view of the
fruits, together with a conveyor belt to sort out rotten and
fresh fruits.
II. METHODOLOGY
Fig. 1. Block diagram of training model
For our project, we first collected images of two fruits:
tomato and apple. The images were classified into four
groups: "Fresh Apple", "Rotten Apple", "Fresh Tomato",
and "Rotten Tomato". A total of 8000 images of apples and
tomatoes were collected, 2000 for each class. Most of the
dataset was collected from sites such as Kaggle and
Roboflow; some of the images were taken by ourselves. We
annotated the images in Roboflow and divided the dataset
in two: train and test. We trained our dataset on
YOLOv7-tiny.
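The train/test split described above can be sketched as follows; the 80/20 ratio, file names, and helper function are illustrative assumptions, since the paper does not state the exact split.

```python
import random

def split_dataset(image_paths, train_fraction=0.8, seed=42):
    """Shuffle image paths and split them into train and test lists."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)  # fixed seed for a reproducible split
    cut = int(len(paths) * train_fraction)
    return paths[:cut], paths[cut:]

# Hypothetical file names standing in for the annotated dataset.
images = [f"fresh_apple_{i}.jpg" for i in range(10)]
train, test = split_dataset(images)
print(len(train), len(test))  # 8 2
```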
Roboflow is a computer vision developer framework
covering data collection, pre-processing, and model
training; it also provides datasets for public use. Kaggle is
another platform where the necessary datasets can be found.
After the collection of data, a text document of annotated
images was generated for creating the training model. The
annotation was done by drawing bounding boxes around the
fruits in the images. The fruits were labeled according to
their visible characteristics, i.e., whether they were
fresh or rotten and whether they were a tomato or an apple.
Roboflow was used to create the bounding boxes and label
the dataset with the classes "Fresh Apple", "Rotten
Apple", "Fresh Tomato", and "Rotten Tomato".
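Roboflow's YOLO-format export stores each bounding box as a plain-text line `class x_center y_center width height`, normalized to the image size. A minimal parser sketch; the class ordering and the example label line are hypothetical, as the actual index mapping depends on the export:

```python
# Class indices assumed to follow the order listed in the paper.
CLASSES = ["Fresh Apple", "Rotten Apple", "Fresh Tomato", "Rotten Tomato"]

def parse_yolo_label(line):
    """Parse one YOLO annotation line: class x_center y_center w h (normalized)."""
    parts = line.split()
    class_id = int(parts[0])
    x, y, w, h = map(float, parts[1:])
    return CLASSES[class_id], (x, y, w, h)

label, box = parse_yolo_label("1 0.50 0.40 0.30 0.25")
print(label, box)  # Rotten Apple (0.5, 0.4, 0.3, 0.25)
```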
Fig. 2. Detection of Fresh or Rotten Fruit in Conveyor Belt
Using Raspberry Pi
We started by labeling pictures of fruits to teach the
model what they look like. We then trained the model on a
GPU provided by our college, which allowed it to become
very good at recognizing the different fruit classes.

Next, we deployed the trained model onto a Raspberry
Pi, a small and versatile single-board computer. It became
the brain of our fruit sorting system.

To communicate with the Raspberry Pi, we used PuTTY,
which let us issue commands to the system from a terminal.
This made it easy to set things up just the way we wanted.
We needed to see the fruits in real time, so we used web
cameras as the eyes of the system, capturing clear images of
the fruits as they move along; good image quality helps the
system do its job well.

To transport the fruits, we built a conveyor belt. The belt
was driven by a stepper motor, which moves in small,
precise increments and is therefore well suited to belt
control.
Then comes the important part: separating the fresh
fruits from the not-so-fresh ones. We used a servo motor for
this job, which gently sorted each fruit into the right place,
making sure only the really fresh ones ended up in their own
container.

Fig. 3. Flow chart of detection of rotten or fresh fruits in
conveyor belt using Raspberry Pi
The flowchart shows the steps taken for recognizing and
separating rotten and fresh fruits, the two fruits being apple
and tomato. First, an image is captured by the web cameras
while the fruit is being transported by the conveyor belt.
This captured image is then given as input to the processor.
When a fruit is detected, the algorithm classifies which of
the classes it belongs to. If the fruit is rotten, it is sent to the
rotten fruit collector; otherwise, it is sent to the fresh fruit
collector. The separation is performed by the servo motor
after the fruit has been detected by the camera and
classified.
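The routing decision in the flowchart can be sketched as a small function; the class strings and collector names below are illustrative assumptions:

```python
def route_fruit(detected_class):
    """Decide which collector a detected fruit is sent to."""
    if detected_class.lower().startswith("rotten"):
        return "rotten_fruit_collector"
    return "fresh_fruit_collector"

print(route_fruit("Rotten Tomato"))  # rotten_fruit_collector
print(route_fruit("Fresh Apple"))    # fresh_fruit_collector
```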
III. EXPERIMENTATION
We trained our custom dataset using a pre-trained
YOLOv7-tiny model. Image size was kept at 640x640 and
batch size at 64. We fine-tuned the learning process with
specific settings: momentum of 0.937, weight decay of
0.0005, and an initial learning rate of 0.001. Training was
done on a powerful GeForce RTX 3080 Ti GPU with 16 GB
of RAM.
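The training run can be sketched as a YOLOv7 command line; the script path, flag names, and file names follow the public YOLOv7 repository and are assumptions here. In that code base, momentum (0.937), weight decay (0.0005), and the initial learning rate would live in the hyperparameter YAML rather than on the command line:

```python
# Sketch of the YOLOv7 training invocation; paths, file names, and the
# exact flag spellings are assumptions based on the public YOLOv7 repo.
def build_train_command():
    return [
        "python", "train.py",
        "--weights", "yolov7-tiny.pt",  # start from pre-trained YOLOv7-tiny
        "--img-size", "640",            # images resized to 640x640
        "--batch-size", "64",
        "--epochs", "100",              # consistent with the ~100-epoch loss graph
        "--data", "fruits.yaml",        # hypothetical 4-class dataset config
    ]

cmd = build_train_command()
print(" ".join(cmd))  # the command one would run from the yolov7 directory
```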
The progress of the model's learning was tracked with a
loss graph, shown in Fig. 4. This graph gives a visual
overview of how well the model was adapting to the data
throughout the training process. Overall, this approach
helped us create an accurate and efficient object detection
model.

Fig. 4. Loss Graph of YOLOv7-tiny Model
Figure 4 shows the graph of loss representation of our
project while training the data. A loss graph is a visual
representation that shows how the error between a model's
predictions and actual data changes as the model learns from
training data. The graph plots the loss value (a measure of
prediction error) on the y-axis and training iterations or
epochs on the x-axis. As training progresses, the loss value
ideally decreases, indicating that the model is improving its
predictions.
Fig. 5. F1 Curve
The F1 curve analysis demonstrates that the model is
achieving a high level of performance, with an F1 score of
0.96 at the chosen threshold of 0.526.
These results indicate the model's capability to effectively
classify instances across all classes while maintaining a
strong balance between precision and recall. The high F1
score suggests that our model is effectively capturing true
positive instances while minimizing false positives across
all classes at the specified threshold.
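The F1 score is the harmonic mean of precision and recall. A worked check with hypothetical detection counts chosen to reproduce the reported value of 0.96:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and their harmonic mean (F1)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts at the chosen confidence threshold of 0.526.
p, r, f1 = precision_recall_f1(tp=96, fp=4, fn=4)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.96 0.96 0.96
```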
Fig. 6. PR Curve
Figure 6 shows the graph of the PR curve. The PR curve
(Precision-Recall curve) is a graphical representation that
shows the trade-off between precision and recall for
different probability thresholds in a binary classification or
multiclass classification problem. It's often used to evaluate
models, especially when dealing with imbalanced datasets.
The resulting PR curve can be used to evaluate the
performance of the model and help determine the best
threshold for object detection in the specific application.
An mAP@0.5 value typically ranges between 0 and 1,
where higher values indicate better performance.
An mAP@0.5 of 0.986 suggests that the model's object
detection predictions are highly accurate, with a strong
balance between precision and recall at the IoU threshold of
0.5. These results reflect the model's ability to make
accurate predictions while maintaining a good trade-off
between different evaluation metrics.
Fig. 7. mAP of YOLOv7-tiny Model
The mAP@0.5 value of 0.986 indicates that our model
performs exceptionally well in recognizing objects
accurately, achieving a high level of agreement between
predicted and actual object locations when the overlap
threshold is set at 0.5. This suggests that our model is
highly effective at recognizing objects even when they
partially match the ground truth.
The mAP@0.5:0.95 value of 0.826 illustrates that our
model maintains strong performance across a broader range
of overlap thresholds, from 0.5 to 0.95. While the
performance slightly decreases compared to the mAP@0.5
result, it still demonstrates that our model is adept at
identifying objects with varying degrees of precision and
recall, considering stricter matching criteria.
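mAP@0.5 counts a detection as correct when its intersection-over-union (IoU) with the ground-truth box is at least 0.5, while mAP@0.5:0.95 averages over stricter thresholds. A minimal IoU sketch for axis-aligned (x1, y1, x2, y2) boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # 0 if boxes don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two half-overlapping boxes: intersection 50, union 150, IoU = 1/3.
print(round(iou((0, 0, 10, 10), (5, 0, 15, 10)), 3))  # 0.333
```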
The high F1 score, impressive AUC-PR value, and
outstanding mAP value collectively indicate our project's
proficiency in recognizing fresh and rotten fruits. This
technological solution has the potential to revolutionize the
fruit quality assessment process, benefitting consumers and
the food industry alike.
We employed NEMA 23 stepper motors as part of our
automated fruit quality detection system. These motors
were selected for their capability to provide precise and
controlled movement to our conveyor belt setup. The
conveyor belt's continuous rotation was achieved through
the use of a stepper motor, with the A4988 driver
controlling its operation. We introduced a time delay of
0.003 seconds for the conveyor belt's movement.
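The 0.003-second step delay fixes the belt's drive speed. Assuming 200 full steps per revolution (typical for a NEMA 23; the paper does not state the A4988 microstepping setting) and treating 0.003 s as the full step period, the shaft speed works out as:

```python
def stepper_speed(step_delay_s, steps_per_rev=200):
    """Estimate step rate (Hz) and shaft speed (RPM) for a constant step delay."""
    step_rate = 1.0 / step_delay_s          # steps per second
    rpm = step_rate / steps_per_rev * 60.0  # revolutions per minute
    return step_rate, rpm

rate, rpm = stepper_speed(0.003)
print(round(rate, 1), round(rpm, 1))  # 333.3 100.0
```

If the 0.003 s delay instead applies to each half of the STEP pulse (high and low), the effective rate would be halved; the sketch assumes one full pulse per delay.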
We collected some fresh and rotten fruits (i.e., Fresh
Apple, Fresh Tomato, Rotten Apple, Rotten Tomato).
These samples were then placed on the conveyor belt.
Then, the web camera captured images of the fruits as they
moved through the system. We processed these images
using the YOLOv7-tiny algorithm, which had been trained
to identify fruit conditions.
conditions. Our experiments involved quantifying the
system's accuracy in correctly classifying the fruits,
including the percentage of correctly identified fresh and
rotten apples and tomatoes.
The web cameras captured real-time images of the fruits
as they moved along the conveyor belt. These images were
then analyzed by the system, which used the trained model
to determine whether each fruit was fresh or rotten.
Subsequently, the system sent commands to the servo motor
to physically separate the fruits accordingly.
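The servo command can be sketched as an angle-to-duty-cycle mapping; the common hobby-servo convention of a 1–2 ms pulse at 50 Hz is an assumption, since the paper does not give the servo's timing:

```python
def servo_duty_cycle(angle_deg, freq_hz=50, min_ms=1.0, max_ms=2.0):
    """Map a servo angle (0-180 deg) to a PWM duty cycle in percent."""
    pulse_ms = min_ms + (max_ms - min_ms) * angle_deg / 180.0
    period_ms = 1000.0 / freq_hz  # 20 ms at 50 Hz
    return pulse_ms / period_ms * 100.0

print(round(servo_duty_cycle(0), 2))    # 5.0
print(round(servo_duty_cycle(90), 2))   # 7.5
print(round(servo_duty_cycle(180), 2))  # 10.0
```

On a Raspberry Pi, a duty cycle computed this way would be fed to a PWM channel (e.g. via the GPIO library's PWM support) to swing the sorting arm between the fresh and rotten positions.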
IV. RESULT AND DISCUSSION
The evaluation of our object detection model using mAP
metrics demonstrates its high accuracy in identifying
objects. The achieved mAP values indicate its proficiency
in discerning objects, even in cases of overlap. This robust
performance positions our model as a reliable choice for
tasks requiring precise object detection, offering practical
applications across a range of real-world scenarios.
This means it could be really useful in places like
grocery stores or factories where quick and accurate sorting
is important.
The real-time recognition and separation results for "Fresh
Apple", "Fresh Tomato", "Rotten Apple", and "Rotten
Tomato" are shown in Figure 8 and Figure 9, showcasing
the practical effectiveness of our model.
Fig. 8. Real time detection of Rotten Apple and Rotten
Tomato
Fig. 9. Real time detection of Fresh Apple and Fresh
Tomato
Fig. 10 shows the complete design of the hardware for
recognition and separation of rotten and fresh fruits, the two
fruits being apple and tomato.
In our project, it is important to acknowledge that the use
of a low-cost web camera and the relatively slow processing
speed of our system did present certain limitations. The
low-cost web camera may have limited image quality and
resolution, potentially reducing the system's ability to
precisely analyze and distinguish between fresh and rotten
fruits.
A higher-resolution camera with better image quality
would likely improve the accuracy of fruit classification,
enabling the system to make more reliable judgments.
Likewise, upgrading the processing hardware to a faster
platform would enhance the system's responsiveness and
real-time operation.
Taking steps to address these limitations by considering
equipment upgrades could be a valuable next move in
refining and optimizing our system for even better
performance. This way, we can ensure that our fruit sorting
process becomes even more effective and reliable in
practical applications.
Fig. 10. Complete Design of Hardware
V. CONCLUSION
We specifically utilized two fruits, apple and tomato.
These two fruit types, with the four classes "Fresh Apple",
"Fresh Tomato", "Rotten Apple", and "Rotten Tomato",
were subjected to our automated system for the purpose of
detecting and categorizing their quality on the conveyor
belt. Using the YOLOv7-tiny image processing algorithm,
we successfully developed a system capable of analyzing
fresh and rotten fruit images for accurate sorting. The
integration of servo and stepper motors played a crucial
role, enabling the separation of undesirable fruits and
continuous belt movement. The A4988 driver facilitated
precise motor control. This project showcases the potential
of combining computer vision, robotics, and Raspberry Pi
for enhancing sorting efficiency and product quality in
various industries.
ACKNOWLEDGMENT
We would like to express our sincere gratitude and
respect to the Department of Electronics and
Communication Engineering for the cooperation and
opportunity to do this meaningful project. We want to thank
Khwopa Engineering College and Khwopa College of
Engineering for their outstanding help in providing us with
a GPU for our research. Their support has really made our
work better and more successful.
REFERENCES
[1] S. Jana, R. Parekh, and B. Sarkar, "Detection of
Rotten Fruits and Vegetables Using Deep
Learning," 2021, pp. 31–49. doi: 10.1007/978-
981-33-6424-0_3.
[2] T. B. Shahi, C. Sitaula, A. Neupane, and W. Guo,
“Fruit classification using attention-based
MobileNetV2 for industrial applications,” PLoS
One, vol. 17, no. 2 February, Feb. 2022, doi:
10.1371/journal.pone.0264586.
[3] E. Vrochidou, V. N. Tsakalidou, I. Kalathas, T.
Gkrimpizis, T. Pachidis, and V. G. Kaburlasos,
“An Overview of End Effectors in Agricultural
Robotic Harvesting Systems,” Agriculture
(Switzerland), vol. 12, no. 8. MDPI, Aug. 01,
2022. doi:10.3390/agriculture12081240.
[4] H. Zhou, X. Wang, W. Au, H. Kang, and C. Chen,
“Intelligent robots for fruit harvesting: recent
developments and future challenges,” Precision
Agriculture, vol. 23, no. 5. Springer, pp. 1856–
1907, Oct. 01, 2022. doi: 10.1007/s11119-022-
09913-3.
[5] I. Sa, Z. Ge, F. Dayoub, B. Upcroft, T. Perez, and
C. McCool, “Deepfruits: A fruit detection system
using deep neural networks,” Sensors
(Switzerland), vol. 16, no. 8, Aug. 2016, doi:
10.3390/s16081222.
[6] Y. Li, Q. Feng, T. Li, F. Xie, C. Liu, and Z. Xiong,
“Advance of Target Visual Information
Acquisition Technology for Fresh Fruit Robotic
Harvesting: A Review,” Agronomy, vol. 12, no. 6.
MDPI, Jun. 01,2022. doi:
10.3390/agronomy12061336.
[7] S. S. S. Palakodati, V. R. R. Chirra, Y. Dasari, and
S. Bulla, “Fresh and rotten fruits classification
using CNN and transfer learning,” Revue
d’Intelligence Artificielle, vol. 34, no. 5, pp. 617–
622, Oct. 2020, doi: 10.18280/ria.340512.
[8] A. Bhargava and A. Bansal, “Fruits and vegetables
quality evaluation using computer vision: A
review,” Journal of King Saud University -
Computer and Information Sciences, vol. 33, no. 3.
King Saud bin Abdulaziz University, pp. 243–257,
Mar. 01, 2021. doi: 10.1016/j.jksuci.2018.06.002.
[9] F. Xiao, H. Wang, Y. Xu, and R. Zhang, “Fruit
Detection and Recognition Based on Deep
Learning for Automatic Harvesting: An Overview
and Review,” Agronomy, vol. 13, no. 6, p. 1625,
Jun.2023, doi: 10.3390/agronomy13061625.
[10] G. Zhu et al., 16th IEEE/ACIS International
Conference on Computer and Information Science
(ICIS 2017) : proceedings : May 24-26, 2017,
Wuhan, China.
[11] J. Redmon, S. Divvala, R. Girshick, and A.
Farhadi, “You Only Look Once: Unified, Real-
Time Object Detection.” [Online].Available:
http://pjreddie.com/yolo/
ResearchGate has not been able to resolve any citations for this publication.
Article
Full-text available
Continuing progress in machine learning (ML) has led to significant advancements in agricultural tasks. Due to its strong ability to extract high-dimensional features from fruit images, deep learning (DL) is widely used in fruit detection and automatic harvesting. Convolutional neural networks (CNN) in particular have demonstrated the ability to attain accuracy and speed levels comparable to those of humans in some fruit detection and automatic harvesting fields. This paper presents a comprehensive overview and review of fruit detection and recognition based on DL for automatic harvesting from 2018 up to now. We focus on the current challenges affecting fruit detection performance for automatic harvesting: the scarcity of high-quality fruit datasets, fruit detection of small targets, fruit detection in occluded and dense scenarios, fruit detection of multiple scales and multiple species, and lightweight fruit detection models. In response to these challenges, we propose feasible solutions and prospective future development trends. Future research should prioritize addressing these current challenges and improving the accuracy, speed, robustness, and generalization of fruit vision detection systems, while reducing the overall complexity and cost. This paper hopes to provide a reference for follow-up research in the field of fruit detection and recognition based on DL for automatic harvesting.
Article
Full-text available
In recent years, the agricultural sector has turned to robotic automation to deal with the growing demand for food. Harvesting fruits and vegetables is the most labor-intensive and time-consuming among the main agricultural tasks. However, seasonal labor shortage of experienced workers results in low efficiency of harvesting, food losses, and quality deterioration. Therefore, research efforts focus on the automation of manual harvesting operations. Robotic manipulation of delicate products in unstructured environments is challenging. The development of suitable end effectors that meet manipulation requirements is necessary. To that end, this work reviews the state-of-the-art robotic end effectors for harvesting applications. Detachment methods, types of end effectors, and additional sensors are discussed. Performance measures are included to evaluate technologies and determine optimal end effectors for specific crops. Challenges and potential future trends of end effectors in agricultural robotic systems are reported. Research has shown that contact-grasping grippers for fruit holding are the most common type of end effectors. Furthermore, most research is concerned with tomato, apple, and sweet pepper harvesting applications. This work can be used as a guide for up-to-date technology for the selection of suitable end effectors for harvesting robots.
Article
Full-text available
Intelligent robots for fruit harvesting have been actively developed over the past decades to bridge the increasing gap between feeding a rapidly growing population and limited labour resources. Despite significant advancements in this field, widespread use of harvesting robots in orchards is yet to be seen. To identify the challenges and formulate future research and development directions, this work reviews the state-of-the-art of intelligent fruit harvesting robots by comparing their system architectures, visual perception approaches, fruit detachment methods and system performances. The potential reasons behind the inadequate performance of existing harvesting robots are analysed and a novel map of challenges and potential research directions is created, considering both environmental factors and user requirements.
Article
Full-text available
In view of the continuous increase in labor costs for complex picking tasks, there is an urgent demand for intelligent harvesting robots in the global fresh fruit cultivation industry. Fruit visual information is essential to guide robotic harvesting. However, obtaining accurate visual information about the target is critical in complex agricultural environments. The main challenges include the image color distortion under changeable natural light, occlusions from the interlaced plant organs (stems, leaves, and fruits), and the picking point location on fruits with variable shapes and poses. On top of summarizing the current status of typical fresh fruit harvesting robots, this paper outlined the state-of-the-art advance of visual information acquisition technology, including image acquisition in the natural environment, fruit recognition from the complex backgrounds, target stereo locating and measurement, and fruit search among the plants. It then analyzed existing problems and raised future potential research trends from two aspects, multiple images fusion and self-improving algorithm model.
Article
Full-text available
Recent deep learning methods for fruits classification resulted in promising performance. However, these methods are with heavy-weight architectures in nature, and hence require a higher storage and expensive training operations due to feeding a large number of training parameters. There is a necessity to explore lightweight deep learning models without compromising the classification accuracy. In this paper, we propose a lightweight deep learning model using the pre-trained MobileNetV2 model and attention module. First, the convolution features are extracted to capture the high-level object-based information. Second, an attention module is used to capture the interesting semantic information. The convolution and attention modules are then combined together to fuse both the high-level object-based information and the interesting semantic information, which is followed by the fully connected layers and the softmax layer. Evaluation of our proposed method, which leverages transfer learning approach, on three public fruit-related benchmark datasets shows that our proposed method outperforms the four latest deep learning methods with a smaller number of trainable parameters and a superior classification accuracy. Our model has a great potential to be adopted by industries closely related to the fruit growing and retailing or processing chain for automatic fruit identification and classifications in the future.
Chapter
Full-text available
Nowadays, food safety is a global concern. This chapter elucidates various problems of fruits and vegetable processing using computer vision and machine learning as well as proposes a convolutional neural network architecture for the automated detection of rotten fruits and vegetables from an image. The convolutional neural network architecture is built from scratch to perform the task of classification between fresh and rotten fruits and vegetables. The network contains four convolutional layers to extract different levels of features from the images. The experimentation is done on a dataset, which contains 13,599 images of fresh and rotten fruits. The experimentation result shows that the proposed deep learning architecture outperforms the previous approaches. The classification accuracy is ranged between 97.74 and 99.92% using the proposed approach. The range of the F1 score is between 98.43 and 99.95% using the proposed approach.
Article
Full-text available
Detecting the rotten fruits become significant in the agricultural industry. Usually, the classification of fresh and rotten fruits is carried by humans is not effectual for the fruit farmers. Human beings will become tired after doing the same task multiple times, but machines do not. Thus, the project proposes an approach to reduce human efforts, reduce the cost and time for production by identifying the defects in the fruits in the agricultural industry. If we do not detect those defects, those defected fruits may contaminate good fruits. Hence, we proposed a model to avoid the spread of rottenness. The proposed model classifies the fresh fruits and rotten fruits from the input fruit images. In this work, we have used three types of fruits, such as apple, banana, and oranges. A Convolutional Neural Network (CNN) is used for extracting the features from input fruit images, and Softmax is used to classify the images into fresh and rotten fruits. The performance of the proposed model is evaluated on a dataset that is downloaded from Kaggle and produces an accuracy of 97.82%. The results showed that the proposed CNN model can effectively classify the fresh fruits and rotten fruits. In the proposed work, we inspected the transfer learning methods in the classification of fresh and rotten fruits. The performance of the proposed CNN model outperforms the transfer learning models and the state of art methods.
Article
Full-text available
In agricultural science, automation increases the quality, economic growth, and productivity of a country. The export market and quality evaluation are affected by the sorting of fruits and vegetables. The crucial sensory characteristic of fruits and vegetables is appearance, which impacts their market value and the consumer's preference and choice. Although sorting and grading can be done by humans, manual work is inconsistent, time-consuming, variable, subjective, onerous, expensive, and easily influenced by surroundings. Hence, an intelligent fruit grading system is needed. In recent years, researchers have proposed various computer vision algorithms for sorting and grading. This paper presents a detailed overview of methods (preprocessing, segmentation, feature extraction, and classification) that address fruit and vegetable quality based on color, texture, size, shape, and defects, and carries out a critical comparison of the different algorithms proposed for quality inspection of fruits and vegetables.
Article
Full-text available
This paper presents a novel approach to fruit detection using deep convolutional neural networks. The aim is to build an accurate, fast and reliable fruit detection system, which is a vital element of an autonomous agricultural robotic platform; it is a key element for fruit yield estimation and automated harvesting. Recent work in deep neural networks has led to the development of a state-of-the-art object detector termed Faster Region-based CNN (Faster R-CNN). We adapt this model, through transfer learning, for the task of fruit detection using imagery obtained from two modalities: colour (RGB) and Near-Infrared (NIR). Early and late fusion methods are explored for combining the multi-modal (RGB and NIR) information. This leads to a novel multi-modal Faster R-CNN model, which achieves state-of-the-art results compared to prior work, with the F1 score (which takes into account both precision and recall) improving from 0.807 to 0.838 for the detection of sweet pepper. In addition to improved accuracy, this approach is also much quicker to deploy for new fruits, as it requires bounding box annotation rather than pixel-level annotation (annotating bounding boxes is approximately an order of magnitude quicker to perform). The model is retrained to perform the detection of seven fruits, with the entire process taking four hours to annotate and train the new model per fruit.
Article
We present YOLO, a unified pipeline for object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is also extremely fast; YOLO processes images in real time at 45 frames per second, hundreds to thousands of times faster than existing detection systems. Our system uses global image context to detect and localize objects, making it less prone to background errors than top detection systems like R-CNN. By itself, YOLO detects objects at unprecedented speeds with moderate accuracy. When combined with state-of-the-art detectors, YOLO boosts performance by 2 to 3 mAP points.
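YOLO's regression formulation can be sketched as follows: the network emits an S×S grid where each cell predicts B boxes (x, y, w, h, confidence) plus C class probabilities, and a decoder converts cell-relative predictions to image coordinates. The toy decoder below (the shapes follow YOLO v1's defaults, but the single-best-box selection is a simplifying assumption in place of full non-max suppression):

```python
import numpy as np

S, B, C = 7, 2, 20   # grid size, boxes per cell, classes (YOLO v1 defaults)
IMG = 448            # input resolution used by YOLO v1

def decode_best_box(pred):
    """pred: (S, S, B*5 + C) array. Return the single highest-confidence
    box as (x_min, y_min, x_max, y_max, class_id) in image pixels."""
    best = None
    for row in range(S):
        for col in range(S):
            cell = pred[row, col]
            for b in range(B):
                x, y, w, h, conf = cell[b * 5: b * 5 + 5]
                if best is None or conf > best[0]:
                    # (x, y) are offsets within the cell; (w, h) are
                    # fractions of the whole image side length.
                    cx = (col + x) / S * IMG
                    cy = (row + y) / S * IMG
                    bw, bh = w * IMG, h * IMG
                    cls = int(np.argmax(cell[B * 5:]))
                    best = (conf, (cx - bw / 2, cy - bh / 2,
                                   cx + bw / 2, cy + bh / 2, cls))
    return best[1]
```

Because the entire S×S×(B·5+C) tensor comes from one forward pass, detection is a single regression evaluated once per image, which is what makes the pipeline fast enough for real-time use.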