Article
Publisher preview available

Correction to: Detection and counting of immature green citrus fruit based on the Local Binary Patterns (LBP) feature using illumination-normalized images


Abstract

The original version of this article unfortunately contained a mistake. The affiliation “China West Normal University” should be removed from the author Dr. Chenglin Wang. The correct affiliation details are given below. © 2018 Springer Science+Business Media, LLC, part of Springer Nature
Precision Agric (2018) 19:1084
https://doi.org/10.1007/s11119-018-9580-7
CORRECTION
Correction to: Detection andcounting ofimmature green
citrus fruit based ontheLocal Binary Patterns (LBP)
feature using illumination-normalized images
ChenglinWang1· WonSukLee2 · XiangjunZou1· DaeunChoi3· HaoGan2·
JusticeDiamond2
Published online: 17 May 2018
© Springer Science+Business Media, LLC, part of Springer Nature 2018
Correction to: Precis Agric
https://doi.org/10.1007/s11119-018-9574-5
The original version of this article unfortunately contained a mistake. The affiliation “China West Normal University” should be removed from the author Dr. Chenglin Wang. The correct affiliation details are given below.
The original article can be found online at https://doi.org/10.1007/s11119-018-9574-5.
* Won Suk Lee
wslee@ufl.edu
* Xiangjun Zou
xjzou1@163.com
1 College ofEngineering, South China Agricultural University, Guangzhou510642, China
2 Department ofAgricultural & Biological Engineering, University ofFlorida, Gainesville,
FL32611, USA
3 Department ofAgricultural & Biological Engineering, Penn State University, UniversityPark,
PA16802, USA
... Thus, strong segmentation performance is required for precision applications in agriculture. A large number of studies have employed color index methods to segment plants (crops and weeds) from the background (soil and residues) under various image conditions [10,11]. To assess the performance of the proposed algorithm, the Excess Green Index (ExG) was adopted as a benchmark to separate plant vegetation from the soil background. Several shape features were then used to classify trees versus weeds. ...
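For readers unfamiliar with the Excess Green Index mentioned in this excerpt, a minimal Python/OpenCV sketch of ExG-based vegetation masking follows; the formulation (ExG = 2g - r - b on chromaticity-normalised channels) is the standard one, and the threshold value and function name are illustrative assumptions, not taken from the cited study.

```python
# Minimal sketch of Excess Green Index (ExG) vegetation segmentation.
# Assumes OpenCV + NumPy; the threshold is illustrative, not from the cited study.
import cv2
import numpy as np

def exg_mask(bgr_image, threshold=0.1):
    """Segment vegetation with ExG = 2g - r - b on chromaticity-normalised channels."""
    img = bgr_image.astype(np.float32)
    b, g, r = cv2.split(img)
    total = b + g + r + 1e-6                      # avoid division by zero
    exg = 2.0 * (g / total) - (r / total) - (b / total)
    return (exg > threshold).astype(np.uint8) * 255

# Hypothetical usage:
# mask = exg_mask(cv2.imread("field_image.jpg"))
```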
Article
Full-text available
Trees planted on agricultural land offer many environmental benefits and are an essential source of food. Farmers are therefore very interested in testing innovative solutions that improve the quality and quantity of their products while reducing time and energy, managing water resources and fertilizers, and reducing emissions and pollution. In this context, precision agriculture is one of several modern farming practices that make production more efficient. Unmanned aerial vehicle (UAV) remote sensing has excellent potential for vegetation mapping. In precision agriculture applications, digital image processing can improve decision making for some challenging problems, one of which is recognising essential agronomic parameters with accuracy, precision, and economic efficiency. This paper presents automatic tree recognition and tracking, where the target image is captured by an RGB camera mounted on a UAV. For remote sensing of vegetation, automatic systems are needed to enumerate plants in tree seedling crops, saving human resources and improving yield estimation. In this paper, the captured image is thresholded to extract the selected information. A tree tracking algorithm is used to solve missed-detection problems caused by motion blur and dynamic backgrounds. The Kalman and Hungarian algorithms combine recursive state estimation with optimal assignment, and the estimated future position helps detect and count trees using the calculated prediction. The average accuracies for the tree detection, tracking, and counting processes are 92.9% and 89.4%. These techniques may be applied to recognise and count objects with different shapes and sizes in real time.
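The Kalman-plus-Hungarian association described above can be sketched in a few lines of Python; the constant-velocity state model, the distance gate and the helper names below are assumptions for illustration, not the authors' implementation.

```python
# Sketch of track-to-detection association with a constant-velocity Kalman
# prediction and the Hungarian algorithm (scipy's linear_sum_assignment).
# State model, gating distance and names are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def predict(state, dt=1.0):
    """Constant-velocity prediction step for a track state [x, y, vx, vy]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    return F @ state

def associate(predicted_xy, detected_xy, max_distance=50.0):
    """Optimally match predicted track positions to new detections."""
    if len(predicted_xy) == 0 or len(detected_xy) == 0:
        return []
    cost = np.linalg.norm(np.asarray(predicted_xy)[:, None, :]
                          - np.asarray(detected_xy)[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)      # Hungarian algorithm
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_distance]
```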
... To identify lychee, grape, and other fruits by training neural networks, the authors proposed deeper learning methods that divide the fruit image into multiple parts (Wang et al., 2018; Xiong et al., 2018b); image features extracted with the Faster R-CNN network algorithm were combined, and the lychee and guava fruits were divided into parts and identified. Figure 7 shows the recognition results for lychee images in the shade and in the sun. ...
Article
Full-text available
The utilization of machine vision and its associated algorithms improves the efficiency, functionality, intelligence, and remote interactivity of harvesting robots in complex agricultural environments. Machine vision and its associated emerging technologies promise huge potential in advanced agricultural applications. However, machine vision and its precise positioning still present many technical difficulties, making it hard for most harvesting robots to achieve true commercial application. This article reports the application and research progress of harvesting robots and vision technology in fruit picking. The potential applications of vision, together with quantitative methods for localization, target recognition, 3D reconstruction, and fault tolerance in complex agricultural environments, are the focus, and fault-tolerant technology designed for use with machine vision and robotic systems is also explored. The two main methods used in fruit recognition and localization are reviewed, including digital image processing technology and deep learning-based algorithms. The future challenges brought about by recognition and localization success rates are identified: target recognition under illumination changes and occlusion, target tracking in dynamic, interference-laden environments, 3D target reconstruction, and fault tolerance of the vision system for agricultural robots. Finally, several open research problems specific to recognition and localization for fruit harvesting robots are mentioned, and the latest developments and future trends of machine vision are described.
Article
The accurate detection and counting of fruits in natural environments are key steps for the early yield estimation of orchards and the realization of smart orchard production management. However, existing citrus counting algorithms have two primary limitations: the performance of counting algorithms needs to be improved, and their system operation efficiency is low in practical applications. Therefore, in this paper, we propose a novel end-to-end orchard fruit counting pipeline that can be used by multiple unmanned aerial vehicles (UAVs) in parallel to help overcome the above problems. First, to obtain on-board camera images online, an innovative UAV live broadcast platform was developed for the orchard scene. Second, for this challenging specific scene, a detection network named Citrus-YOLO was designed to detect fruits in the video stream in real time. Then, the DeepSort algorithm was used to assign a specific ID to each citrus fruit in the online UAV scene and track the fruits across video frames. Finally, a nonuniform distributed counter was proposed to correct the fruit count during the tracking process, which can significantly reduce the counting errors caused by tracking failure. This is the first work to realize online and end-to-end counting in a field orchard environment. The experimental results show that the F1 score and mean absolute percentage error of the method are 89.07% and 12.75%, respectively, indicating that the system can quickly and accurately achieve fruit counting in large-scale unstructured citrus orchards. Although our work is discussed in the context of fruit counting, it can be extended to the detection, tracking and counting of a variety of other objects of interest in UAV application scenarios.
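The counting-from-tracks idea can be illustrated with a simplified sketch: each detection carries a tracker-assigned ID, and a track contributes to the count only after enough confirmations. This is a loose stand-in for the paper's nonuniform distributed counter; the frame format and the `min_hits` parameter are hypothetical.

```python
# Simplified fruit counting from tracker output: a track ID is counted only once
# it has been confirmed in at least `min_hits` frames. This is a stand-in for the
# paper's nonuniform distributed counter, not its actual implementation.
from collections import defaultdict

def count_tracks(frames, min_hits=3):
    """frames: iterable of per-frame lists of track IDs (hypothetical format)."""
    hits = defaultdict(int)
    for track_ids in frames:
        for tid in track_ids:
            hits[tid] += 1
    return sum(1 for n in hits.values() if n >= min_hits)

# Hypothetical usage over three frames of tracked IDs:
# count_tracks([[1, 2], [1, 2, 3], [1, 3]], min_hits=2)   # -> 3
```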
Article
Locating picking points plays an important role in robotic litchi harvesting in orchards. However, the localisation accuracy can easily be affected by unstructured growing environments with variable illumination conditions and the unpredictable growing shapes of weight-bearing stems carrying litchi clusters. A computer-vision-based method is proposed to locate acceptable picking points for litchi clusters. To improve the illumination distributions of weakly illuminated images while leaving those of well-illuminated images unchanged, an iterative retinex algorithm was developed. Then, the litchi regions are segmented by combining red/green chromatic mapping and Otsu thresholding in the RGB colour space. By incorporating both local spatial and local hue-level intensity relationships, the hue component in the HSV colour space is re-constructed to filter the noise, and the stem regions are segmented in accordance with the distributed intervals of hue-level intensity. Finally, based on the connection and position relationship among the segmented litchi and stem regions, the structural distribution of the corners generated via Harris corner detection is analysed to locate the picking points. More than 97% of the pixels within litchi regions are correctly segmented after illumination compensation, and an improvement of 7% in the number of correctly segmented pixels within stem regions is achieved with the re-construction of the hue-level intensity. The overall performance of the proposed method was evaluated on test images with high illumination diversity, and accuracy greater than 83% in locating litchi picking points was achieved, showing better performance than three similar frameworks that do not incorporate our proposed procedures.
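As a rough illustration of the red/green chromatic mapping combined with Otsu thresholding mentioned above, the following Python/OpenCV sketch may help; the exact chromatic map and any pre-processing steps used in the paper may differ.

```python
# Rough sketch of fruit-region segmentation with a red/green chromatic map and
# Otsu thresholding. The exact mapping in the cited paper may differ.
import cv2
import numpy as np

def red_green_otsu(bgr_image):
    img = bgr_image.astype(np.float32)
    b, g, r = cv2.split(img)
    chroma = (r - g) / (r + g + 1e-6)             # red/green chromatic map
    chroma8 = cv2.normalize(chroma, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(chroma8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```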
Article
Full-text available
Automatic fruit detection and precision picking in unstructured environments have always been difficult, frontline problems in the field of harvesting robots. To realize the accurate identification of grape clusters in a vineyard, an approach for the automatic detection of ripe grapes combining the AdaBoost framework and multiple color components was developed using a simple vision sensor. This approach mainly included three steps: (1) the dataset of classifier training samples was obtained by capturing images of grape planting scenes with a color digital camera, extracting the effective color components for grape clusters, and then constructing the corresponding linear classification models using the threshold method; (2) based on these linear models and the dataset, a strong classifier was constructed using the AdaBoost framework; and (3) all the pixels of the captured images were classified by the strong classifier, the noise was eliminated by the region threshold method and morphological filtering, and the grape clusters were finally marked using the enclosing rectangle method. Nine hundred testing samples were used to verify the constructed strong classifier, and the classification accuracy reached up to 96.56%, higher than other linear classification models. Moreover, 200 images captured under three different illuminations in the vineyard were selected as testing images to which the proposed approach was applied, and the average detection rate was as high as 93.74%. The experimental results show that the approach can partly restrain the influence of a complex background such as weather conditions, leaves and changing illumination.
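A minimal sketch of the general idea of boosting weak colour-based classifiers, using scikit-learn's AdaBoost rather than the authors' hand-built framework of thresholded linear models, is shown below; the feature layout and labels are hypothetical.

```python
# Minimal sketch of pixel-wise classification with AdaBoost over colour features.
# Uses scikit-learn's AdaBoost in place of the authors' thresholded linear models;
# the training arrays below are hypothetical.
from sklearn.ensemble import AdaBoostClassifier

def train_pixel_classifier(X, y):
    """X: N x k array of colour components per pixel; y: 1 = grape, 0 = background."""
    clf = AdaBoostClassifier(n_estimators=50)     # default weak learners are depth-1 stumps
    return clf.fit(X, y)

# Hypothetical usage:
# clf = train_pixel_classifier(train_pixels, train_labels)
# mask = clf.predict(image_pixels.reshape(-1, 3)).reshape(height, width)
```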
Article
Full-text available
A fast normalized cross correlation (FNCC) based machine vision algorithm was proposed in this study to develop a method for detecting and counting immature green citrus fruit using outdoor colour images toward the development of an early yield mapping system. As a template matching method, FNCC was used to detect potential fruit areas in the image, which was the very basis for subsequent false positive removal. Multiple features, including colour, shape and texture features, were combined in this algorithm to remove false positives. Circular Hough transform (CHT) was used to detect circles from images after background removal based on colour components. After building disks centred on the centroids resulting from both FNCC and CHT, the detection results were merged based on the size and Euclidean distance of the intersection areas of the disks from these two methods. Finally, the number of fruit was determined after false positive removal using texture features. For a validation dataset of 59 images, 84.4% of the fruits were successfully detected, which indicated the potential of the proposed method toward the development of an early yield mapping system.
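The two candidate detectors combined in this pipeline, template matching and the circular Hough transform, are both available in OpenCV. The sketch below uses OpenCV's normalised template matching as a stand-in for FNCC, with illustrative thresholds and radii.

```python
# Sketch of the two candidate detectors: normalised template matching (as a
# stand-in for FNCC) and the circular Hough transform. Parameters are illustrative.
import cv2
import numpy as np

def template_candidates(gray, template, score_thresh=0.8):
    """Return (x, y) locations where the fruit template correlates strongly."""
    result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(result >= score_thresh)
    return list(zip(xs, ys))

def cht_candidates(gray, min_radius=8, max_radius=40):
    """Return circles (x, y, r) found by the circular Hough transform."""
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=100, param2=30,
                               minRadius=min_radius, maxRadius=max_radius)
    return [] if circles is None else circles[0].tolist()
```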
Article
Free PDF Version: https://authors.elsevier.com/a/1VEzpcV4Ktgts
Background and Objective: Histological images have characteristics, such as texture, shape, colour and spatial structure, that permit the differentiation of each fundamental tissue and organ. Texture is one of the most discriminative features. The automatic classification of tissues and organs based on histology images is an open problem, due to the lack of automatic solutions when treating tissues without pathologies.
Method: In this paper, we demonstrate that it is possible to automatically classify cardiovascular tissues using texture information and Support Vector Machines (SVM). Additionally, we found that it is feasible to recognise several cardiovascular organs following the same process. The texture of histological images was described using Local Binary Patterns (LBP), LBP Rotation Invariant (LBPri), Haralick features and different concatenations between them, representing in this way their content. Using an SVM with a linear kernel, we selected the most appropriate descriptor, which for this problem was a concatenation of LBP and LBPri. Owing to the small number of images available, we could not follow an approach based on deep learning, but we selected the classifier that yielded the highest performance by comparing SVM with Random Forest and Linear Discriminant Analysis. Once SVM was selected as the classifier with the highest area under the curve, representing both higher recall and precision, we tuned it by evaluating different kernels, finding that a linear SVM allowed us to accurately separate four classes of tissues: (i) cardiac muscle of the heart, (ii) smooth muscle of the muscular artery, (iii) loose connective tissue, and (iv) smooth muscle of the large vein and the elastic artery. The experimental validation was conducted using 3000 blocks of 100 × 100 pixels, with 600 blocks per class, and the classification was assessed using 10-fold cross-validation.
Results: Using LBP as the descriptor, concatenated with LBPri and an SVM with a linear kernel, the main four classes of tissues were recognised with an AUC higher than 0.98. A polynomial kernel was then used to separate the elastic artery and vein, yielding an AUC superior to 0.98 in both cases.
Conclusion: Following the proposed approach, it is possible to separate with very high precision (AUC greater than 0.98) the fundamental tissues of the cardiovascular system along with some organs, such as the heart, arteries and veins.
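A minimal sketch of the LBP/LBPri-plus-linear-SVM pipeline, using scikit-image and scikit-learn, follows; the parameters and the use of the 'ror' (rotation-invariant) variant as LBPri are assumptions for illustration.

```python
# Minimal sketch of texture description with LBP and rotation-invariant LBP,
# followed by a linear SVM. Parameters and the 'ror' variant are assumptions.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray, P=8, R=1, method="uniform"):
    """Normalised histogram of LBP codes; the bin count is fixed by the LBP variant."""
    codes = local_binary_pattern(gray, P, R, method=method)
    n_bins = P + 2 if method == "uniform" else 2 ** P
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def describe(gray):
    """Concatenate uniform-LBP and rotation-invariant ('ror') LBP histograms."""
    return np.concatenate([lbp_histogram(gray, method="uniform"),
                           lbp_histogram(gray, method="ror")])

# Hypothetical usage on 100 x 100 grey-level blocks with known labels:
# features = np.stack([describe(block) for block in blocks])
# clf = SVC(kernel="linear").fit(features, labels)
```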
Article
More accurate and efficient recognition and detection of immature green citrus fruit in groves under natural illumination conditions provides a promising benefit for growers, allowing them to plan nutrient applications during the fruit maturing stages and to estimate yield and profit prior to the harvesting period. The goal of this study was to develop a robust and fast algorithm to detect and count immature green citrus fruit in individual trees from colour images acquired with different fruit sizes and under various illumination conditions. An adaptive Red and Blue chromatic map (ARB) was created and combined with the Hue image extracted after histogram equalization (HEH). The sum of absolute transformed differences (SATD), a block-matching method, was applied to detect potential fruit pixels. After an OR operation on the results obtained from the colour and SATD analyses, which kept as many fruit pixels as possible, a kernel support vector machine (SVM) classifier was built with the same learning sets used for the different classification stages to remove false positives based on five selected texture features. The algorithm was evaluated with a set of testing images and achieved more than 83% recognition accuracy. The proposed method can provide a more efficient way for green citrus identification in a grove using colour images.
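The sum of absolute transformed differences (SATD) used for block matching here is commonly implemented with a Hadamard transform; the sketch below shows that common form, which may differ in detail from the authors' implementation.

```python
# Common Hadamard-transform implementation of the sum of absolute transformed
# differences (SATD) between two image blocks; details may differ from the paper.
import numpy as np
from scipy.linalg import hadamard

def satd(block_a, block_b):
    """SATD between two square blocks whose side is a power of two (e.g. 8 x 8)."""
    diff = block_a.astype(np.float32) - block_b.astype(np.float32)
    H = hadamard(diff.shape[0]).astype(np.float32)
    transformed = H @ diff @ H.T                  # 2-D Hadamard transform of the difference
    return float(np.abs(transformed).sum())
```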
Conference Paper
A new early-season method of creating high-detail citrus yield maps in a grove was evaluated. A monochromatic near-infrared camera equipped with interchangeable optical band-pass filters was used to capture images of citrus fruit in a Florida citrus grove during the late fall of the 2006 growing season. Over 500 images were acquired of 36 separate on-tree citrus fruit targets using three different optical band-pass filters. The saved images were processed together using indices and morphological image processing techniques. Results showed an average correct pixel identification of 84.5%. The number of citrus pixels counted by the image processing algorithm showed an R2 of 0.74 when predicting the actual number of citrus pixels. These results show that a multispectral imaging system has the potential to be used in an automated early-season yield evaluation and mapping system. Such systems are of high interest for grove modeling and precision agricultural management methods.
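A simple sketch of index thresholding followed by morphological clean-up, in the spirit of the processing described, is given below; the index image, threshold and kernel size are hypothetical.

```python
# Sketch of index thresholding plus morphological clean-up; the index image,
# threshold and kernel size are hypothetical, not the study's actual settings.
import cv2
import numpy as np

def segment_fruit(index_image, thresh=0.3, kernel_size=5):
    mask = (index_image > thresh).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
    return mask
```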
Article
In order to extract facial characteristics by making full use of LBP and to improve its adaptive ability, we propose an algorithm based on the fusion of global and local LBP features. First, the overall face feature histogram is extracted with LBP; the image is then segmented into blocks and an LBP histogram is extracted from each block; the global and local features are then combined in a fixed order and regarded as the total characterisation of the image. Finally, an LS-SVM (least squares support vector machine) is used to train on and identify face image samples, improving recognition speed. Experiments on the ORL face database show that the algorithm achieves a high recognition rate and improves face recognition accuracy.
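The global-plus-local LBP fusion can be sketched as follows with scikit-image; the block grid and LBP parameters are illustrative, and any standard classifier can stand in for the LS-SVM.

```python
# Sketch of global-plus-block LBP histogram fusion; grid size and LBP parameters
# are illustrative, and the LS-SVM is left to the caller (any classifier works).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(gray, P=8, R=1):
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def global_local_lbp(gray, grid=(4, 4)):
    feats = [lbp_hist(gray)]                               # global histogram first
    rows = np.array_split(np.arange(gray.shape[0]), grid[0])
    cols = np.array_split(np.arange(gray.shape[1]), grid[1])
    for r in rows:
        for c in cols:
            feats.append(lbp_hist(gray[np.ix_(r, c)]))     # per-block histograms
    return np.concatenate(feats)
```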
Article
Yield mapping for tree crops by mechanical harvesting requires automatic detection and counting of fruits in the tree canopy. However, partial occlusion, shape irregularity, varying illumination, multiple sizes and similarity with the background make fruit identification a very difficult task. Immature green citrus fruit detection within a green canopy is therefore challenging due to all the above-mentioned problems. A novel algorithmic technique was used to detect immature green citrus fruit in tree canopies under natural outdoor conditions. Shape analysis and texture classification were two integral parts of the algorithm. Shape analysis was conducted to detect as many fruits as possible. Texture classification by a support vector machine (SVM), Canny edge detection combined with a graph-based connected-component algorithm, and Hough line detection were used to remove false positives. Next, keypoints were detected using a scale-invariant feature transform (SIFT) algorithm to further remove false positives. A majority voting scheme was implemented to make the algorithm more robust. The algorithm was able to accurately detect and count 80.4% of citrus fruit in a validation set of images acquired from a citrus grove under natural outdoor conditions. The algorithm could be further improved to provide growers with early yield estimation so that they can manage groves more efficiently on a site-specific basis to increase yield and profit.
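Two of the false-positive filters mentioned above, Canny edges with probabilistic Hough line detection and SIFT keypoint detection, can be sketched with OpenCV (SIFT is in the main module from version 4.4); thresholds are illustrative only.

```python
# Sketch of two false-positive cues for a candidate fruit patch: long straight
# edges (via Canny + probabilistic Hough lines) and SIFT keypoint count.
# Thresholds are illustrative; requires OpenCV >= 4.4 for SIFT_create.
import cv2
import numpy as np

def false_positive_cues(gray_patch):
    """Return (has_long_lines, n_keypoints) for a candidate fruit patch."""
    edges = cv2.Canny(gray_patch, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)
    keypoints = cv2.SIFT_create().detect(gray_patch, None)
    return lines is not None and len(lines) > 0, len(keypoints)
```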
Article
Detection of immature green citrus fruit is important during the early life cycle of citrus fruit. It allows growers to manage citrus groves more efficiently and maximize yields by identifying expected fruit yields well in advance of harvesting. It also helps growers prepare harvesting equipment and pickers for the harvesting operation. A novel technique was developed for detecting immature green citrus fruit in an outdoor color image and counting the number of fruits. This technique is unique in that it is the first known attempt to apply such an approach to green citrus fruits. A set of 71 images containing immature green citrus fruit was acquired in an experimental citrus grove at the University of Florida, Gainesville, Florida, USA. An algorithm was developed using a set of 11 training images by calculating fast Fourier transform leakage values for fruit and leaves. A threshold value was obtained by comparing the percent leakage of fruit and other objects. The algorithm was tested on a set of 60 validation images. The correct total fruit count for the validation set came out to be 120, whereas the actual number of fruit was 146. The overall correct detection rate was 82.2%. The proposed algorithm can be further improved to help growers manage their groves more efficiently.
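One plausible reading of the FFT "leakage" value is the fraction of spectral energy lying outside a low-frequency core of the patch spectrum; the sketch below implements that interpretation, which is an assumption rather than the authors' exact definition.

```python
# One interpretation of a spectral "leakage" measure for an image patch: the
# fraction of FFT energy outside a low-frequency core. This is an assumption,
# not the authors' exact definition.
import numpy as np

def spectral_leakage(patch, core=4):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch.astype(np.float32))))
    total = spectrum.sum()
    cy, cx = np.array(spectrum.shape) // 2
    low = spectrum[cy - core:cy + core + 1, cx - core:cx + core + 1].sum()
    return float((total - low) / (total + 1e-9))   # share of energy at higher frequencies
```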
Article
This work details the development and validation of an algorithm for estimating the number of apples in color images acquired in orchards under natural illumination. Ultimately, this algorithm is intended to enable estimation of the orchard yield and be part of a management decision support system. The algorithm includes four main steps: detection of pixels that have a high probability of belonging to apples, using color and smoothness; formation and extension of “seed areas”, which are connected sets of pixels that have a high probability of belonging to apples; segmentation of the contours of these seed areas into arcs and amorphous segments; and combination of these arcs and comparison of the resulting circle with a simple model of an apple. The performance of the algorithm is investigated using two datasets. The first dataset consists of images recorded in full automatic mode of the camera and under various lighting conditions. Although the algorithm detects correctly more than 85% of the apples visible in the images, direct illumination and color saturation cause a large number of false positive detections. The second dataset consists of images that were manually underexposed and recorded under mostly diffusive light (close to sunset). For such images the correct detection rate is close to 95% while the false positive detection rate is less than 5%.
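The final model-comparison step, checking candidate blobs against a simple circular apple model, can be approximated with a circularity test as sketched below; this is an illustrative stand-in for the authors' arc-segmentation and arc-combination procedure.

```python
# Approximate circular-model check for candidate apple blobs using a circularity
# test; an illustrative stand-in for the authors' arc-combination procedure.
import cv2
import numpy as np

def circular_blobs(mask, min_circularity=0.7, min_area=100):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    circles = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:
            continue
        (x, y), r = cv2.minEnclosingCircle(c)
        circularity = area / (np.pi * r * r + 1e-9)   # 1.0 for a perfect disk
        if circularity >= min_circularity:
            circles.append((x, y, r))
    return circles
```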