Article

Automatic classification of parasitized fruit fly pupae from X-ray images by convolutional neural networks

... [11][12][13] Before the release of parasitic wasps, they should be effectively separated from their hosts, and nonparasitic hosts should be completely inhibited from emergence to reduce operational expenses in the field, improve the security and flexibility of transport and release, and extend the shelf-life of wasps. [14][15][16] This approach consequently increases the confidence of the public regarding biological agents and the efficacy of biological control. Traditionally, the obvious morphological differences between parasitic and nonparasitic tephritid pupae serve as the basis for separating braconid wasps before mass-release programs. ...
... 7,16 Usually, both of the aforementioned methods are employed together to efficiently separate parasitic wasps from nonparasitic hosts. However, these approaches have several disadvantages: (i) expensive labor and specific equipment are required for the separation of nonparasitic hosts; 14 (ii) the shelf-life of braconid wasps against tephritid pests is shortened owing to the separation procedure, resulting in less time to adjust the release plan when unexpected situations are encountered; and (iii) an incomplete sorting step poses the risk of transporting fertile hosts. 17 Irradiation is another technique that impedes the full development of the host pest. ...
... 34 However, this phenomenon contributes to the potential risk of the biological control of pests, as completely separating the nonparasitic host from the parasitic agent during the mass-rearing process is labor-intensive, time-consuming, and may cause physical damage to the parasitoid. 14 The traditional methods conducted before the mass-release of F. arisanus relied largely on effectively distinguishing the color and size of B. dorsalis pupae, as well as using a releasing device that coincided with the body size of parasitic wasps. 7 However, the shelf-life of F. arisanus is shortened owing to the waiting time for the emergence of an unparasitized host, which, in turn, limits the flexibility of the release program and reduces the availability of parasitic agents in inaccessible regions. ...
Article
Full-text available
Background Fopius arisanus (Sonan) is an important egg–pupal endoparasitic wasp of Bactrocera dorsalis (Hendel). As the traditional method of sorting nonparasitic B. dorsalis from parasitic wasps is labor‐intensive, requires specific equipment and poses the risk of spreading fertile hosts, the development of a more convenient, economical and safe sorting procedure is important. Results The optimal cyromazine emergence inhibition procedure (CEIP) involved facilitating the pupation of B. dorsalis mature larvae (Bdml) in 3 mg kg⁻¹ cyromazine sand substrate (CSS) for 48 h. When the Bdml that had been exposed to F. arisanus during the egg stage were treated with 3–7 mg kg⁻¹ CSS for 48 h, no negative effects on the emergence parameters of parasitoids were observed. Treatment with 3–4 mg kg⁻¹ CSS had insignificant effects on the biological and behavioral parameters of F. arisanus. However, treatment with 5–6 mg kg⁻¹ CSS adversely affected the fecundity and antennating activity of the wasps; specifically, 6 mg kg⁻¹ CSS negatively affected the lifespan and flight ability of wasps. Fortunately, no transgenerational effects on these parameters were observed in the progeny. Regarding the nutrient reserves of both sexes of F. arisanus, significant dose‐dependent effects were observed. Moreover, 5–6 mg kg⁻¹ CSS significantly reduced the protein and carbohydrate content in F. arisanus; in particular, 6 mg kg⁻¹ CSS notably reduced the lipid content. Conclusion CEIP provides a more flexible, economical and safe mass‐release program for F. arisanus. In addition, it has profound implications for the biological control of other dipteran pests. © 2024 Society of Chemical Industry.
... For example, previous studies have successfully used artificial intelligence to analyze and classify animal behavior (Kleanthous et al., 2022), and evaluate animal well-being (McLennan et al., 2015). Furthermore, the use of neural networks or deep learning architectures has significantly increased for solving entomological problems (Marinho et al., 2023; Peng and Wang, 2022; Tuda and Luna-Maldonado, 2020). For example, Convolutional Neural Networks (CNN) have been applied to the classification of fruit fly pupae as parasitized or healthy (Marinho et al., 2023) and the recognition of insect pest species (Peng and Wang, 2022; Tuda and Luna-Maldonado, 2020). ...
... Furthermore, the use of neural networks or deep learning architectures has significantly increased for solving entomological problems (Marinho et al., 2023; Peng and Wang, 2022; Tuda and Luna-Maldonado, 2020). For example, Convolutional Neural Networks (CNN) have been applied to the classification of fruit fly pupae as parasitized or healthy (Marinho et al., 2023) and the recognition of insect pest species (Peng and Wang, 2022; Tuda and Luna-Maldonado, 2020). Regarding honey bees, recent studies have proposed deep learning-based image classification models for assessing individual bee health state (Berkaya et al., 2021) and CNN for evaluating colony health based on acoustical data (Troung et al., 2023). ...
Article
Automatic monitoring devices placed at the entrances of honey bee hives have facilitated the detection of various sublethal effects related to pesticide exposure, such as homing failure and reduced flight activity. These devices have further demonstrated that different neurotoxic pesticide molecules produce similar sublethal impacts on flight activity. The detection of these effects was conducted a posteriori, following the recording of flight activity data. This study introduces a method using an artificial intelligence model, specifically a recurrent neural network, to detect the sublethal effects of pesticides in real-time based on honey bee flight activity. This model was trained on a flight activity dataset comprising 42,092 flight records from 1107 control and 1689 pesticide-exposed bees. The model was able to classify honey bees as healthy or pesticide-exposed based on the number of flights and minutes spent foraging per day. The model was the least accurate (68.46%) when only five days of records per bee were used for training. However, the highest classification accuracy of 99%, a Cohen Kappa of 0.9766, a precision of 0.99, a recall of 0.99, and an F1-score of 0.99 were achieved with the model trained on 25 days of activity data, signifying near-perfect classification ability. These results underscore the highly predictive performance of AI models for toxicovigilance and highlight the potential of our approach for real-time and cost-effective monitoring of risks due to exposure to neurotoxic pesticides in honey bee populations.
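The abstract above describes classifying bees as healthy or pesticide-exposed from sequences of daily activity. As a minimal illustration of the idea rather than the study's actual architecture, the sketch below runs the forward pass of a toy recurrent classifier; all weights, dimensions, and the two daily features are hypothetical:

```python
import numpy as np

def rnn_classify(sequence, Wx, Wh, Wo, bh, bo):
    # Forward pass of a minimal recurrent classifier over daily activity.
    # sequence: (T, 2) array of [flights_per_day, foraging_minutes].
    h = np.zeros(Wh.shape[0])
    for x_t in sequence:                      # one hidden-state update per day
        h = np.tanh(Wx @ x_t + Wh @ h + bh)
    logit = Wo @ h + bo                       # read out after the final day
    return 1.0 / (1.0 + np.exp(-logit))       # sigmoid: P(pesticide-exposed)

# Tiny fixed random weights, for illustration only (an actual model
# would learn these from the flight-record dataset).
rng = np.random.default_rng(0)
Wx, Wh = rng.normal(size=(4, 2)), rng.normal(size=(4, 4))
Wo, bh, bo = rng.normal(size=4), np.zeros(4), 0.0
p_exposed = rnn_classify(rng.normal(size=(25, 2)), Wx, Wh, Wo, bh, bo)
```

The single hidden-state loop is what lets the model exploit longer records (25 days versus 5) as described in the abstract.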
... ML has successfully classified the morphological characteristics of insects, including those of agricultural and forest pests (Duarte et al., 2022; Domingues et al., 2022; Mendoza et al., 2023). The ML-based automated identification of insects represents a burgeoning area within biological research (Wei and Zhan, 2023; Joelianto et al., 2024), with applications including the automatic classification of parasitized fruit fly pupae (Marinho et al., 2023), the detection of pine pests (Ye et al., 2022), the segmentation of butterfly ecological images (Filali et al., 2022), the detection of whitefly (Kamei, 2023), and the automatic counting of mosquito eggs (Javed et al., 2023). ...
Article
Full-text available
In recent years, insect husbandry has seen increased interest as a supply of raw materials and food, and for biological/environmental control. Unfortunately, large insect rearings are susceptible to pathogens, pests and parasitoids, which can spread rapidly due to the confined nature of a rearing system. Thus, it is of interest to monitor the spread of such manifestations and the overall population size quickly and efficiently. Medical imaging techniques could be used for this purpose, as large volumes can be scanned non-invasively. Due to its 3D acquisition nature, computed tomography seems the most suitable for this task. This study presents an automated, computed tomography-based counting method for bee rearings that performs comparably to manually identifying all Osmia cornuta cocoons. The proposed methodology achieves this in an average of 10 seconds per sample, compared to 90 minutes per sample for the manual count, over a total of 12 samples collected around Lake Zurich in 2020. Such an automated bee population evaluation tool is efficient and valuable in combating environmental influences on bee, and potentially other insect, rearings.
Article
Full-text available
To remain competitive, engineers must provide quick, low-cost, and dependable solutions. The advancement of machine intelligence and its application in almost every field has created a need to reduce the human role in image processing while saving time and labor. Lepidopterology is the discipline of entomology dedicated to the scientific analysis of caterpillars and the three butterfly superfamilies. Students studying lepidopterology must generally capture butterflies with nets and dissect them to identify the insect’s family and morphology. This research work aims to assist science students in correctly recognizing butterflies without harming the insects during their analysis. This paper discusses transfer-learning-based neural network models to identify butterfly species. The datasets are collected from the Kaggle website, which contains 10,035 images of 75 different species of butterflies. From the available dataset, 15 unusual species were selected, covering various butterfly orientations, photography angles, butterfly lengths, occlusion, and backdrop complexity. When we analyzed the dataset, we found an imbalanced class distribution among the 15 identified classes, leading to overfitting. The proposed system performs data augmentation to prevent data scarcity and reduce overfitting. The augmented dataset is also used to improve the accuracy of the data models. This research work utilizes transfer learning based on various convolutional neural network architectures such as VGG16, VGG19, MobileNet, Xception, ResNet50, and InceptionV3 to classify the butterfly species into various categories. All the proposed models are evaluated using precision, recall, F-measure, and accuracy. The investigation findings reveal that the InceptionV3 architecture provides an accuracy of 94.66%, superior to all other architectures.
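The core of the transfer-learning workflow described above is a pretrained backbone reused as a fixed feature extractor, with only a new classification head trained on top. The miniature sketch below substitutes a random projection for a real pretrained network such as InceptionV3 and trains a softmax head on synthetic three-class data; every weight, dimension, and data point is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
W_backbone = rng.normal(size=(64, 12))          # frozen "backbone" weights

def backbone(x):
    # Stand-in for a pretrained feature extractor (e.g. InceptionV3);
    # its weights are never updated while the head is trained.
    f = np.maximum(0.0, x @ W_backbone)         # ReLU features
    return f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-9)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy 3-class data standing in for flattened butterfly images (64 dims)
class_means = 4.0 * rng.normal(size=(3, 64))
y = np.repeat(np.arange(3), 20)
X = class_means[y] + rng.normal(size=(60, 64))

F = backbone(X)                                 # features computed once
W_head = np.zeros((12, 3))                      # only the head is trained
for _ in range(500):
    G = softmax(F @ W_head)
    G[np.arange(60), y] -= 1.0                  # softmax cross-entropy gradient
    W_head -= 0.5 * F.T @ G / 60                # plain gradient descent

train_acc = (softmax(F @ W_head).argmax(axis=1) == y).mean()
```

Freezing the backbone is what makes transfer learning cheap: only the small head's parameters are fitted to the new species labels.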
Article
Full-text available
Tenebrio molitor has become the first insect added to the catalogue of novel foods by the European Food Safety Authority due to its rich nutritional value and the low carbon footprint produced during its breeding. The large scale of Tenebrio molitor breeding makes automation of the process, supported by a monitoring system, essential. The present research involves the development of a three-module system for monitoring Tenebrio molitor breeding. The instance segmentation module (ISM) detected individual growth stages (larvae, pupae, beetles) of Tenebrio molitor, and also identified anomalies: dead larvae and pests. The semantic segmentation module (SSM) extracted feed, chitin, and frass from the obtained image. The larvae phenotyping module (LPM) calculated features for both individual larvae (length, curvature, mass, division into segments, and their classification) and the whole population (length distribution). The modules were developed using machine learning models (Mask R-CNN, U-Net, LDA) and were validated on different samples of real data. Synthetic image generation using a collection of labelled objects was used, which significantly reduced the development time of the models and reduced the problems of dense scenes and class imbalance among the considered classes. The obtained results (average F1 > 0.88 for ISM and average F1 > 0.95 for SSM) confirm the great potential of the proposed system.
Preprint
Full-text available
Pest infestations on wheat, corn, soybean, and other crops can cause substantial losses to their yield. Early diagnosis and automatic classification of various insect pest categories are of considerable importance for accurate and intelligent pest control. However, given the wide variety of crop pests and the high degree of resemblance between certain pest species, the automatic classification of pests can be very challenging. To improve classification accuracy on the publicly available D0 dataset with 40 classes, this paper presents a comparative study of ensemble models for crop pest classification. First, six basic learning models, namely Xception, InceptionV3, VGG16, VGG19, ResNet50, and MobileNetV2, are trained on the D0 dataset. Then, the three models with the best classification performance are selected. Finally, two ensemble models, i.e., a linear ensemble named SAEnsemble and a nonlinear ensemble SBPEnsemble, are designed to combine the basic learning models for crop pest classification. The accuracies of SAEnsemble and SBPEnsemble improved by 0.85% and 1.49%, respectively, compared to the basic learning model with the highest accuracy. Comparison of the two proposed ensemble models shows that they perform differently under different conditions. In terms of performance metrics, SBPEnsemble, giving a classification accuracy of 96.18%, is more competitive than SAEnsemble.
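The paper's exact SAEnsemble weighting is not given here, but one common form of linear ensemble is a weighted average of each model's class probabilities, followed by an argmax. A sketch with synthetic stand-ins for the three basic learners' outputs over the 40 D0 classes:

```python
import numpy as np

def ensemble_predict(prob_list, weights=None):
    # Linear (weighted-average) ensemble over basic learners' class
    # probabilities; argmax of the combined distribution is the prediction.
    probs = np.stack(prob_list)               # (n_models, n_samples, n_classes)
    if weights is None:                       # default: equal weights
        weights = np.full(len(prob_list), 1.0 / len(prob_list))
    combined = np.tensordot(weights, probs, axes=1)   # (n_samples, n_classes)
    return combined.argmax(axis=1), combined

# Three hypothetical models scoring six samples over 40 pest classes
rng = np.random.default_rng(2)
model_probs = [rng.dirichlet(np.ones(40), size=6) for _ in range(3)]
labels, combined = ensemble_predict(model_probs)
```

Since the combination is convex, each combined row remains a valid probability distribution; a nonlinear ensemble like SBPEnsemble would instead learn the combination function.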
Article
Full-text available
Image classification has always been a hot research direction in the world, and the emergence of deep learning has promoted the development of this field. Convolutional neural networks (CNNs) have gradually become the mainstream algorithm for image classification since 2012, and the CNN architectures applied to other visual recognition tasks (such as object detection, object localization, and semantic segmentation) are generally derived from network architectures in image classification. In the wake of these successes, CNN-based methods have emerged in remote sensing image scene classification and achieved advanced classification accuracy. In this review, which focuses on the application of CNNs to image classification tasks, we cover their development, from their predecessors up to recent state-of-the-art (SOTA) network architectures. Along the way, we analyze (1) the basic structure of artificial neural networks (ANNs) and the basic network layers of CNNs, (2) the classic predecessor network models, (3) the recent SOTA network algorithms, and (4) a comprehensive comparison of the various image classification methods mentioned in this article. Finally, we summarize the main analysis and discussion of this article and introduce some current trends.
Preprint
Full-text available
In recent years, insect husbandry has seen increased interest as a supply of raw materials and food, and for biological/environmental control. Unfortunately, large insect rearings are susceptible to pathogens, pests and parasitoids, which can spread rapidly due to the confined nature of a rearing system. Thus, it is of interest to quickly and efficiently monitor the spread of such manifestations and the overall population size. Medical imaging techniques could be used for this purpose, as large volumes can be scanned non-invasively. Due to its 3D acquisition nature, computed tomography seems the most suitable for this task. This study presents an automated, computed tomography-based counting method for bee rearings that performs comparably to manually identifying all Osmia cornuta cocoons. The proposed methodology achieves this in an average of 7 minutes per sample, compared to 90 minutes per sample for the manual count, over a total of 12 samples collected around Lake Zurich in 2020. Such an automated bee population evaluation tool is valuable in combating environmental influences on bee, and potentially other insect, rearings.
Article
Full-text available
Insects are the largest, most diverse organism class. Their key role in many ecosystems means that it is important they are identified correctly for effective management. However, insect species identification is challenging and labour-intensive. This has prompted increasing interest in image-based systems for rapid, reliable identification supported by advances in deep learning, computer vision, and sensing technologies. We conducted a systematic literature review (SLR) to analyse and compare primary studies of image-based insect detection and species classification methods. We initially identified 980 studies published between 2010-2020 and selected from these 69 relevant studies using explicitly defined inclusion/exclusion criteria. In this SLR, we conducted a detailed analysis of the primary studies’ dataset properties (i.e. insect species targeted, crops, geographical locations, image capture methods) and insect classification techniques. We provide recommendations for future research based on the gaps our survey identified. We found many studies were conducted in China, the USA, and Brazil, but none in the African continent. The majority of the studies (78.3%) aimed to identify crop pests, mainly of rice and wheat. Only three studies specifically targeted beneficial insects, bee species and predatory species. Insect species targeted by the studies were centred around 10 insect orders out of 28. The analysis of classification methods shows a recent trend toward applying deep learning techniques compared to shallow learning techniques for insect identification. The SLR provides insight into the current state of the art and indicates promising future directions for image-based insect identification and species classification relevant to Computer Science, Agriculture and Ecology research.
Article
Full-text available
A major challenge of agriculture is to improve the sustainability of food production systems in order to provide enough food for a growing human population. Pests and pathogens cause vast yield losses, while crop protection practices raise environmental and human health concerns. Decision support systems provide detailed information on the optimal timing and necessity of crop protection interventions, but are often based on phenology models that are time‐, cost‐, and labor‐intensive to develop. Here, we aim to develop a data‐driven approach for pest damage forecasting, relying on big data and deep learning algorithms. We present a framework for the development of deep neural networks for pest and pathogen damage classification and show their potential for predicting the phenology of damages. As a case study, we investigate the phenology of the pear leaf blister moth (Leucoptera malifoliella, Costa). We employ a set of 52,322 pictures taken during a period of 19 weeks and establish deep neural networks to categorize the images into six main damage classes. Classification tools achieved good performance scores overall, with differences between the classes indicating that the performance of deep neural networks depends on the similarity to other damages and the number of training images. The reconstructed damage phenology of the pear leaf blister moth matches mine counts in the field. We further develop statistical models to reconstruct the phenology of damages with meteorological data and find good agreement with degree‐day models. Hence, our study indicates a yet underexploited potential for data‐driven approaches to enhance the versatility and cost efficiency of plant pest and disease forecasting.
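The degree-day models mentioned at the end of the abstract accumulate daily heat above a base temperature to predict insect phenology. A sketch of the simple averaging method, with an illustrative base temperature rather than the one actually calibrated for the pear leaf blister moth:

```python
def degree_days(t_min, t_max, base=10.0):
    # Simple averaging method: heat accumulated in one day above `base` (degC).
    # Days whose mean temperature is below the base contribute nothing.
    return max(0.0, (t_min + t_max) / 2.0 - base)

def cumulative_degree_days(daily_temps, base=10.0):
    # Sum daily accumulations over a sequence of (Tmin, Tmax) pairs;
    # phenological events are predicted when thresholds are crossed.
    return sum(degree_days(lo, hi, base) for lo, hi in daily_temps)

total = cumulative_degree_days([(8, 18), (12, 24), (5, 9)])  # 3 + 8 + 0
```

Refinements such as the sine-wave method exist for days when the temperature curve crosses the base threshold, but the averaging form above is the usual baseline.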
Preprint
Full-text available
Recently, mosquito-borne diseases have been a significant problem for public health worldwide. These diseases include dengue, Zika and malaria. Reducing disease spread stimulates researchers to develop automatic methods beyond traditional surveillance. The well-known deep convolutional neural network algorithm YOLO v3 was applied to classify mosquito vector species and showed a high average accuracy of 97.7 percent. While one-stage learning methods have provided impressive output for Aedes albopictus, Anopheles sinensis and Culex pipiens, the use of image annotation functions may help boost model capability in the identification of other low-sensitivity (<60 percent) mosquito images for Cu. tritaeniorhynchus and low-precision (<80 percent) Ae. vexans. The optimal data augmentation settings (rotation, contrast, blurring and Gaussian noise) were investigated within the limited amount of biological samples to increase the selected model's efficiency. As a result, the model achieved 96.6 percent sensitivity, 99.6 percent specificity, 99.1 percent accuracy, and 98.1 percent precision. The area under the ROC curve (AUC) of 0.985 confirmed the ability of the model to differentiate between groups. Inter- and intra-rater agreement between the ground truth (entomological labeling) and the best model was studied and compared with assessments by other independent entomologists. Near-perfect agreement between the ground-truth labels and the proposed model (k = 0.950±0.035) was found in both examinations. In comparison, a high degree of consensus was found for entomologists with more than 5-10 years of experience (k = 0.875±0.053 and 0.900±0.048). The proposed YOLO v3 network algorithm is well suited to supporting devices used by entomological technicians during local-area detection.
In the future, introducing appropriate network-model-based methods to extract qualitative and quantitative information will help local workers work more quickly. It may also assist in preparing strategies to help deter the transmission of arthropod-transmitted diseases.
Article
Full-text available
Simple Summary Significant advances in the domestication and artificial rearing techniques for the South American fruit fly, Anastrepha fraterculus (Diptera, Tephritidae), have been achieved since the FAO/IAEA Workshop held in 1996 in Chile. Despite the availability of rearing protocols that allow the production of a high number of flies, they must be optimized to increase insect yields and decrease production costs. In addition, evidence of sexual incompatibility between a long-term mass-reared Brazilian strain and wild populations has been found. To address these issues, this study refined rearing protocols and assessed the suitability of a bisexual A. fraterculus strain established from a target population in southern Brazil for the mass production of sterile flies. Abstract The existing rearing protocols for Anastrepha fraterculus must be reviewed to make the production of sterile flies economically viable for their area-wide application. Additionally, evidence of sexual incompatibility between a long-term mass-reared Brazilian strain and wild populations has been found. To address these issues, this study aimed to refine rearing protocols and to assess the suitability of an A. fraterculus strain for the mass production of sterile flies. A series of bioassays were carried out to evaluate incubation times for eggs in a bubbling bath and to assess the temporal variation of egg production from ovipositing cages at different adult densities. A novel larval diet containing carrageenan was also evaluated. Egg incubation times higher than 48 h in water at 25 °C showed reduced larval and pupal yields. Based on egg production and hatchability, a density of 0.3 flies/cm² can be recommended for adult cages. The diet with carrageenan was suitable for mass production at egg-seeding densities between 1.0 and 1.5 mL of eggs/kg of diet, providing higher insect yields than a corn-based diet from Embrapa.
Even after two years of being reared under the new rearing protocols, no sexual isolation was found between the bisexual strain and wild flies.
Article
Full-text available
The fast and accurate identification of apple leaf diseases is beneficial for disease control and management of apple orchards. An improved network for apple leaf disease classification and a lightweight model for mobile terminal usage were designed in this paper. First, we proposed the SE-DEEP block to fuse the Squeeze-and-Excitation (SE) module with the Xception network to get the SE_Xception network, where the SE module is inserted between the depth-wise convolution and point-wise convolution of the depth-wise separable convolution layer. Therefore, the feature channels from the lower layers could be directly weighted, which made the model more sensitive to the principal features of the classification task. Second, we designed a lightweight network, named SE_miniXception, by reducing the depth and width of SE_Xception. Experimental results show that the average classification accuracy of SE_Xception is 99.40%, which is 1.99% higher than Xception. The average classification accuracy of SE_miniXception is 97.01%, which is 1.60% and 1.22% higher than MobileNetV1 and ShuffleNet, respectively, while its number of parameters is less than those of MobileNet and ShuffleNet. The minimized network decreases memory usage and FLOPs, and reduces recognition time from 15 to 7 milliseconds per image. Our proposed SE-DEEP block provides a choice for improving network accuracy, and our network compression scheme provides ideas for making existing networks lightweight.
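The Squeeze-and-Excitation module described above pools each channel to one scalar, passes the result through a small two-layer bottleneck, and rescales the channels with the resulting sigmoid gates. A NumPy sketch of that computation (the reduction ratio and weights below are illustrative, not those of SE_Xception):

```python
import numpy as np

def se_block(feature_map, W1, W2):
    # Squeeze: global average pool over spatial dims -> one scalar per channel.
    z = feature_map.mean(axis=(0, 1))                 # (C,)
    # Excitation: bottleneck (C -> C/r -> C) ending in a sigmoid gate.
    s = np.maximum(0.0, W1 @ z)                       # ReLU, (C/r,)
    s = 1.0 / (1.0 + np.exp(-(W2 @ s)))               # sigmoid, (C,)
    # Scale: reweight each channel of the input feature map.
    return feature_map * s

rng = np.random.default_rng(3)
x = rng.normal(size=(7, 7, 16))                       # (H, W, C) feature map
W1 = rng.normal(size=(4, 16))                         # reduction ratio r = 4
W2 = rng.normal(size=(16, 4))
y = se_block(x, W1, W2)
```

In the SE-DEEP design this rescaling sits between the depth-wise and point-wise convolutions, so channel weighting happens before channels are mixed.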
Article
Full-text available
The application of artificial intelligence (AI), such as deep learning, in the quality control of grains has the potential to assist analysts in decision making and in improving procedures. Advanced technologies based on X-ray imaging provide markedly easier ways to control insect infestation of stored products, regardless of whether the quality features are visible on the surface of the grains. Here, we applied contrast enhancement algorithms based on peripheral equalization and calcification emphasis on X-ray images to improve the detection of Sitophilus zeamais in maize grains. In addition, we proposed an approach based on convolutional neural networks (CNNs) to identify non-infested and infested classes using three different architectures: (i) Inception-ResNet-v2, (ii) Xception and (iii) MobileNetV2. In general, the prediction models developed based on the MobileNetV2 and Xception architectures achieved higher accuracy (≥0.88) in identifying non-infested grains and grains infested by maize weevil, with a correct classification from 0.78 to 1.00 for validation and test sets. Hence, the proposed approach using enhanced radiographs has the potential to provide precise control of Sitophilus zeamais for safe human consumption of maize grains. The proposed method can automatically recognize food contaminated with hidden storage pests without manual features, which makes it more reliable for grain inspection.
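The peripheral-equalization and calcification-emphasis algorithms used in the study are specialized radiography techniques not reproduced here; as a generic stand-in for the contrast-enhancement step that precedes CNN classification, percentile-based contrast stretching on a synthetic radiograph looks like this:

```python
import numpy as np

def percentile_stretch(img, lo=2, hi=98):
    # Map the [lo, hi] percentile intensity range to [0, 1], clipping the
    # tails; a generic contrast-enhancement step for grey-level radiographs.
    p_lo, p_hi = np.percentile(img, [lo, hi])
    out = (img.astype(float) - p_lo) / max(p_hi - p_lo, 1e-9)
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(4)
radiograph = rng.integers(40, 200, size=(64, 64))     # synthetic grey levels
enhanced = percentile_stretch(radiograph)
```

Stretching the informative intensity range before feeding images to a classifier tends to make subtle internal structures, such as a hidden weevil, easier for the network to pick up.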
Chapter
Full-text available
Exposure to ionizing radiation is currently the method of choice for rendering insects reproductively sterile for area-wide integrated pest management (AW-IPM) programmes that integrate the sterile insect technique (SIT). Gamma radiation from isotopic sources (mainly cobalt-60 or decreasingly caesium-137) is commonly used, but high-energy electrons and X-rays are other practical options. Recently, the development of self-contained low-energy X-ray irradiators for insect sterilization is being encouraged because they only require access to electricity and cooling water, and avoid the complex and costly shipment of radioactive sources across international borders. Insect irradiation is safe and reliable when established safety and quality-assurance guidelines are followed. The key processing parameter is absorbed dose, which must be tightly controlled to ensure that treated insects are sufficiently sterile in their reproductive cells and yet able to compete for mates with wild insects. To that end, accurate dosimetry (measurement of absorbed dose) is critical. Data from more than 5300 studies on irradiation since the 1950s, covering more than 360 arthropod species (IDIDAS database), indicate that the dose needed for sterilization of arthropods varies from less than 5 Gy for the acridid Schistocerca gregaria (Forskål) to 400 Gy or more for some lyonetiid, noctuid, and pyralid moths. Factors such as oxygen level, insect age and stage during irradiation, and many others, influence both the absorbed dose required for sterilization and the viability of irradiated insects. Consideration of these factors in the design of irradiation protocols can help to find a balance between the sterility and competitiveness of insects produced for programmes that release sterile insects. 
Some programmes prefer to apply “precautionary” radiation doses to increase the security margin of sterilization, but this overdosing often lowers competitiveness to the point where the overall induced sterility in the wild population is reduced significantly.
Article
Full-text available
A brain tumor (BT) is a brain abnormality that can arise for various reasons. Unrecognized and untreated BTs increase morbidity and mortality rates. Clinical-level assessment of BT is normally performed using bio-imaging techniques, and MRI-assisted brain screening is one of the universal techniques. The proposed work aims to develop a deep learning architecture (DLA) to support the automated detection of BT using two-dimensional MRI slices. This work proposes the following DLAs to detect the BT: (i) implementing pre-trained DLAs, such as AlexNet, VGG16, VGG19, ResNet50 and ResNet101, with a deep-features-based SoftMax classifier; (ii) pre-trained DLAs with deep-features-based classification using decision tree (DT), k-nearest neighbor (KNN), SVM-linear and SVM-RBF; and (iii) a customized VGG19 network with serially-fused deep features and handcrafted features to improve BT detection accuracy. The experimental investigation was separately executed using Flair, T2 and T1C modality MRI slices, and ten-fold cross validation was implemented to substantiate the performance of the proposed DLA. The results of this work confirm that VGG19 with SVM-RBF helped attain better classification accuracy with Flair (>99%), T2 (>98%), T1C (>97%) and clinical images (>98%).
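The SVM-RBF classifier applied to deep features in the abstract above relies on the Gaussian (RBF) kernel. A sketch of the kernel-matrix computation over hypothetical deep-feature vectors; the feature dimensions and gamma are illustrative:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    # Gaussian kernel matrix K[i, j] = exp(-gamma * ||a_i - b_j||^2),
    # the similarity measure behind an SVM-RBF classifier.
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(sq, 0.0))       # clamp tiny negatives

rng = np.random.default_rng(6)
deep_features = rng.normal(size=(5, 8))               # hypothetical features
K = rbf_kernel(deep_features, deep_features)
```

The SVM then learns a decision function as a weighted sum of kernel values between a test point and the support vectors, which is what lets a linear algorithm separate classes that are not linearly separable in feature space.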
Article
Full-text available
Different prerelease packing densities and lethargy times in the fruit fly parasitoid Diachasmimorpha longicaudata (Ashmead) were evaluated in order to standardize the chilled insect technique for this species. Adults were kept at densities of 0.048, 0.072, 0.096, 0.120, and 0.144 parasitoids/cm² before release in a México tower, where thermal lethargy was induced at a temperature of 2 ± 2°C for 45 min. Samples of parasitoids were collected to evaluate mortality, survival, fecundity, and flight capacity. All densities showed similar mortality for both males (ca. >10%) and females (ca. <7%). There was no effect of density on survival or flight capacity in either sex. On the other hand, fecundity increased with density, reaching 1.66 offspring/♀/day, similar to the control. We conclude that a density of 30,000 pupae per cage (0.144 parasitoids/cm²) is adequate for the massive prerelease packaging of the parasitoid D. longicaudata. Regarding the thermal lethargy period, 180 min under 2 ± 2°C conditions, considered as handling time, does not affect the survival, fecundity, or flight capacity of adults. The results obtained are of great utility for establishing prerelease packaging parameters for D. longicaudata used in the biological control of Tephritidae fruit fly populations.
Article
Full-text available
There are several harmful and yield decreasing arthropod pests, which live within plant tissues, causing almost unnoticeable damage, e.g. Ostrinia nubilalis Hbn., Cydia pomonella L., Acanthoscelides obtectus Say. Their ecological and biological features are rather known. The process leading to the damage is difficult to trace by means of conventional imaging techniques. In this review, optical techniques—X-ray, computer tomography, magnetic resonance imaging, confocal laser scanning microscopy, infrared thermography, near-infrared spectroscopy and luminescence spectroscopy—are described. Main results can contribute to the understanding of the covert pest life processes from the plant protection perspective. The use of these imaging technologies has greatly improved and facilitated the detailed investigation of injured plants. The results provided additional data on biological and ecological information as to the hidden lifestyles of covertly developing insects. Therefore, it can greatly contribute to the realisation of integrated pest management criteria in practical plant protection.
Article
Full-text available
Sex determination of silkworm pupae is important for the silkworm industry. Multivariate analysis methods have been widely applied to hyperspectral imaging data for classification; however, these methods require essential steps such as spectral preprocessing or feature extraction, which are not easily determined. Convolutional neural networks (CNNs), which have been widely employed in image recognition, can effectively learn interpretable representations of a sample without ad-hoc preprocessing steps. The number of silkworm species can run to hundreds, and conventional classifiers built on one species of silkworm pupae do not perform well when applied to other species that did not participate in model building, resulting in poor generalization ability. In this study, a CNN model was trained to automatically identify the sex of silkworm pupae from different years and species based on their hyperspectral spectra. The results were compared with frequently used conventional machine learning classifiers, including support vector machine (SVM) and K nearest neighbors (KNN). CNN outperformed SVM and KNN in terms of accuracy when applied to the raw spectra, reaching 98.03%; however, its performance decreased to 95.09% when combined with the preprocessed data. Principal component analysis (PCA) was then adopted to reduce data dimensionality and extract features, and CNN again gave higher accuracy than SVM and KNN on the PCA features. The discussion reveals that the CNN had high generalization ability, classifying silkworm pupae from different species with rather good performance. This demonstrates that HSI technology combined with a CNN is useful for determining the sex of silkworm pupae.
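The PCA-plus-classifier baseline mentioned above can be sketched in plain numpy. This is an illustrative toy on synthetic data, not the study's hyperspectral pipeline; the function names are assumptions.

```python
import numpy as np

def pca_fit_transform(X, n_components):
    """Centre the spectra and project them onto the top principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Rows of Vt are the principal axes (SVD of the centred data matrix)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    return Xc @ components.T, mean, components

def knn_predict(x, X_train, y_train, k=3):
    """Plain k-nearest-neighbour majority vote in the reduced feature space."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    vals, counts = np.unique(y_train[nearest], return_counts=True)
    return vals[np.argmax(counts)]
```

In the paper's setting, `X` would hold one spectrum per pupa and the PCA scores would feed the CNN, SVM, or KNN classifier.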
Article
Full-text available
Background: In this study, we exploited the Inception-v3 deep convolutional neural network (DCNN) model to differentiate cervical lymphadenopathy using cytological images. Methods: A dataset of 80 cases was collected through the fine-needle aspiration (FNA) of enlarged cervical lymph nodes, which consisted of 20 cases of reactive lymphoid hyperplasia, 24 cases of non-Hodgkin's lymphoma (NHL), 16 cases of squamous cell carcinoma (SCC), and 20 cases of adenocarcinoma. The images were cropped into fragmented images and divided into a training dataset and a test dataset. Inception-v3 was trained to make differential diagnoses and then tested. The features of misdiagnosed images were further analysed to discover the features that may influence the diagnostic efficiency of such a DCNN. Results: A total of 742 original images were derived from the cases, from which a total of 7,934 fragmented images were cropped. The classification accuracies for the original images of reactive lymphoid hyperplasia, NHL, SCC and adenocarcinoma were 88.46%, 80.77%, 89.29% and 100%, respectively. The total accuracy on the test dataset was 89.62%. Three fragmented images of reactive lymphoid hyperplasia and three fragmented images of SCC were misclassified as NHL. Three fragmented images of NHL were misclassified as reactive lymphoid hyperplasia, one was misclassified as SCC, and one was misclassified as adenocarcinoma. Conclusions: In summary, after training with a large dataset, the Inception-v3 DCNN model showed great potential in facilitating the diagnosis of cervical lymphadenopathy using cytological images. Analysis of the misdiagnosed cases revealed that NHL was the most challenging cytology type for DCNN to differentiate.
Article
Full-text available
In recent years, deep learning has garnered tremendous success in a variety of application domains. This new field of machine learning has been growing rapidly and has been applied to most traditional application domains, as well as some new areas that present more opportunities. Different methods have been proposed based on different categories of learning, including supervised, semi-supervised, and unsupervised learning. Experimental results show state-of-the-art performance using deep learning, compared to traditional machine learning approaches, in image processing, computer vision, speech recognition, machine translation, art, medical imaging, medical information processing, robotics and control, bioinformatics, natural language processing, cybersecurity, and many other fields. This paper presents a brief survey of the advances that have occurred in the area of Deep Learning (DL), starting with the Deep Neural Network (DNN). The survey goes on to cover the Convolutional Neural Network (CNN); the Recurrent Neural Network (RNN), including Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU); the Auto-Encoder (AE); the Deep Belief Network (DBN); the Generative Adversarial Network (GAN); and Deep Reinforcement Learning (DRL). Additionally, we discuss recent developments, such as advanced variant DL techniques based on these approaches. This work considers most of the papers published after 2012, when the modern history of deep learning began. DL approaches that have been explored and evaluated in different application domains are also included, as are recently developed frameworks, SDKs, and benchmark datasets used for implementing and evaluating deep learning approaches. Some surveys have been published on DL using neural networks, along with a survey on Reinforcement Learning (RL); however, those papers have not discussed individual advanced techniques for training large-scale deep learning models or recently developed generative models.
Article
Full-text available
With the rapid development of smart manufacturing, data-driven fault diagnosis has attracted increasing attention. As one of the most popular methods applied in fault diagnosis, deep learning (DL) has achieved remarkable results. However, because the volume of labeled samples in fault diagnosis is small, the DL models used there are shallow compared with the convolutional neural networks used in other areas (such as ImageNet), which limits their final prediction accuracy. In this research, a new TCNN(ResNet-50) with a depth of 51 convolutional layers is proposed for fault diagnosis. Using transfer learning, TCNN(ResNet-50) applies a ResNet-50 trained on ImageNet as a feature extractor for fault diagnosis. First, a signal-to-image method is developed to convert time-domain fault signals to RGB image format, the input data type of ResNet-50. Then, the new TCNN(ResNet-50) structure is proposed. Finally, the proposed TCNN(ResNet-50) was tested on three datasets: the bearing damage dataset provided by the KAT data center, the motor bearing dataset provided by Case Western Reserve University (CWRU), and a self-priming centrifugal pump dataset. It achieved state-of-the-art results, with prediction accuracies as high as 98.95% ± 0.0074, 99.99% ± 0, and 99.20% ± 0, demonstrating that TCNN(ResNet-50) outperforms other DL models and traditional methods.
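The signal-to-image step that feeds time-domain fault signals into an ImageNet-pretrained ResNet can be sketched as follows. This is a hedged illustration; the authors' exact reshaping and normalization scheme may differ.

```python
import numpy as np

def signal_to_image(signal, size=64):
    """Min-max normalise a 1-D time-domain signal to [0, 255], reshape it into
    a size x size grayscale image, and stack it to 3 channels so that a
    ResNet-style network pretrained on RGB images can consume it."""
    x = np.asarray(signal, dtype=float)[: size * size]
    if x.size < size * size:                    # zero-pad signals that are too short
        x = np.pad(x, (0, size * size - x.size))
    lo, hi = x.min(), x.max()
    rng = hi - lo if hi > lo else 1.0           # avoid division by zero
    x = (x - lo) / rng * 255.0
    img = x.reshape(size, size).astype(np.uint8)
    return np.stack([img, img, img], axis=-1)   # H x W x 3 RGB-like array
```

The resulting array has the layout a vision backbone expects, so each vibration window becomes one training image.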
Article
Full-text available
Insects cause major losses in stored food grains; besides direct damage, their pestilential activities affect marketability as well as nutritional value. Early detection and monitoring of insects in stored food grains is therefore necessary for applying corrective actions. Visual inspection, probe sampling, insect traps, Berlese funnels, visual lures, pheromone devices, etc., are some of the popular methods largely used in commercial granaries and grain storage establishments. More recently, electronic noses, solid phase micro-extraction, thermal imaging, acoustic detection, etc., have been reported to be successful in detecting insects. The capability of in-situ early detection and monitoring, cost, reliability, and labor requirements are the major factors considered when selecting a method. Detecting hidden infestation, whose population may be many times higher than that of free-living insects, is an important concern for mitigating losses in bulk storage warehouses, enabling early action such as fumigation or disposing of the grain. This paper reviews some of the widely used methods for early detection of insect pestilential activity in stored food grains, as well as some novel technologies, with an emphasis on the acoustic method, which has good commercial potential.
Article
Full-text available
Deep learning constitutes a recent, modern technique for image processing and data analysis, with promising results and large potential. As deep learning has been successfully applied in various domains, it has recently entered also the domain of agriculture. In this paper, we perform a survey of 40 research efforts that employ deep learning techniques, applied to various agricultural and food production challenges. We examine the particular agricultural problems under study, the specific models and frameworks employed, the sources, nature and pre-processing of data used, and the overall performance achieved according to the metrics used at each work under study. Moreover, we study comparisons of deep learning with other existing popular techniques, in respect to differences in classification or regression performance. Our findings indicate that deep learning provides high accuracy, outperforming existing commonly used image processing techniques.
Article
Full-text available
The oriental fruit fly, Bactrocera dorsalis, is an important agricultural pest, and biological control is one of the most effective control methodologies. We investigated the molecular response of the fruit fly to parasitism by the larval parasitoid Diachasmimorpha longicaudata using two-dimensional gel electrophoresis and mass spectroscopy, identifying 285 differentially expressed protein spots (109 proteins) during parasitism. The molecular processes affected by parasitism varied at different time points during development. Transferrin and muscle-specific protein 20 were the only two differentially expressed proteins with a role in host immunity 24 h after parasitism. Developmental and metabolic proteins from the parasitoid (transferrin and enolase) were up-regulated to ensure establishment and early development of the parasitoid 48 h post parasitism. At 72 h after parasitism, larval cuticle proteins, transferrin, and CREG1 were overexpressed to support parasitoid survival, while host metabolic proteins and parasitoid regulatory proteins were down-regulated. Host development slowed while parasitoid development accelerated at 96 h after parasitism. All developmental, regulatory, structural, and metabolic proteins were expressed at their optimum at 120 h post parasitism, when host development was reduced and metabolic and regulatory proteins were strongly involved. Host development deteriorated further at 144 h after parasitism; enolase and CREG1 were indicators of parasitoid survival. Hexamerin and transferrin from the parasitoid peaked at 168–216 h after parasitism, strongly indicating that the parasitoid would survive. This study represents the first report revealing the molecular players involved in the interaction between this host and parasitoid.
Article
Insect pests pose a significant and increasing threat to agricultural production worldwide. However, most existing recognition methods are built upon well-known convolutional neural networks, which limits the possibility of improving pest recognition accuracies. This research attempts to overcome this challenge from a novel perspective, constructing a simplified but very useful network for effective insect pest recognition by combining transformer architecture and convolution blocks. First, the representative features are extracted from the input image using a backbone convolutional neural network. Second, a new transformer attention-based classification head is proposed to sufficiently utilize spatial data from the features. With that, we explore different combinations for each module in our model and abstract our model into a simple and scalable architecture; we introduce more effective training strategies, pretrained models and data augmentation methods. Our model's performance was evaluated on the IP102 benchmark dataset and achieved classification accuracies of 74.897% and 75.583% with minimal implementation costs at image resolutions of 224 × 224 pixels and 480 × 480 pixels, respectively. Our model also attains accuracies of 99.472% and 97.935% on the D0 dataset and Li's dataset, respectively, with an image resolution of 224 × 224 pixels. The experimental results demonstrate that our method is superior to the state-of-the-art methods on these datasets. Accordingly, the proposed model can be deployed in practice and provides additional insights into the related research.
Article
The Mexican fruit fly (Anastrepha ludens Loew, Diptera: Tephritidae) and the Mediterranean fruit fly (Ceratitis capitata Wiedemann, Diptera: Tephritidae) are among the world's most damaging pests of fruits and vegetables. The Sterile Insect Technique (SIT), which consists of the mass production, irradiation, and release of insects in affected areas, is currently used for their control. The appropriate time for irradiation, one to two days before adult emergence, is determined from the color of the eyes, which varies according to the physiological age of the pupae. Age is checked visually, which is subjective and depends on the technician's skill. Here, image processing and machine learning techniques were implemented as a method to determine pupal development from eye color. First, Multi Template Matching (MTM) was used to correctly crop the eye section of pupae in 96.2% of images for A. ludens and 97.5% of images for C. capitata. Then, supervised machine learning algorithms were applied to the cropped images to classify physiological age according to eye color. Algorithms based on Inception v1 correctly identified the physiological age of maturity at 2 d before emergence, with 75.0% accuracy for A. ludens and 83.16% for C. capitata. Supervised machine learning algorithms based on neural networks could thus support the determination of the physiological age of pupae from images, reducing human error and uncertainty in deciding when to irradiate. A user interface and an automated process could be further developed based on the data obtained in this study.
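The Multi Template Matching crop step can be approximated with a brute-force normalized cross-correlation in plain numpy. This is a slow didactic sketch, not the study's MTM implementation; the function names are assumptions.

```python
import numpy as np

def match_template(image, template):
    """Return the (row, col) of the best normalised-cross-correlation match
    of `template` inside `image` (both 2-D grayscale arrays)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            # Cosine similarity of mean-centred patch and template (NCC)
            score = (p * t).sum() / (np.linalg.norm(p) * np.linalg.norm(t) + 1e-12)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

def crop_eye_region(image, template):
    """Crop the patch of `image` that best matches the eye template."""
    r, c = match_template(image, template)
    th, tw = template.shape
    return image[r:r + th, c:c + tw]
```

In practice one would use a vectorized implementation (e.g. OpenCV's `matchTemplate`) and several eye templates per species, keeping only crops whose score clears a threshold.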
Article
Fruit fly pests seriously affect the quality and safety of various melon, fruit, and vegetable crops. Many farmers lack sufficient knowledge of the level of pest occurrence, which leads to the over-use of pesticides, resulting in environmental pollution and quality degradation of agricultural products. The combination of artificial intelligence and agricultural technology can continuously and dynamically monitor pests in orchards, help researchers and fruit farmers obtain pest data in time, reduce the use of labor and pesticides, and achieve scientific early warning and prevention of pests. In this paper, a sexual attractant is placed in a pest trap bottle; the optical flow method, U-Net semantic segmentation, and the YOLOv5 algorithm with SE, CBAM, CA, and ECA attention mechanisms are used to detect and count live Bactrocera cucurbitae on the bottle surface; the Hough circle detection method is then used to locate the entrance of the trap bottle; and finally the number of B. cucurbitae entering the trap bottle is counted by combining the positions of the flies with the position of the entrance. The experimental results show that the accuracy of counting B. cucurbitae on the surface of the trap bottle reaches 93.5%, and the accuracy of counting B. cucurbitae entering the trap bottle reaches 94.3%, allowing real-time, dynamic monitoring of pest numbers in the orchard.
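The final counting step, combining detected fly positions with the entrance circle found by Hough detection, reduces to a point-in-circle test. A minimal sketch (the function name and signature are assumptions, not the paper's code):

```python
import numpy as np

def count_flies_entering(fly_centres, entrance_centre, entrance_radius):
    """Count detected flies whose centre lies inside the circular trap
    entrance located by Hough circle detection."""
    centres = np.asarray(fly_centres, dtype=float)
    if centres.size == 0:
        return 0
    # Euclidean distance of each fly centre from the entrance centre
    d = np.linalg.norm(centres - np.asarray(entrance_centre, dtype=float), axis=1)
    return int(np.sum(d <= entrance_radius))
```

Fly centres would come from the YOLOv5 detector and the entrance circle from Hough detection; the counting logic itself is just this geometric test.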
Article
Data on individual feed intake of dairy cows, an important variable for farm management, are currently unavailable in commercial dairies. A real-time machine vision system, including models able to adapt to multiple types of feed, was developed to predict individual feed intake of dairy cows. Using a Red-Green-Blue-Depth (RGBD) camera, images of feed piles of two different feed types (lactating cows' feed and heifers' feed) were acquired in a research dairy farm for a range of feed weights under varied configurations and illuminations. Several models were developed to predict individual feed intake: two Transfer Learning (TL) models based on Convolutional Neural Networks (CNNs), one CNN model trained on both feed types, and one combined Multilayer Perceptron and CNN model trained on both feed types along with categorical data. We also implemented a statistical method to compare these four models using a Linear Mixed Model and a Generalised Linear Mixed Model, showing that all models are significantly different. The TL models, trained on both feed types, performed best, achieving Mean Absolute Errors (MAEs) of 0.12 and 0.13 kg per meal and RMSEs of 0.18 and 0.17 kg per meal for the two feeds when tested on varied data collected manually in a cowshed. Testing the model on actual cows' meals automatically collected by the system in the cowshed resulted in an MAE of 0.14 kg per meal and an RMSE of 0.19 kg per meal. These results suggest the potential of measuring individual feed intake of dairy cows in a cowshed using RGBD cameras and deep learning models that can be applied and tuned to different types of feed.
Chapter
Large scale use of augmentative biological control (the type of biological control using beneficial organisms that are mass-reared for seasonal releases in large numbers) started about 100 years ago. Since the 1970s, the production of ABC agents has moved from a cottage industry that produced a few thousand predators or parasitoids per week to professional research and production facilities able to provide millions of beneficial arthropods weekly. In the past five decades, many efficient agents have been identified, and quality control protocols, mass production systems, shipment and release methods, and adequate guidance for farmers have been developed. Currently, 450 species of biological control agents are commercially available for the control of pests on more than 45 million hectares of various crop types, and use of augmentative biological control is growing by about 15% per year. The popularity of this type of biological control is the result of several inherent positive characteristics: arthropod biological control agents are safe for the environment, result in a healthier work milieu, seldom cause phytotoxic effects, and pests rarely develop resistance against these agents. Recent regulatory challenges, in particular those related to searching for exotic natural enemies, seriously frustrate exploration projects for new biological control agents. However, policy measures aimed at reduced pesticide use, and retailer and consumer demands for pesticide-free food, will stimulate further increase of augmentative biological control.
Article
Advances in artificial intelligence, computer vision, and high-performance computing have enabled the creation of efficient solutions to monitor pests and identify plant diseases. In this context, we present InsectCV, a system for automatic insect detection in the lab from scanned trap images. This study considered the use of Moericke-type traps to capture insects in outdoor environments. Each sample can contain hundreds of insects of interest, such as aphids, parasitoids, thrips, and flies; the presence of debris, superimposed objects, and insects in varied poses is also common. To develop this solution, we used a set of 209 grayscale images containing 17,908 labeled insects. We applied the Mask R-CNN method to generate the model and created three web services for image inference. Model training employed transfer learning and data augmentation techniques. This approach defined two new parameters to adjust the false-positive ratio by class and to change the anchor side lengths of the Region Proposal Network, improving the accuracy of small-object detection. Model validation used a total of 580 images obtained from field-exposed traps located at Coxilha and Passo Fundo, north of Rio Grande do Sul State, during the wheat crop seasons of 2019 and 2020. Compared to manual counting, the coefficients of determination (R² = 0.81 for aphids and R² = 0.78 for parasitoids) show a good-fitting model for identifying the fluctuation of population levels of these insects, with only tiny deviations of the growth curve in the initial phases and good preservation of the curve shape. In samples with hundreds of insects and debris generating more connections or overlaps, model performance was affected by an increase in false negatives. Comparative tests between InsectCV and manual counting performed by a specialist suggest that the system is sufficiently accurate to guide warning systems for integrated pest management of aphids. We also discuss the implications of adopting this tool and the gaps that require further development.
Preprint
In this paper, a detection tool is built for detecting and identifying crop diseases and pests at their earliest stage. Various deep learning architectures were evaluated to see which would yield the most accurate and efficient detection model: a plain Convolutional Neural Network, VGG16, InceptionV3, and Xception. VGG16, InceptionV3, and Xception are pre-trained models based on the CNN architecture and follow the concept of transfer learning, a technique that takes the learnings of models previously trained on a base dataset and applies them to the present dataset, giving rapid results and improved performance. Two plant datasets were used, one for diseases and one for pests. The results of the algorithms were then compared; the most successful was the Xception model, which obtained 82.89% accuracy for diseases and 77.9% for pests.
Article
Machine learning (ML) techniques excel at forecasting, clustering, and classification tasks, making them valuable for various aspects of mosquito control. In this literature review, we selected 120 papers relevant to the current state of ML for mosquito control in urban settings. The reviewed work covers several different methodologies, objectives, and evaluation criteria from various environmental contexts. We first divided the existing papers into geospatial, visual, or audio categories. For each category, we analyzed the machine learning pipeline, from dataset creation to model performance. We conclude with a discussion of the challenges and opportunities for further research. While the reviewed ML methods in mosquito control are promising, we recommend a) increased use of crowdsourced and citizen science data, b) a standardized and open ML pipeline for reproducible results, and c) research that incorporates advances in ML. With these suggestions, ML techniques could lead to effective mosquito control in urban environments.
Article
Most animal species on Earth are insects, and recent reports suggest that their abundance is in drastic decline. Although these reports come from a wide range of insect taxa and regions, the evidence to assess the extent of the phenomenon is sparse. Insect populations are challenging to study, and most monitoring methods are labor intensive and inefficient. Advances in computer vision and deep learning provide potential new solutions to this global challenge. Cameras and other sensors can effectively, continuously, and noninvasively perform entomological observations throughout diurnal and seasonal cycles. The physical appearance of specimens can also be captured by automated imaging in the laboratory. When trained on these data, deep learning models can provide estimates of insect abundance, biomass, and diversity. Further, deep learning models can quantify variation in phenotypic traits, behavior, and interactions. Here, we connect recent developments in deep learning and computer vision to the urgent demand for more cost-efficient monitoring of insects and other invertebrates. We present examples of sensor-based monitoring of insects. We show how deep learning tools can be applied to exceptionally large datasets to derive ecological information and discuss the challenges that lie ahead for the implementation of such solutions in entomology. We identify four focal areas, which will facilitate this transformation: 1) validation of image-based taxonomic identification; 2) generation of sufficient training data; 3) development of public, curated reference databases; and 4) solutions to integrate deep learning and molecular tools.
Article
Insects are among the important causes of significant losses in crops such as rice, wheat, corn, soybeans, sugarcane, chickpeas, and potatoes. Identifying insect species at an early stage is crucial so that the necessary precautions can be taken to keep losses low. However, accurate identification of the various types of crop insects is a challenging task for farmers, owing to the similarities among insect species and farmers' lack of expert knowledge. To address this problem, computerized methods, especially those based on Convolutional Neural Networks (CNNs), can be employed. CNNs have been used successfully in many image classification problems due to their ability to learn data-dependent features automatically. In this study, seven different pre-trained CNN models (VGG-16, VGG-19, ResNet-50, Inception-V3, Xception, MobileNet, SqueezeNet) were modified and re-trained using appropriate transfer learning and fine-tuning strategies on the publicly available D0 dataset with 40 classes. The top three performing CNN models, Inception-V3, Xception, and MobileNet, were then ensembled via a sum of maximum probabilities strategy to increase classification performance; this model was named SMPEnsemble. These models were subsequently ensembled using weighted voting, with weights determined by a genetic algorithm that takes the success rate and predictive stability of the three CNN models into account; this model was named GAEnsemble. GAEnsemble achieved the highest classification accuracy of 98.81% on the D0 dataset. To test the robustness of the ensembled model, without changing the initial best performing CNN models on D0, the process was repeated on two more datasets: the SMALL dataset with 10 classes and the IP102 dataset with 102 classes. GAEnsemble's accuracy was 95.15% on the SMALL dataset and 67.13% on IP102. In terms of performance metrics, GAEnsemble is competitive with the literature on each of these three datasets.
Article
The present study describes the population dynamics of Bactrocera dorsalis (Hendel) in Manica Province (Mozambique), provides estimates of mango yield, economic losses and the economic injury level. From September 2014 to August 2015, methyl eugenol-baited traps were placed in three selected orchards (Trangapasso, Produssola and Pandafarm) and mango fruits were collected and incubated. Data on potential yield was estimated in each of the orchards and the owners were interviewed to collect information regarding mango selling prices. Differences in B. dorsalis catches, percentage of damaged fruits and infestation rates were determined across locations as well as sampling dates. The economic injury level (EIL) was estimated for each orchard. The populations of B. dorsalis started to build up in November 2014 reaching an abundance peak in January 2015 (40.26 flies/trap/day). Population growth was correlated to average temperature, month and host availability. The highest percentage of damaged fruits (77.16 %) and the highest rates of infestation (41.27 %, B. dorsalis/kg) were recorded during January. Average yield losses associated with fruit flies were estimated at 5.65 t/ha with a financial loss of USD 3428.97/ha. EIL varied among orchards and was estimated at 33.14 B. dorsalis/trap/week on average.
Article
The current livestock management landscape is transitioning to a high-throughput digital era where large amounts of information captured by systems of electro-optical, acoustical, mechanical, and biosensors is stored and analyzed on a daily and hourly basis, and actionable decisions are made based on quantitative and qualitative analytic results. While traditional animal breeding prediction methods have been used with great success until recently, the deluge of information starts to create a computational and storage bottleneck that could lead to negative long-term impacts on herd management strategies if not handled properly. A plethora of machine learning approaches, successfully used in various industrial and scientific applications, made their way in the mainstream approaches for livestock breeding techniques, and current results show that such methods have the potential to match or surpass the traditional approaches, while most of the time they are more scalable from a computational and storage perspective. This article provides a succinct view on what traditional and novel prediction methods are currently used in the livestock breeding field, how successful they are, and how the future of the field looks in the new digital agriculture era.
Chapter
The motivation behind this paper is to build customized applications on the easy-to-use Google Cloud Platform, using 18,000 pictures of butterflies from the Saturniidae, Nymphalidae, and Papilionidae families to distinguish butterfly categories, fruits or larval host plants, and pupal deformities, incorporating trailblazing technology to give profitable insights into the operation. The research techniques used color filters, blob filters, and shape detection, with transfer learning applied to the prepared images for machine vision in a CNN. The procedure uses computer vision to scan pupal quality before the pupae are delivered to the buyers' market. The pictures were (a) uploaded and labeled, (b) trained, (c) evaluated, and (d) tested in AutoML Vision. For the first model, the average prediction confidence (precision) was 97.1% with a recall of 91.3%, from 429 images with 11 labels and 4 test images appearing in the top 5 labels in the evaluation. The next model was created and trained on about 398 images with 9 labels and 4 test images; at a score threshold of 0.5, precision (high precision means fewer false positives) was 90.9% and recall (high recall means fewer false negatives) was 85.1%. The purpose of the study was to provide computerized instruments for product search that communicate with the GCP Cloud Vision API through the Corvid Node Runtime on the e-commerce website of Insects Connection, incorporating machine vision and sensor fusion into products to automate retail purchase and payment methods.
Article
Fine-grained image classification divides the sub-categories of traditional image classification in more detail, achieving a more sophisticated identification of objects, and it is a very challenging problem in the field of computer vision. By analyzing existing fine-grained image classification algorithms and the Xception model, we propose applying the Xception model to the fine-grained image classification task. The convolutional layers are initialized with parameters pre-trained on ImageNet classification. We then resize the images, transform the data type, normalize values, and randomly initialize the classifier. Finally, the network is fine-tuned. Our method obtains 71.0%, 89.9%, and 91.4% per-image accuracy on the CUB200-2011, Flower102, and Stanford Dogs datasets, respectively. The experimental results show that the Xception model has good generalization ability in fine-grained image classification. Because it does not need additional annotation information such as object bounding boxes and part annotations, the Xception model has good versatility and robustness in fine-grained image classification.
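The recipe described above (pretrained convolutional layers kept fixed, a randomly initialized classifier trained on top) can be sketched in miniature without a deep-learning framework. In this illustrative stand-in, two well-separated 2-D clusters play the role of the frozen backbone's pooled features, and a logistic-regression head, the only trainable part, is fit by gradient descent; all names and data are assumptions, not the paper's pipeline:

```python
import math
import random

random.seed(0)

# Stand-in for frozen pretrained features: two separable 2-D clusters
# (a real pipeline would use Xception's pooled activations instead).
feats, labels = [], []
for _ in range(100):
    y = random.randint(0, 1)
    cx = 2.0 if y else -2.0
    feats.append((cx + random.gauss(0, 0.5), random.gauss(0, 0.5)))
    labels.append(y)

# Randomly initialized classifier head: logistic regression trained by
# batch gradient descent on the frozen features.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(200):
    gw = [0.0, 0.0]
    gb = 0.0
    for (x1, x2), y in zip(feats, labels):
        p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        gw[0] += (p - y) * x1
        gw[1] += (p - y) * x2
        gb += p - y
    w[0] -= lr * gw[0] / len(feats)
    w[1] -= lr * gw[1] / len(feats)
    b -= lr * gb / len(feats)

# Training accuracy of the fine-tuned head on the frozen features.
acc = sum(
    ((w[0] * x1 + w[1] * x2 + b > 0) == (y == 1))
    for (x1, x2), y in zip(feats, labels)
) / len(feats)
```

Keeping the backbone fixed is what lets the method work without extra annotation such as bounding boxes: only the small head is learned from the fine-grained labels.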
Article
Several fruit fly species are invasive pests that damage quality fruits in horticultural crops and cause significant value losses. The management of fruit flies is challenging due to their biology, adaptation to various regions, and wide range of hosts. We assessed the historical and current approaches of fruit fly management research worldwide, and we established the current knowledge of fruit flies by systematically reviewing research on monitoring and control tactics, according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. We performed a systematic review of research outputs from 1952 to 2017, by developing an a priori defined set of criteria for subsequent replication of the review process. The review retrieved 4,900 publications, of which 533 matched the criteria. The selected studies were conducted in 41 countries for 43 fruit fly species of economic importance. Although 46% of the studies were from countries of North America, analysis of the control tactics and studied species showed a wide geographical distribution. Biological control was the most commonly studied control tactic (29%), followed by chemical control (20%), behavioral control, including SIT (18%), and quarantine treatments (17%). Studies on fruit flies continue to be published and provide useful knowledge in the areas of monitoring and control tactics. The limitations and prospects for fruit fly management were analyzed, and we highlight recommendations that will improve future studies.
Technical Report
In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively.
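The depth increase described in this report rests on stacking small 3×3 filters: each additional stride-1 3×3 layer grows the effective receptive field by 2, so two stacked layers see as much as a single 5×5 filter and three as much as a 7×7, while using fewer weights and more non-linearities. A short arithmetic sketch (function names are illustrative):

```python
def receptive_field(num_layers, kernel=3, stride=1):
    """Receptive field of a stack of identical conv layers."""
    rf, jump = 1, 1
    for _ in range(num_layers):
        rf += (kernel - 1) * jump  # each layer widens the field
        jump *= stride             # stride > 1 would compound the growth
    return rf

def params(num_layers, kernel, channels):
    """Weight count for a stack of convs with `channels` in and out maps."""
    return num_layers * kernel * kernel * channels * channels

# Two 3x3 layers cover a 5x5 field, three cover 7x7 ...
assert receptive_field(2) == 5 and receptive_field(3) == 7
# ... with fewer weights: 3*(3*3*C*C) = 27C^2 versus 7*7*C*C = 49C^2.
saving = params(3, 3, 64) / params(1, 7, 64)
```

This parameter saving (27/49 of the weights for the same coverage) is what makes 16-19 weight layers tractable.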
Article
Agricultural pests severely affect both agricultural production and the storage of crops. To prevent damage caused by agricultural pests, the pest category needs to be correctly identified and targeted control measures need to be taken; therefore, it is important to develop an agricultural pest identification system based on computer vision technology. To achieve pest identification against complex farmland backgrounds, a pest identification method is proposed that uses deep residual learning. Compared to support vector machines and traditional BP neural networks, the pest image recognition accuracy of this method is noticeably improved on complex farmland backgrounds. Furthermore, in comparison to plain deep convolutional neural networks such as Alexnet, the recognition performance was further improved after optimization with deep residual learning. A classification accuracy of 98.67% was achieved for 10 classes of crop pest images with complex farmland backgrounds. Accordingly, the method has high practical value and can be integrated with currently used agricultural networking systems for actual agricultural pest control tasks.
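The deep residual learning used above improves on plain networks such as Alexnet by adding skip connections: each block computes y = ReLU(x + F(x)), so the stacked layers only need to learn a residual correction F(x) rather than the full mapping. A minimal pure-Python sketch of one such block, with tiny illustrative weights (not the paper's architecture):

```python
import random

random.seed(1)

def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, v):
    return [sum(wij * xj for wij, xj in zip(row, v)) for row in W]

def residual_block(x, W1, W2):
    """y = ReLU(x + W2 ReLU(W1 x)): the skip connection carries x through,
    so the weight layers only learn a residual correction F(x)."""
    f = matvec(W2, relu(matvec(W1, x)))
    return relu([xi + fi for xi, fi in zip(x, f)])

d = 4
x = [0.5, -1.0, 2.0, 0.25]
W1 = [[random.gauss(0, 0.1) for _ in range(d)] for _ in range(d)]
W2 = [[random.gauss(0, 0.1) for _ in range(d)] for _ in range(d)]
y = residual_block(x, W1, W2)

# With zero weights the block reduces to the identity (after ReLU),
# so adding residual blocks cannot make a deeper network harder to fit.
Z = [[0.0] * d for _ in range(d)]
identity = residual_block(x, Z, Z)
```

That identity fallback is the intuition for why the residual variant outperforms the plain deep network reported in the abstract.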