Probabilistic Neural Network for the Automated
Identification of the Harlequin Ladybird (Harmonia
Axyridis)
M. Z. Ayob1 and E. D. Chesmore2
1 Universiti Kuala Lumpur British Malaysian Institute
Gombak, Selangor, Malaysia
mohdzaki@bmi.unikl.edu.my
2 Department of Electronics
University of York
York YO10 5DD, UK
david.chesmore@york.ac.uk
Abstract. This paper describes recent work in the UK to automate the identification of the harlequin ladybird (Harmonia axyridis) using color images. The automation process involves image processing and the use of a probabilistic neural network (PNN) as the classifier, with the aim of reducing the number of color images to be examined by entomologists by pre-sorting the images into correct, questionable and incorrect species. Two major sets of features have been extracted: color and geometrical measurements. Experimental results revealed more than 75% class match for the identification of taxa with similarly colored spots.
Keywords: automated identification, feature extraction, image processing,
probabilistic neural network
1 Introduction
The automated identification of ladybirds from color images has considerable potential for biodiversity monitoring and has not been previously explored. In spite of its many advantages, automating the identification process is not trivial. Traditionally, an expert manually obtains details from the elytra and other features for species identification. Dichotomous keys, based on morphological criteria such as color and size, have been most commonly used. These physical traits are too small to examine without magnification: typical ladybirds are around 2-8 mm long, and harlequin ladybirds around 6-8 mm [1]. Furthermore, most ladybird species occur in a variety of color forms and spot patterns, and both intra-species and inter-species variation can be very large, making species identification a difficult skill to master. These factors motivate the development of an automated system for ladybird identification, and one objective of this research is to capture specific features that make pre-sorting of ladybird species easier. In this paper the focus is the identification of the harlequin ladybird (Harmonia axyridis), an alien ladybird species found in the UK.
2 Background
The harlequin ladybird is a voracious species that has been invading the UK since 2004. It feeds on aphids and also preys on native ladybird species [2-3]. The beetle has now spread to most parts of England and Wales and is receiving considerable attention due to its potential impact on ecological and biological balance [4]. To make matters worse, there has been a general decline in the taxonomic workforce, which has contributed to the taxonomic impediment to biodiversity studies [5]. In response, the UK Ladybird Survey provides a web-based recording system through which the general public can submit sightings and photographs. These color images enable the physical details of the ladybird forewings (the 'elytra') to be examined visually through the image processing steps explained in subsequent sections.
There are up to 15 described color forms of H. axyridis; however, only three forms are studied here. Fig. 1 shows these three color forms and illustrates the variation in color, and hence the difficulty of performing automated identification from color images alone. Table 1 gives the physical description that serves as an identification guide for harlequin ladybirds in the UK. In terms of size and shape, the harlequin is generally large, with a length between 6 mm and 8 mm. The pronotum pattern is white or cream in color and can contain up to 5 spots, or fused lateral spots forming 2 curved lines, an M-shaped mark or a solid trapezoid. The wing cases have a wide keel at the back, and the legs are almost always brown.
Fig. 1. Top view of the three forms of H. axyridis: (a) f. succinea, (b) f. conspicua, (c) f. spectabilis (Source: CEH Wallingford, UK)
Table 1. Identification guide for H. axyridis f. succinea, f. conspicua and f. spectabilis
Character | Form succinea | Form conspicua | Form spectabilis
Color | Orange with 0 to 21 black spots | Black with 2 red/orange spots, and inner black spots | Black with 4 red/orange spots
Shape | Round and domed | Round and domed | Round and domed
Size | 6-8 mm | 6-8 mm | 6-8 mm
There are two major development areas in this investigation: image processing and intelligent systems. The image processing work involves two major processes, greyscale operations and color image processing. Section 3 covers the concepts and experimental work carried out in image processing; these operations are implemented in MATLAB 2012 and GIMP2 [6]. The intelligent system, consisting of a PNN implemented in WEKA [7], is then elaborated. Section 4 discusses the results of the PNN experiments, and Section 5 concludes the paper with recommendations for future work.
3 Methodology
3.1 System block diagram
A prototype automated species identification system has been developed to distinguish UK ladybird species using image processing and probabilistic neural networks (PNN). The block diagram is shown in Fig. 2.
Fig. 2. Block diagram of the prototype of an automated ladybird identification system
3.2 Image processing
Color variation motivated the investigation of color as the leading feature for identifying ladybird species, since color is a natural and readily observed characteristic of ladybirds. Ladybirds commonly show large variation in body color, and some species are quite distinctive: for instance, the species commonly called the 'orange ladybird' has orange-colored elytra with sixteen white spots, while 'striped ladybirds' have brownish elytra with cream-white stripes. Sightings can be confirmed against published guides such as the UK Ladybird Survey identification guide, the Leicester & Rutland colour key and the Southampton Natural History Society guide [1, 8-9].
In this work, the CIELAB color space is used to represent pixel colors obtained from the elytra and body markings. CIELAB is an approximately uniform color scale that represents visual differences on a color plane and separates chroma from lightness [10]. The maximum value for a* and b* is +120 and the minimum is -120, while the L* axis ranges from 0 to 100. Unlike RGB, CIELAB is also device independent.
CIELAB was selected to reduce illumination problems, since the majority of ladybird images obtained for the study were photographed in the insects' natural habitat and illumination is a variable that is difficult to control. Separating lightness from chroma in CIELAB simplifies the subsequent processing [11-12]. CIELAB also keeps one specific color distinct from another visually similar color, and the difference between chroma values can be calculated [13]. Hence, errors between visual perception and actual values due to poor illumination are significantly reduced.
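To make this step concrete, the following MATLAB sketch (Image Processing Toolbox) shows one way of converting an RGB photograph to CIELAB so that lightness and chroma can be handled separately. The file name and variable names are illustrative only and are not taken from the original implementation.

```matlab
% Illustrative sketch: convert an RGB ladybird image to the CIELAB color space
% so that lightness (L*) and chroma (a*, b*) can be treated separately.
% 'ladybird_sample.jpg' is a hypothetical file name.
rgbImage = imread('ladybird_sample.jpg');        % RGB image, uint8

cform    = makecform('srgb2lab');                % sRGB -> CIELAB transform
labImage = applycform(im2double(rgbImage), cform);

L = labImage(:, :, 1);                           % lightness, approximately 0..100
a = labImage(:, :, 2);                           % green-red chroma
b = labImage(:, :, 3);                           % blue-yellow chroma
```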
Another issue with the images is background clutter. This is shown in Fig. 3, where processing an image of a scarce 7-spot ladybird (Coccinella magnifica) without background clutter revealed the elytra markings better than the same image with the background present. Based on this testing, the use of CIELAB has been limited to capturing pixel values on the elytra and spots. Segmentation and elimination of background clutter are performed in the RGB color space, and body marking measurements are performed in greyscale with the image then converted to binary format. Only the colors of the body markings are captured as CIELAB values.
For each image, CIELAB values are obtained by reading the average L*, a* and b* values from a user-interactive pixel capture box. The size of the capture box is not fixed: it varies between 25x25 and 100x100 pixels depending on the image resolution, and is therefore user and image dependent. Higher-resolution images require a smaller capture box, and vice versa; if the capture box size were fixed and the image were of low resolution, the border pixels would become indistinct and edges difficult to locate. Once the average values were obtained, each value was normalized to [-1, 1].
Fig. 3. (a) Image of scarce 7-spot (with background) after segmentation showing background clutter, and (b) the same image with a clean background
The following formulae were applied for normalization:

L*norm = (L* - 50) / 50    (1)

a*norm = a* / 120    (2)

b*norm = b* / 120    (3)
Fig. 4 shows the spot colors and elytra/background colors (as CIELAB values) plotted on the normalized color planes. Refer to Table 2 for species names and acronyms.
3.3 Geometrical characters extraction
In addition to color, geometrical characters have been utilized for identification. To obtain geometrical measurements, greyscale and binary image processing were used rather than color image processing, since binary processing in a single channel involves minimal complications. Images are first converted to greyscale via the MATLAB function 'rgb2gray' and resized to 640x480 pixels. The image is also converted to CIELAB for extracting pixel values, and geometrical measurements of the spots and elytra are then performed.
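The sketch below illustrates this greyscale/binary pipeline. The Otsu threshold and the particular region properties measured are assumptions, since the paper does not list the exact geometrical characters used.

```matlab
% Illustrative sketch: greyscale and binary processing for geometrical
% measurements. The thresholding method and the measured properties are
% assumptions rather than the exact procedure used in the paper.
rgbImage  = imread('ladybird_sample.jpg');           % hypothetical file name
grayImage = imresize(rgb2gray(rgbImage), [480 640]); % resize as described above

level = graythresh(grayImage);                       % Otsu threshold (assumed)
bw    = im2bw(grayImage, level);                     % 1 = light pixels, 0 = dark pixels

% Dark elytral spots appear as 0 in bw, so measure the complement.
spots     = regionprops(~bw, 'Area', 'Centroid', 'Eccentricity');
spotAreas = [spots.Area];                            % example geometrical features
```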
Fig. 4. CIELAB color distributions among local ladybird species and H. axyridis
3.4 Data organization
Three groups of ladybird data are formed: white, red and black, named after the typical spot color of the species in each group. Table 2 shows the grouping and the acronyms used. Images of both UK ladybirds and harlequins are used in the experiments.
Table 2. Ladybird groups, species and acronym (in brackets)
Group | Species (acronym)
White | Calvia quattuordecimguttata Linnaeus (C14); Halyzia sedecimguttata Linnaeus (H16)
Red | Exochomus quadripustulatus Linnaeus (E4); Harmonia axyridis f. spectabilis Pallas (H1); Harmonia axyridis f. conspicua Pallas (H2)
Black | Adalia bipunctata Linnaeus (A2); Coccinella quinquepunctata Linnaeus (C5); Coccinella septempunctata Linnaeus (C7); Harmonia axyridis f. succinea Pallas (H3)
For the identification of biological species, results are typically presented as a contingency table, better known as a confusion matrix. Based on the formulation by Bradley [14], the extended metrics used are:

TP rate (recall) = TP / (TP + FN)    (4)

FP rate = FP / (FP + TN)    (5)

Precision = TP / (TP + FP)    (6)

where TP, FP, TN and FN are the numbers of true positives, false positives, true negatives and false negatives for a given class.
A total of 40 samples per species were used, and cross-validation was applied to all samples. Cross-validation is a technique commonly used to compare models, to estimate the accuracy of a classifier and to avoid overfitting [16-18]; in this way, the generalization of a trained classifier is assessed against an independent dataset. A variant called K-fold cross-validation partitions the dataset into K equal-sized folds and repeats the holdout method K times [19]. For each run, one fold is used as the test set and the remaining (K-1) folds are used for training, so that each of the K folds is used exactly once as validation data. After K runs, the average cross-validation error across all runs is computed; this error estimates how the classifier would perform if the collected data are an accurate representation of the real world.
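The sketch below illustrates the K-fold procedure with K = 10 on synthetic stand-in data. The cvpartition object (Statistics Toolbox) and the linear discriminant used as a placeholder classifier are illustrative choices only; they are not the classifier or features used in the experiments.

```matlab
% Illustrative sketch of 10-fold cross validation on synthetic data. In the
% actual system each row of X would hold the normalized CIELAB and geometrical
% features of one ladybird image, and the classifier would be the PNN.
rng(1);
X = randn(120, 5);                       % 120 samples, 5 synthetic features
y = kron((1:3)', ones(40, 1));           % 3 classes, 40 samples per class

K   = 10;
cv  = cvpartition(y, 'KFold', K);        % stratified K-fold partition
err = zeros(K, 1);

for k = 1:K
    trainIdx = training(cv, k);          % (K-1) folds for training
    testIdx  = test(cv, k);              % held-out fold for validation
    % Linear discriminant analysis (CLASSIFY) is a stand-in classifier here.
    yhat   = classify(X(testIdx, :), X(trainIdx, :), y(trainIdx));
    err(k) = mean(yhat ~= y(testIdx));
end

cvError = mean(err);                     % estimate of the generalization error
```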
3.5 Probabilistic neural networks (PNN)
Probabilistic neural networks (PNN) are used as the classifier. A classifier maps unlabeled instances to class labels using internal data structures [19]. The PNN was chosen over other techniques because it implements kernel discriminant analysis, with the operations organized into a multilayer feed-forward neural network consisting of an input layer, a radial basis layer and a competitive layer [20-21]. As a supervised classifier, the PNN is trained from exemplars and their targets. The network structure is shown in Fig. 5.
Fig. 5. PNN network structure (Source: MathWorks)
The input layer consists of nodes that receive the data. The radial basis layer contains a probability density function (pdf) that uses a given set of data points as centers; the PNN uses normalized Gaussian radial basis functions [22]. The output layer selects the highest value and determines the class label. In general, a PNN for M classes is defined as follows [23]:

y_j(x) = (1 / n_j) Σ_{i=1}^{n_j} (2π)^(-d/2) σ^(-d) exp( -||x - x_ji||^2 / (2σ^2) )    (7)

where j = 1, …, M, n_j is the number of data points in class j, x_ji is the i-th training vector of class j, d is the dimension of the input vector and σ is the smoothing (spread) parameter of the radial basis functions.
A decision boundary is found by solving the above numerically for each class. For instance, for a two-class problem this is done by equating y_1(x) to y_2(x) and finding the solution using a grid search [24].
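A minimal sketch of Eq. (7) and the competitive output layer is shown below, using synthetic two-class data. The smoothing parameter and the data are hypothetical, and the WEKA implementation used in the experiments will differ in detail.

```matlab
% Minimal PNN sketch implementing Eq. (7): each class score is an average of
% Gaussian kernels centred on that class's training exemplars, and the
% competitive layer picks the class with the highest score.
% Data and smoothing parameter are synthetic/hypothetical.
sigma  = 1.0;                               % spread of the radial basis layer
Xtrain = [randn(40, 3); randn(40, 3) + 2];  % two synthetic classes, 3 features
ytrain = [ones(40, 1); 2 * ones(40, 1)];
xtest  = [1 1 1];                           % one unknown feature vector

classes = unique(ytrain);
scores  = zeros(numel(classes), 1);
d       = size(Xtrain, 2);

for j = 1:numel(classes)
    Xj    = Xtrain(ytrain == classes(j), :);          % exemplars of class j
    nj    = size(Xj, 1);
    dist2 = sum((Xj - repmat(xtest, nj, 1)).^2, 2);   % squared distances to xtest
    % Radial basis + summation layers: Parzen estimate with Gaussian kernels
    scores(j) = sum(exp(-dist2 / (2 * sigma^2))) / ...
                (nj * (2 * pi)^(d / 2) * sigma^d);
end

[~, idx]       = max(scores);                         % competitive (output) layer
predictedClass = classes(idx);
```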
4 Results and discussion
In this work, 10-fold cross validation was applied and the following results were
obtained:
Table 3. Confusion matrix for C14H16 (white group)
Predicted \ Actual | C14 | H16
C14 | 40 | 0
H16 | 0 | 40
Table 4. Detailed metrics for C14H16 (white group)
Class | TP rate | FP rate | Precision | Recall | AUC
C14 | 1 | 0 | 1 | 1 | 1
H16 | 1 | 0 | 1 | 1 | 1
Weighted average | 1 | 0 | 1 | 1 | 1
Note: Results obtained at MinStdDev = 0.1, no. of clusters = 2; AUC = area under curve
Table 5. Confusion matrix for E4H1H2 (red group)
Predicted \ Actual | E4 | H1 | H2
E4 | 40 | 1 | 0
H1 | 0 | 28 | 17
H2 | 0 | 11 | 23
Table 6. Detailed metrics for E4H1H2 (red group)
Class | TP rate | FP rate | Precision | Recall | AUC
E4 | 1 | 0.013 | 0.976 | 1 | 0.993
H1 | 0.7 | 0.213 | 0.622 | 0.7 | 0.825
H2 | 0.575 | 0.138 | 0.676 | 0.575 | 0.847
Weighted average | 0.758 | 0.121 | 0.758 | 0.758 | 0.888
Note: Results obtained at MinStdDev = 0.1, no. of clusters = 3; AUC = area under curve
Table 7. Confusion matrix for A2C5C7H3 (black group)
Predicted \ Actual | A2 | C5 | C7 | H3
A2 | 39 | 0 | 0 | 0
C5 | 0 | 34 | 0 | 10
C7 | 0 | 0 | 39 | 0
H3 | 1 | 6 | 1 | 30
Table 8. Detailed metrics for A2C5C7H3 (black group)
Class | TP rate | FP rate | Precision | Recall | AUC
A2 | 0.975 | 0 | 1 | 0.975 | 1
C5 | 0.85 | 0.083 | 0.773 | 0.85 | 0.954
C7 | 0.975 | 0 | 1 | 0.975 | 1
H3 | 0.75 | 0.067 | 0.789 | 0.75 | 0.935
Weighted average | 0.888 | 0.038 | 0.891 | 0.888 | 0.972
Note: Results obtained at MinStdDev = 0.1, no. of clusters = 4; AUC = area under curve
The results in Table 3 show that the PNN identifies the two species in the white-spotted group with 100% accuracy; even though the two species were grouped because they share white spots, the PNN discriminates them completely. The test results on the red-spotted group reveal 75.83% accuracy. For H1, one instance was misidentified as E4 and eleven as H2. For H2, the TP rate of 0.575 means that seventeen instances of H2 were confused with H1.
Table 7 shows 88.75% accuracy for the identification of ladybirds in the black-spotted group. The TP rate of 0.975 for A2 corresponds to a single misidentification as H3, indicating that the similarities between A2 and H3 give rise to only a small number of misidentifications; a similar observation holds for C7 and H3. Interestingly, for H3, 10 out of 40 instances were misidentified as C5, giving a TP rate of only 0.75.
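As a simple consistency check, the group accuracies quoted above can be reproduced directly from the diagonals of the confusion matrices in Tables 3, 5 and 7:

```matlab
% Overall accuracy per group: correctly classified samples (the diagonal)
% divided by the total number of samples in the group.
Cwhite = [40  0;  0 40];
Cred   = [40  1  0;  0 28 17;  0 11 23];
Cblack = [39  0  0  0;  0 34  0 10;  0  0 39  0;  1  6  1 30];

accWhite = sum(diag(Cwhite)) / sum(Cwhite(:));   %  80/80  = 1.0000 (100%)
accRed   = sum(diag(Cred))   / sum(Cred(:));     %  91/120 = 0.7583 (75.83%)
accBlack = sum(diag(Cblack)) / sum(Cblack(:));   % 142/160 = 0.8875 (88.75%)
```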
The use of CIELAB values to represent spot colors and the application of the PNN to ladybird identification are among the contributions to knowledge made by this work. The observations reported here will be investigated further in the near future to look for correlations, for instance to understand which character contributes most to the identification process. Finding this 'contribution factor' will involve deeper investigation of the network model, its structure and the PNN algorithm itself. It will also involve shuffling taxa between groups to investigate intra-species and inter-species variation, as experimented by Ayob and Chesmore using a multilayer perceptron (MLP) [25]. At present, it can be said that the PNN works well for identifying up to 4 taxa with a minimum accuracy of 75.83%.
5 Conclusion
In short, the PNN can be regarded as highly useful for the automated identification of ladybirds, including alien species. This is supported by experimental results on three groups of ladybirds comprising UK species and Harmonia axyridis, pre-grouped by their spot colors. Two-species identification of the white-spotted taxa achieved 100% accuracy, while identification of the red-spotted and black-spotted taxa achieved 75.83% and 88.75% accuracy respectively. Future investigations will examine the role of individual characters in species identification.
Acknowledgment
The authors would like to extend thanks to Universiti Kuala Lumpur, University of
York and Majlis Amanah Rakyat (MARA) for sponsoring the research work. Special
thanks to Centre for Ecology & Hydrology (CEH) Wallingford, UK for supplying
stock ladybird images.
References
1. UK Ladybird Survey, http://www.coleoptera.org.uk
2. Ware, R.L., Majerus, M.E.: Intraguild Predation of Immature Stages of British and Japanese
Coccinellids by the Invasive Ladybird Harmonia Axyridis. BioControl, 53, 169-188 (2008).
3. Majerus, M.E.N., Strawson, V., Roy, H.: The Potential Impacts of the Arrival of the
Harlequin Ladybird, Harmonia Axyridis (Pallas) (Coleoptera:Coccinellidae), in Britain.
Ecological Entomology, 31, 207-215 (2006).
4. UK Harlequin Survey, http://www.harlequin-survey.org/recognition_and_distinction.htm
5. Hopkins, G.W., Freckleton, R.P.: Decline In the Numbers of Amateur and Professional
Taxonomists: Implications for Conservation. Animal Conservation, 5, 245-249 (2002).
6. GIMP2, http://www.gimp.org/
7. WEKA, http://www.cs.waikato.ac.nz/ml/weka/
8. Community Heritage Initiative, P. Mabbott, ed. A colour key for identifying ladybirds in
Leicester & Rutland, http://www.leics.gov.uk/celebrating_wildlife
9. Southampton Natural History Society, Ladybirds of Southampton,
sotonnhs.org/docs/LadybirdAll.pdf
10. CIELAB colour models Technical Guides,
http://www.dba.med.sc.edu/price/irf/Adobe_tg/models/cielab.html
11. Torres, L., Reutter, J.Y., Lorente, L.: The Importance of the Color Information in Face
Recognition. In: International Conference on Image Procesing (ICIP 99), Oct. 1999, 3, 627-
631 (1999).
12. Yip, A., Sinha, P.: Role of Color in Face Recognition. In: MIT tech report (ai.mit.com) AI
Memo 2001-035, Massachusetts Institute of Technology, Cambridge, USA (2001).
13. Vízhányó, T., Felföldí, J.: Enhancing Colour Differences in Images of Diseased Mushrooms. Computers and Electronics in Agriculture, 26, 187-198 (2000).
14. Bradley, A.: The Use of the Area Under the ROC Curve in the Evaluation of Machine
Learning Algorithms. Pattern Recognition, 30, 1145-1159 (1997).
15. Fawcett, T.: An Introduction to ROC Analysis. Pattern Recognition Letters, 27(8), pp. 861-874 (2006).
16. Omid, M.: Design of an Expert System for Sorting Pistachio Nuts Through Decision Tree
and Fuzzy Logic Classifier. Expert Systems with Applications, 38, pp. 4339-4347 (2011).
17. Wolpert, D. H.: Stacked Generalization. Neural Networks, 5, pp. 241-259 (1992).
18. Prechelt, L.: Automatic Early Stopping Using Cross Validation: Quantifying the Criteria,
Neural Networks, 11(4), pp. 761-767 (1998).
19. Kohavi, R.: A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model
Selection. In: Proceedings of the International Joint Conference on Artificial Intelligence
(IJCAI), San Francisco, CA, Morgan Kaufmann, pp. 1137-1143 (1995).
20. Wu, S. G., Bao, F.S., Xu, E.Y., Wang, Y.X., Chang, Y.F., Xiang, Q.L.: A Leaf Recognition
Algorithm for Plant Classification Using Probabilistic Neural Network. In: 2007 IEEE
International Symposium on Signal Processing and Information Technology. 15-18
December 2007, Giza, pp. 11-16 (2007).
21. MathWorks documentation, http://www.mathworks.com/help/nnet/ug/probabilistic-neural-
networks.html
22. Hagan, M.T., Demuth, H.B., Beale, M.: Neural Network Design. Beijing: PWS Publishing
Company, ISBN 7-111-10841-8 (2002).
23. Foody, G.M.: Thematic Mapping from Remotely Sensed Data with Neural Networks: MLP, RBF and PNN based Approaches. Journal of Geographical Systems, 3, pp. 217-232 (2001).
24. Hong, X.: Probabilistic Neural Network (PNN), www.personal.reading.ac.uk/~sis01xh/teaching/CY2D2/Pattern3.pdf
25. Ayob, M. Z., Chesmore, E.D.: Hybrid Feature Extractor for Harlequin Ladybird
Identification Using Color Images. In: 2012 IEEE Symposium on Computational
Intelligence in Bioinformatics and Computational Biology (CIBCB), San Diego, pp. 214-
221 (2012).