Automatic identification of honeybee
Callistemon rigidus pollen by optical
microscopic color images indexing and
retrieval
Ngatcheu Tientcheu André Mitherand1,2*, Bitjoka Laurent1,3,4*, Boukar
Ousman1,3,4, Tonye Emmanuel5
1 Modelisation, Image Processing and Applications Research Group (MOTRIMA), ENSAI of
Ngaoundéré
2 The Faculty of Sciences, The University of Ngaoundéré, PO Box 455 Ngaoundéré,
Cameroon
3 National School of Agro-Industrial Sciences (ENSAI of Ngaoundéré), The University of
Ngaoundéré, PO Box 455 Ngaoundéré, Cameroon
4 Biophysics and Food Biochemistry Laboratory, ENSAI of Ngaoundéré
5 National School of Engineering, Electronics and signal processing laboratory (LETS),
University of Yaoundé 1, PO Box 8390 Yaoundé
ABSTRACT
Aims: The purpose of this work is to identify pollen grains in microscopic images of honeybee samples acquired with a light optical microscope coupled to a camera.
Study Design: Image processing, honeybee pollen classification, Callistemon rigidus.
Place and Duration of Study: Modelisation, Image Processing and Applications Research Group laboratory (MOTRIMA), Department of Physics; the Laboratory of Biology of the Faculty of Sciences; and the Laboratory of Biophysics and Biochemistry of Food Science and Nutrition of the National School of Agro-Industrial Sciences (University of Ngaoundéré, Cameroon), between August 2009 and March 2010.
Methodology: We aim to build a robust, automatic methodology to distinguish pollen loads based on a computer vision system. To reach this goal, the signatures of the objects to be sought, namely the Hu invariant moments computed on the co-occurrence matrix of the isolated object, are calculated and stored in a database. The objects are isolated by segmentation using the fuzzy C-means method. Given a query image, it is segmented and its signature is computed. The object is then matched in the images by comparison of the signatures.
Results: After several tests performed on Callistemon rigidus pollen imaged in seven different positions, it appears that the moments φ1 and φ4 best describe the Callistemon rigidus pollen regardless of its orientation on the microscope slide. These moments φ1 and φ4 therefore constitute the signature of this pollen. The retrieval rate on a set of 26 images is 88%, with 0% wrong detections.
Conclusion: The invariant moments (Hu moments) calculated on the co-occurrence matrix
of the image of the isolated Callistemon rigidus pollen constitute the signature of this pollen.
Keywords: color image processing - object recognition - fuzzy C-means segmentation - honeybee pollen - pollen counting
1. INTRODUCTION
Honey is a natural sweet substance produced by honeybees from the nectar of flowers or from secretions of living parts of plants. This product is of particular interest to
humans because of its therapeutic, cosmetic, nutritional and dietary properties (Bogdanov
and Blumer, 2001; Mbawala et al., 2002). In fact, honey has been claimed (Abdulla and
Abdulaziz, 1998) to have therapeutic properties in the treatment of digestive, respiratory,
cardiac and rheumatic disorders. Several studies have reported honey’s immunological,
antibacterial, anti-inflammatory, antipyretic properties besides its importance in terms of
energy intake. Furthermore, honey has been proved to possess wound healing and
analgesic actions (Abdulla and Abdulaziz, 1998; Pereira et al., 1998). Recently, the demand for natural honey has increased; consequently, methods to assure the authenticity of honey can be economically important. Several factors contribute to the quality properties of honey,
such as high osmotic pressure, lower water activity, low pH, and low protein content, among
others (Anklam, 1998).
The authenticity of honey has two aspects, one related to honey production, and the other to
description, such as geographic and botanical origin (Corbella and Cozzolino, 2008). A
number of techniques have been used to determine honey authenticity and botanical origin,
including the determination of aromatic compounds and flavonoids, amino acids and sugars
by high performance liquid chromatography (HPLC), detection of aroma compounds by gas
chromatography-mass spectrometry (GC-MS), determination of anions and cations by ion
chromatography (IC), and mineral content (Anklam, 1998; Mateo and Bosch-Reig, 1998;
Anapuma et al., 2003; Serrano et al., 2004). Spectroscopic techniques, such as mid infrared
(MIR), near infrared (NIR) and Raman spectroscopy were also used to determine chemical
characteristics (e.g., sugars) and contamination in honey samples from different origins
(Cozzolino and Corbella, 2005; Bertelli et al., 2007). Characterization of honey by means of
both chemical and sensory properties was also investigated (Latorre et al., 1999; 2000;
Hermosin et al., 2003; Cordella et al., 2003; Terrab et al., 2004). Quality control methods, in
conjunction with multivariate statistical analysis, have been found to be able to classify
honey from different geographic regions, detect adulteration and describe chemical
characteristics (Cordella et al., 2002; 2003; Marini et al., 2004; Devillers et al., 2004). The
method commonly used to analyze pollen is visual. This visual method, based on optical
microscopy, only takes into account shape and size characteristics and does not take into
account other physical characteristics such as the color and texture of the pollen grain.
Indeed, pollen grains have a complex three-dimensional structure and can appear under the
microscope in any orientation, and experts are unable to characterize micro-texture
information and use it as a criterion for discriminating pollen loads from different plant
species. Furthermore, it is well known that human visual analysis is highly subjective,
tedious and time-consuming.
An alternative method to analyze a pollen grain is to use computer vision systems (CVS).
CVS consists of a standard illuminant, a camera for image acquisition and software to
process the image. The use of CVS in food sciences has increased in recent years (Bitjoka
et al., 2010; Blasco et al. 2003, 2009a, 2009b, 2009c; Kang et al., 2008; Mendoza and
Aguilera, 2004) and was investigated to analyze pollen (Carrion et al., 2004; Carlos Travieso
et al., 2011a, 2011b; Juan Briceño et al., 2011; Li and Flenley, 1999; Allen et al., 2006;
Rodriguez-Damián et al., 2006; Ronneberger et al., 2008, Sander et al., 2009).
2. MATERIAL AND METHODS
Honey samples were obtained directly from the beekeepers, and the Callistemon rigidus
pollen from the plant in blossom. The honey samples were collected during the April 2004
season. Samples were harvested from the locality of Dang in the Adamaoua region, located
in the Vina Division about 15 km from Ngaoundéré town in Cameroon, Africa. This village has
a Sudano-Guinean climate, an altitude of 1106 meters, and lies at 7°02'N latitude and
13°04'E longitude. Samples were prepared using the modified method without acetolysis
described by Louveaux (Louveaux et al., 1978).
2.1 Sample, sampling and pollen analysis
Two honey samples were collected from small bottles provided directly by the beekeepers.
Then 10 g of honey from each one were weighed into a beaker and diluted with 10 ml of
distilled water. The water may be heated to 45 °C (not more) to facilitate the dilution of the
honey. The mixture was stirred using a magnetic stirrer. After complete dissolution, the
sample was centrifuged for 10 min at 2500 rpm (revolutions per minute). The pellet
(deposited at the bottom) was then observed and its importance noted qualitatively (very
low, low, medium, large, very large). The supernatant was discarded because the honey
sugars crystallize on the slide and prevent its proper reading. The centrifuge tube was
refilled with 10 ml of distilled water and centrifuged again for 5 min at 2500 rpm; the pellet
was collected and spread in the middle of a clean glass slide. The slide was dried in an oven
at 35 °C. After drying, the preparation was mounted with a drop of gelatinized glycerin. Two
microscopic slide preparations were made for each of the two honey samples.
To obtain a natural image of the Callistemon rigidus pollen, the slide was prepared as
follows: a few stamens of the flower were cut and spread in the middle of the slide; a few
drops of ether oil were then poured on, and the slide was left to dry at ambient temperature.
Finally, after drying, the preparation was mounted between slide and cover slip with a drop of
gelatinized glycerin. It can then also be observed under the light microscope.
2.2 Analysis Method
After preparation, slides were observed and imaged using a digital transmitted-light
optical microscope (monocular tube, two wide-field oculars WF5x and WF16x, magnification
20x-1280x, 220-5.5 VS, 200 MA, Germany) equipped with a digital photographic sensor
(resolution 352x228 pixels) and connected to a computer. The images were obtained with
the microscope set to the 60x objective and 10x ocular.
The image processing method is divided into three stages:
- extraction of the primitive, or mask, of the object sought;
- precomputation of the signature of the object sought;
- retrieval of the object in a query image.
The technique is globally divided into two steps. The first one is performed offline: the
algorithm computes the feature vector (signature) of the pollen to be sought and stores it in
the database. These features are the invariant moments calculated on the co-occurrence
matrix of the image. The second step, the matching stage, is performed online: the same
features are computed for the query image, and object retrieval is then done by comparing
the signature of the object stored in the database with that of each image region covered by
the object mask.
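To make this two-step organisation concrete, the sketch below separates the offline signature computation from the online comparison. It is a minimal illustration in Python, not the authors' implementation: a placeholder signature (mean and standard deviation of the region) stands in for the Hu-moment signature developed in Section 2.2.2, and the image data are synthetic.

import numpy as np

def placeholder_signature(region):
    # Stand-in for the real signature (Hu moments of the co-occurrence matrix,
    # Section 2.2.2): here simply the mean and standard deviation of the region.
    return np.array([region.mean(), region.std()])

# Offline step: compute and store the signature of the sought pollen.
rng = np.random.default_rng(0)
reference_object = rng.integers(0, 256, (40, 40)).astype(float)  # isolated pollen (synthetic)
database = {"Callistemon rigidus": placeholder_signature(reference_object)}

# Online step: compute the same features on each masked region of the query image.
query_regions = [rng.integers(0, 256, (40, 40)).astype(float) for _ in range(3)]
for k, region in enumerate(query_regions):
    err = np.abs(placeholder_signature(region) - database["Callistemon rigidus"]).sum()
    print("region", k, "error", round(float(err), 3))  # small error -> candidate match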
2.2.1 Object primitive extraction
Pollens are microscopic objects, so their positions cannot be accurately controlled on the
microscope slides. We therefore look for a pollen grain that appears in an optimal position in
order to extract the primitive, which is a binary mask of the shape of the sought object in the
pictures. The extraction is done as shown in the diagram of Figure 1. Color image
segmentation refers to the partitioning of a multi-channel image into meaningful objects. The
segmentation is based on measurements taken from the image, which might be grey level,
colour, texture, depth or motion. In this paper, segmentation is an initial step intended to
separate the Callistemon rigidus pollen grain from the background. After reading the RGB
color image, we transform it into a grayscale image by splitting the three Red, Green and
Blue components. We then segment the green component of the image into two classes
using the fuzzy C-means (FCM) algorithm.
Fig. 1. Pollen extraction primitive (mask) Diagram
2.2.1.1 The fuzzy C-means classification
Fuzzy C-means is an iterative classification method used to classify individuals into a
preset number of classes C (Chuai-Aree et al., 2000; Guillaume Serge, 2001). At each
iteration it recomputes the class centers and generates a matrix U of the memberships of the
individuals in these classes.
Let Vi be the centroid, or prototype, of class i, U the matrix of coefficients μik and Xc the
coordinates of the centers. Given the number of classes C, the number of individuals n and
the fuzziness exponent m (m > 1), the objective of the method is to find U and Xc which
minimize the cost function J given by the following relation:
$$J(U,V,m) = \sum_{i=1}^{c} \sum_{k=1}^{n} \mu_{ik}^{m} D_{ki}^{2}$$
with the constraint $\sum_{i=1}^{c} \mu_{ik} = 1$ for all k = 1, ..., n. $D_{ki}$ is a metric chosen in the sense of a norm, generally the Euclidean norm; thus $D_{ki} = \lVert X_k - V_i \rVert$ is the distance between the vector $X_k$ and the prototype $V_i$.
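For illustration, a compact NumPy sketch of this alternating minimization is given below, using the two-class configuration (c = 2, m = 2, ε = 0.001) reported in Section 3.2. It is a sketch written for this paper, not the authors' code; the update rules for V and U are the standard ones that minimize J.

import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, eps=1e-3, max_iter=100, seed=0):
    # Fuzzy C-means on a 1-D feature vector x (e.g. the grey levels of one color
    # component, flattened). Returns the class centers V and the memberships U.
    rng = np.random.default_rng(seed)
    n = x.size
    u = rng.random((c, n))
    u /= u.sum(axis=0)                          # memberships sum to 1 for each pixel
    for _ in range(max_iter):
        um = u ** m
        v = um @ x / um.sum(axis=1)             # class centers (weighted means)
        d = np.abs(x[None, :] - v[:, None]) + 1e-12   # Euclidean distance in 1-D
        u_new = d ** (-2.0 / (m - 1.0))
        u_new /= u_new.sum(axis=0)              # standard FCM membership update
        if np.abs(u_new - u).max() < eps:       # stopping criterion epsilon on U
            u = u_new
            break
        u = u_new
    return v, u

# Example: segment one grey-level component into two classes (synthetic data here).
gray = np.random.default_rng(1).integers(0, 256, (228, 352)).astype(float)
centers, memberships = fuzzy_c_means(gray.ravel(), c=2, m=2.0, eps=1e-3)
labels = memberships.argmax(axis=0).reshape(gray.shape)   # hard labeling of the fuzzy result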
2.2.1.2 Improvement
The algorithm yields an image with two grey levels. We then extract the smallest rectangle
encompassing the object and normalize it to 0 (background) and 1 (object). This is the binary
mask of the object; when it is multiplied with each grayscale component of the image, the
0-valued pixels absorb whatever they multiply, and we thus obtain a picture of the object on a
uniform black background.
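A minimal sketch of this step, assuming a two-class label image such as the one produced above (the function names and the synthetic example are illustrative only):

import numpy as np

def bounding_box_mask(labels, object_label=1):
    # Smallest rectangle enclosing the object, returned as a 0/1 mask plus the box.
    rows, cols = np.nonzero(labels == object_label)
    r0, r1, c0, c1 = rows.min(), rows.max() + 1, cols.min(), cols.max() + 1
    return (labels[r0:r1, c0:c1] == object_label).astype(np.uint8), (r0, r1, c0, c1)

def isolate_on_black(rgb, mask, box):
    # Multiply each color component by the binary mask: background pixels become 0.
    r0, r1, c0, c1 = box
    crop = rgb[r0:r1, c0:c1, :].astype(float)
    return crop * mask[:, :, None]

# Synthetic example: a labeled square "object" inside a random RGB image.
rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, (100, 120, 3)).astype(np.uint8)
labels = np.zeros((100, 120), dtype=int)
labels[30:70, 40:90] = 1
mask, box = bounding_box_mask(labels)
isolated = isolate_on_black(rgb, mask, box)   # color object on a uniform black background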
2.2.2 Computing the object signature
The signature of the object is a vector of descriptor parameters calculated on the final
image of the object isolated on a uniform background. Since these parameters are meant to
model a specific object, they must be invariant to translation, rotation, symmetry and changes
of resolution of the object (Muselet, 2005). The choice of descriptors is made after a number
of tests on different images of the isolated object on a uniform background, in different
positions. Each of these images, 6 at least, is chosen so that it represents one face of the
object, as if viewed through a transparent cube with the object inside it (Figure 8). Thus, the
signature of our pollen, Callistemon rigidus, is a combination of
central moments (Gonzalez and Woods, 2002) calculated on the co-occurrence matrix of the
image. These moments are insensitive to the rotations, translations and changes of scale
applied to the isolated pollen.
2.2.2.1 Co-occurrence matrix
The co-occurrence matrix (Muselet et al., 2003) measures the distribution of the color
components in the image while taking into account the spatial interactions between pixels.
For an image I coded in a color space (C1, C2, C3), we consider Ck and Ck', two of the three
components (C1, C2, C3); θ, the direction along which the relationship between pixels is
analyzed (0°, 45°, 90° and 135°); and v, the distance (in pixels) between a pixel and its
analyzed neighbours. The co-occurrence matrix measures the spatial color interaction
between the components Ck and Ck' of pixels of the image I located at a spatial distance v
from one another along the direction θ. The content of cell (i, j) of this matrix indicates the
number of times a pixel P of image I, whose level of color component Ck'(P) is equal to j,
has in its neighbourhood along the direction θ a pixel P', located at a distance v from P,
whose level of component Ck(P') is equal to i.
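The sketch below computes such a matrix directly from this definition for one pair of components, one direction θ and one distance v. It is written in plain NumPy for illustration (the authors' implementation is not published), and the input here is a synthetic grey-level component.

import numpy as np

def cooccurrence(ck, ck_prime, theta_deg=0, v=1, levels=256):
    # M[i, j] counts pixel pairs (P', P) at distance v along direction theta
    # such that Ck(P') == i and Ck'(P) == j.
    offsets = {0: (0, v), 45: (-v, v), 90: (-v, 0), 135: (-v, -v)}   # (row, col) steps
    dr, dc = offsets[theta_deg]
    h, w = ck.shape
    m = np.zeros((levels, levels), dtype=np.int64)
    for r in range(h):
        for c in range(w):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                m[ck[rr, cc], ck_prime[r, c]] += 1
    return m

# Example on one grey-level component of the isolated pollen (synthetic here).
component = np.random.default_rng(0).integers(0, 256, (64, 64))
glcm = cooccurrence(component, component, theta_deg=0, v=1)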
2.2.2.2 Choice of the signature
The texture signature is a characteristic vector produced by calculating statistical
parameters (descriptors) on the co-occurrence matrix of the pollen image. Pollens are
microscopic objects and their positions cannot be controlled on the microscope slides. A test
performed on a set of images led us to retain the moments φ1 and φ4 as the signature of the
Callistemon rigidus pollen. This test consisted of analyzing the evolution of the seven Hu
moments on a set of images of Callistemon rigidus pollen isolated on a uniform black
background.
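For reference, φ1 and φ4 can be computed from the normalized central moments of the co-occurrence matrix treated as a 2-D array, following the standard Hu formulas (Gonzalez and Woods, 2002). The sketch below is an illustration under that reading of the method, not the authors' code.

import numpy as np

def hu_phi1_phi4(img):
    # phi1 and phi4 Hu invariants of a 2-D array (here: a co-occurrence matrix).
    img = img.astype(float)
    m00 = img.sum()
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    xbar, ybar = (x * img).sum() / m00, (y * img).sum() / m00

    def eta(p, q):
        # normalized central moment mu_pq / m00^(1 + (p + q)/2)
        mu_pq = ((x - xbar) ** p * (y - ybar) ** q * img).sum()
        return mu_pq / m00 ** (1 + (p + q) / 2.0)

    phi1 = eta(2, 0) + eta(0, 2)
    phi4 = (eta(3, 0) + eta(1, 2)) ** 2 + (eta(2, 1) + eta(0, 3)) ** 2
    return np.array([phi1, phi4])

# Signature of a pollen image = (phi1, phi4) of its (normalized) co-occurrence matrix.
glcm = np.random.default_rng(0).random((256, 256))   # placeholder for the matrix of 2.2.2.1
signature = hu_phi1_phi4(glcm / glcm.sum())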
2.2.3 Object matching in an image
Matching the object in an image is done following the block diagram shown in
Figure 2. This step is performed online, and in this paper the object is matched by comparing
the signature of the object with those stored in the database.
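A minimal sketch of this comparison is given below. The paper does not detail the exact error measure behind the threshold Err <= 0.02 of Figure 2; a relative L1 difference between signature vectors is assumed here, and the numeric values are purely illustrative.

import numpy as np

def match(candidate_signatures, reference_signature, err_max=0.02):
    # Return the indices of candidate regions whose signature matches the reference.
    # Err is taken here as a relative L1 difference (an assumption, see above).
    hits = []
    for idx, sig in enumerate(candidate_signatures):
        err = np.abs(sig - reference_signature).sum() / np.abs(reference_signature).sum()
        if err <= err_max:                      # acceptance threshold of Figure 2
            hits.append(idx)
    return hits

# Example: a stored (phi1, phi4) reference and three candidate regions of a query image.
reference = np.array([0.42, 1.3e-4])            # illustrative values, not measured data
candidates = [np.array([0.425, 1.3e-4]),        # close to the reference -> accepted
              np.array([0.90, 9.0e-3]),         # different pollen -> rejected
              np.array([0.42, 1.3e-4])]
print(match(candidates, reference))             # [0, 2]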
3. RESULTS AND DISCUSSION
The proposed method was tested on a set of 26 images of Callistemon rigidus
pollens acquired using an optical microscope coupled to an image sensor (Camera):
objective 60 and ocular 10.
3.1 The primitive Extraction
The image in Figure 3 shows the pollen in the best position among the acquired pollen
images. It is used to retrieve the mask of the object to be found. Figure 4 shows the
decomposition of the image in Figure 3 into its RGB components.
Fig. 2. Algorithm of search for an object in the image
[Figure 2 flowchart: the algorithm scans the entire image; a detection is accepted when the signature error satisfies Err <= 0.02, and the search continues while the number of detected pollens Npol < np.]
Fig. 3. Pollen of Callistemon rigidus in its optimal position
Fig. 4. The RGB components of the pollen of Callistemon rigidus in its optimal position
3.2 Segmentation in two classes
Various approaches to color image segmentation are found in the literature and can be
roughly classified into several categories: clustering methods (Sowmya and Sheelarani,
2009; Balafar et al., 2010), edge-based methods (Carron and Lambert, 1994), region
growing methods (Trémeau and Borel, 1998) and variational methods (Kichenassamy et
al., 1995). Clustering methods applied to the color histogram use 3D information and are
time-consuming, so methods based on multi-thresholding of the color planes might be
preferred (Lezoray and Cardot, 2002). However, the fuzzy C-means clustering method gives
the best results for the segmentation of very textured images, and hence for the pollen grain
image. Since the mask of the object is the same for the three RGB components, one
component is sufficient to obtain it: we chose the red one. The application of the fuzzy
C-means algorithm to the red component of the pollen image gave the black-and-white image
of Figure 5 with the following parameters:
Number of classes: c = 2;
Fuzziness coefficient: m = 2;
Stopping parameter: ε = 0.001; the number of iterations is adjusted according to the images.
Fig. 5. Image of Callistemon rigidus pollen (a) and by FCM segmented image (b)
3.3 Improvement (post- processing)
Segmentation gives a black object on a white background. Since this image is still noisy, we
extract the smallest rectangle encompassing the object. The background of the image
should not contain information for the signature of the object, so we invert the pixels of the
extracted object in order to obtain an image with a white object on a black background. As
the white object still contains black noise, we then denoise it by dilation over the black pixels
inside the object. These steps are presented in Figure 6.
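A sketch of this clean-up with standard binary morphology from SciPy is given below. The text describes "denoising by dilation of black pixels in the object"; binary closing followed by hole filling is used here as a stand-in for that operation, which is an assumption about the exact operator.

import numpy as np
from scipy import ndimage

def clean_mask(binary_object):
    # Remove black noise inside a white (1-valued) object: closing, then hole filling
    # (an approximation of the 'dilation of black pixels' step, see above).
    closed = ndimage.binary_closing(binary_object, structure=np.ones((3, 3)))
    return ndimage.binary_fill_holes(closed)

# Synthetic noisy mask: a white disk with a few black pixels punched inside it.
yy, xx = np.mgrid[0:80, 0:80]
disk = (yy - 40) ** 2 + (xx - 40) ** 2 < 30 ** 2
noise = np.random.default_rng(0).random(disk.shape) < 0.02
cleaned = clean_mask(disk & ~noise)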
Fig. 6. Improvement of the mask of the pollen of Callistemon rigidus:
(a) Smallest rectangle encompassing the object;
(b) Inversion of the pixels of the object;
(c) denoising of the object of the image (b).
3.4 Signature of the object
Once an authentic mask of the object has been extracted, it is multiplied with each
component of the original image to obtain the object isolated on a uniform black background;
concatenating the three masked RGB components then yields the color image of the
isolated object on a uniform black background, as presented in Figure 7.
Fig. 7. Steps for obtaining the Callistemon rigidus pollen isolated on a black
background:
(a) Red component on a black background,
(b) Green component on a black background,
(c) Blue component on a black background,
(d) Concatenation of three components.
3.5 Choice of the signature
The evaluation of the seven moments on the pollen isolated in different positions gave the
curves in Figure 8. We note that the moments φ1, φ2, φ3 and φ4 vary very little with the
rotation angle of the pollen, while the other moments vary noticeably with these same angles.
This study, carried out on a set of seven Callistemon rigidus pollen images in different
positions on the slide preparation, showed that only the moments φ1 and φ4 varied little for
most of the Callistemon pollens. Thus, the signature of this pollen is a vector consisting of
these two moments, φ1 and φ4.
3.6 The object matching Results
The matching program applied to a set of 26 microscopic images of honey gave the
performance summarized in Table 1:
Table 1: Summary of the performance of the pollen detection program

Number of tested images: 26
Number of images containing the Callistemon rigidus pollen: 20
Number of images containing pollens of other families: 6
Number of good detections of Callistemon rigidus pollen: 17
Number of Callistemon rigidus pollens undetected (blurred images): 3
Number of wrong detections of Callistemon rigidus pollen: 0
Percentage of correct identification (17/20): 85%
Percentage of misidentification (0/6): 0%
Precision: 100%
Success rate (23/26): 88.46%
Pollen analysis of honey by traditional or conventional methods is quite subjective, and the
results it provides are equally subjective. Automatic methods can reach classification rates
of up to 96%, but they always entail some wrong classifications, that is, a misclassification
rate different from 0%. Our method is robust since it detects the sought pollens with 0%
misclassification despite the image quality, which corresponds to a precision of 100%.
Since the conventional method based on human visual analysis is very subjective, and in
comparison with Carrion et al. (2004), the success rate of 88.46% obtained in this
preliminary study is very encouraging given the very poor image quality.
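The rates in Table 1 follow directly from the counts; the short check below simply restates that arithmetic.

# Counts from Table 1
total_images = 26
images_with_callistemon = 20
images_other_families = 6
good_detections = 17
wrong_detections = 0
undetected = 3

correct_identification = good_detections / images_with_callistemon     # 17/20 = 0.85  -> 85%
misidentification = wrong_detections / images_other_families           # 0/6   = 0.0   -> 0%
precision = good_detections / (good_detections + wrong_detections)     # 17/17 = 1.0   -> 100%
success_rate = (total_images - undetected) / total_images              # 23/26 ≈ 0.8846 -> 88.46%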
4. CONCLUSION
This work shows that color texture analysis combined with invariant moments is a fairly
robust tool for object recognition. The invariant moments tested on the co-occurrence matrix
of the object gave the best result. The fuzzy C-means classification method followed by a
labeling step provides a fairly reliable segmentation.
Looking ahead, we believe the behavior of the method should be studied with various other
invariant-moment attributes. In addition, we will introduce a neuro-fuzzy classifier to improve
the obtained results.
Fig. 8. Evolution of the Hu moments on images of Callistemon rigidus pollen isolated on a
black background. The seven images represent the different faces of the Callistemon
rigidus pollen grain (polar and equatorial views)
ACKNOWLEDGMENTS
Authors thank Dr. Tchuenguem Fohouo Fernand-Nestor (Associate Professor), Head of the
Laboratory of Biology at the Faculty of science (The University of Ngaoundéré, Cameroon),
for his help on pollen analysis and for samples.
REFERENCES
1. Abdulla F., Abdulaziz M. A (1998). The prophylactic and curative effect of cedar honey
induced ulcers in rabbits. The Second International Arab Apicultural Conference-
Amman, 1: 26-31.
2. Allen G.P., Hodgson R. M., Marsland S. R., Arnold G., Flemmer R. C., Flenley J.,
Fountain D. W. (2006). Automatic recognition of light-microscope pollen images. Image
and Vision Computing New Zealand.
3. Anapuma D., Bhat K. K., Sapna V. K. (2003). Sensory and physico-chemical properties
of commercial samples of honey. Food Chem. 83:183-191.
4. Anklam E. (1998). A review of the analytical methods to determine the geographical and
botanical origin of honey. Food Chem. 63:549-562
5. Balafar M.A., Ramli A. R., Mashohor S. (2010). Edge-preserving clustering algorithms
and their application for MRI image segmentation. Proceedings of the International
MultiConference of Engineers and Computer Scientists, vol. I.
6. Bertelli D., Plessi M., Sabatini A.G., Lolli M., Grillenzoni F. (2007). Classification of
Italian honeys by mid-infrared diffuse reflectance spectroscopy (DRIFTS). Food Chem.
101, 1565-1570.
7. Bitjoka L., Boukar O., Tenin D., Mbofung C. M., Tonye E. (2010). Digital camera images
processing of hard-to-cook beans. Journal of Engineering and Technology Research
Vol. 2(9), pp. 177-188.
8. Blasco J., Cubero S., Gómez-Sanchís J., Moltó E. (2009a). Automatic sorting of
satsuma (Citrus unshiu) segments using computer vision and morphological features.
Computers and electronics in agriculture, 66(1): 1-8.
9. Blasco J., Cubero S., Gómez-Sanchís J., Mira P., Moltó E. (2009b). Development of a
machine for the automatic sorting of pomegranate (Punicagranatum) arils based on
computer vision, J. Food Eng., 90(1): 27-34.
10. Blasco J., Aleixos N., Gómez-Sanchís J., Moltó E. (2009c). Recognition and
classification of external skin damage in citrus fruits using multispectral data and
morphological features. Biosys. Eng., 103(2): 137-145.
11. Blasco J., Aleixos N., Moltó E. (2003). Machine vision system for automatic quality
grading of fruit. Biosyst. Eng., 85: 415-423.
12. Bogdanov S., Blumer P. (2001). Propriétés antibiotiques naturelles du miel. Centre
Suisse de Recherches Apicoles. Station Fédérale de Recherches Laitières, Liebefeld,
CH.3003 Berne. (Suisse), 8p.
13. Carlos M. Travieso, Juan C. Briceño, Jaime R. Ticay-Rivas, Jesús B. Alonso, (2011a).
Pollen classification based on contour features. INES 2011 15th International
Conference on Intelligent Engineering Systems, Poprad, Slovakia
14. Carlos M. Travieso, Jose L. Vásquez, Juan C. Briceño, (2011b). 2D digital processing
tools applied to Biodiversity Conservation. I WORKSHOPINTELIGENCIA
BIOINSPIRADA
15. Carrion P., Cernadas E., Galvez J.F., Damian M., de Sa-Otero P. (2004). Classification
of honeybee pollen using a multiscale texture filtering scheme. Machine Vision and
Applications, 15, 186-193.
16. Carron T., Lambert P., (1994). Color edge detector using jointly hue, saturation and
intensity, Proc. ICIP, 3, p. 977
17. Chuai-Aree S., Lursinsap C., Sophatsathit P., Siripant S. (2000). Fuzzy C Mean: A
statistical feature classification of text and image segmentation method, Proc. of
Intern. Conf. on Intelligent Technology 2000, December 13-15, Assumption University
Bangkok, Thailand, pp. 279-284.
18. Corbella E., Cozzolino D., (2008). Combining multivariate analysis and pollen count to
classify honey samples accordingly to different botanical origins. Chilean Journal of
Agricultural Research, 68, 1, 102-107.
19. Cordella C., Militao J.S., Clement M-C., Cabrol-Bass D. (2003). Honey characterization
and adulteration detection by pattern recognition on HPAEC-PAD profiles. 1. Honey
floral species characterization. J.Agric. Food Chem. 51:3234-3242.
20. Cordella Ch., Moussa I., Martel A-C., Sbirrazzuoli N., Lizzani-Cuvelier L. (2002). Recent
developments in food characterisation and adulteration detection: technique-oriented
perspective. J. Agric. Food Chem.50:1751-1764.
21. Cozzolino, D., Corbella E. (2005). The use of visible and near infrared spectroscopy to
classify the floral origin of honey samples produced in Uruguay. J. Near Infrared
Spectros. 13:63-68.
22. Devillers, J., Morlot M., Pham-Delegue M.H., Dore J.C. (2004). Classification of
monofloral honeys based on their quality control data. Food Chem. 86:305-312.
23. Gonzalez R. C., R. E. Woods; (2002). Digital Image Processing. Prentice Hall; 2nd
edition (January 15, 2002).
24. Guillaume Serge, (2001). Induction de règles floues interprétables, Thèse de Doctorat
Laboratoire d’analyse et d’architecture des systèmes du CNRS Toulouse.
25. Hermosin I., Chicon R.M., Cabezudo M.D. (2003). Free amino acid composition and
botanical origin of honey. Food Chem. 83:263-268.
26. Juan C. Briceño, Carlos M. Travieso, Jose L. Vásquez, (2011). A contour feature
oriented system for biological species classification. I WORKSHOPINTELIGENCIA
BIOINSPIRADA
27. Kang S. P., East A. R., Trujillo F. J. (2008). Colour vision system evaluation of bicolour
fruit: A case study with 'B74' mango. Postharvest Biology and Technology, 49: 77-85.
28. Kichenassamy S., Kumar A., Olver P.J., Tannenbaum A.,Yezzi A. (1995). Gradient flows
and geometric active contours, Proc. ICCV, pg. 810
29. Latorre M.J., Peña R., García S., Herrero C. (2000). Authentication of Galician (N.W.
Spain) honey by multivariate techniques based on metal content data. Analyst 125:307-
310.
30. Latorre, M.J., Peña R., Pita C., Botana A., García S., Herrero C. (1999). Chemometric
classification of honey samples according to their type. II. Metalcontent data. Food
Chem. 66:263-268.
31. Lezoray O., Cardot H. (2002). Histogram and watershed based segmentation of color
images, Proceedings of CGIV'2002, pp. 358-362
32. Li P., Flenley J. R. (1999). Pollen texture identification using neural networks. Grana 38,
59-64.
33. Louveaux J., Maurizio A., Vorwohl G. (1978). Methods of Melissopalinology. Bee World,
59 (4), 139-157.
34. Marini F., Magri A.L., Balestrieri F., Fabretti F., Marinia D. (2004). Supervised pattern
recognition applied to the discrimination of the floral origin of six types of Italian honey
samples. Anal.Chim.Acta 515:117-125.
35. Mateo R., Bosch-Reig F. (1998). Classification of Spanish unifloral honeys by
discriminant analysis of electrical conductivity, color, water content, sugars and pH. J.
Agric. Food Chem. 46:393-400.
36. Mbawala A., Djouldé D. R., Essia Ngang J.-J., Tchuenguem Fohouo F.-N., Etoa F.-X.
(2002). Activité antimicrobienne de quelques miels d’origines variées de la savane de
l’Adamaoua camerounais. Cam. J. Biol. Bioch. Sc., 12 (1), 8-17.
37. Mendoza F, Aguilera J. M. (2004). Application of image analysis for classification of
ripening bananas. J. Food Sci., 69: 47-477.
38. Muselet D.,( 2005). Reconnaissance automatique d’objets sous éclairage non contrôlé
par analyse d’images couleur. Thèse de Doctorat Laboratoire d’Automatique, Génie
Informatique & Signal UMR CNRS 8146 de Lille 1.
39. Muselet D., Motamed C., Macaire L., Postaire J. G. (2003). Co-occurrence matrices
of color feature vectors for multi-camera vehicle identification. In Procs. of the
conf. on Advanced Concepts for Intelligent Vision Systems ACIVS2003, pages 22-29,
Ghent, Belgium.
40. Pereira P.C.M., Barraviera B., Burini R.C., Soares A.M.V.C., Bertani M.A. (1998). Use of
honey as nutritional and therapeutic supplement in the treatment of infectious diseases.
The Venomous Animals and Toxins. Preliminary Report; 1: 1-2
41. Rodríguez-Damián M., Cernadas E., Formella A., Fernández-Delgado M., De Sá-Otero
P. (2006). Automatic detection and classification of grains of pollen based on shape
and texture. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications
and Reviews, vol. 36, no. 4.
42. Sander H. Landsmeer, Emile A. Hendriks, Letty A. de Weger, Johan H. C. Reiber,
Berend C. Stoel (2009). Detection of pollen grains in multifocal optical microscopy images
of air samples. Microscopy Research and Technique 72: 424-430.
43. Ronneberger O., Q. Wang, H. Burkhardt, (2008). Fast and robust segmentation of
spherical particles in volumetric data sets from brightfield microscopy. Biomedical
Imaging: From Nano to Macro, 5th IEEE International Symposium
44. Serrano S., Villarejo M., Espejo R., Jodral M. (2004). Chemical and physical parameters
of Andalusian honey: classification of Citrus and Eucalyptus honeys by discriminant
analysis. Food Chem. 87:619-625.
45. Sowmya B., Sheelarani B. (2009). Colour Image Segmentation Using Soft computing
Techniques. International Journal of Soft Computing Applications, pp.69-80
46. Terrab A., Escudero M.L., Gonzalez-Miret, Heredia F.J. (2004). Colour characteristics of
honey as influenced by pollen grain content: a multivariate study. J. Sci. Food Agric.
84:380-386.
47. Trémeau A., Borel N., (1998). A region growing and merging algorithm to color
segmentation, Pattern Recognition,30(7), 1191