Article (PDF available)

Abstract

We present a self-organizing Kohonen neural network for quantizing colour graphics images. The network is compared with existing algorithmic methods for colour quantization. It is shown experimentally that, by adjusting a quality factor, the network can produce images of much greater quality than the existing methods at longer running times, or of slightly better quality at shorter running times. This confounds the frequent observation that Kohonen neural networks are necessarily slow. The continuity of the colour map produced can be exploited for further image compression, or for colour palette editing.
[Figure: Comparison of colour-space partitions in RGB space. Panels: (a) points distribution, (b) equal-sized clusters, (c) median-cut algorithm, (d) sophisticated median-cut, (e) oct-trees (quadtrees), (f) Kohonen neural network. Axes: R, G, B.]

[Figure: Kohonen network update. The neuron closest to a new data point, together with its neighbours within radius 2, is pulled toward the point, yielding the network after update.]
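The training scheme sketched in the figures (find the closest neuron, then pull it and its neighbours within a shrinking radius toward each sample) can be illustrated in a few lines. This is a minimal sketch, not the paper's exact schedule: the learning rate, radius decay, and step count below are illustrative assumptions.

```python
import numpy as np

def som_quantize(pixels, n_colors=16, steps=4000, seed=0):
    """Train a 1-D Kohonen map on RGB samples and return the learned palette.

    A sketch of SOM-based colour quantization: for each random sample we find
    the closest neuron and pull it, plus its neighbours within a shrinking
    radius, toward the sample with a decaying learning rate.
    """
    rng = np.random.default_rng(seed)
    palette = rng.uniform(0.0, 255.0, size=(n_colors, 3))  # random initial neurons
    n = len(pixels)
    for t in range(steps):
        x = pixels[rng.integers(n)]                    # random training sample
        frac = 1.0 - t / steps
        alpha = 0.5 * frac                             # decaying learning rate
        radius = int(round((n_colors // 4) * frac))    # shrinking neighbourhood
        bmu = int(((palette - x) ** 2).sum(axis=1).argmin())  # closest neuron
        lo, hi = max(0, bmu - radius), min(n_colors, bmu + radius + 1)
        palette[lo:hi] += alpha * (x - palette[lo:hi])  # pull neighbourhood toward x
    return palette
```

Quantizing an image then amounts to mapping each pixel to its nearest palette entry.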
... The segmentation process is independent of how the saliency and other preliminary computations (resizing, color reduction, etc.) are computed in the pre-processing step. To provide evidence for this, we adopt different preliminary computation methods [13,[37][38][41][42][43] and evaluate the corresponding experimental results. ...
... The reduction of the number of colors is obtained by a Color Quantization (CQ) method. Four different CQ methods are available for this purpose: [13,[41][42][43]. The number of colors, namely colnum, is fixed by the user. ...
... Then, we validated those values on the ISIC2016 dataset. Since the SCS performance is slightly better when CQ and the Saliency Map are computed by [38] and [41] respectively, and when we set the following parameters: T c =60, T n = 50, θ 1 = 0.2, T s =10, θ 2 =0.8, throughout the paper the examples and tables refer to the above-selected methods and settings. ...
Preprint
Full-text available
Skin lesion segmentation is one of the crucial steps for an efficient non-invasive computer-aided early diagnosis of melanoma. In this paper, we investigate how saliency and color information can be usefully employed to determine the lesion region. Unlike most existing saliency-based methods, to discriminate the skin lesion from the surrounding regions we identify some properties related to saliency and color information, and we propose a novel segmentation process using binarization coupled with new perceptual criteria based on these properties. To refine the accuracy of the proposed method, the segmentation step is preceded by a pre-processing stage aimed at reducing the computational burden, removing artifacts, and improving contrast. We have assessed the method on two public databases comprising 1497 dermoscopic images and compared its performance with that of classical saliency-based methods and with that of some more recent saliency-based methods specifically applied to dermoscopic images. Results of qualitative and quantitative evaluations of the proposed method are promising, as the obtained skin lesion segmentation is accurate and the method performs satisfactorily in comparison to other existing saliency-based segmentation methods.
... In [4], the colour space is modelled using an octree structure whose sub-branches are combined to form the palette. Self-organising Kohonen neural networks have also been proposed [5], as have a number of soft computing methods such as genetic algorithms [6], simulated annealing [7], fuzzy c-means [8], rough c-means [9], and fuzzy-rough c-means [10]. ...
... To put the obtained results into context, we also ran a number of conventional colour quantisation algorithms, namely the popularity algorithm [1], median cut [1], octree quantisation [4], Neuquant [5], modified min-max [16], split & merge [17], and variance-based colour quantisation [18], as well as several soft computing-based approaches, namely stepwidth adaptive simulated annealing (SWASA) [7], fuzzy c-means (FCM) [8], random sampling FCM (RSFCM) [8], enhanced FCM (EnFCM) [8], anisotropic mean shift based FCM (AMSFCM) [8], rough c-means (RCM) [9], and fuzzy-rough c-means (FRCM) [10], as well as HMS on its own [12]. ...
Conference Paper
Colour quantisation is a common image processing technique to reduce the number of distinct colours in an image, which are then represented by a colour palette. Selection of appropriate entries in this palette is challenging since the quality of the quantised image is directly dictated by the palette colours. In this paper, we propose a novel colour quantisation algorithm based on the human mental search (HMS) algorithm and subsequent refinement of the colour palette using k-means. HMS is a recent population-based metaheuristic algorithm that has been shown to yield good performance on a variety of optimisation problems. In the first stage, we use HMS to find a high-quality initial colour palette. In the second stage, this palette is refined using k-means to converge towards a local optimum and thus to further improve the quality of the quantised image. We evaluate our algorithm on a set of benchmark images and compare it to several conventional and soft computing-based colour quantisation algorithms, demonstrating excellent image quality that outperforms the other methods.
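The second stage described above, refining an initial palette with k-means, can be sketched as a few Lloyd iterations. This is a generic sketch of that refinement step only; the initial palette would come from a metaheuristic such as HMS, which is not implemented here.

```python
import numpy as np

def refine_palette(pixels, palette, iters=10):
    """Refine an initial colour palette with Lloyd's k-means iterations.

    pixels:  (N, 3) array of RGB samples
    palette: (K, 3) initial palette, e.g. produced by a metaheuristic
    """
    palette = np.asarray(palette, dtype=float).copy()
    for _ in range(iters):
        # squared distance from every pixel to every palette entry
        d = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)                  # nearest palette entry per pixel
        for k in range(len(palette)):
            members = pixels[labels == k]
            if len(members):
                palette[k] = members.mean(axis=0)  # move centre to cluster mean
    return palette
```

Each iteration can only decrease the total quantization error, so the palette converges toward a local optimum, as the abstract describes.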
... These methods benefit from the low complexity of uniform quantization as well as the superior performance of the non-uniform method. It has been found that the performance of this approach is comparable to that of approaches based on k-means clustering and the Kohonen competitive neural network [62], with the advantage of being non-iterative and insensitive to moderate changes in the holographic data distribution. ...
... Two companding grids are utilized, based on diamond and logarithmic spiral patterns. It has been found that the performance of companding quantizers is comparable to the ones based on k-means clustering and Kohonen competitive neural network [62], with the advantage of being noniterative and insensitive to moderate change in the holographic data distribution. In the same spirit, the technique introduced in [181] extracts clusters from the histograms of R-I or A-P hologram data to make decisions about the best quantization values. ...
Thesis
Full-text available
Digital holography is an emerging technology for 3D visualization, which is expected to revolutionize the industry of interactive displays in the near future. Contrary to conventional stereoscopic images, digital holograms provide all the human visual cues. This allows a natural and realistic eye focus without any eye strain or headaches caused by the inherent vergence-accommodation conflict. Unlike common images, digital holograms exhibit non-localized features with high frequency components. Moreover, high quality holograms with full parallax, full colour, and a large field of view contain a massive amount of data. To reduce the time needed to access and display holographic content, efficient scalable compression must be applied to the hologram before its transfer to the user. In the first part of this work, we introduced two methods for digital hologram compression. First, we proposed a redundant light beams-based decomposition of holograms using Gabor wavelets. For compression purposes, we sparsified the obtained expansion using the Matching Pursuit algorithm. Then, we designed a specific encoder framework for the coefficients and indexes of Gabor atoms. The proposed approach achieved better compression performance compared to state-of-the-art methods. Second, by exploiting the duality between Gabor wavelets and diffracted light beams, we developed a viewpoint-quality scalable coding scheme. Indeed, for a given observer's position, only the Gabor atoms that emit light into the viewer's window are selected, sorted, and then encoded. The bit rate was significantly reduced, without degrading the reconstruction quality obtained by encoding the whole hologram. In the second part of this work, we designed two server-client architectures for a view-dependent progressive transmission of holograms using scalable coding.
In the first solution, a fine-grain scalable bitstream is generated online by the server after each client notification about the user's position. Experimental results reveal that this method enables rapid visualization by decoding the first received atoms, in addition to a progressive increase in quality. Finally, to reduce the latency caused by the computational burden of encoding, we proposed a second solution where the whole Gabor expansion is encoded offline by the server, and then decoded online with respect to the viewer's trajectory. To enable scalable compression, we grouped the Gabor atoms following a block-based decomposition of the observer plane. Then, the atoms of each block are assigned to different quality levels and encoded in packets. Simulation tests show that the proposed architecture allows low-latency transmission without significantly increasing the encoding rate.
... Supervised techniques: Feed-Forward Neural Networks [10,11]; Back Propagation Neural Networks [12]; Cascade Correlation Neural Networks [13]. Unsupervised techniques: Constraint Satisfaction Neural Networks [14]; Pulse Coupled Neural Networks [15]; Oscillatory Neural Networks [16]. Hybrid techniques: Hopfield Neural Networks [17]; Kohonen Neural Networks [18]. Supervised methods require the help of an expert, meaning that the expert selects the training data set used for image segmentation. In unsupervised methods, this process is semi- or fully automatic. ...
Article
Full-text available
In image segmentation, an initial curve can be placed on the image so that, as the curve evolves, it wraps around the objects in the image. Since the curve's motion is driven by a structure with partial derivatives, this class of segmentation is called partial-differential-equation-based segmentation. This study presents a deep mathematical analysis and numerical computations for image segmentation using a mathematical model built from partial derivatives. In the numerical computations, the parameters supplied to the model by the user are examined, and the optimization of these parameters with artificial intelligence algorithms is also addressed. In addition, a user-friendly interface application that performs all the numerical computations has been developed. The computations in the application can be carried out with artificial intelligence algorithms, or the user can perform a manual computation with values entered into the interface.
... Indeed, CQ [4,10,17,24,43,63,[65][66][67]97] is an important step in compression methods, as improper quantization can produce distortion, thus reducing the visual quality of the image. In the past, due to the limitations of display hardware and the bandwidth restrictions of computer networks, the main applications of CQ were image display [24,94] and image compression [58,86]. ...
Article
Full-text available
Visual quality evaluation is one of the fundamental challenging problems in image processing. It plays a central role in the shaping, implementation, optimization, and testing of many methods. Existing image quality assessment methods have centered mainly on images altered by common distortions while paying little attention to the distortion introduced by color quantization. This is despite the wide range of applications requiring color quantization as a preprocessing step, since many color-based tasks are more efficiently accomplished on an image with a reduced number of colors. To fill this gap, at least partially, we carry out a quantitative performance evaluation of nine currently widely-used full-reference image quality assessment measures. The evaluation runs on two publicly available and subjectively rated image quality databases for color quantization degradation, considering their appropriate combinations and subparts. The evaluation results indicate which quality measures have the closest performance in terms of correlation to the subjective human rating, and show that the selected image database significantly impacts the evaluation of the quality measures, although a similar trend is maintained on each database. The detected strong trend similarity, both on individual databases and on databases obtained by proper combination, makes it possible to validate the database combination process and to consider the quantitative performance evaluation on each database as an indicator of performance on the other databases. The experimental results are useful for choosing appropriate quality measures for color quantization and for improving their future employment.
... In particular, although the CQ process is considered fundamental for color image analysis and a significant amount of research has been done on CQ, as mentioned above, IQA for CQ degradation has received little attention. Indeed, CQ [41,42,43,44,45,46,47,48,49,50] is an important step in compression methods, as improper quantization can produce distortion, thus reducing the visual quality of the image. In the past, due to the limitations of display hardware and the bandwidth restrictions of computer networks, the main applications of CQ were image display [42,51] and image compression [52,53]. ...
Preprint
Full-text available
Visual quality evaluation is one of the challenging basic problems in image processing. It also plays a central role in the shaping, implementation, optimization, and testing of many methods. Existing image quality assessment methods have focused on images corrupted by common degradation types while little attention has been paid to color quantization. This is despite the wide range of applications that require color quantization as a preprocessing step, since many color-based tasks are more efficiently accomplished on a reduced number of colors. In this paper, we propose and carry out a quantitative performance evaluation of nine well-known and commonly used full-reference image quality assessment measures. The evaluation is done using two publicly available and subjectively rated image quality databases for color quantization degradation, and by considering suitable combinations or subparts of them. The results indicate which quality measures have the closest performance in terms of correlation to the subjective human rating, and show that the evaluation of the statistical performance of the quality measures for color quantization is significantly impacted by the selected image quality database, while a similar trend is maintained on each database. The detected strong similarity, both on individual databases and on databases obtained by integration, makes it possible to validate the integration process and to consider the quantitative performance evaluation on each database as an indicator of performance on the other databases. The experimental results are useful for choosing suitable quality measures for color quantization and for improving their future employment.
... The palette colors are selected from the final weights. A brief description of the SOM is presented in [10], and we employ it in our work. ...
Article
Full-text available
Color image quantization is the process of reducing the number of colors in digital images. In the literature, color image quantization is regarded as one of the most important techniques in image processing due to its various real-world applications. From the literature, it is evident that clustering algorithms are widely adopted in color quantization approaches. In this paper, we provide a short review of some of the techniques employed for color quantization. We consider the following approaches: (a) the k-Means algorithm, (b) Fuzzy c-Means clustering (FCM), (c) the Self-Organizing Map neural network, and (d) the Median Cut algorithm (MC), and analyze their performance on color quantization. For the clustering task, RGB color coding is employed. We use the mean square error (MSE) as the performance indicator to evaluate the color quantization methods. The experimental results illustrate that all the compared techniques are able to find the significant colors in an image and to present the image with a reduced number of colors.
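The MSE indicator used in the review above is straightforward to compute: map each pixel to its nearest palette colour and average the squared per-channel error. A minimal sketch (function name and interface are my own):

```python
import numpy as np

def quantize_and_mse(image, palette):
    """Map each pixel of an RGB image to its nearest palette colour and
    return the mean squared error between the original and quantized image."""
    pixels = image.reshape(-1, 3).astype(float)
    palette = np.asarray(palette, dtype=float)
    # squared distance from every pixel to every palette entry
    d = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
    quantized = palette[d.argmin(axis=1)]
    return ((pixels - quantized) ** 2).mean()
```

An image whose colours all appear in the palette yields an MSE of zero; larger values indicate stronger quantization distortion.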
... One of the most commonly used algorithms for GIF color quantization is the median-cut algorithm [5]. Dekker proposed using Kohonen neural networks for predicting cluster centers [10]. Other clustering techniques, such as k-means [6], hierarchical clustering [7], and particle swarm methods [24], have also been applied to the problem of color quantization [30]. ...
Preprint
Graphics Interchange Format (GIF) is a widely used image file format. Due to the limited number of palette colors, GIF encoding often introduces color banding artifacts. Traditionally, dithering is applied to reduce color banding, but it introduces dotted-pattern artifacts. To reduce artifacts and provide a better and more efficient GIF encoding, we introduce a differentiable GIF encoding pipeline, which includes three novel neural networks: PaletteNet, DitherNet, and BandingNet. Each of these three networks provides an important functionality within the GIF encoding pipeline. PaletteNet predicts a near-optimal color palette given an input image. DitherNet manipulates the input image to reduce color banding artifacts and provides an alternative to traditional dithering. Finally, BandingNet is designed to detect color banding and provides a new perceptual loss specifically for GIF images. As far as we know, this is the first fully differentiable GIF encoding pipeline based on deep neural networks that is compatible with existing GIF decoders. A user study shows that our algorithm is better than Floyd-Steinberg based GIF encoding.
... • Neural Quantization (NQ) [17] is an algorithm that uses a self-organising Kohonen neural network to quantize the colour image; • Wan Quantifier (AQ) [19] is a variance-based algorithm used for multidimensional data clustering that uses a sum-of-squared-error minimisation criterion between the quantized and original image; • Wu Quantifier (UQ) [21] is based on variance minimisation through linear search. The colour space cube is divided in two along each of its axes, and the division plane that minimises the sum of variances on both sides of the colour space is selected, thus creating two boxes. ...
Article
Full-text available
Accurate skin lesion segmentation is important for identification and classification through computational methods. However, when performed by dermatologists, the results of clinical segmentation are affected by a certain margin of inaccuracy (which exists since dermatologists delineate lesions for extraction rather than segmentation) and by significant inter- and intra-individual variability; such segmentation is therefore not sufficiently accurate for segmentation studies. This work addresses these limitations to enable detailed analysis of lesions' geometry along with extraction of non-linear characteristics of region-of-interest border lines. A comprehensive review of 39 segmentation methods is carried out, and a contribution to improve dermoscopic image segmentation is presented that determines the regions of interest of skin lesions through accurate border lines with fine geometric details. This approach resorts to Local Binary Patterns and k-means clustering for precise identification of lesion boundaries, particularly the melanocytic. A comparative evaluation study is carried out using three different datasets, and the reviewed algorithms are grouped according to their approach. Results show that algorithms from the same group tend to perform similarly. Nevertheless, their performance does not depend solely on the algorithm itself but also on the underlying dataset characteristics. Throughout several evaluations, the proposed Local Binary Patterns method consistently presents better average performance than the current state-of-the-art techniques across the three different datasets, without the need for training or supervised learning steps. Overall, apart from presenting a new segmentation method capable of outperforming the current state of the art, this paper provides insightful information about the behaviour and performance of different image segmentation algorithms.
Article
This paper is a revised version of an article by the same title and author which appeared in the April 1991 issue of Communications of the ACM. For the past few years, a joint ISO/CCITT committee known as JPEG (Joint Photographic Experts Group) has been working to establish the first international compression standard for continuous-tone still images, both grayscale and color. JPEG's proposed standard aims to be generic, to support a wide variety of applications for continuous-tone images. To meet the differing needs of many applications, the JPEG standard includes two basic compression methods, each with various modes of operation. A DCT-based method is specified for "lossy" compression, and a predictive method for "lossless" compression. JPEG features a simple lossy technique known as the Baseline method, a subset of the other DCT-based modes of operation. The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications. This article provides an overview of the JPEG standard, and focuses in detail on the Baseline method.
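The DCT at the heart of the Baseline method operates on 8x8 blocks. A sketch of the 2-D type-II DCT with orthonormal scaling follows; note this is the mathematical transform only, not the standard's quantization tables or entropy coding.

```python
import numpy as np

def dct2(block):
    """2-D type-II DCT of a square block, as used (per 8x8 block) by JPEG's
    lossy Baseline method. Orthonormal scaling; a sketch, not the standard's
    full encoding path."""
    n = block.shape[0]
    k = np.arange(n)
    # basis[u, x] = cos(pi * (2x + 1) * u / (2n))
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    scale = np.full(n, np.sqrt(2.0 / n))
    scale[0] = np.sqrt(1.0 / n)          # DC term gets a smaller weight
    C = scale[:, None] * basis           # orthonormal DCT matrix
    return C @ block @ C.T               # separable 2-D transform
```

For a flat block, all the energy lands in the single DC coefficient, which is why smooth image regions compress so well after quantizing the near-zero AC coefficients.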
Article
This paper shows that the 2-neighbour Kohonen algorithm is self-organizing under fairly general assumptions on the stimulus distribution μ (supp(μ) contains a non-empty open set) and is a.s. convergent (in a weakened sense) as soon as μ admits a log-concave density. The 0-neighbour algorithm is shown to have similar convergence properties. Some numerical simulations illustrate the theoretical results, along with a counter-example provided by a specific class of density functions.
Article
The nature of neurocomputing is discussed. Neurocomputing is defined as the engineering discipline concerned with nonprogrammed adaptive information processing systems (neural networks) that develop associations (transformations or mappings) between objects in response to their environment. The operation of a neural network is described, and its hardware realization is considered. Some applications of neural networks are examined.