Article

Color image quantization for frame buffer display

... Definition: Purify the protected images to nullify adversarial perturbations. Methods: JPEG [28], Quant [29], TVM [30], IMPRESS [15], DiffPure [31]. ...
... Approaches - Experience-based methods use the following transformation as τ: JPEG [28] is a lossy compression algorithm that uses the discrete cosine transform to remove high-frequency components from x_pro; Quantization (Quant) [29] compresses pixel values to single discrete values. ...
... PhotoGuard (PGuard) [10] uses an ℓ∞ perturbation limit of 16/255, a step size of 2/255, and 200 optimization steps. Anti-Dreambooth (AntiDB) [27] employs 100 PGD iterations for FSMG and 50 for ASPL, with a perturbation budget of 8/255, a step size of 1/255, and a noise budget η of 0.05, minimized over 1000 training steps. In NP, JPEG Compression (JPEG) [28] sets the quality to 0.75, and Quantize (Quant) [29] sets the bit depth to 8. Total Variance Minimization (TVM) [30] sets a regularization weight of 0.5 with the ℓ2 norm and is optimized with the BFGS algorithm. For IMPRESS [15], we use the original authors' hyperparameters, setting the learning rate to 0.001, the purification intensity to 0.1, and 3000 iterations. ...
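For concreteness, the two simplest purification transforms quoted above (JPEG re-compression and bit-depth quantization) can be sketched in a few lines of Python. The snippet below is an illustrative approximation using Pillow and NumPy, not code from the cited works; the function names are ours, and Pillow's integer quality scale is assumed to correspond to the 0.75 quality quoted above.

import io
import numpy as np
from PIL import Image

def jpeg_purify(img: Image.Image, quality: int = 75) -> Image.Image:
    # Re-encode as JPEG to discard high-frequency (adversarial) components.
    # Pillow expects an integer quality (roughly 0-95); 75 mirrors the 0.75 above.
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def quantize_purify(img: Image.Image, bits: int = 8) -> Image.Image:
    # Reduce each channel to `bits` bits (bit-depth quantization); with bits=8
    # the 24-bit RGB input is left unchanged, matching the setting quoted above.
    arr = np.asarray(img.convert("RGB"), dtype=np.int32)
    step = 256 // (2 ** bits)
    snapped = (arr // step) * step + step // 2 if step > 1 else arr
    return Image.fromarray(snapped.astype(np.uint8))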
Preprint
Text-to-image diffusion models have emerged as powerful tools for generating high-quality images from textual descriptions. However, their increasing popularity has raised significant copyright concerns, as these models can be misused to reproduce copyrighted content without authorization. In response, recent studies have proposed various copyright protection methods, including adversarial perturbation, concept erasure, and watermarking techniques. However, their effectiveness and robustness against advanced attacks remain largely unexplored. Moreover, the lack of unified evaluation frameworks has hindered systematic comparison and fair assessment of different approaches. To bridge this gap, we systematize existing copyright protection methods and attacks, providing a unified taxonomy of their design spaces. We then develop CopyrightMeter, a unified evaluation framework that incorporates 17 state-of-the-art protections and 16 representative attacks. Leveraging CopyrightMeter, we comprehensively evaluate protection methods across multiple dimensions, thereby uncovering how different design choices impact fidelity, efficacy, and resilience under attacks. Our analysis reveals several key findings: (i) most protections (16/17) are not resilient against attacks; (ii) the "best" protection varies depending on the target priority; (iii) more advanced attacks significantly promote the upgrading of protections. These insights provide concrete guidance for developing more robust protection methods, while CopyrightMeter's unified evaluation protocol establishes a standard benchmark for future copyright protection research in text-to-image generation.
... For color or grayscale images, color quantization techniques [26] are widely applied to find initial domain partitions. Well-known methods include median cut [55], k-means clustering [72], Octree [43], self-organizing maps [29], and competitive learning [25]. For example, Figure 1 (c) shows a quantization for (a) using k-means with 5 colors, and the connected components are then filled with average colors of the pixels therein. ...
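As a rough sketch of that preprocessing step (k-means quantization followed by filling each connected component with its average color), the following Python snippet uses scikit-learn, SciPy, and NumPy; the 5-color setting simply mirrors the figure described in the excerpt, and the function is our own illustration rather than the cited pipeline.

import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def quantize_and_fill(rgb: np.ndarray, n_colors: int = 5, seed: int = 0) -> np.ndarray:
    # rgb: H x W x 3 uint8 array; returns an image in which every connected
    # component of the k-means label map is filled with its mean color.
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3).astype(np.float64)
    labels = KMeans(n_clusters=n_colors, n_init=10, random_state=seed).fit_predict(pixels)
    label_map = labels.reshape(h, w)
    out = np.zeros_like(rgb, dtype=np.float64)
    for k in range(n_colors):
        comp, n_comp = ndimage.label(label_map == k)   # 4-connected components of one color class
        for c in range(1, n_comp + 1):
            mask = comp == c
            out[mask] = rgb[mask].mean(axis=0)          # average color of the component
    return out.astype(np.uint8)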
... Figure 12 (a) shows an input raster image for this set of experiments. For vectorization based on color quantization, we test multiple benchmark algorithms including Median Cut (MC) [55], K-means++ (Kpp) [13], Incremental Online K-Means (IOKM) [1], Weighted TSE (WTSE) [80], and quantization based on a self-organizing Kohonen neural network (NEU) [33]. These algorithms are efficient and commonly integrated into various software. ...
... (d) BG region merging (3.6); (e) Scale region merging (3.8); and (f) MS region merging (3.7). Quantization-based vectorization using (g) WTSE [80]; (h) IOKM [1]; (i) MC [55]; (j) Kpp [13]; and (k) NEU [33]. Region-merging-based results are closer to (a) with a smaller number of regions. ...
Preprint
Image vectorization converts raster images into vector graphics composed of regions separated by curves. Typical vectorization methods first define the regions by grouping similar colored regions via color quantization, then approximate their boundaries by Bezier curves. In that way, the raster input is converted into an SVG format parameterizing the regions' colors and the Bezier control points. This compact representation has many graphical applications thanks to its universality and resolution-independence. In this paper, we remark that image vectorization is nothing but an image segmentation, and that it can be built by fine to coarse region merging. Our analysis of the problem leads us to propose a vectorization method alternating region merging and curve smoothing. We formalize the method by alternate operations on the dual and primal graph induced from any domain partition. In that way, we address a limitation of current vectorization methods, which separate the update of regional information from curve approximation. We formalize region merging methods by associating them with various gain functionals, including the classic Beaulieu-Goldberg and Mumford-Shah functionals. More generally, we introduce and compare region merging criteria involving region number, scale, area, and internal standard deviation. We also show that the curve smoothing, implicit in all vectorization methods, can be performed by the shape-preserving affine scale space. We extend this flow to a network of curves and give a sufficient condition for the topological preservation of the segmentation. The general vectorization method that follows from this analysis shows explainable behaviors, explicitly controlled by a few intuitive parameters. It is experimentally compared to state-of-the-art software and proved to have comparable or superior fidelity and cost efficiency.
... The term color quantization was coined in a little-known paper by Jain and Pratt (1972). However, most researchers consider Heckbert (1980Heckbert ( , 1982 the inventor of the first true cq algorithm. Heckbert described his celebrated median-cut cq algorithm first in his Bachelor's thesis (Heckbert 1980) and then, with slight modifications, in a journal paper (Heckbert 1982). ...
... However, most researchers consider Heckbert (1980Heckbert ( , 1982 the inventor of the first true cq algorithm. Heckbert described his celebrated median-cut cq algorithm first in his Bachelor's thesis (Heckbert 1980) and then, with slight modifications, in a journal paper (Heckbert 1982). In addition to being the first of its kind, Heckbert's seminal work introduced much of the terminology used in the cq literature to this day, described the first divisive cq algorithm, proposed bit-cutting as a preprocessing step and k-means as a postprocessing step, developed the first accelerated pixel mapping algorithm, and suggested the use of dithering to minimize false contours in the output image. ...
... • Image-independent vs. image-dependent (Gentile et al. 1990): Image-independent algorithms design a universal (or fixed) palette without regard to any particular input image, whereas image-dependent ones design a custom (or adaptive) palette based on the distribution of the colors in a given input image. • Uniform vs. nonuniform (Heckbert 1982): Uniform algorithms place the representatives uniformly throughout the color space, whereas nonuniform algorithms position the representatives nonuniformly based on the distribution of the input colors. ...
Article
Full-text available
Color quantization (cq), the reduction of the number of distinct colors in a given image with minimal distortion, is a common image processing operation with various applications in computer graphics, image processing/analysis, and computer vision. The first cq algorithm, median-cut, was proposed over 40 years ago. Since then, many clustering algorithms have been applied to the cq problem. In this paper, we present a comprehensive overview of the cq algorithms proposed in the literature. We first examine various aspects of cq, including the number of distinguishable colors, cq artifacts, types of cq, applications of cq, data structures, data reduction, color spaces and color difference equations, and color image fidelity assessment. We then provide an overview of image-independent cq algorithms, followed by a detailed survey of image-dependent ones. After presenting a brief discussion of pixel mapping, we conclude our survey with an outline of the open problems in cq.
... There are different studies on color quantization in the literature. Quantization approaches can generally be categorized into two groups: splitting [8][9][10][11][12][13] and clustering [14][15][16]. Splitting methods are iterative approaches that divide the color space into sub-prisms. Median Cut (MC) [8], Center-Cut (CC) [9], Octree (OCT) [10], Variance Cut (VC), Variance Cut with Lloyd-Max (VCL), Variance-Based (WAN) [17,18], Popularity (POP) and Modified Popularity (MPOP) [19,20], Greedy Orthogonal bi-Partitioning (WU) [12], Radius-Weighted Mean-Cut (RWM) [21], Split and Merge (SAM) [20], Self-Organization Map (SOM) [22], and Modified Max-Min (MMM) [23] are commonly used quantization methods. The MC method [8] repeatedly separates the three-dimensional color space into smaller regions. The method is based on the distribution of colors in the image. ...
Article
Full-text available
In this study, a novel color quantization approach that automatically estimates the number of colors by multi-level thresholding based on the histogram is proposed. The method consists of three stages. First, the red–green–blue (RGB) space is clustered by threshold values. Thus, the pixels are positioned in a cluster or sub-prism. Second, the color palette is produced by determining the centroids of the clusters. Finally, the pixels are reassigned to clusters based on their distance from each centroid. The average of the pixels included in each cluster also represents the color of that cluster. While conventional methods are user-dependent, the proposed algorithm automatically determines the number of colors by considering the pixels assigned to the clusters. Additionally, the multi-level thresholding approach is also a solution to the initialization problem, which is another important issue for quantization. Consequently, the experimental results of the method tested with various images show better performance than many frequently used quantization techniques.
... Color Quantization (CQ) is the process that selects and assigns a limited set of colors for representing a color image with maximum fidelity (Burger and Burge, 2009). The need of performing a CQ process frequently arises in image display (Heckbert, 1982;Weeks, 1998) and image compression (Wallace, 1991;Plataniotis and Venetsanopoulos, 2000). Moreover, CQ is considered as a prerequisite for many image processing tasks (i.e. ...
... In particular, it refines the initialization step of the K-means algorithm and optimizes its implementation. In addition, we compare the proposed method with the Median-Cut (MC) (Heckbert, 1982; Wu, 1991; Cheng and Yang, 2001; Celebi, 2009; Celebi et al., 2013), i.e., the original MC (Heckbert, 1982) and its modifications CY (Cheng and Yang, 2001) and WU (Wu, 1991). Similarly to the proposed method, all of them are based on a recursive optimization of the color histogram, but they use different criteria and measures (respectively, variances and directional distances from the mean) that do not explicitly depend on visual perception rules. ...
Conference Paper
Full-text available
The paper presents a novel method for color quantization (CQ) of dermoscopic images. The proposed method consists of an iterative procedure that selects image regions in a hierarchical way, according to the visual importance of their colors. Each region provides a color for the palette which is used for quantization. The method is automatic, image dependent and computationally not demanding. Preliminary results show that the mean square error of quantized dermoscopic images is competitive with existing CQ approaches.
... Following P. Heckbert (1982), it was simultaneously invented in two different institutions in 1978 by Tom Boyle & Andy Lippman from MIT's Architecture Machine Group and Ephraim Cohen from the New York Institute of Technology. The efforts of both groups of scholars do not appear to have been published in any academic sources, though the procedure has been described by P. Heckbert (1980). ...
... Based on: P. Heckbert (1980) and Xiang (2007) ...
... The median cut algorithm had its first appearance in print in the early 1980s (P. Heckbert, 1980, 1982), though it was invented by the author of these publications in 1979. The algorithm showed a significant improvement compared to uniform quantisation and the popularity method, which were widely used back then. ...
Thesis
This thesis examines buyers' preferences and price determinants in the Polish art market regarding paintings, with a focus on the colours of the paintings and their influence on the price. The full-text is available here: https://www.wbc.poznan.pl/dlibra/publication/609909/edition/517564/content
... In the literature, a closely related line of work on the interplay between colors and structures is color quantization. Color quantization focuses on generating visually accurate images in restricted color spaces [1], [3]. This problem is human-centered, as it usually prioritizes the visual fidelity or quality for human viewing. ...
... The color quantization process is finalized by assigning an overall color to all pixels in each cluster. [Figure: color quantized images (top) and corresponding class activation maps [2] with network confidence for the ground-truth class (bottom); in "Original" (a), colors are described with 24 bits.] ...
... To investigate the key structures for neural network recognition, we further reduce the color spaces to just a few bits, so as to minimize the contribution of colors to the recognition task. [Figure: "MedianCut" [1] and "OCTree" [5] are traditional color quantization methods; "ColorCNN" and "ColorCNN+" are the proposed methods.] Evaluation: We evaluate the color quantized images using a classification network pre-trained on the original images. ...
Preprint
Color and structure are the two pillars that combine to give an image its meaning. Interested in critical structures for neural network recognition, we isolate the influence of colors by limiting the color space to just a few bits, and find structures that enable network recognition under such constraints. To this end, we propose a color quantization network, ColorCNN, which learns to structure an image in limited color spaces by minimizing the classification loss. Building upon the architecture and insights of ColorCNN, we introduce ColorCNN+, which supports multiple color space size configurations, and addresses the previous issues of poor recognition accuracy and undesirable visual fidelity under large color spaces. Via a novel imitation learning approach, ColorCNN+ learns to cluster colors like traditional color quantization methods. This reduces overfitting and helps both visual fidelity and recognition accuracy under large color spaces. Experiments verify that ColorCNN+ achieves very competitive results under most circumstances, preserving both key structures for network recognition and visual fidelity with accurate colors. We further discuss differences between key structures and accurate colors, and their specific contributions to network recognition. For potential applications, we show that ColorCNNs can be used as image compression methods for network recognition.
... In comparison to the 24-bit full color space of natural images, MIs have only 1 bit. Existing color quantization methods perform poorly in extremely low-bit color spaces (Heckbert 1982; Gervautz and Purgathofer 1988; Hou, Zheng, and Gould 2020). In addition, the human visual system sometimes cannot accurately perceive the foreground objects from an MI, necessitating the flexible adjustment of perceptual difficulty. ...
... CQNet. To demonstrate the effectiveness of CQNet, we replaced it with three other representative color quantization methods (i.e., OcTree (Gervautz and Purgathofer 1988), MedianCut (Heckbert 1982), and ColorCNN (Hou, Zheng, and Gould 2020)). The qualitative comparison demonstrates that the MI generated using CQNet is closest to the natural images in terms of visual perception. ...
Article
Full-text available
Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is an effective mechanism to protect websites and online applications from malicious bot programs. Image-based CAPTCHA is one of the most widely used schemes. However, deep learning techniques have significantly weakened the security of some image-based CAPTCHA schemes. Mooney images (MIs) are important research materials in the field of cognitive science. Compared to natural images, MIs exhibit fewer visual cues, fragmented content, and greater ambiguity, so the perception of an MI relies more on the iterative process between feedforward and feedback mechanisms. In this paper, we raise an intriguing question: can MIs be used to enhance the security of CAPTCHAs? To answer it, we first propose a novel framework, HiMI, that generates high-quality MIs from natural images and also allows flexible adjustment of the perceived difficulty. Based on MIs, we design two MI-CAPTCHA schemes related to object detection and instance segmentation tasks, respectively. We experimentally demonstrate that HiMI performs better than other baseline methods in terms of both image quality and application potential in the two MI-CAPTCHA schemes. Additionally, we conduct experiments to explore the solving performance of humans and CAPTCHA solvers under different parameter settings of the schemes, providing a valuable reference for practical application.
... These three key features are implicitly included in a classic color reduction method known as the median cut algorithm [8]. In summary: (1) the algorithm operates in a color space where color and luminance are mixed; (2) it controls the number of color indices (i.e., color variety) used to encode an image; and (3) it manipulates the image using cumulative histograms of each input signal. ...
... The median cut algorithm [8] can be applied to various color spaces, such as RGB or LMS. Figure S1 illustrates the algorithm using the RGB space of an image. The process begins by determining the number of color indices for encoding, which is set to four in this case. ...
Preprint
Full-text available
An essential objective of early visual processing is to handle the redundant inputs from natural environments efficiently. For instance, cone signals from natural environments are highly correlated across cone types, indicating channel redundancy. Early visual processing transforms the signals into color and luminance information, for example via principal component analysis, and thus achieves efficient and decorrelated representations of natural scenes. Building on these findings, previous research has investigated the effect of color on visual tasks such as object recognition using grayscale conversion, which separates luminance from color. However, recent work suggests that when focusing on object materials, color and luminance remain highly redundant due to complex optical properties. Although this finding indicates that there may be a more efficient decomposition of signals, the specific algorithms remain unknown. This study shows that a classic computer graphics algorithm, the median cut, offers a novel approach to enhance visual processing efficiency while capturing diagnostic features to separate material from object geometry information. Human behavioral experiments show that color reduction based on the algorithm disturbs material classification while preserving object recognition. These findings suggest that object geometric structures are available only from low-bit information. Finally, considering that material information can be estimated from summary statistics of image sub-spaces, this study suggests an efficient decomposition of input color images.
... Many color quantization methods are found in the literature. Notable examples include the median-cut algorithm [1], the uniform algorithm, the octree algorithm [2], the center-cut algorithm [3], PGF [4], the window-based algorithm [5], clustering-based algorithms such as k-means [6,7], the adaptive clustering algorithm [8], SOM-based clustering algorithms [9], and an ant colony clustering algorithm [10]. Since these methods focus on different aspects of the quantization process, they have unique advantages and disadvantages depending on the color data sets encountered. ...
... Given a pixel x, which is a three-dimensional vector, it is assigned to a proper type of color according to maximum likelihood (ML) estimation: x is labeled L_j when the j-th Gaussian component gives it the highest likelihood among all components. ...
Article
Full-text available
Color quantization is an important technique for image analysis that reduces the number of distinct colors in a color image. A novel color image quantization algorithm based on a Gaussian mixture model (GMM) is proposed. In this approach, we develop a Gaussian mixture model to design the color palette. Each component in the GMM represents a type of color in the color palette. The task of color quantization is then to group pixels into the different components. Experimental results show that our quantization method can obtain better results than other methods.
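As a generic illustration of the approach summarized in this abstract, the sketch below fits a Gaussian mixture to the pixel colors, takes the component means as the palette, and maps each pixel to its most likely component. It relies on scikit-learn's GaussianMixture and is our own reconstruction of the general idea, not the authors' implementation.

import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_quantize(rgb: np.ndarray, n_colors: int = 16, seed: int = 0):
    # rgb: H x W x 3 uint8. Fit a GMM in RGB space; each component is one palette color.
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3).astype(np.float64)   # consider subsampling for large images
    gmm = GaussianMixture(n_components=n_colors, covariance_type="full",
                          random_state=seed).fit(pixels)
    # predict() picks the component with the highest posterior probability
    # (mixture weight times Gaussian density), a weighted form of the ML rule.
    labels = gmm.predict(pixels)
    palette = np.clip(gmm.means_, 0, 255)
    quantized = palette[labels].reshape(h, w, 3).astype(np.uint8)
    return quantized, palette.astype(np.uint8)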
... This initial cluster is recursively subdivided until K clusters are obtained. Well-known divisive methods include median-cut [9], octree [10], variance-based method [11], binary splitting [12], greedy orthogonal bipartitioning [13], center-cut [14], and rwm-cut [15]. More recent methods can be found in [16][17][18]. ...
... Median-cut (MC)[9]: This method starts by building a 32 × 32 × 32 color histogram that contains the original pixel values reduced to 5 bits per channel by uniform quantization. This histogram volume is then recursively split into smaller boxes until K boxes are obtained. ...
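The boxed description above translates almost directly into code. The following sketch is a generic median-cut implementation under the same 5-bits-per-channel pre-quantization; details such as which box to split next are our own choices and may differ from the cited implementation.

import numpy as np

def median_cut_palette(rgb: np.ndarray, k: int = 16) -> np.ndarray:
    # rgb: H x W x 3 uint8. Returns an (at most k) x 3 uint8 palette obtained by
    # median cut on a 32 x 32 x 32 (5 bits per channel) color histogram.
    pts = (rgb.reshape(-1, 3) >> 3).astype(np.int64)          # 5-bit uniform pre-quantization
    colors, counts = np.unique(pts, axis=0, return_counts=True)
    boxes = [(colors, counts)]
    while len(boxes) < k:
        # among boxes that can still be split, take the one holding the most pixels
        splittable = [j for j in range(len(boxes)) if len(boxes[j][0]) > 1]
        if not splittable:
            break
        i = max(splittable, key=lambda j: boxes[j][1].sum())
        c, n = boxes.pop(i)
        axis = int(np.argmax(c.max(axis=0) - c.min(axis=0)))  # longest box edge
        order = np.argsort(c[:, axis])
        c, n = c[order], n[order]
        split = int(np.clip(np.searchsorted(np.cumsum(n), n.sum() // 2), 1, len(c) - 1))
        boxes += [(c[:split], n[:split]), (c[split:], n[split:])]
    # one palette entry per box: count-weighted mean, mapped back to 8-bit bin centers
    return np.array([np.average(c, axis=0, weights=n) * 8 + 4
                     for c, n in boxes]).astype(np.uint8)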
Preprint
Color quantization is an important operation with numerous applications in graphics and image processing. Most quantization methods are essentially based on data clustering algorithms. However, despite its popularity as a general purpose clustering algorithm, k-means has not received much respect in the color quantization literature because of its high computational requirements and sensitivity to initialization. In this paper, a fast color quantization method based on k-means is presented. The method involves several modifications to the conventional (batch) k-means algorithm including data reduction, sample weighting, and the use of triangle inequality to speed up the nearest neighbor search. Experiments on a diverse set of images demonstrate that, with the proposed modifications, k-means becomes very competitive with state-of-the-art color quantization methods in terms of both effectiveness and efficiency.
... Color quantization (CQ) is an image processing technique designed to reduce the number of distinct colors in a color image while preserving its visual quality. 2 By reducing the number of colors in an image, CQ can generate replicas that are difficult to distinguish from the original. ...
... CQ originally aimed to overcome the color depth limitations of early color display devices. 2 Although 24-bit color displays have become prevalent, CQ is still used in many visual computing applications, including 6 non-photorealistic rendering, image matting, image dehazing, image compression, color-to-grayscale conversion, image watermarking/steganography, image segmentation, content-based image retrieval, color analysis, color-texture analysis, saliency detection, and skin detection. ...
... The median cut algorithm, originally developed in the context of computer graphics [30], is an iterative algorithm used to quantize the colors of an image to a fixed set of Q unique colors. The first step consists of splitting the list into two segmented lists, containing the values that are either ≤ or > its median value. ...
... The median cut algorithm (Sec.3.1) was originally developed to quantize colors on red/green/blue axes [30]. As such, using the same principle, we may use data with multiple frequency-axes. ...
Conference Paper
Full-text available
The auralization of acoustics aims to reproduce the most salient attributes perceived during sound propagation. While different approaches produce various levels of detail, efficient methods such as low-order geometrical acoustics and artificial reverberation are often favored to minimize the computational cost of real-time immersive applications. Unlike most acoustics modeling approaches, artificial reverberators are perceptually motivated synthesis methods aiming to reproduce statistical properties occurring in a late reverberant sound field. A special class of reverberators, called directional feedback delay networks (DFDNs), produce direction-dependent reverberation perceived in anisotropic and inhomogeneous sound fields. However, due to a large parameter space, these reverberators can be complex to define, and reproducing very precise time-frequency-directional reverberation may become resource-intensive unless special care is taken in their design. This article introduces several design strategies for DFDNs used for reverberant sound field reproduction. This includes using a geometrical acoustics formulation of late reverberation to parameterize DFDNs, a perceptually-motivated reduction algorithm to limit the complexity of reproduction, a generic output grid design agnostic of the loudspeaker layouts or reproduction format, and special considerations for six-degree-of-freedom sound reproduction.
... On the other hand, our method takes a few seconds to generate the output, giving the user the possibility of varying parameters or testing a set of caps with multiple input images to decide which one is the best approximated by the caps. The best-known problems involving color reduction are quantization [7,8] and dithering [5,9,14]. Quantization was motivated by computer display limitations and consisted of mapping colors in a full-color image to a reduced palette. ...
... The search for the optimal final mapping (Sect. 3.3) is implemented by exploring color difference pre-computations in equations (7) and (8) and vectorization to achieve better performance in MATLAB. The minimization of (9) is performed by looping over all possible injective color mappings q and selecting the one with minimum energy. ...
Article
Full-text available
We present a new methodology for the problem of approximating an input image with a given set of plastic bottle caps. The first step of our method adaptively discretizes the image into a grid where the caps are going to be placed. The next step reduces the number of colors in this discrete image using K-means color clustering. The last step calculates color assignments performing a minimization that aims to approximate colors and color differences between clustered regions while respecting the given cap colors and quantities. A collection of results showcases the potential of our method to tackle this very constrained problem at a much lower cost than the current state of the art.
... VQ has been used in graphics for compression of image data [Hec82] and textures [BAC96], which, in turn, inspired the 2bpp and 1bpp texture compression in the Dreamcast console [Seg98]. It fell out of favour mainly because of the expense of overcoming the memory latency of the dependent codebook access. ...
... [Wu92], use principal component analysis to identify spatial splitting planes, but given that each of the 16 dimensions only has 3 possible 'integer' values, attempting to partition sets of vectors with arbitrary 16-dimensional planes did not seem suitable. Instead, we opted to use an algorithm inspired by [Hec82], where partitioning simply uses the major axes. ...
Conference Paper
Full-text available
Recently, schemes have been proposed for accelerating ‘alpha-tested’ triangles in ray-tracing through the use of precomputed, three-level Opacity Masks/Maps that can significantly reduce the need for expensive ‘Any-Hit shader’ invocations. We propose and compare two related schemes, VQ2 and VQ4, for compressing such maps that provide both random access and low-cost decompression. Each compressed opacity map, however, relates to a pair of adjacent triangles, taking advantage of correlation across the shared edge and matching likely underlying hardware primitive models.
... Applying the four phases of color quantization provides a way to simplify the color palette of a digital image without dramatically reducing image quality. These phases, as described by Heckbert [38], are (1) sampling the original image(s), (2) creating a color map based on the sample, (3) assigning the original colors to color map values using nearest neighbor clusters, and (4) redrawing the image based on these clusters. Vector quantization methods, such as K-means clustering, are useful for this because they create a specified number of non-overlapping data clusters, assign each point to a single group, and minimize the mean distance to cluster centers for each group while maximizing cluster distance [39]. ...
... While Python was chosen for its accessibility, researchers may utilize other image analysis software or programming languages, such as MATLAB or ImageJ, and successfully achieve similar results (see [43,45]). The first script follows the four phases of color quantization [38]. First, images were converted to the CIELAB color space using OpenCV [46] and the L*a*b* values of every pixel were sampled. ...
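For reference, the four phases can be reproduced with a few lines of OpenCV and scikit-learn. The sketch below is a generic illustration of that recipe (convert to CIELAB, build the color map with K-means, assign by nearest cluster, redraw), not the authors' published script; the 8-color default is arbitrary.

import cv2
import numpy as np
from sklearn.cluster import KMeans

def quantize_cielab(bgr: np.ndarray, n_colors: int = 8, seed: int = 0) -> np.ndarray:
    # bgr: H x W x 3 uint8 image as returned by cv2.imread.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)                 # phase 1: sample every pixel in CIELAB
    h, w, _ = lab.shape
    samples = lab.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=seed).fit(samples)  # phase 2: color map
    labels = km.predict(samples)                               # phase 3: nearest-cluster assignment
    quant_lab = km.cluster_centers_[labels].reshape(h, w, 3).astype(np.uint8)    # phase 4: redraw
    return cv2.cvtColor(quant_lab, cv2.COLOR_LAB2BGR)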
Article
Ultraviolet (UV) induced biofluorescence is being discovered with increasing frequency across the tree of life. However, there is not yet a standardized, low-cost, photographic methodology used to document, quantify, and minimize sources of bias that often accompany reports made by researchers new to fluorescence imaging. Here, a technique is described to create accurate photographs of biofluorescent specimens as well as how to use these images to quantify fluorescence via color quantization, using open-source code that utilizes K-means clusters within the International Commission on Illumination L*a*b* (CIELAB) color space. The complexity of photographing different excitation and emission wavelengths and methods to reduce bias from illumination source and/or camera color sensitivity (e.g., white balance) without the use of modified equipment is also addressed. This technique was applied to preserved southern flying squirrel (Glaucomys volans) specimens to quantify the color shift between illumination under visible and UV light and analyze variation among specimens. This relatively simple methodology can be adapted to future studies across a range of fluorescent wavelengths and project goals, and assist future research in which unbiased photographs and analyses are key to understanding the physiological and ecological role of biofluorescence.
... Splitting at the object median point has been a widely used approach, as can be seen in Heckbert's work on color image quantization, which proposes a median cut algorithm that creates a KDT as an alternative to the well-known former algorithm [4]. Alternatively, the spatial median can be used instead, leading to a split position as shown in Figure 2. The latter would be the case of Glassner's work [5], where the number of ray intersection checks is reduced using an octree that divides the objects into compartments. ...
... Despite this, a priori finding a sub-optimal but good enough warehouse factor is not too complicated for the general case. Empirical evidence from Section V-A and Section V-C shows that f ∈ [4,8] and m ∈ [32, 64] are good choices for different scenes. ...
Article
Full-text available
The software HELIOS++ simulates the laser scanning of a given virtual scene that can be composed of different spatial primitives and 3D meshes with distinct granularity. The high computational cost of this type of simulation software demands efficient computational solutions. Classical solutions based on GPU are not well suited when irregular geometries compose the scene combining different primitives and physics models because they lead to different computation branches. In this paper, we explore the usage of parallelization strategies based on static and dynamic workload balancing and heuristic optimization strategies to speed up the ray tracing process based on a k-dimensional tree (KDT). Using HELIOS++ as our case study, we analyze the performance of our algorithms on different parallel computers, including the CESGA FinisTerrae-II supercomputer. There is a significant performance boost in all cases, with the decrease in computation time ranging from 89.5% to 99.4%. Our results show that the proposed algorithms can boost the performance of any software that relies heavily on a KDT or a similar data structure, as well as those that spend most of the time computing with only a few synchronization barriers. Hence, the algorithms presented in this paper improve performance, whether computed on personal computers or supercomputers.
... The process of creating a color scheme from an image has been utilized for a long time (Heckbert, 1982; Wu, 1992; Braquelaire & Brun, 1997). Different stages and methods of color image quantization have been developed in these studies. ...
Article
Full-text available
Buildings and environmental colors create differences in perception, both psychologically and physically. Colors interact with and have an impact on the form, space components, surroundings, material textures, space lighting, the size of the surfaces, and the function and aesthetics of the space. Color preference varies according to personality traits, age and gender, habits and experiences, fashion, and style. The research aims to determine the color preferences and trends in hotel interior designs. Many hotel projects are designed and completed throughout the world every year. In this context, project designs awarded in international competitions were selected. To collect the data for the study, details of award-winning hotel interior designs available through competition websites are reviewed and the Adobe Color analytical tool is employed to characterize the color schemes reflected in the designs. Adobe Color is more successful than other tools in creating color schemes, such as the color wheel, gradient extraction, and theme extraction, suitable for all color selection methods, including colorful, bright, muted, deep, and dark options. As a result of this study, based on the designs awarded in 2022, it was determined that blue, red, and green colors were most frequently used, respectively. Furthermore, the data suggest that all colors were used in pastel tones, with light shades of blue mixed with different grays being used more frequently, while red and green generally were used in darker tones. The results were significantly correlated with the theoretical research and studies on user preferences.
... Over the past four decades, substantial research efforts have been dedicated to its exploration and refinement [1]. In earlier times, due to constraints imposed by display hardware capabilities and the restricted bandwidth of computer networks, CQ primarily served applications in image display [8] and compression for storage and transmission purposes [9]. Although these limitations have progressively diminished, CQ still holds practical significance within these domains. ...
Article
Full-text available
Spatial color algorithms (SCAs) are algorithms grounded in the retinex theory of color sensation that, mimicking the human visual system, perform image enhancement based on the spatial arrangement of the scene. Despite their established role in image enhancement, their potential as dequantizers has never been investigated. Here, we aim to assess the effectiveness of SCAs in addressing the dual objectives of color dequantization and image enhancement at the same time. To this end, we propose the term dequantenhancement. In this paper, through two experiments on a dataset of images, SCAs are evaluated through two distinct pathways: first, quantization followed by filtering to assess both dequantization and enhancement; and second, filtering applied to original images before quantization as further investigation of mainly the dequantization effect. The results are presented both qualitatively, with visual examples, and quantitatively, through metrics including the number of colors, retinal-like subsampling contrast (RSC), and structural similarity index (SSIM).
... GTs for vessel segmentation were generated using a novel, multiscale quantization and clustering-based approach, called multiscale median cut quantization (MMCQ), which we found to produce superior results to standard application of Niblack in preliminary analysis on the training set. MMCQ segments the choroidal vasculature by performing patchwise local contrast enhancement at several scales using median cut clustering (quantization) 41 and histogram equalization. The pixels of the subsequently enhanced choroidal space are then clustered globally using median cut clustering once more, classifying the pixels belonging to the clusters with the darkest pixel intensities as vasculature. ...
Article
Full-text available
Purpose: To develop Choroidalyzer, an open-source, end-to-end pipeline for segmenting the choroid region, vessels, and fovea, and deriving choroidal thickness, area, and vascular index. Methods: We used 5600 OCT B-scans (233 subjects, six systemic disease cohorts, three device types, two manufacturers). To generate region and vessel ground-truths, we used state-of-the-art automatic methods following manual correction of inaccurate segmentations, with foveal positions manually annotated. We trained a U-Net deep learning model to detect the region, vessels, and fovea to calculate choroid thickness, area, and vascular index in a fovea-centered region of interest. We analyzed segmentation agreement (AUC, Dice) and choroid metrics agreement (Pearson, Spearman, mean absolute error [MAE]) in internal and external test sets. We compared Choroidalyzer to two manual graders on a small subset of external test images and examined cases of high error. Results: Choroidalyzer took 0.299 seconds per image on a standard laptop and achieved excellent region (Dice: internal 0.9789, external 0.9749), very good vessel segmentation performance (Dice: internal 0.8817, external 0.8703), and excellent fovea location prediction (MAE: internal 3.9 pixels, external 3.4 pixels). For thickness, area, and vascular index, Pearson correlations were 0.9754, 0.9815, and 0.8285 (internal)/0.9831, 0.9779, 0.7948 (external), respectively (all P < 0.0001). Choroidalyzer's agreement with graders was comparable to the intergrader agreement across all metrics. Conclusions: Choroidalyzer is an open-source, end-to-end pipeline that accurately segments the choroid and reliably extracts thickness, area, and vascular index. Especially choroidal vessel segmentation is a difficult and subjective task, and fully automatic methods like Choroidalyzer could provide objectivity and standardization.
... Palette design requires a selection of a small set of colors that represents the original image colors (Celebi 2011). There are various algorithms for palette selection, such as k-means (Celebi 2011), minmax (Xiang 1997), median-cut (Heckbert 1982), octree (Gervautz and Purgathofer 1988), and fuzzy c-means (Bezdek 2013). We chose k-means, since its performance as a color quantizer has been proven effective. ...
Article
This work describes a system to recolor a user’s painting based on the perceived emotional state of the viewer. An automatic palette selection algorithm is used to generate color palettes for a set of emotions. A user can create a painting using one of the generated palettes. To notify the end of the painting, the user clicks on the DONE button. Once the button is pressed, the colors of the user's painting change as the facial expression of the user changes. Facial emotion recognition is used in this process to classify the emotional status of the user’s face.
... Compression: To compare the effectiveness of compressed octrees, we use the same median-cut algorithm [14] to quantize the SH coefficients. Our compressed DOT models also support in-browser rendering using WebGL. ...
Preprint
Full-text available
The explicit neural radiance field (NeRF) has gained considerable interest for its efficient training and fast inference capabilities, making it a promising direction for applications such as virtual reality and gaming. In particular, PlenOctree (POT) [1], an explicit hierarchical multi-scale octree representation, has emerged as a structural and influential framework. However, POT's fixed structure for direct optimization is sub-optimal, as the scene complexity evolves continuously with updates to cached color and density, necessitating refining the sampling distribution to capture signal complexity accordingly. To address this issue, we propose the dynamic PlenOctree DOT, which adaptively refines the sample distribution to adjust to changing scene complexity. Specifically, DOT proposes a concise yet novel hierarchical feature fusion strategy during the iterative rendering process. Firstly, it identifies the regions of interest through training signals to ensure adaptive and efficient refinement. Next, rather than directly filtering out valueless nodes, DOT introduces the sampling and pruning operations for octrees to aggregate features, enabling rapid parameter learning. Compared with POT, our DOT outperforms it by enhancing visual quality, reducing parameters by over 55.15%/68.84%, and providing 1.7/1.9 times the FPS for NeRF-synthetic and Tanks & Temples, respectively. Project homepage: https://vlislab22.github.io/DOT. [1] Yu, Alex, et al. "Plenoctrees for real-time rendering of neural radiance fields." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
... In a natural image there is a wide range of intensities, but only a small portion of the colors is required to represent most of the information. With this in mind, the authors propose a quantized color histogram based on Color Image Quantization (CIQ) (Heckbert, 1982), since CIQ represents the image by its salient color components. ...
Article
Full-text available
In the traditional content-based image retrieval (CBIR) framework, images are retrieved based on combined primitive features. However, such a fusion is not always effective, as one feature may overshadow another image feature. To overcome this issue, in this particular paper, we have suggested a hierarchical framework where features like color, texture, and shape are considered in a single hierarchy only. The main concern of the hierarchical system is the proper selection of the order of retrieving visual features. So, semantic-based image retrieval has been carried out. Here, at the first level, the salience map-based region of interest has been identified, and then edge histogram descriptor-based shape features are incorporated. In the second hierarchy, we have proposed a novel directional texture feature extraction based on the Tamura features' directionality. Further, color is considered another primitive feature of an image, but human visual perception is not sensitive to every color. The image can be visualized by its salient colors, and in this work, we have developed a color image quantization-based approach. Now, to validate the system, extensive experimental results and comparisons with contemporary methods on the Corel-1K, GHIM-10K, Olivia-2688, and Produce-1400 databases have been carried out.
... (CC0 license, 768 × 512 pixels) and its quantized versions with 4, 16, 64, and 256 colors obtained using the median-cut algorithm. It can be seen that the reproduction is reasonably accurate with only 64 colors and is nearly indistinguishable from its original with 256 colors. CQ is composed of two phases: color palette design and pixel mapping. ...
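The second phase, pixel mapping, is conceptually a nearest-neighbor search against the palette. A minimal brute-force NumPy sketch (our own illustration, suitable for small palettes; chunk the pixels for very large images) is:

import numpy as np

def map_pixels(rgb: np.ndarray, palette: np.ndarray) -> np.ndarray:
    # rgb: H x W x 3 uint8; palette: K x 3 uint8.
    # Replace every pixel with its nearest palette color (Euclidean distance in RGB).
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 1, 3).astype(np.int32)
    pal = palette.reshape(1, -1, 3).astype(np.int32)
    dist = ((pixels - pal) ** 2).sum(axis=2)       # (H*W, K) squared distances
    nearest = dist.argmin(axis=1)
    return palette[nearest].reshape(h, w, 3)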
... Work in image quantization algorithm development has been gaining momentum since the 1980s. In 1982, a landmark work was published in the field of image quantization [81]. The idea behind this study was to procure high-fidelity color images with small frame buffers. ...
Article
Full-text available
Huge advancements have been made over the years in terms of modern image-sensing hardware and visual computing algorithms (e.g., computer vision, image processing, and computational photography). However, to this day, there still exists a current gap between the hardware and software design in an imaging system, which silos one research domain from another. Bridging this gap is the key to unlocking new visual computing capabilities for end applications in commercial photography, industrial inspection, and robotics. In this survey, we explore existing works in the literature that can be leveraged to replace conventional hardware components in an imaging system with software for enhanced reconfigurability. As a result, the user can program the image sensor in a way best suited to the end application. We refer to this as software-defined imaging (SDI), where image sensor behavior can be altered by the system software depending on the user’s needs. The scope of our survey covers imaging systems for single-image capture, multi-image, and burst photography, as well as video. We review works related to the sensor primitives, image signal processor (ISP) pipeline, computer architecture, and operating system elements of the SDI stack. Finally, we outline the infrastructure and resources for SDI systems, and we also discuss possible future research directions for the field.
... To acquire this data, satellite RGB images (Fig. 1a) for each lava flow were downloaded from the Bing Maps satellite imagery repository with a resolution of 0.26 to 0.28 m/pixel. Satellite images were converted to 8-bit indexed color (Fig. 1b) through the median-cut color quantization algorithm (Heckbert, 1982) using Fiji, an open-source image processing package based on ImageJ (Schneider et al.). [Figure: grayscale-derived data measured along the surface profile; d) S-Transform spectral analysis.] ...
... Long MTP latency can cause various side-effects for users, such as nausea or motion sickness. Inertial measurement unit (IMU) sensors play an active role in sampling the motion data consisting of the user's pose information, which will be later transmitted to the rendering engine to generate a framebuffer [64]. A time lag is observed when the framebuffer is sent to the headset to display the rendered imagery. ...
Article
Full-text available
Augmented reality and virtual reality technologies are witnessing an evolutionary change in the 5G and Beyond (5GB) network due to their promising ability to enable an immersive and interactive environment by coupling the virtual world with the real one. However, the requirement of low-latency connectivity, which is defined as the end-to-end delay between the action and the reaction, is very crucial to leverage these technologies for a high-quality immersive experience. This paper provides a comprehensive survey and detailed insight into various advantageous approaches from the hardware and software perspectives, as well as the integration of 5G technology, towards 5GB, in enabling a low-latency environment for AR and VR applications. The contribution of 5GB systems as an outcome of several cutting-edge technologies, such as massive multiple-input, multiple-output (mMIMO) and millimeter wave (mmWave), along with the utilization of artificial intelligence (AI) and machine learning (ML) techniques towards an ultra-low-latency communication system, is also discussed in this paper. The potential of using a visible-light communications (VLC)-guided beam through a learning algorithm for a futuristic, evolved immersive experience of augmented and virtual reality with the ultra-low-latency transmission of multi-sensory tracking information with an optimal scheduling policy is discussed in this paper.
... Therefore, we do not equip overhangs with textures from the model, but with the dominant color of the corresponding roof facet. The dominant color is determined using the median cut algorithm (Heckbert, 1982). This algorithm clusters color values iteratively by comparing values of the channel with the largest range to their median. ...
... They split the color solid into smaller and smaller boxes, whose representatives (e.g., mean colors) are included in the final color palette. A good example of these is the classic method of median cut [4]. Splitting methods are fast, but quantized images often differ significantly from the original images. ...
Article
Full-text available
Nature-inspired artificial intelligence algorithms have been applied to color image quantization (CIQ) for some time. Among these algorithms, the particle swarm optimization algorithm (PSO-CIQ) and its numerous modifications are important in CIQ. In this article, the usefulness of such a modification, labeled IDE-PSO-CIQ and additionally using the idea of individual difference evolution based on the emotional states of particles, is tested. The superiority of this algorithm over the PSO-CIQ algorithm was demonstrated using a set of quality indices based on pixels, patches, and superpixels. Furthermore, both algorithms studied were applied to superpixel versions of quantized images, creating color palettes in much less time. A heuristic method was proposed to select the number of superpixels, depending on the size of the palette. The effectiveness of the proposed algorithms was experimentally verified on a set of benchmark color images. The results obtained from the computational experiments indicate a multiple reduction in computation time for the superpixel methods while maintaining the high quality of the output quantized images, slightly inferior to that obtained with the pixel methods.
... CVTs, and as such the Quantizer problem, find applications in various fields. In data compression, especially image processing, for example, CVTs are used to reduce the number of colors in an image with the least amount of loss of quality [Heckbert, 1982]. Further examples include the meshing of surfaces [Du et al., 2010], grid optimisation [Du and Gunzburger, 2002], description of cell tissue [Honda, 1978, 1983], or approximate models of animal territorial behavior [Barlow, 1974; Tanemura and Hasegawa, 1980; Suzuki and Iri, 1986]. ...
Thesis
Subdividing space through interfaces leads to many space partitions that are relevant to soft matter self-assembly. Prominent examples include cellular media, e.g. soap froths, which are bubbles of air separated by interfaces of soap and water, but also more complex partitions such as bicontinuous minimal surfaces. Using computer simulations, this thesis analyses soft matter systems in terms of the relationship between the physical forces between the system's constituents and the structure of the resulting interfaces or partitions. The focus is on two systems, copolymeric self-assembly and the so-called Quantizer problem, where the driving force of structure formation, the minimisation of the free-energy, is an interplay of surface area minimisation and stretching contributions, favouring cells of uniform thickness. In the first part of the thesis we address copolymeric phase formation with sharp interfaces. We analyse a columnar copolymer system "forced" to assemble on a spherical surface, where the perfect solution, the hexagonal tiling, is topologically prohibited. For a system of three-armed copolymers, the resulting structure is described by solutions of the so-called Thomson problem, the search for minimal-energy configurations of repelling charges on a sphere. We find three intertwined Thomson problem solutions on a single sphere, occurring at a probability depending on the radius of the substrate. We then investigate the formation of amorphous and crystalline structures in the Quantizer system, a particulate model with an energy functional without surface tension that favours spherical cells of equal size. We find that quasi-static equilibrium cooling allows the Quantizer system to crystallise into a BCC ground state, whereas quenching and non-equilibrium cooling, i.e. cooling at slower rates than quenching, leads to an approximately hyperuniform, amorphous state. The assumed universality of the latter, i.e. independence of energy minimisation method or initial configuration, is strengthened by our results. We expand the Quantizer system by introducing interface tension, creating a model that we find to mimic polymeric micelle systems: An order-disorder phase transition is observed with a stable Frank-Kasper phase. The second part considers bicontinuous partitions of space into two network-like domains, and introduces an open-source tool for the identification of structures in electron microscopy images. We expand a method of matching experimentally accessible projections with computed projections of potential structures, introduced by Deng and Mieczkowski (1998). The computed structures are modelled using nodal representations of constant-mean-curvature surfaces. A case study conducted on etioplast cell membranes in chloroplast precursors establishes the double Diamond surface structure to be dominant in these plant cells. We automate the matching process employing deep-learning methods, which manage to identify structures with excellent accuracy.
... Broadly, three groups of color quantization methods can be distinguished [1,2]. The first group includes splitting techniques, such as median cut (MC) [3], octree [4], and Wu's algorithm [5]. These techniques iteratively split the RGB color solid into smaller boxes, the mean values of which form the colors of the resulting palette. ...
Article
Full-text available
We propose three methods for the color quantization of superpixel images. Prior to the application of each method, the target image is first segmented into a finite number of superpixels by grouping the pixels that are similar in color. The color of a superpixel is given by the arithmetic mean of the colors of all constituent pixels. Following this, the superpixels are quantized using common splitting or clustering methods, such as median cut, k-means, and fuzzy c-means. In this manner, a color palette is generated while the original pixel image undergoes color mapping. The effectiveness of each proposed superpixel method is validated via experimentation using different color images. We compare the proposed methods with state-of-the-art color quantization methods. The results show significantly decreased computation time along with high quality of the quantized images. However, a multi-index evaluation process shows that the image quality is slightly worse than that obtained via pixel methods.
Chapter
This chapter describes the methods used to prepare images for further analysis. A simple taxonomy of general image processing operations for points, lines, and regions is developed to guide the discussion, with background for filter design using spatial kernels. We discuss color management systems and the methods needed to process color as part of preparing images for analysis, including colorimetric spaces, as well as color-based segmentation and color space optimizations, approximations, and color reductions. Fourier processing for image analysis and enhancement is covered for the 1D, 2D, and 3D cases, together with a general discussion of related transforms such as the Slant, Hadamard, and Walsh transforms. Morphology methods are presented, along with an introduction to superpixels. We survey segmentation methods using neural networks, such as the U-Net architecture, and the history of segmentation with neural networks, beginning with region proposals and Mask R-CNN based on CNNs. In addition, we examine a wider range of texture metrics useful for color enhancements. Various methods for object detection are introduced, such as YOLO.
Preprint
Full-text available
We present a new methodology for the problem of approximating an input image with a given set of plastic bottle caps. The first step of our method adaptively discretizes the image into a grid where the caps are going to be placed. The next step reduces the number of colors in this discrete image using K-means color clustering. The last step calculates color assignments performing a minimization that aims to approximate colors and color differences between clustered regions while respecting the given cap colors and quantities. A collection of results showcases the potential of our method to tackle this very constrained problem at a much lower cost than the current state of the art.
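A loose sketch of the last step only, replacing the paper's minimization with a greedy rule: each grid cell takes the nearest cap color that still has stock remaining. The cap colors, quantities, and grid size below are invented example values.

```python
import numpy as np

cap_colors = np.array([[0.90, 0.10, 0.10],   # red
                       [0.10, 0.50, 0.90],   # blue
                       [0.95, 0.95, 0.95],   # white
                       [0.10, 0.10, 0.10]])  # black
cap_stock = np.array([200, 200, 150, 150])   # available caps per color

grid = np.random.rand(20, 25, 3)             # cell-averaged target colors
assignment = np.full(grid.shape[:2], -1, dtype=int)

for i in range(grid.shape[0]):
    for j in range(grid.shape[1]):
        d = ((grid[i, j] - cap_colors) ** 2).sum(axis=1)  # distance to each cap color
        for idx in np.argsort(d):                         # nearest color with stock left
            if cap_stock[idx] > 0:
                assignment[i, j] = idx
                cap_stock[idx] -= 1
                break
```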
Article
Color quantization reduces the number of colors used in an image while preserving its content, which is essential in pixel art and knitting art creation. Traditional methods primarily focus on visual fidelity and treat it as a clustering problem in the RGB space. While effective in large (5-6 bits) color spaces, these approaches cannot guarantee semantics in small (1-2 bits) color spaces. On the other hand, deep color quantization methods use network viewers such as AlexNet and ResNet for supervision, effectively preserving semantics in small color spaces. However, in large color spaces, they lag behind traditional methods in terms of visual fidelity. In this work, we propose ColorCNN+, a novel approach that combines the strengths of both. It uses network viewer signals for supervision in small color spaces and learns to cluster the colors in large color spaces. Notably, it is non-trivial for neural networks to perform clustering: existing deep clustering methods often still need K-means to cluster the features. In this work, through a newly introduced cluster imitation loss, ColorCNN+ learns to directly output the cluster assignment without any additional steps. Furthermore, ColorCNN+ supports multiple color space sizes and network viewers, offering scalability and easy deployment. Experimental results demonstrate competitive performance of ColorCNN+ across various settings. Code is available at link.
Article
Full-text available
This article analyzes various color quantization methods using multiple image quality assessment indices. Experiments were conducted with ten color quantization methods and eight image quality indices on a dataset containing 100 RGB color images. The set of color quantization methods selected for this study includes well-known methods used by many researchers as a baseline against which to compare new methods. On the other hand, the image quality assessment indices selected are the following: mean squared error, mean absolute error, peak signal-to-noise ratio, structural similarity index, multi-scale structural similarity index, visual information fidelity index, universal image quality index, and spectral angle mapper index. The selected indices not only include the most popular indices in the color quantization literature but also more recent ones that have not yet been adopted in the aforementioned literature. The analysis of the results indicates that the conventional assessment indices used in the color quantization literature generate different results from those obtained by newer indices that take into account the visual characteristics of the images. Therefore, when comparing color quantization methods, it is recommended not to use a single index based solely on pixelwise comparisons, as is the case with most studies to date, but rather to use several indices that consider the various characteristics of the human visual system.
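For reference, the pixelwise indices named above are straightforward to compute; the sketch below implements MSE, MAE, and PSNR in NumPy for 8-bit images, while the structural and perceptual indices (SSIM, MS-SSIM, VIF, UIQI, SAM) require dedicated implementations and are omitted here.

```python
import numpy as np

def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def mae(a, b):
    return np.mean(np.abs(a.astype(float) - b.astype(float)))

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return np.inf if m == 0 else 10.0 * np.log10(peak ** 2 / m)

original  = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
quantized = (original // 32) * 32 + 16       # crude 3-bit-per-channel quantization
print(mse(original, quantized), mae(original, quantized), psnr(original, quantized))
```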
Article
Full-text available
Surgeons determine the treatment method for patients with epiglottis obstruction based on its severity, often by estimating the obstruction severity (using three obstruction degrees) from the examination of drug-induced sleep endoscopy images. However, the use of obstruction degrees is inadequate and fails to correspond to changes in respiratory airflow. Current artificial intelligence image technologies can effectively address this issue. To enhance the accuracy of epiglottis obstruction assessment and replace obstruction degrees with obstruction ratios, this study developed a computer vision system with a deep learning-based method for calculating epiglottis obstruction ratios. The system employs a convolutional neural network, the YOLOv4 model, for epiglottis cartilage localization, a color quantization method to transform pixels into regions, and a region puzzle algorithm to calculate the range of a patient’s epiglottis airway. This information is then utilized to compute the obstruction ratio of the patient’s epiglottis site. Additionally, this system integrates web-based and PC-based programming technologies to realize its functionalities. Through experimental validation, this system was found to autonomously calculate obstruction ratios with a precision of 0.1% (ranging from 0% to 100%). It presents epiglottis obstruction levels as continuous data, providing crucial diagnostic insight for surgeons to assess the severity of epiglottis obstruction in patients.
Preprint
Full-text available
Optical information processing and computing can potentially offer enhanced performance, scalability and energy efficiency. However, achieving nonlinearity, a critical component of computation, remains challenging in the optical domain. Here we introduce a design that leverages a multiple-scattering cavity to passively induce optical nonlinear random mapping with a continuous-wave laser at a low power. Each scattering event effectively mixes information from different areas of a spatial light modulator, resulting in a highly nonlinear mapping between the input data and output pattern. We demonstrate that our design retains vital information even when the readout dimensionality is reduced, thereby enabling optical data compression. This capability allows our optical platforms to offer efficient optical information processing solutions across applications. We demonstrate our design's efficacy across tasks, including classification, image reconstruction, keypoint detection and object detection, all of which are achieved through optical data compression combined with a digital decoder. In particular, high performance at extreme compression ratios is observed in real-time pedestrian detection. Our findings open pathways for novel algorithms and unconventional architectural designs for optical computing.
Article
Full-text available
Logos are one of the most important graphic design forms that use an abstracted shape to clearly represent the spirit of a community. Among various styles of abstraction, a particular golden‐ratio design is frequently employed by designers to create a concise and regular logo. In this context, designers utilize a set of circular arcs with golden ratios (i.e., all arcs are taken from circles whose radii form a geometric series based on the golden ratio) as the design elements to manually approximate a target shape. This error‐prone process requires a large amount of time and effort, posing a significant challenge for design space exploration. In this work, we present a novel computational framework that can automatically generate golden ratio logo abstractions from an input image. Our framework is based on a set of carefully identified design principles and a constrained optimization formulation respecting these principles. We also propose a progressive approach that can efficiently solve the optimization problem, resulting in a sequence of abstractions that approximate the input at decreasing levels of detail. We evaluate our work by testing on images with different formats including real photos, clip arts, and line drawings. We also extensively validate the key components and compare our results with manual results by designers to demonstrate the effectiveness of our framework. Moreover, our framework can largely benefit design space exploration via easy specification of design parameters such as abstraction levels, golden circle sizes, etc.
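A tiny sketch of the golden-circle constraint described above: the admissible arc radii form a geometric series whose common ratio is the golden ratio; the base radius and the number of levels are arbitrary example values.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2                   # golden ratio, ~1.618
base_radius = 100.0
levels = 6
radii = base_radius / phi ** np.arange(levels)
# radii ~ [100.0, 61.8, 38.2, 23.6, 14.6, 9.0]; arcs used in the abstraction may
# only be taken from circles with one of these radii.
```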
Article
Full-text available
Image retrieval is defined as the indexing of similar or identical images from a digital image database. When searching for a similar digital image, various feature vectors extracted from the images are used, because operating directly on the pixels of the images requires costly algorithms. Moreover, the images used in retrieval approaches may come in different sizes, which is a further problem. For these reasons, pixel-level operations are insufficient when comparing images, and vector representations of the images become a necessity. The process of obtaining these vector representations is called feature extraction, and it is one of the most important stages of content-based image retrieval. The histogram is the most basic feature vector; it is independent of the image dimensions and can be computed easily. For grayscale images, the size of the histogram makes it suitable for use as a feature vector. However, the three separate channels of color images contain too much data to be used directly as feature vectors, so reducing the vector size is unavoidable. In this study, inspired by the human visual system, a new multilevel thresholding method based on a Spiking Neural Network model is proposed. With the proposed model, three separate threshold values are determined for each of the RGB color channels, and each color channel is divided into four parts. The resulting color palette thus reduces the color space to 64 distinct colors. The proposed method is compared with multilevel thresholding methods widely used for image retrieval, and the results obtained clearly demonstrate its success.
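A minimal sketch of the 64-color reduction described in this abstract, assuming fixed, uniformly spaced placeholder thresholds; in the paper the three thresholds per channel are produced by the spiking-neural-network model.

```python
import numpy as np

thresholds = [64, 128, 192]                  # placeholder thresholds, per channel

def quantize_to_64(img):
    """img: uint8 RGB image -> per-pixel palette index in [0, 63]."""
    bins = np.digitize(img, thresholds)      # each channel mapped to 0..3
    return bins[..., 0] * 16 + bins[..., 1] * 4 + bins[..., 2]

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
indices = quantize_to_64(img)
histogram = np.bincount(indices.ravel(), minlength=64)  # 64-bin feature vector
```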
Article
We propose a new image processing problem that consists in approximating an input image with a set of plastic bottle caps. This problem is motivated by the aesthetic appeal of low-resolution imaging, combined with the goal of finding a new use for plastic caps that would otherwise be discarded and possibly damage the environment. Our solution formulates an energy that measures how well a bottle cap art reproduces structures and features of the reference image and maximizes it with a simulated annealing strategy. Examples in the paper show that our method produces high-quality results in this very low spatial and color resolution setting.
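A bare-bones sketch of a simulated-annealing loop in this spirit, with a simplified energy (mean squared error between assigned cap colors and a cell-averaged target) standing in for the paper's structure-aware energy; the cap colors, grid size, and annealing schedule are example values.

```python
import numpy as np

rng = np.random.default_rng(0)
cap_colors = np.array([[0.90, 0.10, 0.10], [0.10, 0.50, 0.90],
                       [0.95, 0.95, 0.95], [0.10, 0.10, 0.10]])
target = np.random.rand(20, 25, 3)           # cell-averaged reference image
state = rng.integers(0, len(cap_colors), target.shape[:2])

def energy(state):
    return np.mean((cap_colors[state] - target) ** 2)

temperature, cooling = 0.05, 0.999
current = energy(state)
for _ in range(20000):
    i, j = rng.integers(target.shape[0]), rng.integers(target.shape[1])
    old = state[i, j]
    state[i, j] = rng.integers(len(cap_colors))       # propose a new cap color
    new = energy(state)
    if new > current and rng.random() > np.exp((current - new) / temperature):
        state[i, j] = old                             # reject the move (Metropolis rule)
    else:
        current = new
    temperature *= cooling
```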