IEEE SIGNAL PROCESSING LETTERS, VOL. 14, NO. 10, OCTOBER 2007
Information Preserving Color Transformation
for Protanopia and Deuteranopia
Jia-Bin Huang, Yu-Cheng Tseng, Se-In Wu, and Sheng-Jyh Wang, Member, IEEE
Abstract—In this letter, we propose a new recoloring method for people with protanopic and deuteranopic color deficiencies. We present a color transformation that aims to preserve the color information in the original images while keeping the recolored images as natural as possible. Two error functions are introduced and combined into an objective function using the Lagrange multiplier with a user-specified parameter λ. This objective function is then minimized to obtain the optimal settings. Experimental results show that the proposed method can yield more comprehensible images for color-deficient viewers while maintaining the naturalness of the recolored images for standard viewers.
Index Terms—Color deficiency, image processing, Lagrange multiplier, recoloring.
I. INTRODUCTION
Due to the increasing use of colors in multimedia contents to convey visual information, it has become increasingly important to perceive colors correctly for information interpretation. However, roughly 5%–8% of men and 0.8% of women have some kind of color deficiency. Unlike people with normal color vision, people with color deficiency have difficulty discriminating certain color combinations and color differences. Hence, multimedia contents with rich colors, which can be well discriminated by people with normal color vision, may sometimes cause misunderstanding for people with anomalous color vision.
Human color vision is based on the responses to photons in three different types of photoreceptors, called "cones," in the retina of the human eye [1]. The peak sensitivities of these three distinct cone types lie in the long-wavelength (L), middle-wavelength (M), and short-wavelength (S) regions of the spectrum. Anomalous trichromacy is frequently characterized by a shift of one or more cone types, so that the pigments in one type of cone are not sufficiently distinct from the pigments in the others. For example, L-cones are more like M-cones in protanomaly, and M-cones are more like L-cones in deuteranomaly. On the other hand, dichromats have only two distinct pigments in their cones and entirely lack one of the three cone types. Lack of L-cones is referred to as protanopia, lack of M-cones is referred to as deuteranopia,
and lack of S-cones is referred to as tritanopia. Among these three types of dichromacy, protanopes and deuteranopes have difficulty distinguishing red from green, while tritanopes have difficulty discriminating blue from yellow. So far, many research works have been conducted on simulating color-deficient vision [2]–[5]. These approaches represent color stimuli as vectors in the three-dimensional LMS space, whose three orthogonal axes L, M, and S represent the quantum catch of each of the three distinct cone types. Since dichromatic vision is a reduced form of trichromatic vision, the lack of one cone type can be simulated by collapsing one of the three dimensions onto a constant value.
To enhance the comprehensibility of images for color-deficient viewers, daltonization is proposed in [6] to recolor images for dichromats. In [6], the authors first increase the red/green contrast in the image and then use the red/green contrast information to adjust the brightness and the blue/yellow contrast. In [7], Ichikawa et al. described the manipulation of webpage colors for color-deficient viewers. They first decompose a webpage into a hierarchy of colored regions and determine "important" pairs of colors that are to be modified. An objective function is then defined to maintain the distances of these color pairs, as well as to minimize the extent of color remapping. This approach is further extended to deal with full-color images in [8]. On the other hand, Yang and Ro [9] proposed a method to modify colors for dichromats and anomalous trichromats. For dichromats, a monochromatic hue is changed into another hue with less saturation, while for anomalous trichromats, the proposed method tends to keep the original colors. In [10], Rasche et al. use a linear transform to convert colors in the CIELAB color space and enforce proportional color differences during the remapping. Based on the same constraint for color deficiency, the authors further improve the optimization process by using the majorization method [11].
Basically, all the aforementioned works may generate images that are more comprehensible to color-deficient viewers. However, the recolored images may look very unnatural to viewers with normal vision. From an application viewpoint, images in a public place may be observed simultaneously by people with normal vision and by color-deficient people. For example, in a public transportation system, many advertisements and traffic maps are delivered in color. If the needs of color-deficient observers are ignored, those viewers may have difficulty understanding the image contents. Conversely, if only the needs of color-deficient people are considered, the recolored images may look annoying to normal observers. Hence, in this letter, we aim to develop a recoloring algorithm that automatically constructs a transformation that maintains details for color-deficient viewers while preserving naturalness for standard viewers.
Fig. 1. Rotation operation in the a*-b* plane.
II. COLOR REPRODUCTION FOR PROTANOPIA AND DEUTERANOPIA
A. Color Reproduction Method
In this letter, we focus on protanopia and deuteranopia, which are the major types of color deficiency. In order to mimic the color perception of protanopia and deuteranopia, we adopt Brettel's algorithm [2] to simulate the perceived images. We adopt the CIELAB color space as the working domain. For both protanopia and deuteranopia, there is a strong correlation between the original colors and the simulated colors in the L* and b* values, while the correlation between the original a* and the perceived a* is weak. That is, the original color information carried by a* is largely lost. To retain the information in a*, a reasonable strategy is to warp the image colors so that the information of a* is mapped onto the b* axis of the CIELAB color space.
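As a concrete illustration of this working domain, the following sketch (not from the letter; it assumes scikit-image and NumPy are available) converts an sRGB image to CIELAB and computes, for every pixel, the hue angle in the a*-b* plane. Measuring this angle from the +a* axis is an assumption that the later sketches reuse.

```python
# Minimal sketch: CIELAB as the working domain and the per-pixel hue angle in the
# a*-b* plane. The angle convention (0 deg on the +a* axis) is an assumption.
import numpy as np
from skimage import color, data

rgb = data.astronaut() / 255.0                 # any sRGB image scaled to [0, 1]
lab = color.rgb2lab(rgb)                       # L* in [0, 100], a*/b* roughly in [-128, 127]
L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
theta = np.degrees(np.arctan2(b, a))           # included angle in the a*-b* plane
chroma = np.hypot(a, b)                        # radial (saturation-like) component
```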
In our approach, we aim to maintain the color differences of color pairs in the CIELAB color space while keeping the recolored image as natural as possible. To keep the recolored image natural, three premises are adopted. First, the recolored image has the same luminance as the original image. Second, colors with the same hue in the original image still have the same hue after recoloring. Third, the saturation of the original colors is not altered by recoloring. In our approach, a rotation operation is applied in the a*-b* plane to transform the information of a* onto the b* axis, as illustrated in Fig. 1. Consider a set of color stimuli that share the same included angle θ with respect to the a* axis. The rotation operation maps these colors to new colors that lie on another line whose included angle is θ + φ(θ). If we ignore the nonlinear property of the iso-hue curves in the CIELAB color space [13], this rotation changes the hues of these colors by the same amount. Hence, the transformed colors still share the same hue after the color transformation. Moreover, the saturation of the original colors is also preserved.
Mathematically, this rotation operation can be formulated as a matrix multiplication. That is, we have

$$
\begin{bmatrix} L^{*\prime} \\ a^{*\prime} \\ b^{*\prime} \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi(\theta) & -\sin\phi(\theta) \\ 0 & \sin\phi(\theta) & \cos\phi(\theta) \end{bmatrix}
\begin{bmatrix} L^{*} \\ a^{*} \\ b^{*} \end{bmatrix}
\qquad (1)
$$

where (L*', a*', b*') and (L*, a*, b*) are the CIELAB values of the recolored color and the original color, respectively, and φ(θ) is a monotonically decreasing function of |θ|.
Fig. 2. (a) The function φ(θ) for one half-plane and its three parameters. (b) The transformed hue versus the original hue for a half-plane.
Since color differences along the b* axis can be well discriminated by protanopic and deuteranopic viewers, φ(θ) decreases to zero as θ approaches ±90°, i.e., as colors approach the b* axis. In this letter, we define φ(θ) to be
(2)
for the right half-plane of the a*-b* plane, where θ ranges from −90° to 90°. One parameter of (2) represents the maximal change of the included angle, and the other represents the degree of the decreasing rate. These two parameters will be specified by optimizing an objective function based on the contents of the original color image. For the left half-plane, we define the function in a similar manner but with different parameter values. This is because, in practice, we may want the right half and the left half of the a*-b* plane to undergo different transformations, as shown in Fig. 2(a). Moreover, since φ(θ) approaches zero when colors are close to the b* axis, crossover of colors is avoided when crossing the b* axis.
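The following sketch illustrates one way the rotation-based recoloring described above could be implemented. It is not the authors' code: the exact form of φ(θ) in (2) is not reproduced here, so an assumed decay function is used, the angle θ is assumed to be measured from the ±a* axis within each half-plane, and the parameter values (phi_max, n_pos, n_neg) are illustrative placeholders rather than optimized settings.

```python
# Sketch of the rotation-based recoloring in the a*-b* plane (assumptions noted above).
import numpy as np
from skimage import color

def phi(theta_deg, phi_max, n_pos, n_neg):
    """Assumed angle-change function: maximal on the a* axis (theta = 0) and zero on
    the b* axis (theta = +/-90 deg); n_pos/n_neg set the decay rate in each quadrant."""
    n = np.where(theta_deg >= 0.0, n_pos, n_neg)
    return phi_max * (1.0 - np.abs(theta_deg) / 90.0) ** n

def recolor(rgb, right=(35.0, 2.0, 2.0), left=(-35.0, 2.0, 2.0)):
    """Rotate each pixel in the a*-b* plane by phi(theta); L* and chroma are untouched.
    `rgb` is a float image in [0, 1]."""
    lab = color.rgb2lab(np.asarray(rgb, dtype=float))
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    chroma = np.hypot(a, b)
    theta = np.degrees(np.arctan2(b, a))              # (-180, 180], 0 on the +a* axis
    new_theta = np.empty_like(theta)

    is_right = np.abs(theta) <= 90.0                  # right half-plane (a* >= 0)
    new_theta[is_right] = theta[is_right] + phi(theta[is_right], *right)

    # Left half-plane: measure the angle u from the -a* axis, rotate, and map back.
    t = theta[~is_right]
    u = np.where(t >= 0.0, 180.0 - t, -180.0 - t)     # u in (-90, 90)
    u_new = u + phi(u, *left)
    new_theta[~is_right] = np.where(u_new >= 0.0, 180.0 - u_new, -180.0 - u_new)

    rad = np.radians(new_theta)
    lab_out = np.stack([L, chroma * np.cos(rad), chroma * np.sin(rad)], axis=-1)
    return color.lab2rgb(lab_out)
```

With these placeholder signs, reddish hues in the right half-plane rotate toward +b* (yellow) and greenish hues in the left half-plane rotate toward −b* (blue); in the letter, the corresponding parameters are instead obtained by the optimization described in the next subsection.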
In Fig. 2(b), we show the plot of the transformed hue versus the original hue for the right half-plane. If φ(θ) is positive, the quadrant with positive θ is compressed while the quadrant with negative θ is expanded, and vice versa. To avoid color crossover in the compressed quadrant, we require
(3)
By combining (2) and (3), we have
(4)
Since θ is bounded within the compressed quadrant, the left-hand side of (4) has a finite lower bound, which yields a constraint on the parameters of φ(θ). On the other hand, the constraint in (4) is not necessary in the expanded quadrant. Hence, we introduce two decreasing-rate parameters, one for each quadrant. For the compressed quadrant, the constraint in (4) is required, while for the expanded quadrant, no such constraint is needed. In total, the proposed algorithm has six parameters; their notations and meanings are listed in Table I.
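Since the explicit forms of (2)–(4) are not reproduced above, the following LaTeX sketch only illustrates how such a constraint can arise, under the same assumptions as the sketch above: θ measured from the a* axis within a half-plane, and an assumed decay φ(θ) = φ_max(1 − θ/90°)^n on the compressed quadrant with n ≥ 1. The symbols φ_max and n are hypothetical stand-ins for the two parameters of (2).

```latex
% No-crossover requirement: the mapping theta -> theta + phi(theta) must stay monotone.
\[
  \frac{d}{d\theta}\bigl(\theta + \phi(\theta)\bigr) \ge 0
  \quad\Longleftrightarrow\quad
  \frac{d\phi(\theta)}{d\theta} \ge -1 ,
  \qquad 0^\circ \le \theta \le 90^\circ .
\]
% With the assumed form phi(theta) = phi_max (1 - theta/90)^n, n >= 1, the derivative
% is most negative at theta = 0, so the requirement reduces to a joint parameter bound:
\[
  \left.\frac{d\phi}{d\theta}\right|_{\theta = 0}
  = -\frac{n\,\phi_{\max}}{90^\circ} \ge -1
  \quad\Longrightarrow\quad
  n\,\phi_{\max} \le 90^\circ .
\]
```

The expanded quadrant needs no such bound, which matches the remark above that the constraint in (4) is only enforced where hues are compressed.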
B. Optimization Using Detail and Naturalness Criteria
In this section, we introduce two criteria, one for detail preservation and the other for naturalness preservation.
TABLE I
PARAMETERS FOR RECOLORING
For each color pair in the original color domain, we first calculate the perceived color difference with respect to a person with normal vision. Then, for the corresponding color pair in the transformed color domain, we calculate the perceived color difference with respect to a person with protanopic or deuteranopic deficiency. As mentioned above, we follow Brettel's algorithm [2] to simulate the color perception of protanopia and deuteranopia. In our criterion, we want these two perceived color differences to be as similar as possible. Hence, we define an error function
(5)
where the two summation indices range over the colors contained in the image, the differences are measured with a perceptual color-difference metric, the recoloring function is the transformation of Section II-A, and the simulated color perception is obtained with Brettel's algorithm. By minimizing this error function, we preserve the color details of the original image.
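A plausible concrete form of the detail-preservation error in (5), written with hypothetical symbols, is sketched below: x and y range over the colors of the image, D(·,·) is a perceptual color-difference metric, T is the recoloring transformation of Section II-A, and S(·) is Brettel's dichromatic simulation [2]. The squared penalty is an assumption; the letter only requires the two perceived differences to be as similar as possible.

```latex
\[
  E_{\mathrm{detail}} \;=\; \sum_{x}\sum_{y}
  \Bigl( D(x, y) \;-\; D\bigl(S(T(x)),\, S(T(y))\bigr) \Bigr)^{2}
\]
```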
On the other hand, we do not want to modify the color appearance dramatically, since a severe modification may make the recolored image look very unnatural to normal viewers. Hence, we define another error function
(6)
where the summation index ranges over all the colors in the original image. Minimizing this error function shortens the color distance between the original colors and the corresponding remapped colors. To preserve both details and naturalness, we combine these two error functions using the Lagrange multiplier method with a user-specified parameter λ. We further normalize the two error functions by their arithmetic means so that they have a similar order of magnitude. That is, the total error is written as
(7)
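Under the same hypothetical notation, plausible forms of (6) and (7) are sketched below. The text above specifies that a larger λ yields a more natural result, so λ is taken to weight the naturalness term, and that each error is normalized by its arithmetic mean; the exact normalization shown here is an assumption.

```latex
\[
  E_{\mathrm{natural}} \;=\; \sum_{x} D\bigl(x,\, T(x)\bigr)^{2},
  \qquad
  E_{\mathrm{total}} \;=\;
  \frac{E_{\mathrm{detail}}}{\overline{E}_{\mathrm{detail}}}
  \;+\; \lambda\,\frac{E_{\mathrm{natural}}}{\overline{E}_{\mathrm{natural}}}
\]
```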
To minimize the objective function in (7), we first roughly estimate part of the parameters in an initialization stage, with the remaining parameters fixed to 1. Then we use the Fletcher–Reeves conjugate-gradient method, subject to the constraint in (4), to obtain the optimal solution. By choosing different values of λ, users may adjust the tradeoff between detail and naturalness.
Fig. 3. (a) Original image. (b) Recolored by the Daltonization method with a middle-level correction [6]. (c) Recolored by Rasche's method [10]. (d), (e) Recolored by our proposed method with two different values of λ. (f)–(j) Corresponding color images as perceived by people with deuteranopic color deficiency.
A larger λ makes the recolored image more natural for normal viewers, while a smaller λ makes the recolored image more comprehensible for color-deficient viewers.
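The sketch below (assumptions, not the authors' implementation) shows how this parameter search could be wired up: the recoloring parameters are packed into a vector and tuned by minimizing the combined error over the image's colors with SciPy's nonlinear conjugate-gradient routine. The functions recolor_colors and simulate_dichromat are hypothetical stand-ins for the rotation transform and a Brettel-style simulator, the normalizers are estimated once at the initial guess, and the crossover constraint of (4) is not handled here.

```python
# Sketch of the parameter optimization for the combined objective (assumed forms).
import numpy as np
from scipy.optimize import minimize

def delta_e(p, q):
    """CIELAB Euclidean distance as a simple perceptual difference metric."""
    return np.linalg.norm(p - q, axis=-1)

def errors(params, lab, recolor_colors, simulate_dichromat):
    """Detail and naturalness errors over all color pairs of an N x 3 CIELAB array."""
    recolored = recolor_colors(lab, params)
    simulated = simulate_dichromat(recolored)
    i, j = np.triu_indices(len(lab), k=1)
    e_detail = np.sum((delta_e(lab[i], lab[j]) - delta_e(simulated[i], simulated[j])) ** 2)
    e_natural = np.sum(delta_e(lab, recolored) ** 2)
    return e_detail, e_natural

def fit_parameters(lab, recolor_colors, simulate_dichromat, x0, lam=0.5):
    # Rough normalizers from the initial guess, so both terms have a similar magnitude.
    d0, n0 = errors(x0, lab, recolor_colors, simulate_dichromat)
    d0, n0 = max(d0, 1e-9), max(n0, 1e-9)

    def objective(p):
        d, n = errors(p, lab, recolor_colors, simulate_dichromat)
        return d / d0 + lam * n / n0

    # Nonlinear conjugate gradient (SciPy's CG is unconstrained; the letter instead uses
    # Fletcher-Reeves with the crossover constraint of (4) enforced).
    return minimize(objective, x0, method="CG")
```

A smaller lam favors detail preservation for color-deficient viewers, while a larger lam favors naturalness, mirroring the tradeoff described above.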
One more issue concerns the nonlinear property of the iso-hue curves in the CIELAB color space [13]. Two colors with the same included angle in the a*-b* plane may not have exactly the same hue. Because of this nonlinearity, colors that share the same hue in the original image may end up with slightly different hues in the recolored image. To address this problem, we can simply apply the hue-linearization process described in [14] as a preprocessing step and then apply the corresponding delinearization after the recoloring algorithm.
III. EXPERIMENTAL RESULTS
In Fig. 3, we demonstrate experimental results for the "flower" image. Fig. 3(a)–(e) shows the images perceived by normal viewers, while Fig. 3(f)–(j) presents the images perceived by viewers with deuteranopic deficiency. We can observe that the color contrast between the red flower and the green leaves is lost for people with deuteranopic deficiency. We compare our method with the Daltonization method [6] and Rasche's method [10], as shown in Fig. 3(b)–(e) and (g)–(j). Even though the Daltonization method with a middle-level correction also preserves the naturalness of the recolored image for normal viewers, the contrast between the flower and the leaves remains very poor for deuteranopic viewers. On the other hand, even though Rasche's method creates strong contrast for deuteranopic viewers, the naturalness of the recolored image is extremely poor for people with normal vision. In comparison, our method preserves both details and naturalness at the same time.
To verify the effect of λ, we also demonstrate in Fig. 3(e) that the proposed method produces an extremely unnatural recolored image when λ is set very small. Furthermore, in Table II, we compare the naturalness error and the detail error among the different methods, based on (5) and (6). In our approach, the naturalness error decreases while the detail error increases as λ rises. For the Daltonization method, even though its naturalness error is lower than ours, its detail error becomes extremely high. On the other hand, even though Rasche's method has a smaller detail error, its naturalness error is larger. These experimental results show that our method properly preserves both naturalness and detail.
TABLE II
COMPARISON OF NATURALNESS ERROR AND DETAIL ERROR
Fig. 4. (a) Original image. (b) Perceived image for a protanopic viewer. (c) Perceived image for a deuteranopic viewer. (d) Recolored image for protanopia. (e) Perceived image of (d) for protanopic viewers. (f) Recolored image for deuteranopia. (g) Perceived image of (f) for deuteranopic viewers.
In Fig. 4, we show more examples that verify the effectiveness of the proposed method.
We also used Thurstone's Law of Comparative Judgment [12] for subjective evaluation. In our subjective experiments, ten participants with normal vision were involved and six representative color images were chosen, as shown in Fig. 5. All ten participants were graduate students with some background in video coding and image processing. Since it was difficult to recruit color-deficient viewers, we adopted Brettel's algorithm [2] to mimic the perception of protanopia and deuteranopia. In the first experiment, each of the six images was recolored by the Daltonization method, Rasche's method, and our method. For each image, the original image was first shown to the participants. Then, exhaustive paired comparisons were performed over the recolored images, and the participants were asked to choose the more natural image from each pair. This experiment evaluates the naturalness of the recolored images from the viewpoint of normal viewers. In the second experiment, Brettel's algorithm was applied to the original images and the recolored images to simulate the images perceived under deuteranopia. Exhaustive paired comparisons were performed again over the simulated images, and the participants were asked to choose the more comprehensible image from each pair. This experiment evaluates the comprehensibility of the recolored images from the viewpoint of deuteranopic viewers. The results of these two subjective experiments were analyzed based on Thurstone's Law of Comparative Judgment [12], and the resulting scale values are shown in Fig. 6. Fig. 6(a) indicates that both our method and the Daltonization method produce more natural images, while Fig. 6(b) indicates that our method preserves more details than the other two methods.
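For reference, a short sketch of how Thurstone Case V scaling can be computed from paired-comparison data is given below. This is a standard formulation of the Law of Comparative Judgment [12], not the authors' analysis script, and the clipping of extreme proportions is a common practical convention rather than something stated in the letter.

```python
# Thurstone Case V scaling from a paired-comparison count matrix.
import numpy as np
from scipy.stats import norm

def thurstone_case_v(wins):
    """wins[i, j] = number of times stimulus i was preferred over stimulus j."""
    wins = np.asarray(wins, dtype=float)
    totals = wins + wins.T
    with np.errstate(divide="ignore", invalid="ignore"):
        p = np.where(totals > 0, wins / totals, 0.5)   # preference proportions
    np.fill_diagonal(p, 0.5)                           # a stimulus is tied with itself
    p = np.clip(p, 0.01, 0.99)                         # avoid infinite z-scores
    z = norm.ppf(p)                                    # unit normal deviates
    return z.mean(axis=1)                              # interval-scale value per stimulus

# Hypothetical example: three recoloring methods judged by ten observers on one image.
scale_values = thurstone_case_v([[0, 7, 9],
                                 [3, 0, 6],
                                 [1, 4, 0]])
```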
Fig. 5. Six images for the subjective evaluation.
Fig. 6. Experimental results. (a) Scales from the naturalness experiment. (b) Scales from the comprehensibility experiment.
IV. CONCLUSION
We have presented in this letter a new recoloring method for people with protanopic or deuteranopic deficiency. We propose a color transformation that yields more comprehensible images for protanopic or deuteranopic viewers while maintaining the naturalness of the recolored images for standard viewers. The same procedure can be extended to the case of tritanopia, in which blue and yellow tones cannot be well distinguished. The experimental results show that the proposed method performs subjectively better than the other methods in terms of comprehensibility and naturalness.
REFERENCES
[1] B. A. Wandell, Foundations of Vision. Sunderland, MA: Sinauer, 1995.
[2] H. Brettel, F. Viénot, and J. Mollon, "Computerized simulation of color appearance for dichromats," J. Opt. Soc. Amer. A, vol. 14, no. 10, pp. 2647–2655, Oct. 1997.
[3] G. Meyer and D. Greenberg, "Color-defective vision and computer graphics displays," IEEE Comput. Graph. Appl., vol. 8, no. 5, pp. 28–40, Sep. 1988.
[4] S. Kondo, "A computer simulation of anomalous color vision," in Color Vision Deficiencies. Amsterdam, The Netherlands: Kugler & Ghedini, 1990, pp. 145–159.
[5] J. Walraven and J. W. Alferdinck, "Color displays for the color blind," in Proc. IS&T/SID 5th Color Imaging Conf., 1997, pp. 17–22.
[6] R. Dougherty and A. Wade, Daltonize. [Online]. Available: http://www.vischeck.com/daltonize/
[7] M. Ichikawa, K. Tanaka, S. Kondo, K. Hiroshima, K. Ichikawa, S. Tanabe, and K. Fukami, "Web-page color modification for barrier-free color vision with genetic algorithm," Lecture Notes Comput. Sci., vol. 2724, pp. 2134–2146, 2003.
[8] M. Ichikawa, K. Tanaka, S. Kondo, K. Hiroshima, K. Ichikawa, S. Tanabe, and K. Fukami, "Preliminary study on color modification for still images to realize barrier-free color vision," in Proc. IEEE Int. Conf. Systems, Man, Cybernetics, 2004, pp. 36–41.
[9] S. Yang and Y. M. Ro, "Visual contents adaptation for color vision deficiency," in Proc. IEEE Int. Conf. Image Process., Sep. 2003, vol. 1, pp. 453–456.
[10] K. Rasche, R. Geist, and J. Westall, "Detail preserving reproduction of color images for monochromats and dichromats," IEEE Comput. Graph. Appl., vol. 25, no. 3, pp. 22–30, May–Jun. 2005.
[11] K. Rasche, R. Geist, and J. Westall, "Re-coloring images for gamuts of lower dimension," EuroGraphics, vol. 24, no. 3, pp. 423–432, 2005.
[12] W. S. Torgerson, Theory and Methods of Scaling. New York: Wiley, 1967.
[13] G. Hoffmann, CIELab Color Space. [Online]. Available: http://www.fho-emden.de/~hoffmann/cielab03022003.pdf
[14] G. J. Braun, F. Ebner, and M. D. Fairchild, "Color gamut mapping in a hue-linearized CIELAB color space," in Proc. IS&T/SID 6th Color Imaging Conf., 1998, pp. 163–168.