Journal of Pattern Recognition Research 1 (2006) 42-54
An Overview of Color Constancy Algorithms
Vivek Agarwal
agarwal1@purdue.edu
School of Nuclear Engineering, Purdue University,
400 Central Drive, West Lafayette, IN 47907, USA
Besma R. Abidi besma@utk.edu
Andreas Koschan akoschan@utk.edu
Mongi A. Abidi abidi@utk.edu
Department of Electrical and Computer Engineering, The University of Tennessee
334 Ferris Hall, Knoxville, TN 37996, USA
Received March 09, 2006. Received in revised form March 05, 2006. Accepted March 08, 2006.
Abstract
Color constancy is one of the important research areas with a wide range of ap-
plications in the fields of color image processing and computer vision. One such
application is video tracking. Color is used as one of the salient features and its
robustness to illumination variation is essential to the adaptability of video tracking
algorithms. Color constancy can be applied to discount the influence of changing
illuminations. In this paper, we present a review of established color constancy
approaches. We also investigate whether these approaches in their present form of
implementation can be applied to the video tracking problem. The approaches are
grouped into two categories, namely, Pre-Calibrated and Data-driven approaches.
The paper also talks about the ill-posedness of the color constancy problem, imple-
mentation assumptions of color constancy approaches, and problem statement for
tracking. Publications on video tracking algorithms involving color correction or
color compensation techniques are not included in this review.
Keywords: Color constancy, Categorization of algorithms, Video tracking.
1. Introduction
Over the decades, researchers have tried to solve the problem of color constancy by proposing
a number of algorithmic and instrumentation approaches. Nevertheless, no unique solution
has been identified; given the wide range of computer vision applications that require color
constancy, it is not possible to obtain one. This has led researchers in the field to
identify sets of possible approaches that can be applied to particular problems. In particular,
efforts are directed towards identifying color constancy approaches that can be applied to
real-time video tracking, as reviewed in this paper.
We present a simple example that gives insight into the problem of color constancy.
Imagine light emitted by a lamp and reflected by a red object, causing
a color sensation in the brain of the observer. The physical composition of the reflected
light depends on the color of the light source. However, this effect is compensated by the
human vision system. Hence, regardless of the color of the light source, we will see the true
red color of the object. This ability to correct color deviations caused by a difference in
illumination, as done by the human vision system, is known as color constancy. The same
process is not trivial for machine vision systems in an unconstrained scene. This defines the
color constancy problem.
The first author is now with Purdue University. All the work reported in this paper was performed
during his graduate studies at The University of Tennessee, Knoxville.
© 2006 JPRR. All rights reserved.
Therefore, the goal of color constancy research is to achieve an
illuminant-invariant description of a scene taken under an illumination whose spectral
characteristics are unknown (referred to as the unknown illumination). It is a two-step process.
In the first step, an estimate of the illuminant parameters is obtained, and in the second
step, the illuminant-independent surface descriptor is parametrically computed [12, 33, 42].
Often, an illumination-invariant descriptor of the scene is computed with respect to an
illumination whose spectral characteristics are known (referred to as the canonical illumination) [24].
The choice of canonical illumination is somewhat arbitrary, but it is often the illumination
for which the camera is balanced.
Mathematically, a color image is represented as

E_k(x, y) = \int_\omega R(x, y, \lambda) \, L(\lambda) \, S_k(\lambda) \, d\lambda \qquad (1)

a product of three variables, namely R(x, y, \lambda), the surface reflectance; L(\lambda), the illumination property; and S_k(\lambda), the sensor characteristics, as a function of the wavelength \lambda over the visible spectrum \omega. The subscript k represents the sensor's response in the k-th channel, and E_k(x, y) is the image corresponding to the k-th channel (k = R, G, B). If a constant surface reflectance and known sensor characteristics are assumed, then any variation in illumination will change the color appearance of the image. In color constancy research, efforts are directed toward discounting the effect of illumination and obtaining a canonical color appearance.
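To make equation (1) concrete, the following minimal sketch numerically evaluates the sensor response for one pixel; the reflectance ramp, flat illuminant, and Gaussian sensor sensitivities are hypothetical stand-ins, and the integral is approximated with the trapezoidal rule.

```python
# A minimal numeric sketch of equation (1) for a single pixel. The spectra
# below (reflectance ramp, flat illuminant, Gaussian sensor sensitivities)
# are hypothetical stand-ins, not measured data.
import numpy as np

wavelengths = np.arange(400, 701, 10)  # visible spectrum omega, sampled in nm

# Hypothetical surface reflectance R(lambda): a smooth ramp favoring long wavelengths.
R = np.clip((wavelengths - 550) / 150.0 + 0.5, 0.0, 1.0)
# Hypothetical illuminant L(lambda): equal energy at all wavelengths.
L = np.ones_like(wavelengths, dtype=float)

def gaussian(mu, sigma):
    return np.exp(-0.5 * ((wavelengths - mu) / sigma) ** 2)

# Hypothetical Gaussian sensor sensitivities S_k(lambda) for k = R, G, B.
S = {"R": gaussian(600, 30), "G": gaussian(540, 30), "B": gaussian(450, 30)}

# E_k = integral of R(lambda) L(lambda) S_k(lambda) d(lambda), via the trapezoidal rule.
E = {k: np.trapz(R * L * S_k, wavelengths) for k, S_k in S.items()}
print(E)
```

Rescaling L (say, dimming the blue end) changes E even though R is fixed, which is precisely the illumination dependence that color constancy seeks to discount.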
The human vision system exhibits approximate color constancy. The same
phenomenon is not observed in machine vision systems. Approaches to obtaining color
constancy are based on many theories proposed by researchers in the field [10, 12, 13, 16, 18, 21,
24, 32, 40, 42, 52, 55]. Most of these theories identify color constancy as a very difficult and
under-constrained problem. According to Hadamard [35], a French mathematician, a
problem is well posed if the following three conditions are satisfied: (i) there exists a solution,
(ii) this solution is unique, and (iii) this unique solution is stable. If any of these conditions
is not satisfied, then the problem is known as ill-posed. In color constancy, the uniqueness
and the stability of the solution cannot be guaranteed because of the high correlation
between the color in the image and the color of the illuminant. This collinearity has the
following effects on the estimation of the illumination coefficients: (i) imprecise estimation,
and (ii) a slight variation in collinearity may lead to a large variation in the estimation.
Video tracking, a color-based application, is one of the active research fields in computer
vision. It focuses on identifying a target (a person or an object) in an unconstrained
environment. The tracking algorithm framework takes into consideration different features of
the target, such as shape, size, orientation, and color. From equation (1), we observe that a color
image is a function of the illumination. This makes the color feature sensitive to changing
environments and illumination conditions. Therefore, it is essential that color-based
tracking algorithms be adaptive to color variations, and it becomes critically important to
achieve approximate color constancy of the target to enhance the robustness of tracking.
A color image is a function of three variables (equation (1)); therefore, the assumptions are
categorized into three classes: (i) assumptions based on sensors, (ii) assumptions based on
surface reflectance, and (iii) assumptions based on illumination. Most cameras automatically
perform gamma correction, auto gain, white balancing, and other operations that affect
image acquisition. In particular, sensors apply a gamma correction to the image, and it is
important to invert this gamma correction to obtain the true RGB values of the image; in
some of the literature, these RGB values are referred to as the raw values of the image. Barnard
et al. [1] showed that the sensor factors can be normalized by careful camera calibration.
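As a minimal sketch of this inversion step, the following assumes an idealized power-law encoding with gamma = 2.2; real cameras often apply more complex tone curves, so this is only an approximation.

```python
# A minimal sketch of undoing gamma encoding, assuming an idealized power-law
# curve with gamma = 2.2; real cameras may apply more complex tone curves.
import numpy as np

def inverse_gamma(img_8bit, gamma=2.2):
    """Map gamma-encoded 8-bit RGB to approximately linear RGB in [0, 1]."""
    v = img_8bit.astype(np.float64) / 255.0
    return v ** gamma  # undo the encoding v_encoded = v_linear ** (1 / gamma)

linear_rgb = inverse_gamma(np.array([[[128, 64, 200]]], dtype=np.uint8))
```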
Most of the theories assume Lambertian surfaces and diffuse reflection conditions [2, 3, 24],
even though the occurrence of specular highlights has been of particular interest in color
constancy research. Specular highlights are understood to carry illuminant chromaticity
information [16, 52, 55]. Some algorithms also assume spatially uniform illumination across
the scene [2, 3, 21, 24], i.e., homogeneous illumination conditions. Such assumptions are
void in unconstrained scenes. Researchers [8, 32, 36] have also addressed the issue of color
constancy under inhomogeneous illumination conditions.
Besides these three general categories, assumptions about the diversity and possible
statistics of the surfaces and illuminants that will be encountered are also considered. The
gray world (GW) algorithm [12] is based on the assumption that the color in each sensor
channel averages to gray over the entire image; any deviation from the gray value is attributed
to the chromaticity shift of the illuminant. This is one of the important assumptions when
trying to estimate the spectral distribution of the illuminant. Similarly, the Scale by Max (SBM)
algorithm estimates the illuminant by measuring the maximum response in each channel.
SBM has also been shown to be a subset of the Bayesian framework [49].
Barnard et al. [2, 3] provided a computational comparison between different color con-
stancy algorithms. Those computational comparisons were obtained using a dataset [4]
under constrained experimental conditions. The advantage of the dataset [4] is that the
true spectral information of the illumination used to collect the data is known. This helps
to compute the ground truth RGB and chromaticity values. The drawback of the dataset
is that it does not model the illumination spectrum that will be observed in real practical
images. We revisit most of the algorithms discussed in [2, 3] and also review algorithms
proposed up to 2004.
We categorize the reviewed color constancy approaches into two categories and discuss
them in detail in Section 2. In Section 3, the problem statement of video tracking is
presented, and we discuss whether the algorithms of each category can be applied to video
tracking in their present form. Finally, conclusions and possible future work are presented
in Section 4.
2. Color constancy algorithms
Researchers from various disciplines of engineering and science have tried to solve the color
constancy problem, proposing and applying many theories that take into consideration
the stated assumptions and sensor limitations. We categorize the methods into two main
categories, namely, Pre-Calibrated approaches and Data-driven approaches. These categories
are further subcategorized as shown below:
I. Pre-Calibrated approaches
1. General transformation based approaches and
2. Diagonal transformation based approaches.
II. Data-driven approaches
1. Gray World and Scale By Max approaches,
2. Retinex approaches,
3. Gamut mapping approaches (may also be a diagonal approach),
4. Statistical approaches, and
5. Machine learning approaches.
2.1 Pre-Calibrated approaches
The sensor used to capture the color image is calibrated [1] and its response is studied
under different illumination conditions; this is important for the selection of the canonical
illumination. To obtain an illuminant-invariant description of an image captured under
unknown illumination conditions, transformation-based approaches were introduced. These
approaches map the surface reflectance observed under the canonical illuminant to the
surface reflectance of the scene observed under the unknown illuminant. They assume accurate
knowledge of the sensor characteristics, along with other assumptions such as uniform
illumination and a single illumination source. The linear (general) transformation and
diagonal transformation approaches are discussed below.
2.1.1 General transformation based approaches
In the early 1980s, transform-based approaches were introduced. Most authors consider the
transformation to be a linear map represented by a 3 × 3 matrix. Gershon et al. [33] proposed an
algorithm to solve for the transformation based on three assumptions: (i) both the
illumination and the surface reflectance spectra can be modeled using small-dimensional basis
sets, (ii) the average surface reflectance in every Mondrian patch is the same, and (iii) the
illumination is uniform. The algorithm solved for the illuminant first and then estimated
the transformation. However, the algorithm showed poor performance because the second
assumption varied significantly and it is very difficult to always maintain uniform illumination.
Maloney et al. [42, 43] proposed a 3-2 algorithm to address the limitations of [33] by
modifying the assumptions. They made two further assumptions: (i) if there are n sensors,
then the dimensionality of the illuminant is less than or equal to n, and (ii) the illumination
is locally uniform. These assumptions suggest that a pseudo-inverse can be applied to
solve for color constancy if the surface reflectances are two-dimensional. Unfortunately, real
surface reflectances have higher dimensionality. Forsyth [24] extended these algorithms [42, 43]
into MWEXT (Maloney-Wandell EXTension) to obtain a set of plausible mappings instead of
a unique mapping.
2.1.2 Diagonal transformation based approaches
The color constancy algorithms in [21, 24, 36, 40, 49, 59] are based on a diagonal matrix
transformation. In this case, color constancy is obtained by simply multiplying the image
obtained under the unknown illumination by a diagonal matrix, which is equivalent to
independently scaling each channel by a factor. West et al. [58] showed that the von Kries
hypothesis, that chromatic adaptation is a central mechanism for color constancy,
is based on the diagonal matrix transformation. Barnard et al. [5] and Finlayson et al. [23]
proposed a sensor sharpening method to improve the performance of color constancy
algorithms based on the diagonal matrix transformation. The idea of sensor sharpening is to
map the data by a linear transform into a new space where diagonal models are more
reliable. The final result is then mapped back to the original RGB space by taking the
inverse transform. Spectral sharpening improves the performance of diagonal-transformation
color constancy algorithms in terms of root mean square error.
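A minimal sketch of the diagonal model follows; the illuminant vectors passed in are illustrative values, and in practice the estimate would come from one of the estimation methods discussed in this paper.

```python
# A minimal sketch of the diagonal (von Kries) correction: each channel is
# scaled independently. The illuminant vectors below are illustrative values.
import numpy as np

def diagonal_correction(img, est_illuminant, canonical_illuminant):
    """img: H x W x 3 linear RGB; illuminants: per-channel responses to white."""
    d = np.asarray(canonical_illuminant, float) / np.asarray(est_illuminant, float)
    return img * d  # broadcasting applies diag(d) to every pixel

img = np.random.rand(4, 4, 3)  # stand-in image under the unknown illuminant
corrected = diagonal_correction(img, est_illuminant=[0.9, 0.7, 0.5],
                                canonical_illuminant=[1.0, 1.0, 1.0])
```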
2.2 Data-driven approaches
The approaches discussed in this section range from simple algorithmic implementations to
sophisticated statistical and machine learning algorithms for achieving color constancy.
2.2.1 Gray World and Scale by Max approaches
Gray World [12] and Scale by Max [2] are regarded as simple algorithms because of
the simplicity of their implementation. They are still used as benchmarks for
comparison among algorithmic approaches to color constancy. The gray world
algorithm is one of the oldest and simplest color constancy algorithms. It is based on
the assumption that the color in each sensor channel averages to gray over the entire image.
The gray world algorithm estimates the deviation from this assumption and is given by the
simple expression
l_r = \mathrm{mean}(E_R), \quad l_g = \mathrm{mean}(E_G), \quad l_b = \mathrm{mean}(E_B) \qquad (2)

where l_r, l_g, l_b are the mean values in each channel, respectively, and E_R, E_G, E_B are the individual image channels.
In the scale by max algorithm, the estimate of the illuminant is obtained by measuring
the maximum of the responses in each channel. The estimation formulation is very similar
to that of the GW algorithm in equation (2), except that the mean is replaced
by the maximum of the sensor responses in each channel. It is a subset of the Bayesian
approach under the assumption that the reflectance is independent and uniform [49]. The
presence of specularities in an image means that the maximum reflectance can exceed that of
pure white, which leads to incorrect illuminant estimation. Alternatively, these specularities
can be used to measure the illuminant chromaticity.
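The following sketch implements both estimators as just described, together with the per-channel scaling they imply; the random image is a stand-in for real data.

```python
# A minimal sketch of the Gray World (equation (2)) and Scale by Max
# illuminant estimates, each followed by the per-channel scaling it implies.
import numpy as np

def gray_world_illuminant(img):
    return img.reshape(-1, 3).mean(axis=0)  # l_r, l_g, l_b: channel means

def scale_by_max_illuminant(img):
    return img.reshape(-1, 3).max(axis=0)   # maximum response per channel

def correct(img, illuminant):
    return img / illuminant                  # scale each channel independently

img = np.random.rand(64, 64, 3)              # stand-in for real data
gw_balanced = correct(img, gray_world_illuminant(img))
sbm_balanced = correct(img, scale_by_max_illuminant(img))
```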
2.2.2 Retinex approaches
The Retinex theory, introduced in the late 1970s by E. Land [40], is based on the study of
image formation in the human eye and its interpretation by the human vision system. This
approach investigates color constancy behavior through psychophysical experiments. Land
studied the psychological aspects of lightness and color perception in human vision and
proposed a theory to obtain an analogous performance in machine vision systems. Retinex is
not only used as a model of human color constancy, but also as a platform
for digital image enhancement and lightness/color rendition. Land's Retinex theory is
based on the design of a surround function [40]. Hurlbert [37] proposed a Gaussian surround
function, choosing three different sigma values to achieve good dynamic range compression
and color rendition. From that point onwards, numerous Retinex implementations
were published [7, 14, 27, 31, 39, 41, 46, 48], and efforts were made to optimize the performance
of the Retinex algorithm by tuning its free parameters [28]. The Multiscale Retinex (MSR)
implementation [46] intertwined a number of image processing operations and, as a result,
changed the colors in the image in unpredictable ways. Barnard et al. [7] presented a
way to make the MSR operations clearer and to ensure color fidelity.
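As an illustrative sketch of the surround-based formulation, the following single-scale and multiscale Retinex operate in the log domain with Gaussian surrounds; the sigma values and the log-ratio form are common choices from the SSR/MSR literature [37, 46], not a reproduction of any one published implementation.

```python
# An illustrative log-domain Retinex sketch with Gaussian surround functions.
# Sigma values and the exact form are common literature choices, not the
# original published implementations.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(channel, sigma=80.0):
    """channel: 2-D array of positive linear intensities."""
    surround = gaussian_filter(channel, sigma)  # local estimate of illumination
    return np.log(channel + 1e-6) - np.log(surround + 1e-6)

def multi_scale_retinex(channel, sigmas=(15.0, 80.0, 250.0)):
    # MSR-style average of single-scale outputs over several surround widths.
    return np.mean([single_scale_retinex(channel, s) for s in sigmas], axis=0)
```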
2.2.3 Gamut approaches
The concept of the gamut approach is based on the work of Forsyth [24], presented in the early
1990s. It can also be referred to as a constraint-based approach because color constancy is
achieved by imposing constraints on the reflectance and/or the illuminant of the scene. It
also imposes hard constraints on the range of occurrence of the illuminant [21, 24]. The
implementation of gamut algorithms requires knowledge of the canonical illuminants. The
initial approach was proposed in the RGB color space, so it is also referred to as the 3D gamut
mapping algorithm. It is a two-step approach. In the first step, two gamuts are obtained,
namely the canonical gamut and the image gamut. The canonical gamut is obtained
by taking the set of all possible (R, G, B) values due to surface reflectances under the canonical
illuminant. The choice of the canonical illuminant is arbitrary. Similarly, the image gamut
is obtained by taking the set of all possible (R, G, B) values due to surface reflectances under
the unknown illumination. Both gamuts are convex and are represented by their convex hulls.
In the second step, under the diagonal assumption, the image gamut is mapped onto the
canonical gamut using the linear mapping procedure developed by Forsyth [24], called
MWEXT (Maloney-Wandell EXTension). MWEXT requires both the surface reflectances and
illuminants to be selected from a finite-dimensional space, which poses some limitations.
Forsyth suggested the algorithm CRULE (based on the coefficient rule) to overcome the
limitations of MWEXT. A heuristic approach was adopted to select a single diagonal mapping
from the set of plausible mappings [24]. Finlayson [21] proposed a modification of Forsyth's
theory [24] in his work on gamut mapping color constancy in a 2D space. Both [21, 24] used
the same heuristic approach for the selection of a single mapping matrix. Barnard [9]
suggested a mapping selection method based on averaging the set of feasible mappings in both
the chromaticity space and the RGB space. This method is based on the assumption that all
illuminants, and hence their corresponding mappings, are equally probable. Under this
assumption, the mean or expected value is used for the selection of the single mapping.
However, in the 2D perspective method [21], unwanted distortion affected the mapping sets,
suggesting that the 2D mean estimate for the selection of a single mapping is biased in
the chromaticity space. Therefore, Finlayson et al. [20] suggested a mean estimate computed
from the reconstructed 3D maps. Finlayson et al. [19] also proposed angular-error and
median based mapping selection.
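The following toy sketch illustrates the feasible-set idea under the diagonal model: candidate diagonal maps are kept if they carry the entire image gamut into the canonical gamut, and the mean of the feasible set is selected in the spirit of [9]. The gamut data and the rejection-sampled candidate set are illustrative stand-ins for the exact hull-intersection computations of the published algorithms.

```python
# A toy sketch of diagonal gamut mapping with mean-based map selection.
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

canonical_rgb = np.random.rand(200, 3)                   # surfaces under canonical light
image_rgb = canonical_rgb * np.array([0.9, 0.7, 0.5])    # same surfaces, unknown light

canonical_hull = Delaunay(canonical_rgb)                 # membership via find_simplex
image_vertices = image_rgb[ConvexHull(image_rgb).vertices]  # hull corners suffice (convexity)

# Keep candidate diagonal maps d that take every image-gamut vertex inside
# the canonical gamut; mapping the vertices is enough because the map is linear.
feasible = [d for d in np.random.uniform(0.5, 3.0, size=(5000, 3))
            if np.all(canonical_hull.find_simplex(image_vertices * d) >= 0)]

selected_map = np.mean(feasible, axis=0) if feasible else None  # mean selection [9]
```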
2.2.4 Statistical approaches
Color constancy algorithms discussed under this classification are often based on the basic
statistical assumption that the probability distribution of the data is Gaussian, with
maximum likelihood used as the parameter estimator [38]. However, some algorithms
apply different probability distributions [49] and parameter estimators [10, 25, 50].
Freeman et al. [25] applied Bayesian theory to color constancy. They provided insight
into how to use all the information about the illuminant that is contained in the sensor
response, including the information used by the gray world, subspace, and physical
realizability algorithms. The algorithms in [40, 22] assumed a priori knowledge of the illumination
distribution. The a priori knowledge of the occurrence of the illumination can be assumed
to be uniform, i.e., the probability of occurrence of all illuminants is equal. This assumption
is fair if the range of occurrence of the illuminant is not known. Alternatively, if the range
of occurrence of the illuminant is known, then a priori information on the illuminant can be
estimated from a specified set of images within that range.
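As a minimal sketch of this idea, the following computes a MAP estimate over a small discrete set of candidate illuminants with a uniform prior (under which MAP reduces to maximum likelihood); the per-illuminant Gaussian chromaticity models and the observations are hypothetical stand-ins for statistics learned from data.

```python
# A minimal sketch of Bayesian illuminant estimation over a discrete set of
# candidate illuminants with a uniform prior. Models and data are hypothetical.
import numpy as np

candidates = {
    "daylight":     (np.array([0.31, 0.33]), 0.02),
    "incandescent": (np.array([0.45, 0.41]), 0.02),
    "fluorescent":  (np.array([0.38, 0.38]), 0.02),
}

def log_likelihood(chromaticities, mean, sigma):
    # Isotropic Gaussian; the constant log-prior drops out of the argmax.
    return -0.5 * np.sum((chromaticities - mean) ** 2) / sigma ** 2

observed = np.random.normal([0.44, 0.40], 0.02, size=(100, 2))  # fake pixel chromaticities
map_estimate = max(candidates, key=lambda k: log_likelihood(observed, *candidates[k]))
```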
In their work on Bayesian color constancy, Brainard et al. [10] and Freeman et
al. [25] proposed a bilinear modeling technique to estimate the spectral distribution from
the statistical information of the illuminants. The prior information was obtained using
principal component analysis (PCA). Skaff et al. [53] extended their work to multiple
non-uniform sensors, developing a multi-sensor Bayesian technique for color constancy that
sequentially acquires measurements from independent sensors. Tsin et al. [56] further
improved the work presented in [10, 25] and extended it to outdoor object recognition,
proposing a simple bilinear diagonal color model and an iterative linear update method
based on a maximum a posteriori (MAP) estimation technique. Cubber et al. [15] applied a
Bayesian framework to achieve color constancy and updated the model in order to achieve
correct classification of pixels in their color based visual servoing approach. They assumed
a multivariate Gaussian distribution and the dichromatic reflectance model, which is limited
to inhomogeneous dielectric materials. Rosenberg et al. [49] presented a Bayesian color
constancy method using non-Gaussian models. They replaced the independent, Gaussian
distribution of the reflectance with an exchangeable reflectance distribution defined by a
Dirichlet-multinomial model. Rosenberg et al. [50] also proposed the Kullback-Leibler (KL)
divergence approach for parameter estimation instead of the maximum likelihood approach.
Finlayson et al. [18] introduced a method known as color by correlation, which uses a
correlation framework to estimate the illuminant chromaticity in the chromaticity
space. Barnard et al. [6] extended it to a 3D space based on two observations: (i) pixel
brightness makes a significant contribution, and (ii) statistical knowledge of it is
also useful. Sapiro [51] also proposed an algorithm based on the correlation framework for
color constancy.
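A toy sketch of the correlation framework follows: a precomputed matrix of chromaticity log-probabilities per candidate illuminant is correlated with the binary chromaticity histogram of the input image, and the highest-scoring illuminant is chosen. The matrix entries here are random placeholders for statistics that would be learned from training data.

```python
# A toy sketch of the color-by-correlation idea [18] with placeholder statistics.
import numpy as np

n_illuminants, n_bins = 10, 32 * 32
log_prob = np.log(np.random.dirichlet(np.ones(n_bins), size=n_illuminants))

image_hist = np.zeros(n_bins)
image_hist[np.random.choice(n_bins, 50, replace=False)] = 1.0  # observed chromaticity bins

scores = log_prob @ image_hist       # correlation score for each candidate illuminant
best_illuminant = int(np.argmax(scores))
```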
2.2.5 Machine learning approaches
Machine learning algorithms are data-based approaches that involve two stages, training and
testing. In the training stage, the algorithm learns the functional association between the
input and output data. Based on this learning, it predicts the output for previously unseen
data in the testing stage. The sample dataset and training algorithm used in the training
stage largely define these approaches. Therefore, preprocessing of the dataset is very
important in order to avoid undesirable predictions due to outliers.
Initial learning approaches to color constancy were based on neural networks. Cardei
et al. [13] and Funt et al. [29], in their work on color constancy, proposed a multilayer
perceptron (MLP) feedforward neural network approach in chromaticity space.
The proposed network architecture consisted of 3600 input nodes, 400 neurons in the first
hidden layer, 40 neurons in the second hidden layer, and 2 output neurons. They
experimented with both synthetic and real datasets. In the case of real images, a significantly
large number of images was required to train the network. Due to the practical limitation of
collecting a large number of images, Funt et al. [29] adopted a statistical approach known
as bootstrapping to generate a large number of training images from a small sample of real
images. They showed that neural networks achieve better color constancy than color by
correlation [13]. Ebner [17] proposed a parallel neural network algorithm in the
RGB color space. Moore et al. [44] addressed the issue of multiple illuminations in their
application of neural networks to color constancy. Nayak et al. [45] proposed a neural network
approach in the RGB space to achieve color correction for skin tracking. Stanikunas et al.
[54] investigated color constancy with a neural network and compared it to
the human vision system. They concluded that background color information is important
for achieving human-equivalent color constancy in machine vision systems.
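For illustration, a network with the architecture described above (3600 binary histogram inputs, hidden layers of 400 and 40 neurons, 2 chromaticity outputs) can be sketched with scikit-learn; the training data here are random placeholders, and scikit-learn is of course not the original implementation.

```python
# An illustrative MLP sketch with the architecture described above.
import numpy as np
from sklearn.neural_network import MLPRegressor

X = (np.random.rand(500, 3600) > 0.98).astype(float)  # fake binarized chromaticity histograms
y = np.random.rand(500, 2)                             # fake illuminant chromaticities

net = MLPRegressor(hidden_layer_sizes=(400, 40), max_iter=200)
net.fit(X, y)
estimated_chromaticity = net.predict(X[:1])
```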
Apart from neural networks, other machine learning algorithms have also been applied to
achieve an illumination-invariant description of surface reflectance. Huang et al. [34] used an
adaptive fuzzy method called fuzzy associative memory (FAM) to recognize color objects under
complex backgrounds and varying illumination conditions. Funt et al. [26] showed how
Vapnik's support vector machines [57] can be applied to estimate the illumination chromaticity,
also incorporating brightness information. They provided a discussion of polynomial and
radial basis function kernels and showed that, under controlled (laboratory) conditions,
support vector machines perform better than neural networks and color by correlation.
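A corresponding sketch with support vector regression, in the spirit of [26], might look as follows; since scikit-learn's SVR is single-output, one regressor is trained per chromaticity component, and the data are again random stand-ins.

```python
# A sketch of illuminant chromaticity estimation via support vector regression.
import numpy as np
from sklearn.svm import SVR

X = (np.random.rand(300, 1024) > 0.97).astype(float)  # fake chromaticity histograms
y = np.random.rand(300, 2)                            # fake (r, g) illuminant chromaticities

models = [SVR(kernel="rbf").fit(X, y[:, i]) for i in range(2)]
estimate = [m.predict(X[:1])[0] for m in models]
```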
3. Problem Statement of Tracking
Video tracking is an active research field in computer vision. The complexity of the research
can be understood from the fact that it involves optimal integration between hardware
(cameras) and software. Most modern tracking algorithms require tracking to be performed
in real time under unconstrained illumination conditions and in a dynamic environment.
These requirements demand that video tracking algorithms be adaptive to factors such as
color variations, changing backgrounds, obstructions, and motion of the target. Therefore,
in order to enhance adaptability, tracking algorithms employ multiple moving cameras and
take into account a number of features of the target, like shape, size, orientation, and color.
Including more features enhances the adaptability and robustness of the tracking algorithm,
but also adds to its complexity. This is similar to the problem of selecting an optimal
number of features to achieve good classification in pattern recognition. Color is one of the
salient features used in tracking algorithms. Therefore, it becomes important to achieve
approximate color constancy of the target. If this is achieved in real time, then a number of
other factors can be discounted. This signifies the importance of color constancy in
real-time video tracking applications.
The color constancy algorithms discussed above under each of the individual categories
are well established and have been applied under different conditions. Section 2 provides
a conceptual understanding of these algorithms, the circumstances under which they are
applicable, and their computational constraints. We now try to identify whether these
algorithms can be applied in their current form to video tracking.
Sensor-based approaches require a linear or diagonal transform mapping to obtain an
illuminant-invariant description. These approaches assume that the sensors are calibrated,
that the spectral response of the canonical illumination is known, and that the illumination
is uniform; they are vulnerable if these assumptions are violated. In practical applications
these conditions are difficult to achieve, so their applicability is restricted. Barnard et
al. [3] showed that the Gray World, Scale by Max, and gamut mapping algorithms are based
on specific assumptions, and that most of the algorithms make additional assumptions to
achieve solutions. As the assumptions get stronger, the success of the algorithm increases,
but at the same time its vulnerability to failure increases if the assumptions fail. Retinex
based approaches provide good color rendition and improve the dynamic range of the image,
for dark images in particular. However, these algorithms require parameter optimization to
achieve good color rendition, and the optimal selection of parameters is not a trivial issue.
Gamut based approaches require a mapping between the canonical and image gamuts. The
selection of a single mapping from the set of plausible mappings is not a trivial issue in
either the RGB or the chromaticity space, as discussed in Section 2. These approaches
assume knowledge of the canonical illuminants and of the range of occurrence of the
illuminants; they are hard constraint based approaches. Barnard et al. [2] showed that gamut
mapping algorithms are the best at achieving color constancy on real images, while Funt et
al. [30] showed that even gamut mapping is not good enough for object recognition. The
non-adaptive nature, dependency on knowledge of the canonical illumination, and
computational complexity account for the limitations of the gamut approach.
Statistical approaches are widely used in many applications, but the assumption of a
Gaussian distribution model [38] and, in most cases, prior knowledge of the illumination
distribution restrict their application. Although researchers have looked beyond those
assumptions, such methods have been applied with limited success in some cases [15]. Color
by correlation [18] is a special case of the statistical approaches in which the distribution was
modeled from a fixed set of images. These approaches are adaptive and have been extended
to outdoor applications.
Machine learning based approaches provide an interesting alternative to statistical methods.
They are data driven, learning the dependency between the input and the output from the
data presented to them. Nonlinear machine learning approaches, namely neural networks and
support vector machines, have been applied to estimate the illumination chromaticity and
shown to perform better than the color by correlation approach [13, 26]. These approaches
are less dependent on assumptions. However, the optimization of neural networks and support
vector machines is a complicated issue due to a number of factors; a discussion of these
factors is beyond the scope of this paper, but see [11, 57]. In their present form, both neural
networks and support vector machines are less suitable for many practical applications, as
they require long training times for a respectable amount of training data. The advantages
and disadvantages of these algorithms are summarized in Table 1.
Color constancy and video tracking are two independent areas of research with individual
requirements, constraints, and complexities. From our discussion of color constancy and
video tracking in the sections above, we can observe that color constancy is an essential and
integral part of video tracking research. The requirement of achieving color invariance
under unconstrained tracking conditions is similar to achieving color constancy in real time.
However, this is a challenging problem. In this review, we show that the implementation of
most current color constancy algorithms is restricted by numerous constraints and
assumptions, which are violated in real-time tracking.
Statistical and machine learning based approaches have shown promise, especially the
machine learning based approaches, which relax most of the constraints and assumptions
while achieving good color constancy [13, 26]. However, the parameter optimization required
by nonlinear learning approaches restricts their application in real time. In our review of
machine learning based approaches for color constancy, we observed that linear machine
learning algorithms have not been tested for color constancy; this is an unexplored field of
research. If linear learning techniques can model the association between the color in the
image and the illumination chromaticity, then there is potential for color constancy to be
achieved in real-time video tracking.
4. Conclusions
The main purpose of this review was to evaluate current color constancy algorithms based
on their algorithmic approaches, to highlight the importance of color constancy in real-time
video tracking, and to identify whether the reviewed algorithms can be applied to video
tracking in their present form. On the basis of our review and comparison of the different
color constancy approaches, we observed that their application to real-time video tracking
has been limited. All the algorithms reviewed in this paper make some assumptions about
the statistics of the reflectances to be encountered, and most make assumptions about the
illuminants that will be encountered. The gray world algorithm makes assumptions about
the expected value of the scene average, the scale by max algorithm makes assumptions
about the maximum value in each channel, and the gamut mapping algorithms make
assumptions about the ranges of expected reflectances and illuminants. Besides these specific
assumptions, each of the algorithms makes additional assumptions of flat surfaces and
homogeneous illumination conditions. Moreover, most machine color constancy approaches
cannot handle scenes in which more than one illumination is present. The dependency of
color constancy algorithms on assumptions has restricted their application in unconstrained
practical settings like video tracking. Furthermore, their computational complexity usually
does not allow for real-time computation without additional special hardware.
Table 1: Advantages and disadvantages of color constancy algorithms.

Pre-Calibrated approaches
  Advantages: Perform good color constancy if the sensor characteristics are known.
  Disadvantages: Require sensor pre-calibration.

Gray World and Scale by Max approaches
  Advantages: Computationally inexpensive; an analytical solution is possible.
  Disadvantages: Less reliable and not adaptive; depend highly on assumptions.

Retinex approaches
  Advantages: Improve the visual appearance of images; perform better on dark images.
  Disadvantages: Poor color fidelity; require optimal selection of free parameters.

Gamut approaches
  Advantages: Better color reproduction than other approaches.
  Disadvantages: Computationally expensive; depend on sensor sensitivity; assume a uniform illumination distribution; require knowledge of the range of the illuminant.

Statistical approaches
  Advantages: Adaptive to changing illumination conditions; the illumination distribution is obtained from known statistical probability distributions; prior knowledge of the illumination is not mandatory.
  Disadvantages: Statistical assumptions are violated in real-time applications; computationally expensive in terms of time.

Machine learning approaches
  Advantages: Adaptive to changing illumination conditions; color constancy can be achieved from approximate knowledge of the illumination chromaticity.
  Disadvantages: The algorithms discussed are unstable and require long training times; it is difficult to regularize the estimation of the illumination chromaticity.
One way to overcome these restrictions in the future is to design tracking algorithms that
are less sensitive to color variations and, at the same time, to achieve rough estimates of
color constancy with fewer assumptions and in less time. For example, machine learning
approaches showed less dependency on assumptions, but their implementation requires the
optimal selection of a number of parameters, which imposes a restriction on their application
to video tracking. Machine learning approaches such as ridge regression and kernel
regression have not been evaluated for estimating illumination chromaticity. It will be
interesting in the future to evaluate the performance of either of these approaches in achieving
color constancy; their performance will be of particular interest from the point of view of
real-time color constancy in video tracking.
Acknowledgments
The authors would like to thank Dr. Andrei Gribok for his valuable comments and sug-
gestions. This work was supported by the DOE University Research Program in Robotics
under grant DOE-DE-FG52-2004NA25589 and by the FAA/NSSA Program, R01-1344-48/49.
References
[1] K. Barnard and B. Funt, “Camera Characterization for Color Research,” Color Research and
Application, vol. 27, no. 3, pp. 153-164, 2002.
[2] K. Barnard, V. C. Cardei, and B. V. Funt, “A Comparison of Computational Color Constancy
Algorithms - Part I: Methodology and Experiments with Synthesized Data,” IEEE Transactions
on Image Processing, vol. 11, no. 9, pp. 972-983, 2002.
[3] K. Barnard, L. Martin, A. Coath, and B. V. Funt, “A Comparison of Computational Color
Constancy Algorithms - Part II: Experiments with Image Data,” IEEE Transactions on Image
Processing, vol. 11, no. 9, pp. 985-996, 2002.
[4] K. Barnard, L. Martin, B. Funt, and A. Coath, “A Data Set for Color Research,” Color
Research and Application, vol. 27, no. 3, pp. 147-151, 2002.
[5] K. Barnard, F. Ciurea, and B. Funt, “Spectral Sharpening for Computational Color Constancy,”
Journal of Optical Society of America A, vol. 18, pp. 2728-2743, 2001.
[6] K. Barnard, L. Martin, and B. Funt, “Color by Correlation in a Three Dimensional Color
Space,” in Proc. of 6th European Conference on Computer Vision, pp. 375-389, 2000.
[7] K. Barnard and B. Funt, “Investigation into Multiscale Retinex,” Color Imaging in Multimedia,
pp. 9-17, 1998.
[8] K. Barnard, G. Finlayson, and B. Funt, “Color Constancy for Scenes with Varying Illumina-
tion,” Computer Vision Image Understanding, vol. 65, pp. 311-321, 1997.
[9] K. Barnard, “Computational Color Constancy: Taking Theory into Practice,” M.Sc. thesis,
Simon Fraser Univ., School of Computing Science, BC, Canada, 1997.
[10] D. H. Brainard and W. T. Freeman, “Bayesian Color Constancy,” Journal of Optical Society
of America A, vol. 14, pp. 1393-1411, 1997.
[11] C. M. Bishop, “Neural Networks for Pattern Recognition,” Oxford University Press, Oxford, 1997.
[12] G. Buchsbaum, “A Spatial Processor Model for Object Color Perception,” Journal of Franklin
Institute, vol. 310, pp. 1-26, 1980.
[13] V. Cardei, B. V. Funt, and K. Barnard, “Estimating the Scene Illumination Chromaticity Using
a Neural Network,” Journal of the Optical Society of America A, vol. 19, no. 12, pp. 2374-2386,
2002.
[14] T. J. Cooper and F. A. Baqai, “Analysis and Extensions of the Frankle-McCann Retinex
Algorithm,” Journal of Electronic Imaging, vol. 13, no. 1, pp. 85-92, 2004.
[15] G. D. Cubber, S. A. Berrabah, and H. Sahli, “Color Based Visual Servoing Under Varying
Illumination Conditions,” Robotics and Autonomous Systems, vol. 47, pp. 225-249, 2004.
[16] M. D’Zmura and G. Iverson, “Color Constancy. I. Basic Theory of Two Stage Linear Recovery
of Spectral Descriptions for Lights and Surfaces,” Journal of Optical Society of America A, vol.
10, pp. 2148-2156, 1993.
[17] M. Ebner, “A Parallel Algorithm for Color Constancy,” Journal of Parallel and Distributed
Computing, vol. 64, no. 1, pp. 79-88, 2004.
[18] G. H. Finlayson, S. D. Hordley, and P. M. Hubel, “Color by Correlation: A Simple, Unify-
ing Framework for Color Constancy,” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 23, no. 11, pp. 1209-1221, 2001.
[19] G. H. Finlayson and S. Hordley, “Improving Gamut Mapping Color Constancy,” IEEE Trans-
actions on Image Processing, vol. 9, no. 10, pp. 1774-1783, 2000.
[20] G. H. Finlayson and S. D. Hordley, “Selection for Gamut Mapping Color Constancy,” Image
and Vision Computing, vol. 17, no. 8, pp. 597-604, 1999.
[21] G. H. Finlayson, “Color in Perspective,” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 18, no. 10, pp. 1034-1038, 1996.
[22] G. H. Finlayson, “Coefficient of Color Constancy,” Ph.D. Dissertation, Simon Fraser University,
BC, Canada, 1995.
[23] G. H. Finlayson, M. S. Drew, and B. V. Funt, “Spectral Sharpening: Sensor Transformations for
Improved Color Constancy,” Journal of Optical Society of America A, vol. 11, pp. 1553-1563,
1994.
[24] D. A. Forsyth, “A Novel Algorithm for Color Constancy,” International Journal of Computer
Vision, vol. 5, no.1, pp. 5-36, 1990.
[25] W. T. Freeman and D. H. Brainard, “Bayesian Decision Theory, the Maximum Local Mass
Estimate, and Color Constancy,” in Proc. of IEEE 5th International Conference on Computer
Vision, pp. 210-217, 1995.
[26] B. V. Funt and W. Xiong, “Estimating Illumination Chromaticity via Support Vector Regres-
sion,” in Proc. of Twelfth Color Imaging Conference: Color Science and Engineering Systems
and Applications, pp. 47-52, 2004.
[27] B. V. Funt, F. Ciurea, and J. McCann, “Retinex in MATLAB,” Journal of Electronic Imaging,
vol. 13, no. 1, pp. 48-57, 2004.
[28] B. V. Funt and F. Ciurea, “Parameters for Retinex,” in Proc. of the 9th Congress of the
International Color Association (AIC), Rochester, June 2004.
[29] B. V. Funt and V. Cardei, “Bootstrapping Color Constancy,” in Proc. of SPIE, Electronic
Imaging IV , vol. 3644, 1999.
[30] B. V. Funt, K. Barnard, and L. Martin, “Is Color Constancy Good Enough,” in Proc. of 5th
European Conference on Computer Vision, pp. 445-459, 1998.
[31] B. V. Funt, K. Barnard, and K. Brockington, “Luminance Based Multiscale Retinex,” in AIC
International Color Association, 1997.
[32] B. V. Funt and M. S. Drew, “Color Space Analysis of Mutual Illumination,” IEEE Transactions
on Pattern Analysis and Machine Intelligence, vol. 15, no. 12, pp. 1319-1326, 1993.
[33] R. Gershon, A. D. Jepson, and J. K. Tsotsos, “From [r,g,b] to Surface Reflectance: Computing
Color Constant Descriptors in Images,” Perception, pp. 755-758, 1988.
[34] W. C. Huang and C. H. Wu, “Adaptive Color Image Processing and Recognition for Varying
Backgrounds and Illumination Conditions,” IEEE Transactions on Industrial Electronics, vol.
45, no. 2, pp. 351-357, 1998.
[35] J. Hadamard, Lectures on Cauchy’s Problem in Linear Partial Differential Equations, Yale
University Press, New Haven, 1953.
[36] B. K. P. Horn, “Determining Lightness from an Image,” Computer Graphics and Image
Processing, vol. 3, pp. 277-299, 1974.
[37] A. C. Hurlbert, “The Computation of Color,” Ph.D. Dissertation, Massachusetts Institute of
Technology, September, 1989.
[38] V. Kecman, “Learning and Soft Computing: Support Vector Machine, Neural Networks, and
Fuzzy Logic Models,” MIT Press, Cambridge, London, 2001.
[39] R. Kimmel, M. Elad, D. Shaked, R. Keshet, and I. Sobel, “A Variational Framework for
Retinex,” International Journal of Computer Vision, vol. 52, no.1, pp. 7-23, 2003.
[40] E. H. Land, “The Retinex Theory of Color Constancy,” Scientific American, pp. 108-129,
1977.
53
Agarwal et al.
[41] J. J. McCann, “Capturing a Black Cat in Shade: Past and Present of Retinex Color Appearance
Models,” Journal of Electronic Imaging, vol. 13, no. 1, pp. 36-47, 2004.
[42] L. T. Maloney and B. A. Wandell, “Color Constancy: A Method for Recovering Surface
Reflectance,” Journal of Optical Society of America A, vol. 3, no. 1, pp. 29-33, 1986.
[43] L. T. Maloney, “Evaluation of Linear Models of Surface Spectral Reflectance with Small Number
of Parameters,” Journal of Optical Society of America A, vol. 3, no. 10, pp. 1673-1683, 1986.
[44] A. Moore, J. Allman, and R. M. Goodman, “A Real Time Neural System for Color Constancy,”
IEEE Transactions on Neural Networks, vol. 2, no. 2, pp. 237-247, 1991.
[45] A. Nayak and S. Chaudhuri, “Self-induced Color Correction for Skin Tracking under Varying
Illumination,” in Proc. of International Conference on Image Processing, pp. 1009-1012, 2003.
[46] Z. Rahman, D. Jobson, and G. A. Woodell, “A Multiscale Retinex for Bridging the Gap between
Color Images and the Human Observation of Scenes,” IEEE Transactions on Image Processing,
vol. 6, no. 7, pp. 965-976, 1997.
[47] H. K. Rising III, “Analysis and Generalization of Retinex by Recasting the Algorithm in
Wavelets,” Journal of Electronic Imaging, vol. 13, no. 1, pp. 93-99, 2004.
[48] A. Rizzi, C. Gatta, and D. Marini, “From Retinex to Automatic Color Equalization: Issues
in Developing a New Algorithm for Unsupervised Color Equalization,” Journal of Electronic
Imaging, vol. 13, no. 1, pp. 75-84, 2004.
[49] C. Rosenberg, T. Minka, and A. Ladsariya, “Bayesian Color Constancy with Non-Gaussian
Models,” in Proc. of NIPS , 2003.
[50] C. Rosenberg, M. Hebert, and S. Thrun, “Color Constancy using KL-Divergence,” in Proc. of
International Conference on Computer Vision, pp. 239-246, 2001.
[51] G. Sapiro, “Color and Illuminant Voting,” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 21, pp. 1210-1215, 1999.
[52] S. A. Shafer, “Using Color to Separate Reflection Components,” Color Research and Applica-
tion, vol. 10, pp. 210-218, 1985.
[53] S. Skaff, T. Arbel, and J. J. Clark, “Active Bayesian Color Constancy with Non-Uniform
Sensors,” in Proc. of the IEEE 16th International Conference on Pattern Recognition, vol. 2,
pp. 681-685, 2002.
[54] R. Stanikunas, H. Vaitkevicius, and J. J. Kulikowski, “Investigation of Color Constancy with a
Neural Network,” Neural Networks, vol. 17, pp. 327-337, 2004.
[55] S. Tominaga, “Surface Reflectance Estimation by the Dichromatic Model,” Color Research and
Application, vol. 21, no. 2, pp. 104-114, 1996.
[56] Y. Tsin, R. T. Collins, V. Ramesh, and T. Kanade, “Bayesian Color Constancy for Outdoor Ob-
ject Recognition,” in Proc. of IEEE Conference on Computer Vision and Pattern Recognition,
vol. 1, pp. 1132-1139, 2001.
[57] V. N. Vapnik, “Statistical Learning Theory,” John Wiley and Sons, Inc., NJ, 1998.
[58] G. West and M. H. Brill, “Necessary and Sufficient Conditions for Von Kries Chromatic Adap-
tation to give Color Constancy,” Journal of Mathematical Biology, vol. 15, no. 2, pp. 249-258,
1982.
[59] J. A. Worthey and M. H. Brill, “Heuristic Analysis of Von Kries Color Constancy”, Journal of
Optical Society of America A, vol. 3, pp. 1708-1712, 1986.
... Color constancy approaches compensate the colors of a light source and can be used to achieve color consistency for images acquired under different illumination conditions. Most color constancy approaches (e.g., pre-calibrated, Gamut-based, statistical-based, or machine learning approaches) are either computationally expensive, unstable, or require sensor calibration [3]. On the other hand, certain Retinex-based approaches that have been previously used for lightness/color constancy, as well as for image enhancement, are automatic, robust, and suitable for real-time applications. ...
... Second, we compute the value of β (x) in Equation (7) automatically for each pixel, compared to using a fixed value in [7]. This guarantees that the color processing function in Equation (6) is brightness-preserving and only modifies the chromaticities, compared to the result of the original Retinex [2] in Equation (3). Third, we apply the gain/offset function (Eqation (3)) before the color processing function, in contrast to [7] where the order of the two operations is reversed (i.e., we use I Ret i (x) in Equation (6), instead of R i (x, c)). ...
Article
Full-text available
Mobile change detection systems allow for acquiring image sequences on a route of interest at different time points and display changes on a monitor. For the display of color images, a processing approach is required to enhance details, to reduce lightness/color inconsistencies along each image sequence as well as between corresponding image sequences due to the different illumination conditions, and to determine colors with natural appearance. We have developed a real-time local/global color processing approach for local contrast enhancement and lightness/color consistency, which processes images of the different sequences independently. Our approach combines the center/surround Retinex model and the Gray World hypothesis using a nonlinear color processing function. We propose an extended gain/offset scheme for Retinex to reduce the halo effect on shadow boundaries, and we employ stacked integral images (SII) for efficient Gaussian convolution. By applying the gain/offset function before the color processing function, we avoid color inversion issues, compared to the original scheme. Our combined Retinex/Gray World approach has been successfully applied to pairs of image sequences acquired on outdoor routes for change detection, and an experimental comparison with previous Retinex-based approaches has been carried out.
... Dies wird durch die raum-zeitliche Zuordnung zu vergangenen Beobachtungen ermöglicht. Dagegen ist eine solche Kompensation für eine erscheinungsbasierte Wiedererkennung deutlich schwieriger, da sich Raum und Zeit für zwei Beobachtungen signifikant unterscheiden können [Bak et al., 2010, Farenzena et al., 2010, Cheng et al., 2011 Laut [Agarwal et al., 2006a] wird bei der Beleuchtungsschätzung in Bildern zwischen vorkalibrierten und datengetriebenen Ansätzen unterschieden (siehe Abbildung B.5). Datengetriebene Ansätze werden des Weiteren unterschieden in statische [ van de Weijer und Gevers, 2005], farbskalabasierte [Forsyth, 1990] und Ansätze des maschinellen Lernens [Gijsenij et al., 2010]. Für die in dieser Arbeit betrachteten Einsatzszenarien kommen nur maschinelle Lernverfahren in Betracht, da eine umfangreiche Kalibrierung oder Messung nicht praktikabel wären. ...
... Eine Abwandlung des k-Means-Clustering ist das k-Medoids-Clustering, auch bekannt als Partitioning Around Medoids (PAM, dt. Partitionierung um Medoiden) [Kaufman und Rousseeuw, 1987] • allgemeine Transformation [Maloney und Wandell, 1986], [Maloney, 1986], [Gershon et al., 1987] • diagonale Transformation [West und Brill, 1982], [Finlayson et al., 1994], [Barnard et al., 2001] • weiß-Patch-Annahme [Land, 1977] • graue-Welt-Annahme [Buchsbaum, 1980], [Gershon et al., 1987], [Barnard et al., 2002a], [Barnard et al., 2002b], [van de Weijer und Gevers, 2005] • Schätzung Beleuchtung aufgrund des dichromatischen Reflexionsmodells [Brill, 1990], [Tominaga, 1996], [Tominaga und Wandell, 1989] • Schätzung Standardfarbbereich [Forsyth, 1990], [Funt et al., 1998] • Brightness Transfer Function [Javed et al., 2005], [Prosser et al., 2008], [D'Orazio et al., 2009], [Siebler et al., 2010], [Datta et al., 2012] • Klassifikation [Cardei et al., 2002], [Funt und Xiong, 2004] • Regression [Agarwal et al., 2006b], [Agarwal et al., 2007], [Xiong et al., 2007], [Agarwal et al., 2009], [Wang et al., 2009] • probabilistisch [D'Zmura et al., 1995], [Brainard und Freeman, 1997], [Sapiro, 1999], [Finlayson et al., 2001], [Rosenberg et al., 2001] • Optimierung [Chakrabarti et al., 2008], [Owens et al., 2011], [Monari, 2012], [Eisenbach et al., 2013] • Kombination von Ansätzen mehrerer Kategorien [Cardei und Funt, 1999], [Schaefer et al., 2005], [Bianco et al., 2008], [Gijsenij und Gevers, 2007] Abbildung B.5: State of the Art Farbkonstanz und Beleuchtungsausgleich Kategorisierung nach [Agarwal et al., 2006a], [Gijsenij et al., 2010], [Scheiner, 2012] 6 und [Eisenbach et al., 2013] quellen mittels Schattenwürfen und Reflexionen am Boden geschätzt. Die Inhomogenität des Bodens, fehlende Reflexionen, sowie verschiedene Fußbodenbeläge verhindern den Einsatz des Verfahrens in den betrachteten Szenarien dieser Arbeit. ...
Thesis
Full-text available
Appearance-based person re-identification in public environments is one of the most challenging, still unsolved computer vision tasks. Many subtasks can only be solved by combining machine learning with computer vision methods. In this thesis, we use machine learning approaches in order to improve all processing steps of the appearance-based person re-identification: We apply convolutional neural networks for learning appearance-based features capable of performing re-identification at human level. For generating a template to describe the person of interest, we apply machine learning approaches that automatically select person-specific, discriminative features. A learned metric helps to compensate for scenariospecific perturbations while matching features. Fusing complementary features at score level improves the re-identification performance. This is achieved by a learned feature weighting. We deploy our approach in two applications, namely surveillance and robotics. In the surveillance application, person re-identification enables multi-camera tracking. This helps human operators to quickly determine the current location of the person of interest. By applying appearance-based re-identification, a mobile service robot is able to keep track of users when following or guiding them. In this thesis, we measure the quality of the appearance-based person re-identification by twelve criteria. These criteria enable a comparison with biometric approaches. Due to the application of machine learning techniques, in the considered unsupervised, public fields of application, the appearance-based person re-identification performs on par with biometric approaches.
... The first part: Non-GAN methods. Traditional data augmentation methods are mainly based on matching methods, such as Histogram matching and its variants [137,138], Graph matching [136], color constancy algorithm [139], gray world [140], etc. Although these methods can quickly achieve transferring, they are insufficient when faced with a huge domain shift and new data; besides, it is hard to obtain a good performance when faced with multi-target domains. ...
Article
Full-text available
With the rapid development of the remote sensing monitoring and computer vision technology, the deep learning method has made a great progress to achieve applications such as earth observation, climate change and even space exploration. However, the model trained on existing data cannot be directly used to handle the new remote sensing data, and labeling the new data is also time-consuming and labor-intensive. Unsupervised Domain Adaptation (UDA) is one of the solutions to the aforementioned problems of labeled data defined as the source domain and unlabeled data as the target domain, i.e., its essential purpose is to obtain a well-trained model and tackle the problem of data distribution discrepancy defined as the domain shift between the source and target domain. There are a lot of reviews that have elaborated on UDA methods based on natural data, but few of these studies take into consideration thorough remote sensing applications and contributions. Thus, in this paper, in order to explore the further progress and development of UDA methods in remote sensing, based on the analysis of the causes of domain shift, a comprehensive review is provided with a fine-grained taxonomy of UDA methods applied for remote sensing data, which includes Generative training, Adversarial training, Self-training and Hybrid training methods, to better assist scholars in understanding remote sensing data and further advance the development of methods. Moreover, remote sensing applications are introduced by a thorough dataset analysis. Meanwhile, we sort out definitions and methodology introductions of partial, open-set and multi-domain UDA, which are more pertinent to real-world remote sensing applications. We can draw the conclusion that UDA methods in the field of remote sensing data are carried out later than those applied in natural images, and due to the domain gap caused by appearance differences, most of methods focus on how to use generative training (GT) methods to improve the model’s performance. Finally, we describe the potential deficiencies and further in-depth insights of UDA in the field of remote sensing.
... Proposals for solving these problems rest on assumptions about the visual content, the optics, the capture device, or combinations thereof [5], and they vary widely. However, despite the many mechanisms proposed for color constancy, no single solution has been identified: the range of applications and environmental conditions is so vast and diverse that a global solution is out of reach, and approaches therefore depend strongly on the application and on specific issues [6,7]. Recent advances have shown promising results with convolutional neural networks and deep learning [8][9][10], but their operation requires training data and relatively high computational cost, restricting their use. ...
Article
Full-text available
The present work proposes to evaluate, compare, and determine software alternatives that offer good detection performance and low computational cost for the plant segmentation operation in computer vision systems. In practical terms, it aims to enable low-cost, accessible hardware to be used efficiently in real-time embedded systems for detecting seedlings in the agricultural environment. The analyses carried out in the study show that the process of separating and classifying plant seedlings is complex and depends on the capture scene, which becomes a real challenge under the unstable conditions of an external environment without light control or more specialized hardware. These restrictions are driven by functionality and market perspective, namely low cost and access to technology, and result in limitations in processing, hardware, and operating practices, and consequently in the possible solutions. Despite the difficulties and necessary precautions, the experiments revealed the most promising solutions for separation, even in situations involving noise and poor visibility.
... Specularities are utilized to measure the illuminant chromaticity. If the maximum reflectance takes a value greater than that of white, an incorrect illuminant estimate is produced [1]. The dimension of the images is then reduced to obtain superior results with low processing time. ...
Article
Full-text available
An accurate diagnosis of breast cancer requires analyzing the tumors and giving appropriate treatment. A cancer diagnosis should be carried out as early as possible to minimize mortality rates. Traditional breast cancer diagnosis relies on handcrafted features, so system performance depends on the selected features; however, analyzing tumors of complex shape and varying size is a challenging task. In recent years, deep learning has become a viable alternative that overcomes the drawbacks of conventional cancer diagnosis methods. In this work, machine learning (ML) classifiers, a deep convolutional neural network (CNN), and transfer learning models (Alexnet, VGG-16, VGG-19, Resnet-50, and Resnet-101) are compared. Before machine learning classification, an anisotropic diffusion filter is applied to extract the tumor. The ten most significant features were selected by statistical testing and achieved an accuracy of 99% with the KNN classifier. The proposed deep CNN achieves the most satisfactory accuracy of 100%. These results outperform current techniques and demonstrate the capability for breast cancer classification. The fast evaluation speed also makes real-time analysis possible. The accuracies of Alexnet, VGG-16, VGG-19, Resnet-50, and Resnet-101 are 86%, 82%, 84%, 84%, and 74%, respectively. Because of the small size of the dataset, the transfer learning models produce unremarkable results, as such models require a certain data size. The CNN can classify breast ultrasound images of cancer with promising results from relatively few training cases and is suitable for biomedical applications.
... According to [29], most of the existing algorithms are based on assumptions. For instance, the Max-RGB color constancy algorithm assumes the presence of a white patch in the image to calculate the illuminant, whereas the gray world algorithm uses the average reflectance; this motivates a data-driven approach to color constancy [30]. ...
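The contrast between the two assumptions mentioned in this snippet can be made explicit in code. The sketch below is illustrative only, assumes a floating-point RGB image, and is not taken from [29] or [30].

```python
# Hypothetical sketch contrasting the Max-RGB (white-patch) and
# gray-world illuminant estimates discussed above; illustrative only.
import numpy as np

def estimate_illuminant_max_rgb(image: np.ndarray) -> np.ndarray:
    """Assume the brightest response per channel comes from a white
    patch; the per-channel maxima then approximate the illuminant."""
    return image.reshape(-1, 3).max(axis=0)

def estimate_illuminant_gray_world(image: np.ndarray) -> np.ndarray:
    """Assume the average scene reflectance is achromatic; the
    per-channel means then approximate the illuminant."""
    return image.reshape(-1, 3).mean(axis=0)
```

Either estimate fails when its assumption is violated (no white patch in the scene, or a strongly colored average reflectance), which is precisely the argument for data-driven alternatives.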
Article
Full-text available
Diabetic retinopathy (DR) is a diabetes complication that affects the eye and can cause damage ranging from mild vision problems to complete blindness. It has been observed that eye fundus images show various kinds of color aberrations and irrelevant illumination, which degrade diagnostic analysis and may hinder the results. In this research, we present a methodology to eliminate these unnecessary reflectance properties of the images using a novel image processing schema and a stacked deep learning technique for the diagnosis. For luminosity normalization of the image, the gray world color constancy algorithm is implemented, which performs image desaturation and improves overall image quality. The effectiveness of the proposed image enhancement technique is evaluated by the peak signal-to-noise ratio (PSNR) and mean squared error (MSE) of the normalized image. To develop a deep learning based computer-aided diagnostic system, we present a novel methodology of stacked generalization of convolutional neural networks (CNN). Three custom CNN models are fed into a single meta-learner classifier, which combines the most optimal weights of the three sub-networks to obtain superior evaluation metrics and robust predictions. The proposed stacked model reports an overall test accuracy of 97.92% (binary classification) and 87.45% (multi-class classification). Extensive experimental results in terms of accuracy, F-measure, sensitivity, specificity, recall, and precision reveal that the proposed illumination normalization greatly assists the deep learning model and yields better results than various state-of-the-art techniques.
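Since this abstract evaluates normalization quality by PSNR and MSE, a brief sketch of these standard measures may be useful; the code below assumes 8-bit images and is not the authors' evaluation code.

```python
# Hypothetical sketch of the PSNR/MSE evaluation mentioned above,
# assuming 8-bit images (peak value 255); illustrative only.
import numpy as np

def mse(original: np.ndarray, processed: np.ndarray) -> float:
    """Mean squared error between two images of identical shape."""
    diff = original.astype(np.float64) - processed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, processed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in decibels (higher is better)."""
    error = mse(original, processed)
    if error == 0:
        return float("inf")  # Identical images.
    return 10.0 * np.log10(peak ** 2 / error)
```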
... Another interesting review was presented in 2006 by Agarwal et al. 36 This work reviews color enhancement algorithms that preserve color constancy and cites ACE while explaining the Retinex approach. The authors cite paper B 2 along with other implementations of Retinex and then focus on MSR. ...
Article
Digital image processing underlies everyday applications that aid humans in several fields, such as underwater monitoring, analysis of cultural heritage drawings, and medical imaging for computer-aided diagnosis. The starting point of all such applications is the image enhancement step. A desirable image enhancement step should simultaneously standardize the illumination in the image set, removing bad or non-uniform illumination effects where possible, and reveal all hidden details. In 2002, a successful perceptual image enhancement model, the automatic color equalization (ACE) algorithm, was proposed, which mimics the color and contrast adjustment of the human visual system (HVS). Given its widespread usage, its correlation with the HVS, and its ease of implementation, we propose a scoping review to identify and classify the available evidence on ACE, starting from the papers citing the two founding papers on the algorithm. The aim of this work is to identify to what extent and in which ways ACE may have influenced research in the color imaging field. Through an accurate process of paper tagging, classification, and validation, we provide an overview of the main application domains in which ACE was successfully used and of the different ways in which the algorithm has been implemented, modified, used, or compared.
Article
Owing to factors such as terrain, weather conditions, sensor imaging methods, and cultural and economic development, there is a large shift between remote sensing imagery collected from different geographic locations and different sensors. This makes state-of-the-art semantic segmentation models trained on a source domain (an image set gathered from specific geographic locations and sensors) difficult to generalize to a target domain (another image set collected from other geographic locations and sensors). Currently, unsupervised domain adaptation using adversarial training, whose purpose is to align the marginal distributions in the output space between the source and target domains, is the most explored and practical approach to this issue. However, this global alignment does not take into account the diversity of regions within an image, nor the category-level distribution, with the consequence that regions and categories already well aligned between the source and target domains may be incorrectly remapped. We therefore propose a region and category adaptive domain discriminator that emphasizes the differences between regions and categories during alignment. Specifically, on the one hand, we propose an entropy-based regional attention module in the domain discriminator to emphasize the importance of difficult-to-align regions. On the other hand, we propose a class-clear module that updates only the distributions of the categories present in one iteration, without affecting all categories. Finally, extensive experiments indicate that the proposed method obtains better results than other state-of-the-art unsupervised domain adaptation methods using adversarial training.
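The core of the entropy-based regional attention idea can be sketched as follows; this is a minimal illustration assuming per-pixel softmax outputs, not the authors' implementation.

```python
# Hypothetical sketch of entropy-based regional attention: per-pixel
# prediction entropy weights difficult-to-align regions more heavily.
# Illustrative only; not the proposed method's actual module.
import numpy as np

def entropy_attention(probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """probs: softmax output of shape (H, W, C). Returns an (H, W)
    attention map in [0, 1], where high-entropy (uncertain, hence
    hard-to-align) pixels receive larger weights."""
    entropy = -np.sum(probs * np.log(probs + eps), axis=-1)
    # Normalize by the maximum possible entropy, log(C).
    return entropy / np.log(probs.shape[-1])
```

Weighting the discriminator loss by such a map focuses adaptation on uncertain regions while leaving already well-aligned regions largely untouched, which is the failure mode of purely global alignment described above.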
Article
Full-text available
Artificial lights, which are powered by alternating current (AC), are ubiquitous nowadays. The intensity of these lights fluctuates dynamically with the AC power. In contrast to previous color constancy methods that exploit spatial color information, we propose a novel deep learning-based color constancy method that exploits the temporal variations exhibited by AC-powered lights. Using a high-speed camera, we capture the intensity variations of AC lights and use them as an important cue for illuminant learning. We propose a network composed of spatial and temporal branches to train the model with both spatial and temporal features. The spatial branch learns conventional spatial features from a single image, whereas the temporal branch learns the temporal features of AC-induced light intensity variations in a high-speed video. The proposed method calculates the temporal correlation between the high-speed frames to extract effective temporal features. The calculations are done at low computational cost, and the output is fed into the temporal branch to help the model concentrate on illuminant-attentive regions. By learning both spatial and temporal features, the proposed method performs remarkably well in the complex illuminant environments of real-world scenarios, where color constancy is difficult to achieve. The experimental results demonstrate that the proposed method produces a 30% lower angular error than the previous state of the art and works exceptionally well under various illuminants, including complex ambient light environments.
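The angular error reported above is the standard accuracy measure in color constancy: the angle between the estimated and ground-truth illuminant vectors. A minimal sketch of its computation, assuming RGB illuminant vectors, is given below; this is not the authors' code.

```python
# Hypothetical sketch of the standard angular-error metric used to
# compare an estimated illuminant with the ground truth.
import numpy as np

def angular_error_degrees(estimate: np.ndarray, truth: np.ndarray) -> float:
    """Angle in degrees between two RGB illuminant vectors."""
    cos_angle = np.dot(estimate, truth) / (
        np.linalg.norm(estimate) * np.linalg.norm(truth))
    # Clip to guard against floating-point values slightly outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
```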
Article
Full-text available
We introduce a new method for estimating camera sensitivity functions from spectral power input and camera response data. We also show how the procedure can be extended to deal with camera non-linearities. Linearization is an important part of camera characterization and we argue that it is best to jointly fit the linearization and the sensor response functions. We compare our method with a number of others, both on synthetic data and for the characterization of a real camera. All data used in this study is available on-line at http://www.cs.sfu.ca/~colour/data.
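As background for the estimation setting described above, the basic (pre-linearization) problem can be posed as a least-squares fit; the sketch below assumes a linear camera and is a generic formulation, not the authors' joint fit of linearization and sensor response functions.

```python
# Hypothetical least-squares sketch of sensitivity estimation from
# spectral power inputs and camera responses, assuming a linear
# camera; illustrative only, not the paper's method.
import numpy as np

def estimate_sensitivity(spectra: np.ndarray, responses: np.ndarray) -> np.ndarray:
    """spectra: (n_measurements, n_wavelengths) spectral power inputs.
    responses: (n_measurements,) camera responses for one channel.
    Returns the (n_wavelengths,) sensitivity minimizing the squared
    residual of responses = spectra @ sensitivity."""
    sensitivity, *_ = np.linalg.lstsq(spectra, responses, rcond=None)
    return sensitivity
```

The paper's argument is that treating linearization separately from this fit is suboptimal; the sketch only shows the linear core that such a joint procedure would build on.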
Article
Full-text available
A method is described for estimating the surface-spectral reflectances of glossy objects when the color signal is a mixture of diffuse reflections, specular reflections, and interreflections. The objects are inhomogeneous dielectric materials, and the reflected light is measured using a spectroradiometer. We first describe the main idea; the color signals, reflected from two closely apposed surfaces with a single interreflection between them, can be expressed by a linear combination of the illuminant spectrum and two diffuse spectral reflection functions. We introduce a representation in which each of these three terms is projected onto a point on a unit sphere. Estimation of the diffuse reflection functions is then reduced to finding the vertices of the spherical triangle. Next, an algorithm is described to estimate the locations of the vertices of diffuse reflectance functions from the measured samples. The reliability of the algorithm is demonstrated in an experiment using two plastic objects with glossy surfaces. © 1996 John Wiley & Sons, Inc.
Article
In computer vision, the goal of which is to identify objects and their positions by examining images, one of the key steps is computing the surface normal of the visible surface at each point ("pixel") in the image. Many sources of information are studied, such as outlines of surfaces, intensity gradients, object motion, and color. This article presents a method for analyzing a standard color image to determine the amount of interface ("specular") and body ("diffuse") reflection at each pixel. The interface reflection represents the highlights in the original image, and the body reflection represents the original image with the highlights removed. Such intrinsic images are of interest because the geometric properties of each type of reflection are simpler than the geometric properties of intensity in a black-and-white image. The method is based upon a physical model of reflection which states that two distinct types of reflection, interface and body reflection, occur, and that each type can be decomposed into a relative spectral distribution and a geometric scale factor. This model is far more general than typical models used in computer vision and computer graphics, and includes most such models as special cases. In addition, the model does not assume a point light source or a uniform illumination distribution over the scene. The properties of tristimulus integration are used to derive a new model of pixel-value color distribution, and this model is exploited in an algorithm to derive the desired quantities. Suggestions are provided for extending the model to deal with diffuse illumination and for analyzing the two components of reflection.
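The two-component model described above is commonly written as the dichromatic reflection equation; the rendering below uses symbol names chosen here for illustration.

```latex
% Dichromatic reflection model: the light L reflected at surface
% location x with wavelength \lambda splits into interface (specular)
% and body (diffuse) components, each the product of a geometric
% scale factor and a relative spectral distribution.
\[
L(\lambda, x) = m_i(x)\, c_i(\lambda) + m_b(x)\, c_b(\lambda)
\]
% m_i, m_b : geometric scale factors, depending only on the imaging
%            geometry at x
% c_i, c_b : relative spectral distributions of the interface and
%            body reflection, independent of geometry
```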
Article
The recently published Matlab implementation of the retinex algorithm has free parameters for the user to specify. The parameters include the number of iterations to perform at each spatial scale, the viewing angle, the image resolution, and the lookup table function (post-lut) to be applied upon completion of the main retinex computation. These parameters were left unspecified because the previous descriptions of retinex, upon which the new Matlab implementations were based, do not define them. In this paper we determine values for these parameters based on a best fit to the experimental data provided by McCann et al.