Image Matching Using Photometric Information
Electrical Engineering Department,Technion
Haifa 32000, ISRAEL
Department of Management Information Systems, Haifa University
Haifa 31905, ISRAEL
Abstract

Image matching is an essential task in many computer vision applications. It is obvious that thorough utilization of all available information is critical for the success of matching algorithms. However, most popular matching methods
do not incorporate effectively photometric data. Some al-
gorithms are based on geometric, color invariant features,
thus completely neglecting available photometric informa-
tion. Others assume that color does not differ significantly
in the two images; that assumption may be wrong when the
images are not taken at the same time, for example when
a recently taken image is compared with a database. This
paper introduces a method for using color information in
image matching tasks. Initially the images are segmented
using an off-the-shelf segmentation process (EDISON). No
assumptions are made on the quality of the segmentation.
Then the algorithm employs a model for natural illumina-
tion change to define the probability of two segments to
originate from the same surface. When additional informa-
tion is supplied (for example suspected corresponding point
features in both images), the probabilities are updated. We
show that the probabilities can easily be utilized in any ex-
isting image matching system. We propose a technique to
make use of them in a SIFT-based algorithm. The technique's capabilities are demonstrated on real images, where it causes a significant improvement over the original SIFT results in the percentage of correct matches found.
∗This work was supported partly by grant 01-99-08430 of the Israeli

1. Introduction

Image matching, or finding corresponding features in different images of a scene, is an essential step in most com-
puter vision tasks (3D reconstruction, object recognition,
tracking, image mosaicing, to name a few). Features of
various forms - points, edges, contours and regions - have
been used for this purpose; see Zitova and Flusser for
a survey. In this work we concentrate on point features, as
they have the widest usage, but our results can easily be ex-
panded to work with other feature types.
The accuracy of matching algorithms depends on their
ability to reliably extract all available information from the
neighborhood of the feature. While the geometric proper-
ties of the object are relatively stable, its color may vary
significantly with the time of day, cloud cover and other at-
mospheric conditions (Judd et al.). See Fig. 1 for the influence of illumination on object colors. With respect to the usage
Figure 1. Four Macbeth cells under different natural illuminants - morning, noon and afternoon light of a cloudy day. (a) A row is the same cell under three different illuminants. A column has four cells with the same illuminant. (b) The values of the cells in the normalized R/Σ(RGB), B/Σ(RGB) format. Every cell has a mark in the appropriate color. Note that colors of different surfaces are sometimes closer than the colors of the same surface (the upper green mark is closer to grey than to the other green marks).
0-7695-2646-2/06 $20.00 (c) 2006 IEEE
of color, point feature correspondence algorithms can be di-
vided into three main groups. The first group consists of
methods relying entirely on geometric features: Lowe ,
Schmid and Mohr , Beardsley . The neglect of pho-
tometric information by these methods can hurt their perfor-
mance and can even cause failures when the geometric in-
formation is insufficient. The methods of the second group
assume that the illumination color is constant in all images.
They can use correlation - Smith et al. , wavelets - Sebe
et al. , affine invariant regions - Georgescu and Meer
, or a combination of point and region data - Matas et
al.  and Tuytelaars and Van Gool . The disadvan-
tage of these methods is their inability to work with im-
ages taken at different times. For example when real-time
data should be compared with an image database prepared
in advance. The third group employs color constancy tech-
niques (see Barnard ) to model the possible illumination
change, such as the diagonal model - Montesinos et al. .
These methods are better suited for handling illumination
changes. Still the diagonal model cannot cope with signifi-
cant illumination changes or wide-band camera sensors. An
additional drawback of many algorithms of the second and
the third group is that the color is examined around salient
points. Salient points are typically formed on surface dis-
continuities - corners, edges and so on. Finite pixel sizes
and camera de-focus often cause colors from neighboring
surfaces to merge and to create spurious colors near salient points.
The above difficulties raise the following questions:
• What is the probability that two colors originate from
the same surface? We are not concerned with the ac-
tual surface and illumination properties, but only with
the relationships of various colors.
• How to combine those probabilities with geometric
features? Geometric feature matching has been exten-
sively researched and many good quality applications
exist. We search for a way to associate the data ex-
tracted from those applications with photometric cor-
respondence probabilities. Schaffalitzky and Zisser-
man  suggest an analogous approach for texture-
based region descriptors. They match regions and then
match features from the corresponding regions only.
We propose a method that answers these questions. Initially
we segment the image using an off-the-shelf segmentation
application: EDISON by Christoudias et al. Segments
allow for more stable results, whereas the color of a single
pixel is noisy and tends to be affected by its neighbors, es-
pecially on the borders between several surfaces. Nothing
is assumed on the quality of the segmentation.
We employ the natural illumination model suggested by
Finlayson et al.  to calculate the probability that two col-
ors belong to the same surface. Given a surface color, the
model allows us to estimate the range of colors that the sur-
face can obtain under any natural illumination conditions.
Any two colors that lie within this predefined range of each
other will get a high probability to originate from the same
source surface. When we are given two colors that are
considered to belong to the same surface, the illumination
change can be calculated.
The main difference between our method and the method of Finlayson et al. is that we take into account in our analysis the magnitude of the illumination change together with its direction. The accuracy of the probabilities that segments match each other is improved by comparing the immediate neighbors of the segments. Only when enough neighbors can match each other, and the illumination change is the same for the whole group, does the probability remain high.
The robustness to segmentation flaws is ensured by allow-
ing a certain fraction of the neighbors of the segment not to
abide by these rules.
The probabilities can easily be utilized in any exist-
ing image matching system. They can be used as a factor for removing false matches, as additional weights in the matching process, or for validation purposes. We present a more
sophisticated method that uses the segment correspondence
probabilities to aid the feature matching process. Initially,
the segment probabilities are updated according to the fea-
ture matching results using a Bayesian approach. After-
wards, the feature matchings are influenced by the corre-
spondence of the segments they belong to.
The main innovations of our method are:
• Full exploitation of the illumination change between
the two images.
• Probabilistic, color based approach to segment match-
ing that exploits the segment neighborhood structure.
• A Bayesian method for integrating point features and
segment correspondence information.
The algorithm contribution is demonstrated with an off-
the-shelf implementation of the SIFT feature detection and
matching method by Lowe . Our method was tested on
real images and showed significant improvements in com-
parison with the original SIFT results in the percentage of
correct matches found. Given the same absolute number of
inliers, the inlier rate of our method is at least twice as large
as in SIFT. The method also works successfully for the spe-
cial case of images taken at the same time.
The paper continues as follows. Section 2 describes the
illumination change model. Our segment matching method
is described in Section 3. Section 4 provides an example of
the successful combination of our algorithm with the SIFT
feature matching application. Finally experimental results
are presented in Section 5.
2. Color path model
This section presents the model that estimates the proba-
bility that two colors originated from the same surface. The
basic idea of the method is inspired by the work of Fin-
layson et al. They defined a 2D rb color space:

r_k = log(p_k / p_g) = log(s_k / s_g) + (e_k − e_g)/T,   (1)

where p_k are pixel values, k = R, G, B, s_k are constants that depend on the observed surface and the camera, e_k depend only on the camera, and T is the temperature in Planck's law approximation of the illumination spectrum.
As the temperature (in other words, the illumination) changes, the 2D vectors r_k will trace straight lines - termed color paths - in the 2D color space. All the lines will have the same direction. Thus, all possible colors of the same surface will lie on one color path under any illumination conditions.
We observed that a natural extension of the above conclusion is that the length of the displacement of the vectors r_k depends only on the temperature change and not on the color itself (as e_k and e_g are constant). Therefore, when the same scene is shot under different illumination conditions, all colors r_k of the scene will travel the same distance along their color paths. This observation allows us to estimate the probabilities that various colors of two images match each other.
We use the same log-log color space, only replacing p_g by the geometric mean of all colors, p_m = (p_r p_g p_b)^{1/3}, because it gives more stable results. So our color coordinates are:

r_k = log(p_k / p_m) = log(s_k / s_m) + (e_k − e_m)/T,   (2)

where s_m = (s_r s_g s_b)^{1/3} and e_m = (e_r + e_g + e_b)/3. The modification does not change the basic principles; it only alters the color path directions and distances.
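As an illustrative sketch (not from the paper), the coordinates of Eq. 2 can be computed directly from RGB values; the function name and the use of Python's `math.log` are our own:

```python
import math

def log_chromaticity(p_r, p_g, p_b):
    """Map an RGB triple to the 2D log-log color space of Eq. 2.

    Each coordinate is log(p_k / p_m), where p_m is the geometric
    mean (p_r * p_g * p_b) ** (1/3).  Since the three coordinates
    sum to zero, (r_R, r_B) suffice as a 2D representation.
    """
    p_m = (p_r * p_g * p_b) ** (1.0 / 3.0)
    return math.log(p_r / p_m), math.log(p_b / p_m)
```

A grey pixel maps (up to floating-point error) to the origin, since every channel equals the geometric mean, and scaling all three channels by a common factor leaves the coordinates unchanged.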
2.1. Color paths construction
Figure 2. Macbeth Chart cell data and color paths: cell colors under changing illumination in the log(R/(RGB)^{1/3}) domain, with estimated lines (see text).
To obtain the color paths we followed the calibration
method suggested by Finlayson et al. . Fig. 2 displays
the colors of Macbeth Chart cells in the log-log color space
under various illumination conditions throughout the day.
The colors are marked by red stars, and the color paths (es-
timated using SVD for every cell) are blue lines. It can be
easily observed that the color path behavior deviates from the
theory: the color paths do not have the same lengths and
slopes, and cell colors do not lie exactly on the path but
are scattered around it. The errors are likely to be caused
by the inaccuracy of the model assumptions: narrowness of
the sensors, illumination approximation and other reasons.
Random noise is not considered to be a source of signifi-
cant errors, as Macbeth Chart colors represent an average
over large (100 × 100 pixels) image regions.
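The per-cell line fit mentioned above can be sketched as follows. This is our own minimal version, assuming each cell is given as an array of 2D log-log colors; the paper's actual calibration code is not available:

```python
import numpy as np

def fit_color_path(points):
    """Fit a line (color path) to the 2D log-log colors of one cell.

    Returns the centroid, the unit direction (first right singular
    vector of the centered data, as in an SVD line fit) and the path
    length (extent of the points projected onto that direction).
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    # The first right singular vector is the direction of maximal
    # variance, i.e. the estimated color path direction.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    proj = centered @ direction
    length = proj.max() - proj.min()
    return centroid, direction, length
```

The scatter of the points around the fitted line is exactly the deviation that NR is later calibrated to tolerate.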
The meaning of the errors in our model is that even in
ideal conditions we cannot expect the colors to behave ac-
cording to the theory and that slight deviations from the
model (that are estimated experimentally) must either be
taken into account by the algorithms (see Section 3.1) or
incorporated into the model itself.
The lengths and directions of the color paths vary across the color space. The dependency between the lengths and the color path locations was expressed with a second degree polynomial. The direction variations seem to have a random nature; they were approximated by their average.
We observed that despite variation in the lengths, the rel-
ative distances that the colors move are approximately the
same. For example, if one color moves a distance equal to
half its color path length under illumination change T, other
colors will also move half their color path length under il-
lumination change T. To neutralize the influence of color
path lengths we represent each pair of colors ab from two
images A and B in a normalized coordinate frame, where
the x axis is the projection of the vector ab on a’s color
path divided by a’s color path length, and the y axis is the
projection of ab on a vector perpendicular to a’s color path
divided by NR (see Fig. 3(b)). As a and b may have dif-
ferent color path lengths, we use their average in order to
guarantee identical representation of ab and ba.
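The normalized representation described above might be sketched as follows (function and parameter names are ours; the averaging of the two path lengths and the division by NR follow the text):

```python
import numpy as np

def normalized_pair_coords(a, b, path_dir, len_a, len_b, nr):
    """Represent the movement vector ab in normalized coordinates.

    x: projection of (b - a) on the color path direction, divided by
       the average of the two colors' path lengths (so that ab and ba
       get identical representations).
    y: projection on the perpendicular direction, divided by NR.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.asarray(path_dir, dtype=float)
    d = d / np.linalg.norm(d)
    perp = np.array([-d[1], d[0]])  # perpendicular to the path
    ab = b - a
    x = (ab @ d) / ((len_a + len_b) / 2.0)
    y = (ab @ perp) / nr
    return x, y
```

In these coordinates, x measures the illumination-induced travel along the path as a fraction of the path length, and y measures the off-path deviation in units of the noise range.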
3. Photometric Image Matching - PIM
The rather theoretical term color used in the definition
of the color path model is usually replaced by pixels. Most
segmentation methods unite pixels into segments, mainly for two reasons: noise sensitivity and complexity.
The pixels are usually combined according to two parame-
ters (constant or adaptive): spatial and color distance. The
only assumption on the segmentation quality that we make
is that those parameters have reasonable values. In our ex-
periments we use the off-the-shelf segmentation application
EDISON by Christoudias et al. An additional assumption used throughout the paper is that illumination changes slowly across the image, or in other words that the illumination is constant in the segment's neighborhood. The assumption is justified for most segments, except those lying on a shadow border. However, the robust nature of our algorithm enables us to deal with cases in which these assumptions are violated.
3.1. Probability distribution function
At first we add a few definitions:
G_K - segment K and its immediate geometrical neighbors.
K_i - the i-th member of G_K.
T_d - the illumination (temperature) change that causes segments to move a distance d along the color path, i.e. along the x axis of the normalized coordinates.
d_AB - the signed distance between segments A and B on the x axis of the normalized coordinates (see Fig. 3(b)).
Figure 3. (a) - Three surfaces from two images. The first color of a pair (marked by a blue circle) is from the first image, and the second (marked by a red star) is from the second image. The black arrows are the projections of the vectors between pairs of colors onto normalized coordinates. Light brown lines (partly covered by the black arrows) are the color paths calculated at the first color. NR is the solid green line; the dotted green line represents ENR in the direction of the color path. The second color of pair 3 can belong to both color paths 2 and 3: it is in their NR. The projection of a wrong pair (pair 2 in the first image with pair 3 in the second image) is represented by the blue arrows. (b) - The movement vectors of the above pairs presented in normalized coordinates.
In Section 2.1 we saw that the possible range of object
colors in an image can be estimated from its color in another
image. Given two images of the same scene, we define the probability density function of a segment A of the first image to have color a, when segment B of the second image has color b and the segments cover (at least partly) the same surface:

p(A = a | B = b, S(A) ≈ S(B)) ≡ p(a | b, A ≈ B).   (3)
We assume that the above density function can be decom-
posed into two independent perpendicular components:
p(a | b, A ≈ B) = p_p(a | b, A ≈ B) · p_c(a | b, A ≈ B).   (4)
The first component p_p(a | b, A ≈ B) arises due to the model
errors and random noise. It reflects the distance of A from
B’s color path. Two thresholds were defined to approximate
the probability. The first one, noise range (NR) is the max-
imal distance of a color from a path that does not result in
any penalty to the color’s probability to belong to that path.
The second one, extended noise range (ENR) represents
the maximal possible distance of a “real world” color to its
color path. If the distance is larger, the tested color cannot belong to that path; the probability decreases when the distance is between NR and ENR. NR is computed from the Macbeth Chart data as the maximal distance of a point to its color path; ENR was experimentally set to 2·NR (see Fig. 3).
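A sketch of the perpendicular component as a piecewise function of the distance from the color path (the linear falloff between NR and ENR is our own assumption; the paper only fixes the behavior inside NR and beyond ENR):

```python
def p_perp(y_dist, nr, enr):
    """Approximate probability component p_p from the distance of a
    color to the candidate color path (distances in color-space units).

    - within NR: no penalty (probability 1)
    - between NR and ENR: assumed linear falloff (our choice)
    - beyond ENR: the color cannot belong to the path (probability 0)
    """
    y = abs(y_dist)
    if y <= nr:
        return 1.0
    if y >= enr:
        return 0.0
    return (enr - y) / (enr - nr)
```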
The second component p_c(a | b, A ≈ B) corresponds to the deviations along the color path. In addition to model errors and random noise it depends on the variations in the illumination. Therefore we include T_d in its definition:

p_c(a | b, A ≈ B) = ∫ p_c(a | b, A ≈ B, T_d) P(T_d) dT_d,   (5)

where P(T_d) indicates our prior knowledge about the T_d variation (it is equal to u_1(T_d) if no such knowledge is accessible). It is

p_c(a | b, A ≈ B, T_d) = u_m(a − b − T_d),   (6)

where u_k(x) denotes the uniform density that is nonzero for |x| ≤ k, and a − b is calculated along the color path. Comparing colors of segments that underwent the same illumination change, we set m to 0.2.

When no information about T_d is available, substituting Eq. 6 into Eq. 5 gives:

p_c(a | b, A ≈ B) = ∫ u_0.2(a − b − T_d) u_1(T_d) dT_d.   (8)
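Eq. 8 can be evaluated numerically as the overlap integral of two uniform densities. This is a sketch under our reading of u_k as the uniform density on [−k, k]:

```python
def u(x, k):
    """Uniform density on [-k, k] (our reading of the paper's u_k)."""
    return 1.0 / (2.0 * k) if abs(x) <= k else 0.0

def p_along(d, m=0.2, n_steps=4000):
    """Numerically evaluate Eq. 8: the integral over T_d of
    u_m(d - T_d) * u_1(T_d), where d = a - b along the color path."""
    total = 0.0
    step = 2.0 / n_steps
    for i in range(n_steps):
        t = -1.0 + (i + 0.5) * step  # midpoint rule over [-1, 1]
        total += u(d - t, m) * u(t, 1.0) * step
    return total
```

The result is a trapezoid-shaped density: flat for small |d| and zero once the two supports no longer overlap, which matches the intuition that large color displacements cannot be explained by any admissible illumination change.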
To continue, we want to precisely define the term match, as its apparently straightforward meaning has diverse interpretations. We say that segment A matches segment B if the physical area covered by segment A is covered at least partly by segment B and fully by B together with its neighbors B_i. This definition is much more robust to segmentation flaws. Formally we write:
A → B if (A ⊆ [B ∪ B_1 ∪ ... ∪ B_n]) ∧ (A ∩ B ≠ ∅).   (9)

Occasionally we may omit → for simplification. Note that the relation of Eq. 9 is not symmetric: A → B ⇏ B → A.
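With segments represented as sets of pixel coordinates (our own simplification), the match relation of Eq. 9 can be tested directly:

```python
def matches(a, b, b_neighbors):
    """Test A -> B per Eq. 9: A must overlap B itself and be fully
    covered by B together with B's neighbors.  Segments are sets of
    pixel coordinates."""
    cover = set(b)
    for n in b_neighbors:
        cover |= n
    return a <= cover and bool(a & b)
```

Note the asymmetry: a small over-segmented piece A can match a large segment B while B fails to match A, which is exactly why the two images are examined independently later on.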
Similarly, we define the density function for segment neighborhoods:

p(G_A = g_a | G_B = g_b, AB) ≡ p(g_a | g_b, AB),   (10)

where g_K represents the colors of G_K.
We assume that A → B implies that for every segment in G_A there is a segment in G_B that (at least partly) originates from the same surface. Formally:

A ⊆ [B ∪ B_1 ∪ ... ∪ B_n] ⇒ ∀j ∃i_j : A_j ∩ B_{i_j} ≠ ∅.   (11)
Assuming that the segments in G_A are independent we obtain

p(g_a | g_b, AB) = p_p(a | b, AB) · ...   (12)

where B_1(A_i) is the neighbor of B that maximizes ... and B_2(A_i) maximizes ...
To make the algorithm more robust we do not calculate the product in Eq. 12 for all A_i, but only for a predefined percentage of the neighbors (called the minimal number of inliers) that provide the highest probabilities.
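The robust product over only the best-matching fraction of neighbors can be sketched as follows (function and parameter names, and the default fraction, are ours):

```python
import math

def robust_product(probs, inlier_fraction=0.7):
    """Multiply only the highest per-neighbor probabilities.

    Keeps the top `inlier_fraction` of the values (at least one), so a
    few badly-segmented or occluded neighbors cannot zero out the
    whole segment-to-segment probability.
    """
    keep = max(1, math.ceil(inlier_fraction * len(probs)))
    best = sorted(probs, reverse=True)[:keep]
    prod = 1.0
    for p in best:
        prod *= p
    return prod
```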
Moreover, substituting T̂_d = a − b in Eq. 12 instead of calculating the integral over T_d ∈ [−1, 1] significantly speeds up the algorithm without hurting its performance.
3.2. Matching probability definition
We are interested in the probability of segment A of the
first image to match segments Biof the second image (the
segments of the two images are examined independently).
We assume that a segment can match only one segment of
another image, but in turn can be matched by any number
of segments. According to the law of total probability:

Σ_{B_i ∈ Image 2} P(AB_i | g_a, g_{b_i}) + P(AB_∅ | g_a) = 1,   (13)

where P(AB_∅ | g_a) is the probability of A to represent an object that is not in the second image. We set this number to a constant for all segments.
From Bayes' law:

P(AB_k | g_a, g_{b_k}) = p(g_a | g_{b_k}, AB_k) P(AB_k | g_{b_k}) / p(g_a | g_{b_k}).   (14)

If no prior information is available to prefer B_k over other segments, P(AB_i | g_{b_i}) are equal for all i and can be removed. p(g_a | g_{b_i}) = Σ_{B_i} p(g_a | g_{b_i}, AB_i) P(AB_i | g_{b_i}) is the normalizing coefficient. Thus Eq. 14 can easily be calculated from Eq. 12 and Eq. 10.
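The normalization described above amounts to the following computation (a sketch; `p_no_match` stands for the constant P(AB_∅) and its default value is our placeholder):

```python
def match_posteriors(likelihoods, p_no_match=0.1):
    """Turn per-candidate likelihoods p(g_a | g_b_i, AB_i) into
    posteriors P(AB_i | ...) that sum, together with the 'object
    absent' probability, to one (Eq. 13 and Eq. 14).

    With no prior preference among candidates, the priors cancel and
    the posteriors are the likelihoods scaled to mass 1 - p_no_match.
    """
    total = sum(likelihoods)
    if total == 0.0:
        return [0.0] * len(likelihoods)
    scale = (1.0 - p_no_match) / total
    return [l * scale for l in likelihoods]
```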
4. SIFT-based PIM
In the previous section we showed how to calculate the
probability of two segments to match each other. However
bare segment-to-segment matching cannot provide enough
information for many subsequent computer vision tasks (for
example 3D reconstruction algorithms require point corre-
spondences). As a variety of successful techniques has been developed, we propose to improve the point correspondence quality by combining segment-to-segment matching probabilities with the point features provided by them.
Our experiments demonstrated that even a simple, intuitive approach of rejecting point features that do not reside in matchable segments can cause a significant increase in the percentage of correct matches. To achieve even better results, we
suggest a more sophisticated method that incorporates point
and segment probabilities in a process whose purpose is to
separate correct and spurious matches.
Our method is suitable for any point feature correspon-
dence algorithm that is able to provide correctness proba-
bility for a match or at least an overall inlier rate (percent of
correct matches). We selected Lowe’s SIFT algorithm 
to present the method’s capabilities.
Several new definitions are used in the following sections:
j_k - member of keypoint pair j that resides in image k.
S - locations of all keypoint pairs.
S_j - locations of keypoint pair j.
S_A - locations of all keypoint pairs that have a member in segment A.
ρ_j ≡ P(C_j) - probability that keypoint pair j is a correct match.
Our algorithm is summarized as follows:
1. Calculate probabilities P(AB) using Eq. 14.
2. Obtain initial probabilities ρ_j from SIFT.
3. Update the segment-to-segment matching probabilities given the spread of the keypoints and their correspondence probabilities - P(AB | S, P(C)).