IMAGE MATCHING ROBUST TO CHANGES IN IMAGING CONDITIONS WITH A CAR-MOUNTED CAMERA
Naoko Enami, Norimichi Ukita, Masatsugu Kidode
Nara Institute of Science and Technology
8916-5 Takayama-cho, Ikoma-shi, Nara 630-0192, JAPAN
ABSTRACT

In this paper, we propose a matching method for images captured at different times and under different capturing conditions. This matching is difficult for low frame-rate images captured asynchronously. To cope with this difficulty, previous and current panoramic images are created from sequential images that are rectified based on the view direction of the camera, and then compared. In addition, in order to make the matching method applicable to images captured under varying conditions, (1) for different lanes, enlarged/reduced panoramic images are compared with each other, and (2) robustness to noise and changes in illumination is improved by using edge features. To confirm the effectiveness of the proposed method, we conducted experiments matching real images captured under various capturing conditions.
Index Terms— Image Matching, Car Mounted Camera,
GPS, Panoramic Image
1. INTRODUCTION

Car navigation systems are widely used in society. For accurate navigation, the map should be up-to-date. However, most map information is updated on the basis of surveillance fieldwork.
Map information consists of road information and streetscape information. Road information describes roads (e.g., the locations of road markers and traffic signals, the number of lanes). Streetscape information provides the locations of stores/buildings that can serve as landmarks and destinations for drivers. Road information is scheduled to be compiled as a database by the government ministries. Therefore, if streetscape information can also be automatically updated by change detection in streetscapes, all the map information can be updated efficiently.
Methods for detecting changes in streetscapes by comparing their images have been proposed. One of the methods uses satellite images. The change of a streetscape is detected by the difference between segments in images that are captured at the same location at different times. However, some important information, such as signs, cannot be obtained because the images are captured from the sky. On the other hand, another method uses a car-mounted omni-directional camera and an off-the-shelf GPS. Since the sides of buildings are thus observed, the information obtained by this method is more detailed than that obtained from satellite images. Image sequences observed at roughly the same location are first extracted from a number of images that are captured at different times with position information. Since the off-the-shelf GPS has a margin of error of about 15m, the images at the same location are extracted by matching between the sequences. The change in streetscapes is then detected by the pixel difference between the extracted images.
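Given the 15m GPS error mentioned above, the extraction of images observed at roughly the same location can be sketched as a position gate: frames from two drives whose GPS fixes fall within the error radius become candidate matches, which image comparison then refines. The following is a minimal sketch; the function and variable names are illustrative, not from the paper.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6_371_000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def candidate_pairs(prev_frames, curr_frames, radius_m=15.0):
    """Pair up frames captured within radius_m of each other.

    prev_frames / curr_frames are lists of (lat, lon) fixes, one per
    frame; the resulting index pairs would then be refined by the
    image-based matching described in the paper.
    """
    pairs = []
    for i, (plat, plon) in enumerate(prev_frames):
        for j, (clat, clon) in enumerate(curr_frames):
            if haversine_m(plat, plon, clat, clon) <= radius_m:
                pairs.append((i, j))
    return pairs
```

The quadratic scan is only for clarity; a real server would index fixes spatially before pairing.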
Since this method assumes that an omni-directional camera captures images under convenient capturing conditions (i.e., high frame-rate capturing at a low car speed), (1) a small change is difficult to detect because the resolution per area of the omni-directional camera is low, and (2) wide areas cannot be observed simultaneously because image observation under the above conditions is not desired for public automobiles; that is, probe cars are required, and it is impossible to secure a large number of probe cars in order to observe wide areas.
Consequently, we propose a map-update system with the following features:
• Wide areas can be observed simultaneously by obser-
vation from a number of normal automobiles.
• The system assumes that each automobile has an or-
dinary low-resolution car-mounted camera, of the type
which is often used in a drive-recorder, and an off-the-
shelf GPS receiver.
• The collected information is sent to a server system via wireless communication. The server system analyzes the collected images in order to detect changes in streetscapes.
978-1-4244-2665-2/08/$25.00 ©2008 IEEE
As with the previous method, our system first matches
previous and current images captured in the same location.
Any change is then detected based on the difference between
the matched images. However, our system has the following
problems that need solving:
• An off-the-shelf GPS receiver has a margin of error of about 15m.
• Images captured at different times look different due to changes in illumination.
• Since images are captured from a number of normal automobiles, the positions and internal parameters of the cameras are different, and the frame rate of the images is low due to wireless communication. These features result in a large difference in the image-capturing positions.
• The scale of objects in the images changes significantly.
While the first two problems are also dealt with by existing methods, the others are problems specific to our system.
We propose an image matching method that solves all the
problems. This matching method must be robust to changes
in capturing conditions that are due to observation by various
car-mounted cameras at different times.
2. MAP UPDATE SYSTEM
In this section, we describe the details of the map update system. This system collects detailed streetscape information from normal automobiles, and changes in streetscapes are detected by using the collected information.
For simultaneous wide-area observation, the normal automobiles shown in Fig.1(a) collect information while running freely with a monocular camera and a GPS. The camera position and view direction are different for each automobile. The camera is assumed to be set behind the rearview mirror, with its view direction pointed in the automobile's moving direction, as shown in Fig.2. In Fig.1(b), the collected information (i.e., image, positioning information, capturing time, internal parameters of the camera, and camera position) is sent to the map update system via wireless communication such as the cell-phone link used by an existing map-delivery system. Because of the wireless communication, images must be compressed (e.g., JPEG) and low resolution (e.g., 640×480 pixels) in order to decrease data traffic. However, unlike an omni-directional image that captures a 360-degree view in a 360×240-pixel image, a normal camera can capture the detail
Fig. 1. Navigation map update system.
Fig. 2. Camera mounted in a car.
of a streetscape (e.g., a store sign) even in a low-resolution image, as shown in Fig.3. If the imaging conditions are assumed to be the following, a maximum of 450KB of transmission occurs per second: the file size of an image is about 100KB-150KB, and the frame rate is 3fps. The bandwidth of a current cell-phone is about 269kbps, whereas that of a next-generation cell-phone is 20Mbps in an experimental measurement. Therefore, if the above-mentioned compressed images are received selectively from only several cars, it is possible for the server system to collect enough information online to update a map. The system updates the map using the collected information as illustrated in Fig.1:
(1) The previous information database is compiled from the input data.
(2) The system matches the previous and current image sequences that are captured in the same location.
(3) The change is detected by comparing the matched images.
(4) The map is updated based on the detected change.
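As a quick check of the traffic estimate above (all constants are the figures quoted in this section; kilo/mega prefixes are assumed decimal):

```python
# Sanity check of the transmission figures: 100-150KB JPEG frames at
# 3fps against current and next-generation cell-phone bandwidths.

frame_kb_max = 150                       # largest JPEG frame, KB
fps = 3                                  # frames per second per car
per_car_kbps = frame_kb_max * fps * 8    # 450 KB/s = 3600 kbps (3.6 Mbps)

current_gen_kbps = 269                   # current cell-phone bandwidth
next_gen_kbps = 20_000                   # next-generation (20 Mbps, experimental)

# A single car already exceeds the current-generation link ...
assert per_car_kbps > current_gen_kbps
# ... but the next-generation link carries several cars at once,
# which is why the server receives images selectively from a few cars.
print(next_gen_kbps // per_car_kbps)  # -> 5
```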
Fig. 3. (a): Observed image in our system, (b): Omni-directional image in the previous method.
C. Harris and M. Stephens, "A combined corner and edge detector," 4th Alvey Vision Conf., pp.147-151, 1988.
Z. Zhang, "A Flexible New Technique for Camera Calibration," IEEE Trans. on PAMI, Vol.22, pp.1330-1334, 2000.
H.-Y. Shum and L.-W. He, "Rendering with concentric mosaics," ACM SIGGRAPH, pp.299-306, 1999.
M. Brown and D. G. Lowe, "Automatic Panoramic Image Stitching using Invariant Features," IJCV, Vol.74, 2007.
B. Heigl, R. Koch, M. Pollefeys, J. Denzler, and L. Van Gool, "Plenoptic modeling and rendering from image sequences taken by a hand-held camera," DAGM Symposium, pp.94-101, 1999.