Traversing Complex Environments Using Time-Indexed High Dynamic Range Panoramas
Antonio Hui* (INSIGHT, Ramesseum.com)
Philippe Martinez (MAFTO, INSIGHT)
Kevin Cain (INSIGHT)
1 Introduction
Static panoramic photography has been shown to contribute to
context-rich descriptions of regions under archaeological study
[Allen et al. 2004]. We show that fast traversal through a matrix
of dynamic panoramas can allow users to quickly locate specific
target features within a complex scene. Results are presented
using two large archaeological monuments as test subjects.
2 Capture and Processing
We collect source images using a camera with a circular fisheye
lens, articulated by a custom rotation controller. Capture is
automated. The camera is positioned by unidirectional,
monotonic rotation around the lens nodal point. Exposures are
taken at ten discrete rotation intervals. To enable high
dynamic range output, three bracketed exposures are acquired at
each camera position.
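The capture schedule above (one monotonic revolution, ten rotation stops, three bracketed exposures per stop) can be sketched as a simple controller loop. The `rotate_to` and `shoot` callables below are hypothetical stand-ins for the hardware interface, not the authors' actual rotation-controller API.

```python
# Sketch of the automated capture schedule: one unidirectional,
# monotonic revolution with ten stops and a three-exposure bracket
# at each stop. Hardware calls are hypothetical placeholders.

NUM_STOPS = 10
EXPOSURE_BRACKET_EV = (-2.0, 0.0, +2.0)  # under-, base, over-exposure

def capture_revolution(revolution_index, rotate_to, shoot):
    """Capture one time-lapse frame's worth of source images.

    rotate_to(angle_deg): rotate the camera about the lens nodal point.
    shoot(ev): trigger an exposure at the given EV offset; returns
               an image identifier.
    """
    frames = []
    for stop in range(NUM_STOPS):
        angle = 360.0 * stop / NUM_STOPS  # angles increase monotonically
        rotate_to(angle)
        bracket = [shoot(ev) for ev in EXPOSURE_BRACKET_EV]
        frames.append((revolution_index, stop, angle, bracket))
    return frames
```

Each call to `capture_revolution` yields the raw material for one frame of the time-lapse panoramic sequence; repeating the call builds the sequence revolution by revolution.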
Figure 1. Initial image assembly from fisheye images.
As the capture process iterates, a time-lapse sequence is built
frame by frame with each successive revolution of the camera.
The rotation period is approximately 1.8 minutes. In the
processing steps, source images are geometrically calibrated to
account for lens distortion and an HDR representation is
computed. A rendered view for one frame of a panoramic
sequence is shown in Figure 1.
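The HDR computation in the processing step can be illustrated with a minimal NumPy sketch of one common merging scheme: a hat-weighted average of exposure-normalized radiance estimates, which trusts mid-tone pixels and discounts near-black and near-white ones. This is an illustrative stand-in under the assumption of a linear camera response, not the authors' exact pipeline, and it omits the geometric fisheye calibration.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge bracketed LDR exposures into one HDR radiance map.

    images: list of float arrays in [0, 1], all the same shape.
    exposure_times: relative shutter time for each image.
    Assumes a linear response; each pixel's radiance estimate
    (value / exposure time) is averaged with a hat weight that
    peaks at mid-gray and vanishes at the clipped extremes.
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight, peak at 0.5
        num += w * (img / t)               # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)
```

Because over-exposed (clipped) pixels receive zero weight, the bright regions of the scene are recovered from the short exposures and the dark regions from the long ones.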
In our viewer application, we first index the recorded panoramas
in a spatial network, using global coordinates recorded during
image capture. After initialization, users interactively navigate
through the image data using a novel multi-node viewer in which
each node represents a different location on the time axis. The
user’s current position is drawn in the viewer’s main window.
The 3D position of each capture location in the dataset is
projected into this view as a navigation link (Figure 2). By hovering
over a given link, the user obtains an interactive preview of the
linked panorama. In a separate window, a ground plan of the site
enables spatial domain traversal of the scene. Indexed metadata is
presented in a third window. View information between windows
is updated in real time via an XML data stream (Figure 3).
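The viewer logic described above, indexing panoramas by the global coordinates recorded at capture, offering the spatially nearest nodes as navigation links, and broadcasting view state as XML, can be sketched as follows. The class and function names, the choice of k nearest neighbors, and the XML attribute layout are all assumptions for illustration, not the paper's actual implementation.

```python
import math
import xml.etree.ElementTree as ET

class PanoNode:
    """One panorama in the spatial network."""
    def __init__(self, node_id, x, y):
        self.node_id = node_id
        self.x, self.y = x, y  # global coordinates recorded at capture

def nearest_links(nodes, current, k=4):
    """Return the k spatially closest nodes to `current`: the
    candidates drawn as navigation links in the main window."""
    others = [n for n in nodes if n.node_id != current.node_id]
    others.sort(key=lambda n: math.hypot(n.x - current.x, n.y - current.y))
    return others[:k]

def view_state_xml(node_id, yaw, pitch, fov):
    """Serialize the current view as a small XML message of the kind
    that could keep the three windows in sync (format is assumed)."""
    el = ET.Element("view", nodeId=str(node_id), yaw=f"{yaw:.2f}",
                    pitch=f"{pitch:.2f}", fov=f"{fov:.2f}")
    return ET.tostring(el, encoding="unicode")
```

On each camera move, the main window would emit one such XML message, which the ground-plan and metadata windows consume to update their own displays.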
--------------------------------------------
*e-mail: antonioh@mac.com
 e-mail: pmartine@ens.fr
 e-mail: kevin@insightdigital.org
Figure 2. User interface for the panoramic viewer.
3 Results and Validation
To test our system, we acquired 30 time-lapse image nodes in situ
at Chichén-Itzá, Mexico. Separately, we acquired 60 image nodes
at the Temple of Ramses II, Egypt. Archaeological researchers
were asked to use our viewer system to identify 20 specific, non-
repeating visual features present in each actual scene. They were
then asked to identify the same features by viewing the set of
rectified input photos gathered during our capture step. We found
that the seek time for a given target feature when using our system
was shorter by a factor of 3.1 for the smaller data set, and by a
factor of 7.4 for the larger data set. Our approach was shown to be an efficient,
low cost technique for interrogating complex real-world scenes.
We also found that the time advantage scales with database size.
Figure 3. Components of the interactive viewer.
References
ALLEN, P., FEINER, S., TROCCOLI, A., BENKO, H., ISHAK, E., AND
SMITH, B. 2004. Seeing into the Past: Creating a 3D Modeling
Pipeline for Archaeological Visualization. In Proceedings of
the 2nd International Symposium on 3D Data Processing,
Visualization, and Transmission (3DPVT '04), IEEE Computer
Society, 751-758.