Principles and Practices of Robust, Photography-based
Digital Imaging Techniques for Museums
Mark Mudge, Carla Schroer, Graeme Earl, Kirk Martinez, Hembo Pagi, Corey Toler-Franklin, Szymon Rusinkiewicz,
Gianpaolo Palma, Melvin Wachowiak, Michael Ashley, Neffra Matthews, Tommy Noble, Matteo Dellepiane
Abstract
This full day tutorial will use lectures and demonstrations from leading researchers and museum practitioners to
present the principles and practices for robust photography-based digital techniques in museum contexts. The tutorial
will present many examples of existing and cutting-edge uses of photography-based imaging including Reflectance
Transformation Imaging (RTI), Algorithmic Rendering (AR), camera calibration, and methods of image-based
generation of textured 3D geometry.
Leading museums are now adopting the more mature members of this family of robust digital imaging practices. These
practices are part of the emerging science known as Computational Photography (CP). The imaging family’s common
feature is the purpose-driven selective extraction of information from sequences of standard digital photographs. The
information is extracted from the photographic sequences by computer algorithms. The extracted information is then
integrated into new digital representations containing knowledge not present in the original photographs, examined
either alone or sequentially.
The tutorial will examine strategies that promote widespread museum adoption of empirical acquisition technologies,
generate scientifically reliable digital representations that are ‘born archival’, assist this knowledge’s long-term digital
preservation, enable its future reuse for novel purposes, aid the physical conservation of the digitally represented
museum materials, and enable public access and research.
Keywords: Reflectance transformation imaging, empirical provenance, photogrammetry, non-photorealistic rendering,
digital preservation, cultural heritage
1. Tutorial Overview
Today, leading museums are adopting a new family of
robust digital imaging practices. This imaging family’s
common feature is the purpose-driven selective extraction of
information from a sequence of standard digital
photographs. The information extracted from the
photographic sequence is selected by computer algorithms.
The extracted information is then integrated into a new
digital representation containing knowledge not present in
the original photographs, examined either alone or
sequentially. These practices are part of the emerging
science known as Computational Photography.
The algorithms can be embedded in software tools that
keep the computer science ‘under the hood’ and allow the
user to ‘drive’ the tools in service of their customary
working culture. No ongoing assistance from outside digital
imaging technologists is necessary.
The imaging family is able to process the information from
the photographs with only minor user involvement. This
highly automatic operation permits the writing of a scientific
‘lab notebook’ chronicling each of the means and
circumstances of the new digital representation’s
generation. This log permits qualitative evaluation of the
representation’s reliability and suitability for its original and
potential novel purposes both now and in the future.
Following international metadata standards, the lab notebook, bundled with the original photographs and the newly generated representations, forms a 'born archival' package ready for ingest into the world's knowledge base and the museum/library/archive long-term digital preservation environment.
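As a concrete illustration, one entry in such a machine-readable log might look like the sketch below; the field names are hypothetical, chosen for illustration rather than taken from any published schema:

    import json

    # One hypothetical process-history entry: which tool ran, on which
    # input photographs, with which parameters, producing which result.
    log_entry = {
        "event": "fit_rti",
        "timestamp": "2010-08-15T14:02:11Z",
        "tool": {"name": "rti_fitter", "version": "1.0"},
        "inputs": ["capture/IMG_0001.dng", "capture/IMG_0002.dng"],
        "parameters": {"model": "PTM", "light_estimation": "highlight_detection"},
        "output": "derived/lintel.ptm",
    }
    print(json.dumps(log_entry, indent=2))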
The following presentations describe the practices of
Reflectance Transformation Imaging, Algorithmic
Rendering, dense, close range Photogrammetry, semantic
knowledge management, long term digital preservation, and
the application of these tools within museums and cultural
heritage environments.
1.1 Sequence of Presentations
Mark Mudge and Carla Schroer from Cultural Heritage
Imaging will present an overview of the themes uniting the
tutorial’s presentations. They will explore issues that
influence technology adoption decisions and the advantages
that can be realized when image-based empirical
information acquisition is organized in conformance with
the fundamental principles of science. They will also present
a unified photographic data capture strategy that acquires all
the information necessary to enable Reflectance
Transformation Imaging, Algorithmic Rendering and
Photogrammetry.
Graeme Earl, Kirk Martinez, and Hembo Pagi from
Southampton University will provide a summary of their
uses of reflectance transformation imaging in archaeological
contexts. They will also introduce the UK Arts and
Humanities Research Council funded Reflectance
Transformation Imaging (RTI) System for Ancient
Documentary Artefacts project. The AHRC RTI project is a
collaboration with Alan Bowman, Charles Crowther and
Jacob Dahl at the University of Oxford.
Corey Toler-Franklin and Szymon Rusinkiewicz from
Princeton University will discuss Algorithmic Rendering
(AR). Their AR work takes photographic image sequences
containing reflective spheres, such as the RTI data set, and
generates RGBN images with per-pixel color and surface
shape information, in the form of surface normals. These
RGBN images are powerful tools for documenting complex
real-world objects because they are easy to capture at a high
resolution, and readily extendible to processing tools
originally developed for full 3D models. Most state-of-the-
art non-photorealistic rendering algorithms are simply
functions of the surface normal, lighting and viewing
directions. Simple extensions to signal processing tools can
preserve the integrity of the normals, while introducing a
wide range of control for a variety of stylistic effects. RGBN
images are more efficient to process than full 3D geometry,
requiring less storage and computation time. Functions are
computed in image space producing powerful 3D results
with simpler 2D methods.
Gianpaolo Palma from the Visual Computing Lab of the Italian National Research Council's (CNR) Institute for
Information Science and Technology (ISTI) will present two
tools to visualize and analyze RTI images in an interactive
way. The first one is a multi-platform viewer, RTIViewer,
which also works remotely through HTTP and allows the user to apply a set of new shading enhancement techniques that improve the virtual examination and
interpretation of details of the artifact. The second is a web
application based on SpiderGL [DBPGS10], a JavaScript
3D graphics library that relies on WebGL, permitting the realtime rendering of huge RTIs with a multi-resolution encoding in the next generation of web browsers.
Mel Wachowiak from the Smithsonian Institution’s
Museum Conservation Institute (MCI) will describe some
museum uses of RTI and its place among photographic
capture and 3D scanning at the Smithsonian Institution (SI).
MCI has a central role as a research unit and collaborator in
analysis of heritage objects and sites. MCI’s part in the
digitization of collections is to offer an expanded vision of
the application of appropriate technologies. He will show how RTI fills a niche that other imaging solutions cannot, offering an immersive, near-3D experience and image processing tools, as well as accurately documenting features that are impossible to acquire with 3D scanning. He will also
show a broad range of RTI projects. These have ranged in
size and scope from tiny natural history specimens to large
artworks, both in the studio and on location. Buttons,
jewelry, fossils, prehistoric stone tools and many other
materials will demonstrate the strengths and weaknesses of
the current RTI technology and software.
Michael Ashley from Cultural Heritage Imaging will
discuss and demonstrate practical digital preservation
frameworks that protect images throughout the entire
production life-cycle. Using off-the-shelf and open source
software coupled with a basic understanding of metadata, he
will show it is possible to produce and manage high value
digital representations of physical objects that are born
archive-ready and long-term sustainable. He will also
demystify the alphabet soup of file formats, data standards,
and parametric imaging, and demonstrate proven workflows
that can be deployed in any museum production
environment, scalable from the individual part-time shooter to full-fledged imaging departments.
Neffra Matthews and Tommy Noble from the U.S.
Department of the Interior, Bureau of Land Management’s,
National Operations Center will present the principles of
photogrammetry, deriving measurements from photographs.
They will demonstrate that by following the
photogrammetric fundamentals, mathematically sound and
highly accurate textured 3D geometric results may be
achieved. They will also show how technological advances
in digital cameras, computer processors, and computational
techniques, such as sub-pixel image matching, make
photogrammetry an even more portable and powerful tool.
Extremely dense and accurate 3D surface data can be
created with a limited number of photos, equipment, and
image capture time.
Matteo Dellepiane from the Visual Computing Lab of the
Italian National Research Council’s (CNR) Institute for
Information Science and Technology (ISTI) will present two applications. The first is an alternative method for generating textured 3D geometry for interpretive purposes using the Arc3D web service, which takes user-uploaded, uncalibrated photographic sequences as input, generates a 3D model, and returns it to the user. The second application, MeshLab, is an open source tool for processing 3D data from a wide variety of 3D scanning and image-based sources into high quality 3D geometric models.
The tutorial will also include a live demonstration by Mark
Mudge and Carla Schroer of the Highlight RTI image
acquisition process along with the capture of a camera
calibration and photogrammetric image sequences.
*********************************************
2. Integrated Capture Methods for the Generation of
Multiple Scientifically Reliable Digital Representations
for Museums
Tutorial Presenters: Mark Mudge, Carla Schroer
Additional Author: Marlin Lum
Cultural Heritage Imaging, USA
Adoption of RTI tools is underway at leading museums
including the Smithsonian Institution, the Museum of
Modern Art, the Metropolitan Museum, the Fine Arts
Museums of San Francisco, the Los Angeles County
Museum of Art, and the Worcester Art Museum. The
lessons learned by CHI and its collaborators, which
established the sufficient conditions for this adoption, can
guide the development of emerging technologies and the
adaptation of existing technologies to the adoption
requirements of the museum community and cultural
heritage activities generally.
Figure 1: Unified photo sequence capture of the
Sennenjem Lintel from the collection of the Phoebe A.
Hearst Museum of Anthropology at the University of
California Berkeley. Data to generate RTIs, ARs, and
dense textured 3D geometry were acquired during the session.
2.1 Factors influencing widespread adoption of digital
imaging practices
CHI and our collaborators have extensively discussed the
obstacles to widespread adoption of robust digital
documentary technologies by cultural heritage professionals
and the means to remove these obstacles in prior literature [MMSL06] [MMC*08] [RSM09]. The following material reviews the central themes of this analysis.
2.1.1 Ease of use for museum professionals
Designed from the beginning through intensive
collaboration with cultural heritage practitioners,
Reflectance Transformation Imaging (RTI), and related
emerging technologies such as Algorithmic Rendering (AR)
along with its next generation Collaborative Algorithmic
Rendering Engine (CARE) tool, are crafted to be compatible
with current working cultures and digital-imaging skill sets.
The goal is to democratize technology and foster widespread
adoption of robust digital documentary methods by greatly
reducing the barriers of cost and technological complexity
that characterize many current 3D methodologies.
Until recently, adoption of robust digital practices was
slow in museum contexts, largely because many of today’s legacy digital practices required museum workers to seek
help from expensive digital imaging experts, or to learn
complex computer programs themselves. For successful
widespread adoption, practices must not require extensive
technical re-education, and must remain within the scope of
restrictive budgets.
The key design insight behind Cultural Heritage Imaging’s
(CHI’s) international RTI software research development
collaborations and now the AR-based emerging CARE tool
is that automation of digital processing tasks can put the
computer-science complexity and genius ’under the hood,’
leaving humanities users free to explore in the direction that
accomplishes their primary objectives, using their
knowledge more effectively. This strategy overcomes the
’hard to learn,’ ’hard to use’ obstacles to digital technology
adoption and greatly enhances the effective use of work and
research time among domain experts.
2.1.2 Scientific reliability
Over the past eight years, CHI's discussions with
numerous humanities and natural science professionals
revealed that widespread adoption of digital representations
in all fields, including the multi-disciplinary study of
cultural heritage, requires confidence that the data they
represent are reliable. This confidence requires means to
qualitatively evaluate the digital representation. For scholars
to use digital representations built by someone else, they
need to know that what is represented in the digital
surrogate is truly what is observed on the physical original.
If archaeologists are relying on digital representations to
study Paleolithic stone tools, they must be able to judge the
likelihood that a feature on the representation is also on the
original and vice versa. For scholars to widely adopt the use of digital representations, they must be able to place absolute
trust in the representation’s quality and authenticity.
RTIs and the CARE tool are designed to record the same
information that a scientist records in a lab notebook or an
archaeologist records in field notes. The RTI and CARE
tools are and will be based on digital photography, capable
of automatic post-processing and automatic recording of
image generation process history in a machine readable log.
Additional software features are under construction. These
features will automatically map this log to a semantically
robust information architecture. Once the mapping process
has been completed, digital processing can automatically
record empirical provenance information into these
semantic architectures. We will document process history
within CARE using the same robust semantic knowledge
management common language, the International Council
of Museums’ (ICOM) Committee on Documentation’s
(CIDOC) Conceptual Reference Model (CRM) Special
Interest Group’s ISO standard 21127 [CCRMweb],
including its most recent extension CRM Digital [TTD*10].
This work will build upon CHI’s deep involvement in the
CRM, including the recent amendment to permit its use to
record process-history provenance during the ’digitization
process’ of ’digital objects.’ Incorporation of semantic
knowledge management greatly simplifies long-term
preservation, permits concatenation of RTI information and
information related to its real-world subjects archived in
many collections using dissimilar metadata architectures,
and demystifies queries of vast amounts of information to
efficiently find relevant material. Semantically managed
archives remove physical barriers to scholarly and public
access and foster widespread information re-purposing,
future re-use of previously collected information, public
access, and distributed scholarship.
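As a loose sketch of what recording process history in such a semantic architecture might look like, consider the triples below; the class and property names are simplified placeholders for illustration, not exact CIDOC CRM or CRM Digital identifiers:

    # Hypothetical triples linking a derived RTI to its source
    # photographs and the processing event that produced it.
    triples = [
        ("lintel_rti", "was_generated_by", "fitting_event_42"),
        ("fitting_event_42", "is_a", "digitization_event"),
        ("fitting_event_42", "used_input", "capture/IMG_0001.dng"),
        ("fitting_event_42", "used_software", "rti_fitter v1.0"),
        ("fitting_event_42", "has_parameter", "model=PTM"),
    ]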
Each RTI and AR records the access path to the original
empirical data — in this case, the raw photographs and
processing files. RTIs and ARs are constructed to contain
links to their raw data, and are bundled with the raw data
when archived. As we have seen, because the processing of
the raw photos into RTIs and ARs is performed
automatically, we can automatically save the history of this
process, each step of the way.
The CARE tool will display a visual gallery of different
graphic possibilities by performing mathematical
transformations not only on the color information, but also
on the rich 3D surface-structure information derived from
the originally captured photo sequence. Such a gallery of
surface-feature depiction and visual emphasis can disclose
both anticipated information and accidental discoveries
uncovered by processing pipeline options never imagined by
the user.
Because the record of the processing pipeline settings is
compact (in contrast to saving the entire image at each
stage), this permits the transmission of the settings over a
network to another user who has the same original data. This
enables collaborative, interactive visualization design. Any
changes performed by one user may be instantly visible to
the other. This allows for easy interaction among multiple
domain experts, image-processing experts, and professional
illustrators, working together to produce the most effective
possible visualizations. The CARE tool will record this
entire history of parameter configuration, sample AR
generation, user evaluation in the form of AR selection and
parameter reconfiguration, subsequent AR generation,
evaluation, further AR generation, and so on, until the final
AR is created from the RTI capture set. This construction
history will be available during and after the AR creation,
and can be shared, in real time, with collaborators anywhere
in the world. This distributed interplay of creation and
discovery will become part of the AR record, enabling
others to relive moments of discovery and learn from
successful practices. Over time, as computer scientists
continue to develop processing possibilities and humanities
users continue to describe features that are useful to abstract
and emphasize, the number of available processing pipelines
is likely to grow very large and the opportunities for
serendipitous discovery will increase accordingly. Anyone
can view the final AR in an associated viewer and replay
this history of discovery and decision.
In summary, the RTIs and ARs build in the ability for
anyone to access both the original image data and the
complete RTI and AR generation process history, in order to
track and reconfirm the quality and authenticity of the data.
Both current and future users of these digital surrogates can
decide for themselves whether the RTI or AR is appropriate
for their research.
2.1.3 Usefulness for the museum community
The documentary usefulness of RTI technology has been
demonstrated in many natural science and cultural heritage
subject areas [MVSL05] and offers significant advantages,
suggesting widespread future adoption. RTI enables robust
’virtual’ examination and interpretation of real-world
subjects that possess surface relief features. An enormous
benefit of the technology is the fact that RTI information
can be mathematically enhanced to disclose surface features
that are impossible to discern under direct physical
examination, including raking light photography and
microscopy [CHIweb1]. There is a growing family of
enhancement functions that use RTI color and 3D shape data
to aid the examination, analysis, communication and
interpretation of scholarly material. The enhanced interplay
of light and shadow in the image interacts with the human
perceptual system to reveal fine details of a subject’s 3D
surface form. This ability to efficiently communicate both
color and true 3D shape information is the source of RTI’s
documentary power.
For many documentary purposes, RTI also offers cost and
precision advantages over other 3D scanning methods.
Reflectance information can be captured with widely
available and relatively inexpensive digital photographic
equipment. CHI has developed techniques for capturing
RTIs over a large size range, from a few millimeters to
several square meters, and for acquiring a sample density
and precision that most 3D scanners are unable to reach.
RTIs can capture the surface features of a wide variety of
material types, including highly specular reflective material
such as jade or gold.
The CARE tool will offer significant advantages to
museum operations including documentation, curation,
conservation, and public outreach. Museum professionals
will be able to generate high-quality, comprehensible
illustrations for scientific papers and books, with control
over selective emphasis, contrast, attention, and abstraction.
The process will have lower cost, greater flexibility, and
more precise archival documentation than is available with
hand-drawn or Photoshopped illustrations.
2.2 Unified capture methodology
Today we know how to capture the digital photographic image sequences that enable the integrated acquisition of
Reflectance Transformation Imaging (RTI), Algorithmic
Rendering (AR), digital camera calibration, and the
generation of measurable, dense, textured 3D geometry.
There are three photographic image sets required for the integrated capture process. The first sequence, the RTI and AR data acquisition set, requires a fixed camera-to-subject alignment. At least two black reflective spheres are placed near the subject in the camera's field of view. The subject is then illuminated from 24 to 72 evenly distributed lighting directions with a fixed light-to-subject distance. The second
photographic sequence captures the information necessary
to calibrate the camera. This sequence requires 6 to 8
overlapping photos with the camera positioned in different
horizontal and vertical orientations. A detailed discussion of
this procedure is found in Section 8. The camera calibration
permits optical distortion correction and ortho-rectification of the RTI and AR data set. It also lays the groundwork for
the third photographic sequence, the 66% overlapping set of
photos covering the subject that will be used to generate
dense, textured 3D geometry.
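As a minimal sketch of the arithmetic behind the 66% overlap figure (the function and its inputs are illustrative, not part of the capture protocol described above): each new camera station advances by the un-overlapped fraction of one frame's footprint.

    import numpy as np

    def photogrammetry_stations(subject_width, footprint_width, overlap=0.66):
        # Each station advances by the un-overlapped fraction of one frame.
        step = footprint_width * (1.0 - overlap)
        n = int(np.ceil(max(subject_width - footprint_width, 0.0) / step)) + 1
        return [i * step for i in range(n)]

    # Example: a 2.0 m subject shot with a 0.5 m frame footprint needs
    # stations every 0.17 m, i.e. 10 photos for full coverage.
    print(photogrammetry_stations(2.0, 0.5))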
An example of this unified capture method is CHI’s
documentation of the Sennenjem Lintel from the collection
of the Phoebe A. Hearst Museum of Anthropology at the
University of California Berkeley, depicted in Figure 1.
The RTI and dense textured 3D geometry results of the unified capture method are seen in Figures 2, 3, and 4
below. Images of the AR results from the lintel can be seen
in Section 4, Figure 7.
Figure 2: RTI representation of the Sennenjem Lintel
showing the effects of interactive relighting and
mathematical enhancement
Figure 3: Textured 3D geometry of the Sennenjem
Lintel
Figure 4: Un-textured 3D geometry of the Sennenjem
Lintel
2.3 Conclusion
Experience from the RTI and AR software architecture
design process has provided a road map to produce
scientifically reliable, ’born archival’ knowledge, built for
long-term digital preservation, that fosters widespread
adoption within the museum and cultural heritage
community.
Currently, the tools producing the highest quality 3D
textured geometry from photographic images are less likely
to see widespread adoption. They are proprietary,
expensive, or closely held. Process history logs from these
tools are also incomplete or non-existent. These attributes
make their long-term preservation less likely.
Nonetheless, it is now understood how to capture the image sequences needed to archive the information necessary to ensure that a subject’s reflectance properties, 3D geometry, and registered texture are well documented. While
the current textured 3D geometry processing software is difficult to adopt, practically out of reach, and offers less than the desired level of scientific reliability and long-term preservation prospects, capturing the documentary photographic sequences today will make the information available for future processing with tools that are, we hope, more affordable, easier to use, more scientifically reliable, preservation friendly, and widely adoptable.
For museums, this means that collection materials can
now be imaged once and returned to an optimized physical
preservation environment without fear that they will need to
be re-imaged in the near future. The information present in
the archived photo sequences will likely increase in value
and descriptive power as the computational photography
tools designed to exploit them increase in power and
practical adoptability.
References
[CCRMweb] CIDOC Conceptual Reference Model (accessed August 2010). http://cidoc.ics.forth.gr
[CHIweb1] Art Conservation and Reflectance
Transformation Imaging, Video. (accessed August 2010)
http://c-h-i.org/conservation
[TTD*10] Theodoridou, M., Tzitzikas, Y., Doerr, M.,
Marketakis, Y., Melessanakis, V., 2010. Modeling and
querying provenance by extending CIDOC CRM,
Distributed and Parallel Databases, Volume 27, Number
2 / April, 2010, pp. 169-210
[MMC*08] Mudge, M., Malzbender, T., Chalmers, A.,
Scopigno, R., Davis, J., Wang, O., Gunawardane, P.,
Ashley, M., Doerr, M., Proenca, A., Barbosa, J., 2008.
Image-based Empirical Acquisition, Scientific Reliability,
and Long-term Digital Preservation for the Natural
Sciences and Cultural Heritage. Eurographics Tutorial
Notes, 2008.
[MMSL06] Mudge M., Malzbender T., Schroer C., Lum M.,
2006. New Reflection Transformation Imaging Methods
for Rock Art and Multiple-Viewpoint Display. VAST:
International Symposium on Virtual Reality, Archaeology
and Intelligent Cultural Heritage (Nicosia, Cyprus,
2006), Ioannides M., Arnold D., Niccolucci F., Mania K.,
(Eds.), Eurographics Association, pp. 195–202.
[MVSL05] Mudge M., Voutaz J.P., Schroer C. and Lum M.,
2005. Reflection Transformation Imaging and Virtual
Representations of Coins from the Hospice of the Grand
St. Bernard. Proceedings of 6th International Symposium
on Virtual Reality, Archaeology and Cultural Heritage
(VAST2005), Mudge M., Ryan N., Scopigno R. (Eds.),
Eurographics Association, pp. 29–39, 2005.
[RSM09] Rabinowitz, A., Schroer, C., Mudge, M., 2009.
Grass-roots Imaging: A Case-study in Sustainable
Heritage Documentation at Chersonesos, Ukraine,
Proceedings of the CAA Conference March 22-26, 2009
Williamsburg Virginia, pp. 320-328
*********************************************
3. Reflectance Transformation Imaging (RTI) System
for Ancient Documentary Artefacts
Tutorial Presenters: Graeme Earl1, Kirk Martinez2 and
Hembo Pagi1
1 Archaeological Computing Research Group, School of
Humanities, University of Southampton, UK
2 School of Electronics and Computer Science, University
of Southampton, UK
3.1 Introduction
This tutorial will provide a summary of our uses of
reflectance transformation imaging in archaeological
contexts. It also introduces the UK Arts and Humanities
Research Council funded Reflectance Transformation
Imaging (RTI) System for Ancient Documentary Artefacts
project. Some of the case studies and methodologies
introduced here are explored in more detail in [EBMP10]
and [EMM10]. The AHRC RTI project is a collaboration
with Alan Bowman, Charles Crowther and Jacob Dahl at the
University of Oxford.
3.2 Recent applications and lessons learned
Over the past five years we have been undertaking RTI
data capture in a broad range of cultural heritage contexts. In
each case the capture technologies employed have been
adapted as far as possible to suit specific needs. Experiences
from this process have fed directly into the RTI DEDEFI
project.
3.2.1 Conservation recording
We have applied RTI techniques in a range of
conservation contexts. For example, on projects with
English Heritage and the Hampshire and Wight Trust for
Maritime Archaeology we are using RTI datasets alongside
non-contact digitizing via a Minolta laser scanner to provide
an emerging conservation record of wooden artefacts
recovered from shipwreck and submerged landscape
contexts. RTI in particular has proved itself an invaluable
interrogative tool for conservators and artefact specialists. Firstly, the RTI data produced provide a low-cost, easy, portable and interactive means for engaging with fine
surface detail. Secondly, comparisons between RTI datasets
pre- and post-conservation identify clear transformations in
the morphology of the wooden objects as a consequence of
the conservation techniques employed, including reburial
(Figure 1).
Figure 1: Representation of the RTI normal maps as
model geometry (left), and a subsequent metric
comparison of these representations.
Conservation applications have also been demonstrated in
ceramic assemblages. Figure 2 shows the subtle surface
details made visible by RTI captures. In addition to cracking
and repaired fractures on ceramics the technique clearly
identified scratched initial sketches on a Greek bowl
fragment. This application of the technique at the
Fitzwilliam Museum also showed the ability of RTI datasets
to reveal small changes in surface reflectance as a function
of successive modifications to the glaze of some medieval
ceramics.
Figure 2: Ceramic viewed under normal lighting (left)
and with specular highlights in a captured RTI dataset
using the HP PTM fitter (right).
The application of RTI captures to complex, irregular
solids presents a range of problems. These are well
demonstrated in our work to provide a complete
conservation record of a bronze ship ram (Figure 3). A
number of RTI datasets were produced at different scales
and from different orientations.
Figure 3: RTI captures of a bronze ship ram from a maritime context.
Problems developing an holistic understanding of the
object in part prompted the development of the virtual PTM
rig described below, where the photographic coverage is
used to derive the camera and light positions for each RTI
capture in 3D space. What remains is a need for an RTI
viewer that provides a transition between RTI datasets in a
fully three-dimensional space, in a way analogous to the
Microsoft Photosynth browser.
A final conservation application has been in the recording
of trial RTI datasets at Herculaneum. Here the technique has
provided a good record of the surface morphology of Roman
wall painting fragments visible on site (Figure 4).
3.2.2 Analysis of archaeological materials
In addition to provision of conservation records our
applications of RTI have been driven by specific research
needs. In our case studies to date these have focussed on the
reading of ancient texts and graffiti, visualization and
interpretation of rock art, identification of diagnostic traits
in osteo-archaeological materials, reading of ceramic
stamps, inscriptions and coins, definition of tool marks in
wood and stone, and working practices in ceramics and
lithics (Figure 5). In these and other applications it has been
clear that whilst the RTI approach is ideal for recording the
state of a surface and for sharing this representation with a
wide public, the possibilities of the viewer are paramount. In
a recent case study in recording medieval pictorial graffiti it
was only through the capture of detailed RTI datasets that
the full scale of the material was revealed. The practicalities
of moving around and changing the illumination within
archaeological sites preclude extended visual engagements
with the material in a physical context. Conversely the
digital analogue provided by the RTI dataset offers limitless,
comfortable possibilities for adjusting parameters both
within and beyond physical ranges and provides a wholly
new form of engagement.
Figure 4: Highlight RTI capture underway on site at
Herculaneum
Figure 5: RTI dataset of an eroded brick stamp
excavated by the AHRC Portus Project
3.2.3 Representation of archaeological data
RTI datasets provide an excellent record of surface
morphology, reflectance and color for use in the
development of computer graphic simulations of
archaeological data. Plug-ins enabling direct rendering of
RTI data within modelling environments have been limited
and short lived and we would welcome the ability to use the
gathered data directly in interactive and offline rendering. In
the short term we have identified a simple technique for
direct comparison of captured RTI and digital geometry,
using the camera matching algorithms included in Autodesk
3ds Max to define the three-dimensional locations of RTI
capture planes in space (Figure 6).
The RTI approach has also been shown to offer a potential
as a viewing format for photographic datasets illustrating
properties other than reflectance. [MGW01] provide
examples of illustrating changing times of day and focal
plane via their PTM viewer. We have produced a virtual
RTI capture rig in software that enables other digital
datasets to be loaded. To date we have used this to represent
laser scan datasets and topographic models derived from
Geographic Information Systems. The approach also works
as an effective means to blend representative styles, for
example as a means to demonstrate the data underlying a
digital reconstruction. Thus, in our online viewer we are
able to load laser scan datasets represented simultaneously
as meshes and rendered surfaces (Figure 6). Finally we have
used the HP PTM fitter to combine multiple GIS-based
viewshed calculations, providing an interactive cumulative viewshed viewer.
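A virtual rig of this kind needs a set of evenly distributed virtual light positions over a hemisphere, analogous to the lamps of a physical dome. A minimal sketch using a golden-angle spiral, one of several reasonable layouts (not necessarily the one used in our software):

    import numpy as np

    def virtual_dome_lights(n=76, radius=1.0):
        # Spread n virtual light positions over the upper hemisphere
        # using a golden-angle spiral for near-even angular coverage.
        golden_angle = np.pi * (3.0 - np.sqrt(5.0))
        z = np.linspace(1.0 / n, 1.0, n)      # heights in (0, 1]
        r = np.sqrt(1.0 - z * z)              # ring radius at each height
        theta = golden_angle * np.arange(n)
        return radius * np.column_stack((r * np.cos(theta), r * np.sin(theta), z))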
Figure 6: Automatic generation of a virtual RTI capture
rig from a single camera match
Figure 7: Using an embedded online HP PTM viewer to
interrogate a laser scan dataset
3.3 The AHRC RTI DEDEFI project
During the course of this tutorial we have described a
number of areas of focus for the AHRC RTI research
project. To date the project has collated a comprehensive
repository of publications relating to RTI, brought together
many of its developers and practitioners, designed and built
a new capture system, begun to write new capture, fitting
and viewing software and captured new RTI datasets from
various domains. In addition the project WIKI forms the
focus of discussions relating to ongoing and future software
and hardware developments. There is not scope in this
tutorial to cover all areas but we have identified some
crucial steps in what follows, many of which are being
explored under the aegis of the project. We very much
welcome input from new partners.
The needs of conservators to produce records enabling
both qualitative and quantitative comparisons have
prompted discussions concerning RTI enhancements.
Firstly, the need to derive true three-dimensional data via a
photogrammetric or other technique is clear. Whilst RTI fulfils a different role to such techniques, the need to measure values such as lost volume is considerable.
Secondly, formal modes for registering and comparing
normal maps need to be integrated into the RTI viewer. This
would enable habitual application of the comparisons shown
in [DCCS06]. Thirdly, the automated or manual calculation
of a per-pixel scale factor per RTI should be incorporated
into the fitting and viewing process. Similarly an automated
process for removing metric distortion and for application of
color calibration across datasets tied to individual light
sources is needed.
The RTI viewing experience remains limited. While the
fully three-dimensional viewer described above is an ideal,
simpler paths to the development of 3D RTI datasets are
needed. CHI and others have proposed potential solutions
to this problem. Of further significance is the ability to
annotate RTI datasets, including the ability to associate
annotations with RTI viewer parameters such as light
position and image processing parameters. These are core
requirements for our Oxford University colleagues and
others working on transcription of ancient document
artefacts, and ones with a considerable extant body of
literature. Furthermore, the dataset loaded into the viewer
needs to be potentially far larger, with the ideal being a
seamless tiling of multiple RTI datasets, in addition to the
tiled delivery of single high resolution RTIs.
RTI capture hardware continues to improve with a number
of groups developing dome and other rig systems. Our own
project is developing a series of systems. The first
completed dome is 1m in diameter and divided into four
portable segments, with a current maximum of 76 light
positions. The system uses a Nikon D3X SLR. Our next
capture dome will fit on a standard archaeological
microscope enabling rapid, very high resolution RTI
capture.
3.4 Conclusions
The RTI technique remains under-utilized. Whilst we
continue to come across new and exciting applications of
RTI it is surprising the extent to which colleagues in
archaeology, conservation science, museum studies, art
history, epigraphy and ancient document studies remain
ignorant of the technique. Above all other challenges our
RTI project and this VAST workshop must seek to induce
step changes in the technology, the awareness of its
potential, and crucially the further development of a shared
community of practice.
3.5 Acknowledgements
Ongoing work in Reflectance Transformation Imaging is
funded under the AHRC DEDEFI programme. It is a
collaboration between the University of Southampton and
the University of Oxford. The project draws on the generous
contributions of a great many partners via our project WIKI.
We would very much welcome new members to join this
group. We are particularly grateful for input from Tom
Malzbender and from Cultural Heritage Imaging who have
been instrumental in our work with RTI and in our ability to
develop this tutorial.
Further details of the project are available at: http://www.southampton.ac.uk/archaeology/acrg/acrg_research_DEDEFI.html
References
[DCCS06] Dellepiane, M., Corsini, M., Callieri, M.,
Scopigno, R., 2006. High quality PTM acquisition:
Reflection Transformation Imaging for large objects. In: Ioannides, M., Arnold, D., Niccolucci, F., Mania, K. (eds) VAST06: Proceedings of the 7th International
Symposium on Virtual Reality, Archaeology and Cultural
Heritage (Cyprus, 2006), pp. 179-86
[EBMP10] Earl, G., Beale, G., Martinez, K. and Pagi, H.
2010. Polynomial texture mapping and related imaging
technologies for the recording, analysis and presentation
of archaeological materials. Proceedings of ISPRS
Newcastle 2010 Commission V, WG VI/4. Available
from: http://eprints.soton.ac.uk/153235/
[EMM10] Earl, G., Martinez, K. and Malzbender, T. 2010.
Archaeological applications of polynomial texture
mapping: analysis, conservation and representation.
Journal of Archaeological Science, 37. Available from:
http://eprints.soton.ac.uk/156253/
[MGW01] Malzbender, T., Gelb, D., Wolters, H.,
Polynomial Texture Maps, Proceedings of ACM Siggraph
2001, pp. 519-528.
*********************************************
4. Visualizing and Re-Assembling Cultural Heritage
Artifacts Using Images with Normals
Tutorial Presenters: Corey Toler-Franklin, Szymon
Rusinkiewicz
Princeton University, USA
4.1 Introduction
Images with normals (RGBN images) [TFFR07] are a type of data that lies between simple 2D images and full 3D
models: images with both a color and a surface normal
(orientation) stored at each pixel. RGBN images are
powerful tools for documenting complex real-world objects
because they are easy to capture at a high resolution, and
readily extendible to processing tools originally developed
for full 3D models. Several characteristics of RGBN images
make them practical solutions for illustrating artifacts of
cultural heritage significance:
Easy to Acquire: The process for capturing RGBN data is
only mildly more complex than taking a digital photograph.
Low-cost, off-the-shelf capture devices (digital cameras and
2D scanners), make the process practical and significantly
easier than 3D scanning. For example, complex shapes with
significant occlusion, like the pinecone in Figure 1, would
require the alignment of dozens of 3D scans to create a hole-
free model (even from a single viewpoint).
High Resolution: RGBN images are more informative than
traditional color images because they store some
information about the object’s shape. In addition, they have
higher resolution color and normal maps (Figure 2) than 3D
geometry from 3D laser scanners, giving us the ability to
document, visualize, and analyze fine surface detail.
Easily Extended For Stylized Rendering: Most state-of-
the-art nonphotorealistic [GG01] rendering algorithms are
simply functions of the surface normal, lighting and viewing
directions. Simple extensions to signal processing tools can
preserve the integrity of the normals, while introducing a
wide range of control for a variety of stylistic effects.
Simple and Efficient: RGBN images are more efficient to
process than full 3D geometry, requiring less storage and
computation time. Functions are computed in image space
producing powerful 3D results with simpler 2D methods.
Figure 1: Capturing RGBN images using a digital SLR
camera and hand-held flash. White and mirror spheres
are used to find the flash intensity and position. Right:
The original image, extracted normals, colors, and depth
discontinuities.
Figure 2: Capturing RGBN images using a high
resolution 2D flat-bed scanner. Left: The object is
scanned at multiple orientations. The scanner’s light
source is linear (Top Right); a calibration step is used to
measure I(n). The output is a high resolution color
texture and normal map.
4.2 Capturing RGBN Datasets
There are several methods for acquiring RGBN datasets.
We use photometric stereo [Woo80], a process whereby
normals are inferred from several images (captured from a
single camera position) of an object illuminated from
different directions. We assume a perfectly diffuse object
with equal brightness in all directions. Under these
conditions, the observed intensities are given by the Lambertian lighting law

    I_i = a (n · l_i)

where a is the albedo of a point, n is the surface normal, and l_i is each lighting direction. With at least 3 (preferably more) such observations, we can solve for the normal n using linear least squares. Our set-up is depicted in Figure 2.
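A minimal sketch of this least-squares step, assuming numpy, grayscale images, and known unit lighting directions (an illustration, not the exact implementation used for the tutorial):

    import numpy as np

    def photometric_stereo(images, light_dirs):
        # images: (k, h, w) captures under k lights; light_dirs: (k, 3).
        # Under the Lambertian model I_i = a (n . l_i), solve L g = I for
        # g = a*n at every pixel, then split g into albedo and normal.
        k, h, w = images.shape
        I = images.reshape(k, -1)                        # (k, h*w)
        L = np.asarray(light_dirs, dtype=float)          # (k, 3)
        g, _, _, _ = np.linalg.lstsq(L, I, rcond=None)   # (3, h*w)
        albedo = np.linalg.norm(g, axis=0)
        normals = g / np.maximum(albedo, 1e-8)
        return albedo.reshape(h, w), normals.T.reshape(h, w, 3)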
When there is less control over background lighting, or
objects are small and flat, a 2D scanner is a more effective
capture device for recording fine details. Brown et al. [BTFN*08] deployed this technique (Figure 2) to archive fragments of the Theran frescos at the archaeological site of ancient Akrotiri (modern day Santorini, Greece). Although
we use photometric stereo, we cannot use the traditional
formulation of the Lambertian Lighting Law because the
scanner’s light source is linear (rather than a point source).
We introduce a one-time calibration phase to measure I(n),
the observed brightness as a function of the surface normal.
This is achieved by sampling intensities over a wide range
of known normal orientations. We then fit a first-order spherical harmonic model to the sampled data to obtain the parametric representation

    I(n) = c_0 + c_1 n_x + c_2 n_y + c_3 n_z

Fragments are scanned at multiple orientations (typically four). Given a set of scans a_0, a_1, a_2, a_3, we invert I to solve for the normal n. Figures 3 and 4 show the results.
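The calibration fit itself is also a linear least-squares problem; a sketch under the first-order model above, assuming the sampled normals and intensities are given:

    import numpy as np

    def fit_linear_sh(sample_normals, sample_intensities):
        # Fit I(n) = c_0 + c_1 n_x + c_2 n_y + c_3 n_z to calibration
        # samples taken over a wide range of known normal orientations.
        A = np.column_stack((np.ones(len(sample_normals)), sample_normals))
        c, _, _, _ = np.linalg.lstsq(A, sample_intensities, rcond=None)
        return c  # (c_0, c_1, c_2, c_3); later inverted to recover n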
Figure 3: Computed normals (top right) reveal more
surface detail than those extracted from the geometry
(top left). Extracted RGB color (bottom right) has a
higher resolution than color maps from the 3D scanner
(bottom left).
Figure 4: String impressions, most clearly visible in the
computed normals, are important cues for
reconstruction [TFBW*10], restoration, and
archaeological study.
4.3 Tools for RGBN Processing
Nonphotorealistic rendering algorithms rely on
fundamental signal processing tools that are easily adaptable
for use with RGBN images.
Filtering: Smoothing is important for de-noising and scale-space analysis of images. However, we cannot naively convolve an RGBN image with a smoothing kernel. We must account for foreshortening: over-smoothing in regions where normals tilt away from the view direction. We assume a constant view direction (along z) and scale the contribution of each normal by sec θ, transforming the vector (n_x, n_y, n_z) into (n_x/n_z, n_y/n_z, 1).
To avoid blurring across depth discontinuities, we adopt the bilateral filter [TM98], which is edge-preserving. Specifically, we augment the bilateral filter with a term that reduces the influence of samples on the basis of differences in normals:

    c'_i = (1/k(i)) Σ_j c_j g(|x_i - x_j|; σ_x) g(|c_i - c_j|; σ_c) g(|n_i - n_j|; σ_n)

where c_i and x_i are the color and location of pixel i, g is a Gaussian of the given width, k(i) is the normalizing sum of the weights, and the sum is over all pixels j in the image. In this equation, σ_x and σ_c are the widths of the domain and range filters, respectively, and σ_n controls sensitivity to normal differences; decreasing σ_c leads to better preservation of edges. The normal differences |n_i - n_j| are computed using the foreshortening correction, as above.
Figure 5 shows the effects of adjusting the bilateral filter
parameters.
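A direct, unoptimized sketch of this augmented filter, assuming numpy arrays for colors and normals (per-pixel loops are kept for clarity rather than speed):

    import numpy as np

    def rgbn_bilateral(colors, normals, sigma_x=3.0, sigma_c=0.1, sigma_n=0.2):
        # Weights fall off with pixel distance, color difference, and
        # normal difference, as in the equation above.
        h, w, _ = colors.shape
        rad = int(2 * sigma_x)
        gauss = lambda d2, s: np.exp(-d2 / (2.0 * s * s))
        out = np.zeros_like(colors)
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - rad), min(h, y + rad + 1)
                x0, x1 = max(0, x - rad), min(w, x + rad + 1)
                yy, xx = np.mgrid[y0:y1, x0:x1]
                d2 = (yy - y) ** 2 + (xx - x) ** 2
                dc2 = np.sum((colors[y0:y1, x0:x1] - colors[y, x]) ** 2, axis=-1)
                dn2 = np.sum((normals[y0:y1, x0:x1] - normals[y, x]) ** 2, axis=-1)
                wgt = gauss(d2, sigma_x) * gauss(dc2, sigma_c) * gauss(dn2, sigma_n)
                out[y, x] = np.tensordot(wgt, colors[y0:y1, x0:x1],
                                         axes=([0, 1], [0, 1])) / wgt.sum()
        return out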
Segmentation: One illustration technique separates regions
of an image and renders them in different shading styles.
RGBN segmentation extends the graph-based segmentation algorithm of Felzenszwalb and Huttenlocher [FH04] to consider not only
color, but normals. RGBN pixels are continually clustered to
form components, such that edges between components in
the graph have larger weights (larger dissimilarity values)
than edges within components. Figure 9 shows how
segmentation by color and shape can be more effective than
segmentation by color alone.
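A sketch of a combined dissimilarity that such a segmentation can use for its edge weights (the blend factor alpha is an illustrative choice, not a value from the paper):

    import numpy as np

    def rgbn_edge_weight(c_i, c_j, n_i, n_j, alpha=0.5):
        # Blend color distance with the angle between unit normals, so
        # regions uniform in color but curved in shape still separate.
        color_term = np.linalg.norm(np.asarray(c_i) - np.asarray(c_j))
        normal_term = np.arccos(np.clip(np.dot(n_i, n_j), -1.0, 1.0))
        return alpha * color_term + (1.0 - alpha) * normal_term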
Curvature Estimation: Several stylization techniques use
surface curvature to convey shape. The normal curvature of
a surface is the reciprocal of the radius of the circle that best
approximates a normal slice of the surface in the given
direction. By tracking the changes in direction of the
normals over the surface, we can compute properties such as
mean curvature, Gaussian curvature or the principal
curvatures. However, we must account for the
foreshortening effect. Refer to Toler-Franklin et
al. [TFFR07] for the details.
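As a rough sketch, mean curvature can be approximated directly from the normal map by finite differences of the foreshortening-corrected normals (a simplification of the full treatment in [TFFR07]):

    import numpy as np

    def mean_curvature(normals):
        # Apply the foreshortening correction (nx, ny, nz) -> (nx/nz, ny/nz, 1),
        # then estimate mean curvature from the divergence of the corrected
        # field using central differences.
        nz = np.maximum(normals[..., 2], 1e-8)
        px = normals[..., 0] / nz
        py = normals[..., 1] / nz
        return -0.5 * (np.gradient(px, axis=1) + np.gradient(py, axis=0))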
Figure 5: The RGBN bilateral filter is capable of producing different results, depending on the settings of the domain and range filter widths. For large σ_c and σ_n there is little edge preservation, and the filter resembles a simple Gaussian. Making σ_c small preserves color detail, such as that around the eye, while making σ_n small as well preserves both color and geometric edges.
4.4 Depiction Styles
We can apply the signal processing framework for
manipulating RGBN images to several stylization
techniques.
Toon shading: Cartoon shading consists of quantizing the
amount of diffuse shading (i.e., n·l) and mapping each
discrete value to a different color. This technique is effective
because it abstracts shading while conveying information
about geometry (the boundaries between toon shading
regions are isophotes, curves of constant illumination that
have been shown to convey shape). Because toon shading
only depends on the surface normals, it easily extends to
RGBN images. Figure 6 is an example of how toon shading
is used to enhance surface features not apparent in the color
image.
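Because the style depends only on n·l, the core of toon shading is a few lines; a sketch assuming a numpy normal map and a unit light direction:

    import numpy as np

    def toon_shade(normals, light_dir, bands=4):
        # Quantize diffuse shading n . l into discrete bands; the band
        # boundaries are isophotes of the chosen light direction.
        ndotl = np.clip(normals @ np.asarray(light_dir, dtype=float), 0.0, 1.0)
        return np.floor(ndotl * bands) / bands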
Line drawings: Many line drawing algorithms are
computed directly on the normal maps. For example,
discontinuity lines mark locations where there are
significant changes in depth. They occur where there are
sharp changes in normal direction among neighboring
normals, and at least one normal is nearly orthogonal to the
viewing direction. Figure 6 combines discontinuity lines
with toon shading to define silhouette edges. Suggestive
contours are similar to lines that artists draw. They are found
by calculating n·v (where v is the viewing direction) over
the entire intensity map and then searching for local valleys
in intensity.
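A sketch of the depth-discontinuity test just described, with illustrative thresholds:

    import numpy as np

    def depth_discontinuities(normals, dot_thresh=0.7, grazing_thresh=0.2):
        # Flag pixels where the normal turns sharply toward its right or
        # lower neighbor AND at least one of the two normals is nearly
        # orthogonal to the view direction (z).
        h, w, _ = normals.shape
        mask = np.zeros((h, w), dtype=bool)
        for dy, dx in ((0, 1), (1, 0)):
            a = normals[: h - dy, : w - dx]
            b = normals[dy:, dx:]
            sharp = np.sum(a * b, axis=-1) < dot_thresh
            grazing = np.minimum(np.abs(a[..., 2]), np.abs(b[..., 2])) < grazing_thresh
            mask[: h - dy, : w - dx] |= sharp & grazing
        return mask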
Figure 6: Locations of depth discontinuities overlaid on
toon shading.
Exaggerated Shading: Exaggerated shading [RBD]
considers all possible lighting positions to find the
maximum contrast at all scales over multiple orientations.
The result reveals fine surface details (Figure 7, top left) that
are not readily apparent in color only.
Figure 7: Sennedjem Lintel from the Phoebe A. Hearst
Museum of Anthropology: A variety of stylization
techniques can be used to reveal more information than
is readily apparent in the color-only image. (Top Left)
Exaggerated shading reveals fine surface detail. Details
are further enhanced by darkening grooves and
emphasizing large features (Top Right). Lambertian
shading (Bottom Right) can be computed on the grey
scale image by combining the normal map (Bottom
Left) with a lighting direction to convey shape.
Curvature Shading and Shadows: Shadows are important
for conveying shape. Because RGBN images have no depth
information, we must simulate shadowing effects. Multi-
scale mean curvature shading works by darkening regions
with negative mean curvature and brightening those with
positive mean curvature. The result is averaged over
multiple scales to reduce high-frequency noise (Figure 8).
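A sketch of the multi-scale scheme, reusing a per-pixel mean-curvature estimate such as the one above (smoothing scales and gain are illustrative):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def curvature_shading(curvature, sigmas=(1, 2, 4, 8), gain=4.0):
        # Average signed mean curvature over several smoothing scales,
        # then darken concave (negative) and brighten convex (positive)
        # regions around a neutral mid-gray.
        avg = np.mean([gaussian_filter(curvature, s) for s in sigmas], axis=0)
        return np.clip(0.5 + gain * avg, 0.0, 1.0)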
Figure 8: Multi-scale curvature shading closely
resembles ambient occlusion, revealing shape over local
neighborhoods.
4.5 Cultural Heritage Applications
RGBN images are suitable for many cultural heritage
applications. High quality renderings generated with
flexible signal processing tools are ideal for textbook
illustrations. RGBN datasets are suitable for art historical
and scientific study. Figure 11 uses exaggerated and mean
curvature shading to analyze a petroglyph. The
nonphotorealistic visualization (Bottom) reveals inscriptions
that are fairly deep yet almost invisible in the color
photograph (Top). Fine surface markings on fragments
(Figure 4) are important cues for matching and re-
assembling fragments of objects. The 2D acquisition
pipeline and the resulting high fidelity data would be
suitable for applications in forensics, where surface cues are
important.
Figure 9: RGBN segmentation produced accurate
results without visible color edges. The hammer has been segmented into multiple facets.
Figure 10: Illustration of tools reveals fine details, such
as the maker’s stamp on the shears.
References
[BTFN*08] BROWN B., TOLER-FRANKLIN C., NEHAB
D., BURNS M., DOBKIN D., VLACHOPOULOS A.,
DOUMAS C., RUSINKIEWICZ S., WEYRICH T.: A
system for high-volume acquisition and matching of
fresco fragments: Reassembling Theran wall paintings. In
ACM Transactions on Graphics (Proc. SIGGRAPH)
(Aug. 2008), vol. 27.
[FH04] FELZENSZWALB P., HUTTENLOCHER D.:
Efficient graph-based image segmentation. International
Journal of Computer Vision 59, 2 (Sept. 2004).
Figure 11: Color image and nonphotorealistic
rendering (with mean curvature shading and
exaggerated shading) of the Legend Rock
archaeological site.
[GG01] GOOCH B., GOOCH A.: Non-Photorealistic
Rendering. A. K. Peters Ltd., 2001.
[RBD] RUSINKIEWICZ S., BURNS M., DECARLO D.: Exaggerated shading for depicting shape and detail. In ACM Transactions on Graphics (Proc. SIGGRAPH) (2006), vol. 25.
[TFBW*10] TOLER-FRANKLIN C., BROWN B.,
WEYRICH T., FUNKHOUSER T., RUSINKIEWICZ S.:
Multi-feature matching of fresco fragments. In ACM
Transactions on Graphics (Proc. SIGGRAPH Asia) (Dec.
2010).
[TFFR07] TOLER-FRANKLIN C., FINKELSTEIN A., RUSINKIEWICZ S.: Illustration of complex real-world objects using images with normals. In International Symposium on Non-Photorealistic Animation and Rendering (NPAR) (2007).
[TM98] TOMASI C., MANDUCHI R.: Bilateral filtering
for gray and color images. Proc. ICCV (1998).
[Woo80] WOODHAM R.: Photometric method for
determining surface orientation from multiple images.
Optical Engineering 19, 1 (1980), 139–144.
*********************************************
5. Visualization of RTI images
Tutorial Presenter: Gianpaolo Palma
Additional author: Massimiliano Corsini
Visual Computing Lab, ISTI - CNR, Italy
5.1 Introduction
Reflectance Transformation Images (RTI) have significant
potential in the Cultural Heritage (CH) field, where the
way light interacts with the geometry is important in the
visual examination of the artifact. The characteristics of the
material, the reflectance behavior, and the texture offer
major perceptual and cognitive hints for the study of these kinds of objects compared with simple 3D geometry. To further improve the user’s ability to interactively inspect the
content of the RTI media, several shading enhancement
techniques have been proposed for improving the perception
of the details and the shape characteristics.
We present two tools to visualize and analyze RTI images
in an interactive way. The first one is a multi-platform
viewer, RTIViewer [CHI], which also works remotely through HTTP and allows the user to apply a set of new shading enhancement techniques that improve the virtual examination and interpretation of several details of the
artifact. The second is a web application based on SpiderGL
[DBPGS10], a JavaScript 3D graphics library that relies on WebGL, permitting the realtime rendering of huge RTIs with a multiresolution encoding in the next generation of web browsers.
5.2 RTIViewer
RTIViewer is a multi-platform tool to load and examine
images created with RTI techniques. The tool supports
several formats, collectively called RTI files: Polynomial
Texture Maps (PTM files) [MGW01]; Hemispherical
Harmonics Maps (HSH files) [GWS09]; Universal
Reflectance Transformation Imaging (URTI files). The
viewer can display both single-view and multi-view images;
a multi-view RTI [GWS09] is a collection of single-view
images together with optical flow data that generates
intermediate views.
Figure 1: High-relief in gilded wood representing a kiss
between Corsica and Elba islands from Isola D’Elba
museum; (top) standard rendering; (middle) specular
enhancement; (bottom) static multi-light enhancement.
The tool can display an RTI image loaded from a local hard disk or from a remote server over HTTP. To handle remote loading, the original image (usually of very high resolution) has to be processed by a command-line tool that prepares a multiresolution encoding.
The tool also allows interactive adjustment of several rendering parameters, such as the zoom factor, the light direction, the shading enhancement technique to apply to the image and its settings, and, for the multi-view format only, the viewpoint around the object.
Figure 2: Sumerian cuneiform tablet: (left) standard rendering; (center) diffuse gain; (right) normal unsharp masking.
Figure 3: Roman sarcophagus in the Camposanto Monumentale of Pisa: (left) standard rendering; (center) luminance unsharp masking; (right) coefficient unsharp masking.
Figure 4: Tomb of Archbishop Giovanni Scherlatti in the Opera Primaziale of Pisa: (top) standard rendering; (bottom) dynamic multi-light enhancement.
Several shading enhancement methods are available:
Diffuse Gain [MGW01], which enhances the perception of the surface shape by increasing the curvature of the reflectance function (Figure 2);
Specular Enhancement [MGW01], which adds a specular effect to the surface by Phong/Blinn shading (Figure 1);
Normal Unsharp Masking [MWGA06] [PCC10], which enhances the high-frequency details of the normals by unsharp masking (Figure 2);
Luminance and Coefficient Unsharp Masking [PCC10], which enhance the high-frequency details of the luminance channel of an LRGB RTI or of the basis coefficients of the polynomial by unsharp masking (Figure 3);
Multi-Light Detail Enhancement [PCC10], which uses different light directions to create a virtual lighting environment that maximizes the sharpness of the image while preserving the global brightness. Two versions exist: dynamic enhancement, where the light chosen by the user is locally perturbed (Figure 4), and static enhancement, which produces an automatic high-contrast, well-illuminated single image by sampling all possible light directions (Figure 1).
Some methods rely on a per-pixel surface normal, estimated either by photometric stereo methods or by computing the light direction that maximizes the reflectance function used in PTM images [MGW01], assuming a Lambertian material.
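For PTMs this maximization has a closed form: the biquadratic reflectance R(lu, lv) = a0·lu² + a1·lv² + a2·lu·lv + a3·lu + a4·lv + a5 is differentiated, and its stationary point is the solution of a 2x2 linear system. A minimal sketch, assuming the six per-pixel coefficients have already been decoded from the PTM file:

```python
import numpy as np

def ptm_normal(a):
    """Estimate the surface normal at one pixel from the 6 PTM
    coefficients of R(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv
    + a3*lu + a4*lv + a5, by finding the light direction that
    maximizes R (Lambertian assumption, as in [MGW01])."""
    a0, a1, a2, a3, a4, a5 = a
    # Stationary point of R: solve dR/dlu = 0 and dR/dlv = 0.
    # (Degenerate pixels with 4*a0*a1 == a2^2 are ignored here.)
    det = 4.0 * a0 * a1 - a2 * a2
    lu = (a2 * a4 - 2.0 * a1 * a3) / det
    lv = (a2 * a3 - 2.0 * a0 * a4) / det
    # Clamp to the unit disk so the z component stays real.
    r2 = lu * lu + lv * lv
    if r2 > 1.0:
        lu, lv = lu / np.sqrt(r2), lv / np.sqrt(r2)
        r2 = 1.0
    return np.array([lu, lv, np.sqrt(1.0 - r2)])
```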
5.3 RTI on the web
Thanks to WebGL, a graphics API specification for the JavaScript programming language, it is possible to use GPU capabilities in next-generation web browsers without the need for an ad-hoc plug-in. SpiderGL is a JavaScript library for developing 3D graphics web applications based on WebGL; it provides a set of data structures and algorithms that ease the development of WebGL applications, define and manipulate shapes, import 3D models in various formats, and handle asynchronous data loading.
These characteristics can be exploited even for the visualization of huge RTI images on the web (see [Ben] for a demo) with a multiresolution encoding. This encoding requires a hierarchical layout of the data stored on a web server, an algorithm that visits the hierarchy and determines the nodes needed to produce the current viewport, and the ability to load the nodes of the hierarchy asynchronously, i.e. to proceed with rendering while missing data are being fetched. The hierarchical layout is a quadtree in which each node stores a number of images holding the RTI data, e.g. 3 PNG images for an LRGB PTM.
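As a rough illustration of such a layout (the actual encoding used by the SpiderGL demo is not detailed here, so the file naming and level ordering below are assumptions), a tiled image pyramid can be built by repeatedly halving the image and cutting each level into fixed-size tiles:

```python
import os
from PIL import Image

def build_pyramid(src_path, out_dir, tile=256):
    """Write a tiled multi-resolution (quadtree-like) layout for one
    image plane; an LRGB PTM would repeat this for each plane (the
    3 PNGs per node mentioned above). Level 0 is full resolution,
    and each subsequent level halves the image."""
    img = Image.open(src_path)
    level = 0
    while True:
        w, h = img.size
        for ty in range(0, h, tile):
            for tx in range(0, w, tile):
                node = img.crop((tx, ty, min(tx + tile, w), min(ty + tile, h)))
                node_dir = os.path.join(out_dir, str(level))
                os.makedirs(node_dir, exist_ok=True)
                node.save(os.path.join(node_dir, f"{ty // tile}_{tx // tile}.png"))
        if max(w, h) <= tile:
            break                       # coarsest level fits a single tile
        img = img.resize((max(w // 2, 1), max(h // 2, 1)))
        level += 1
```

A viewer then walks this hierarchy from the coarsest level downward, requesting only the tiles that intersect the current viewport and refining as they arrive.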
References
[Ben] BENEDETTO M. D.: SpiderGL - PTM example.
http://www.spidergl.org/example.php?id=9.
[CHI] C-H-I: Cultural Heritage Imaging - RTIViewer.
http://www.c-h-i.org/learn/index.html.
[DBPGS10] DI BENEDETTO M., PONCHIO F., GANOVELLI F., SCOPIGNO R.: SpiderGL: A JavaScript 3D graphics library for next-generation WWW. In Web3D 2010: 15th Conference on 3D Web Technology (2010).
[GWS09] GUNAWARDANE P., WANG O., SCHER S.,
DAVIS J., RICKARD I., MALZBENDER T.: Optimized
image sampling for view and light interpolation, 2009.
[MGW01] MALZBENDER T., GELB D., WOLTERS H.:
Polynomial texture maps. In SIGGRAPH’01 (2001), pp.
519–528.
[MWGA06] MALZBENDER T., WILBURN B., GELB D.,
AMBRISCO B.: Surface enhancement using real-time
photometric stereo and reflectance transformation. In
Eurographics Workshop/ Symposium on Rendering
(2006), pp. 245–250.
[PCC10] PALMA G., CORSINI M., CIGNONI P.,
SCOPIGNO R., MUDGE M.: Dynamic shading
enhancement for reflectance transformation imaging.
Journal on Computing and Cultural Heritage (2010).
**********************************************
6. Museum uses of RTI at the Smithsonian Institution
Tutorial Presenter: Mel Wachowiak
Additional Author: Elizabeth Keats Webb
Museum Conservation Institute (MCI), Smithsonian
Institution
Washington, D.C. USA
6.1 Introduction
This section will describe some museum uses of RTI and
its place among photographic capture and 3D scanning at
the Smithsonian Institution (SI). The SI is the world’s
largest museum complex and has an incredible breadth of
collections. MCI has a central role as a research unit and
collaborator in analysis of heritage objects and sites.
Imaging is part of the routine research, diagnostic, and
documentation conducted at MCI. The SI has only recently begun a major examination of the digitization of its collections, which can include data, still images, video and other motion-picture media, sound, and associated metadata. MCI's part in this digitization effort is to offer an expanded vision of the application of appropriate technologies.
While RTI is the focus of this presentation, it is but one
part of imaging technology used by MCI. It should be no
surprise that there are overlapping techniques, in terms of
data collected and the scale of the objects. Work can include
microstructure to macrostructure, two and three dimensions,
and wavelengths beyond the visible. Several methods are
based on the computational advances that digital
photography offers. For example, high dynamic range
imaging (HDRI) offers an enormous increase in dynamic
range compared to 8-bit images. Multispectral imaging is
possible with modified digital cameras, and can be
improved by sensors with increased or more specific ranges
(such as infrared sensors). Laser or structured light scanners
can capture extremely high-resolution data in three
dimensions, and some capture color for each point in the
spatial data. Multifocus montage, or extended depth of field,
has added a third dimension to microscopy in a practical
solution.
As application specialists, not developers, we in the
"Imaging Group" at MCI have a unique responsibility. We
conduct research and interpret objects using various
technologies. Our task is often to find the correct solution
from among available technologies, or collaborate with
specialists.
One interesting fact about RTI is that it fills a niche that other imaging solutions can't fill. In particular, it bridges the gap between photography and 3D scanning. However, it is more than a "2-dimensional" solution: it has been amply demonstrated to offer an immersive, near-3D experience and image processing tools. It should also be pointed out that it can accurately document features that are impossible to acquire with 3D scanning. As such, it is an important addition to the cultural heritage community's toolkit.
After receiving training from CHI staff in 2009, led by Mark Mudge and Carla Schroer, we have compiled a fairly broad range of RTI projects. These have ranged in size and scope from tiny natural history specimens to large artworks, both in the studio and on location. Buttons, jewelry, fossils, prehistoric stone tools, and many other materials have helped us understand the strengths and weaknesses of the current RTI technology and software.
Several examples below will illustrate RTI results and
some creative solutions to problems encountered.
6.2 Preparation for RTI
It is certainly worth mentioning again the importance of
understanding the digital camera and good workflow
practices. The best images are those that require no post-
processing! For this reason we spend appropriate time with
color balance and exposure conditions. We also have a
responsibility as stewards of the collection to take great care
in handling and positioning objects, and to limit light
exposure.
6.3 Examples of RTI: Easel Paintings
Two recent projects have demonstrated the great value of RTI for the documentation and investigation of paintings. One painting is Italian from the late 15th century; the other is by an American working in Paris in the early 20th century.
The 20th-century oil painting was created with heavy impasto to accentuate light and shadow effects. RTI was an excellent method to reveal the texture, as well as later defects. The conservators and curators are also interested in creating interactive display stations for museum visitors. While single images can capture the general effect of RTI images, the great strength of the technique is rapid, nondestructive processing. By making the RTI files available to the public, we greatly enhance their appreciation and understanding of the object. Conservation and preservation are also better understood, since it is easy to demonstrate both the subtlety of the art and the fragile condition of some objects. Digital surrogates made by RTI are excellent preservation and research tools.
Figure 1: RTI of painting showing normal lighting (left)
and specular enhancement (right). The specular
enhancement shows the surface texture without the distraction of pictorial elements.
The earlier painting is painted on a panel of edge-jointed wood. It is very thinly painted and almost transparent in some areas. The surface is somewhat glossy, which precludes most 3D scanning techniques. The conservator investigating the painting was most interested in the scribe lines in the preparatory layer used to lay out the single-point perspective of the painting. While they are evident in raking light, the indented scribe lines are difficult to image and study.
Figure 2: RTI set up for large panel painting: reference
spheres at top, scale and color target below.
In our preparation for imaging the painting, which is
nearly seven feet (2.13 meters) wide, we tested the ability of
our camera to resolve the scribe lines. The sub-millimeter
wide indentations made by the stylus are too small to be
seen in an image of the entire painting. Therefore, we
needed to divide the imaging into three sections, all with about 25% overlap.
The images below show fine details of the painting made visible using the specular enhancement algorithm. Note the alteration of the original composition of the building on the right (arrow in the specular enhancement image).
Figure 3: Panel painting detail approximately 30cm wide (normal view above); specular enhancement shows alteration of the original building composition (upper part of central area). Note that the black painted areas create a false impression of texture.
RTI of other paintings has revealed features that were
intriguing glimpses into their histories, including subtle
deformations of the canvas occurring during storage.
6.4 Daguerreotype
This mid-19th century photograph proved another
excellent project for RTI. Daguerreotypes are notoriously
difficult to light and photograph for documentation. In
addition, the surface is essentially polished metal and would
therefore be a poor subject for 3D scanning. We were able to
successfully do RTI with only slight modification of our
typical method. A custom-made velvet lined snoot for the
flash made a significant improvement in the RTI. One of the
great advantages of RTI is in the creation of excellent
documentation for condition examination. The many
scratches and accretions are quite apparent, as is the general
deformation of the sheet metal. The level of detail is
impressive in an RTI averaging 4000 x 5000 pixels. Many
observers have remarked that it is similar to examining the
object with a stereomicroscope.
Figure 4: Daguerreotype above, and 2cm high detail
below; specular enhancement at left, image unsharp
mask to right.
6.5 Ebony door
The RTI of a pair of ebony and ivory veneered doors was
prompted by technological limitations of other 3D scanning
and photography technologies. The conservator’s wish was
to document the condition before treatment. These doors are
part of a very glossy French polished cabinet. Three aspects
of the surface geometry eliminated other imaging
techniques.
1. The highly reflective surface cannot be imaged by 3D scanning because of the scattering of light, and the surface could not be modified.
2. Black surfaces or white surfaces cause related problems (high absorption or reflection).
3. Black and white material adjacent to one another is even more problematic.
Not surprisingly, attempts made using a structured light scanner were unsuccessful. RTI was very effective, as seen in the figure below.
Figure 5: Detail of French polished ebony and ivory veneered door (approx. 30cm x 30cm). RTI image at left "illuminated" by a source perpendicular to the surface. At right, specular enhancement at a raking angle shows surface deformation and Schreger lines in the ivory. Inset circle at lower left is from an actual image processed for the RTI and shows high reflection from the flash at 65°.
6.6 Lenape Indian bandolier bag
This leather object is Native American and dates from
approximately 1820. The curator could decipher seven
words from an inscription on one side of the bag. The ink
was dark brown in areas, and apparently faded in others.
Multispectral imaging was not particularly helpful, but did
reinforce an earlier assumption. Since the brown ink
completely disappeared in infrared bands, we concluded it
was probably iron gall ink. The ink had eroded some of the
leather, leaving a slight impression, which led us to attempt
RTI.
The RTI was not particularly successful, most likely due to the overall suede-like surface texture. However, we were able to use the RTI together with many of the individual images used to create it. By comparing the nearly 50 images, as well as the RTI, we were able to determine that there were 21 words in the inscription and could decipher 19.
Figure 6: Lenape Indian leather bandolier bag (top);
detail showing inscription (bottom)
6.7 Summary
This last example of the uses of RTI is a bit of a
cautionary tale. RTI is certainly not the only method for
documenting the surfaces of objects. But it has earned an
important place among our practical imaging tools.
With our practical imaging experience, we have been
able to use RTI at the SI almost immediately. We have
appreciated this powerful new tool and have addressed
several problems in order to maximize the results. We are
especially looking forward to improvements and new
developments in the software.
All of the examples shown here share a common trait: they are generally flat, with only a slight waviness. They also have a finer-scale texture that comprises the surface of interest. These flat objects show the subtlest detail to advantage. They were chosen for their high contrast of features and are not meant to misrepresent the possibilities. We have successfully completed RTI of many other object types, including larger carvings, human teeth, and other highly textured objects.
Just as in the case of the Lenape bandolier bag, a combination of tools, not the exclusive use of one, will give the best result.
6.8 Acknowledgments
We greatly appreciate the guidance of the staff at Cultural Heritage Imaging, especially Mark Mudge and Carla Schroer, and of Tom Malzbender of Hewlett-Packard Laboratories.
*********************************************
7. Digital Preservation Workflows for Museum Imaging
Environments
Tutorial Presenter: Michael Ashley
Cultural Heritage Imaging, USA
7.1 Introduction
We discuss and demonstrate practical digital preservation frameworks that protect images throughout the entire production life-cycle. Using off-the-shelf and open source software coupled with a basic understanding of metadata, it is possible to produce and manage high-value digital representations of physical objects that are born archive-ready and long-term sustainable. We demystify the alphabet soup of file formats, data standards, and parametric imaging, and demonstrate proven workflows that can be deployed in any museum production environment, scalable from the individual part-time shooter to full-fledged imaging departments.
7.2 The iPad Effect and Museum Imaging
The world of imaging is going through its next paradigm
shift, and it requires radically rethinking how digital
curators work with their media collections. Mobile and
cloud computing is application based, not file based, and the
tendency is to hide the file system from users in favor of
media libraries held within and accessed through
applications. "Apple is dramatically rethinking how
applications organize their documents on iPad, ... Rather
than iPad apps saving their documents into a wide open file
system, apps on iPad save all their documents within their
own installation directory. Delete the app and you’ll clean
out all of its related files." [Dil10]
The divide between professional producers/managers of content and consumers has grown, but convergence is on the way. So much attention (and so many financial development resources) is trained on mobile computing that we are seeing amazing applications for creating high-definition media that are much 'smarter' than their predecessors. This includes wifi transfer of images in realtime, built-in GPS, 3D video, and native DNG shooting (see below). For digital imaging professionals and enthusiasts, it is an exciting but confusing moment in history.
This technology shift has direct implications for digital preservation and access workflows. We can prepare our digital assets not only to withstand the transformations they go through as they move from one application and platform to another, but to become more valuable through context-aware metadata embedding.
You can safeguard your media collection while taking
advantage of the positive impacts mobile computing is
having on cultural heritage imaging. Small tweaks to well-defined image production workflows can yield dramatic
dividends both in digital asset management/preservation and
in public access/enjoyment.
This tutorial focuses on straightforward steps that will help
museum imaging professionals produce born archival media
that can survive the iPad effect by strategically embedding
essential metadata within files. Following these steps will
save you time and help to future-proof your image collection investment.
This tutorial relies heavily on the corpus of excellent
materials available on digital asset management for
professionals and attempts to augment rather than repeat.
We provide essential references throughout the guide.
7.3 Basics of Born Archival Imaging
As Doerr et al. argue, the CIDOC-CRM provides a nearly
generic information model for handling cultural heritage
events, documents, places and people [DI08]. Born archival
imaging can be described as a method for implementing the
CRM. This simple approach requires three components,
defined as a framework of things, people, places, and media,
meeting in space and time [ATP10].
Uniquely Identified Entities. Treat every digital original with the same respect as museum objects. Give every person, place, object, and media item in your museum a unique identifier. A majority of this effort can be automated using non-destructive parametric image editing (PIE) software [Kro09].
Refined Relations Between Entities. The list of potential relations between people, places, things, and media is comprehensive and manageable. This TIF (media) is a photo of a painting (thing). The Night Café (thing) was painted by Van Gogh (person) in Arles, France (place) in 1888 (event), and is now located in the Yale University Art Gallery in New Haven (place). We explain the process of defining provenance information for imaging below.
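As a toy sketch, with invented identifiers and relation names rather than a formal CRM mapping, such statements can be held as simple subject-relation-object triples:

```python
# Event-centred relations between uniquely identified people, places,
# things, and media; every identifier below is hypothetical.
relations = [
    ("media:TIF_001",    "is_photo_of", "thing:night_cafe"),
    ("thing:night_cafe", "painted_by",  "person:van_gogh"),
    ("thing:night_cafe", "painted_in",  "place:arles_france"),
    ("thing:night_cafe", "located_in",  "place:yale_university_art_gallery"),
]

def describe(entity):
    """Return every statement that mentions one entity."""
    return [r for r in relations if entity in (r[0], r[2])]

print(describe("thing:night_cafe"))
```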
Parameterize Properties. There is a fine balance between
exhaustive controlled vocabularies and Google’s text
indexing algorithm. Within museum contexts, it is relatively
easy to define ‘local’ terms that people can agree upon and
use in their daily practice. Defining a list of properties to
describe the people, places, things and media limited to a
particular museum context will produce incredibly accurate
search and browse capabilities, whether using Google or a
desktop file browser. This localization can be articulated
with other standardized efforts, such as Metaweb’s
Freebase, to link up your data and media to the world’s
knowledgebase.
7.4 Born-Archival Implications
What do we mean by 'born-archival'? John Kunze, preservation specialist for the California Digital Library, calls for 'born-archival' media that are fully accessible and preservable at every stage of the data's life-cycle, from birth through pre-release to publication, revision, relative dis-use, and later resurgence. Data that is born-archival can remain long-term viable at significantly reduced preservation cost [Kun08].
Figure 1: Seeing Double: Standards for media and
museums require mapping and translation.
Born archival images require the medium and the message
to be openly readable and stable, akin to a digital Rosetta
Stone. The medium is the file format. The message is the
raw image data and associated metadata. The challenge is to
produce a resilient digital surrogate that can withstand
limitless media transfers and potentially destructive
metadata modifications over the span of its life.
Standards. There are hundreds of metadata standards to
choose from, and they all have their virtues. We are not
addressing metadata standards in this tutorial, except to say
that it is vital to follow a standard, documented protocol.
Jenn Riley and designer Devin Becker recently mapped the
myriad of standards that are applied to worldwide
information (Figure 1) [RB10]. Whatever standards you are
using to manage your collections, born archival imaging
simply requires the best practice of explicitly stating what
standards you are using and how you are implementing
them. We call this the desk instructions, an essential
component for describing the provenance of image
processing. Born-archival imaging requires the desk
instructions be accessible to the viewer, either through URL
linking or direct embedding within the asset.
File Media. What good is metadata if the medium on which it and the images it describes are stored will not last? This is a reality of digital storage. While stone tablets can last millennia and silver-based film for hundreds of years, digital media is trending toward shorter and shorter lifespans [DB10]. We overcome this risk by creating a structured network of pointers from derivatives and data files to the original source image, no matter where it is stored, or in what format.
We advocate a two-pronged approach to media management, described by Peter Krogh as 'the truth is in the catalog' vs. 'the truth is in the file' [Kro09]. The idea is that you push as much of the best, most current, and most accurate information into your image files as you can: a combination of descriptive and technical metadata, a snapshot at the end of the image's production. You also keep a separate, external database of the metadata; this is the definitive informational source. If anything happens to the image or its data, the provenance can be reconstituted from the database. We describe this as the passport and the bureau model.
Figure 2: Media through the millennia, from analog to
hybrid to virtual, the trend is toward short-term
lifespans.
7.5 The Passport (file) and the Bureau (library)
The passport is the data in the file, and the bureau is the full metadata, stored in the catalog. Image files, from RAW to TIF and JPG, can include embedded data in XMP format. XMP, an open standard created by Adobe, is a derivative of XML and is held within the DNA of every image file. Backed by camera manufacturers, international standards bodies, and software companies, XMP is the Esperanto of photographic metadata.
XMP is a loose standard, providing a rich framework for
you to embed your metadata in the format that best fits your
needs. There are many open source, inexpensive and
professional options for embedding data into images
through XMP (Figure 3). With careful planning, XMP will
accommodate even the most complex metadata schema.
Because it is simply structured XML, it is a highly resilient
preservation format.
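As a minimal sketch of the embedding step (assuming the ExifTool utility is installed; the Dublin Core tags and the URL below are illustrative choices, not a prescribed schema), a script can push an identifier and a pointer to the external data record into a file's XMP:

```python
import subprocess

def embed_xmp(path, object_id, record_url):
    """Embed a unique identifier and a link to the external data record
    into the file's XMP block. ExifTool rewrites metadata without
    re-encoding the image data itself."""
    subprocess.run([
        "exiftool",
        f"-XMP-dc:Identifier={object_id}",
        f"-XMP-dc:Source={record_url}",
        "-overwrite_original",
        path,
    ], check=True)

embed_xmp("A387967_AT_V1_001_O.tif", "A387967",
          "http://example.org/records/A387967_AT_V1.html")
```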
The library can range from a catalog file, such as an Adobe Lightroom database, to a full-fledged XML or relational database, a simple photo log saved as an Excel spreadsheet, or a saved query from your collection management system.
Figure 3: XMP is an emerging open standard, supported
by Adobe and embraced by camera and software
companies.
7.6 Binding message, media and objects in perpetuity
The metadata stored within a file or externally in a database is only as useful as the bound relationship between the two. How can you assure a connection? Embed within the file the links to the unique identifiers associated with data records and original surrogates. Once this connection is made, it is much easier to link derivative files and subsequent comments and notes.
Example: In a shooting session at the deYoung museum,
the staff images a painted frame attributed to Domenico
Ghirlandaio, accession number A387967. The image,
A387967_AT_V1_001_O.tif, has a data record stored in the
deYoung image database, http://A387967_AT_V1.html.
This URL provides links to the metadata on the masterpiece,
the image production, and all known derivative works, as
well as information on the photo sessions associated with
the painting.
Filename: The filename is composed of the object ID, photo session ID, a serial number, and the file version. In this example, AT = after treatment, V1 = version 1, O = digital original. The session ID = A387967_AT. In the simplest system, you would swap the O for T (thumbnail) or F (full-resolution JPEG). We have bound the physical object to its shooting event, version, and file type in a compact format.
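A small, hypothetical helper shows how such a convention can be parsed programmatically; the pattern below encodes only the example above and would need adjusting for other accession-number formats:

```python
import re

# <objectID>_<sessionTag>_<version>_<serial>_<role>.<ext>
# Roles: O = digital original, T = thumbnail, F = full-resolution JPEG.
FILENAME_RE = re.compile(
    r"(?P<object_id>[A-Z]\d+)_(?P<session>[A-Z]+)_"
    r"(?P<version>V\d+)_(?P<serial>\d{3})_(?P<role>[OTF])\.(?P<ext>\w+)$")

def parse_name(filename):
    """Split an image filename into its provenance fields."""
    m = FILENAME_RE.match(filename)
    if not m:
        raise ValueError(f"{filename} does not follow the naming convention")
    return m.groupdict()

print(parse_name("A387967_AT_V1_001_O.tif"))
# {'object_id': 'A387967', 'session': 'AT', 'version': 'V1',
#  'serial': '001', 'role': 'O', 'ext': 'tif'}
```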
At the absolute minimum, you would embed the data URL
in the file. Ideally, you would add descriptive metadata
about the subject, technical metadata about the file, and
additional information about who produced the image and
how. This example assumes that an established museum
such as the deYoung will continue to have robust data
infrastructure, thus the amount of carried metadata can be
quite minimal.
To assure maximum reusability, we would embed as much
provenance information in the XMP of the file as possible,
describing who helped create the image, where it was shot
and when, what was imaged and how.
Figure 4: Smartphones can automatically capture place/
time data for sessions in images, essential on location.
7.7 Reducing Risk Through Chained Embedded
Provenance
We have avoided describing backup strategies or file
management in this tutorial, as these topics are amply
covered elsewhere. You will need to have a robust backup
system in place that covers each stage in production, from
camera to final archive. For an excellent description of
backup strategies and PIE workflows, see The Dam Book by
Peter Krogh [Kro09].
We conclude by describing a real-world example dealing with the challenge of image sequences in order to produce a robust digital surrogate of a painting (Figure 5). We are producing an RTI, a Reflectance Transformation Image, from a series of 48 images (for a description of RTI, see [MMSL06]). This RTI was produced in partnership with Cultural Heritage Imaging (CHI) and the Fine Arts Museums of San Francisco (FAMSF).
The resulting RTI is derived from the image data in the raw captures; therefore we want to embed pointers to all of the documents required to reproduce the RTI in the future, including the images, processing logs, object information, and desk instructions.
The Camera RAW files are converted to Digital Negative Format (DNG) using Adobe Camera Raw (see Figure 3) and assigned unique filenames. We embed the sequence series IDs into each image, binding them together. All available process history, processing information, and museum object information is structured and embedded in the image XMP.
Provenance documents are gathered in one location. The
documents are parsed into a single XML document, from
which we can extract information blocks to embed in the
DNG files through XMP. At this stage, each image in the
sequence contains a light but coherent set of information
about its sibling files in the sequence, plus preliminary
information about the resulting RTI.
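A rough sketch of that gathering step, with invented element names and file IDs, might look like this:

```python
import xml.etree.ElementTree as ET

# Hypothetical provenance document for one 48-image capture sequence.
prov = ET.Element("provenance", id="A387967_AT_V1")
seq = ET.SubElement(prov, "sequence", rti="A387967_AT_V1_RTI")
for i in range(1, 49):
    ET.SubElement(seq, "image").text = f"A387967_AT_V1_{i:03d}_O.dng"
ET.SubElement(prov, "deskInstructions",
              href="http://example.org/desk/rti_capture_v1.html")

# Persist the master document...
ET.ElementTree(prov).write("A387967_AT_V1_prov.xml")

# ...and serialize one information block for embedding into each
# DNG's XMP, so every file knows its siblings and the resulting RTI.
block = ET.tostring(seq, encoding="unicode")
```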
The RTI is processed, its processing notes are synced with the provenance documents, and the result is pushed into the DNG images. The finalized XML is linked to the image sequence and the RTI, and the RTI's XMP points to the XML file available online.
As additional derivatives are created – JPEGs, movies,
related RTI files – or more information about the object is
discovered, the XML document is updated, providing future
users with up-to-date information and trustworthy
provenance of the file they are evaluating.
Most importantly, the files are carriers of their own life histories, securely woven into the XMP and relatively safe from the deleterious effects of the application-centric iPad effect.
References
[ATP10] Ashley, M., R. Tringham, C. Perlingieri: Last
House on the Hill: Digitally Remediating Data and Media
for Preservation and Access. Journal on Computing and
Cultural Heritage. Friedlander. Ed. (in press)
[DB10] Dunne, T., Bollacker, K.: Avoiding a Digital Dark
Age. In American Scientist. Accessed Aug 2, 2010.
http://www.americanscientist.org/issues/num2/2010/3/
avoiding-a-digital-dark-age/1
[Dil10] "Apple reinventing file access, wireless sharing for
iPad" Retrieved, Aug 3 2010, from
http://www.roughlydrafted.com/2010/01/29/apple-
reinventing-file-access-wireless-sharing-for-ipad/
[DI08] Doerr, M. and D. Iorizzo: The dream of a global
knowledge network—A new approach. ACM Journal on
Computing and Cultural Heritage, Vol. 1, No. 1. 2008.
[Kro09] Krogh, P.: The Dam Book. Digital Asset
Management for Photographers. O’Reilly Press. 2009
[Kun08] Kunze, J.: New Knowledge and Data Preservation
Initiative. California Digital Library (2008)
[RB10] Riley, J., Becker, D: Seeing Standards. Accessed
Aug 2, 2010.
http://www.dlib.indiana.edu/~jenlrile/metadatamap/
[MMSL06] Mudge M., Malzbender T., Schroer C., Lum M.:
New reflection transformation imaging methods for rock
art and multiple-viewpoint display. In VAST:
International Symposium on Virtual Reality, Archaeology
and Intelligent Cultural Heritage (Nicosia, Cyprus, 2006),
Ioannides M., Arnold D., Niccolucci F., Mania K., (Eds.),
Eurographics Association, pp. 195–202.
*********************************************
8. Photogrammetric Principles, Examples and
Demonstration
Tutorial Presenters: Neffra Matthews, Tommy Noble
U.S. Department of the Interior, Bureau of Land
Management, National Operations Center
8.1 Introduction
Since its inception, the principles of photogrammetry, deriving measurements from photographs, have remained constant. Even today, when the fundamentals are followed, mathematically sound and highly accurate results may be achieved. While requirements such as overlapping (stereoscopic) images remain, technological advances in digital cameras, computer processors, and computational techniques, such as sub-pixel image matching, make photogrammetry an even more portable and powerful tool. Extremely dense and accurate 3D surface data can be created with a limited number of photos, little equipment, and brief image capture time. An overlap of 60% has traditionally been required for analytical photogrammetry, providing a very strong base-to-height ratio. Now, because of the highly automated image correlation algorithms available today, a perfect photogrammetric sequence of photos has 66% overlapping images. Points matched in at least three images ("tri-lap") provide a high level of redundancy and a more robust solution. While there are a number of commercial software products available (3DM, PhotoModeler, 2D3, Alice Labs PhotoStruct, to name a few), the basic principles for capturing robust stereoscopic images and the photos needed for camera calibration remain consistent.
8.2 Basics of Stereoscopic Photogrammetry
A crucial element of a successful photogrammetric process is obtaining "good" photographs. Here the term good refers to a series of sharp pictures that have uniform exposure and high contrast, and that fill the frame with the subject. The final accuracy of the resulting dense surface model is governed by the image resolution, or ground sample distance (GSD). The GSD is a result of the resolution of the camera sensor (higher is better), the focal length of the lens, and the distance from the subject (closer is better). The resolution of the images is governed by the number of pixels per given area and the size of the sensor. The camera should be set to aperture priority (preferably f/8), and the ISO, shutter speed, white balance, and other settings adjusted to achieve properly exposed images. To obtain the highest-order results it is necessary to ensure that focal distance and zoom do not change for a given sequence of photos. This can be achieved by taking a single photo at the desired distance using the autofocus function, then turning the camera to manual focus and taping the focus ring in place. To maintain a consistent 66% overlap, the camera must be moved a distance equivalent to 34% of a single photo's field of view (Figure 1). To ensure the entire subject is covered by at least two overlapping photos, position the left extent of the subject in the center of the first frame. Proceed systematically from left to right along the length of the subject and take as many photos as necessary to ensure complete stereo coverage.
Figure 1: Sequence of images needed to capture a basic stereoscopic project and perform a camera calibration.
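To make the capture geometry concrete, the standard pinhole relations can be wrapped in a small helper; the camera values in the example are illustrative:

```python
def ground_sample_distance(pixel_pitch_mm, distance_mm, focal_length_mm):
    """GSD: the size of one sensor pixel projected onto the subject.
    pixel_pitch = physical pixel size (sensor width / pixel count)."""
    return pixel_pitch_mm * distance_mm / focal_length_mm

def camera_base(sensor_width_mm, distance_mm, focal_length_mm, overlap=0.66):
    """Distance to move the camera between exposures so consecutive
    frames overlap by `overlap` (66% overlap = a 34% advance)."""
    footprint = sensor_width_mm * distance_mm / focal_length_mm
    return (1.0 - overlap) * footprint

# Example: a 6000-pixel-wide, 23.5 mm sensor with a 50 mm lens at 1 m.
pitch = 23.5 / 6000                                 # ~0.0039 mm per pixel
print(ground_sample_distance(pitch, 1000, 50))      # ~0.078 mm per pixel
print(camera_base(23.5, 1000, 50))                  # move ~160 mm per frame
```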
8.3 Camera Calibration Sequence
The main purpose of camera calibration is to determine and map the distortions in the lens with respect to the sensor location. This can be accomplished most effectively when there is a large number of auto-correlated points in common between the stereoscopic images and the additional set of calibration photographs. The camera calibration photographs must be captured at the same settings as the stereo photos described in Section 8.2. At least four additional photos are required: two taken with the camera physically rotated 90° to the previous line of stereoscopic photos and two additional photos with the camera rotated 270° (Figure 1). The additional four camera calibration photos may be taken at any location along the line of stereo photographs; however, the best results occur in areas where the greatest number of auto-correlated points may be generated.
Figure 1 illustrates the sequence of images needed to
capture a basic stereoscopic project and perform a camera
calibration. The dashed outlines highlight a pair of
photographs that overlap each other by 66%. Arrows,
indicating the rotation at which photos were taken, show a 0
degree (or landscape) orientation. The solid outlines
highlight the 4 photos required for the camera calibration
process. These photos are taken at 90 degrees (or portrait)
orientation. By stacking the camera calibration photos over
the previously taken stereoscopic photos, maximum benefit
will be achieved. Areas of minimum overlap are illustrated
by shading of the photographs. Note the calibrated target
sticks positioned along the subject.
8.4 Adding Measurability
In addition to maintaining a proper base-to-height ratio with 66% overlap and the camera calibration photo sequence, the next most important component needed to acquire geometrically correct dense surface models is the ability to introduce real-world values, or scale, to a project. This is accomplished by simply adding an object of known dimension (a meter stick or other object) that is visible in at least two stereo models (three photos). It is preferable to have two or more such objects, to ensure visibility and for accuracy assessment. Calibrated target sticks may be used in addition to, or in place of, the object of known dimension. These objects may then be assigned their proper length during processing. Most photogrammetrically based software products conduct a mathematical procedure known as a bundle adjustment. Once an object length is established, the bundle adjustment passes those measurements to all photos and reduces error in the project. As a result, high accuracy may be extended a long distance along a series of photos, allowing the object of known dimension to be placed so as not to detract visually from the subject.
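A minimal sketch of the scaling idea (the bundle adjustment itself also refines camera parameters, which is not shown here): measure the known object's endpoints in model units and scale every point by the ratio to the true length:

```python
import numpy as np

def apply_scale(points, measured_endpoints, known_length_mm):
    """Scale an unscaled photogrammetric point cloud using an object of
    known dimension: the ratio of the true length to the modelled
    length is applied uniformly to every point."""
    p0, p1 = (np.asarray(p, dtype=float) for p in measured_endpoints)
    scale = known_length_mm / np.linalg.norm(p1 - p0)
    return np.asarray(points, dtype=float) * scale

# Example: a meter stick spans 0.82 model units between its endpoints.
cloud = np.random.rand(1000, 3)                       # placeholder points
scaled = apply_scale(cloud, ([0, 0, 0], [0.82, 0, 0]), 1000.0)  # now in mm
```

With two or more known-dimension objects, the second can be left out of the scaling and used to assess the residual error.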
8.5 Conclusion
Capturing photographs for stereoscopic photogrammetric processing may be accomplished in as few as 6 photos for a small subject and can provide extremely dense, high-resolution, geometrically and orthometrically correct 3D digital data sets. Because of the flexibility of this technique, it is possible to obtain high-accuracy 3D data from subjects at almost any orientation (horizontal, vertical, above, or below) relative to the camera position. However, it is important to keep the plane of the sensor and lens parallel to the subject and to maintain a consistent height (or distance) from the subject. Although low- or no-cost automatic image matching alternatives are not currently available for typical analytical photogrammetric processing, the same sub-pixel matching algorithms used in structure from motion and gigapixel panoramas will undoubtedly lead to this niche being filled. Regardless, incorporating the basic photogrammetric image capture steps described above (correct base-to-height ratio, addition of camera calibration photos, and adding an object of known dimension) into other capture methods will undoubtedly increase their geometric accuracy. In addition, photogrammetric image capture can be done in concert with the RTI image capture process [MMSL06]. This combination will result in a geometrically correct RTI when the photogrammetric preprocessing is completed. Furthermore, as the dense surface model is derived directly from the images, there is no need for further registration processing. The resulting dense surface model and image texture may be output directly to an OBJ or XYZRGB format. Additional processing of these files may produce .ply, .las, or .stl files, as well as a variety of solid model printouts. It is also possible to use gigapixel techniques to produce mosaiced stereoscopic "pairs" of images, resulting in spectacularly dense surface models. These very dense surface models may be processed through the chain, resulting in Algorithmic Renderings of entire rock art panels.
References
[Matthews08] MATTHEWS N. A.: Aerial and Close-Range Photogrammetric Technology: Providing Resource Documentation, Interpretation, and Preservation. Technical Note 428. U.S. Department of the Interior, Bureau of Land Management, National Operations Center, Denver, Colorado, 2008. 42 pp.
[MMSL06] MUDGE M., MALZBENDER T., SCHROER
C., LUM M.: New reflection transformation imaging
methods for rock art and multiple-viewpoint display. In
VAST: International Symposium on Virtual Reality,
Archaeology and Intelligent Cultural Heritage (Nicosia,
Cyprus, 2006), Ioannides M., Arnold D., Niccolucci F.,
Mania K.,(Eds.), Eurographics Association, pp. 195–202.
*********************************************
9. 3D models from un-calibrated images and use of
MeshLab
Tutorial Presenter: Matteo Dellepiane
Visual Computing Lab, ISTI - CNR, Italy
9.1 Introduction
In the last few years, the advancement of active acquisition systems has been impressive, in terms of both hardware and software improvements. Nevertheless, there is still a tradeoff between cost and accuracy in the acquisition of geometry, which prevents any single technology from covering the whole range of possible objects. The best solution would be to obtain a dense reconstruction of a scene with a completely "passive" approach: only images of the object would be used to automatically infer dense geometric information about the object of interest. This would also overcome the expertise required by the photogrammetric approach. Recently, dense stereo matching approaches have been made fast and robust enough to handle a large number of images while obtaining accurate results. This opens the possibility of applying dense stereo matching in many Cultural Heritage applications, from archaeological excavation to visualization and restoration support.
9.2 Dense stereo matching
Dense stereo matching approaches derive from the success of structure-from-motion techniques [BL05, SSS06], which produce sparse feature points, and have recently been demonstrated to operate effectively. Instead of obtaining a discrete depth map, as is common in many stereo methods [SS01], dense stereo matching aims at reconstructing a sub-pixel-accurate continuous depth map. Essentially, stereo matching is applied to each pixel of the images, considering all possible image pairs, starting from an initial estimate provided by some descriptors (e.g. SIFT features). The method has proved robust enough to be applied even to community photo collections [GSC07], obtaining impressive results. Moreover, it was recently shown that it can achieve a high degree of accuracy [FP10].
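As a small two-view illustration of the idea (not the multi-view pipelines cited above), OpenCV's semi-global matcher computes a sub-pixel disparity map from a rectified stereo pair; the filenames here are hypothetical:

```python
import cv2

# Two rectified views of the object.
left = cv2.imread("view_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("view_right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching, a widely used dense two-frame matcher.
matcher = cv2.StereoSGBM_create(minDisparity=0,
                                numDisparities=64,  # search range, multiple of 16
                                blockSize=5)
disparity = matcher.compute(left, right).astype("float32") / 16.0

# For a calibrated pair, depth = focal_length_px * baseline / disparity.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("disparity.png", vis.astype("uint8"))
```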
While usually completely automatic, dense stereo matching systems are computationally intensive. Recently, the code for some of them has been made available [SSS06], but most of the practical solutions for the reconstruction of geometry from images are based on Web services [VG06]. This permits the use of the resources of large clusters of computers, so that the whole matching operation can be completed within minutes.
Although reconstruction from un-calibrated images can't be considered a measurement system, it is becoming a widely used approach, especially in the context of archaeological excavations [HPA10], where the interpretation of data is more important than geometric accuracy.
Figure 1: A snapshot of the interface that loads Arc3D data in MeshLab.
9.3 Geometry from images using a Web Service and an
Open Source Tool
The Arc3D web service (http://www.arc3d.be) [VG06] was one of the results of the EPOCH Network of Excellence, which aimed at improving the quality and effectiveness of the use of Information and Communication Technology for Cultural Heritage. Using a simple upload tool, and following some guidelines about how to acquire the data, it is possible to obtain a 3D model of an object of interest. Moreover, the output of the web service (a group of depth maps associated with each uploaded image, together with a quality map for each pixel of each image) can be loaded and processed in MeshLab (http://meshlab.sourceforge.net/) [CCC08], an open source, portable, and extensible system for the processing and editing of unstructured 3D triangular meshes.
Figure 1 shows how the Arc3D output data are visualized in MeshLab. Using the interface, it is possible to select only a subgroup of images from which the final geometry is generated, and to define a number of useful operations (masking, hole filling, smoothing) to improve the quality of the result.
Figure 2 (top) shows the output of the geometry reconstruction. Using MeshLab, it is also possible to further improve the quality of the 3D model by applying a simple pipeline made of three steps (a reconstruction sketch follows the list):
Mesh Cleaning: with simple editing and processing tools, unwanted data and the main geometric artifacts are removed.
Remeshing: using a reconstruction method (e.g. the Poisson approach), the geometry is re-created, preserving the geometric features while removing the typical noise generated by image-based reconstruction.
Color transfer: a filter transfers the color information from the original to the reconstructed model.
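A minimal sketch of the remeshing and color-transfer steps, using the open source Open3D library in place of MeshLab's filters (the input filename is hypothetical):

```python
import open3d as o3d

# Point set exported from the Arc3D depth maps after cleaning.
pcd = o3d.io.read_point_cloud("arc3d_points.ply")
pcd.estimate_normals()

# Remeshing: Poisson surface reconstruction re-creates a watertight
# surface while smoothing the noise typical of image-based data.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Color transfer: Poisson interpolates the input point colors onto the
# new vertices; a nearest-neighbour transfer from the original cloud is
# an alternative when that is not sufficient.
o3d.io.write_triangle_mesh("reconstructed.ply", mesh)
```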
Figure 2: The model produced using a subset of the
original images, before and after the processing pipeline
of MeshLab
Figure 2 (bottom) shows the model after applying the pipeline: while the geometric features are preserved, the mesh is cleaner and more complete.
9.4 Conclusion
The use of dense stereo matching techniques for the reconstruction of 3D models from images is a very promising direction of work, especially for applications in the context of Cultural Heritage. Preliminary but structured test cases are currently under development, especially in support of archaeological excavations.
Nevertheless, the limitations of the approach must be made clear, in order to understand the possible applicability and usefulness of the method. The main limitations are essentially:
Kind of objects and acquisition environment: in order to exploit dense stereo matching at its best, there are several prerequisites on the material of the object, the photographic campaign strategy, and the external environment. This can prevent the acquisition of certain kinds of objects.
Accuracy and scaling: since the system starts from uncalibrated images, the scale of the object is not known in advance; hence, a scaling operation is needed. Moreover, in-depth testing of the accuracy of the method is necessary to show that this can be a low-cost alternative to 3D scanning.
In conclusion, geometry reconstruction from images will probably be applied widely in the near future in the context of Cultural Heritage. The availability of open source tools to process and present the data can boost its usefulness for the whole community.
References
[BL05] BROWN M., LOWE D. G.: Unsupervised 3d object
recognition and reconstruction in unordered datasets. In
3DIM’05: Proceedings of the Fifth International
Conference on 3-D Digital Imaging and Modeling
(Washington, DC, USA, 2005), IEEE Computer Society,
pp. 56–63.
[CCC08] CIGNONI P., CALLIERI M., CORSINI M.,
DELLEPIANE M., GANOVELLI F., RANZUGLIA G.:
Meshlab: an open-source mesh processing tool. In Sixth
Eurographics Italian Chapter Conference (2008), pp.
129–136.
[FP10] FURUKAWA Y., PONCE J.: Accurate, dense, and robust multiview stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence 32, 8 (Aug. 2010), 1362–1376.
[GSC07] GOESELE M., SNAVELY N., CURLESS B., HOPPE H., SEITZ S.: Multi-view stereo for community photo collections. In Proc. ICCV (2007), pp. 1–8.
[HPA10] HERMON S., PILIDES D., AMICO N., D'ANDREA A., IANNONE G., CHAMBERLAIN M.: Arc3D and 3D laser scanning: a comparison of two alternate technologies for 3D data acquisition. In CAA 2010, Granada, Spain (2010).
[SS01] SCHARSTEIN D., SZELISKI R.: A taxonomy and
evaluation of dense two-frame stereo correspondence
algorithms. International Journal of Computer Vision 47
(2001), 7–42.
[SSS06] SNAVELY N., SEITZ S. M., SZELISKI R.: Photo
tourism: exploring photo collections in 3d. In
SIGGRAPH’06: ACM SIGGRAPH 2006 Papers (New
York, NY, USA, 2006), ACM, pp. 835–846.
[VG06] VERGAUWEN M., GOOL L. V.: Web-based 3d
reconstruction service. Mach. Vision Appl. 17, 6 (2006),
411–426.
137
... In terms of the technicality of digital capture, close-range photogrammetry is now relatively well-known, and the technical details will not be covered in detail in this article. For the readership, the foundations of close-range photogrammetry principles is accessible (Luhmann et al., 2006;Mudge et al., 2010;Luhmann et al., 2014), and best practice for mass photogrammetry using heterogenous devices and software, within environments that do not favour photography is available (Ch'ng et al., 2019c). ...
Article
Purpose The need to digitise is an awareness that is shared across our community globally, and yet the probability of the intersection between resources, expertise and institutions are not as prospective. A strategic view towards the long-term goal of cultivating and digitally upskilling the younger generation, building a community and creating awareness with digital activities that can be beneficial for cultural heritage is necessary. Design/methodology/approach The work involves distributing tasks between stakeholders and local volunteers. It uses close-range photogrammetry for reconstructing the entire heritage site in 3D, and outlines achievable digitisation activities in the crowdsourced, close-range photogrammetry of a 19th century Cheah Kongsi clan temple located in George Town, a UNESCO World Heritage Site in Penang, Malaysia. Findings The research explores whether loosely distributing photogrammetry work that partially simulates an unorganised crowdsourcing activity can generate complete models of a site that meets the criteria set by the needs of the clan temple. The data acquired were able to provide a complete visual record of the site, but the 3D models that was generated through the distributed task revealed gaps that needed further measurements. Practical implications Key lessons learned in this activity is transferable. Furthermore, the involvement of volunteers can also raise awareness of ownership, identity and care for local cultural heritage. Social implications Key lessons learned in this activity is transferable. Furthermore, the involvement of volunteers can also raise awareness of identity, ownership, cultural understanding, and care for local cultural heritage. Originality/value The value of semi-formal activities indicated that set goals can be achieved through crowdsourcing and that the new generation can be taught both to care for their heritage, and that the transfer of digital skills is made possible through such activities. The mass crowdsourcing activity is the first of its kind that attempts to completely digitise a cultural heritage site in 3D via distributed activities.
... Our main goal is to obtain 3D reconstructions of cultural heritage objects that can faithfully capture the form and colour of the actual artefacts. The 3D models are neither digital modelling that requires designers' interpretations nor technology-savvy documentation that involves additional geometric information or interior measurement [44,45]. Mass photogrammetry rarely achieves professional, archivable copies due to the high variability of imaging devices, lens settings and lighting conditions. ...
Article
Full-text available
Disorganised and self-organised crowdsourcing activities that harness collective behaviours to achieve specific level of performance and task completeness are not well understood. Such phenomena become indistinct when highly varied environments are present, particularly for crowdsourcing photogrammetry-based 3D models. Mass photogrammetry can democratise traditional close-range photogrammetry procedures by outsourcing image acquisition tasks to a crowd of non-experts to capture geographically scattered 3D objects. To improve public engagement, we need to understand how individual behaviour in collective efforts work in traditional disorganised crowdsourcing and how it can be organised for better performance. This research aims to investigate the effectiveness of disorganised and self-organised collaborative crowdsourcing. It examines the collaborative dynamics among participants and the trends we could leverage if team structures were incorporated. Two scenarios were proposed and constructed: asynchronous crowdsourcing, which implicitly aggregates isolated contributions from disorganised individuals; and synchronous collaborative crowdsourcing, which assigns participants into a crowd-based self-organised team. Our experiment demonstrated that a self-organised team working in synchrony can effectively improve crowdsourcing photogrammetric 3D models in terms of model completeness and user experience. Through our study, we demonstrated that this crowdsourcing mechanism can provide a social context where participants can exchange information via implicit communication, and collectively build a shared mental model that pertains to their responsibilities and task goals. It stimulates participants’ prosocial motivation and reinforces their commitment. With more time and effort invested, their positive sense of ownership increases, fostering higher dedication and better contribution. Our findings shed further light on the potentials of adopting team structures to encourage effective collaborations in conventionally individual-based voluntary crowdsourcing settings, especially in the digital heritage domain.
... Reflectance Transformation Imaging is a technique that captures the surface shape and colour of opaque materials at per pixel level using traditional photographic techniques [16]. RTI images are produced by photographing an object several times from a fixed position but illuminating it from multiple angles. ...
Article
Full-text available
The frieze of the Palace of the stuccoes, dated between the 5 th and 6 th century BC, was a polychrome Maya relief discovered in the 1907 in Yucatán, Mexico. It was documented in watercolours and hand tinted photographs by Adela Breton. After years of exposure to the harsh environmental conditions of the Maya area, the colours and the stucco relief disappeared. The aim of the project is to develop a hybrid digital-analogue printing method for reconstructing the appearance of the original polychrome relief based on digitised hand-made records. A description of the process to produce full colour images combining digital and photomechanical printing is provided. Using photopolymer plates, an intaglio printing process has been used to produce colour images, whilst inverse relief plates have been created based on height maps to transfer a positive embossing on paper when applying pressure on a printing press. The influence of physical parameters related to the appearance is studied. Reflectance Transformation Imaging was carried out to record the colour and surface shape of the prints. Measurements of gloss were made on relief inkjet prints and intaglio prints on paper to compare the outcomes of commercial 2.5D print and the method proposed here. By modifying an analogue process with digital technology, it is possible to incorporate ancient materials to the printmaking process and therefore approach naturally the appearance of the original. On the other hand, incorporating imaging techniques and quality measurements enables to improve the quality in analogue printing techniques.
Article
The article is of a historiographic nature and is intended to record the main stages in the development of hardware-software complexes and systems for creating digital images (DI) of museum storage items and their (DI) application in intra-museum activities: from solving practical problems of documenting the discovery (acquisition) of museum funds with historical artifacts, their existence in museums, conservation and restoration, creation and development of catalogues, to the implementation of historical and art history research, implemented using the methods of mathematical statistics and a wide range of modern approaches, technologies and scientific disciplines of Data Science. For the first time in Russian historiography, the expansion of the range of research tasks is considered, which has been going on from the mid-1970s to the present and has become possible, on the one hand, in connection with the growing understanding of the information potential of DI of museum objects, and on the other hand, with the digital transformation of traditional technical and technological methods of analysis of museum items, highlights the history of design and development of specialized hardware and software systems; an original periodization of the identified processes is proposed, a brief description of each of the 4 identified stages is given (including a methodological breakthrough that occurred at the turn of the 20th-21st centuries) and the results of the most significant scientific projects are described.
Book
Full-text available
Artistic goods represent a priceless asset of our cultural patrimony, since they play a crucial role in defining and understanding the identity of communities. Nevertheless, they are not always adequately protected against possible dangers and hazards or against the effects of time. In recent decades, new technologies, such as digital control and 3D reconstruction, have seen great development in their application to art collections, improving monitoring activities, safety checks, and the interface with the community. The ARCO 2020 Conference collected contributions from different areas concerned with the preservation, enhancement, and protection of art goods exhibited in museums. This volume collects the proceedings of the sessions on Design and Museum Design, Digital Heritage, Historical Research and Posters of the ARCO 2020 international conference, which took place on 21-23 September 2020 in Florence, Italy, at the Dipartimento di Architettura (DiDA).
Article
Ancient graffiti are evidence of the past, scattered all over the globe and common to many cultures. Documentation is a crucial step for their study, and must allow for clear interpretation. There are a variety of traditional methods to document ancient graffiti, from sketches to frottages, to contact tracing and photography. Digital instruments and other innovative methods developed during recent years in the field of heritage documentation can also be successfully applied to graffiti, improving the quality of results and increasing their readability. This paper presents the principal methods used for graffiti documentation and discusses the trends over the last few decades; it also presents two different case studies where the principal methods are tested and reviewed.
Chapter
Computer graphics tools and techniques enable researchers to investigate cultural heritage and archaeological sites. They can facilitate documentation of real-world sites for further investigation, and enable archaeologists and historians to accurately study a past environment through simulations. This chapter explores how light plays a major role in examining computer-based representations of heritage. We discuss how light is both documented and modelled today using computer graphics techniques and tools. We also identify why both physical and historical accuracy in modelling light is becoming increasingly important for studying the past, and how emerging technologies such as High Dynamic Range (HDR) imaging and physically based rendering are necessary to accurately represent heritage.
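As one concrete example of the HDR imaging mentioned above, the sketch below merges bracketed exposures into a radiance map with a simple hat-shaped weighting. It assumes the input images are already linearised, and so omits the camera-response recovery that full HDR pipelines (e.g. Debevec and Malik's) perform.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """images: list of (H, W) linear float arrays in [0, 1]; times in seconds."""
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: trust mid-tones most
        num += w * img / t                  # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)      # weighted average radiance map
```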
Article
Reflectance Transformation Imaging (RTI) is a non-invasive method of examination which can be used to document and visualize the surface texture of objects, including artworks. By imaging an object under different directions of illumination, topographical maps of a low-relief surface can be produced which, when further processed, can be used as an interactive visual aid. This paper summarizes the current state of research on micro-RTI and presents an investigation into a micro-RTI system comprising a digital microscope and a miniature lighting-array dome, originally developed to study small areas of surface texture. The article reviews the techniques required to use micro-RTI adequately and evaluates the system as a practical tool for conserving and documenting modern and contemporary easel paintings. This was determined by evaluating its data processing and its ability to accurately document test panels and case-study paintings, and by comparing it with the alternative documentation methods of standard microscopy and laser scanning. Micro-RTI was found to be a useful tool for modern and contemporary paintings and their often complex and/or delicate surfaces, where subtle surface alterations can result in visually distracting features.
Article
Radiography testing (RT) is an effective, non-invasive method for detecting hidden structures, details and fracture regions of valuable ancient ceramic objects. To maximize the detail extracted from radiographs and to better interpret the images, we use an efficient image-processing method that reveals the design and defect regions of objects. For larger radiographic images, the algorithm should ideally be runnable on a desktop computer; algorithms that demand large amounts of memory and CPU time force the images to be resized, lowering the quality of the output. In this study, the fast bilateral filter (FBF) was implemented to enhance the visualization of radiographs of ceramic objects from the archaeological museum of Burriana, Spain. The digital radiographs were provided by a computed radiography (CR) system at the Laboratory of Documentation and Registration in the University Institute for the Restoration of the Patrimony (IRP) of the Universitat Politècnica de València (UPV), Spain. The FBF method, based on the Fourier series and the truncated Gaussian function, was successfully applied to radiographs of antique objects. The results show that internal structures, manufacturing processes, and fractures are better visualized while edges and fine details are preserved. According to the opinions and evaluations of experts in radiography and in the conservation and restoration of cultural heritage, design details were better visualized than in the original radiographs.
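The sketch below is not the Fourier-series fast bilateral filter the study implements, but a plain bilateral filter via OpenCV that illustrates the same edge-preserving enhancement idea; the file names and parameter values are hypothetical.

```python
import cv2

radiograph = cv2.imread("radiograph.png", cv2.IMREAD_GRAYSCALE)
# d: neighbourhood diameter; sigmaColor/sigmaSpace: range and spatial kernels
smoothed = cv2.bilateralFilter(radiograph, d=9, sigmaColor=50, sigmaSpace=9)
detail = cv2.subtract(radiograph, smoothed)                # fine structure
enhanced = cv2.addWeighted(smoothed, 1.0, detail, 2.0, 0)  # boost detail 2x
cv2.imwrite("radiograph_enhanced.png", enhanced)
```

Because the range kernel suppresses averaging across strong intensity edges, the residual `detail` layer isolates fine internal structure that can then be amplified without blurring fracture boundaries.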
Chapter
Full-text available
The chapter addresses the analytical possibilities of applying technical drawing, digital photography, and combined techniques to the analysis of prehistoric lithic materials. It seeks to demonstrate how the two techniques are complementary and how their combined use can benefit archaeological analyses.
Conference Paper
Full-text available
Thanks to the WebGL graphics API specification for the JavaScript programming language, the possibility of using GPU capabilities in a web browser without an ad-hoc plug-in is now becoming a reality. This paper introduces SpiderGL, a JavaScript library for developing 3D graphics web applications. SpiderGL provides data structures and algorithms to ease the use of WebGL, to define and manipulate shapes, to import 3D models in various formats, and to handle asynchronous data loading. We show the potential of this novel library with a number of demo applications. Furthermore, we introduce MeShade, a SpiderGL-based web application for shader material editing from within the web browser, which produces all the code needed for embedding interactive 3D model visualization capabilities inside web pages and online repositories.
Conference Paper
Full-text available
Reflection Transformation Imaging has proved to be a powerful method to acquire and represent the 3D reflectance properties of an object, displaying them as a 2D image. Recently, Polynomial Texture Maps (PTMs), which are relightable images created from a set of photos of the object taken under several different lighting conditions, have been used in the Cultural Heritage field to document and virtually inspect several sets of small objects, such as cuneiform tablets and coins. In this paper we explore the possibility of producing high-quality PTMs of medium- and large-size objects. The aim is to analyze the acquisition pipeline, resolving all the issues related to the size of the object and the conditions of acquisition. We discuss issues regarding acquisition planning and data gathering. We also present a new tool to interactively browse high-resolution PTMs. Moreover, we perform a quality assessment in order to study the degradation in quality of the PTMs with respect to the number and position of the lights used to acquire them. The results of our acquisition system are presented with some examples of PTMs of large artifacts, such as a sarcophagus measuring 2.4 × 1 m. PTM can be a good alternative to 3D scanning for capturing and representing certain classes of objects, such as bas-reliefs, having lower costs in terms of acquisition equipment and data-processing time.
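For readers unfamiliar with how a PTM browser relights a surface, here is a minimal sketch, assuming a (6, H, W) coefficient array in the biquadratic ordering used in the earlier fitting sketch; it simply evaluates the polynomial for a chosen light direction.

```python
import numpy as np

def relight(coeffs, lu, lv):
    """coeffs: (6, H, W) PTM coefficients; (lu, lv): projected light direction."""
    a0, a1, a2, a3, a4, a5 = coeffs
    img = a0*lu*lu + a1*lv*lv + a2*lu*lv + a3*lu + a4*lv + a5
    return np.clip(img, 0.0, 1.0)

# e.g. sweep the light across the surface to inspect raking-light detail:
# frames = [relight(coeffs, np.cos(t), np.sin(t)) for t in np.linspace(0, np.pi, 16)]
```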
Conference Paper
Full-text available
We offer two new methods of documenting and communicating cultural heritage information using Reflection Transformation Imaging (RTI). One imaging method is able to acquire Polynomial Texture Maps (PTMs) of 3D rock art possessing a large range of sizes, shapes, and environmental contexts. Unlike existing PTM capture methods requiring known light source positions, we rely on the user to position a handheld light source, and recover the lighting direction from the specular highlights produced on a black sphere included in the field of view captured by the camera. The acquisition method is simple, fast, very low cost, and easy to learn. A complementary method of integrating digital RTI representations of subjects from multiple viewpoints is also presented. It permits RTI examination "in the round" in a unified, interactive, image-based representation. Collaborative tests between Cultural Heritage Imaging, Hewlett-Packard Labs, and the UNESCO Prehistoric Rock-Art Sites in the Côa Valley, a World Heritage Site in Portugal, suggest this approach will be very beneficial when applied to paleolithic petroglyphs of various sizes, both in the field and in the laboratory. These benefits over current standards of best practice can be generalized to a broad range of cultural heritage material.
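A minimal sketch of the highlight-based light recovery described above follows. It assumes an orthographic view along +Z with the image y-axis pointing up, and takes the highlight's pixel offset from the centre of the black sphere, mirroring the view direction about the sphere normal at that point.

```python
import numpy as np

def light_from_highlight(dx, dy, radius):
    """dx, dy: highlight offset from sphere centre in pixels; radius in pixels."""
    nx, ny = dx / radius, dy / radius
    nz = np.sqrt(max(0.0, 1.0 - nx*nx - ny*ny))
    n = np.array([nx, ny, nz])        # sphere normal at the highlight
    v = np.array([0.0, 0.0, 1.0])     # viewer direction (orthographic assumption)
    l = 2.0 * np.dot(n, v) * n - v    # mirror reflection recovers the light
    return l / np.linalg.norm(l)
```

Because a mirror highlight appears where the surface normal bisects the view and light directions, reflecting the view vector about that normal yields the light direction directly, with no calibration of the light position required.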
Conference Paper
Full-text available
Accurate virtual reconstruction of real world objects has long been a desired goal of image-based computer graphics. Usually this involves a lengthy capture process where an object is photographed from different viewpoints and illumination conditions. Using this collection of input images, we can now re-render the object from any viewing angle or lighting condition. However, acquiring a dense sampling of both the lighting and view space is time consuming. We carry out an analysis on this combined lighting and view space to find the optimal sampling given a restricted image budget. We also analyze the order of interpolation and find that improved results are obtained by interpolating first in viewpoint and second in lighting, the reverse of the usual order.
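The interpolation-order finding can be pictured with the toy sketch below, which blends an image grid first across viewpoint and then across lighting; the array layout is assumed, not taken from the paper. Note that with plain bilinear weights the two orders are algebraically identical, so the paper's reported gains come from the view-dependent reconstruction applied at each stage, which this sketch does not model.

```python
import numpy as np

def interp_view_then_light(grid, view_t, light_t):
    """grid: (V, L, H, W) image array; view_t, light_t: fractional indices."""
    v0, l0 = int(view_t), int(light_t)
    fv, fl = view_t - v0, light_t - l0
    # first interpolate in viewpoint...
    row0 = (1 - fv) * grid[v0, l0]     + fv * grid[v0 + 1, l0]
    row1 = (1 - fv) * grid[v0, l0 + 1] + fv * grid[v0 + 1, l0 + 1]
    # ...then in lighting
    return (1 - fl) * row0 + fl * row1
```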
Article
We present a system for interactively browsing and exploring large unstructured collections of photographs of a scene using a novel 3D interface. Our system consists of an image-based modeling front end that automatically computes the viewpoint of each photograph as well as a sparse 3D model of the scene and image to model correspondences. Our photo explorer uses image-based rendering techniques to smoothly transition between photographs, while also enabling full 3D navigation and exploration of the set of images and world geometry, along with auxiliary information such as overhead maps. Our system also makes it easy to construct photo tours of scenic or historic locations, and to annotate image details, which are automatically transferred to other relevant images. We demonstrate our system on several large personal photo collections as well as images gathered from Internet photo sharing sites.
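To give a flavour of the image-based modelling front end such a system builds on, here is a hedged two-view sketch using OpenCV: feature matching, essential-matrix estimation, and relative pose recovery. The actual system uses SIFT matching and full bundle adjustment over many photographs; the names here are illustrative.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """img1, img2: grayscale images; K: 3x3 camera intrinsics matrix."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
    return R, t   # rotation and unit-scale translation of the second camera
```

Chaining such pairwise estimates over a whole collection, then refining jointly, is the essence of the structure-from-motion pipeline that places each photograph in a common 3D frame.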
Conference Paper
Polynomial Texture Mapping (PTM; Malzbender et al. 2001) uses multiple images to capture the reflectance properties of a given surface. Multiple captures may be combined in order to produce interactive, relit records of the material recorded. In addition, recent research enables the capture and rendition of interactive PTMs for detailed examination of surface details. Cultural heritage examples of the technology include work on cuneiform tablets, numismatic archives and lithic artefacts. This paper will describe the PTM data capture and processing technologies developed by the University of Southampton, with support from Hewlett-Packard Labs, Palo Alto. It will also identify the perceived archaeological potential of additional recording to supplement the standard PTM datasets, including the recording of the surface BRDF (bi-directional reflectance distribution function) and the accurate extraction of surface normals. Such data offer considerable, under-exploited value in the production of comparative conservation datasets. They also enable new forms of analysis, and the possibility of a step change in the visual fidelity of reconstructions of archaeological surfaces. Case studies will include ongoing work on the examination of Roman wall paintings, Roman stylus writing tablets, medieval wood, bronze artefacts from maritime contexts, Neolithic architectural plaster, excavation contexts, brick stamps and sculpture. Each of these presents particular challenges and opportunities for recording, analysis and presentation. The paper will conclude by identifying the synergies between PTM, related imaging technologies, photogrammetry and non-contact digitisation through recent case studies on African rock art and on excavated material from the Portus Project (www.portusproject.org). It will identify the ongoing challenges and proposed future developments.
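The "accurate extraction of surface normals" mentioned above can be sketched directly from fitted PTM coefficients: the biquadratic is maximised analytically and the maximising light direction is taken as the surface normal, following Malzbender et al. This assumes the (6, H, W) coefficient layout used in the fitting sketch earlier.

```python
import numpy as np

def ptm_normals(coeffs, eps=1e-9):
    """coeffs: (6, H, W) PTM coefficients; returns (H, W, 3) unit normals."""
    a0, a1, a2, a3, a4, a5 = coeffs
    # Set the partial derivatives of the biquadratic to zero and solve 2x2:
    det = 4.0 * a0 * a1 - a2 * a2
    det = np.where(np.abs(det) < eps, eps, det)   # guard degenerate pixels
    lu0 = (a2 * a4 - 2.0 * a1 * a3) / det
    lv0 = (a2 * a3 - 2.0 * a0 * a4) / det
    nz = np.sqrt(np.clip(1.0 - lu0**2 - lv0**2, 0.0, 1.0))
    return np.stack([lu0, lv0, nz], axis=-1)
```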
Article
Kurt D. Bollacker discusses how important digital data could be preserved for future generations. The general problem of data preservation is twofold. The first matter is preservation of data itself. The physical media on which data are written must be preserved, and this media must continue to accurately hold the data that are entrusted to it. The second part of the equation is the comprehensibility of the data. Unlike in the analog world, digital data representations do not inherently degrade gracefully, because digital encoding methods represent data as a string of binary digits. A common digital encoding mechanism, pulse code modulation (PCM), represents the total amplitude value of an audio signal as a binary number, so damage to a random bit causes an unpredictable amount of actual damage to the signal. Digital data could be reverted into an analog form and traditional media-preservation techniques could be used.
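A small worked example makes the PCM fragility described above concrete: flipping a single bit in a 16-bit sample changes the amplitude by an amount that depends entirely on which bit happens to be hit.

```python
sample = 1000                      # a 16-bit PCM amplitude value
lsb_flipped = sample ^ (1 << 0)    # low-order bit hit: amplitude off by 1
msb_flipped = sample ^ (1 << 14)   # high-order bit hit: amplitude off by 16384
print(sample, lsb_flipped, msb_flipped)   # 1000 1001 17384
```

An analog groove scratch degrades the signal roughly in proportion to the damage; a single corrupted high-order bit, by contrast, produces a full-scale click.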
Article
Can you find your digital photographs when you need them, or do you spend more time rifling through your hard drive and file cabinets than you'd like? Do you have a system for assigning and tracking content data on your photos? If you make a living as a photographer, do your images bear your copyright and contact information, or do they circulate in the marketplace unprotected? As professional photographer and author Peter Krogh sees it, "your DAM system is fundamental to the way your images are known, both to you and to everyone else." DAM, or Digital Asset Management, in the world of digital photography refers to every part of the process that follows the taking of the picture, through final output and permanent storage. Anyone who shoots, scans or stores digital photographs, is practicing some form of digital asset management. Unfortunately, most of us don't yet know how to manage our files (and our time) very systematically, or efficiently. In The DAM Book: Digital Asset Management for Photographers, Krogh brings clarity to the often overwhelming task of managing digital photographs, with a solid plan and practical advice for fellow photographers on how to file, find, protect and re-use photographs. Following a thorough overview of the DAM system and de-mystifications of metadata and digital archiving, Krogh focuses on best practices for digital photographers using Adobe Photoshop CS2. He explains how to use Adobe Bridge, the new CS2 navigational software that replaces the File Browser introduced in Photoshop 7, with full details on integrating Bridge, Camera Raw and Digital Asset Management software. Compellingly presented in four-color format, The DAM Book: Digital Asset Management for Photographers brings Krogh's award-winning creative approach to a subject that could have been technically intimidating. Instead, Krogh's twenty years of experience and instructive visual storytelling make this material not only accessible, but compulsory reading for serious digital photographers.
Article
Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods. Our taxonomy is designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can easily be extended to include new algorithms. We have also produced several new multi-frame stereo data sets with ground truth and are making both the code and data sets available on the Web. Finally, we include a comparative evaluation of a large set of today's best-performing stereo algorithms.
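As a pointer to what one entry in such a taxonomy looks like in practice, here is a hedged sketch of classic local block matching using OpenCV's StereoBM on a rectified image pair; the taxonomy's own test bed is a stand-alone C++ framework, and the file names here are hypothetical.

```python
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = bm.compute(left, right).astype("float32") / 16.0  # fixed-point -> px
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```

In the taxonomy's terms this combines a sum-of-absolute-differences matching cost, square-window aggregation, and winner-take-all disparity selection, the simplest point in the design space the paper evaluates.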