choosing camera configurations and algorithms that can achieve high-resolution reconstructions where they are most critical, and relatively low-resolution reconstructions everywhere else.

Despite our achievements over several years, we are still in the early stages of a long-term effort. Many problems remain to be solved, and there is much work to be done to achieve the scale, fidelity, flexibility, and completeness that we envision.
In the near term, we will continue to reconstruct and process various objects and mock procedures. For example, we are procuring rapid-prototype models of a real skull for a mock sagittal synostosis procedure. Soon we hope to provide anaglyphic stereo 3D data sets on the Web, letting viewers control the viewpoint and time. We are also trying to solve problems with the network connection to the Tablet PC used for authoring and viewing: although a wireless paradigm is clearly desirable, the VR-Cube hardware seems to interfere with the wireless connection.
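Anaglyphic stereo data sets of the kind mentioned above are commonly encoded as red-cyan composites viewable with inexpensive paper glasses. The sketch below shows one such encoding (the particular channel assignment is a common convention, not necessarily the one our system uses):

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Combine a stereo pair into a red-cyan anaglyph.

    left_rgb, right_rgb: HxWx3 uint8 arrays of the same size.
    The red channel comes from the left-eye view; the green and
    blue channels come from the right-eye view, so red-cyan
    glasses route each view to the intended eye.
    """
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]    # red   <- left eye
    anaglyph[..., 1] = right_rgb[..., 1]   # green <- right eye
    anaglyph[..., 2] = right_rgb[..., 2]   # blue  <- right eye
    return anaglyph
```

Serving one composite image per frame this way keeps the Web data sets small, at the cost of some color fidelity.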
A primary long-term goal is to scale our system up to enable the acquisition and reconstruction of (at least) a surgical table, the surrounding area, and the involved medical personnel. In fact, within several years we plan to equip an intensive care unit with numerous heterogeneous cameras to capture a variety of medical procedures. This involves solving nontrivial problems such as resolution, visibility, computational complexity, and massive model management. To assist with camera placement, we are working on a mathematical and graphical tool to help estimate and visualize acquisition information and uncertainty throughout the acquisition volume for a particular candidate set of cameras.
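One way such a placement tool might score a candidate camera set is to sample the acquisition volume and count, for each sample point, how many cameras observe it; sparsely observed regions are then flagged as high-uncertainty. The sketch below assumes idealized pinhole cameras with a symmetric field of view and ignores occlusion, both of which the real tool would have to handle:

```python
import numpy as np

def coverage_map(cameras, grid_pts, max_range=5.0, fov_deg=60.0):
    """For each sample point in the acquisition volume, count the
    candidate cameras that observe it.

    cameras  : list of (position, view_dir) pairs, each a 3-vector
               (view_dir need not be normalized).
    grid_pts : Nx3 array of sample points in the volume.
    Returns an N-vector of counts; low counts mark regions the
    candidate set reconstructs with high uncertainty.
    """
    half_fov = np.deg2rad(fov_deg) / 2.0
    counts = np.zeros(len(grid_pts), dtype=int)
    for pos, view in cameras:
        view = np.asarray(view, float)
        view /= np.linalg.norm(view)
        rays = grid_pts - np.asarray(pos, float)   # camera -> point
        dists = np.linalg.norm(rays, axis=1)
        cos_angle = rays @ view / np.maximum(dists, 1e-9)
        # A point is seen if it lies within range and inside the cone.
        counts += (dists <= max_range) & (cos_angle >= np.cos(half_fov))
    return counts
```

A fuller version would weight each observation by viewing distance and angular diversity rather than simply counting cameras.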
In light of our goals for fidelity and scale, we continue to improve our existing methods for 3D reconstruction and to investigate new methods. For example, we might combine pre-acquired laser scans of an operating room with camera-based dynamic reconstructions so we can better allocate cameras for dynamic events. We are also investigating possibilities for capturing real-time data from medical monitoring equipment, which we plan to include as metadata in an IEBook.
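Monitoring data of this kind is naturally represented as timestamped readings that can be stored alongside each reconstruction time step. The record layout below is purely illustrative (the article does not specify an IEBook metadata schema, and the device and channel names are hypothetical):

```python
import json
from datetime import datetime, timezone

def monitor_sample(device, channel, value, unit):
    """One timestamped reading from a monitoring device, in a form
    that could be attached to a reconstruction time step as metadata."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "device": device,     # hypothetical device identifier
        "channel": channel,   # hypothetical channel name
        "value": value,
        "unit": unit,
    }

# A few illustrative readings serialized alongside the 3D data:
samples = [
    monitor_sample("ecg-1", "heart_rate", 72, "bpm"),
    monitor_sample("pulse-ox-1", "SpO2", 98, "%"),
]
print(json.dumps(samples, indent=2))
```

Keeping the readings timestamped in a common clock domain would let a viewer scrub the vital signs in sync with the reconstructed imagery.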
Whereas in the past we held off on audio acquisition to concentrate on the more difficult problems of visual reconstruction, we now plan to acquire audio as well. We are also rethinking our choice of a Tablet PC as the primary interface to the immersive authoring system. The tablet display is hard to read while wearing stereo glasses, and because both hands are typically busy holding the tablet and the stylus, removing the stereo glasses when necessary is not an attractive solution. In our current paradigm, authors choose a view with their heads (looking at the data of interest) while trying to use the tablet stylus to initiate snapshots, which is awkward. One possibility is to track the Tablet PC so users can choose snapshots in a "viewfinder" mode.
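In such a viewfinder mode, the tracker-reported pose of the tablet would define the virtual camera for the snapshot. A minimal sketch of that mapping, assuming the tracker reports a position and a forward direction (our actual tracking interface may differ):

```python
import numpy as np

def view_matrix(eye, forward, up=(0.0, 1.0, 0.0)):
    """Build a right-handed 4x4 view matrix from a tracked pose,
    so the tablet display acts as a 'viewfinder' into the scene.

    eye     : tracker-reported tablet position (3-vector).
    forward : direction the tablet's back face points (3-vector).
    """
    f = np.asarray(forward, float)
    f /= np.linalg.norm(f)
    s = np.cross(f, np.asarray(up, float))   # right vector
    s /= np.linalg.norm(s)
    u = np.cross(s, f)                       # orthogonalized up
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f  # rotate world into view
    m[:3, 3] = -m[:3, :3] @ np.asarray(eye, float)
    return m
```

Re-deriving this matrix each frame from the tracker makes the snapshot simply a render from the tablet's current pose, freeing the author's head for looking at the data rather than aiming it.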
Finally, we hope to increase our impact on the medical community by making complete IEBooks available on the Web. The primary difficulty here is determining which interaction techniques are appropriate and how to implement them. Rather than simply "dumbing down" the fully immersive interfaces, we want to use the best interfaces for each paradigm and authoring tools that appropriately target each. MM
At the University of North Carolina, we acknowledge Marc Pollefeys for VDPC collaboration; Jim Mahaney and John Thomas for technical support; and surgeons Ramon Ruiz and Anthony Meyer for general collaboration. At Brown University, we thank Melih Betim and Mark Oribello for their systems and video support. This research was primarily supported by US National Science Foundation Information Technology Research grant IIS0121657, and in part by US National Library of Medicine contract N01LM33514 and an NSF Research Infrastructure grant.