Synchronous Image Acquisition based on Network Synchronization

Georgios Litos, Xenophon Zabulis and Georgios Triantafyllidis
Informatics and Telematics Institute
Thessaloniki, Greece
{gl,xenophon,gatrian}@iti.gr
Abstract
In this paper, a software-based system for the real-time synchronization of images captured by a low-cost camera framework is presented. It is best suited for cases where special hardware cannot be utilized (e.g. remote or wireless applications) and when cost efficiency is critical. The proposed method utilizes messages to establish a consensus on the time of image acquisition, together with NTP synchronization of the computer clocks. It also provides an error signal in case the synchronization fails. The evaluation of the proposed algorithm using a precise LED array system (1 ms accuracy) demonstrates the effectiveness of the method.
1. Introduction
The problem of camera synchronization was one of the first significant issues raised with the introduction of the motion-picture camera at the beginning of the 20th century.
As early as 1923, for the needs of silent film production, F.H. Richardson [1] clearly stated the importance of synchronizing taking and camera speeds:
"Any departure from perfect synchronization of the taking, or camera speed, and the speed of reproduction, or projection, must, and inevitably will cause the moving object to appear differently upon the screen than it appeared to and was photographed by the camera, hence under such conditions the spectator cannot and will not see the moving object as the camera saw it."
From this synchronization problem in silent film production we have moved, nowadays, to a similar issue in stereoscopic and multi-view imaging, since the vast majority of modern computer vision techniques require image acquisition synchronized to the millisecond in real time.
A powerful multi-PC computing platform is needed because the simultaneous (synchronized) acquisition and processing of multiple video sequences requires data transfer rates far above the specifications of a standard workstation: neither the computer buses nor the camera connection buses (FireWire, USB, etc.) can serve the high data transfer needs of a multi-camera application. A way to overcome this limitation is to use a distributed PC system, where each unit handles one or more cameras.
In this paper, a software-based system for the real-time synchronization of images captured by a low-cost camera framework is presented. It is highly recommended for cases where special hardware cannot be used; thus, the system is applicable wherever a remote multi-camera platform is used for synchronized image acquisition. Examples of such applications include a multi-camera system that monitors airplanes at an airport, a wireless camera network, etc.
The system consists of two software parts: the camera servers and a client. A camera server runs on each computer that has one or more cameras attached to it. The client can run on any machine with a network connection, and capturing is controlled by the client. The proposed approach provides synchronization with an error-checking constraint. The evaluation of the proposed algorithm using a precise LED array system (1 ms accuracy) demonstrates the effectiveness of the method.
The remainder of the paper is organized as follows. Section 2 reviews several approaches to the problem of multi-camera synchronization. Section 3 introduces the proposed method for synchronized image acquisition in a cluster of cameras. Section 4 presents the experimental results and evaluates the proposed method. Finally, conclusions are drawn in Section 5.
2. Background Work
Several approaches to camera synchronization can be found in the literature. They can be grouped into the categories of (a) special-purpose hardware, (b) "post-processing" synchronization algorithms, (c) network synchronization, and (d) software-based methods.
The most common approach is to use special-purpose hardware. It usually consists of a microcomputer control unit dedicated to propagating external synchronization signals that trigger the cameras and thereby synchronize them.
Point Grey Research Inc. manufactured the Sync Unit [2], which synchronizes the image acquisition of multiple cameras on different IEEE-1394 buses, either within the same computer or across multiple computers; without a Sync Unit, there is no timing correlation between separate cameras on separate buses. The Objective Imaging OASIS-DC1 [3] is another controller board, providing synchronization for various digital cameras that support trigger output signals. In [4], the authors introduce a single-chip microcomputer control system for the synchronized operation of several rotating-mirror high-speed cameras. That system consists of four parts: the microcomputer control unit (including the synchronization part, the precise measurement part, and the time delay part), the shutter control unit, the motor driving unit, and the high voltage pulse generator unit. Several approaches to synchronization at the hardware level are proposed in [5], using either specialized cameras or an external dedicated electrical signal. All these hardware-based solutions properly and precisely synchronize the cameras, but they are also potentially costly, technically complex, and not very flexible (i.e., not applicable to many applications).
An alternative approach to camera synchronization is based on various "post-processing" algorithms, which are applied to unsynchronized sequences and try to find the temporal offset between them; knowing the time shifts between enough view pairs, the whole network can be synchronized. In [6], Sinha et al. proposed an automatic approach to synchronize a network of uncalibrated and unsynchronized video cameras by computing the epipolar geometry from dynamic silhouettes and finding the temporal offset between them. This is then used to compute the fundamental matrices and the temporal offsets between many view pairs in the network.
A new method to synchronize videos recording the same scene from different viewpoints is presented in [7]. This method relies on correlating the distribution of space-time interest points over time between the videos. Space-time interest points represent events in a video that have high variation in both space and time. The authors show that by detecting and selecting space-time interest points and correlating their distribution, videos from different viewpoints can be automatically synchronized. A similar approach is presented in [8], where the asynchronism of the videos is modeled within the photogrammetric analysis.
These post-processing methods are quite effective in automatically recovering the temporal frame offset between image sequences, thus enabling the "post-synchronization" of the cameras. However, these algorithms cannot be applied in real time and assume a constant temporal offset. Furthermore, such approaches are sensitive to occlusion, since they rely on the tracking of image features, which can be hidden by another imaged surface at arbitrary frames.
Interconnecting the computers to which the cameras are attached through a local area network (LAN) facilitates their software synchronization using network protocols such as NTP (Network Time Protocol) [9]. This is a simple procedure, but experiments show that it is not as accurate for camera synchronization as the methods using special-purpose hardware.
Recently, some software-based methods have been
proposed for camera synchronization, which utilize cost-
efficient and off-the-shelf cameras that do not allow
synchronization through external triggering.
In [10], the proposed system consists of the camera computers and a triggering computer. The triggering computer can launch all the cameras simultaneously, and the cameras then immediately start capturing an image. Since no mechanism is proposed to handle a failure of the synchronization, the method depends on the quality of the Ethernet connection, the operating system's latency in responding to the received triggering signal, and the camera drivers and hardware.
In [11], a more sophisticated approach to software-based camera synchronization is introduced. This method uses a server-client architecture, as in the previous method, with a simple error-checking technique. The discrepancy in synchronization is addressed by sending some test data over the network and calculating the network latency from the transmission time. This latency is then added to the reference time and sent to all the computers, which accept this value as their time, thereby synchronizing their clocks with the true reference time. This technique improves the camera synchronization, but it assumes a constant latency added to the reference time. This assumption does not always hold, since the network, software, and hardware lags do not always sum to a constant value over time, and thus the resulting camera synchronization is inaccurate.
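To make this latency-estimation step concrete, the following minimal sketch (hypothetical Python, not taken from [11]; the echo-server address and the number of trials are assumptions) approximates the one-way latency as half of the measured round-trip time of a small test message:

```python
import socket
import time

def estimate_latency(server_addr=("192.168.0.10", 5006), trials=10):
    """Approximate one-way network latency as half the mean round-trip
    time of small UDP test messages (assumes the server echoes them back)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts = []
    for _ in range(trials):
        t_send = time.perf_counter()
        sock.sendto(b"ping", server_addr)
        sock.recvfrom(64)                       # wait for the echoed packet
        rtts.append(time.perf_counter() - t_send)
    sock.close()
    return sum(rtts) / len(rtts) / 2.0          # one-way latency ~ RTT / 2
```

The weakness noted above is visible in this sketch: the value returned is a one-off estimate, while the actual latency fluctuates from message to message.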
A similar system is presented in [12], where the authors propose an easy-to-set-up, software-based system to record synchronized multi-video data. This system sends only the start pulse of the recording and then leaves it up to each computer to individually keep the pace. Specifically, they propose to synchronize the clocks of the computers using a time-synchronization daemon, and then to send a future start time instead of a direct triggering message. However, no results are reported for the efficiency of this synchronization.
In this work, a software-based client-server method is proposed for the efficient and real-time synchronization of a multi-camera network, without any need for special-purpose hardware or post-processing algorithms. It utilizes messages to establish a consensus on the time of image acquisition, together with NTP synchronization of the computer clocks, so that this time corresponds to the same instant on all computers. It also provides an error signal in case the synchronization fails.
3. Theory
In this section, an approach to synchronized image acquisition for a cluster of cameras is introduced. This approach concerns cameras that are mounted on different computers, which are, in turn, connected through a LAN. The method utilizes messages to perform the synchronization and also provides an error signal in case the synchronization fails. The method assumes that the computers are already synchronized via NTP [9].
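As a minimal illustration of this assumption (not part of the original system), the residual offset of a machine's clock against an NTP server can be queried in Python with the third-party ntplib package; the server name below is an example:

```python
import ntplib  # third-party package: pip install ntplib

def clock_offset_ms(server="pool.ntp.org"):
    """Return the estimated offset of the local clock from the NTP server, in ms."""
    response = ntplib.NTPClient().request(server, version=3)
    return response.offset * 1000.0  # ntplib reports the offset in seconds

if __name__ == "__main__":
    print(f"local clock offset: {clock_offset_ms():+.3f} ms")
```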
The proposed synchronization requires an estimate L of how much time a message broadcast by the client needs in order to reach all servers. Since the messages contain a very small amount of data, L is assigned an estimate of the LAN's latency. Synchronized image acquisition is then achieved as follows.
Upon triggering, at time t_0, the client broadcasts a message to all servers. This message contains a time value t_a = t_0 + L, which corresponds to the time instant at which the acquisition is planned to occur. The servers receive this message at different time instants. Upon reception of the message, each server records the current time into a value t_i. Immediately after, it enters a busy-waiting mode. The amount of time that server i waits is x_i = t_a - t_i = t_0 + L - t_i. Therefore, if the servers' clocks are synchronized, the values t_i accurately represent the differences in the reception times of the message, and the instants t_i + x_i all correspond to the same time instant, namely t_a (see Figure 1).
The proposed approach provides a synchronization error-checking constraint. If x_i < 0, then the time interval L was too short for the LAN's latency and must be increased. A software event is then generated, signaling the failure of the synchronization process. This event can facilitate the automatic adjustment of the value of L, allowing the system to modulate the temporal frequency of image acquisition with respect to the LAN's latency, traffic, and bandwidth.
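A minimal sketch of this protocol in Python follows; the original implementation is not reproduced here, and the port number, the value of L, and the capture_frame() call are placeholders. It assumes the machines' clocks are already NTP-synchronized:

```python
import socket
import struct
import time

PORT = 5005   # illustrative port, not from the paper
L = 0.050     # estimate of the LAN latency in seconds (placeholder value)

def client_trigger(broadcast_addr="255.255.255.255"):
    """Broadcast the planned acquisition time t_a = t_0 + L to all servers."""
    t_a = time.time() + L
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(struct.pack("!d", t_a), (broadcast_addr, PORT))
    sock.close()

def server_wait_and_capture():
    """Receive t_a, busy-wait until it, then capture; signal failure if t_a passed."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    data, _ = sock.recvfrom(8)
    t_i = time.time()         # reception time on this server's NTP-synced clock
    t_a = struct.unpack("!d", data)[0]
    x_i = t_a - t_i           # remaining waiting time; x_i < 0 means L was too short
    if x_i < 0:
        raise RuntimeError("synchronization failed: increase L")
    while time.time() < t_a:  # busy waiting, for accuracy
        pass
    capture_frame()           # placeholder for the camera driver's acquisition call

def capture_frame():
    pass  # stand-in for the actual camera API
```

In such a sketch, the residual error is bounded by the NTP clock offset plus the operating system's scheduling jitter, which is what the evaluation in the next section measures.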
4. Experimental results
For the evaluation of the proposed method, a PCB board based on a Microchip DSP embedded microprocessor with LEDs is used as an accurate chronometer providing 1 ms accuracy (see Figure 2). It has 4 rows of 10 LEDs that correspond to the following temporal granularities: 1000 ms, 100 ms, 10 ms, and 1 ms.
This method of evaluation using the PCB board provides better accuracy than the alternative of filming a chronometer on a CRT screen: a 100 Hz CRT screen provides only approximately 10 ms accuracy.
Two identical workstations, considered high-end at the time of the experiments, were used, running at 3.2 GHz with FireWire-800 PCI boards. The Windows 2000 operating system was used, due to incompatibilities of later versions with the FireWire implementation. Although this is not a real-time OS, since our application runs with high priority, we achieve high performance and real-time response times. The time-waiting algorithm is reliable because it is optimized for accuracy: several time-counting methods were tested, and the best one for the specific CPUs used in the experiment was chosen. The acquisition devices were Unibrain Fire-i cameras connected via standard IEEE 1394 cables.
Figure 1 Illustration of the events occurring within the proposed approach to synchronous image acquisition.
Figure 2 A PCB board based on a Microchip DSP embedded microprocessor with LEDs
Figure 3 shows a multi-camera system with four cameras targeting the PCB board. The experiment took place in a dark room. The shutter time of the capture devices was reduced to 437 μs to adjust the light level and to react better to sudden scene changes (of the LEDs).
Figure 3 A multi-camera system with four cameras
The multi-camera capture application framework consists of two parts (see Figure 4):
- The capture device station, which powers the cameras, accepts requests from the control station, and can be used as an image storage system (see Figure 5a).
- The control station, which controls the capture device stations, and requests and gathers the captured images from all stations (see Figure 5b).
Figure 4 Multi-camera capture architecture
Figure 5 (a) Camera Server; (b) Control Client
Temporal synchronization of the computers was maintained by means of an SNTP server; the camera stations are synchronized at the start of each experiment. The network setup is based on a local 100 Mbit/s network (LAN) isolated from external networks. All cameras are identical, so we can assume that the devices have the same response time, since this value cannot be measured. Capturing occurred at 30 fps at the maximum resolution (640x480 pixels). The time interval L was 1500 ms in our experiments (to avoid synchronization errors when taking a large number of images), although it can be lowered to 500 ms without problems. The proposed algorithm rejects the frames not taken at the correct intervals and reports the error to the user.
The accuracy we achieve by synchronizing two workstations at a time using SNTP is between 0.000001 ms and 0.000015 ms (0.000005 ms on average). This figure is better than expected because, in our case, one workstation acts as the SNTP server; its time therefore does not change, and only the other workstation has to be synchronized.
The goal of our experiments is to acquire images that can be accepted as captured within the same time frame at all camera stations. To test the efficiency, we measure the 'capture delay' between the camera systems. All cameras take pictures of the same target, our reference timer, which is the LED PCB board; the accuracy of its hardware counter is 1 ms. An application was developed to read the pictures taken in the experiments and to extract the LED value using an optical recognition method. We count the difference in time (ms) between the acquisition systems for the same capture command. The results give the total true delay, including the delays due to network latency, NTP synchronization, the camera hardware, and the operating system's reaction. Table 1 and Figures 6 and 7 summarize the experimental results.
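As a purely illustrative sketch of how such an optical recognition step could be implemented (the LED coordinates, the argmax decoding, and the row weights below are assumptions, not the implementation actually used), the chronometer reading can be decoded by sampling the image at the known LED positions and taking, per row, the index of the brightest LED as one decimal digit:

```python
import numpy as np

# Hypothetical pixel coordinates (y, x) of the 4x10 LED grid, e.g. annotated
# once on a reference image; ROW_WEIGHTS maps each row to its granularity.
LED_CENTERS = np.zeros((4, 10, 2), dtype=int)   # to be filled from calibration
ROW_WEIGHTS = [1000, 100, 10, 1]                # ms per unit in each row

def read_clock_ms(gray_image):
    """Decode the LED chronometer: the brightest LED's index in each row is
    taken as that row's digit, weighted by the row's temporal granularity."""
    total = 0
    for row in range(4):
        intensities = [gray_image[y, x] for y, x in LED_CENTERS[row]]
        digit = int(np.argmax(intensities))
        total += ROW_WEIGHTS[row] * digit
    return total
```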
Experiment                        Mean Value or Std. Deviation   Minimum Latency   Maximum Latency
Total 2 cameras (1 camera/PC)     11.95 ms                       0 ms              35 ms
Total 4 cameras (2 cameras/PC)    9.988 ms                       0 ms              92 ms

Table 1
Figure 6 Difference in time (ms) between the acquisition systems for the two-camera setup
Figure 7 Standard deviation of the time difference (ms) between the acquisition systems for the four-camera setup
5. Conclusions
Although it provides the most precise synchronization, external triggering requires special cameras that can receive and recognize such input, as well as additional hardware and wiring. This increases the cameras' cost and makes hardware synchronization weak in terms of cost efficiency, since the cost of such components can be prohibitive for several types of applications.
Moreover, several types of applications require independence from physical links (cables) due to the large distances between the cameras (e.g. ground traffic control). This independence can also facilitate the synchronization of cameras mounted on wireless stations, suitable for mobile applications.
Therefore, software-based camera synchronization
constitutes a useful alternative for a wide range of
applications.
In this work, a software-based client-server method is introduced for the real-time synchronization of a multi-camera network that does not require any special-purpose hardware or post-processing algorithms. The advantages of this approach are: (a) it is software-based, (b) it can achieve real-time camera synchronization, (c) it is cost-efficient, (d) it is suitable for mobile applications, and (e) it is easy to set up, since there is no need for special-purpose hardware.
The proposed approach depends on the precision with which the underlying NTP synchronization is achieved. Future work will temporally interpolate the video streams to predict the perfectly synchronous images that would have been acquired, and will also predict the network latency to improve the temporal synchronization.
Acknowledgement
The authors are grateful for support through the 3DTV Network of Excellence, 6th Framework IST Programme.
References
[1] F.H. Richardson, "Importance of Synchronizing Taking and Camera Speeds", Transactions of S.M.P.E., No. 17, pp. 117-123, October 1-4, 1923 (published 1924).
[2] Sync Unit, Point Grey Research Inc., http://www.ptgrey.com/products/sync/index.html
[3] OASIS-DC1 Digital Camera Trigger Interface, Objective Imaging, http://www.objectiveimaging.com/OASIS_DC1.htm
[4] Ningwen Liu, Yunfeng Wu, Xianxiang Tan, Guoji Lai, "Control system for several rotating mirror camera synchronization operation", 22nd International Congress on High-Speed Photography and Photonics, Dennis L. Paisley, Alan M. Frank, Eds., Proc. SPIE Vol. 2869, pp. 695-699, May 1997.
[5] Bertrand Holveck, Hervé Mathieu, "Infrastructure of the GrImage experimental platform: the video acquisition part", Technical Report RT-0301, INRIA, Nov 2004.
[6] S.N. Sinha, M. Pollefeys, "Synchronization and Calibration of Camera Networks from Silhouettes", International Conference on Pattern Recognition (ICPR), Vol. I, pp. 116-119, 2004.
[7] J. Yan, M. Pollefeys, "Video Synchronization via Space-Time Interest Point Distribution", Advanced Concepts for Intelligent Vision Systems (ACIVS), 2004.
[8] K. Raguse, C. Heipke, "Photogrammetric analysis of asynchronously acquired image sequences", in: A. Grün, H. Kahmen (Eds.), Optical 3-D Measurement Techniques VII, Vol. II, pp. 71-80, 2005.
[9] D.L. Mills, "Internet time synchronization: the Network Time Protocol", IEEE Transactions on Communications, vol. 39, no. 10, pp. 1482-1493, Oct 1991.
[10] Tomas Svoboda, Hanspeter Hug, Luc Van Gool, "ViRoom - low cost synchronized multicamera system and its self-calibration", in Pattern Recognition, 24th DAGM Symposium, pp. 515-522, Springer, September 2002.
[11] Piyush Kumar Rai, Kamal Tiwari, Prithwijit Guha, Amitabha Mukerjee, "A Cost-effective Multiple Camera Vision System using FireWire Cameras and Software Synchronization", 10th International Conference on High Performance Computing (HiPC 2003), Hyderabad, India, Dec. 17-20, 2003.
[12] Lukas Ahrenberg, Ivo Ihrke, Marcus Magnor, "A Mobile System for Multi-Video Recording", IEE 1st European Conference on Visual Media Production (CVMP), London, UK, March 2004.
The network time protocol (NTP), which is designed to distribute time information in a large, diverse system, is described. It uses a symmetric architecture in which a distributed subnet of time servers operating in a self-organizing, hierarchical configuration synchronizes local clocks within the subnet and to national time standards via wire, radio, or calibrated atomic clock. The servers can also redistribute time information within a network via local routing algorithms and time daemons. The NTP synchronization system, which has been in regular operation in the Internet for the last several years, is described, along with performance data which show that timekeeping accuracy throughout most portions of the Internet can be ordinarily maintained to within a few milliseconds, even in cases of failure or disruption of clocks, time servers, or networks