Article

Abstract

A new extrinsic calibration method is proposed for a display-camera system in which the display is not in the direct view of the camera. An annular mirror is used so that the camera can capture the virtual image of the display. The pose of the mirror is obtained once its outer circle has been detected; the pose parameters are determined uniquely by placing two orthogonal lines in the background and are refined by re-projecting the inner circle onto the image plane. The display pixels are encoded using Gray code, and the pose of the display's virtual image in the mirror is obtained by solving the PnP problem. Finally, the real extrinsic parameters between the camera and the display are recovered according to the mirror imaging principle. Compared with existing methods, our approach is simple and fully automatic, requiring no manual intervention. The mirror only needs to be fixed at a single position, and the degenerate configurations of a common planar mirror are avoided. Both simulation and real experiments validate our approach.
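The final mirror-imaging step can be illustrated with a small synthetic sketch. It assumes a planar display model, a known mirror plane given by a unit normal n and offset d in camera coordinates, and OpenCV's solvePnP applied to Gray-code correspondences; the numbers, the axis-flip convention, and all variable names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
import cv2

def householder(n):
    """Reflection matrix across a plane with unit normal n."""
    n = n / np.linalg.norm(n)
    return np.eye(3) - 2.0 * np.outer(n, n)

# Illustrative ground truth: display pose, intrinsics, and mirror plane n.x + d = 0.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R_true, _ = cv2.Rodrigues(np.array([0.1, -0.4, 0.05]))
t_true = np.array([0.3, -0.1, -0.6])                  # display sits behind the camera
n_m = np.array([0.0, 0.2, 1.0]); n_m /= np.linalg.norm(n_m)
d_m = -0.8
S = householder(n_m)

# Display pixels as planar model points (z = 0 in the display frame, metres).
u, v = np.meshgrid(np.arange(5), np.arange(4))
pts_display = np.stack([u.ravel() * 0.05, v.ravel() * 0.05, np.zeros(u.size)], axis=1)

# What the camera observes: the reflection of the display in the mirror.
pts_cam = (R_true @ pts_display.T).T + t_true
pts_virtual = (S @ pts_cam.T).T - 2.0 * d_m * n_m
img_pts, _ = cv2.projectPoints(pts_virtual, np.zeros(3), np.zeros(3), K, None)
img_pts = img_pts.reshape(-1, 2)

# Mirror-imaging step: flip one display axis so the mirrored observation is a
# proper rotation, solve PnP for the virtual pose, then reflect back.
M = np.diag([-1.0, 1.0, 1.0])
ok, rvec, tvec = cv2.solvePnP(pts_display @ M, img_pts, K, None)
R_v, _ = cv2.Rodrigues(rvec)
R_rec = S @ R_v @ M                                   # recovered display rotation
t_rec = S @ tvec.ravel() - 2.0 * d_m * n_m            # recovered display translation
print(np.allclose(R_rec, R_true, atol=1e-4), np.allclose(t_rec, t_true, atol=1e-4))
```

Decoding the Gray-code patterns into 2D-3D correspondences and estimating the mirror plane from the detected circles are the parts of the pipeline not reproduced in this sketch.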

Article
Multi-camera setups require the transformations between the involved cameras to be known. The intrinsic camera calibration can be done relatively easily in advance, in lab conditions, before positioning the cameras. The extrinsic calibration, on the other hand, can only be obtained once the cameras are mounted in their final position. This is usually done using checkerboards or similar markers, which require some degree of overlap between the fields of view. In this work, we propose a method where Gray code is projected onto a plane (floor or wall) using a standard projector. The extrinsic calibration is found using a bundle adjustment method that optimizes the pose of the plane and the cameras with respect to the projector. No overlap is required between the fields of view of the cameras; it is enough that part of the Gray code pattern is visible in each camera. A sensitivity analysis is performed on simulated images. The method has a low sensitivity to sensor noise and to errors in the intrinsic calibration of the projector. Using real-world experiments, we show that the method is accurate for both overlapping and non-overlapping cameras, with median rotation errors of 0.36° and below. The accuracy of the proposed method is comparable to the state of the art, but this method provides a more practical procedure: no checkerboards need to be held in place, and only one projector pose is needed.
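The structured-light part of such a pipeline is straightforward to sketch: each projector column is encoded as a stack of binary stripe images using the binary-reflected Gray code, and the bits decoded at every camera pixel identify the projector column it sees. The snippet below is a generic, self-contained illustration under that assumption; it is not the authors' code and omits the thresholding of real captures.

```python
import numpy as np

def gray_code_patterns(width, n_bits):
    """Stripe patterns (one per bit) encoding each projector column
    with the binary-reflected Gray code."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                       # binary-reflected Gray code
    bits = (gray[None, :] >> np.arange(n_bits)[:, None]) & 1
    return bits.astype(np.uint8)                    # shape: (n_bits, width)

def decode_columns(bit_images):
    """Recover column indices from per-pixel decoded bits
    (bit_images: (n_bits, H, W) array of 0/1 values)."""
    n_bits = bit_images.shape[0]
    weights = (1 << np.arange(n_bits)).reshape(-1, 1, 1)
    gray = np.sum(bit_images.astype(np.int64) * weights, axis=0)
    value = gray.copy()                             # Gray -> binary conversion
    shift = gray >> 1
    while np.any(shift):
        value ^= shift
        shift >>= 1
    return value

# Round-trip check on a toy projector width
n_bits, width = 10, 1024
patterns = gray_code_patterns(width, n_bits)
fake_capture = np.repeat(patterns[:, None, :], 4, axis=1)   # 4-row "camera"
assert np.array_equal(decode_columns(fake_capture)[0], np.arange(width))
```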
Article
Full-text available
An autostereoscopic display system can provide users the great enjoyment of stereo visualization without the uncomfortable and inconvenient drawbacks of wearing stereo glasses or head-mounted displays. To render stereo video with respect to the user's viewpoints and to accurately project stereo video onto the user's eyes, the left and right eye positions of the user, who is allowed to move around freely, have to be obtained while the user is watching the autostereoscopic display. We present real-time tracking techniques that can efficiently provide the user's eye positions in images. These techniques comprise: 1. face detection using multiple eigenspaces for various lighting conditions; 2. fast block matching for tracking four motion parameters (X and Y translation, scaling, and rotation) of the user's face; and 3. eye locating within the detected face region. In our implementation on a PC with a Pentium III 700 MHz CPU, the eye tracking process reaches a frame rate of 30 Hz.
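To make the tracking idea concrete, the following is a toy, translation-only block-matching step (sum of absolute differences over a small search window); the tracker described above additionally estimates scaling and rotation, and all names and numbers here are illustrative.

```python
import numpy as np

def block_match(prev_frame, cur_frame, box, search=8):
    """Translation-only block matching: slide the face box from the previous
    frame over a small search window in the current frame and keep the offset
    with the lowest sum of absolute differences (SAD)."""
    x, y, w, h = box
    template = prev_frame[y:y + h, x:x + w].astype(np.int32)
    best, best_dxdy = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + h > cur_frame.shape[0] or xx + w > cur_frame.shape[1]:
                continue
            cand = cur_frame[yy:yy + h, xx:xx + w].astype(np.int32)
            sad = np.abs(cand - template).sum()
            if best is None or sad < best:
                best, best_dxdy = sad, (dx, dy)
    return best_dxdy

# Toy usage on synthetic frames: the "face" moves by (dx=3, dy=2).
prev = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
cur = np.roll(prev, shift=(2, 3), axis=(0, 1))
print(block_match(prev, cur, box=(60, 40, 32, 32)))   # expected (3, 2)
```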
Conference Paper
Full-text available
This paper tackles the problem of reconstructing the shape of a smooth mirror surface from a single image. In particular, we consider the case where the camera is observing the reflection of a static reference target in the unknown mirror. We first study the reconstruction problem given dense correspondences between 3D points on the reference target and image locations. In such conditions, our differential geometry analysis provides a theoretical proof that the shape of the mirror surface can be uniquely recovered if the pose of the reference target is known. We then relax our assumptions by considering the case where only sparse correspondences are available. In this scenario, we formulate reconstruction as an optimization problem, which can be solved using a nonlinear least-squares method. We demonstrate the effectiveness of our method on both synthetic and real images.
Conference Paper
Full-text available
In this paper we present a method for efficient calibration of a screen-camera setup, in which the camera is not directly facing the screen. A spherical mirror is used to make the screen visible to the camera. Using Gray code illumination patterns, we can uniquely identify the reflection of each screen pixel on the imaged spherical mirror. This allows us to compute a large set of 2D-3D correspondences, using only two sphere locations. Compared to previous work, this means we require fewer manual interventions, combined with a more robust screen pixel detection scheme. This results in a consistent improvement in accuracy, which we illustrate with experiments on both synthetic and real data.
Conference Paper
Full-text available
Calibrating a network of cameras with non-overlapping views is an important and challenging problem in computer vision. In this paper, we present a novel technique for camera calibration using a planar mirror. We overcome the need for all cameras to see a common calibration object directly by allowing them to see it through a mirror. We use the fact that the mirrored views generate a family of mirrored camera poses that uniquely describe the real camera pose. Our method consists of the following two steps: (1) using standard calibration methods to find the internal and external parameters of a set of mirrored camera poses, (2) estimating the external parameters of the real cameras from their mirrored poses by formulating constraints between them. We demonstrate our method on real and synthetic data for camera clusters with small overlap between the views and non-overlapping views.
Conference Paper
Full-text available
In deflectometry, the shape of mirror objects is recovered from distorted images of a calibrated scene. While remarkably high accuracies are achievable, state-of-the-art methods suffer from two distinct weaknesses. First, mainly for constructive reasons, they can only capture a few square centimeters of surface area at once. Second, reconstructions are ambiguous, i.e., infinitely many surfaces lead to the same visual impression. We resolve both of these problems by introducing the first multiview specular stereo approach, which jointly evaluates a series of overlapping deflectometric images. Two publicly available benchmarks accompany this paper, enabling us to numerically demonstrate the viability and practicability of our approach.
Conference Paper
Full-text available
In this paper, we describe a novel camera calibration method to estimate the extrinsic parameters and the focal length of a camera by using only one single image of two coplanar circles with arbitrary radii. We consider a method of simple operation for estimating the extrinsic parameters and the focal length of a camera to be very important because, in many vision-based applications, the position, the pose, and the zooming factor of a camera are adjusted frequently. An easy-to-use and convenient camera calibration method should have two characteristics: 1) the calibration object can be produced or prepared easily, and 2) the operation of a calibration job is simple and easy. Our new method satisfies this requirement, while most existing camera calibration methods do not, because they need a specially designed calibration object and require multi-view images. Because drawing circles with arbitrary radii is so easy that one can even draw them on the ground with only a rope and a stick, the calibration object used by our method can be prepared very easily. On the other hand, our method needs only one image, and it allows the centers of the circles and/or parts of the circles to be occluded. Another useful feature of our method is that it can estimate the focal length as well as the extrinsic parameters of a camera simultaneously. Because zoom lenses are used so widely and the zooming factor is adjusted as frequently as the camera setting, the estimation of the focal length is almost a must whenever the camera setting is changed. Extensive experiments on simulated and real images demonstrate the robustness and effectiveness of our method.
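Methods of this kind start from the elliptical images of the circles. As a hedged illustration of just that front end (not the conic-based pose and focal-length recovery itself), the circles' projections can be detected and fitted with OpenCV's contour and ellipse-fitting routines; the synthetic drawing below stands in for a real photograph.

```python
import numpy as np
import cv2

# Stand-in image: two coplanar circles appear as ellipses under perspective.
img = np.zeros((480, 640), dtype=np.uint8)
cv2.ellipse(img, (220, 240), (90, 55), 20, 0, 360, 255, 2)
cv2.ellipse(img, (430, 250), (60, 35), 20, 0, 360, 255, 2)

# Extract each circle's image as a contour and fit an ellipse to it.
contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for c in contours:
    if len(c) < 5:                      # fitEllipse needs at least 5 points
        continue
    (cx, cy), (ax1, ax2), angle = cv2.fitEllipse(c)
    print(f"center=({cx:.1f}, {cy:.1f}), axes=({ax1:.1f}, {ax2:.1f}), angle={angle:.1f}")
```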
Conference Paper
Full-text available
The image of a planar mirror reflection (IPMR) can be interpreted as a virtual view of the scene, acquired by a camera with a pose symmetric to the pose of the real camera with respect to the mirror plane. The epipolar geometry of virtual views associated with different IPMRs is well understood, and it is possible to recover the camera motion and perform 3D scene reconstruction by applying standard structure-from-motion methods that use image correspondences as input. In this article we address the problem of estimating the pose of the real camera, as well as the positions of the mirror planes, by assuming that the rigid motion between N virtual views induced by planar mirror reflections is known. The solution of this problem enables the registration of objects lying outside the camera's field of view, which can have important applications in domains like non-overlapping camera network calibration and robot vision. We show that the positions of the mirror planes can be uniquely determined by solving a system of linear equations. This enables the pose of the real camera to be estimated in a straightforward closed-form manner using a minimum of N = 3 virtual views. Both synthetic tests and real experiments show the superiority of our approach with respect to current state-of-the-art methods.
Conference Paper
Full-text available
This paper addresses the problem of estimating the poses of a reference plane in specular shape recovery. Unlike existing methods which require an extra mirror or an extra reference plane and camera, our proposed method recovers the poses of the reference plane directly from its reflections on the specular surface. By establishing reflection correspondences on the reference plane in three distinct poses, our method estimates the poses of the reference plane in two steps. First, by applying a collinearity constraint to the reflection correspondences, a simple closed-form solution is derived for recovering the poses of the reference plane relative to its initial pose. Second, by applying a ray incidence constraint to the incident rays formed by the reflection correspondences and the visual rays cast from the image, a closed-form solution is derived for recovering the poses of the reference plane relative to the camera. The shape of the specular surface then follows. Experimental results on both synthetic and real data are presented, which demonstrate the feasibility and accuracy of our proposed method.
Conference Paper
Full-text available
We propose an efficient technique for normal map acquisition, using a cheap and easy-to-build setup. Our setup consists solely of off-the-shelf components, such as an LCD screen, a digital camera and a linear polarizer filter. The LCD screen is employed as a linearly polarized light source emitting gradient patterns, whereas the digital camera is used to capture the incident illumination reflected off the scanned object's surface. By exploiting the fact that light emitted by an LCD screen is linearly polarized, we also use the filter to suppress any specular highlights. Based on the observed Lambertian reflection of only four different light patterns, we are able to obtain a detailed normal map of the scanned surface. Overall, our technique produces convincing results, even on weakly specular materials.
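The core computation behind such normal recovery can be sketched generically: under a Lambertian model, the per-pixel intensities observed under known light directions are linear in the albedo-scaled normal, which a least-squares solve recovers. The code below is a minimal illustration with made-up light directions and a flat toy patch, not the paper's four-pattern formulation.

```python
import numpy as np

def lambertian_normals(intensities, light_dirs):
    """Per-pixel Lambertian photometric stereo: solve L @ (rho * n) = I in the
    least-squares sense, where L stacks the light directions and I the
    observed intensities.  intensities: (k, H, W), light_dirs: (k, 3).
    Returns unit normals (H, W, 3) and albedo (H, W)."""
    k, H, W = intensities.shape
    I = intensities.reshape(k, -1)                          # (k, H*W)
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)      # (3, H*W) = rho * n
    albedo = np.linalg.norm(g, axis=0)
    normals = np.where(albedo > 1e-8, g / np.maximum(albedo, 1e-8), 0.0)
    return normals.T.reshape(H, W, 3), albedo.reshape(H, W)

# Toy check: render a flat patch with a known normal under 4 lights.
n_true = np.array([0.2, -0.1, 0.97]); n_true /= np.linalg.norm(n_true)
L = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 0.87],
              [-0.5, 0.0, 0.87], [0.0, 0.5, 0.87]], dtype=float)
L /= np.linalg.norm(L, axis=1, keepdims=True)
imgs = np.clip(L @ n_true, 0, None).reshape(4, 1, 1) * np.ones((4, 8, 8))
n_est, rho = lambertian_normals(imgs, L)
print(np.allclose(n_est[0, 0], n_true, atol=1e-6))
```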
Conference Paper
Full-text available
Developments in the consumer market have indicated that the average user of a personal computer is likely to also own a webcam. With the emergence of this new user group will come a new set of applications, which will require a user-friendly way to calibrate the position of the camera with respect to the location of the screen. This paper presents a fully automatic method to calibrate a screen-camera setup, using a single moving spherical mirror. Unlike other methods, our algorithm needs no user intervention other than moving a spherical mirror around. In addition, if the user provides the algorithm with the exact radius of the sphere in millimeters, the scale of the computed solution is uniquely defined.
Article
Full-text available
We propose a flexible technique to easily calibrate a camera. It only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one more step from laboratory environments to real world use.
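This planar-pattern approach is what OpenCV's calibrateCamera routine follows (a closed-form initialization followed by nonlinear refinement with radial distortion modeled). A minimal sketch is shown below; the image folder, board dimensions, and square size are placeholders.

```python
import glob
import numpy as np
import cv2

# Placeholder board geometry: 9x6 inner corners, 25 mm squares.
cols, rows, square = 9, 6, 0.025
objp = np.zeros((rows * cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.png"):        # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    found, corners = cv2.findChessboardCorners(gray, (cols, rows))
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    image_size = gray.shape[::-1]

if obj_points:
    # Closed-form initialization plus nonlinear refinement, as in Zhang's method.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    print("RMS reprojection error:", rms)
else:
    print("No usable calibration images found.")
```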
Article
In order to solve the duality problem in pose estimation of a single circle in machine vision, an approach based on a Euclidean angular constraint is presented to identify the unique pose. The accuracy of pose estimation is analyzed, which provides constructive suggestions for achieving accurate pose estimation of circles based on experimental results. The pose of a single circle is recovered from its projection in an image with a calibrated camera; the result is ambiguous, but only two possible solutions exist. The unique pose is then identified based on the Euclidean angular invariant. Finally, the accuracy of pose estimation is analyzed based on the theory of error propagation. Experimental results indicate that the absolute error of the pose angle of the circle plane and the relative error of position determination are within 0.2° and 0.5%, respectively, and the error of the reconstructed distance between the two lines is within 0.8%. These results show that the approach can correctly identify the poses and positions of circle planes and offers high measuring accuracy and reliable results with a simple computing process.
Conference Paper
We consider the problem of estimating the extrinsic parameters (pose) of a camera with respect to a reference 3D object without a direct view. Since the camera does not view the object directly, previous approaches have utilized reflections in a planar mirror to solve this problem. However, a planar mirror based approach requires a minimum of three reflections and has degenerate configurations where estimation fails. In this paper, we show that the pose can be obtained using a single reflection in a spherical mirror of known radius. This makes our approach simpler and easier in practice. In addition, unlike planar mirrors, the spherical mirror based approach does not have any degenerate configurations, leading to a robust algorithm. While a planar mirror reflection results in a virtual perspective camera, a spherical mirror reflection results in a non-perspective axial camera. The axial nature of the rays allows us to compute the axis (direction of the sphere center) and a few pose parameters in a linear fashion. We then derive an analytical solution to obtain the distance to the sphere center and the remaining pose parameters, and show that it corresponds to solving a 16th degree equation. We present comparisons with a recent method that uses planar mirrors and show that our approach recovers more accurate pose in the presence of noise. Extensive simulations and results on real data validate our algorithm.
Conference Paper
This paper is aimed at calibrating the relative posture and position, i.e., the extrinsic parameters, of a stationary camera against a 3D reference object which is not directly visible from the camera. We capture the reference object via a mirror under three different unknown poses, and then calibrate the extrinsic parameters from the 2D appearances of the reflections of the reference object in the mirror. The key contribution of this paper is a new algorithm which returns a unique solution of three P3P problems from three mirrored images. While each P3P problem has up to four solutions, and therefore a set of three P3P problems has up to 64 solutions, our method can select a solution based on an orthogonality constraint which should be satisfied by all families of reflections of a single reference object. In addition, we propose a new scheme to compute the extrinsic parameters by solving a large system of linear equations. These two points enable us to provide a unique and robust solution. We demonstrate the advantages of the proposed method against a state-of-the-art method by qualitative and quantitative evaluations using synthesized and real data.
Article
The nonlinear least-squares minimization problem is considered. Algorithms for the numerical solution of this problem have been proposed in the past, notably by Levenberg (Quart. Appl. Math., 2, 164-168 (1944)) and Marquardt (SIAM J. Appl. Math., 11, 431-441 (1963)). The present work discusses a robust and efficient implementation of a version of the Levenberg-Marquardt algorithm and shows that it has strong convergence properties. In addition to robustness, the main features of this implementation are the proper use of implicitly scaled variables and the choice of the Levenberg-Marquardt parameter by means of a scheme due to Hebden (AERE Report TP515). Numerical results illustrating the behavior of this implementation are included.
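In practice, this algorithm is available through SciPy, whose least_squares with method='lm' wraps the MINPACK Levenberg-Marquardt routines descended from this implementation. The sketch below fits a made-up exponential-decay model; the model and data are illustrative only.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic data from an exponential-decay model y = a * exp(-b * x) + c.
rng = np.random.default_rng(0)
x = np.linspace(0, 4, 50)
true_params = np.array([2.5, 1.3, 0.5])
y = true_params[0] * np.exp(-true_params[1] * x) + true_params[2]
y += 0.02 * rng.standard_normal(x.size)

def residuals(p):
    a, b, c = p
    return a * np.exp(-b * x) + c - y

# method='lm' selects the Levenberg-Marquardt algorithm (MINPACK's lmdif).
fit = least_squares(residuals, x0=[1.0, 1.0, 0.0], method="lm")
print(fit.x)    # should be close to [2.5, 1.3, 0.5]
```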
Article
Objects with mirroring optical characteristics are left out of the scope of most 3D scanning methods. We present here a new automatic acquisition approach, shape-from-distortion, that focuses on that category of objects, requires only a still camera and a color monitor, and produces range scans (plus a normal and a reflectance map) of the target. Our technique consists of two steps: first, an improved environment matte is captured for the mirroring object, using the interference of patterns with different frequencies to obtain sub-pixel accuracy. Then, the matte is converted into a normal and a depth map by exploiting the self-coherence of a surface when integrating the normal map along different paths. The results show very high accuracy, capturing even the smallest surface details. The acquired depth maps can be further processed using standard techniques to produce a complete 3D mesh of the object.
Article
Due to recent increase of computer power and decrease of camera cost, it became very common to see a camera on top of a computer monitor. This paper presents the vision-based technology which allows one in such a setup to significantly enhance the perceptual power of the computer. The described techniques for tracking a face using a convex-shape nose feature as well as for face-tracking with two off-the-shelf cameras allow one to track faces robustly and precisely in both 2D and 3D with low resolution cameras. Supplemented by the mechanism for detecting multiple eye blinks, this technology provides a complete solution for building intelligent hands-free input devices. The theory behind the technology is presented. The results from running several perceptual user interfaces built with this technology are shown.
Conference Paper
We present a novel technique for calibrating display-camera systems from reflections in the user's eyes. Display-camera systems enable a range of vision applications that need controlled illumination, including 3D object reconstruction, facial modeling and human computer interaction. One important issue, though, is the geometric calibration of the display, which requires additional hardware and tedious user interaction. The proposed approach eliminates this requirement by analyzing patterns that are reflected in the cornea, a mirroring device that naturally exists in any display-camera system. We introduce an optimization strategy that is able to refine eye and spherical mirror calibration results. When applied to the eye, it even outperforms unoptimized spherical mirror calibration. Furthermore, we obtain a robust estimation of eye poses which can be used for eye tracking applications. Despite the difficult working conditions, the calibration results are good and should be sufficient for many applications.
Article
Least-squares fitting minimizes the sum of squared fitting errors in a predefined measure. In geometric fitting, the error distances are defined as the orthogonal, or shortest, distances from the given points to the geometric feature being fitted. For the geometric fitting of circles, spheres, ellipses, hyperbolas, and parabolas, simple and robust nonparametric algorithms are proposed. These are based on the coordinate description of the point on the geometric feature corresponding to each given point, where the line connecting the two points is the shortest path from the given point to the geometric feature being fitted.
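For the circle case, the geometric (orthogonal-distance) objective is easy to state and to minimize with a general nonlinear least-squares solver. The sketch below does exactly that via SciPy rather than the paper's own nonparametric algorithm; the data and initialization are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_circle_geometric(pts):
    """Geometric (orthogonal-distance) circle fit: minimize the distances
    from the points to the circle over center (cx, cy) and radius r."""
    def residuals(p):
        cx, cy, r = p
        return np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r
    # Initialize from the centroid and the mean distance to it.
    c0 = pts.mean(axis=0)
    r0 = np.hypot(pts[:, 0] - c0[0], pts[:, 1] - c0[1]).mean()
    fit = least_squares(residuals, x0=[c0[0], c0[1], r0])
    return fit.x

# Noisy points on an arc of a circle centered at (3, -2) with radius 5.
rng = np.random.default_rng(1)
theta = rng.uniform(0.2, 2.5, 200)
pts = np.column_stack([3 + 5 * np.cos(theta), -2 + 5 * np.sin(theta)])
pts += 0.01 * rng.standard_normal(pts.shape)
print(fit_circle_geometric(pts))    # close to [3, -2, 5]
```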
Conference Paper
We present a method for the reconstruction of a specular surface, using a single camera viewpoint and the reflection of a planar target placed at two different positions. Contrary to most specular surface reconstruction algorithms, our method makes no assumption on the regularity or continuity of the specular surface, and outputs a set of 3D points along with corresponding surface normals, all independent from one another. A point on the specular surface can be reconstructed if its corresponding pixel in the image has been matched to its source in both of the target planes. We present original solutions to the problem of dense point matching and planar target pose estimation, along with reconstruction results in real-world scenarios.
Conference Paper
This paper presents a new controlled lighting apparatus which uses a raster display device as a light source. The setup has the advantage over other alternatives in that it is relatively inexpensive and uses commonly available components. The apparatus is studied through application to shape recovery using photometric stereo. Experiments on synthetic and real images demonstrate how the depth map of an object can be recovered using only a camera and a computer monitor.
Mirror-based extrinsic camera calibration
J. A. Hesch, A. I. Mourikis, and S. I. Roumeliotis, "Mirror-based extrinsic camera calibration," Algorithmic Foundation of Robotics VIII, 2009, pp. 285-299.
  • Shengpeng Fu
Shengpeng Fu ( 鹏 ) is a Ph.D. student at the University of Chinese Academy of Sciences, China. He received his B.S. degree in logistics engineering from Shandong University in 2007. His research focuses on computer-vision-based 3D measurement.
  • Jibin Zhao
Jibin Zhao (赵 宾) received his M.S. degree in Mechanical Engineering from Shandong University, China, in 2000 and his Ph.D. degree from the Shenyang Institute of Automation (SIA), Chinese Academy of Sciences, in 2004. He is currently a Professor at SIA, China. His main research interests include computer-aided design and rapid prototyping.