Conference Paper

3D environment reconstruction using modified color ICP algorithm by fusion of a camera and a 3D laser range finder

Robot Research Department, Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea
DOI: 10.1109/IROS.2009.5354500 · Conference: 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009)
Source: IEEE Xplore

ABSTRACT In this paper, we propose a system that reconstructs the environment with both color and 3D information. We perform extrinsic calibration of a camera and an LRF (laser range finder) to fuse the 3D and color information of objects, and we formulate an equation to evaluate the calibration result. We acquire 3D data by rotating the 2D LRF together with the camera, and use the ICP (iterative closest point) algorithm to merge data acquired at different locations. For the initial estimate of the ICP algorithm we use SIFT (scale-invariant feature transform) matching, which provides an accurate and stable initial estimate that is more robust to motion change than odometry. We also modify the ICP algorithm to use color information, which reduces its computation time.
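The abstract gives no pseudocode, so the sketch below is only one plausible reading of the color modification: nearest-neighbor search is done in a 6-D space of position plus weighted RGB, so color disagreement prunes bad correspondences. The function name and the `color_weight` parameter are hypothetical, not from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def color_icp(src_xyz, src_rgb, dst_xyz, dst_rgb,
              T_init=np.eye(4), color_weight=0.1, n_iters=30):
    """Rigidly align an (N,3) source cloud onto an (M,3) target cloud,
    matching nearest neighbors in 6-D [x, y, z, w*r, w*g, w*b] space so
    that points are only paired with similarly colored points."""
    T = T_init.copy()
    # Target colors never move, so the 6-D tree is built once.
    tree = cKDTree(np.hstack([dst_xyz, color_weight * dst_rgb]))
    for _ in range(n_iters):
        moved = src_xyz @ T[:3, :3].T + T[:3, 3]
        _, idx = tree.query(np.hstack([moved, color_weight * src_rgb]))
        p, q = moved, dst_xyz[idx]          # matched point pairs
        mu_p, mu_q = p.mean(0), q.mean(0)
        # Closed-form rigid update (Kabsch / SVD) on centered pairs.
        U, _, Vt = np.linalg.svd((q - mu_q).T @ (p - mu_p))
        R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
        t = mu_q - R @ mu_p
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    return T
```

In the paper's pipeline, `T_init` would come from the SIFT-based pose estimate rather than from odometry.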

Related publications:
  • ABSTRACT: 3D image registration is an emerging research field in computer vision. In this paper, two effective global optimization methods are considered for the 3D registration of point clouds. Experiments were conducted with each algorithm, and performance was evaluated under rigid, similarity, and affine transformations. The algorithms were compared on their average ability to find the global solution that minimizes the error, measured as the distance between the model cloud and the data cloud, with the parameters of the transformation matrix taken as the design variables. The methods were further compared on computational effort, computation time, and convergence. The results reveal that TLBO (teaching-learning-based optimization) was outstanding for image-processing applications involving 3D registration.
    3D Research. 09/2013; 4(3).
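As a rough illustration of the registration-by-global-optimization idea in this abstract: the sketch below searches the six rigid-transform design variables (three Euler angles, three translations) for the pose minimizing the average model-to-data distance. SciPy ships no TLBO implementation, so differential evolution stands in as the global optimizer; all names here are ours.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def register_globally(model, data):
    """Global search over rigid-transform parameters aligning the (N,3)
    data cloud to the (M,3) model cloud."""
    tree = cKDTree(model)  # model cloud is fixed; query it repeatedly

    def cost(x):
        R = Rotation.from_euler('xyz', x[:3]).as_matrix()
        moved = data @ R.T + x[3:]
        d, _ = tree.query(moved)
        return d.mean()  # average nearest-point distance to the model

    span = np.abs(model).max()
    bounds = [(-np.pi, np.pi)] * 3 + [(-span, span)] * 3
    res = differential_evolution(cost, bounds, seed=0, maxiter=200)
    return res.x, res.fun  # best parameters and residual error
```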
  • ABSTRACT: We propose to overcome a significant limitation of the KinectFusion algorithm, namely its sole reliance upon geometric information to estimate camera pose. Our approach uses both geometric and color information in a direct manner, using all the data to associate points between two RGBD point clouds. Data association is performed by aligning the two color images associated with the two point clouds, estimating a projective warp with the Lucas-Kanade algorithm. This warp is then used to create a correspondence map between the two point clouds, which serves as the data association for a point-to-plane error minimization. This approach to correspondence allows camera tracking to be maintained through areas of low geometric features. We show that our proposed LKDA data association technique enables accurate scene reconstruction in environments where low geometric texture causes the existing approach to fail, while demonstrating that the new technique does not adversely affect results in environments where the existing technique succeeds.
    2013 IEEE International Conference on Robotics and Automation (ICRA); 01/2013
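The LKDA idea can be sketched with off-the-shelf OpenCV pieces. Assuming organized RGBD frames whose grayscale images are `gray_a` and `gray_b`, the snippet below estimates a projective warp between the two images (`cv2.findTransformECC` is an ECC-based relative of Lucas-Kanade, used here as a stand-in for the paper's estimator) and converts it into a dense pixel-correspondence map that could feed a point-to-plane solver. Function and variable names are hypothetical.

```python
import cv2
import numpy as np

def lk_data_association(gray_a, gray_b):
    """Estimate a homography aligning image A to image B, then map every
    pixel of A to its corresponding location in B."""
    warp = np.eye(3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    _, warp = cv2.findTransformECC(gray_a, gray_b, warp,
                                   cv2.MOTION_HOMOGRAPHY, criteria)
    h, w = gray_a.shape
    u, v = np.meshgrid(np.arange(w, dtype=np.float32),
                       np.arange(h, dtype=np.float32))
    pts = np.stack([u.ravel(), v.ravel()], axis=1)[None]  # 1 x N x 2
    mapped = cv2.perspectiveTransform(pts, warp.astype(np.float64))[0]
    # mapped[i] is where pixel i of frame A lands in frame B; these index
    # pairs replace geometric closest-point matching in the ICP step.
    return mapped.reshape(h, w, 2)
```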
  • ABSTRACT: We present a system for producing 3D animations using physical objects (i.e., puppets) as input. Puppeteers can load 3D models of familiar rigid objects, including toys, into our system and use them as puppets for an animation. During a performance, the puppeteer physically manipulates these puppets in front of a Kinect depth sensor. Our system uses a combination of image-feature matching and 3D shape matching to identify and track the physical puppets. It then renders the corresponding 3D models into a virtual set. Our system operates in real time so that the puppeteer can immediately see the resulting animation and make adjustments on the fly. It also provides 6D virtual camera and lighting controls, which the puppeteer can adjust before, during, or after a performance. Finally, our system supports layered animations to help puppeteers produce animations in which several characters move at the same time. We demonstrate the accessibility of our system with a variety of animations created by puppeteers with no prior animation experience.
    Proceedings of the 25th annual ACM symposium on User interface software and technology; 10/2012
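The abstract does not name its feature matcher; as a toy illustration of the image-feature-matching half of puppet identification, the sketch below scores ORB descriptor matches (our stand-in choice) between the live frame and each puppet's reference image. All names are hypothetical.

```python
import cv2

def identify_puppet(frame_gray, templates):
    """Pick the known puppet whose reference image best matches the live
    grayscale Kinect frame, scored by the number of descriptor matches."""
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, frame_desc = orb.detectAndCompute(frame_gray, None)
    best_name, best_score = None, 0
    for name, tmpl in templates.items():  # {puppet name: grayscale image}
        _, tmpl_desc = orb.detectAndCompute(tmpl, None)
        if frame_desc is None or tmpl_desc is None:
            continue
        score = len(matcher.match(frame_desc, tmpl_desc))
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```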