Conference Paper

3D environment reconstruction using modified color ICP algorithm by fusion of a camera and a 3D laser range finder

Robot Research Department, Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea
DOI: 10.1109/IROS.2009.5354500 · Conference: 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009)
Source: IEEE Xplore

ABSTRACT In this paper, we propose a system that reconstructs an environment with both color and 3D information. We perform extrinsic calibration of a camera and an LRF (laser range finder) to fuse the 3D and color information of objects, and we formulate an equation to evaluate the calibration result. We acquire 3D data by rotating the 2D LRF together with the camera, and we use the ICP (iterative closest point) algorithm to combine data acquired at different places. We use SIFT (scale-invariant feature transform) matching for the initial estimate of the ICP algorithm; compared to odometry, this offers an accurate and stable initial estimate that is robust to motion changes. We also modify the ICP algorithm to use color information, which reduces the computation time of the ICP algorithm.
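
A minimal sketch of the color-constrained correspondence idea described in the abstract, assuming point clouds given as NumPy arrays with per-point RGB values; the function name, the number of spatial candidates k, and the RGB threshold color_thresh are illustrative assumptions, not the paper's actual formulation:

    import numpy as np
    from scipy.spatial import cKDTree

    def color_icp_step(src_xyz, src_rgb, dst_xyz, dst_rgb, k=5, color_thresh=30.0):
        # One iteration of a color-constrained ICP step (illustrative sketch):
        # each source point is matched to the nearest of its k spatial
        # neighbours whose RGB distance falls below color_thresh, pruning
        # the correspondence search relative to plain nearest-neighbour ICP.
        tree = cKDTree(dst_xyz)
        _, idx = tree.query(src_xyz, k=k)  # k spatial candidates per source point
        P, Q = [], []
        for i, cands in enumerate(idx):
            for j in cands:
                if np.linalg.norm(src_rgb[i] - dst_rgb[j]) < color_thresh:
                    P.append(src_xyz[i]); Q.append(dst_xyz[j])
                    break  # keep the spatially closest color-consistent match
        P, Q = np.asarray(P), np.asarray(Q)
        # Closed-form rigid alignment (Kabsch/SVD) of the matched pairs.
        mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
        U, _, Vt = np.linalg.svd((P - mu_p).T @ (Q - mu_q))
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_q - R @ mu_p
        return R, t  # aligns a source point p as R @ p + t

Restricting correspondences to color-consistent neighbours is one plausible way to shrink the nearest-neighbour search, which is the kind of computation-time reduction the abstract refers to.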

  • ABSTRACT: In this paper, we propose a method for generating a face set model from 3D point clouds obtained from a 3D camera, for high-speed, lightweight storage and display of environment information in tele-operation tasks for robots. In the proposed method, the following procedures run in parallel: estimation of three dominant orthogonal axes and point cloud grouping by normal vectors based on the Manhattan-world assumption, fast registration using the dominant-axis-grouped point cloud, plane position estimation for each dominant axis group, and face set generation by shape estimation for each plane. Experimental results show that the accuracy of plane position estimation is equivalent to the measurement accuracy, registration takes about 0.1 s per frame, and storage size is reduced to about 10-20% of the original 3D point cloud size. We also show some generated environment models as experimental results.
    2013 IEEE/SICE International Symposium on System Integration (SII); 12/2013
  • Advanced Robotics 04/2014; 28(12):841-857.
  • ABSTRACT: We present a system for producing 3D animations using physical objects (i.e., puppets) as input. Puppeteers can load 3D models of familiar rigid objects, including toys, into our system and use them as puppets for an animation. During a performance, the puppeteer physically manipulates these puppets in front of a Kinect depth sensor. Our system uses a combination of image-feature matching and 3D shape matching to identify and track the physical puppets. It then renders the corresponding 3D models into a virtual set. Our system operates in real time so that the puppeteer can immediately see the resulting animation and make adjustments on the fly. It also provides 6D virtual camera and lighting controls, which the puppeteer can adjust before, during, or after a performance. Finally, our system supports layered animations to help puppeteers produce animations in which several characters move at the same time. We demonstrate the accessibility of our system with a variety of animations created by puppeteers with no prior animation experience.
    Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology; 10/2012
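
The face set generation abstract above groups points by their normal vectors against three dominant orthogonal axes (the Manhattan-world assumption). A toy sketch of that grouping step, with the function name and input conventions assumed for illustration only:

    import numpy as np

    def group_by_dominant_axes(normals, axes):
        # normals: (N, 3) unit surface normals; axes: (3, 3) rows holding
        # the three dominant orthogonal directions. Each point is labelled
        # by the axis its normal is most parallel to (largest |cosine|),
        # i.e. the Manhattan-world grouping the abstract relies on.
        return np.argmax(np.abs(normals @ axes.T), axis=1)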