Automatic reconstruction of 3D human motion pose from uncalibrated monocular video sequences based on markerless human motion tracking

School of Information Science and Engineering, Central South University, Changsha 410083, People's Republic of China
Pattern Recognition 01/2009; DOI: 10.1016/j.patcog.2008.12.024
Source: DBLP

ABSTRACT We present a method to reconstruct human motion pose from uncalibrated monocular video sequences based on morphing appearance model matching. Human pose estimation is performed by integrating human joint tracking with pose reconstruction in depth-first order. First, the Euler angles of each joint are estimated by inverse kinematics subject to human skeleton constraints. Then, the scene coordinates of the pixels in each body segment are determined by forward kinematics, and these pixels are projected onto the image plane under the assumption of perspective projection to obtain the region of the morphing appearance model in the image. Finally, the human motion pose is reconstructed by histogram matching. Experimental results show that this method obtains favorable reconstructions on a number of complex human motion sequences.
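The projection step described above maps 3D body-segment points onto the image plane under a pinhole (perspective) camera model. A minimal sketch follows; the focal length and principal point are illustrative placeholders, since the paper's actual camera intrinsics are not given here.

```python
import numpy as np

def project_points(points_3d, focal_length=1000.0, cx=320.0, cy=240.0):
    """Project 3D points (in camera coordinates) onto the image plane
    under a perspective (pinhole) camera model.

    focal_length, cx, cy are assumed example intrinsics, not values
    from the paper.
    """
    points_3d = np.asarray(points_3d, dtype=float)
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = focal_length * x / z + cx
    v = focal_length * y / z + cy
    return np.stack([u, v], axis=1)

# A point on the optical axis projects to the principal point:
print(project_points([[0.0, 0.0, 2.0]]))  # -> [[320. 240.]]
```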

    ABSTRACT: The enhanced hexagonal-based search using point-oriented inner search (EHS-POIS) substantially speeds up hexagon-based search (HS). Taking a different perspective, an inherent correlation between distortion and spatial direction is found through statistical analysis. Based on the observed distortion distribution, a novel enhanced hexagonal-based search with direction-oriented inner search (EHS-DIOS) is proposed to avoid real distortion calculation and thus reduce computation. Experimental results show that the proposed algorithm is faster than EHS-POIS, achieving a twofold improvement in inner search speed, and that compared with previous work it makes a better trade-off between speed and decoded image quality.
    IEEE Transactions on Circuits and Systems for Video Technology, 02/2010
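For context, the HS family of block-matching algorithms that the abstract above builds on iterates a large hexagonal search pattern and finishes with an inner search around the best point. The sketch below shows the basic hexagon-based search only; the point- and direction-oriented inner searches of EHS-POIS and EHS-DIOS are refinements of the final step and are not reproduced here. The SAD cost and pattern offsets are standard, but this is an illustrative implementation, not the paper's.

```python
import numpy as np

# Large hexagon pattern (six points around the centre) plus the centre
# itself, and the small cross pattern used for the final inner search.
LARGE_HEX = [(0, 0), (-2, 0), (2, 0), (-1, -2), (1, -2), (-1, 2), (1, 2)]
SMALL_PATTERN = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

def sad(block, ref, x, y):
    """Sum of absolute differences between `block` and the same-sized
    region of `ref` at top-left corner (x, y); inf if out of bounds."""
    if x < 0 or y < 0:
        return np.inf
    h, w = block.shape
    cand = ref[y:y + h, x:x + w]
    if cand.shape != block.shape:
        return np.inf
    return np.abs(block.astype(int) - cand.astype(int)).sum()

def hexagon_search(block, ref, x0, y0, max_iter=16):
    """Basic hexagon-based block-matching search (illustrative only).
    Returns the motion vector relative to the start position (x0, y0)."""
    cx, cy = x0, y0
    for _ in range(max_iter):
        costs = [(sad(block, ref, cx + dx, cy + dy), cx + dx, cy + dy)
                 for dx, dy in LARGE_HEX]
        best = min(costs)
        if (best[1], best[2]) == (cx, cy):
            break  # best point is the centre: switch to the inner search
        cx, cy = best[1], best[2]
    costs = [(sad(block, ref, cx + dx, cy + dy), cx + dx, cy + dy)
             for dx, dy in SMALL_PATTERN]
    _, cx, cy = min(costs)
    return cx - x0, cy - y0
```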
    ABSTRACT: We propose a data-driven, multi-view body pose estimation algorithm for video. It operates in uncontrolled environments with loosely calibrated, low-resolution cameras and without restrictive assumptions on the family of possible poses or motions. Our algorithm first computes a rough pose estimate using a spatial and temporal silhouette-based search in a database of known poses. The estimated pose is improved in a novel pose-consistency step acting locally on single frames and globally over the entire sequence. Finally, the resulting pose estimate is refined in a spatial and temporal pose optimization with novel constraints to obtain an accurate pose. Our method performs well on low-resolution video footage from real broadcasts of soccer games.
    International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, 3DIMPVT 2011, Hangzhou, China, 16-19 May 2011; 01/2011
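The silhouette-based database search in the abstract above amounts to retrieving the known pose whose silhouette best matches the query. A minimal sketch, assuming a simple XOR-overlap dissimilarity between binary masks (the authors' actual spatial-temporal search and distance measure are not specified here):

```python
import numpy as np

def silhouette_distance(sil_a, sil_b):
    """Dissimilarity between two equally sized binary silhouettes:
    the fraction of pixels where the masks disagree (XOR overlap).
    This is an illustrative stand-in for the paper's measure."""
    a = np.asarray(sil_a, dtype=bool)
    b = np.asarray(sil_b, dtype=bool)
    return np.logical_xor(a, b).mean()

def retrieve_pose(query_sil, database):
    """Return the pose from `database` (a list of (silhouette, pose)
    pairs, standing in for the database of known poses) whose
    silhouette best matches the query."""
    return min(database,
               key=lambda entry: silhouette_distance(query_sil, entry[0]))[1]
```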
    ABSTRACT: This paper presents a novel solution to the problem of tracking the 3D position, orientation and full articulation of a human hand from single depth images. We adopt a model-based approach and treat the tracking task as an optimization problem. A new objective function based on depth information is presented to quantify the discrepancy between the appearance of hypothesized instances of a hand model and actual hand observations. A sequential Particle Swarm Optimization method is proposed to minimize the objective function frame by frame. A semi-automatic hand-location method is adopted to predict the hand region for sequential tracking. A GPU-based implementation of the proposed method addresses the computational intensity. Extensive experimental results demonstrate qualitatively and quantitatively that tracking of an articulated hand can be achieved in real time.
    Pattern Recognition Letters 09/2013; 34(12):1437–1445.
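The optimization core of the hand-tracking abstract above is Particle Swarm Optimization. A minimal generic PSO sketch follows; in the paper the objective would measure the depth-image discrepancy between a hypothesized hand-model pose and the observation, whereas here a toy sphere function stands in, and the hyperparameters are conventional defaults rather than the paper's settings.

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=30, iters=100,
                 bounds=(-5.0, 5.0), w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal Particle Swarm Optimization (illustrative sketch).

    Each particle tracks its personal best; the swarm shares a global
    best; velocities blend inertia with attraction to both.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy objective: sphere function, minimum at the origin.
best, val = pso_minimize(lambda x: float((x ** 2).sum()), dim=3)
```

In sequential (per-frame) tracking, the swarm for each new frame would be initialized around the previous frame's solution rather than uniformly over the bounds.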