Automatic reconstruction of 3D human motion pose from uncalibrated monocular video sequences based on markerless human motion tracking
ABSTRACT We present a method to reconstruct human motion pose from uncalibrated monocular video sequences based on morphing appearance model matching. Human pose estimation is performed by integrating human joint tracking with pose reconstruction in depth-first order. First, the Euler angles of each joint are estimated by inverse kinematics under human skeleton constraints. Then, the scene coordinates of the pixels on the body segments are determined by forward kinematics, and these pixels are projected onto the image plane under the assumption of perspective projection to obtain the region of the morphing appearance model in the image. Finally, the human motion pose is reconstructed by histogram matching. Experimental results show that this method obtains favorable reconstruction results on a number of complex human motion sequences.
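The projection and matching steps described above can be sketched as follows. This is a minimal illustration under a pinhole-camera assumption, not the authors' implementation; the function names and parameters (`f`, `cx`, `cy`, the bin count) are hypothetical:

```python
import numpy as np

def project_points(points_3d, f, cx, cy):
    """Project 3D points (N, 3), given in camera coordinates, onto the
    image plane under the perspective (pinhole) model:
    u = f * X / Z + cx,  v = f * Y / Z + cy."""
    X, Y, Z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = f * X / Z + cx
    v = f * Y / Z + cy
    return np.stack([u, v], axis=1)

def histogram_match_score(region_a, region_b, bins=32):
    """Compare two image regions (8-bit intensities) by normalized
    histogram intersection; a score near 1 means the appearance of the
    projected model region agrees with the observed image region."""
    ha, _ = np.histogram(region_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(region_b, bins=bins, range=(0, 256))
    ha = ha / max(ha.sum(), 1)
    hb = hb / max(hb.sum(), 1)
    return np.minimum(ha, hb).sum()
```

In the paper's pipeline, the projected body-segment pixels define the candidate region, and the pose hypothesis whose region best matches the appearance model under a score of this kind is retained.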
ABSTRACT: A new, exemplar-based, probabilistic paradigm for visual tracking is presented. Probabilistic mechanisms are attractive because they handle fusion of information, especially temporal fusion, in a principled manner. Exemplars are selected representatives of raw training data, used here to represent probabilistic mixture distributions of object configurations. Their use avoids tedious hand-construction of object models and problems with changes of topology. Using exemplars in place of a parameterized model poses several challenges, addressed here with what we call the "Metric Mixture" (M2) approach, which has a number of attractions. Principally, it provides alternatives to standard learning algorithms by allowing the use of metrics that are not embedded in a vector space. Secondly, it uses a noise model that is learned from training data. Lastly, it eliminates any need for an assumption of probabilistic pixelwise independence. Experiments demonstrate the effectiveness of the M2 model in two domains: tracking walking people using "chamfer" distances on binary edge images, and tracking mouth movements by means of a shuffle distance. International Journal of Computer Vision 01/2002; 48:9-19.
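The chamfer distance mentioned for tracking walking people can be illustrated with a brute-force sketch. In practice it is computed efficiently via a distance transform of the binary edge image; the point-set formulation and names below are hypothetical:

```python
import numpy as np

def chamfer_distance(template_pts, edge_pts):
    """Mean distance from each template edge point (M, 2) to its nearest
    observed edge point (N, 2). Lower values mean the exemplar's edge
    template aligns better with the edges detected in the image."""
    # Pairwise differences, shape (M, N, 2), then Euclidean distances
    diff = template_pts[:, None, :] - edge_pts[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    # For each template point, keep the distance to its nearest edge point
    return d.min(axis=1).mean()
```

A real tracker would replace the O(M·N) loop with a single distance transform of the edge map, then look up template points in the resulting distance image.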
ABSTRACT: Visual analysis of human motion is currently one of the most active research topics in computer vision. This strong interest is driven by a wide spectrum of promising applications in many areas such as virtual reality, smart surveillance, perceptual interface, etc. Human motion analysis concerns the detection, tracking and recognition of people, and more generally, the understanding of human behaviors, from image sequences involving humans. This paper provides a comprehensive survey of research on computer-vision-based human motion analysis. The emphasis is on three major issues involved in a general human motion analysis system, namely human detection, tracking and activity understanding. Various methods for each issue are discussed in order to examine the state of the art. Finally, some research challenges and future directions are discussed. Pattern Recognition, 01/2003.
ABSTRACT: Recently, we developed a technique that allows semi-automatic estimation of anthropometry and pose from a single image. However, estimation was limited to a class of images in which an adequate number of human body segments were almost parallel to the image plane. In this paper, we present a generalization of that estimation algorithm that exploits pairwise geometric relationships of body segments to allow estimation from a broader class of images. In addition, we refine our search space by constructing a fully populated discrete hyper-ellipsoid of stick human body models in order to capture the variance of the statistical anthropometric information. As a result, a better initial estimate can be computed by our algorithm, and thus the number of iterations needed during minimization is reduced tenfold. We present our results over a variety of images to demonstrate the broad coverage of our algorithm. Machine Vision and Applications 08/2003; 14(4):229-236.