Fig. 9 (available under a Creative Commons Attribution 4.0 International license)
Example of estimated pose from the "S9-WalkDog" sequence after 11 sec. of input. Left: input frame; middle: result obtained with SuperTrack trained on longer sequences; right: result obtained with LARP.
Source publication
We propose LARP (Learned Articulated Rigid body Physics), a novel neural-network approach to modeling the dynamics of articulated human motion with contact. Our goal is to develop a faster and more convenient methodological alternative to traditional physics simulators for use in computer vision tasks such as human motion reconstruction from video. T...
Context in source publication
Context 1
... is not necessarily a function of the number of simulation steps, but rather occurs during complex motions that are underrepresented in the training data. For example, in fig. 7 (right) the displacement error changes little for most of the 25 sec. long sequence, but jumps up during the abrupt transition from walking to rushing forward (see fig. 9). We believe that training on a larger and more diverse dataset might mitigate this. We show examples of the human pose reconstructions on AIST-easy in fig. 1 (left plot, middle column) and on AIST-hard in fig. 6 (left). In fig. 6 (left) we compare the output of LARP (bottom row) to a 3D pose reconstruction that does not employ physics-based ...
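The excerpt does not define the displacement error it plots in fig. 7. A common choice in human pose estimation is the mean per-joint position error (the average Euclidean distance between predicted and ground-truth joint positions); the sketch below assumes that metric, and the function name and array shapes are illustrative, not taken from the paper:

```python
import numpy as np

def displacement_error(pred, gt):
    """Mean per-joint displacement error (assumed metric):
    average Euclidean distance between predicted and ground-truth
    joint positions, in the same units as the input (e.g. mm).

    pred, gt: arrays of shape (frames, joints, 3).
    """
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Toy example: every joint in every frame is offset by 1 unit along x,
# so the mean displacement error is exactly 1.0.
gt = np.zeros((4, 17, 3))
pred = gt.copy()
pred[..., 0] += 1.0
print(displacement_error(pred, gt))  # 1.0
```

Tracking this value per frame (rather than averaged over the sequence) is what exposes the behavior described above: the error stays flat during well-represented motions and spikes at abrupt transitions.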