Figure - available from: Applied Sciences
Overview of the deep-learning-based motion capture (two-dimensional pose estimation with DeepLabCut) workflow for (A) the knee JPR test trials and (B) the hip JPR test trials.


Source publication
Article
Full-text available
Proprioceptive deficits can lead to impaired motor performance. It is therefore important to measure proprioceptive function accurately in order to identify deficits as early as possible. Deep-learning techniques that track body landmarks in simple video recordings are promising for assessing proprioception (joint position sense) during joint positi...
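Joint position sense (JPR) is typically quantified as the absolute repositioning error: the difference between a target joint angle and the angles a participant reproduces without vision. As a minimal sketch of that metric (the function name `jpr_error` and the sample angles are illustrative assumptions, not the authors' code):

```python
def jpr_error(target_angle, reproduced_angles):
    """Mean absolute repositioning error in degrees.

    target_angle: the joint angle (deg) the participant tries to reproduce.
    reproduced_angles: angles (deg) measured on each reproduction trial,
    e.g. derived from DeepLabCut-tracked landmarks in video frames.
    """
    errors = [abs(a - target_angle) for a in reproduced_angles]
    return sum(errors) / len(errors)

# Hypothetical example: target 45 deg, three reproduction trials
print(jpr_error(45.0, [43.0, 47.5, 44.0]))  # ≈ 1.83 deg
```

Smaller values indicate better joint position sense; per-trial errors can also be reported to show variability across repetitions.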

Citations

... While traditional pose-estimation methods rely on geometric feature matching [5] or point cloud registration [6], they struggle under dynamic lighting and occlusion [7]. Recent advances combine pose estimation with deep learning, training convolutional neural networks on datasets to perform end-to-end pose estimation [8][9][10][11]. Although convenient, this approach still faces two key challenges in practice: (1) obtaining densely annotated real-world pose data [12] is costly, and human error easily occurs during annotation; (2) poor feature discriminability limits the generalization ability of the model. ...
Article
Full-text available
Accurate 3D pose estimation of spherical objects remains challenging in industrial inspection and robotics due to their geometric symmetries and limited feature discriminability. This study proposes a texture-optimized simulation framework that improves pose prediction accuracy by optimizing the surface texture features of the design samples. A hierarchical texture design strategy was developed, incorporating complexity gradients (low to high) and color contrast principles, and implemented via VTK-based 3D modeling with automated Euler angle annotations. The framework generated 2297 synthetic images across six texture variants, which were used to train a MobileNet model. The validation tests demonstrated that the high-complexity color textures achieved superior performance, reducing the mean absolute pose error by 64.8% compared to the low-complexity designs. While color improved the validation accuracy universally, the test set analyses revealed its dual role: complex textures leveraged chromatic contrast for robustness, whereas simple textures suffered color-induced noise (a 35.5% error increase). These findings establish texture complexity and color complementarity as critical design criteria for synthetic datasets, offering a scalable solution for vision-based pose estimation. Physical experiments confirmed the practical feasibility, yielding 2.7–3.3° mean errors. This work bridges the simulation-to-reality gap in symmetric object localization, with implications for robotic manipulation and industrial metrology, while highlighting the need for material-aware texture adaptations in future research.
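The mean absolute pose error reported above compares predicted and ground-truth Euler angles. One subtlety is angular wrap-around (359° and 1° are only 2° apart). A minimal sketch of such a metric, assuming the comparison is done per angle with wrapping to [-180°, 180°] (the function name and sample values are illustrative, not taken from the paper):

```python
def mean_abs_angle_error(predicted, ground_truth):
    """Mean absolute error in degrees between two lists of angles,
    with each per-angle difference wrapped into [-180, 180]."""
    errors = [
        abs((p - t + 180.0) % 360.0 - 180.0)
        for p, t in zip(predicted, ground_truth)
    ]
    return sum(errors) / len(errors)

# Hypothetical Euler-angle predictions vs. ground truth (degrees)
print(mean_abs_angle_error([359.0, 10.0, 90.0], [1.0, 12.0, 87.0]))  # ≈ 2.33
```

Without the wrapping step, the 359° vs. 1° pair would contribute a spurious 358° error, which would dominate any averaged metric.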
... Previous research has shown that manual annotation of key points in human movement is a valid and reliable method in typically developing populations [33][34][35][36][37][38]. Furthermore, a recent study showed the validity of 2D markerless tracking using DeepLabCut compared to gold standard laboratory-based optoelectronic three-dimensional motion capture to measure joint angles in children [39]. However, to the best of our knowledge, no studies have investigated the reliability and validity of 2D manual annotation for human movement in individuals with atypical gaits, which is a limitation of this study. ...
Article
Full-text available
The aim of this pilot study was to investigate the feasibility of markerless tracking to assess upper body movements of children with and without human immunodeficiency virus encephalopathy (HIV-E). Sagittal and frontal video recordings were used to track anatomical landmarks with the DeepLabCut pre-trained human model in five children with HIV-E and five typically developing (TD) children to calculate shoulder flexion/extension, shoulder abduction/adduction, elbow flexion/extension and trunk lateral sway. Differences in the joint angle trajectories of the two cohorts were investigated using a one-dimensional statistical parametric mapping method. Children with HIV-E showed a larger range of motion in shoulder abduction and trunk sway than TD children. In addition, they showed more shoulder extension and more lateral trunk sway compared to TD children. Markerless tracking was feasible for 2D movement analysis and sensitive enough to detect the expected differences in upper limb and trunk sway movements between children with and without HIV-E. It could therefore serve as a useful alternative in settings where expensive gait laboratory instruments are unavailable, for example, in clinical centers in low- to middle-income countries. Future research is needed to explore 3D markerless movement analysis systems and to investigate their reliability and validity against the gold standard 3D marker-based systems currently used in clinical practice.
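The joint angles in this kind of 2D analysis (e.g., elbow flexion/extension) are computed from triplets of tracked landmarks in each video frame. A minimal sketch of that geometry, assuming pixel coordinates such as those exported by DeepLabCut (the function name `joint_angle_2d` and the sample landmark positions are illustrative assumptions, not the study's code):

```python
import math

def joint_angle_2d(a, b, c):
    """Angle in degrees at vertex b, formed by 2D points a-b-c.

    For elbow flexion/extension, a = shoulder, b = elbow, c = wrist,
    each an (x, y) pixel coordinate from a tracked video frame.
    """
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point round-off
    cos_angle = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos_angle))

# Hypothetical landmarks for one frame (image coordinates, y down)
shoulder, elbow, wrist = (100.0, 50.0), (100.0, 150.0), (180.0, 150.0)
print(joint_angle_2d(shoulder, elbow, wrist))  # 90.0
```

Applying this per frame across a video yields the joint angle trajectories that can then be compared between cohorts, for example with statistical parametric mapping as in the study above.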