Article
No full-text available
... DeepLabCut can be used to train a deep learning model and perform 2D posture analysis [22]. Anipose builds on DeepLabCut and can be used to estimate 3D postures [23]. While the validity and accuracy of markerless motion capture methods have been established for two-dimensional joint angle analysis in the sagittal plane [24] and for 3D position analysis [25,26], no previous study has validated markerless motion capture methods for 3D neck and trunk kinematics. ...
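
To make the 2D-to-3D handoff concrete, here is a minimal sketch of how such a pipeline is typically driven. This is not the cited authors' exact setup: the project path is a hypothetical placeholder, and Anipose is assumed to be configured through its usual config.toml (pointing at a trained DeepLabCut model) and invoked through its command-line interface.

```python
# Minimal sketch of a DeepLabCut + Anipose 2D-to-3D pipeline.
# Assumes an Anipose project directory whose config.toml references a
# trained DeepLabCut model; the path below is a hypothetical placeholder.
import subprocess

PROJECT = "/data/anipose-project"

for step in ("analyze",       # run the DeepLabCut network on each camera view (2D)
             "calibrate",     # estimate camera parameters from calibration videos
             "triangulate"):  # fuse per-camera 2D detections into 3D poses
    subprocess.run(["anipose", step], cwd=PROJECT, check=True)
```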
Article
Full-text available
Surgeons are at high risk of developing musculoskeletal symptoms (MSS), such as neck and back pain. Quantitative analysis of 3D neck and trunk movements during surgery can help in developing preventive devices such as exoskeletons. Inertial Measurement Units (IMUs) and markerless motion capture methods are allowed in the operating room (OR) and are a good alternative to bulky optoelectronic systems. We aim to validate the IMU and markerless methods against an optoelectronic system during a simulated surgery task. Intraclass correlation coefficients (ICC(2,1)), root mean square error (RMSE), range of motion (ROM) difference, and Bland–Altman plots were used to evaluate both methods. The IMU-based motion analysis showed good-to-excellent agreement (ICC 0.80–0.97) with the gold standard, with an RMSE of 2.3 to 3.9 degrees during the simulated surgery task. The markerless method showed an RMSE of 5.5 to 8.7 degrees (ICC 0.31–0.70). The IMU method is therefore recommended over markerless motion capture.
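
As a concrete reference for the agreement statistics named in this abstract, the following sketch computes RMSE, ICC(2,1) (two-way random effects, absolute agreement, single measurement, per Shrout and Fleiss), and the Bland–Altman bias with limits of agreement for two paired joint-angle series. The arrays are placeholders, not the study's data.

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between paired angle series (degrees)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sqrt(np.mean((a - b) ** 2))

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    `data` is an (n samples x k methods) array, e.g. column 0 = optoelectronic
    reference, column 1 = IMU or markerless estimate."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    rows, cols = data.mean(axis=1), data.mean(axis=0)
    msr = k * np.sum((rows - grand) ** 2) / (n - 1)    # between-sample MS
    msc = n * np.sum((cols - grand) ** 2) / (k - 1)    # between-method MS
    mse = np.sum((data - rows[:, None] - cols[None, :] + grand) ** 2) \
          / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Placeholder data: reference vs. estimate for one joint angle (degrees).
ref = np.array([10.0, 12.5, 15.2, 18.9, 21.3])
est = np.array([11.1, 12.0, 16.0, 18.1, 22.0])

print("RMSE:", rmse(ref, est))
print("ICC(2,1):", icc_2_1(np.column_stack([ref, est])))
diff = est - ref
print("Bland-Altman bias:", diff.mean(), "+/-", 1.96 * diff.std(ddof=1))
```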
Article
Full-text available
Noninvasive behavioral tracking of animals during experiments is critical to many scientific pursuits. Extracting the poses of animals without using markers is often essential to measuring behavioral effects in biomechanics, genetics, ethology, and neuroscience. However, extracting detailed poses without markers in dynamically changing backgrounds has been challenging. We recently introduced an open-source toolbox called DeepLabCut that builds on a state-of-the-art human pose-estimation algorithm to allow a user to train a deep neural network with limited training data to precisely track user-defined features that match human labeling accuracy. Here, we provide an updated toolbox, developed as a Python package, that includes new features such as graphical user interfaces (GUIs), performance improvements, and active-learning-based network refinement. We provide a step-by-step procedure for using DeepLabCut that guides the user in creating a tailored, reusable analysis pipeline with a graphics processing unit (GPU) in 1–12 h (depending on frame size). Additionally, we provide Docker environments and Jupyter Notebooks that can be run on cloud resources such as Google Colaboratory. This protocol describes how to use an open-source toolbox, DeepLabCut, to train a deep neural network to precisely track user-defined features with limited training data, enabling noninvasive behavioral tracking of movement.
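
The protocol's pipeline maps onto a handful of DeepLabCut calls, sketched in order below. The project name, experimenter, and video paths are hypothetical placeholders, and most of these functions accept further options documented in the package.

```python
import deeplabcut

# Hypothetical project: names and paths are placeholders.
config = deeplabcut.create_new_project(
    "reach-task", "experimenter",
    ["/data/videos/train_session.mp4"], copy_videos=True)

deeplabcut.extract_frames(config)             # select frames to label
deeplabcut.label_frames(config)               # opens the labeling GUI
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)              # GPU strongly recommended
deeplabcut.evaluate_network(config)           # train/test pixel errors
deeplabcut.analyze_videos(config, ["/data/videos/new_session.mp4"])

# Active-learning-based refinement described in the protocol:
deeplabcut.extract_outlier_frames(config, ["/data/videos/new_session.mp4"])
deeplabcut.refine_labels(config)              # correct outliers in the GUI
deeplabcut.merge_datasets(config)             # then retrain the network
```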
Article
Full-text available
Quantifying behavior is crucial for many applications in neuroscience. Videography provides easy methods for the observation and recording of animal behavior in diverse settings, yet extracting particular aspects of a behavior for further analysis can be highly time consuming. In motor control studies, humans or other animals are often marked with reflective markers to assist with computer-based tracking, but markers are intrusive, and the number and location of the markers must be determined a priori. Here we present an efficient method for markerless pose estimation based on transfer learning with deep neural networks that achieves excellent results with minimal training data. We demonstrate the versatility of this framework by tracking various body parts in multiple species across a broad collection of behaviors. Remarkably, even when only a small number of frames are labeled (~200), the algorithm achieves excellent tracking performance on test frames that is comparable to human accuracy.
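
To illustrate the transfer-learning idea in this abstract, here is a generic sketch: an ImageNet-pretrained backbone is reused as a fixed feature extractor, and only a small decoding head that predicts one score map per body part is new. This illustrates the principle under assumed layer choices and sizes; it is not the paper's exact architecture.

```python
# Generic sketch of transfer learning for markerless pose estimation:
# a pretrained backbone is kept as a feature extractor and only a small
# decoding head (one score map per body part) is trained anew.
# Layer choices and sizes are illustrative, not the paper's architecture.
import torch
import torch.nn as nn
from torchvision import models

class PoseNet(nn.Module):
    def __init__(self, n_bodyparts: int):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        # Everything up to (but excluding) global pooling and the classifier.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        # Deconvolution upsamples coarse features into per-part score maps.
        self.head = nn.ConvTranspose2d(2048, n_bodyparts,
                                       kernel_size=4, stride=2, padding=1)

    def forward(self, x):
        return self.head(self.features(x))

net = PoseNet(n_bodyparts=4)
scoremaps = net(torch.randn(1, 3, 256, 256))  # -> shape (1, 4, 16, 16)
print(scoremaps.shape)
```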