Academic Editor: Claudio Belvedere
Received: 20 February 2025; Revised: 14 March 2025; Accepted: 19 March 2025; Published: 21 March 2025
Citation: van den Bogaart, M.; Jacobs, N.; Hallemans, A.; Meyns, P. Validity of Deep Learning-Based Motion Capture Using DeepLabCut to Assess Proprioception in Children. Appl. Sci. 2025, 15, 3428. https://doi.org/10.3390/app15073428
Copyright: © 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
Validity of Deep Learning-Based Motion Capture Using DeepLabCut to Assess Proprioception in Children
Maud van den Bogaart 1,*, Nina Jacobs 1,2, Ann Hallemans 3 and Pieter Meyns 1
1 Rehabilitation Research Centre (REVAL), Faculty of Rehabilitation Sciences and Physiotherapy, Hasselt University, 3590 Diepenbeek, Belgium; nina.jacobs@uhasselt.be (N.J.); pieter.meyns@uhasselt.be (P.M.)
2 Research Group for Neurorehabilitation, Department of Rehabilitation Sciences, KU Leuven, 3001 Leuven, Belgium
3 Research Group MOVANT, Department of Rehabilitation Sciences and Physiotherapy (REVAKI), University of Antwerp, 2610 Antwerp, Belgium; ann.hallemans@uantwerpen.be
* Correspondence: maud.bogaart@outlook.com
Abstract: Proprioceptive deficits can lead to impaired motor performance. It is therefore important to measure proprioceptive function accurately in order to identify deficits as early as possible. Techniques based on deep learning that track body landmarks in simple video recordings are promising for assessing proprioception (joint position sense) during joint position reproduction (JPR) tests in clinical settings, outside the laboratory and without the need to attach markers. Fifteen typically developing children participated in 90 knee JPR trials and 21 typically developing children participated in 126 hip JPR trials. The concurrent validity of two-dimensional deep-learning-based motion capture (DeepLabCut) for measuring the Joint position Reproduction Error (JRE) was assessed with respect to laboratory-based optoelectronic three-dimensional motion capture (Vicon motion capture system, gold standard). There was no significant difference between the hip and knee JREs measured with DeepLabCut and Vicon. Two-dimensional deep-learning-based motion capture (DeepLabCut) is therefore valid with respect to the gold standard for assessing proprioception in typically developing children. Tools based on deep learning, such as DeepLabCut, make it possible to accurately measure joint angles, and thus to assess proprioception, without the need for a laboratory or to attach markers, and with a high level of automatization.
Keywords: proprioception; joint position sense; validity; deep-learning-based motion
capture; pose estimation; DeepLabCut
1. Introduction
Motor behavior is regulated by the sensorimotor control system. This system integrates sensory input (i.e., visual, vestibular and somatosensory inputs) to execute motor commands, resulting in muscular responses [1]. Proprioception is a subsystem of the somatosensory system. Proprioception is the perception of joint and body movement as well as the position of the body or body segments in space [2]. For example, proprioception allows someone's body segment to be localized without having to rely on visual input alone. Muscle spindles, Golgi tendon organs and other mechanoreceptors located in the skin and capsuloligamentous structures provide proprioceptive information to the central nervous system for central processing [3]. As postural control is also regulated by the sensorimotor control system, the development of postural control in children is related to the maturation of proprioception, more specifically lower limb proprioception. Also, in other populations, difficulties in processing, integrating and (consciously) perceiving proprioceptive input can lead to impaired motor performance [4–13]. Training programs have been shown to lead to improved proprioceptive function in various populations [14]. As such, it is important to accurately measure proprioceptive function to identify potential deficits (as soon as possible).
Different senses of proprioception can be measured, such as the ability to perceive joint position, movement (extent, trajectory, and velocity), the level of force, muscle tension and weight [15]. The most commonly used and clinically applicable method to assess proprioception in the context of motor behavior is the Joint Position Reproduction (JPR) test, which assesses the sense of joint position [15–17]. The JPR test examines the ability to actively or passively reproduce a previously presented target joint position [18]. The examiner-positioned joint angle is compared with the joint angle repositioned by the subject. Differences between the targeted and reproduced joint angles indicate proprioceptive accuracy (i.e., a measure of proprioceptive function) and are defined as angular errors (i.e., joint position reproduction errors (JRE)). The JRE can be expressed as an absolute error, which considers the absolute magnitude of the error (regardless of its direction), or as a systematic error, which considers both the magnitude and direction of the error.
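As a brief illustration only (using the sign convention adopted later in Section 2.5, where an error is the targeted angle minus the reproduced angle), the two error measures over n repetitions can be written as follows; the symbols are introduced here for clarity and do not appear in the original formulation.

```latex
% JRE of repetition i: targeted minus reproduced angle (sign convention of Section 2.5)
\mathrm{JRE}_i = \theta_{\mathrm{target},i} - \theta_{\mathrm{reproduced},i}

% Absolute error: magnitude only, regardless of over- or undershooting
\mathrm{JRE}^{\mathrm{abs}}_i = \lvert \mathrm{JRE}_i \rvert

% Systematic error over n repetitions: signed mean, retaining direction
\overline{\mathrm{JRE}} = \frac{1}{n} \sum_{i=1}^{n} \mathrm{JRE}_i
```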
The gold standard for accurately measuring reliable joint angles is laboratory-based optoelectronic three-dimensional motion capture. Unfortunately, the need to attach markers and the need for a high-tech laboratory inhibit the routine collection of valuable high-quality data, because it is very expensive and time-consuming. In addition, incorrect and inconsistent marker placement and soft tissue artifacts are common, and minimal or tight-fitting clothing is preferred to avoid these [19,20]. Recent advances in biomedical engineering have resulted in new techniques based on deep learning to track body landmarks in simple video recordings, which include a high degree of automatization and allow recordings in an unobtrusive manner in a natural environment. With these deep-learning-based techniques, manually placed, skin-mounted markers can be replaced with automatically detected landmarks. In humans, the development and application of systems that can estimate two- and three-dimensional human poses by tracking body landmarks without markers under a variety of conditions are increasing (e.g., Theia3D, DeepLabCut, Captury, OpenPose, Microsoft Kinect, Simi Shape). DeepLabCut is a free open-source toolbox that builds on a state-of-the-art pose-estimation algorithm to allow a user to train a deep neural network to track user-defined body landmarks [21,22]. DeepLabCut distinguishes itself because it is free and because users can train their own model or extend and retrain a pre-trained model in which joints are annotated. Techniques based on deep learning, such as DeepLabCut, make it possible to accurately measure sagittal knee and hip joint angles during walking and jumping without the need for a laboratory or to apply markers, and with a high level of automatization. However, techniques based on deep learning (including DeepLabCut)
have not yet been validated to measure joint angles and proprioceptive function during
JPR tests with respect to a marker-based motion capture system. Due to this gap in the
literature, this study aims to validate two-dimensional deep-learning-based motion capture
(DeepLabCut) with respect to laboratory-based optoelectronic three-dimensional motion
capture (Vicon motion capture system, gold standard) to assess lower limb proprioceptive
function (joint position sense) by measuring the JRE during knee and hip JPR tests in
typically developing children.
2. Materials and Methods
2.1. Study Design
A cross-sectional study design was undertaken to validate two-dimensional deep-learning-based motion capture (DeepLabCut) to measure the JRE during knee and hip JPR tests with respect to laboratory-based optoelectronic three-dimensional motion capture (Vicon motion capture system). This study is part of a larger project in which children executed the JPR tests three times per joint, both on the left and right side, to assess proprioceptive function in typically developing children [23] and children with neurological impairments. COSMIN reporting guidelines were followed [24].
2.2. Participants
Potential participants were included when they were (1) aged between five years,
zero months and twelve years, eleven months old, (2) born > 37 weeks of gestation (full-
term) and (3) cognitively capable of understanding and participating in the assessment
procedures. They were excluded if, according to a general questionnaire completed by the parents, they had: (1) intellectual delays (IQ < 70), (2) developmental disorders (e.g., developmental coordination disorder, autism spectrum disorder or attention deficit hyperactivity disorder), (3) uncorrected visual or vestibular impairments and/or (4) neurological, orthopedic or other medical conditions that might impede the proprioceptive test procedure. The children were recruited from the researcher's lab environment, an elementary
school (Vrije Basisschool Lutselus, Diepenbeek, Belgium), through acquaintances and social
media. Parents/caregivers gave written informed consent prior to the experiment for their
child’s participation and analysis of the data. The study protocol (piloting and actual data
collection) was in agreement with the Declaration of Helsinki and had been approved
by the Committee for Medical Ethics (CME) of Hasselt University (UHasselt), CME of
Antwerp University Hospital-University of Antwerp (UZA-UA) and Ethics Committee
Research of University Hospitals Leuven-KU Leuven (UZ-KU Leuven) (B3002021000145,
11 March 2022).
2.3. Experimental Procedure
Proprioception (joint position sense) was assessed as the child's ability to re-identify a passively placed target position of the ipsilateral hip (90° of flexion) and knee (60° of flexion) according to the passive-ipsilateral JPR method (Figure 1). For the knee JPR test, children were seated blindfolded on the table with their lower legs hanging relaxed and unsupported (90° of knee flexion) while not wearing shoes and socks. The upper leg was fully supported by the table (90° of hip flexion) and hands were crossed on the chest. For the hip JPR test, the resting position was the same, only the children sat on an inclined cushion on the table (without back support) to align the baseline hip joint angle at 70° of flexion. From the resting position, the examiner moved the child's limb 30° to knee extension, resulting in a final knee flexion angle of 60°, or 20° to hip flexion, resulting in a final hip flexion angle of 90°, using an inclinometer distally attached to the moving segment. After experiencing and memorizing this joint position for five seconds, the child's limb was passively returned to the start position. Afterward, the examiner moved the ipsilateral limb back into the same range and the child was asked to re-identify the target joint position as accurately as possible by pressing a button synchronized to the motion capture software.
Depending on the region from which the child was recruited, testing was conducted at
the Multidisciplinary Motor Centre Antwerp (M2OCEAN) (University of Antwerp) or the
Gait Real-time Analysis Interactive Lab (GRAIL) (Hasselt University) by the same trained
examiner using the same standardized protocol and instructions. This protocol was first
tested on seven pilot participants before the actual data collection started.
For the current study, we focused on the concurrent validity of DeepLabCut marker-
less tracking with respect to three-dimensional (3D) Vicon optoelectronic motion capture
to assess proprioceptive function. This required synchronized measurements of both systems. In the current study, 90 knee JPR test trials of 15 typically developing children and 126 hip JPR test trials of 21 typically developing children were screened for eligibility. Data collection was conducted between April 2022 and June 2023. Trials were eligible for the current study if the JPR tests were captured on video, if there was accurate tracking of the markers for the 3D movement analysis and if there was visibility of the shoulder, hip, knee and ankle joints of the child in the sagittal video recordings.
Figure 1. The passive-ipsilateral JPR task for the knee and hip. (A) Experience: from the resting position (Knee: 90° of knee flexion, Hip: 70° of hip flexion), the examiner moved the child's limb 30° to knee extension or 20° to hip flexion, using an inclinometer distally attached to the moving segment (knee: lower leg, hip: upper leg), approximately perpendicular to the flexion-extension axis. (B) Memory: after experiencing and memorizing this joint position for five seconds, the child's limb was passively returned to the neutral start position. (C) Reproduction: afterward, the examiner moved the ipsilateral limb back into the same range and the child was asked to re-identify the target joint position (Knee: 60° of knee flexion, Hip: 90° of hip flexion) as accurately as possible by pressing a button synchronized to motion capture software.
2.4. Materials and Software
• Optoelectronic 3D motion capture
Lower body 3D kinematics were collected with a 10-camera Vicon motion capture system (VICON, Oxford Metrics, Oxford, UK) (100 Hz), using the International Society of Biomechanics (ISB) lower limb marker model (26 markers) [25]. The 3D knee and hip flexion/extension joint angles were quantified via VICON Nexus software (version 2.12.1) using Euler angles.
• Two-dimensional deep-learning-based motion capture
Two-dimensional (2D) pose estimation was performed with DeepLabCut (version 2.3.7), an open-source deep learning Python toolbox (https://github.com/DeepLabCut (accessed on 10 October 2023)) [21,22]. Anatomical landmarks (i.e., ankle, knee, hip and shoulder) were tracked in sagittal video recordings (Figure 2).
The resolution of the video recordings was 644 × 486 pixels when recorded at the M2OCEAN laboratory and 480 × 640 pixels when recorded at the GRAIL. The frame rate of the video recordings was 50 frames per second at both sites. Separate DeepLabCut neural networks were created per joint for the left and right sides. The pretrained MPII human model of the DeepLabCut toolbox (ResNet101) was used to retrieve the 2D coordinates of these anatomical landmarks:
• with an additional 30 manually labeled frames from eight videos of left knee JPR test trials of six different children (network 1),
• with an additional 25 manually labeled frames from six videos of right knee JPR test trials of five different children (network 2),
• with an additional 28 manually labeled frames from six videos of left hip JPR test trials of six different children (network 3) and
• with an additional 26 manually labeled frames from nine videos of hip JPR test trials of six different children (network 4).
Figure 2. Anatomical landmarks (i.e., shoulder, hip, knee and ankle, in orange) were tracked via
deep-learning-based motion capture (DeepLabCut) in the video recordings.
The labeled frames are a combination of frames from videos of the study and pilot
participants (Tables 1and 2, Figure 3). These videos were used only for network training
and were excluded from statistical analyses. If one or more videos of a child were used for
additional labeling to extend the neural network, all videos of that child were excluded
from statistical analyses (Tables S1 and S2). Frames for labeling were manually selected
in order to select frames with a variety of appearances (e.g., clothing and posture of the
participant and examiner, background, light conditions) within the datasets. The labeled
frames were randomly split into either a training or test set (95% training dataset, 5% test
dataset). The knee networks were trained up to 520,000 iterations and the hip networks
were trained up to 500,000 iterations, both with a batch size of eight. Only trials with
confident predictions of the coordinates of the anatomical landmarks were included in the
analyses (i.e., the likelihood of confident predictions > p-cutoff of 0.8).
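For orientation, the sketch below outlines a generic DeepLabCut (v2.x) project workflow corresponding to the steps described above (frame labeling, training on a ResNet-101 backbone and video analysis). The project name, video paths and parameter values are illustrative assumptions, and the authors' extension of the pretrained MPII human model is approximated here by the standard project workflow rather than reproduced exactly.

```python
# Hypothetical sketch of a DeepLabCut (v2.x) workflow; names and paths are
# illustrative assumptions, not the authors' actual configuration.
import deeplabcut

videos = ["videos/left_knee_trial01.mp4"]  # assumed file names

# 1. Create a project; the bodyparts (shoulder, hip, knee, ankle) are then
#    defined in the generated config.yaml.
config_path = deeplabcut.create_new_project(
    "LeftKneeJPR", "experimenter", videos, copy_videos=True
)

# 2. Extract and manually label a small set of frames with varied appearance.
deeplabcut.extract_frames(config_path, mode="manual")
deeplabcut.label_frames(config_path)

# 3. Build the training dataset on a ResNet-101 backbone and train
#    (batch size and augmentation are set in the generated pose_cfg.yaml).
deeplabcut.create_training_dataset(config_path, net_type="resnet_101")
deeplabcut.train_network(config_path, maxiters=520000)

# 4. Evaluate (train/test pixel errors) and analyze new videos; per-frame x, y
#    coordinates and likelihoods are written out, so low-confidence frames
#    (e.g., likelihood below a 0.8 cutoff) can be discarded afterwards.
deeplabcut.evaluate_network(config_path)
deeplabcut.analyze_videos(config_path, videos, save_as_csv=True)
```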
Table 1. Overview of the labeled frames of knee trials used for DeepLabCut network training.
Subject ID    Left or Right (L/R)    nr of Labelled Trials (nr of Labelled Frames)
Pilot1 L 2 trials (5 frames)
Pilot2 L 2 trials (5 frames)
Pilot3 L 1 trial (5 frames)
9 L 1 trial (5 frames)
11 L 1 trial (5 frames)
13 L 1 trial (5 frames)
all subjects (left trials) L 8 trials (30 frames)
Pilot2 R 1 trial (5 frames)
Pilot3 R 1 trial (5 frames)
5 R 1 trial (5 frames)
10 R 2 trials (5 frames)
13 R 1 trial (5 frames)
all subjects (right trials) R 6 trials (25 frames)
Table 2. Overview of the labeled frames of hip trials used for DeepLabCut network training.
Subject ID    Left or Right (L/R)    nr of Labelled Trials (nr of Labelled Frames)
Pilot1 L 1 trial (5 frames)
Pilot3 L 1 trial (5 frames)
6 L 1 trial (5 frames)
8 L 1 trial (3 frames)
9 L 1 trial (5 frames)
14 L 1 trial (5 frames)
all subjects (left trials) L 6 trials (28 frames)
Pilot1 R 1 trial (5 frames)
Pilot3 R 1 trial (5 frames)
1 R 1 trial (3 frames)
2 R 3 trials (5 frames)
5 R 1 trial (3 frames)
13 R 2 trials (5 frames)
all subjects (right trials) R 9 trials (26 frames)
Figure 3. Overview of the deep-learning-based motion capture (two-dimensional pose estimation, DeepLabCut) workflow for (A) the knee JPR test trials and (B) the hip JPR test trials.
The 2D knee flexion/extension joint angles were quantified by determining the angle
between the upper and lower leg using trigonometric functions based on hip, knee and an-
kle joint coordinates via deep-learning-based motion capture. The 2D hip flexion/extension
joint angles were quantified by determining the angle between the trunk and the upper
leg using trigonometric functions based on shoulder, hip and knee joint coordinates via
deep-learning-based motion capture.
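The exact trigonometric formulation is not specified in the text; the following minimal sketch shows one common approach (the angle at the middle landmark computed from the vectors to the two adjacent landmarks via arctan2), assuming pixel coordinates from the DeepLabCut output. The function and the flexion convention are illustrative assumptions, not necessarily the implementation used in the study.

```python
# Minimal sketch (assumed formulation): the 2D angle at a joint, computed from
# the pixel coordinates of three tracked landmarks, e.g. hip-knee-ankle for the
# knee or shoulder-hip-knee for the hip.
import numpy as np

def joint_angle_2d(proximal, joint, distal):
    """Angle (degrees) at `joint` between the segments joint->proximal and
    joint->distal, each given as (x, y) pixel coordinates."""
    v1 = np.asarray(proximal, dtype=float) - np.asarray(joint, dtype=float)
    v2 = np.asarray(distal, dtype=float) - np.asarray(joint, dtype=float)
    angle = np.degrees(np.arctan2(v2[1], v2[0]) - np.arctan2(v1[1], v1[0]))
    return abs((angle + 180.0) % 360.0 - 180.0)  # wrap to [0, 180] degrees

# Example: knee flexion expressed as the deviation from a straight (180 deg) leg.
hip, knee, ankle = (320, 180), (330, 260), (300, 330)
knee_flexion = 180.0 - joint_angle_2d(hip, knee, ankle)
```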
2.5. Data Analysis
The time points of the targeted angles were determined by placing event markers in
the VICON Nexus software within the optoelectronic 3D motion capture software. The
time points of the reproduced angles were determined via event markers created within
the optoelectronic 3D motion capture software by the child pressing the button. These time
points were also used to determine the targeted and reproduced angles via deep-learning-
based motion capture. Data of the left and right segment sides were pooled together.
The difference between the targeted and reproduced joint angle (i.e., JRE in degrees)
was the outcome measure to validate deep-learning-based motion capture (DeepLabCut)
with respect to laboratory-based 3D optoelectronic motion capture (gold standard) to
assess proprioceptive function. Negative JREs corresponded to a greater reproduced angle
compared to the targeted angle. Positive JREs corresponded to a smaller reproduced angle
compared to the targeted angle.
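As a small illustrative sketch of this outcome measure (assumed variable names; the sign convention follows the description above, i.e., targeted minus reproduced angle), the per-trial and summary errors could be computed as:

```python
# Illustrative sketch of the JRE computation described above (assumed names).
# JRE = targeted angle - reproduced angle, so a negative JRE means the
# reproduced angle was greater than the targeted angle.
import numpy as np

def joint_reproduction_errors(targeted_deg, reproduced_deg):
    targeted = np.asarray(targeted_deg, dtype=float)
    reproduced = np.asarray(reproduced_deg, dtype=float)
    jre = targeted - reproduced                  # signed (systematic) error per trial
    return {
        "systematic_mean": jre.mean(),           # mean signed error over repetitions
        "best_absolute": np.abs(jre).min(),      # smallest absolute error across repetitions
    }

# Example: three knee repetitions with a 60-degree target.
print(joint_reproduction_errors([60, 60, 60], [63.5, 61.0, 58.2]))
```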
2.6. Statistics
The normality of the residuals of the JRE differences between the systems was assessed
by visually checking the Q-Q plot and histogram of the unstandardized residuals, by
determining the skewness and kurtosis and by using the Shapiro–Wilk test.
Per DeepLabCut neural network, a training and a test pixel error were calculated.
For the training pixel error, the locations of the manually labeled anatomical landmarks
in frames in the training data set (i.e., 95% of all labeled frames) were compared to the
predicted locations of the anatomical landmarks after training. For the test pixel error, the
locations of the manually labeled anatomical landmarks in frames in the test dataset (i.e.,
5% of all labeled frames, which were not used for training) were compared to the predicted
locations of the anatomical landmarks after training.
To assess differences in systematic JREs between DeepLabCut and VICON, a linear
mixed model was used with a dependent variable (hip or knee JRE), fixed variable (mea-
surement system) and random variable (subject ID). The best repeated covariance type was chosen based on the Akaike information criterion (AIC).
In addition, the best absolute error (i.e., minimal JRE across the three repetitions) was
determined per subject and compared between VICON and DeepLabCut using a paired
t-test in order to validate a clinically applicable outcome measure. Furthermore, the linear
correlation between the best absolute errors measured with DeepLabCut and VICON was
assessed using a Pearson correlation.
Hip and knee Bland–Altman plots were created (based on the systematic JREs) with
a 95% confidence interval for the mean difference between the systems. The presence
of proportional bias was tested by testing the slope of the regression line fitted to the
Bland–Altman plot using linear regression analysis.
Statistical analyses were performed with SPSS (v29) at α < 0.05.
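The analyses were run in SPSS; purely as an illustration, the following Python sketch (assumed data layout and variable names) reproduces the agreement statistics described above, i.e., the paired t-test, Pearson correlation, Bland–Altman bias and limits of agreement, and the proportional-bias regression. The linear mixed model with its repeated covariance structures (Toeplitz, compound symmetry) is omitted here because it does not map directly onto these library calls.

```python
# Illustrative Python equivalent of the SPSS agreement analyses (assumed names).
import numpy as np
from scipy import stats

def compare_systems(jre_vicon, jre_dlc):
    """Agreement statistics between paired JREs from VICON and DeepLabCut."""
    vicon = np.asarray(jre_vicon, dtype=float)
    dlc = np.asarray(jre_dlc, dtype=float)

    # Paired t-test and Pearson correlation (used for the best absolute errors).
    t_stat, t_p = stats.ttest_rel(vicon, dlc)
    r, r_p = stats.pearsonr(vicon, dlc)

    # Bland-Altman: mean difference (bias) and 95% limits of agreement.
    diff = vicon - dlc
    mean_pair = (vicon + dlc) / 2.0
    bias = diff.mean()
    loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

    # Proportional bias: slope of the differences regressed on the pair means.
    slope, intercept, slope_r, slope_p, slope_se = stats.linregress(mean_pair, diff)

    return {"t": t_stat, "t_p": t_p, "pearson_r": r, "pearson_p": r_p,
            "bias": bias, "loa": loa,
            "prop_bias_slope": slope, "prop_bias_p": slope_p}
```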
3. Results
In total, 42 trials of knee JPR tests of 11 typically developing children (six girls, mean age 8.9 ± 1.0 years old, mean BMI 17.0 ± 2.7 kg/m²) out of the initially included 90 trials of 15 typically developing children were included in the statistical analyses (Figure 3 and Table S1). Likewise, 41 trials of hip JPR tests of 14 typically developing children (six girls, mean age 9.0 ± 1.1 years old, mean BMI 16.9 ± 2.5 kg/m²) out of the initially included 126 trials of 21 typically developing children were included in the statistical analyses (Figure 3 and Table S2).
Visual inspection of the Q-Q plot and histogram of the unstandardized residuals, the Shapiro–Wilk test results (knee: p = 0.91, hip: p = 0.31), and skewness (knee: 0.47, hip: 0.49) and kurtosis (knee: 1.03, hip: 0.98) values close to zero indicated that the data were normally distributed.
Training and test pixel errors of the different DeepLabCut neural networks were all
below four pixels (which corresponds to errors below 2 cm) (Table S3).
3.1. Knee JPR Tests
There was no significant difference between the knee systematic JREs measured with VICON (−3.65 degrees) and the knee systematic JREs measured with DeepLabCut (−3.69 degrees) (F = 0.09, p = 0.77, covariance structure = Toeplitz) (Figure 4). The best absolute errors measured with VICON (6.2 degrees) were not significantly different from the best absolute errors measured with DeepLabCut (5.9 degrees) (t(10) = 0.22, p = 0.83). The best absolute errors measured with VICON were strongly correlated with the best absolute errors measured with DeepLabCut (r(9) = 0.92, p < 0.001).
Figure 4. (A,B) Mean (dashed line) and individual (solid lines) knee and hip systematic Joint position Reproduction Errors (JRE, in degrees) measured with optoelectronic three-dimensional motion capture (VICON) and deep-learning-based motion capture (DeepLabCut). Lines in the same color correspond to data of one subject. (C,D) Bland–Altman plots comparing the knee and hip systematic JREs (in degrees) measured with VICON and DeepLabCut. The black dashed line represents the mean difference between VICON and DeepLabCut and the black dotted lines represent the 95% confidence interval (mean ± (1.96 × SD)). Dots in the same color correspond to data of one subject.
The scatters of differences were uniform (homoscedasticity), and the slope of the regression line fitted to the Bland–Altman plot did not significantly differ from zero (R2 = 0.04, F(1, 40) = 1.76, p = 0.19).
3.2. Hip JPR Tests
There was no significant difference between the hip systematic JREs measured with VICON (−2.42 degrees) and the hip systematic JREs measured with DeepLabCut (−2.41 degrees) (F = 0.38, p = 0.54, covariance structure = compound symmetry) (Figure 4). The best absolute errors measured with VICON (3.4 degrees) were not significantly different from the best absolute errors measured with DeepLabCut (4.2 degrees) (t(13) = −0.12, p = 0.90). The best absolute errors measured with VICON were strongly correlated with the best absolute errors measured with DeepLabCut (r(12) = 0.81, p < 0.001).
Proportional bias was present, as the slope of the regression line fitted to the Bland–Altman plot significantly differed from zero (R2 = 0.31, F(1, 39) = 17.46, p < 0.001).
4. Discussion
The aim of this study was to validate two-dimensional deep-learning-based motion
capture (DeepLabCut) with respect to laboratory-based optoelectronic three-dimensional
motion capture (VICON, gold standard) to assess proprioceptive function (joint position
sense) by measuring the JRE during knee and hip JPR tests in typically developing children.
We validated measuring knee and hip systematic JREs using DeepLabCut with respect to
VICON, but also validated measuring the best absolute JREs using DeepLabCut, in order
to also validate a clinically applicable outcome measure (where over- or undershooting is not important).
Until now, no previous study has reported the validity of a deep-learning-based
motion capture system for the assessment of proprioception by measuring the JRE during
knee and hip JPR tests. Deep-learning-based motion capture techniques (such as OpenPose,
Theia3D, Kinect and DeepLabCut) have already been validated to measure joint kinematics
and spatiotemporal parameters with respect to a marker-based motion capture system
during walking, throwing or jumping in children with cerebral palsy [26] and healthy adults [27–32]. The average Root Mean Square (RMS) errors of the captured sagittal knee and hip joint angles between markerless tracking and marker-based tracking ranged from 3.2 to 5.7 degrees and from 3.9 to 11.0 degrees, respectively, during walking. These differences between joint angles measured with the two systems are higher than the average absolute differences in systematic JREs measured with the two systems in the current study (knee: 1.2 degrees, hip: 1.7 degrees). Greater differences between the systems during walking compared to JPR tests could be explained by the greater movement speed of the body segments during walking. Greater movement speed increases the probability of image blurring, which may lead to inaccurate markerless tracking of the body landmarks [33].
Furthermore, the differences in JREs measured with the two systems are negligible
because the averages of the absolute JRE differences between the systems (knee: 1.2 degrees
(systematic) and 0.9 degrees (best absolute), hip: 1.7 degrees (systematic) and 1.2 degrees
(best absolute)) are smaller than the intersession standard error of measurement (SEM)
values for the knee and hip JRE measured with VICON, which were previously determined
in our sample of typically developing children (knee: 2.26 degrees, hip: 2.03 degrees)
(submitted for publication; unpublished [23]). Further research is needed to assess the
reliability of measuring the JRE with DeepLabCut.
In order to use DeepLabCut to assess proprioceptive function, it is important to unravel
the potential cause of the proportional bias, which was present in the hip Bland–Altman
plot. The difference between the hip JRE measured with VICON and DeepLabCut was
greater when the mean JRE of both systems deviated more from zero. More specifically, a difference between the hip JRE measured with VICON and DeepLabCut of more than two degrees (i.e., the hip SEM) often corresponded to a relatively large VICON JRE of at least three degrees. However, large VICON JREs did not always correspond to large differences between the hip JRE measured with VICON and DeepLabCut. This means that DeepLabCut did not measure the JRE accurately in some, but not all, children with relatively large VICON JREs of at least three degrees. A possible explanation could
be that the DeepLabCut neural networks were not optimally trained to accurately predict
the coordinates of the anatomical landmarks (especially the shoulder joint when wearing
a loose-fitted shirt, which could have caused the presence of a proportional bias only in
the hip Bland–Altman plot), which are used to determine joint angles, in some children
with relatively large deviations of the reproduction angle from the target angle (i.e., large
JREs). The presence of a proportional bias may be prevented by extending the neural network models with more labeled frames taken during the reproduction phase of trials with a large JRE. Another reason for differences in JREs measured with VICON and DeepLabCut could
be that the visual appearance (such as clothing and posture) of some participants was less similar to that of the participants whose videos were used for network training. This could lead to inaccurate DeepLabCut tracking of the joints for these participants. Increasing
the variation in appearance in the video footage used for network training (e.g., clothing
and posture of the participant and examiner, background, light conditions) will increase the
accuracy of the DeepLabCut tracking, potentially leading to smaller differences between
the JREs measured with VICON and DeepLabCut. However, an exploration of the effect of
environmental factors (such as lighting and background) on DeepLabCut’s performance is
still needed.
In the current study, the visual appearance also included the skin-mounted markers that were visible in the video footage. However, the markers were not used for labelling in DeepLabCut, as the marker locations differ from the DeepLabCut anatomical landmarks. Tracking the joints using DeepLabCut should therefore also work without skin-mounted markers visible in the video footage. Ideally, however, if only markerless tracking (and no marker-based tracking) is to be used in future work to determine joint angles, video footage of persons without skin-mounted markers should also be used for network training. Future work should also consider applying data augmentation methods (e.g., rotations, scaling, mirroring), as this increases dataset diversity (without additional data collection) and therefore improves deep learning model robustness and generalization.
Furthermore, by extending the neural network models, it could be expected that fewer
trials need to be excluded because of low confidence in the predictions of the coordinates of
the anatomical landmarks; however, the dataset of the current study was too limited to further extend the neural network models (with more variation in the labeled frames), because videos of trials used for network training were excluded from the statistical analyses. A
possible reason to exclude trials was if one or more joints were not visible in the video
recording. It requires some practice to master the posture of the examiner in which the
joints of the participant are clearly visible in the video. Visibility of the joints of the
participant could be increased when using active proprioception assessment methods (e.g.,
active JPR assessment and active movement extent discrimination assessment (AMEDA)).
Furthermore, the ecological validity of proprioception assessment would increase using
active proprioception assessment methods. However, using active methods increases the
risk of out-of-plane movements. Out-of-plane movements of a participant could cause different JREs to be measured by the two systems, because determining JREs from two-dimensional joint angles will be inaccurate for participants whose movement is not confined to a single plane. Differences in JREs measured by the two systems caused by out-of-plane
movements could not be solved by extending the neural network models, but should
be solved by measuring three-dimensional joint angles. In the current study, typically
developing children performed passive JPR tasks, while sagittal videos were recorded (i.e.,
a camera on a tripod was placed perpendicular to the movement direction, the flexion-
extension axis), which resulted in a minimized chance of out-of-plane movements. In the
case of performing JPR tasks in other populations (such as children with cerebral palsy)
and/or under active testing conditions, more out-of-plane movements will be expected.
Therefore, measuring three-dimensional joint angles should be considered when out-of-
plane movements are expected. Assessing the validity and reliability of DeepLabCut in
measuring JREs based on three-dimensional joint angles is still needed.
Further research is needed to assess the validity of DeepLabCut to measure the JRE
in joints other than the knee and hip (such as the ankle and upper extremity joints) and
different measurement planes (such as the frontal movement plane, determining joint
adduction-abduction). Detecting proprioceptive deficits via the assessment of proprioceptive function is not only relevant in typically developing children, but also in athletes,
older adults and disabled people. In these populations, poorer proprioception is indicative
of a higher chance of getting a sports injury [34], can further exacerbate the likelihood of falls and balance problems [35,36] and can negatively affect movement control [37–42].
Deep-learning-based motion capture techniques could identify these proprioceptive deficits
and therefore facilitate the implementation of proprioception assessment in clinical and
sports settings, as no markers and no laboratory setting are needed.
5. Conclusions
Measuring proprioception (joint position sense) during a knee joint extension and hip joint flexion position reproduction test with two-dimensional deep-learning-based motion capture (DeepLabCut) is valid with respect to laboratory-based optoelectronic three-dimensional motion capture (Vicon motion capture system, gold standard) in typically developing children. Tools based on deep learning, such as DeepLabCut, make it possible to accurately measure joint angles in order to assess proprioception, without the need for a laboratory or to attach markers, and with a high level of automatization. Further research is needed to assess the validity of DeepLabCut to assess proprioception in other joints, populations and tasks.
Supplementary Materials: The following supporting information can be downloaded at:
https://www.mdpi.com/article/10.3390/app15073428/s1. Table S1. Knee joint position repro-
duction test trials overview. Reasons for exclusion of trials were: one or more joints (e.g., ankle, knee,
hip, shoulder) were not visible in the video recording, a measurement error occurred (the trial was
not captured on video or the joint angle was not accurately calculated in VICON), one or more videos
of a child were used for additional labelling to extend the DeepLabCut neural network or if there
was unconfident tracking of joints by using DeepLabCut (DLC outlier); Table S2. Hip joint position
reproduction test trials overview. Reasons for exclusion of trials were: one or more joints (e.g., ankle,
knee, hip, shoulder) were not visible in the video recording, a measurement error occurred (the trial
was not captured on video or the joint angle was not accurately calculated in VICON), one or more
videos of a child were used for additional labelling to extend the DeepLabCut neural network or if
there was unconfident tracking of joints by using DeepLabCut (DLC outlier); Table S3. Train and test
pixel errors of the different DeepLabCut neural networks.
Author Contributions: Conceptualization: N.J., A.H. and P.M.; methodology: M.v.d.B., N.J., A.H.
and P.M.; software: M.v.d.B., N.J., A.H. and P.M.; validation: M.v.d.B., N.J., A.H. and P.M.; formal
analysis: M.v.d.B. and N.J.; investigation: N.J.; resources: M.v.d.B., N.J., A.H. and P.M.; data curation:
M.v.d.B. and N.J.; writing—original draft preparation: M.v.d.B.; writing—review and editing: N.J.,
A.H. and P.M.; visualization: M.v.d.B. and N.J.; supervision: A.H. and P.M.; project administration:
M.v.d.B. and N.J.; funding acquisition: N.J., A.H. and P.M. All authors have read and agreed to the
published version of the manuscript.
Funding: NJ and this work were supported by the Research Foundation—Flanders (FWO)
(grant number: 92836, 2021) and the Special Research Fund (BOF) for Small Research Project—Hasselt
University (BOF19KP08), respectively.
Institutional Review Board Statement: The study was conducted in accordance with the Declara-
tion of Helsinki, and approved by the Committee for Medical Ethics (CME) of Hasselt University
(UHasselt), CME of Antwerp University Hospital-University of Antwerp (UZA-UA) and Ethics
Committee Research of University Hospitals Leuven-KU Leuven (UZ-KU Leuven) (B3002021000145,
11 March 2022).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: The datasets presented in this article are not readily available because
the data are part of an ongoing study. Requests to access the datasets should be directed to the
corresponding author.
Acknowledgments: The authors would like to thank all children and parents who volunteered and
participated in this study and the school and master’s students who collaborated and assisted with
the recruitment of the children.
Conflicts of Interest: The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
JPR Joint position reproduction
JRE Joint position reproduction error
CME Committee for Medical Ethics
3D Three-dimensional
2D Two-dimensional
ISB International Society of Biomechanics
AIC Akaike information criterion
RMS Root mean square
AMEDA Active movement extent discrimination assessment
References
1. Peterka, R.J. Sensorimotor integration in human postural control. J. Neurophysiol. 2002, 88, 1097–1118. [PubMed]
2. Sherrington, C.S. The Integrative Action of the Nervous System; Yale University Press: New Haven, CT, USA, 1906.
3. Proske, U.; Gandevia, S.C. The proprioceptive senses: Their roles in signaling body shape, body position and movement, and muscle force. Physiol. Rev. 2012, 92, 1651–1697. [PubMed]
4. Zarkou, A.; Lee, S.C.K.; Prosser, L.A.; Jeka, J.J. Foot and Ankle Somatosensory Deficits Affect Balance and Motor Function in Children with Cerebral Palsy. Front. Hum. Neurosci. 2020, 14, 45.
5. Mendez-Rebolledo, G.; Ager, A.L.; Ledezma, D.; Montanez, J.; Guerrero-Henriquez, J.; Cruz-Montecinos, C. Role of active joint position sense on the upper extremity functional performance tests in college volleyball players. PeerJ 2022, 10, e13564.
6. Lin, S.I. Motor function and joint position sense in relation to gait performance in chronic stroke patients. Arch. Phys. Med. Rehabil. 2005, 86, 197–203.
7. Chen, F.C.; Pan, C.Y.; Chu, C.H.; Tsai, C.L.; Tseng, Y.T. Joint position sense of lower extremities is impaired and correlated with balance function in children with developmental coordination disorder. J. Rehabil. Med. 2020, 52, jrm00088.
8. van Roon, D.; Steenbergen, B.; Meulenbroek, R.G. Movement-accuracy control in tetraparetic cerebral palsy: Effects of removing visual information of the moving limb. Mot. Control 2005, 9, 372–394.
9. Klingels, K.; Demeyere, I.; Jaspers, E.; De Cock, P.; Molenaers, G.; Boyd, R.; Feys, H. Upper limb impairments and their impact on activity measures in children with unilateral cerebral palsy. Eur. J. Paediatr. Neurol. 2012, 16, 475–484.
10. Kantak, S.S.; Zahedi, N.; McGrath, R.L. Task-Dependent Bimanual Coordination After Stroke: Relationship with Sensorimotor Impairments. Arch. Phys. Med. Rehabil. 2016, 97, 798–806.
11. Ryerson, S.; Byl, N.N.; Brown, D.A.; Wong, R.A.; Hidler, J.M. Altered trunk position sense and its relation to balance functions in people post-stroke. J. Neurol. Phys. Ther. 2008, 32, 14–20.
12. Kuczynski, A.M.; Dukelow, S.P.; Semrau, J.A.; Kirton, A. Robotic Quantification of Position Sense in Children with Perinatal Stroke. Neurorehabilit. Neural Repair 2016, 30, 762–772. [CrossRef] [PubMed]
13. Damiano, D.L.; Wingert, J.R.; Stanley, C.J.; Curatalo, L. Contribution of hip joint proprioception to static and dynamic balance in cerebral palsy: A case control study. J. Neuroeng. Rehabil. 2013, 10, 57. [CrossRef]
14. Aman, J.E.; Elangovan, N.; Yeh, I.L.; Konczak, J. The effectiveness of proprioceptive training for improving motor function: A systematic review. Front. Hum. Neurosci. 2014, 8, 1075. [CrossRef]
15. Horvath, A.; Ferentzi, E.; Schwartz, K.; Jacobs, N.; Meyns, P.; Koteles, F. The measurement of proprioceptive accuracy: A systematic literature review. J. Sport Health Sci. 2023, 12, 219–225. [CrossRef]
16. Goble, D.J. Proprioceptive acuity assessment via joint position matching: From basic science to general practice. Phys. Ther. 2010, 90, 1176–1184. [CrossRef] [PubMed]
17. Hillier, S.; Immink, M.; Thewlis, D. Assessing Proprioception: A Systematic Review of Possibilities. Neurorehabilit. Neural Repair 2015, 29, 933–949. [CrossRef]
18. Baert, I.A.; Mahmoudian, A.; Nieuwenhuys, A.; Jonkers, I.; Staes, F.; Luyten, F.P.; Truijen, S.; Verschueren, S.M.P. Proprioceptive accuracy in women with early and established knee osteoarthritis and its relation to functional ability, postural control, and muscle strength. Clin. Rheumatol. 2013, 32, 1365–1374. [CrossRef] [PubMed]
19. Gorton, G.; Hebert, D.; Gannotti, M. Assessment of the kinematic variability among 12 motion analysis laboratories. Gait Posture 2009, 29, 398–402. [CrossRef]
20. Leardini, A.; Chiari, L.; Croce, U.D.; Cappozzo, A. Human movement analysis using stereophotogrammetry: Part 3. Soft tissue artifact assessment and compensation. Gait Posture 2005, 21, 212–225. [CrossRef]
21. Mathis, A.; Mamidanna, P.; Cury, K.M.; Abe, T.; Murthy, V.N.; Mathis, M.W.; Bethge, M. DeepLabCut: Markerless pose estimation of user-defined body parts with deep learning. Nat. Neurosci. 2018, 21, 1281–1289. [CrossRef]
22. Nath, T.; Mathis, A.; Chen, A.C.; Patel, A.; Bethge, M.; Mathis, M.W. Using DeepLabCut for 3D markerless pose estimation across species and behaviors. Nat. Protoc. 2019, 14, 2152–2176. [PubMed]
23. Jacobs, N.; van den Bogaart, M.; Hallemans, A.; Meyns, P. Multi-joint approach for assessing lower limb proprioception: Reliability and precision in school-aged children. medRxiv 2024. [CrossRef]
24. Gagnier, J.J.; Lai, J.; Mokkink, L.B.; Terwee, C.B. COSMIN reporting guideline for studies on measurement properties of patient-reported outcome measures. Qual. Life Res. 2021, 30, 2197–2218.
25. Wu, G.; Siegler, S.; Allard, P.; Kirtley, C.; Leardini, A.; Rosenbaum, D.; Whittle, M.; D'Lima, D.D.; Cristofolini, L.; Witte, H.; et al. ISB recommendation on definitions of joint coordinate system of various joints for the reporting of human joint motion—Part I: Ankle, hip, and spine. J. Biomech. 2002, 35, 543–548.
26. Kidzinski, L.; Yang, B.; Hicks, J.L.; Rajagopal, A.; Delp, S.L.; Schwartz, M.H. Deep neural networks enable quantitative movement analysis using single-camera videos. Nat. Commun. 2020, 11, 4054.
27. Bilesan, A.; Behzadipour, S.; Tsujita, T.; Komizunai, S.; Konno, A. Markerless Human Motion Tracking Using Microsoft Kinect SDK and Inverse Kinematics. In Proceedings of the 2019 12th Asian Control Conference (ASCC), Kitakyushu, Japan, 9–12 June 2019.
28. Kanko, R.M.; Laende, E.K.; Davis, E.M.; Selbie, W.S.; Deluzio, K.J. Concurrent assessment of gait kinematics using marker-based and markerless motion capture. J. Biomech. 2021, 127, 110665.
29. Kanko, R.M.; Laende, E.K.; Strutzenberger, G.; Brown, M.; Selbie, W.S.; DePaul, V.; Scott, S.H.; Deluzio, K.J. Assessment of spatiotemporal gait parameters using a deep learning algorithm-based markerless motion capture system. J. Biomech. 2021, 122, 110414. [CrossRef] [PubMed]
30. Drazan, J.F.; Phillips, W.T.; Seethapathi, N.; Hullfish, T.J.; Baxter, J.R. Moving outside the lab: Markerless motion capture accurately quantifies sagittal plane kinematics during the vertical jump. J. Biomech. 2021, 125, 110547.
31. Nakano, N.; Sakura, T.; Ueda, K.; Omura, L.; Kimura, A.; Iino, Y.; Fukashiro, S.; Yoshioka, S. Evaluation of 3D Markerless Motion Capture Accuracy Using OpenPose with Multiple Video Cameras. Front. Sports Act. Living 2020, 2, 50. [CrossRef]
32. Horsak, B.; Eichmann, A.; Lauer, K.; Prock, K.; Krondorfer, P.; Siragy, T.; Dumphart, B. Concurrent validity of smartphone-based markerless motion capturing to quantify lower-limb joint kinematics in healthy and pathological gait. J. Biomech. 2023, 159, 111801.
33. Fang, W.; Zheng, L.; Deng, H.; Zhang, H. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion. Sensors 2017, 17, 1037. [CrossRef] [PubMed]
34. Cameron, M.; Adams, R.; Maher, C. Motor control and strength as predictors of hamstring injury in elite players of Australian football. Phys. Ther. Sport 2003, 4, 159–166.
35. Wingert, J.R.; Welder, C.; Foo, P. Age-Related Hip Proprioception Declines: Effects on Postural Sway and Dynamic Balance. Arch. Phys. Med. Rehabil. 2014, 95, 253–261.
36. Chen, X.; Qu, X. Age-Related Differences in the Relationships Between Lower-Limb Joint Proprioception and Postural Balance. Hum. Factors 2019, 61, 702–711. [PubMed]
37. Coleman, R.; Piek, J.P.; Livesey, D.J. A longitudinal study of motor ability and kinaesthetic acuity in young children at risk of developmental coordination disorder. Hum. Mov. Sci. 2001, 20, 95–110.
38. Kaufman, L.B.; Schilling, D.L. Implementation of a strength training program for a 5-year-old child with poor body awareness and developmental coordination disorder. Phys. Ther. 2007, 87, 455–467.
39. Goble, D.J.; Hurvitz, E.A.; Brown, S.H. Deficits in the ability to use proprioceptive feedback in children with hemiplegic cerebral palsy. Int. J. Rehabil. Res. 2009, 32, 267–269.
40. Wang, T.N.; Tseng, M.H.; Wilson, B.N.; Hu, F.C. Functional performance of children with developmental coordination disorder at home and at school. Dev. Med. Child Neurol. 2009, 51, 817–825.
41. Zwicker, J.G.; Harris, S.R.; Klassen, A.F. Quality of life domains affected in children with developmental coordination disorder: A systematic review. Child Care Health Dev. 2013, 39, 562–580.
42. Li, K.Y.; Su, W.J.; Fu, H.W.; Pickett, K.A. Kinesthetic deficit in children with developmental coordination disorder. Res. Dev. Disabil. 2015, 38, 125–133.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.