Fig 2 - uploaded by Cheng Yang
Sagittal Model (Right-Hand Side). 12 visible markers are marked with green circles; 2 partially occluded markers are shown as circle outlines.

Source publication
Conference Paper
Full-text available
Driven by recent advances in information and communications technology, tele-rehabilitation services based on multimedia processing are emerging. Gait analysis is common for many rehabilitation programs, being, for example, periodically performed in the post-stroke recovery assessment. Since current optical diagnostic and patient assessment tools t...

Contexts in source publication

Context 1
... MS Kinect v2 SDK [7], which simplifies walking-line calibration. Next, we split the defined sagittal subject model into three parts: the upper-body, limb, and foot models. We locate and label all markers by validating their camera-space coordinates along the X and Y axes, and measure H0*, H7*, and W4* to W8* as the sagittal model parameters shown in Fig. ...
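The coordinate-based labelling described above can be sketched roughly as follows. This is a minimal illustration only: the threshold values, marker coordinates, and the exact split into upper-body, limb, and foot groups are assumptions, not the authors' calibrated procedure.

```python
# Hypothetical sketch: partition detected sagittal-plane marker positions
# into upper-body, limb, and foot groups by height (Y), then order each
# group by X. Thresholds and coordinates are illustrative assumptions.

def split_sagittal_model(markers, torso_y=0.9, foot_y=0.15):
    """Partition (x, y) marker coordinates into upper-body, limb,
    and foot groups by y, ordering each group by x."""
    upper = sorted(m for m in markers if m[1] >= torso_y)
    foot = sorted(m for m in markers if m[1] <= foot_y)
    limb = sorted(m for m in markers if foot_y < m[1] < torso_y)
    return upper, limb, foot

markers = [(0.2, 1.2), (0.5, 1.1), (0.3, 0.6), (0.4, 0.05), (0.6, 0.1)]
upper, limb, foot = split_sagittal_model(markers)
print(len(upper), len(limb), len(foot))  # 2 1 2
```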
Context 2
... conditions. We set the recovery radius δ = 3 for growing the high-IR P_{i,j} region by Eq. 2 and calculate the weight value as the significant mean relative distance between the depth pixels and the centroid in each histogram bin. We use the subject model defined in Sec. 2.1 to inspect all potential groups for the upper-body, limb, and foot models (see Fig. 2). In particular, we order all markers between L12 and L13 by X-coordinate and validate the distances D0, D1, D2 to obtain the most likely groups as the first look-up table. Then, for the upper limb, we sort markers under L13 by Y-coordinate and X-coordinate and evaluate the six markers nearest the ground by splitting them up ...
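The region-growing idea in this excerpt can be sketched as below: starting from a seed pixel, any pixel whose IR value exceeds a threshold and lies within a recovery radius δ = 3 of a region pixel is attached. The grid, threshold, and Chebyshev connectivity rule are illustrative assumptions, not the paper's exact Eq. 2.

```python
from collections import deque

def grow_region(ir, seed, thresh=200, delta=3):
    """Grow a high-IR region from `seed`: a pixel joins the region if its
    value >= thresh and it lies within Chebyshev distance `delta` (the
    recovery radius) of a pixel already in the region."""
    H, W = len(ir), len(ir[0])
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr in range(-delta, delta + 1):
            for dc in range(-delta, delta + 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < H and 0 <= nc < W
                        and (nr, nc) not in region and ir[nr][nc] >= thresh):
                    region.add((nr, nc))
                    queue.append((nr, nc))
    return region

# The radius bridges small gaps left by sensor noise: (0, 3) is 3 pixels
# from the seed, so it still joins the region.
ir = [[255, 0, 0, 255, 0, 0, 0]]
print(grow_region(ir, (0, 0)))
```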
Context 3
... L12 and L13 by X-coordinate and validate the distances D0, D1, D2 to obtain the most likely groups as the first look-up table. Then, for the upper limb, we sort markers under L13 by Y-coordinate and X-coordinate and evaluate the six markers nearest the ground by splitting them into two groups with the triangular foot model shown in Fig. 2. Finally, we categorize the remaining markers in the upper-limb region by Y-coordinate and solve the foot position by examining its relative position to the right knee marker and histories over ...
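The distance-validation step can be illustrated as follows: a candidate marker triple is accepted only if its pairwise distances match the expected model distances D0, D1, D2 within a tolerance. The expected distances and the tolerance here are made-up values, not the paper's calibrated sagittal-model parameters.

```python
import math

# Hypothetical sketch of validating a candidate marker group against the
# model distances D0, D1, D2. Expected values and tolerance are assumptions.

def valid_group(p0, p1, p2, expected=(0.30, 0.25, 0.20), tol=0.05):
    """Accept the triple (p0, p1, p2) if its pairwise distances match
    the expected (D0, D1, D2) within `tol` (all in the same units)."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    measured = (dist(p0, p1), dist(p1, p2), dist(p0, p2))
    return all(abs(m - e) <= tol for m, e in zip(measured, expected))

# A triangle whose sides are close to (0.30, 0.25, 0.20) passes:
print(valid_group((0.0, 0.0), (0.30, 0.0), (0.1125, 0.1654)))  # True
# A collinear, oversized triple fails the D0 check:
print(valid_group((0.0, 0.0), (1.0, 0.0), (2.0, 0.0)))  # False
```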

Citations

... Related range sensor and home-video based systems, which cost about £700, such as [9][10][11], which build on the work of [12] with Pro-Trainer motion analysis software (Sports Motion, Inc., Cardiff, CA), offer gait analysis outside the gait laboratory, e.g. in local clinics and at home. As with other range sensor and home-video based gait analysis systems [2,[13][14][15][16][17][18][19][20][21] and inertial measurement unit (IMU) based gait analysis systems [22][23][24][25][26], the gait parameters obtained after data processing can be sent to physiatrists for clinical consultation, indicating the potential for tele-rehabilitation [27][28][29][30][31]. It is shown in [32] that 2D video tracking software provides accuracy similar to the VICON 3D system for knee angle measurement, but not for measurement of the ankle angle over time. ...
... Our system addresses some of the drawbacks of related range sensor and home-video based systems [2,9,11,[13][14][15][16][17][18][19]41] and IMU systems [22][23][24][25][26], namely: (i) unlike [9], there are no colour restrictions on the background or the participant's clothing; (ii) in contrast to [9], which is validated on only one healthy volunteer with one walking trial with no gold standard benchmark, we validate our proposed system's knee angle against the gold standard VICON MX Giganet 6xT40 and 6xT160 (VICON Motion Systems Ltd., Oxford, UK, approximately £250,000) optical motion analysis system (the same gold standard as used by Ugbolue et al. [11]); (iii) unlike the systems of [11] and Pro-Trainer and Siliconcoach (Siliconcoach Ltd., Dunedin, New Zealand) as used by the authors in [42,43], which require significant manual effort, our system autonomously tracks the markers attached to the joints and calculates the knee angle; the only operational effort required is marker-template selection for tracking initialisation, which is done via a user-friendly graphical user interface (GUI); (iv) unlike the passive marker system [41], which is only validated on one side of the body without any benchmarking systems, our system is validated on both sides of the body with a gold standard VICON optical motion analysis system; (v) 3D Kinect range sensor-based systems [13][14][15][16][17][18][20][21] cannot reliably capture relatively fast body motion, since Kinect operates at only 30 frames per second (fps), whereas our system operates at 210 fps; (vi) like other range sensor and home-video based systems, our system is non-intrusive to the participants, in contrast to state-of-the-art IMU gait analysis systems [22][23][24][25][26].
However, since our gait analysis system uses only a 2D camera, it has two drawbacks: (i) estimation of the human joint locations with our system is less accurate than with 3D Kinect-based range sensor systems, and (ii) the gait parameters derived from the 2D images in our system are less reliable than those derived from the inertial data in IMU systems. ...
... Our future work will focus on further improving performance and potentially measuring more gait parameters using a stereo 2D-camera system or a single depth-sensing device [2,[13][14][15][16][17][18][19] with a high frame rate, without sacrificing portability. This would remove the parallax error and leverage the 3D information to quantify a larger number of gait parameters, such as hip, knee, and ankle angles in both the sagittal and frontal planes, and pelvis tilt; to calculate temporal-spatial parameters (step length and width, stride length, step time, cadence, and step-length symmetry) and gait speed; and to measure sagittal/frontal plane knee motion, albeit at increased processing complexity. Furthermore, we could consider neural-network-based methods in future research, subject to the public availability of large labelled datasets of walking trials for evaluation. ...
Article
Full-text available
While optical motion analysis systems can provide high-fidelity gait parameters, they are usually impractical for local clinics and home use, due to high cost, the requirement for large space, and lack of portability. In this study, we focus on a cost-effective and portable, single-camera gait analysis solution, based on video acquisition with calibration, autonomous detection of frames-of-interest, Kalman-filter+Structural-Similarity-based marker tracking, and autonomous knee angle calculation. The proposed system is tested on 15 participants, including 10 stroke patients and 5 healthy volunteers. The evaluation of autonomous frames-of-interest detection shows only a 0.2% difference between the frame number of the detected frame and the frame number of the manually labelled ground-truth frame, and thus can replace manual labelling. The system is validated against a gold standard optical motion analysis system, using knee angle accuracy as the metric of assessment. The accuracy comparison between the RGB- and the grayscale-video marker tracking schemes shows that the grayscale system suffers negligible accuracy loss while offering a significant processing speed advantage. Experimental results demonstrate that the proposed system can automatically estimate the knee angle, with an R-squared value larger than 0.95 and a Bland-Altman mean error smaller than 3.0127 degrees.
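The knee-angle calculation from tracked markers can be sketched as the angle between the thigh and shank vectors at the knee. This is a minimal geometric illustration, not the paper's full pipeline; the marker coordinates are made up.

```python
import math

# Minimal sketch: sagittal knee angle from tracked hip, knee, and ankle
# marker positions, as the angle between thigh and shank vectors.
# Coordinates are illustrative assumptions.

def knee_angle(hip, knee, ankle):
    """Angle (degrees) at the knee between the thigh (knee -> hip) and
    shank (knee -> ankle) vectors."""
    t = (hip[0] - knee[0], hip[1] - knee[1])
    s = (ankle[0] - knee[0], ankle[1] - knee[1])
    cos_a = (t[0] * s[0] + t[1] * s[1]) / (math.hypot(*t) * math.hypot(*s))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

# A straight leg gives ~180 degrees; a right-angle bend gives ~90 degrees.
print(round(knee_angle((0, 1.0), (0, 0.5), (0, 0.0))))    # 180
print(round(knee_angle((0, 0.5), (0, 0.0), (0.5, 0.0))))  # 90
```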
... Such people can hardly use their hands or legs for interactive operations, so multimedia remote interactions have been developed. Among the common systems for multimedia remote interactive operations, those based on images, voices, and biological signals are used [18]. Since the image- and voice-based systems have ... the spatial features of EEG signals, and the structure and parameters of the convolutional neural network were optimized. ...
Article
Full-text available
In order to construct multimedia remote interactive operations for locked-in patients, this paper proposes a novel BCI system constructed from EEG signals with a convolutional neural network. To construct the remote interactive operations, the temporal and spatial features of electroencephalogram (EEG) signals were extracted by longitudinal and transverse convolution kernels, respectively. The classification of motion imagination was completed using two fully-connected layers. Finally, the classification results are transferred as the multimedia remote interactive results in response to the multimedia remote interactive operations. The experimental results showed that the proposed EEG-BCI system has good performance and efficiency for multimedia remote interactive operations, which will benefit the field of multimedia remote interaction.
... Moreover, since the retroreflective markers block the depth measurements from the depth camera, the only way to recover the depth value of each marker is to use the surrounding information. To address the above problems, we propose three algorithms: (1) threshold analysis (Alg. 1), extending previous work in [25] to handle fast motion and camera noise during marker detection; (2) marker detection (Alg. 2), which improves the accuracy and speed of locating, in image space, the centroids of the markers attached to the joints of interest; and (3) depth recovery and mapping (Alg. 3): since the 3D texture is partially missing in the marker region, point-cloud histograms can be used to restore the depth value of the marker centroid. ...
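The depth-recovery idea (a marker blocks the depth measurement, so its depth is estimated from surrounding pixels) can be sketched as follows. The histogram bin width, the sample values, and the choice of the most populated bin are illustrative assumptions, not the paper's Alg. 3.

```python
from collections import Counter

# Hypothetical sketch of depth recovery: estimate the missing marker-centroid
# depth from the histogram of valid depth values in a ring around the marker,
# taking the centre of the most populated bin. Bin width and data are
# illustrative assumptions.

def recover_depth(ring_depths, bin_mm=10):
    """Estimate missing marker depth (mm) as the centre of the most
    populated histogram bin among valid surrounding depth pixels."""
    valid = [d for d in ring_depths if d > 0]  # 0 marks a missing reading
    bins = Counter(d // bin_mm for d in valid)
    mode_bin = bins.most_common(1)[0][0]
    return mode_bin * bin_mm + bin_mm / 2

# The 2200 mm background pixel is outvoted by the ~1500 mm surface pixels:
ring = [0, 0, 1502, 1498, 1505, 1499, 2200, 1503]
print(recover_depth(ring))  # 1505.0
```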
... There are several approaches to detecting and identifying blobs, such as matched filters/template matching [25], watershed detection [35], structure tensor analysis followed by hypothesis testing of gradient directions [36], [37], and scale-space analysis [38]. All these approaches are limited by their sensitivity to noise, structural restrictions, and complexity [39]. ...
... All these approaches are limited by their sensitivity to noise, structural restrictions, and complexity [39]. In previous related work [25], a concentric-circle-based method (template matching) was proposed to perform a shape-fitting test on each potential blob in order to locate all markers in image space (2D). However, this method is time consuming, requires expertise to determine the parameters of the shape fitter and the kernel cluster filter, and cannot locate the marker centre correctly when motion blur occurs or the marker leaves the sagittal plane, which leads to centre deviation for markers with circularly distributed IR values. ...
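Template matching for blob detection, in the spirit of the method this excerpt discusses, can be sketched as sliding a small bright-disc template over an IR image and keeping the best-scoring position. This pure-Python toy uses a raw sum-of-products score on a tiny made-up grid; a real system would use a normalised correlation (e.g. OpenCV's matchTemplate).

```python
# Illustrative sketch of template matching for marker/blob detection:
# exhaustively score every placement of the template and return the best.
# Image, template, and score function are toy assumptions.

def match_template(image, template):
    """Return (row, col) of the best sum-of-products match."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    best, best_pos = float("-inf"), (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            score = sum(image[r + i][c + j] * template[i][j]
                        for i in range(h) for j in range(w))
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

image = [[0] * 5 for _ in range(5)]
image[2][2] = image[2][3] = image[3][2] = image[3][3] = 9  # bright 2x2 blob
template = [[1, 1], [1, 1]]
print(match_template(image, template))  # (2, 2)
```

The excerpt's criticism applies even to this toy: the exhaustive scan is slow, and the score degrades as soon as the blob's shape departs from the template (e.g. under motion blur).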
Article
With increasing importance given to telerehabilitation, there is a growing need for accurate, low-cost, and portable motion capture systems that do not require specialist assessment venues. This paper proposes a novel framework for motion capture using only a single depth camera, which is portable and cost-effective compared to most industry-standard optical systems, without compromising on accuracy. Novel signal processing and computer vision algorithms are proposed to determine motion patterns of interest from infrared and depth data. In order to demonstrate the proposed framework’s suitability for rehabilitation, we developed a gait analysis application that depends on the underlying motion capture sub-system. Each subject’s individual kinematic parameters are calculated and stored for monitoring individual progress of the clinical therapy. Experiments were conducted on 14 different subjects, 5 healthy and 9 stroke survivors. The results show very close agreement of the resulting relevant joint angles with a 12-camera based VICON system, a mean error of at most 1.75% in detecting gait events w.r.t. the manually generated ground truth, and significant performance improvements in terms of accuracy and execution time compared to a previous Kinect-based system.
Conference Paper
Central nervous system dysfunction in infants may be manifested through inconsistent, rigid, and abnormal limb movements. Detection of limb movement anomalies associated with such neurological dysfunctions in infants is the first step towards early treatment for improving infant development. This paper addresses the issue of detecting and quantifying limb movement anomalies in infants through non-invasive 3D image analysis methods using videos from multiple camera views. We propose a novel scheme for tracking 3D time trajectories of markers on an infant's limbs by video analysis techniques. The proposed scheme employs videos captured from three camera views. This enables us to detect a set of enhanced 3D markers through cross-view matching and to effectively handle marker self-occlusions by other body parts. We track a set of 3D trajectories of limb movements by a set of particle filters in parallel, enabling more robust 3D tracking of markers, and use the 3D model errors for quantifying abrupt limb movements. The proposed work makes a significant advancement over the previous work in [1] by employing tracking in 3D space, and hence overcomes several main barriers that hinder real applications of single-camera-based techniques. To the best of our knowledge, applying such a multi-view video analysis approach for assessing neurological dysfunctions of infants through 3D time trajectories of markers on limbs is novel, and could lead to computer-aided tools for diagnosis of dysfunctions where early treatment may improve infant development. Experiments were conducted on multi-view neonate videos recorded in a clinical setting, and the results have provided further support for the proposed method.
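The recursive estimation behind the particle-filter tracking described above can be sketched in one dimension: predict by diffusing particles with a motion model, weight by the measurement likelihood, and resample. The noise levels, motion model, and observation sequence are toy assumptions, nothing like the paper's 3D marker tracker.

```python
import math
import random

# Highly simplified 1-D particle filter illustrating the predict / weight /
# resample cycle used (in 3-D) for the marker trajectories. All parameters
# are made-up toy values.

random.seed(0)

def step(particles, measurement, motion_std=0.05, meas_std=0.1):
    # Predict: diffuse particles with a random-walk motion model.
    moved = [p + random.gauss(0, motion_std) for p in particles]
    # Weight: Gaussian likelihood of each particle given the measurement.
    weights = [math.exp(-((p - measurement) ** 2) / (2 * meas_std ** 2))
               for p in moved]
    # Resample with replacement, proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

particles = [random.uniform(0, 1) for _ in range(500)]
for z in [0.2, 0.25, 0.3, 0.35]:  # noisy marker observations
    particles = step(particles, z)
estimate = sum(particles) / len(particles)  # posterior mean, near 0.35
print(round(estimate, 2))
```

Running several filters in parallel, one per marker, gives the kind of robustness the paper attributes to its multi-marker 3D tracking.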