September 2024
The Proceedings of Conference of Chugoku-Shikoku Branch
September 2024
The Proceedings of Conference of Chugoku-Shikoku Branch
December 2023
Sensors
In this study, we introduce a novel framework that combines human motion parameterization from a single inertial sensor, motion synthesis from these parameters, and biped robot motion control using the synthesized motion. This framework applies advanced deep learning methods to data obtained from an IMU attached to a human subject’s pelvis. This minimalistic sensor setup simplifies the data collection process, overcoming the cost and complexity challenges of multi-sensor systems. We employed a Bi-LSTM encoder to estimate key human motion parameters from the IMU sensor: walking velocity and gait phase. This step is followed by a feedforward motion generator-decoder network that accurately produces the lower limb joint angles and displacement corresponding to these parameters. Additionally, our method introduces a Fourier series-based approach to generate these key motion parameters solely from user commands, specifically walking speed and gait period. Hence, the decoder can receive inputs either from the encoder or directly from the Fourier series parameter generator. The output of the decoder network is then utilized as a reference motion for the walking control of a biped robot, employing a constraint-consistent inverse dynamics control algorithm. This framework facilitates biped robot motion planning based on data from either a single inertial sensor or two user commands. The proposed method was validated through robot simulations in the MuJoCo physics engine environment. The motion controller achieved an error of ≤5° in tracking the joint angles, demonstrating the effectiveness of the proposed framework. This was accomplished using minimal sensor data or few user commands, marking a promising foundation for robotic control and human–robot interaction.
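A minimal sketch of the Fourier series-based parameter generation described in the abstract, assuming a normalized sawtooth gait phase and a truncated sine series for the velocity fluctuation; the harmonic coefficients below are illustrative placeholders, not the paper's fitted values:

```python
import math

def gait_phase(t, gait_period):
    # Gait phase as a normalized sawtooth in [0, 1) over one gait cycle
    return (t % gait_period) / gait_period

def walking_velocity(t, mean_speed, gait_period,
                     harmonics=((0.05, 0.0), (0.02, 1.57))):
    # Walking velocity modeled as mean speed plus a small periodic
    # fluctuation expressed as a truncated Fourier (sine) series.
    # `harmonics` is a tuple of (amplitude, phase offset) pairs per harmonic.
    omega = 2.0 * math.pi / gait_period
    v = mean_speed
    for k, (amp, phase) in enumerate(harmonics, start=1):
        v += amp * math.sin(k * omega * t + phase)
    return v
```

Both functions depend only on the two user commands named in the abstract (walking speed and gait period), which is what lets the decoder run without any sensor input.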
June 2023
April 2023
Applied Sciences
Gait analysis is important in a variety of applications such as animation, healthcare, and virtual reality. So far, high-cost experimental setups employing special cameras, markers, and multiple wearable sensors have been used for indoor human pose-tracking and gait-analysis purposes. Since locomotive activities such as walking are rhythmic and exhibit a kinematically constrained motion, fewer wearable sensors can be employed for gait and pose analysis. One of the core parts of gait analysis and pose-tracking is lower-limb-joint angle estimation. Therefore, this study proposes a neural network-based lower-limb-joint angle-estimation method from a single inertial sensor unit. As proof of concept, four different neural-network models were investigated, including bidirectional long short-term memory (BLSTM), convolutional neural network, wavelet neural network, and unidirectional LSTM. Not only the selected network but also the sensor placement could affect the estimation results. Hence, the waist, thigh, shank, and foot were selected as candidate inertial sensor positions. From these inertial sensors, two sets of lower-limb-joint angles were estimated. One set contains only four sagittal-plane leg-joint angles, while the second includes six sagittal-plane leg-joint angles and two coronal-plane leg-joint angles. After the assessment of different combinations of networks and datasets, the BLSTM network with either shank or thigh inertial datasets performed well for both joint-angle sets. Hence, the shank and thigh parts are the better candidates for a single inertial sensor-based leg-joint estimation. Consequently, mean absolute errors (MAE) of 3.65° and 5.32° were obtained for the four-joint-angle set and the eight-joint-angle set, respectively. Additionally, the actual leg motion was compared to a computer-generated simulation of the predicted leg joints, which proved the possibility of estimating leg-joint angles during walking with a single inertial sensor unit.
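The accuracies quoted above are mean absolute errors over the estimated joint-angle sequences; as a reference, MAE in degrees reduces to:

```python
def mean_absolute_error(true_deg, pred_deg):
    # MAE between ground-truth and estimated joint angles, in degrees.
    # Both inputs are equal-length sequences of angle samples.
    assert len(true_deg) == len(pred_deg) and true_deg
    return sum(abs(t - p) for t, p in zip(true_deg, pred_deg)) / len(true_deg)
```

In the paper this metric is computed per joint and then summarized across the four- and eight-joint-angle sets.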
January 2023
IEEE Access
This study evaluated the capability of a single inertial sensor to estimate both legs’ hip and knee joint angles during four different walking patterns in an outdoor setting. The sensor was placed on the upper part of the tibia, a location chosen due to its large range of motion and minimal foot-ground impact influence. A Bi-LSTM (bidirectional long short-term memory) data-driven approach was used for joint angle estimation. The results showed smaller errors in intra-subject angle estimation compared to inter-subject, with an average MAE (mean absolute error) of 2.11° to 3.65°. The study suggests that deep learning approaches can effectively process single IMU (inertial measurement unit) data for accurate human motion monitoring, reducing the need for multiple sensors. Despite using only one sensor and four different walking patterns (zigzag, sideways, backward, and ramp walking), our method achieved similar results to previous studies that used single-motion activities. This study, conducted outdoors without instructing participants, is a step closer to real-world application, potentially providing insights into lower body biomechanics in physiotherapy, mobility improvement progress after surgery, and aiding in the development of personalized exoskeleton robots.
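Sequence models such as the Bi-LSTM above are typically fed fixed-length, overlapping windows cut from the continuous IMU stream. A minimal sketch of that segmentation step (the window length and stride here are illustrative, not the paper's values):

```python
def sliding_windows(samples, window, stride):
    # Segment a time series into fixed-length overlapping windows,
    # the usual input shape for (Bi-)LSTM sequence models.
    # `samples` is a list of IMU readings; each window keeps `window`
    # consecutive readings, and consecutive windows start `stride` apart.
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, stride)]
```

With a stride smaller than the window, adjacent windows overlap, which smooths predictions at window boundaries at the cost of more training samples.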
December 2022
The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec)
Digitizing handwriting is mostly performed either by offline methods such as optical character recognition or by combining a special stylus with a dedicated pad; the latter approach is costly. Therefore, in this study, a deep-learning-based English alphabet and Arabic numeral character recognition method is proposed. A particular digital pen, equipped with an inertial sensor (three-axis accelerometer and three-axis gyroscope) and three force sensors, developed in our previous paper, was utilized for this study. Vision transformer (ViT), which is drawing huge attention in the fields of image classification and sequential tasks, was adopted as the neural network model. The model achieved an excellent F1-score of 0.993 during testing. This confirmed a promising capability of the developed digital pen for character recognition using neural network approaches.
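The F1-score reported above is the harmonic mean of precision and recall; from raw classification counts it is:

```python
def f1_score(tp, fp, fn):
    # F1 = harmonic mean of precision and recall, computed from
    # true-positive, false-positive, and false-negative counts.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For multi-class character recognition this is usually computed per class and then averaged; the abstract does not state which averaging was used.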
October 2022
Sensors
Digitizing handwriting is mostly performed using either image-based methods, such as optical character recognition, or two or more devices, such as a special stylus and a smart pad. The high-cost nature of this approach necessitates a cheaper and standalone smart pen. Therefore, in this paper, a deep-learning-based compact smart digital pen that recognizes 36 alphanumeric characters was developed. Unlike common methods, which employ only inertial data, handwriting recognition is achieved from hand motion data captured using inertial and force sensors. The developed prototype smart pen comprises an ordinary ballpoint ink chamber, three force sensors, a six-channel inertial sensor, a microcomputer, and a plastic barrel structure. Handwritten data of the characters were recorded from six volunteers. After the data was properly trimmed and restructured, it was used to train four neural networks using deep-learning methods: Vision transformer (ViT), DNN (deep neural network), CNN (convolutional neural network), and LSTM (long short-term memory). The ViT network outperformed the others, achieving a validation accuracy of 99.05%. The trained model was further validated in real time, where it showed promising performance. These results will be used as a foundation to extend this investigation to include more characters and subjects.
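The "trimmed and restructured" step implies bringing pen strokes of varying duration to a fixed length before batching them into a network. One common way to do this, sketched here as an assumption about the preprocessing (linear resampling of each sensor channel):

```python
def resample_sequence(seq, n):
    # Linearly resample a 1-D signal to exactly n points, so that
    # variable-length handwriting strokes become fixed-length inputs.
    if n == 1:
        return [float(seq[0])]
    out = []
    for i in range(n):
        pos = i * (len(seq) - 1) / (n - 1)   # fractional index into seq
        lo = int(pos)
        hi = min(lo + 1, len(seq) - 1)
        frac = pos - lo
        out.append(seq[lo] * (1 - frac) + seq[hi] * frac)
    return out
```

Applying this per channel (accelerometer, gyroscope, and force) yields a fixed-size matrix per character, suitable for any of the four network types listed.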
August 2022
April 2022
Intelligent Service Robotics
A variable stiffness actuator (VSA) is considered a promising mechanism-based approach for realizing compliant robotic manipulators. By changing the stiffness of each joint, the robot can modulate the stiffness of the entire system to enhance safety and efficiency during physical interaction with other systems. This paper presents a feedforward method to modulate the operational stiffness of a parallel planar robot with multiple VSAs. A VSA utilizing a lever mechanism was developed, and its mechanical design and kinematic model are presented in detail. A computational model of joint-restoring torque was developed based on deformation measurements and hysteresis loop geometry to estimate the applied torque of each joint in real time. An algorithm was proposed to compute the joint stiffness solution using the robot's kinematic model for modulating the operational stiffness of the parallel robot. Experiments were performed to evaluate the proposed method by comparing the performances of two-DOF serial and parallel robot systems. The results demonstrated the capability of the VSA in both feedforward stiffness modulation and external force estimation.
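In a lever-type VSA, moving the pivot changes the effective lever ratio between the torsion spring and the output axis. A simplified illustrative model of that relationship (a rigid-lever sketch under stated assumptions, not the paper's full kinematic model with hysteresis):

```python
def output_stiffness(spring_stiffness, pivot_ratio):
    # Reflected stiffness at the output axis of an idealized lever-type
    # VSA: it scales with the square of the lever ratio set by the pivot
    # position, because both torque and deflection are scaled by the ratio.
    # spring_stiffness: torsion spring stiffness (N*m/rad)
    # pivot_ratio: output-arm length over spring-arm length (dimensionless)
    return spring_stiffness * pivot_ratio ** 2
```

The quadratic dependence is what makes a small pivot displacement effective for feedforward stiffness modulation: halving the ratio quarters the output stiffness.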
January 2022
The Proceedings of Conference of Chugoku-Shikoku Branch
... An example is the estimation of human sitting postures from the data of a pressure map sensor on a seat surface [14,15]. Similar methods are already used for monitoring patients in hospital beds and for gait analysis [11,14,16–22]. However, the suitability of this approach for estimating upper-limb posture, and more specifically human arm postures in manual activities typical of assembly processes and construction work, has not been studied yet [23]. ...
June 2023
... To handle the above-described nonlinearities in POF sensor outputs, Long Short-Term Memory (LSTM) networks [21]–[28] are employed in this study. LSTMs are designed to analyze time-series data, allowing them to model temporal relationships. ...
April 2023
Applied Sciences
... In contrast to traditional artificial neural networks (ANN), recurrent neural network (RNN) models, a type of deep learning-based architecture, have been designed to handle temporal dependencies between input and output sequences, which is a common challenge in the processing tasks of human motion data [52–54]. These methods have been used for estimating lower-limb joint kinematics with a single IMU placed on a particular body segment such as the pelvis or foot [54–56]. These studies suggest that the overall motion of a multi-segment linkage system (the pelvis-leg apparatus) during a repeated motor task such as walking may be predicted by RNN methods using data from one of the segments (the pelvis). ...
August 2022
... To better understand the complex datasets generated by the Robo-pen, much attention is given to applying advanced data analysis methods. We plan to introduce detrended fluctuation analysis (DFA) to examine long-term temporal correlations in the time-series data of handwriting metrics [15]. DFA will be especially useful in distinguishing normal age-related changes in handwriting from changes associated with Alzheimer's disease [16]. ...
October 2022
Sensors
... Open-loop approaches include variable geometry [39,40], redundant actuation [41,42], and implementation of variable stiffness actuators [43,44]. A combined use of real-time kinematic redundancy and variable stiffness actuators for achieving stiffness modulation was proposed in Ref. [38]. ...
April 2022
Intelligent Service Robotics
... The majority of existing handwriting recognition methods equipped with different sensors can be split into two groups based on whether they require direct contact with hardware [13]. Generally, digital pens [9,16,17], smart watches [18], smart bands [19], and other devices [10,20] using inertial-based sensors are attached to the skin to acquire detailed body motion data for classification. M. Schrapel et al. [9] used a microphone and an inertial measurement unit (IMU) in a pen for handwritten digit recognition to record audio and motion data while writing. ...
January 2021
... This approach benefits from not requiring labeled data, making it suitable for environments where collecting fall data is challenging. Additionally, research has explored the use of autoencoders with grayscale images of acceleration signals as input [46]. This method allows the autoencoder to capture and learn from the visual representation of the signal's dynamics, enhancing its ability to detect anomalies that may indicate a fall. ...
January 2021
... In the field of literature studies, various deep learning techniques have been employed for the purpose of recognizing human activities and enhancing occupational health and safety protocols within construction sites as well as other industrial settings [6,31,33–35,46–49]. In this study, the primary aim is to determine the most suitable model through the analysis of literature-derived data using various deep learning methods and diverse window and overlap ratios. ...
April 2021
Sensors
... The VSA developed during previous research [17,18] was adopted as an actuator for the robot joint in this paper. It exploits a lever mechanism to transmit the reaction force of a torsion spring to an output axis, while this force is adjusted by changing the pivot position of the lever link, which in turn modifies the output stiffness. ...
February 2021
... The trained model was tested on a computer and a smartphone for real-time motion recognition to check its practicability. This methodology was adopted from a previous conference paper [47]. ...
November 2020