Specifications of LSM9DS1 IMU sensor.

Source publication
Article
Full-text available
Digitizing handwriting is mostly performed either with image-based methods, such as optical character recognition, or with setups of two or more devices, such as a special stylus and a smart pad. The high cost of the latter approach motivates a cheaper, standalone smart pen. Therefore, in this paper, a deep-learning-based compact smart digital p...

Contexts in source publication

Context 1
... The LSM9DS1 IMU sensor, a small chip embedded in the Arduino Nano 33 BLE board, was used in this research. The specifications of this sensor are shown in Table 1. The placement of the IMU sensor is critical. ...

Citations

... Another approach to online HWR uses pens equipped with inertial measurement units (IMUs) [21,1,14]. These sensors, including accelerometers and gyroscopes, capture pen movement without relying on the exact position of the tip, allowing the pen to function independently of external devices and on any surface. ...
Preprint
Online handwriting recognition (HWR) using data from inertial measurement units (IMUs) remains challenging due to variations in writing styles and the limited availability of high-quality annotated datasets. Traditional models often struggle to recognize handwriting from unseen writers, making writer-independent (WI) recognition a crucial but difficult problem. This paper presents an HWR model with an encoder-decoder structure for IMU data, featuring a CNN-based encoder for feature extraction and a BiLSTM decoder for sequence modeling, which supports inputs of varying lengths. Our approach demonstrates strong robustness and data efficiency, outperforming existing methods on WI datasets, including the WI split of the OnHW dataset and our own dataset. Extensive evaluations show that our model maintains high accuracy across different age groups and writing conditions while effectively learning from limited data. Through comprehensive ablation studies, we analyze key design choices, achieving a balance between accuracy and efficiency. These findings contribute to the development of more adaptable and scalable HWR systems for real-world applications.
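The encoder-decoder structure described in the abstract above can be sketched in a few lines. This is a minimal illustration only, not the paper's implementation: the 13-channel input, 60-class output, kernel sizes, and hidden width are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class IMUHWRModel(nn.Module):
    """Hypothetical sketch: CNN encoder + BiLSTM decoder for variable-length IMU sequences."""
    def __init__(self, n_channels=13, n_classes=60, hidden=128):
        super().__init__()
        # 1-D convolutions extract local features along the time axis
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # bidirectional LSTM models the sequence in both directions
        self.decoder = nn.LSTM(128, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # per-timestep logits

    def forward(self, x):  # x: (batch, time, channels)
        z = self.encoder(x.transpose(1, 2)).transpose(1, 2)  # (batch, time, 128)
        z, _ = self.decoder(z)
        return self.head(z)

model = IMUHWRModel()
logits = model(torch.randn(2, 50, 13))  # two sequences of 50 timesteps
print(logits.shape)  # torch.Size([2, 50, 60])
```

Because the convolutions and LSTM are both length-agnostic, the same model accepts inputs of varying sequence length, matching the abstract's claim.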
... To better understand the complex datasets generated by the Robo-pen, much attention is devoted to applying advanced data-analysis methods. We plan to introduce detrended fluctuation analysis (DFA) to examine long-term temporal correlations in the time-series data of handwriting metrics [15]. DFA will be especially useful for distinguishing normal age-related changes in handwriting from changes associated with Alzheimer's disease [16]. ...
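The DFA procedure mentioned in that context can be sketched as follows; the window sizes and the white-noise test signal are illustrative choices, not the cited study's settings.

```python
import numpy as np

def dfa(x, scales):
    """Minimal detrended fluctuation analysis (DFA) sketch.

    Returns the fluctuation F(n) for each window size n in `scales`; the slope
    of log F(n) versus log n estimates the scaling exponent alpha.
    """
    y = np.cumsum(x - np.mean(x))  # integrated profile of the signal
    fluctuations = []
    for n in scales:
        n_windows = len(y) // n
        f2 = []
        for i in range(n_windows):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coeffs = np.polyfit(t, seg, 1)  # local linear trend per window
            f2.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    return np.array(fluctuations)

rng = np.random.default_rng(0)
noise = rng.standard_normal(2048)  # white noise has alpha near 0.5
scales = np.array([16, 32, 64, 128, 256])
F = dfa(noise, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(round(alpha, 2))  # close to 0.5 for uncorrelated noise
```

Long-range-correlated signals (as hypothesized for handwriting metrics) would yield alpha above 0.5, which is what the planned analysis would look for.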
Article
Alzheimer’s Disease (AD) poses a significant challenge in contemporary medicine, necessitating early and accurate diagnostic methods to manage its progression effectively. This study explores the development and application of the Robo-pen, an innovative diagnostic tool designed to detect early signs of cognitive decline through detailed handwriting analysis. The Robo-pen, equipped with an MPU-9250 sensor, captures three-dimensional coordinates, velocity, and acceleration of handwriting movements, which are crucial for assessing spatial control, movement consistency, speed variations, and the ability to modulate movement speed and force, parameters that are often disrupted in cognitive impairments like AD. Participants included 20 patients diagnosed with AD and 18 healthy controls, matched in age and educational level. Data collection involved tasks such as sentence rewriting, figure redrawing, and digit rewriting, processed using CoolTerm software at a sampling rate of 18 Hz. Descriptive statistics revealed that the AD group exhibited lower mean values for gyroscope and acceleration data, indicating slower and less variable movements compared to the control group. T-tests confirmed significant differences (p < 0.001) across all measured parameters between the AD and control groups. The results support the potential of the Robo-pen as a non-invasive, cost-effective diagnostic tool for early detection of AD. By capturing subtle neuromotor changes, the Robo-pen facilitates earlier diagnosis and timely intervention, potentially altering the disease trajectory and improving patient outcomes. This study marks a significant advancement in the early detection of AD, highlighting the Robo-pen’s promise as a transformative tool in neurodegenerative disease diagnosis and management.
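The group comparison reported above (t-tests on sensor statistics between 20 AD patients and 18 controls) can be illustrated with a short sketch on synthetic data. The group means and spreads below are invented solely to mimic the reported direction of the effect (AD group slower and less variable); they are not the study's values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical per-participant mean gyroscope magnitudes (synthetic values)
ad_group = rng.normal(loc=0.8, scale=0.1, size=20)        # 20 AD patients
control_group = rng.normal(loc=1.2, scale=0.15, size=18)  # 18 healthy controls

# Welch's t-test (does not assume equal variances between groups)
t_stat, p_value = stats.ttest_ind(ad_group, control_group, equal_var=False)
print(p_value < 0.001)  # a large group difference yields a very small p-value
```

With well-separated group means, the test reproduces the kind of p < 0.001 result the study reports for its measured parameters.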
... Another example is a study by Tlemsani et al. [29] that deployed a time delay neural network (TDNN) for online handwriting recognition of Arabic characters using stylus x(t) and y(t) coordinates, direction, curvature, and pen up/down states as features, achieving up to 99.61% recognition rates. To the best of our knowledge, one of the few papers that has explored alternative modalities to tablets is by Alemayoh et al. [30], in which the authors analyzed readings from inertial measurement units (IMUs) and from three force sensors embedded in a pen to recognize 36 handwritten alphanumeric characters with an accuracy of 99.05%. To the best of our knowledge, there are no studies that use real-time style detection based on the kinematics features of Arabic handwriting. ...
Article
Full-text available
Handwriting style is an important aspect affecting the quality of handwriting. Adhering to one style is crucial for languages that follow cursive orthography and possess multiple handwriting styles, such as Arabic. The majority of available studies analyze Arabic handwriting style from static documents, focusing only on pure styles. In this study, we analyze handwriting samples with mixed styles, pure styles (Ruq’ah and Naskh), and samples without a specific style from dynamic features of the stylus and hand kinematics. We propose a model for classifying handwritten samples into four classes based on adherence to style. The stylus and hand kinematics data were collected from 50 participants who were writing an Arabic text containing all 28 letters and covering most Arabic orthography. The parameter search was conducted to find the best hyperparameters for the model, the optimal sliding window length, and the overlap. The proposed model for style classification achieves an accuracy of 88%. The explainability analysis with Shapley values revealed that hand speed, pressure, and pen slant are among the top 12 important features, with other features contributing nearly equally to style classification. Finally, we explore which features are important for Arabic handwriting style detection.
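The sliding-window segmentation whose length and overlap the abstract tunes can be sketched directly; the channel count and window parameters below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def sliding_windows(signal, window_len, overlap):
    """Split a (time, channels) kinematics recording into overlapping windows.

    `window_len` is in samples; `overlap` is a fraction in [0, 1).
    """
    step = max(1, int(window_len * (1 - overlap)))
    starts = range(0, len(signal) - window_len + 1, step)
    return np.stack([signal[s:s + window_len] for s in starts])

x = np.zeros((100, 6))  # e.g. 100 samples of 6 stylus/hand kinematic channels
w = sliding_windows(x, window_len=20, overlap=0.5)
print(w.shape)  # (9, 20, 6): a new window starts every 10 samples
```

Each window would then be classified independently (here into the four style-adherence classes), which is why window length and overlap become hyperparameters worth searching over.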
... Covered Topics:
[1]-[10]: Significance of diagnosing AD and introduction to the field
[14]-[24]: Related work, overview of existing relevant review papers
[25]-[32]: Overview of machine learning techniques
[33]-[36]: Radiomics-based techniques in diagnosing AD
[2], [37]-[56]: MRI-based techniques in diagnosing AD
[57]-[71]: PET-based techniques in diagnosing AD
[72]-[87]: Use of EEG and MEG signals in detecting AD
[88]-[107]: Analysis of sensor data to diagnose AD
[108]-[120]: AI-based applications in Alzheimer's disease diagnosis
... ADNI (Alzheimer's Disease Neuroimaging Initiative) database, OASIS (Open Access Series of Imaging Studies) database, and MIRIAD (Minimal Interval Resonance Imaging in Alzheimer's Disease). ...
... Alemayoh et al. [112] developed a remote monitoring system that combines AI algorithms with handwriting data for continuous assessment of individuals with Alzheimer's disease [113][114][115]. By utilizing digital pen technology and AI-driven analysis, the system can monitor changes in handwriting patterns over time, providing real-time insights into cognitive decline and enabling timely interventions. ...
Article
Alzheimer's disease is the most common cause of dementia, gradually impairing memory as well as intellectual, learning, and organizational capacities. An individual's capacity to perform fundamental daily tasks is greatly impacted. This review examines the advancements in diagnosing Alzheimer's disease (AD) using artificial intelligence (AI) methods and machine learning (ML) algorithms. The review introduces the importance of diagnosing AD accurately and the potential benefits of using AI techniques and machine learning algorithms for this purpose. The review is based on various state-of-the-art data sources, including MRI data, PET imaging, EEG and MEG signals, and data from various sensors. State-of-the-art radiomics approaches are explored to extract a wide range of information from medical images using data-characterization algorithms. These features can reveal temporal patterns and qualities that are not visible to the human eye. A novel data source (handwriting data) is thoroughly investigated and coupled with AI algorithms for the precise and early detection of the cognitive loss associated with Alzheimer's disease. The paper discusses research directions, prospects, and future advances, as well as the proposed notion of employing a Robo-pen with an MPU-9250 sensor connected via an Arduino. Finally, the review concludes with a summary of its significant findings and their clinical implications.
... A notable method is the use of inertial sensors on a smartwatch to track motions of the arm and hand [46]. However, inertial sensors alone can have difficulty implicitly sensing the state of the hand and, thus, are usually integrated with other sensors [47,48]. Electromyography (EMG) is another wearable approach in which electrical currents in muscle cells are measured to provide some indication of the state of the hand [49,50]. ...
Preprint
Full-text available
Hand gestures play a significant role in human interactions where non-verbal intentions, thoughts and commands are conveyed. In Human-Robot Interaction (HRI), hand gestures offer a similar and efficient medium for conveying clear and rapid directives to a robotic agent. However, state-of-the-art vision-based methods for gesture recognition have been shown to be effective only up to a user-camera distance of seven meters. Such a short distance range limits practical HRI with, for example, service robots, search and rescue robots and drones. In this work, we address the Ultra-Range Gesture Recognition (URGR) problem by aiming for a recognition distance of up to 25 meters and in the context of HRI. We propose a novel deep-learning framework for URGR using solely a simple RGB camera. First, a novel super-resolution model termed HQ-Net is used to enhance the low-resolution image of the user. Then, we propose a novel URGR classifier termed Graph Vision Transformer (GViT) which takes the enhanced image as input. GViT combines the benefits of a Graph Convolutional Network (GCN) and a modified Vision Transformer (ViT). Evaluation of the proposed framework over diverse test data yields a high recognition rate of 98.1%. The framework has also exhibited superior performance compared to human recognition in ultra-range distances. With the framework, we analyze and demonstrate the performance of an autonomous quadruped robot directed by human gestures in complex ultra-range indoor and outdoor environments.
... Jangpangi et al. [20] proposed utilizing the Wasserstein distance function to improve the Adversarial Feature Deformation Module (AFDM), reducing both the overall word error rate and the character error rate. Alemayoh et al. [21] developed a compact, deep-learning-based smart digital pen capable of recognizing 36 alphanumeric characters. However, online handwritten character recognition still faces numerous challenges, including, but not limited to, easily confusable characters, the scarcity of adequate datasets, and the variety and sheer number of character classes [22]. ...
Article
Full-text available
In the field of dynamic gesture trajectory recognition, it is difficult to recognize the semantics of continuous handwritten trajectories in real time because the trajectories are hard to segment accurately. This paper focuses on semantic recognition of the handwritten trajectories of continuous numeric characters and proposes a regression-based time pyramid network for real-time recognition. First, we use corner detection algorithms to obtain the corner points of the fingers and then construct suitable convex functions to obtain the unique fingertip point. Next, we hierarchically structure the extracted fingertip trajectory features using a time pyramid and then aggregate the features after spatial semantic modulation and temporal rate modulation. Finally, following the idea of regression detection, we predict and classify the extracted trajectory features in a specialized fully connected layer with N neural nodes. According to the experimental results, our method achieved a recognition accuracy of up to 78.87% and a recognition speed of 32.69 fps. Our method achieves a good balance between recognition accuracy and recognition speed, which indicates that our approach has significant advantages for real-time recognition of continuous handwritten trajectories.
... Mishandling can harm the package, causing the object inside to break, leak, or suffer other damage. Because mishandling originates from an external force, the Inertial Measurement Unit (IMU) is a sensor that can detect such motion in an object [8]-[10]. ...
... Measurement data are collected over a period of time. Then, the sequence, values, and directions are analyzed using methods such as machine learning, deep learning, and digital signal processing [10]. The result is also affected by the sensor resolution and sampling rate [11]. ...
... Using deep learning methods for activity recognition has its advantages: raw data can be used directly as input, and the deep learning pipeline can also be fast [10], [12], [13]. On the other hand, deep learning needs more data than classical machine learning. ...
Article
Full-text available
In the distribution sector, logistic packages experience activities such as transport, distribution, storage, packaging, and handling. Even though those processes have reasonable operational procedures, packages are sometimes mishandled. The mishandling is hard to identify because many packages move simultaneously, and not all processes are monitored. An Inertial Measurement Unit (IMU) is installed inside a package to collect three-axis acceleration and rotation data. The data are then labeled manually into four classes: correct handling, vertical fall, thrown, and rotating fall. Then, using cross-validation, ten classifiers were used to generate a model to classify the logistic package status and evaluate the accuracy score. It is hard to differentiate between free-fall and thrown. The classification only uses the accelerometer data to minimize the running time. The correct handling class gives a good result because its data pattern has few variations. However, the thrown, free-fall, and rotating data give lower results because their patterns resemble each other. The average accuracy of the ten classifiers is 78.15%, with a mean deviation of 4.31%. The best classifier for this research is the Gaussian Process, with a mean accuracy of 94.4% and a deviation of 3.5%.
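A Gaussian Process classifier of the kind the abstract names as its best performer can be sketched with scikit-learn. The two-dimensional "accelerometer features" below are synthetic stand-ins invented for illustration; the real study classifies windowed accelerometer data into four classes.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
# Synthetic stand-in features: "correct handling" windows keep roughly 1 g
# of vertical acceleration, while "fall" windows approach free fall (near 0).
correct = rng.normal(loc=[0.0, 9.8], scale=0.5, size=(30, 2))
fall = rng.normal(loc=[0.0, 0.5], scale=0.5, size=(30, 2))
X = np.vstack([correct, fall])
y = np.array([0] * 30 + [1] * 30)  # 0 = correct handling, 1 = fall

clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)).fit(X, y)
score = clf.score(X, y)
print(score)  # well-separated clusters give training accuracy near 1.0
```

The GP's probabilistic outputs also make it easy to flag ambiguous windows (e.g. thrown vs. free-fall, which the study reports as hard to separate) by their low predicted class probability.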
... They rely on capturing hand and finger movements. In addition, a smart pen that exploits inertial and force sensors can record the digits [35][36][37][38]. ...
... The performance of the proposed feature generator is tested on the MNIST and USPS digit datasets. The USPS handwritten digit dataset is derived from a project scanning handwritten digits on envelopes [38]. The digits have sizes of 16 × 16 pixels, with 7291 samples for the training set and 2007 samples for the test set. ...
Article
Full-text available
In this paper, a novel feature generator framework is proposed for handwritten digit classification. The proposed framework includes a two-stage cascaded feature generator. The first stage is based on principal component analysis (PCA), which generates projected data on principal components as features. The second one is constructed by a partially trained neural network (PTNN), which uses projected data as inputs and generates hidden layer outputs as features. The features obtained from the PCA and PTNN-based feature generator are tested on the MNIST and USPS datasets designed for handwritten digit sets. Minimum distance classifier (MDC) and support vector machine (SVM) methods are exploited as classifiers for the obtained features in association with this framework. The performance evaluation results show that the proposed framework outperforms the state-of-the-art techniques and achieves accuracies of 99.9815% and 99.9863% on the MNIST and USPS datasets, respectively. The results also show that the proposed framework achieves almost perfect accuracies, even with significantly small training data sizes.
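The two-stage cascade the abstract describes (PCA projections as stage-one features, hidden-layer outputs of a partially trained neural network as stage two) can be sketched as follows. This uses scikit-learn's small digits dataset as a stand-in for MNIST/USPS, and the component count, hidden width, and truncated training budget are all assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)  # 8x8 digit images, stand-in for MNIST/USPS
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: PCA projections as features
pca = PCA(n_components=30).fit(X_tr)
Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

# Stage 2: a deliberately under-trained ("partially trained") MLP;
# its hidden-layer activations become the second-stage features
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=15).fit(Z_tr, y_tr)

def hidden_features(mlp, Z):
    # ReLU hidden-layer outputs computed from the fitted weights
    return np.maximum(0, Z @ mlp.coefs_[0] + mlp.intercepts_[0])

# An SVM classifies the cascaded features, as in the paper's setup
svm = SVC().fit(hidden_features(mlp, Z_tr), y_tr)
acc = svm.score(hidden_features(mlp, Z_te), y_te)
print(acc)
```

Even with the network stopped early, the hidden activations are a useful nonlinear re-encoding of the PCA projections, which is the intuition behind using a partially trained network as a feature generator.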
... He et al. [81] use acceleration and audio data of handwriting actions for character recognition. Furthermore, recent publications have proposed similar developments that are still only prototypes, for example, the works in [82]-[84]. Hence, there is already a lot of interest, and future technical advancements will further boost the classification performance of online HWR methods. ...
Article
Full-text available
Cross-modal representation learning learns a shared embedding between two or more modalities to improve performance in a given task compared to using only one of the modalities. Cross-modal representation learning from different data types – such as images and time-series data (e.g., audio or text data) – requires a deep metric learning loss that minimizes the distance between the modality embeddings. In this paper, we propose to use the contrastive or triplet loss, which uses positive and negative identities to create sample pairs with different labels, for cross-modal representation learning between image and time-series modalities (CMR-IS). By adapting the triplet loss for cross-modal representation learning, higher accuracy in the main (time-series classification) task can be achieved by exploiting additional information of the auxiliary (image classification) task. We present a triplet loss with a dynamic margin for single label and sequence-to-sequence classification tasks. We perform extensive evaluations on synthetic image and time-series data, and on data for offline handwriting recognition (HWR) and on online HWR from sensor-enhanced pens for classifying written words. Our experiments show an improved classification accuracy, faster convergence, and better generalizability due to an improved cross-modal representation. Furthermore, the more suitable generalizability leads to a better adaptability between writers for online HWR.
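The basic fixed-margin triplet loss underlying the paper's dynamic-margin variant can be written directly; this sketch is the standard formulation, not the paper's adapted cross-modal version, and the toy embeddings are invented for illustration.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss: pull the positive toward the anchor and push the negative
    at least `margin` farther away. All inputs are (batch, dim) embeddings."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)  # squared anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2, axis=1)  # squared anchor-negative distance
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

a = np.array([[0.0, 0.0]])  # anchor embedding
p = np.array([[0.1, 0.0]])  # same label, close by
n = np.array([[3.0, 0.0]])  # different label, far away
print(triplet_loss(a, p, n))  # 0.0: the negative is already margin-far
```

In the cross-modal setting described above, the anchor and positive/negative would come from different modalities (e.g. an image embedding versus time-series embeddings), and the margin would vary dynamically rather than staying fixed.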
... The objective of this study was to develop an intelligent sensor-based ultrasonic-assisted inner diameter saw cutting force system. With the development of sensors, temperature sensors [24], image sensors [25], and force sensors [26] have appeared more and more frequently, allowing performance and quality improvements in industrial applications. A six-axis force sensor was integrated for force measuring. ...
Article
Full-text available
Ultrasonic-assisted inner diameter machining is a slicing method for hard and brittle materials. During this process, the sawing force is the main factor affecting the workpiece surface quality and tool life. Therefore, based on indentation fracture mechanics, a theoretical model of the cutting force of an ultrasound-assisted inner diameter saw is established in this paper for surface quality improvement. The cutting experiment was carried out with alumina ceramics (99%) as an exemplar of hard and brittle material. A six-axis force sensor was used to measure the sawing force in the experiment. The correctness of the theoretical model was verified by comparing the theoretical modeling with the actual cutting force, and the influence of machining parameters on the normal sawing force was evaluated. The experimental results showed that the ultrasonic-assisted cutting force model based on the six-axis force sensor proposed in this paper was more accurate. Compared with the regular tetrahedral abrasive model, the mean value and variance of the proposed model's force prediction error were reduced by 5.08% and 2.56%. Furthermore, by using the proposed model, the sawing processing parameters could be updated to improve the slice surface quality from a roughness Sa value of 1.534 µm to 1.129 µm. The proposed model provides guidance for the selection of process parameters and can improve processing efficiency and quality in subsequent real-world production.