Article

Personal Identification Through Pedestrians’ Behavior


Abstract

This article focuses on a new approach to personal identification that exploits features of pedestrian behavior. Recent progress in motion-capture sensor systems enables personal identification from human behavioral data observed by such sensors. Kinect is a motion-sensing input device developed by Microsoft for the Xbox 360 and Xbox One. This study presents personal identification using the Microsoft Kinect sensor (hereafter referred to as Kinect). Kinect is used to estimate the body sizes and walking behaviors of pedestrians. Body sizes such as height and width, and walking behaviors such as joint angles and stride lengths, are used as explanatory variables for personal identification. An algorithm for the personal identification of pedestrians is built with a traditional neural network and with a support vector machine. In the numerical experiments, the body sizes and walking behaviors of fifteen examinees were captured through Kinect. The walking direction of the pedestrians was set to 0°, 90°, 180°, and 225°, and the resulting accuracies were compared. The results indicate that identification accuracy was best when the walking direction was 180°, and that the accuracy of the support vector machine was better than that of the neural network.
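As a rough illustration of the comparison described in the abstract, the sketch below trains a support vector machine and a neural network on feature vectors of the kind mentioned there (height, width, joint angles, stride length). The feature layout, synthetic data, and scikit-learn models are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: compares an SVM and a neural network on
# Kinect-style pedestrian features. The data is synthetic; the paper's
# actual features, preprocessing, and model settings are not reproduced.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, samples_per_subject, n_features = 15, 40, 8  # 15 examinees, as in the paper

# Synthetic stand-in: each subject gets a characteristic mean feature vector
# (e.g. height, width, joint angles, stride length).
X = np.vstack([rng.normal(loc=rng.uniform(-1, 1, n_features), scale=0.3,
                          size=(samples_per_subject, n_features))
               for _ in range(n_subjects)])
y = np.repeat(np.arange(n_subjects), samples_per_subject)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Neural network": make_pipeline(StandardScaler(),
                                    MLPClassifier(hidden_layer_sizes=(32,),
                                                  max_iter=2000, random_state=0)),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "test accuracy:", model.score(X_test, y_test))
```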


... Telephone conversations are considered one of the most pressing privacy and security concerns because they involve users' personal information, such as user identification [1], financial information [2], and passwords. To address these challenges, in this paper we propose Vibphone, a new side-channel attack that exploits a built-in zero-permission accelerometer to eavesdrop on telephone conversations, as illustrated in Fig. 1. We have validated that smartphone accelerometers are sensitive to SIV signals and that device diversity has a decisive impact on the performance of Vibphone (see Section 3). ...
... To demonstrate the effectiveness of the above designs, we build a prototype of Vibphone with six off-the-shelf smartphone models: the Samsung S8, Samsung S9, Samsung S10+, Xiaomi 9, OPPO R17, and Huawei Nova3. We conduct extensive IRB-approved real-world experiments in which we act as the adversary and attack victims' call conversations across different smartphone models and usage scenarios. The results show that Vibphone's success rate is 92.3% on known devices and 81.4% on unseen devices. ...
Article
Full-text available
Motion sensors in modern smartphones have been exploited for audio eavesdropping in loudspeaker mode due to their sensitivity to vibrations. In this paper, we move one step further and explore the feasibility of using the built-in accelerometer to eavesdrop on the telephone conversation of a caller/callee who holds the phone against the cheek and ear, and we design our attack, Vibphone. The inspiration behind Vibphone is that speech-induced vibrations (SIV) can be transmitted through the physical phone-cheek contact to the accelerometer, carrying traces of the voice content. To this end, Vibphone faces three main challenges: i) accurately detecting SIV signals amid miscellaneous disturbances; ii) combating the impact of device diversity so the attack works in a variety of scenarios; and iii) making the recognition model feature-agnostic so that it generalizes to newly issued devices and reduces training overhead. To address these challenges, we first conduct an in-depth investigation of SIV features to identify the root cause of device-diversity effects and isolate a set of critical features that are highly relevant to the voice content retained in SIV signals and independent of specific devices. Building on these pivotal observations, we propose a combined method that integrates the extracted critical features with a deep neural network to recognize speech information from the spectrogram representation of acceleration signals. We implement the attack using commodity smartphones, and the results show it is highly effective. Our work brings to light a fundamental design vulnerability in the vast majority of currently deployed smartphones, which may put people's speech privacy at risk during phone calls. We also propose a practical and effective defense solution, and we validate that audio eavesdropping can be prevented by randomly varying the sampling rate.
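A hedged sketch of one step the abstract names: turning an acceleration signal into a spectrogram representation for a recognition model. The signal, sampling rate, and window parameters below are assumptions for illustration, not values from the paper.

```python
# Generic log-power spectrogram of one accelerometer axis, of the kind a
# recognition model could consume. Signal and parameters are synthetic.
import numpy as np
from scipy.signal import spectrogram

fs = 500                                  # assumed accelerometer sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
accel_z = 0.01 * np.sin(2 * np.pi * 120 * t) + 0.002 * np.random.randn(t.size)

freqs, times, Sxx = spectrogram(accel_z, fs=fs, nperseg=128, noverlap=64)
log_spec = 10 * np.log10(Sxx + 1e-12)     # small epsilon avoids log(0)
print(log_spec.shape)                     # (frequency bins, time frames)
```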
... The RBF kernel is extensively utilized for the classification of anomalous gait behavior due to its better accuracy [32]-[34]. The function can be expressed as in (22), ...
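For reference, since the snippet's equation (22) is not reproduced here, the RBF kernel it refers to is standardly written as

```latex
K(\mathbf{x}_i, \mathbf{x}_j) = \exp\!\left(-\gamma \,\lVert \mathbf{x}_i - \mathbf{x}_j \rVert^{2}\right), \qquad \gamma > 0,
```

where γ controls the kernel width; the cited paper's own notation and numbering may differ.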
Article
Full-text available
The crime rate is worsening and has led to a growing number of studies on human identification, namely gait recognition. Hence, this study focused on normal and anomalous behavior at the gates of residential units based on gait features extracted using a Kinect sensor. First, a dataset of housebreaking crime behavior and normal behavior at the gate was acquired and collected. Orthogonal least squares (OLS) was then utilized to extract and select the gait features, with principal component analysis (PCA) used for gait-feature optimization. Next, the gait features were classified using an artificial neural network (ANN) and a support vector machine (SVM). The results showed that recognition performance with the ANN classifier reached up to 99%, but only 50% with the SVM classifier. The findings show that the optimum accuracy rate, 99.78%, was obtained using an ANN with the GDX learning algorithm to classify both normal and anomalous behavior at residential gate units.
Article
Personal identification is the task of authenticating a person using individual biological features. Deep neural networks (DNNs) have demonstrated impressive performance in this field. Since no general algorithm is available for designing the network structures and parameters of DNNs for every application problem, DNNs must be designed according to the programmer's experience and know-how. For a new application task, it is very time-consuming for non-experts to design the network structure, hyperparameters, and an ensemble of base models adequately and effectively. In this paper, we present a genetic algorithm (GA)-based approach to construct network structures, tune their hyperparameters, and generate base models for the ensemble algorithm. The ensemble is constructed from base models with different network structures according to the voting ensemble algorithm. Our original personal identification dataset is employed as a numerical example to illustrate the performance of the proposed method. The results show that the prediction accuracy of the ensemble model is better than that of the base models, and that predicting walking behavior toward the Kinect at 90 degrees and 225 degrees is more difficult than for other walking behaviors.
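A minimal sketch of the voting-ensemble idea described above, assuming scikit-learn's VotingClassifier over base neural networks with different structures. The GA that searches structures and hyperparameters in the paper is replaced here by a fixed candidate list, and the dataset is synthetic.

```python
# Sketch: majority-vote ensemble of differently structured MLP base models.
# In the paper a genetic algorithm chooses the structures; here they are fixed.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base_models = [
    (f"mlp{i}", MLPClassifier(hidden_layer_sizes=h, max_iter=2000, random_state=i))
    for i, h in enumerate([(16,), (32, 16), (64,)])
]
ensemble = VotingClassifier(estimators=base_models, voting="hard")
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```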
Article
This article proposes a new approach to personal authentication by exploring the features of a person's face and voice. Microsoft's Kinect sensor is used for facial and voice recognition. Parts of the face, including the eyes, nose, and mouth, are analyzed as position vectors. For voice recognition, a Kinect microphone array is adopted to record personal voices. Mel-frequency cepstrum coefficients, logarithmic power, and related values involved in the analysis of personal voice are also estimated from the recordings. Neural networks, support vector machines, and principal component analysis are employed and compared for personal authentication. To achieve accurate results, 20 examinees were selected, and their face and voice data were used for training the authentication models. The experimental results show that the best accuracy is achieved when the model is trained by a support vector machine using both facial and voice features.
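As a hedged illustration of the voice-feature step mentioned above, the snippet below extracts mel-frequency cepstrum coefficients and a logarithmic power value from a recording with librosa. The file name, sampling rate, and coefficient count are assumptions; the paper's Kinect microphone-array pipeline is not reproduced.

```python
# Sketch: MFCC and log-power features from a voice recording.
# "voice_sample.wav" is a placeholder file name, not from the paper.
import numpy as np
import librosa

y, sr = librosa.load("voice_sample.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape: (13, frames)
log_power = np.log(np.sum(y ** 2) + 1e-12)           # crude utterance log power
features = np.concatenate([mfcc.mean(axis=1), [log_power]])
print(features.shape)                                # one 14-dimensional vector
```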
Article
Principal Components Analysis (PCA) as a method of multivariate statistics was created before the Second World War. However, the wider application of this method only occurred in the 1960s, during the "Quantitative Revolution" in the Natural and Social Sciences. The main reason for this time-lag was the huge difficulty posed by calculations involving this method. Only with the advent and development of computers did the almost unlimited application of multivariate statistical methods, including principal components, become possible. At the same time, requirements arose for precise numerical methods concerning, among other things, the calculation of eigenvalues and eigenvectors, because the application of principal components to technical problems required absolute accuracy. On the other hand, numerous applications in the Social Sciences gave rise to a significant increase in the ability to interpret these non-observable variables, which is just what the principal components are. In the application of principal components, the problem is not only to do with their formal properties but, above all, their empirical origins. The authors considered these two tendencies during the creation of the program for principal components. This program, entitled PCA, accompanies this paper. It analyzes, consecutively, matrices of variance-covariance and correlations, and performs the following functions:
- the determination of eigenvalues and eigenvectors of these matrices,
- the testing of principal components,
- the calculation of coefficients of determination between selected components and the initial variables, and the testing of these coefficients,
- the determination of the share of variation of all the initial variables in the variation of particular components,
- the construction of a dendrite for the initial set of variables,
- the construction of a dendrite for a selected pattern of the principal components,
- the scatter of the objects studied in a selected coordinate system.
Thus, the PCA program performs many more functions, especially in testing and graphics, than the PCA programs in conventional statistical packages. Included in this paper are a theoretical description of principal components, the basic rules for their interpretation, and also statistical testing.
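A small sketch of the core computation such a PCA program performs, namely the eigenvalues and eigenvectors of a variance-covariance matrix, using NumPy on synthetic data:

```python
# Eigen-decomposition of a covariance matrix: the core of classical PCA.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))            # 100 observations, 4 variables
Xc = X - X.mean(axis=0)                  # center each variable
cov = np.cov(Xc, rowvar=False)           # 4x4 variance-covariance matrix

eigvals, eigvecs = np.linalg.eigh(cov)   # eigh is meant for symmetric matrices
order = np.argsort(eigvals)[::-1]        # sort by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()      # share of total variance per component
scores = Xc @ eigvecs                    # principal-component scores
print(explained)
```

Running the same decomposition on the correlation matrix (np.corrcoef) corresponds to the program's other analysis mode mentioned above.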
Article
Principal component analysis (PCA) is a multivariate technique that analyzes a data table in which observations are described by several inter-correlated quantitative dependent variables. Its goal is to extract the important information from the table, to represent it as a set of new orthogonal variables called principal components, and to display the pattern of similarity of the observations and of the variables as points in maps. The quality of the PCA model can be evaluated using cross-validation techniques such as the bootstrap and the jackknife. PCA can be generalized as correspondence analysis (CA) in order to handle qualitative variables and as multiple factor analysis (MFA) in order to handle heterogeneous sets of variables. Mathematically, PCA depends upon the eigen-decomposition of positive semi-definite matrices and upon the singular value decomposition (SVD) of rectangular matrices. Copyright © 2010 John Wiley & Sons, Inc.
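The eigen-decomposition and SVD connection mentioned in this abstract can be stated compactly. For a column-centered data matrix X,

```latex
X = U \Sigma V^{\mathsf{T}}, \qquad
X^{\mathsf{T}} X = V \Sigma^{2} V^{\mathsf{T}}, \qquad
F = X V = U \Sigma,
```

so the right singular vectors V are the principal axes (the eigenvectors of the covariance matrix, up to scaling) and F holds the component scores.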
Article
This paper reports the first phase of a research program on visual perception of the motion patterns characteristic of living organisms in locomotion. Such motion patterns in animals and men are termed here biological motion. They are characterized by a far higher degree of complexity than the patterns of simple mechanical motions usually studied in our laboratories. In everyday perception, the visual information from biological motion and from the corresponding figurative contour patterns (the shape of the body) is intermingled. A method for studying information from the motion pattern per se, without interference from the form aspect, was devised. In short, the motion of the living body was represented by a few bright spots describing the motions of the main joints. It is found that 10–12 such elements in adequate motion combinations in proximal stimulus evoke a compelling impression of human walking, running, dancing, etc. The kinetic-geometric model for visual vector analysis, originally developed in the study of perception of motion combinations of the mechanical type, was applied to these biological motion patterns. The validity of this model in the present context was experimentally tested, and the results turned out to be highly positive.
Conference Paper
This paper proposes a biometric personal identification method based on changes in the pressure distributions of the right and left soles. We acquire the sole pressure distribution changes with a load distribution sensor and use them for personal identification. We employ twelve features based on the shape of the footprint and twenty-seven features based on the movement of weight during walking for each sole pressure record. From these features we construct fuzzy if-then rules. We calculate a fuzzy degree for a pair of right and left sole pressure records for one person and identify the person by this fuzzy degree. We evaluated our method by five-fold cross validation. Low false rejection and acceptance rates were obtained for groups of 20 to 90 persons.
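A hedged sketch of the fuzzy matching idea: each feature gets a triangular membership function around a person's enrolled value, and the per-person fuzzy degree is the minimum membership across features. The membership shape, aggregation, and numbers are illustrative assumptions, not the paper's exact rules.

```python
# Sketch: fuzzy degree of a walking sample against enrolled templates.
# Triangular memberships and min-aggregation are assumptions for illustration.
import numpy as np

def triangular(x, center, width):
    """Membership in [0, 1], peaking at `center`, zero beyond `width`."""
    return np.maximum(0.0, 1.0 - np.abs(x - center) / width)

def fuzzy_degree(sample, template, widths):
    """Aggregate per-feature memberships with min (fuzzy AND)."""
    return float(np.min(triangular(sample, template, widths)))

templates = {"person_A": np.array([0.62, 1.10, 0.33]),
             "person_B": np.array([0.71, 0.95, 0.41])}
widths = np.array([0.10, 0.20, 0.08])    # tolerance per feature
sample = np.array([0.64, 1.05, 0.35])

degrees = {p: fuzzy_degree(sample, t, widths) for p, t in templates.items()}
print(max(degrees, key=degrees.get), degrees)
```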
Article
From the publisher: This is the first comprehensive introduction to Support Vector Machines (SVMs), a new generation learning system based on recent advances in statistical learning theory. SVMs deliver state-of-the-art performance in real-world applications such as text categorisation, hand-written character recognition, image classification, biosequences analysis, etc., and are now established as one of the standard tools for machine learning and data mining. Students will find the book both stimulating and accessible, while practitioners will be guided smoothly through the material required for a good grasp of the theory and its applications. The concepts are introduced gradually in accessible and self-contained stages, while the presentation is rigorous and thorough. Pointers to relevant literature and web sites containing software ensure that it forms an ideal starting point for further study. Equally, the book and its associated web site will guide practitioners to updated literature, new applications, and on-line software.
Conference Paper
This paper proposes an automatic gait recognition approach for analyzing and classifying human gait using computer vision techniques. The approach attempts to incorporate knowledge of the statics and dynamics of human gait into the feature extraction process. The width vectors of the binarized silhouette of a walking person, which capture the physical structure of the person, the motion of the limbs, and other details of the body, are chosen as the basic gait feature. Unlike model-based approaches, the limb angle information is extracted by analyzing the variation of the silhouette width without needing a human body model. Discrete cosine analysis is used to analyze the shape and dynamic characteristics and to reduce the gait features. The paper then uses multi-class support vector machines to distinguish the gaits of different people. The performance of the proposed method is tested on several gait databases, and the recognition results show the approach is effective.
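A minimal sketch of the feature step described above, assuming a binary silhouette is already available: the width vector counts foreground pixels per row, and a discrete cosine transform compresses it. The silhouette, image size, and number of retained coefficients are assumptions.

```python
# Sketch: silhouette width vector compressed with a DCT, as a gait feature.
import numpy as np
from scipy.fft import dct

# Synthetic binary silhouette (rows x cols); a real one would come from
# background subtraction and thresholding of a video frame.
silhouette = np.zeros((64, 32), dtype=np.uint8)
silhouette[8:60, 10:22] = 1

width_vector = silhouette.sum(axis=1)                          # foreground pixels per row
feature = dct(width_vector.astype(float), norm="ortho")[:16]   # keep low frequencies
print(feature.shape)                                           # (16,)
```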
Conference Paper
Our goal is to establish a simple baseline method for human identification based on body shape and gait. This baseline recognition method provides a lower bound against which to evaluate more complicated procedures. We present a viewpoint-dependent technique based on template matching of body silhouettes. Cyclic gait analysis is performed to extract key frames from a test sequence. These frames are compared to training frames using normalized correlation, and subject classification is performed by nearest-neighbor matching among correlation scores. The approach implicitly captures biometric shape cues such as body height, width, and body-part proportions, as well as gait cues such as stride length and amount of arm swing. We evaluate the method on four databases with varying viewing angles, background conditions (indoors and outdoors), walking styles and pixels on target.
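A hedged sketch of the baseline's matching step: normalized correlation between silhouette key frames, followed by nearest-neighbor classification over the scores. The frames here are random stand-ins; the gait-cycle key-frame extraction is not reproduced.

```python
# Sketch: normalized correlation between silhouette frames and
# nearest-neighbor matching over the correlation scores.
import numpy as np

def normalized_correlation(a, b):
    """Zero-mean, unit-norm correlation of two equally sized frames."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(2)
train_frames = {f"subject_{i}": rng.random((64, 32)) for i in range(3)}
test_frame = train_frames["subject_1"] + 0.05 * rng.random((64, 32))

scores = {s: normalized_correlation(test_frame, f) for s, f in train_frames.items()}
print("best match:", max(scores, key=scores.get))  # nearest neighbor by score
```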
Article
Introduction to information theory [1]: The possibility of reliable communication over unreliable channels. The (7,4) Hamming code and repetition codes.
Entropy and data compression [3]: Entropy, conditional entropy, mutual information, Shannon information content. The idea of typicality and the use of typical sets for source coding. Shannon's source coding theorem. Codes for data compression. Uniquely decodeable codes and the Kraft-McMillan inequality. Completeness of a symbol code. Prefix codes. Huffman codes. Arithmetic coding.
Communication over noisy channels [3]: Definition of channel capacity. Capacity of the binary symmetric channel; of the binary erasure channel; of the Z channel. Joint typicality, random codes, and Shannon's noisy-channel coding theorem. Real channels and practical error-correcting codes. Hash codes.
Statistical inference, data modelling and pattern recognition [2]: The likelihood function and Bayes' theorem.
Approximation of probability distributions [2]: Laplace's method
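As a small illustration of the entropy definition at the start of this outline, a sketch computing H(X) = -Σ p(x) log2 p(x) for a discrete distribution:

```python
# Shannon entropy of a discrete probability distribution, in bits.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                  # the 0 * log 0 terms are taken as 0
    return float(-np.sum(p * np.log2(p)))

print(entropy([0.5, 0.5]))        # 1.0 bit: a fair coin
print(entropy([0.9, 0.1]))        # ~0.47 bits: a biased coin is more predictable
```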