IET Cyber-Physical Systems: Theory & Applications
Research Article
Low-powered wearable motion detecting
system using static electric fields
ISSN 2398-3396
Received on 2nd July 2018
Revised 29th June 2019
Accepted on 24th July 2019
E-First on 9th September 2019
doi: 10.1049/iet-cps.2018.5034
www.ietdl.org
Shane Lambert1, Haitao Lu1, Zane Shreve1, Yi Zhan1, A.K.M. Jahangir Alam Majumder2 , Gokhan
Sahin1
1Department of Electrical and Computer Engineering, Miami University, OH, USA
2Division of Mathematics and Computer Science, University of South Carolina Upstate, SC, USA
E-mail: majumder@mailbox.sc.edu
Abstract: Recently, the commercial market has seen an increase in the availability of smart wearable Internet of things (IoT) devices (wearables), including items such as smart shoes, smart watches, wrist bands, and pendants. Many of these devices are part of human-in-the-loop cyber-physical systems. In this research, the authors have designed and developed an embedded sensory IoT system with a low-power Bluetooth communication module to collect body single node voltage using a smartphone. Their approach for sensing the user's movement builds on work in electric field sensing. Experimentation and verification have been conducted on a group of test subjects with different test scenarios including remaining at rest, walking, jumping, running, hand waving, eating, and bending over. They designed and developed their sensor to detect body motion data, and then used their algorithm to analyse the collected data. This study introduces the use of signal processing techniques for sensor data analytics to detect human body motion. The system can detect activity with a high degree of accuracy (~87%).
1 Introduction
In recent years, the development of the ‘Internet of things’ (IoT)
has moved rapidly, and several kinds of wearable IoT (wIoT) have
appeared in the market such as: Apple™ watch and Google™
glass. Among the desired functions of wIoT, motion detection is one of the most important. Analysis of motion data can be used for a wide range of applications across many fields. Having this data accessible remotely and wirelessly is another major benefit of this type of technology, and takes advantage of the IoT framework.
Beyond these activities, though, body motion data is also useful in
remote supervision applications. These could include supervision
of elderly residents at nursing homes, where caregiver employees
are not capable of physically monitoring all residents at all times.
Remote access to body motion allows these caregivers to form a
picture of a resident's daily activities, and the caregiver can then
make decisions based on a comparison of the activities detected
and the activities expected. In a similar application, motion data
can be used to alert a remote caregiver of a resident who has fallen
or is at risk of fall. Furthermore, motion data can also allow a user
of the technology to know when they are showing a tendency
toward falling, thus preventing the fall in the first place.
Our project comprises a wearable sensor system that makes
use of the IoT framework and connects to a user's smartphone to
capture body motion. This body motion data is collected using
static electric fields (EFs) through a capacitive coupling on the
device. The system uses the data gathered from the fields to
accurately detect and determine different types of motions. The
system aims to operate on ultra-low power (LP), in the range of
nanowatts. In [1], Cohn et al. discuss a low-powered sensor that
operates via the user's static EF. This system achieved detection of
simple and complex body motions and provided a non-intrusive
wearable sensor. The system did not, however, include a low-
powered, fast, wireless form of data transfer such as Bluetooth.
Owing to this difference, our system has an advantage in its ability
to utilise wireless data transfer for remote access to user data. Also,
the system developed in [1] includes a wake-up system that
increases power consumption.
Our approach to collect body motion data utilises static EFs.
They are very useful since they do not change in magnitude over
distance or time from the original value. This is helpful as it allows
us to measure the change in the field as the user moves through it
to provide accurate feedback. It is also beneficial in our case as it
allows us to recognise smaller movements that the user is
performing as the fields will be moving with the user. This type of
body motion data can be used for a variety of purposes related to
tracking human movements including sleep and posture analyses,
health and fitness tracking, remote supervision of elderly or
disabled persons, and fall detection and prevention. For this work,
the human movement will be categorised into separate motions,
and a decision about which activity is being performed will be
wirelessly transmitted to a smartphone application for remote
monitoring and real-time data collection.
The objective of this paper is to present a multisensory system
for human body motion sensing using body area sensors (BASs)
and static EFs. For this paper, we implemented an embedded
sensor system with an LP Bluetooth communication module to
collect body single node voltage using a smartphone. Our proposed
system will be useful for monitoring the elderly and has utility in
identifying simple activities among child physical rehabilitation
patients, and for human behaviour analysis research.
1.1 Major contributions
In this paper, we propose to use smartphones as the primary
platform for developing an embedded system for detecting daily
activities using body motion; these devices naturally combine the
detection and communication components. Our major contributions
are as follows:
• developed a multisensory embedded IoT system for daily activity detection;
• proposed a smartphone-based motion detection system using a static EF with a wearable BAS;
• designed, developed, and implemented a self-assistance system which analyses upper and lower body motions using body single node voltage; and
• implemented a mobile system for remote supervision of users, which can be used to differentiate current activities from normal, expected activities.
The rest of this paper is organised as follows: in Sections 2 and 3,
we describe the background and relevant related work. In Section
4, we discuss the process of designing our system. In this section,
we also explain the circuit design of our system. In Section 5, we
discuss the data collection process. Sections 6 and 7 are the results
and evaluation of our smartphone-based prototype system. In
Section 8, we conclude this paper with possible directions for
future research.
2 Background and motivation
Wearable motion sensors have been used in high-impact
applications such as simple and complex activity recognition,
health and wellness sensing, and elderly care. It is hard to realise
the effect of an injury due to abnormal body motion if someone
does not have experience with this kind of injury. It can take only
one bad incident to severely incapacitate or even kill an individual.
wIoT solutions can help in the diagnosis and analysis of users with
neuromotor disorders.
In this work, we investigated static EF sensing and decided to
build a circuit that utilises static EFs to detect body motion. The
advantages of differential static field sensors over accelerometers are LP consumption and fast response to motion. Better sensitivity
is also achieved due to the low noise level of capacitive detection.
Using this self-built sensor, we could collect preliminary data to
analyse the differences between separate simple motions. The
scope of applications for motion detection is wide, and the benefits
garnered from this type of data are important and impactful. The
applications described above represent a large societal benefit in
terms of quality of life and personal health and safety. The impact of this technology also extends to those who use it remotely, as it decreases their menial workload and allows them to use their time more efficiently and effectively.
In [1], Cohn et al. introduced an ultra-low-power (LP) approach
to passively sense body motion based on static EFs. The total
power consumption for their system was 6.6 μW. Besides the LP
consumption, their system passively relies on the existing static EF
between the human body and the environment, as shown in Fig. 1,
and measures the voltage difference between the human body and
the environment through capacitive coupling via a capacitor (CS)
connected between the body and the local ground.
The sensed voltage ($V_S$) is the difference between the voltage on the body ($V_B$) and that of the local ground plane ($V_R$):
\[ V_S = V_B - V_R = \frac{Q_B}{C_B} - \frac{Q_R}{C_R} \]
The corresponding circuit diagram is shown in Fig. 2.
For the hardware, the system included a gain stage, buffer, low-
pass filter (LPF), and a wake-up portion. Fig. 3 shows the block
diagram of the hardware of [1], which was implemented using
ultra-LP, off-the-shelf components. Also, to filter out the high
amplitude 60 Hz signal on the body, they applied a third-order
Butterworth LPF with a corner frequency of 10 Hz using an active
filter.
This system [1] served as great inspiration for portions of the
work done in our system. For further development of the system,
there were four challenges faced. The first challenge was how to
build an integrated cyber-human system using a microcontroller.
Cyber-human systems advance scientific understanding of
computing and communications system together with a theoretical
and practical understanding of behavioural, social, and design
sciences to better design and develop diverse kinds of systems.
Also, cyber-human systems seek to improve our fundamental
understanding of how, and the processes by which, interactive
systems should be designed to achieve human–computer symbiosis
and computer-mediated human communication, collaboration, and
competition [2]. We made a general assumption that the end user
had access to a smartphone to receive and process data. The second
challenge was how to develop a low-powered sensor using discrete
components. Our approach was to use a passive LPF, a pull-down resistor, and a capacitor to lower power consumption. Initial attempts also integrated an op-amp to amplify the body's signal [3]. After testing, it was decided that the op-amp would not be used when monitoring simple body motions, which cut the power consumption of our designed system in half. The third challenge
was how to collect and transmit data wirelessly in real time. The
solution was to use a Bluetooth module to send data from the
Arduino microcontroller to the user's smartphone. Several other options were possible, such as wireless fidelity (Wi-Fi) communication. A similar commercial product, the Fitbit Flex,
uses a combination of Bluetooth communication between the
device and the phone and adaptive network topology (ANT)
wireless networking to sync with compatible health applications
[4]. However, considering power consumption, Bluetooth was
found to be a better approach to save power, and best fit our
system's constraints. The final challenge was to determine complex
motions with accuracy. As for the solution, we decided to approach
it with both hardware and software components. For the hardware
part, an op-amp was introduced to our system to amplify the signal
to ensure the output signal is large enough for detection [3]. For the software part, we designed and developed an algorithm to accurately detect motions.
3 Related work
Human body motion detection using static EFs has been the subject
of many studies over the past decade. Most of the previous
approaches regarding motion detection utilise accelerometers
attached to the subject for gathering data. Existing dynamic models
are non-subject specific because the dynamic parameters are used in general models; therefore, they have very limited accuracy in predicting body motion for a specific individual.
Fig. 1 Capacitive coupling indicated by field lines [1]
Fig. 2 Circuit model of the sensing technique [1]
Fig. 3 Block diagram of hardware for motion sensing [1]
Most of the motion sensors such as [5] are inertial measurement
units (IMUs). Usually, an IMU sensor is a combination of an
accelerometer, a gyroscope, and a magnetometer. Compared to our system, an IMU consumes more power and can only measure the movement of a single node. In our system, which utilises static EFs, although the device is attached to a single node on the human body, the movement of the whole body is measured.
In [6], Trost et al. used a wrist-worn sensor and a sensor on the
hip to detect seven physical activities. They showed the potential of
using the wrist position for activity recognition using logistic
regression as a classifier. However, they assessed these two
positions separately and did not combine these two sensors.
Similarly, a wrist-worn accelerometer was used in [7] to recognise
eight activities including the activity of working on a computer. In
[8], Ramos-Garcia and Hoover detect the act of eating using a
hidden Markov model with a wrist-worn accelerometer and a
gyroscope. They recognise eating by dividing that activity into sub-
activities: resting, eating, drinking, using utensils, and others. The
authors report an accuracy of 84.3%. A feasibility study on smoking detection using a wrist-worn accelerometer was conducted in [9], where Scholl and Van reported a user-specific accuracy of 70% for this activity; that work uses only an accelerometer at the wrist position. Parate et al. [10] use an accelerometer, a gyroscope, and a magnetometer at the wrist position to recognise smoking puffs, though smoking is only differentiated from all other activities.
To address the drawbacks of the aforementioned research, we
propose a smartphone-based body motion detection system. Our
system is designed to directly address some of the drawbacks of the
existing systems and yield good activity prediction results. We
illustrate the difference between our system and the other related
works in Table 1.
4 Circuits and system
The strength of our proposed system relies on existing wireless
communication to provide an LP solution with maximum freedom
of movement to users in their physical activity. In addition, we have used small, lightweight, user-friendly devices such as the smartphone and the wrist band. The progression of our system can
be described in two stages: an early prototype using a commercial
sensor and the final design using our sensor.
4.1 Early prototype using a commercial sensor
For our initial prototype, we utilised a commercial sensor, which
included an accelerometer, a gyroscope, a magnetometer, and an
embedded algorithm to detect and determine the motion. The
circuit was composed of two parts. The first part was an Arduino microcontroller connected to a BNO055 sensor, as shown in Fig. 4. The BNO055 sent data using the I2C communication protocol, meaning only two connections from the Arduino (a clock line and a data line) were necessary.
The second part of the architecture was the Arduino connecting
to the HC-05 [17] Bluetooth module, as shown in Fig. 5. The
Bluetooth module communicated with the Arduino using serial
data transfer, again using only two microcontroller outputs.
The description of the proposed system architecture shown in
Fig. 6 is as follows: a single hand wearing the glove with integrated
BNO055 detects body motion. The BNO055 collects the position
data of the hand.
The Arduino calculates the first derivative of the position data
from the BNO055 sensor. Next, the HC-05 module transmits the
data from the Arduino and sends it to an app on the phone via
Bluetooth, where the app displays the graphs of body motion data
and an activity decision. The initial prototype using a commercial
sensor is shown in Fig. 7.
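For concreteness, a minimal Arduino sketch for this prototype stage might look like the following. It assumes the Adafruit BNO055 library, wires the HC-05 to the hardware serial port, forwards the raw axis values rather than the first derivative described above, and uses an assumed baud rate and sample interval; none of these details are taken from the paper.
```cpp
#include <Wire.h>
#include <Adafruit_Sensor.h>
#include <Adafruit_BNO055.h>

// BNO055 on the I2C bus (clock + data lines), HC-05 on hardware Serial (assumed wiring).
Adafruit_BNO055 bno = Adafruit_BNO055(55, 0x28);

void setup() {
  Serial.begin(9600);        // HC-05 default baud rate (assumed)
  if (!bno.begin()) {
    while (true) {}          // halt if the sensor is not detected
  }
}

void loop() {
  // Read accelerometer and gyroscope vectors over I2C.
  imu::Vector<3> acc = bno.getVector(Adafruit_BNO055::VECTOR_ACCELEROMETER);
  imu::Vector<3> gyr = bno.getVector(Adafruit_BNO055::VECTOR_GYROSCOPE);

  // Forward the raw axis values to the phone, '#'-delimited per sample.
  Serial.print(acc.x()); Serial.print(',');
  Serial.print(acc.y()); Serial.print(',');
  Serial.print(acc.z()); Serial.print(',');
  Serial.print(gyr.x()); Serial.print(',');
  Serial.print(gyr.y()); Serial.print(',');
  Serial.print(gyr.z()); Serial.print('#');

  delay(100);                // ~10 Hz update rate (assumed)
}
```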
4.2 Final design using our developed sensor
For our final design, as shown in Fig. 8, we focused on building
our sensor to integrate into the system. Our sensor uses static EFs and measures the voltage difference between the human body and the environment.
Table 1 Comparison of existing work based on different features
Approach Use the IoT system Has com. capability Use body node voltage Use embedded sensor LP system
Cohn [1] no no yes yes yes
Konrad [5] yes yes no no no
Ming [11] yes yes yes yes no
Bennett [4] yes no no no no
WIHa [12] no yes no yes no
Edward [13] no no no yes no
Shyamal [14] yes yes no yes no
Maurizio [15] no no no yes no
MSFb [16] no no no yes no
Trost [6] no no no yes no
Silva [7] no no no yes no
Ramos [8] no no no yes yes
Scholl [9] no no no yes no
Parate [10] yes no no yes no
Our approach yes yes yes yes yes
aWithings inspire health (WIH).
bMisfit shine fitness (MSF).
Fig. 4 Circuit model of BNO055 motion sensor
Fig. 5 Circuit model of HC-05 Bluetooth module
We initially tried different materials and components to serve as
body contact including conductive fabric and a conductive wrist
strap. After tests were completed, the wrist strap had the best
performance. To get the raw data, we connect the body contact node to one side of a capacitor, ground the other side of the capacitor, and then measure the voltage level across the capacitor.
To reduce the effects of ambient noise, we implemented an LPF
to filter out the noise that was affecting the body signal such as 60 
Hz hum from power mains. Through testing, it was found that
when the cut-off frequency was set at 30.8 Hz, it could filter out
the noise and accurately detect different simple body motions such
as walking, running, and jumping. To stabilise the value when the
subject was at rest, a pull-down resistor was introduced into the
circuit. It was connected in parallel with the LPF, and it pulled down the voltage so that the signal level was effectively reduced to ground when the user was at rest.
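As a quick sanity check of the passive low-pass stage, assuming a single-pole RC filter (the paper does not give component values, so the R and C below are illustrative), the cut-off frequency follows directly from the RC product:
\[ f_c = \frac{1}{2\pi R C}, \qquad \text{e.g. } R = 51.7\ \text{k}\Omega,\ C = 100\ \text{nF} \;\Rightarrow\; f_c = \frac{1}{2\pi \times 51.7 \times 10^{3} \times 100 \times 10^{-9}} \approx 30.8\ \text{Hz} \]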
In the next step, to detect complex motions such as hand waving, bending, and typing, we implemented an amplifier in the circuit, as the signal magnitude is too low for accurate detection. An LP
op-amp, MCP6041 [3], was added into our circuit after LPF with a
pull-down resistor. Also, we introduced a voltage divider to reduce
the battery voltage (7.4 V) to 0.6 V, which was used for rail-to-rail
voltage for the op-amp. The 7.4 V supply voltage was selected
because it fits within the recommended voltage specifications of
our Arduino Nano (between 7 and 12 V).
The output of the circuit was connected to an Arduino analogue pin, from which the Arduino could read the data and send it to the smartphone through Bluetooth. The reason we decided to utilise an Arduino as our microcontroller was that it is simple, easy to use, and has a built-in analogue-to-digital converter [18].
We transmit the data to a smartphone in real time through the Bluetooth communication module. After analogue-to-digital conversion on the Arduino, we send the data to the user's smartphone through the Bluetooth communication module, along with a delimiter that is utilised in our app. Once the application was installed on the smartphone and the phone was correctly paired with the Bluetooth module, the app received the data and split it at the delimiter. In our case, we used the pound symbol (#) to separate the individual data points and display them in real time. The
screenshots of real-time body sensor data collection are shown in
Fig. 9. Also, the user interface of real-time activity detection is
shown in Fig. 10.
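The firmware side of this data path can be summarised with a short sketch. It is an illustration only, assuming the sensor output is wired to analogue pin A0 and the HC-05 module to the hardware serial port; the pin assignment, baud rate, and 200 Hz timing loop are our assumptions rather than details from the paper.
```cpp
// Minimal Arduino sketch (assumption: EF sensor output on A0, HC-05 on hardware Serial).
// Samples the body-node voltage at ~200 Hz and streams each reading
// followed by the '#' delimiter used by the smartphone app.

const uint8_t SENSOR_PIN = A0;                  // analogue input from the EF sensing circuit
const unsigned long SAMPLE_PERIOD_US = 5000UL;  // 5 ms -> ~200 Hz

void setup() {
  Serial.begin(9600);                           // HC-05 default baud rate (assumed)
}

void loop() {
  static unsigned long lastSample = 0;
  unsigned long now = micros();
  if (now - lastSample >= SAMPLE_PERIOD_US) {
    lastSample = now;
    int raw = analogRead(SENSOR_PIN);           // 10-bit ADC reading (0..1023)
    Serial.print(raw);                          // send the sample
    Serial.print('#');                          // delimiter expected by the app
  }
}
```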
Our system is divided into two separate circuit modules. The
first part of the circuit was used to detect simple motion such as
walking, jumping, and running, as shown in Fig. 11. An LPF, pull-
down resistor, and a capacitor were used. The power consumption for this circuit is around 7.2 μW. The second circuit is used for complex activity detection: we increase the sensor data signal strength to accurately detect complex activity by adding an LP amplifier circuit, as shown in Fig. 12. The signal strength for a complex motion, such as typing, is very low. We initially tried to detect both simple and complex motions using the same circuit (circuit 1). The data flow diagram for the system is shown in
Fig. 13.
It is difficult to determine the difference between finger and
hand movements in complex activity detection. This makes sense
because small movements of the fingers and hands do not have a
large effect on the overall capacitance of the body to its
environment. Therefore, we focus on detecting simple motion
using our current hardware implementation. However, complex
activity detection could still be improved using a sensor with better
sensitivity. The developed sensor implementation compares favourably with using a commercially available IMU sensor or a built-in smartphone sensor. The dynamic range of the smartphone's built-in accelerometer is low (limited to ±2 g).
5 Data collection
On completing our system, we began testing it. To assess its accuracy, we recruited 26 participants
from both genders, a variety of age groups, and a range of heights (see Table 2 for statistics).
Fig. 6 Early prototype of our system architecture
Fig. 7 Our initial prototype system with a commercial sensor
Fig. 8 Early prototype of our proposed system
Fig. 9 Smartphone screenshots for body node voltage collection: (a) Body sensor data collection interface, (b) Real-time walking data on a smartphone
We established a baseline walk period
for each of the walking traces. This was achieved by manually
finding the walk-start (tstart) and walk-end (tend) events.
The data was collected for different simulated scenarios: for simple motion, data was collected while the subject walked, ran, jumped, and bent over; for complex motion detection, data was collected while the subject simulated typing, washing dishes, eating, and hand waving. We collected data for the different motion events in different environments, with 200 samples for each event.
We started the data collection with the subject at rest. This gave us a baseline; if there were differences in the values, we could see what had changed and fix whatever was causing the issue. We then proceeded with walking, running, jumping, and
bending. For each subject and in each motion, we gathered a
minimum of three data sets, allowing us to see if the data was
consistent or not, and if not, how much it varied. This would play a
critical role in our analysis later. The average values for each test
subject can be seen below, with each average being represented as
a millivolt value.
During our tests, we tracked simple motions performed by the user. What distinguishes these motions is the repeating pattern they produce: in simple motion, the subject performs the same action repeatedly, as seen in walking, running, and jumping, all of which are classified under the simple motion category. Using this knowledge of the pattern of the activity being performed, the user's activity can be observed, tracked, and predicted.
As shown in Table 3, the sensor values vary for different test subjects, but the relative range of signal variation is the same. Also, each signal waveform remains consistent and has a clear, distinguishable pattern.
During our tests, we noted significant changes in data in each
subject that also changed with the time and day. After close analysis, we attributed the most significant changes to the subject's clothing and footwear. As the footwear established the ground connection to the
earth, which would complete the circuit, a bad connection changed
the data significantly. We saw jumps in values of ±20 depending on
the person's footwear, with the most consistent data coming from
normal gym shoes.
Clothing also has an effect, as certain types of clothing can build up more charge on the body, causing the change in the field to be greater than it would be without this charge. When performing the final tests, we took this into consideration, having subjects wear gym shoes and non-cotton clothing, as that typically built up the most static. This allowed us to get consistent data for each test subject that we could utilise in our analysis. To evaluate the effect
of different clothing, we also tested our system with several other commonly used garments made of different materials: athletic clothes (made of cotton, flax, wool, ramie, and silk), denim (98% cotton, 2% elastane), and roma (74% polyester, 27% rayon, 3% spandex). We observed that the combination of athletic clothes and tennis shoes gives us an accurate result in motion detection. The tennis shoes with firm rubber soles keep our circuit most grounded. To improve the ground effect and to work with all types of clothing, we will include a small local ground plane on the sensor board. Both the body and the local ground plane will be capacitively coupled to the earth ground, which will improve the dynamic sensor range.
Motion detection using a built-in smartphone sensor such as the accelerometer might also be affected by different clothing. The clothing would affect the accelerometer signals through the placement of the smartphone during data collection, and the data from the accelerometer's x, y, and z axes would vary with the position of the smartphone. The most common approach to managing movement noise in wearable sensors is prevention: attaching the phone to the body using elastic, straps, adhesive, or skin-tight clothing. We investigated the impact of different clothing (fabric, denim, roma) on sensor signal quality and found a significant effect, both when measuring the effects and when modelling the effect of phone orientation.
The graphs shown in Fig. 14 are raw sampled data from the
Arduino, with the x-axis corresponding to the individual data
points (sampled at 200 Hz) and the y-axis representing the digital
signal magnitude (0.48 mV per increment).
6 Results and analysis
To evaluate our proposed system, we developed a prototype
application and investigated its performance.
Fig. 10 Activity decision on a smartphone for (a) Resting, (b) Running, (c) Bending
Fig. 11 Circuit for simple motion detection
Fig. 12 Circuit for complex motion detection
Fig. 13 Block diagram for body sensor data flow
Table 2 Statistics about subjects participating in our data collection
Gender: f: 4; m: 22
Age, years: 20–29: 16; 30–34: 6; 35–39: 4
Height, cm: 140–159: 7; 160–169: 4; 170–179: 12; 180–190: 3
We evaluated the prototype with extensive experiments. In this section, we describe
how the data was analysed, and performance was measured.
Initially, we used the commercially available multi-function
BNO055 motion sensor and were able to successfully distinguish
between three forms of simple motion activities: walking, running,
and jumping. Additionally, each of these actions could be successfully differentiated from the user being at rest. All decisions were made
according to a custom algorithm developed by our team. The
algorithm depended on the accelerometer and gyroscope data
collected from the BNO055 sensor. For both types of sensors, the
magnitude of the data was found by taking the square root of the
sum of the squares of all three axes. This resulted in a cumulative
magnitude for both accelerometer and gyroscope; these magnitudes
were then summed together to create a more complete motion
index. The rate of change of this motion index value was calculated
over the previous eight data points, then the slope of motion index
was compared with threshold values to decide on the type of
motion.
The motion decision was determined on the Arduino side of the
architecture, and a coded decision corresponding to the proper
motion was sent as part of each Bluetooth transmission to the
Android application. Raw sensor values were also sent to the
Android handheld, allowing the user to see real-time graphs of
accelerometer, magnetometer, and gyroscope data, along with the
final decision of the motion.
Our improved system can distinguish between running,
walking, jumping, bending over, and being at rest. Each of these
activities leads to a distinct and corresponding voltage pattern. As
the voltage is sampled through the Arduino and plotted in real time
on the smartphone, the signal patterns for each activity are easily
distinguishable. Each motion pattern has a unique frequency and
maximum magnitude; the uniqueness of these features led us to use
both properties in our detection algorithm. Each activity is distinguishable, as all motions present themselves as peaks or spikes.
Similarly, the patterns that were gathered for more complex
activities such as waving and typing were distinguishable, though
the magnitude of these activities was too small to be accurately and
consistently measured in our tests without integrating the op-amp
amplifier circuit [3].
To integrate detection into our system, though, pattern detection
algorithms needed to be run on the data. Early attempts at detection
were performed offline in MATLAB™, using sampled data collected by the Arduino. These attempts were a combination of principal component analysis (PCA) and the k-nearest neighbour (KNN) algorithm through a toolbox implementation for MATLAB™ [19]. PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. The kNN algorithm is a non-parametric method used for classification and regression; in both cases, the input consists of the k closest training examples in the feature space, and the output depends on whether kNN is used for classification or regression.
These techniques required a strict standardisation of the data set
to accurately detect a pattern. Some of the equations that are used
in this process are shown below and are critical in the performance
of this algorithm. To start, the data is standardised in an application-specific way; we tried several approaches to keep the standardised data as close as possible to its original form. Following this, covariance
and eigenvectors were required to calculate the principal
component (PC) score for each point [20]. The equation to
determine correlation and covariance matrices and the definition of
eigenvalues and eigenvectors are displayed in equations below:
\[ \mathrm{cor}(X, Y) = \frac{\mathrm{cov}(X, Y)}{\sigma_X \times \sigma_Y} \]  (1)
\[ \mathrm{COV}(x, y) \times [\text{eigenvector}] = [\text{eigenvalue}] \times [\text{eigenvector}] \]  (2)
These new eigenvectors help determine the new axes that the PC
score is based on. The toolbox that we used then utilised these values with KNN to guess which motion was being performed.
Table 3 Average sensor signal values (millivolts) for four test subjects
Test subject Resting Walking Running Jumping Bending
S1 0 10.20 11.87 15.53 1.51
S2 0 6.98 8.54 13.48 2.10
S3 0 6.15 8.11 12.35 3.52
S4 0 5.42 12.30 15.33 0.84
Fig. 14 Sample simple motion signals for sitting, bending, jumping, running, and walking
The
Euclidean distance and accuracy equation are mainly used for this
process and are shown below in equations [21]:
\[ d(X_i, X_j) = \sqrt{\sum_{a} (X_{i,a} - X_{j,a})^2} \]  (3)
\[ \text{accuracy} = \frac{\text{number of correctly guessed points}}{\text{number of points}} \times 100 \]  (4)
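For reference, a compact version of the distance-based classification behind (3) and (4) is sketched below. This is a generic kNN implementation written for illustration, not the PRTools toolbox code used in the experiments.
```cpp
#include <algorithm>
#include <cmath>
#include <map>
#include <vector>

struct LabeledPoint {
  std::vector<double> features;  // standardised feature vector
  int label;                     // activity class
};

// Euclidean distance between feature vectors (eq. (3)).
double euclidean(const std::vector<double>& a, const std::vector<double>& b) {
  double sum = 0.0;
  for (size_t i = 0; i < a.size(); ++i) sum += (a[i] - b[i]) * (a[i] - b[i]);
  return std::sqrt(sum);
}

// Classify a query point by majority vote among its k nearest neighbours.
int knnClassify(const std::vector<LabeledPoint>& train,
                const std::vector<double>& query, size_t k) {
  std::vector<std::pair<double, int>> dists;  // (distance, label)
  for (const auto& p : train) dists.push_back({euclidean(p.features, query), p.label});
  std::sort(dists.begin(), dists.end());
  std::map<int, int> votes;
  for (size_t i = 0; i < std::min(k, dists.size()); ++i) ++votes[dists[i].second];
  int best = -1, bestCount = 0;
  for (const auto& v : votes)
    if (v.second > bestCount) { bestCount = v.second; best = v.first; }
  return best;
}

// Accuracy over a labelled test set (eq. (4)).
double accuracy(const std::vector<LabeledPoint>& train,
                const std::vector<LabeledPoint>& test, size_t k) {
  int correct = 0;
  for (const auto& p : test)
    if (knnClassify(train, p.features, k) == p.label) ++correct;
  return test.empty() ? 0.0 : 100.0 * correct / test.size();
}
```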
The process of standardisation led to issues, though, because of the
variance of our signals. While we had distinct patterns, the general
standardisation procedure used in this algorithm led to results that
were indistinguishable when tested in MATLAB, as shown in Fig. 15. The KNN graph in Fig. 15 shows two distinct data sets recognised by the programme, but when the actual classification of this data occurred, the classifier put both motions under the same class every time. Owing to this, we decided to develop our own algorithm, which considers the most distinguishing properties of the data, namely the frequency and magnitude of the sampled signals. Prior works use different
methods/algorithms to capture the user's daily activities [22–25].
7 Activity identification algorithm
In our prototype system, which used a commercial sensor, we
developed a mathematical model that would allow us to predict
which activity was being performed. This equation utilised values
from the accelerometer and gyroscope on the market sensor to
calculate an overall magnitude. The slope of the calculated signal magnitude is then measured, and this slope determines which motion is being performed. This was an effective way of approaching detection, though it was slow and not always accurate, with a detection rate of about 70%.
There is still room to increase detection accuracy. The magnitude
equation that we used for our values and equation to calculate
slopes are displayed in the equations below:
\[ M_{\text{total}} = \sqrt{A_x^2 + A_y^2 + A_z^2} + \sqrt{G_x^2 + G_y^2 + G_z^2} \]  (5)
\[ M = \frac{y_2 - y_1}{x_2 - x_1} \]  (6)
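As a concrete reading of (5) and (6), the sketch below computes the combined accelerometer/gyroscope magnitude and the slope of the motion index over a trailing window of eight samples (the window length mentioned in Section 6). It is our own illustration of the described method, not the authors' code; the fixed sampling interval is an assumption.
```cpp
#include <cmath>
#include <deque>

// Motion index from accelerometer and gyroscope axes (eq. (5)).
double motionIndex(double ax, double ay, double az,
                   double gx, double gy, double gz) {
  return std::sqrt(ax * ax + ay * ay + az * az) +
         std::sqrt(gx * gx + gy * gy + gz * gz);
}

// Slope of the motion index over the last `window` samples (eq. (6)),
// assuming a fixed sampling interval `dt` between consecutive samples.
class SlopeTracker {
 public:
  SlopeTracker(size_t window, double dt) : window_(window), dt_(dt) {}

  // Push a new motion-index value; returns the slope across the window,
  // or 0.0 until enough samples have been collected.
  double push(double value) {
    history_.push_back(value);
    if (history_.size() > window_) history_.pop_front();
    if (history_.size() < window_) return 0.0;
    double dy = history_.back() - history_.front();
    double dx = dt_ * (history_.size() - 1);
    return dy / dx;   // (y2 - y1) / (x2 - x1)
  }

 private:
  size_t window_;
  double dt_;
  std::deque<double> history_;
};
```
For example, `SlopeTracker tracker(8, 1.0 / 200.0);` would track the slope over eight samples at the 200 Hz rate used for the raw data, and the returned slope could then be compared against the motion thresholds.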
This algorithm worked successfully for the commercial sensor, but
failed to achieve the same success for our developed system.
We designed and developed our own algorithm after several experiments and tests with the KNN-based approach, in order to obtain accurate classification. This algorithm was focused on a score that
took the unique attributes of each movement such as the mean and
standard deviation of the sampled data. Then, we calculate the
difference between adjacent data points to model the first-order
derivative. From this derivative data set, the maximum value and
the number of zero crossings were found. The score was calculated
by combining these four properties in a way that led to a unique
range of values for each activity. Equations (7) and (8) represent
the calculation of standard deviation and mean of the sensor signal
\[ \sigma = \sqrt{\frac{\sum (x - \mu)^2}{n - 1}} \]  (7)
\[ \mu = \frac{1}{n} \sum x \]  (8)
The maximum point, zero-crossing determination, and the final
scoring algorithm combining the values are shown in the equations
below:
\[ \text{max} = \max(x) \]  (9)
\[ \text{zeros} = \sum (x = 0) \]  (10)
\[ \text{score} = \frac{\sigma \times \text{max} \times \text{zeros}}{\mu} \]  (11)
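To make the scoring concrete, the following sketch computes the four features and the score of (7)-(11) for one window of samples. It is an illustrative implementation, not the authors' MATLAB code; in particular, the zero count of (10) is interpreted here as the number of sign changes in the first-difference sequence, and the guard against a zero mean is our own addition.
```cpp
#include <cmath>
#include <vector>

// Features and score (eqs. (7)-(11)) for one window of sensor samples.
// Illustrative sketch only; window length and thresholds are not specified here.
struct Features {
  double mean;           // eq. (8)
  double stddev;         // eq. (7), sample standard deviation
  double maxDerivative;  // eq. (9), max of the first differences
  int zeroCrossings;     // eq. (10), sign changes in the first differences
  double score;          // eq. (11)
};

Features computeFeatures(const std::vector<double>& x) {
  Features f{0.0, 0.0, 0.0, 0, 0.0};
  const size_t n = x.size();
  if (n < 2) return f;

  // Mean (eq. (8)).
  double sum = 0.0;
  for (double v : x) sum += v;
  f.mean = sum / n;

  // Sample standard deviation (eq. (7)).
  double sq = 0.0;
  for (double v : x) sq += (v - f.mean) * (v - f.mean);
  f.stddev = std::sqrt(sq / (n - 1));

  // First-order differences, their maximum, and zero crossings (eqs. (9)-(10)).
  double prevDiff = x[1] - x[0];
  f.maxDerivative = prevDiff;
  for (size_t i = 2; i < n; ++i) {
    double diff = x[i] - x[i - 1];
    if (diff > f.maxDerivative) f.maxDerivative = diff;
    if ((diff > 0 && prevDiff < 0) || (diff < 0 && prevDiff > 0)) ++f.zeroCrossings;
    prevDiff = diff;
  }

  // Combined score (eq. (11)); guard against a zero mean (e.g. at rest).
  if (f.mean != 0.0) {
    f.score = f.stddev * f.maxDerivative * f.zeroCrossings / f.mean;
  }
  return f;
}
```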
We evaluated the above features using MATLAB™ and found promising results on the test data for accurate motion detection. Once the feature scores were calculated for the different activities, thresholding was used to compare the score values to a range of values corresponding to each activity. Issues arose, however, because of the variance between signals retrieved from different test subjects. In MATLAB™, the threshold values could be easily changed and manipulated for different data sets based on the values of the individual test subjects. However, when the threshold values were adjusted individually for each session, the system demonstrated its functionality with rather high accuracy. The confusion matrix shown in Table 4 demonstrates this, with a very high accuracy rate (~87%). The columns represent the simple motions in the order resting, walking, running, jumping, and bending. As shown in Table 4, classification was accurate for this session once the thresholds were adjusted to the observed values, with only one value being predicted incorrectly: a jumping sample was classified as running.
We believe that a calibration mode can reduce such incorrect determinations and improve classification accuracy. In the smartphone
implementation mode, a new user could go through all possible
activities and establish a sample baseline data set to determine their
specific threshold values.
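A minimal sketch of how such a per-user calibration could work, under our own assumptions (the paper does not specify the procedure): record a few labelled repetitions of each activity, compute the score of (11) for each, and keep the observed score range per activity as that user's thresholds.
```cpp
#include <algorithm>
#include <map>
#include <string>

// Hypothetical calibration helper: stores, for each activity label, the
// range of scores (eq. (11)) observed during a guided calibration session.
struct ScoreRange { double lo; double hi; };

class Calibrator {
 public:
  // Record one labelled calibration repetition (score already computed).
  void addSample(const std::string& activity, double score) {
    auto it = ranges_.find(activity);
    if (it == ranges_.end()) {
      ranges_[activity] = {score, score};
    } else {
      it->second.lo = std::min(it->second.lo, score);
      it->second.hi = std::max(it->second.hi, score);
    }
  }

  // Classify a new score: return the activity whose calibrated range
  // contains it, or the one with the nearest range boundary otherwise.
  std::string classify(double score) const {
    std::string best = "unknown";
    double bestDist = 1e300;
    for (const auto& kv : ranges_) {
      const ScoreRange& r = kv.second;
      double dist = (score < r.lo) ? r.lo - score
                   : (score > r.hi) ? score - r.hi : 0.0;
      if (dist < bestDist) { bestDist = dist; best = kv.first; }
    }
    return best;
  }

 private:
  std::map<std::string, ScoreRange> ranges_;
};
```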
For the complex motion detection (such as typing, waving a
hand, eating, and dishwashing), we used the same features as
described above from the sensing signals. We integrated an amplifier into our system during testing to amplify the signals from the body node. We believe that the small levels of current generated by the body during complex motion in our sensing system are the reason for the inaccurate amplification of the raw body signals. Owing to these issues with amplification, our team
focused most of our efforts on distinguishing between the motions
of running, walking, jumping, and bending over. We are still working on complex motion for better detection accuracy; currently, the accuracy for complex motion detection using our system is not high. We are in the process of collecting more complex motion data for different age groups in a laboratory environment, using both our system and the built-in smartphone accelerometer. It is observed that complex activity detection using our system has better accuracy than that obtained using the smartphone accelerometer, as the dynamic range of the smartphone sensor is not adequate to accurately detect complex motion.
8 Conclusion
In this paper, we have developed an integrated cyber-human
system for motion detection. We use our developed sensor to validate the proposed approach and to demonstrate real-time simple and complex activity detection on users. The results from different data
sets are also presented to show that this approach provides a high
degree of classification accuracy in distinguishing between at rest,
walking, running, jumping, and body bending patterns.
Fig. 15 Initial confusion matrix and KNN graph
The system
may also find multiple applications in behaviour detection for
people with various disabilities.
To test the chronological permanence and long-term feasibility
of our approach in the future, we plan to test our system with
people who suffer from chronic health problems once we get the
Institutional Review Board approval. Also, we plan to measure the
different spatiotemporal parameters of the user during daily
activities. Additionally, the system can be used in smart home monitoring systems built on future wireless technology. Also, we plan to miniaturise the circuit onto a printed circuit board to make our system more user-friendly.
9 Acknowledgments
We thank the anonymous reviewers for their valuable comments,
which helped us to improve this paper. We also thank Ms. Tina Carico
and Jeff Peterson from the Department of Electrical and Computer
Engineering at Miami University for their help and support.
10 References
[1] Cohn, G., Gupta, S., Lee, T., et al.: ‘An ultra-low-power human body motion
sensor using static electric field sensing’. Proc. 2012 ACM Conf. Ubiquitous
Computing (Ubicomp), Pittsburgh, PA, USA, September 2012, pp. 99–102
[2] ‘Research articles’, Cyber-Human Systems (CHS), National Science
Foundation (NSF)
[3] ‘Dexsilicium.com (2013) and microchip MCP6041’. Available at https://
www.dexsilicium.com/Microchip_MCP6041.pdf, accessed June 2019
[4] Bennett, B.: ‘Fitbit Flex review’. Available at https://www.cnet.com/reviews/
fitbit-flex-review/, accessed June 2019
[5] Konrad, L., Bor-rong, C., Geoffrey, W.C., et al.: ‘Mercury: a wearable sensor
network platform for high-fidelity motion analysis’. Proc. ACM SenSys'09,
Berkeley, CA, USA, November 2009, pp. 183–196
[6] Trost, S.G., Zheng, Y., Wong, W.K.: ‘Machine learning for activity
recognition: hip versus wrist data’, Physiol. Meas., 2014, 35, (11), pp. 2183–
2189
[7] Da-Silva, F.G., Galeazzo, E.: ‘Accelerometer based intelligent system for
human movement recognition’. Proc. Fifth IEEE Int. Workshop on Advances
in Sensors and Interfaces (IWASI), Bari, Italy, June 2013, pp. 20–24
[8] Ramos-Garcia, R.I., Hoover, A.W.: ‘A study of temporal action sequencing
during consumption of a meal’. Proc. ACM Int. Conf. Bioinformatics,
Computational Biology and Biomedical Informatics, Washington, D.C., USA,
September 2013, p. 68
[9] Scholl, P.M., Van, K.: ‘A feasibility study of wrist-worn accelerometer based
detection of smoking habits’. Proc. Sixth IEEE Int. Conf. Innovative Mobile
and Internet Services in Ubiquitous Computing (IMIS), Palermo, Italy, July
2012, pp. 886–891
[10] Parate, A., Chiu, M.C., Chadowitz, C., et al.: ‘Risq: recognizing smoking
gestures with inertial sensors on a wristband’. Proc. 12th Annual Int. Conf.
Mobile Systems, Applications, and Services, Bretton Woods, NH, USA, June
2014, pp. 149–161
[11] Ming-Zher, P., Nicholas, C.S., Rosalind, W.P.: ‘A wearable sensor for
unobtrusive, long-term assessment of electrodermal activity’, IEEE Trans.
Biomed. Eng., 2010, 57, (5), pp. 1243–1252
[12] ‘Withings inspire health (WIH), pulse Ox track improve’. Available at https://
www.amazon.com/Withings-Pulse-O2-Activity-Sleep-and-Heart-Rate-SPO2-
Tracker-for-iOS-and-Android/dp/B00JQ6YA6O, accessed June 2019
[13] Edward, S.S., George, F., James, H., et al.: ‘Monitoring of posture allocations
and activities by a shoe-based wearable sensor’, IEEE Trans. Biomed. Eng.,
2011, 58, (4), pp. 983–990
[14] Shyamal, P., Konrad, L., Richard, H., et al.: ‘Analysis of feature space for
monitoring persons with Parkinson's disease with application to a wireless
wearable sensor system’, Comput. Biol. Med., 2017, 89, (c), pp. 379–388
[15] Maurizio, G., Matteo, L., Dan, B., et al.: 'Empatica E3: a wearable wireless multi-sensor device for real-time computerized biofeedback and data acquisition', Empatica Inc., Cambridge, MA, USA and Milan, Italy; MIT, Cambridge, MA, USA, 7 January 2015
[16] ‘Misfit shine fitness (MSF) + sleep monitor. Burlin-game (CA): misfit’.
Available at https://misfit.com/misfit-shine-2, accessed June 2019
[17] Electronicaestudio.com.: ‘HC-05-bluetooth to serial port module’, 2018.
Available at https://www.electronicaestudio.com/wp-content/uploads/
2018/09/BT811d.pdf, accessed June 2019
[18] Arduino.cc: ‘Arduino reference’, 2018. Available at https://www.arduino.cc/,
accessed June 2019
[19] Wold, S., Esbensen, K., Geladi, P.: ‘Principal component analysis’,
Chemometr. Intell. Lab. Syst., 1987, 2, (1–3), pp. 37–52
[20] Duin, R.P.W., Juszczak, P., Paclik, P., et al.: ‘A MATLAB toolbox for pattern
recognition’, PRTools, 2000, 3, pp. 109–111
[21] Fukunaga, K., Narendra, P.M.: ‘A branch and bound algorithm for computing
k-nearest neighbors’, IEEE Trans. Comput., 1975, 100, (7), pp. 750–753
[22] Guiry, J.J., van de Ven, P., Nelson, J.: ‘Multi-sensor fusion for enhanced
contextual awareness of everyday activities with ubiquitous devices’, Sensors
(Basel), 2014, 14, pp. 5687–5701
[23] Robert-Andrei, V., Ciprian, D., Lidia, B., et al.: ‘Human physical activity
recognition using smartphone sensors’, Sensors, 2019, 19, (3), p. 458
[24] Shoaib, M., Bosch, S., Incel, O.D., et al.: ‘Fusion of smartphone motion
sensors for physical activity recognition’, Sensors, 2014, 14, pp. 10146–
10176
[25] Abukhary, N., Mustafah, Y.: ‘Real-time human activity recognition’. IOP
Conf. Ser. Mater. Sci. Eng., 2017, 260, pp. 12–17
Table 4 Confusion matrix of simple motion-based classification (rows: actual class; columns: predicted class)
Activity Resting Walking Running Jumping Bending
resting 30 0 0 0
walking 0 30 0 0
running 0 1 20 0
jumping 0 1 0 20
bending 0 0 0 0 3
The bold entries indicate that the activity is correctly classified with our approach in these trials. The other entries are incorrectly classified.