Indoor Localization using Solar Cells
Hamada Rizk
Grad. Sch. of Info. Sci. and Tech.,
Osaka University, Osaka, Japan
& Tanta University, Tanta, Egypt
hamada rizk@f-eng.tanta.edu.eg
Dong Ma
Sch. of Comp. and Info. Sys.,
Singapore Management University, Singapore
dongma@smu.edu.sg
Mahbub Hassan
Sch. of Comp. Sci. and Eng.,
University of New South Wales, Australia
mahbub.hassan@unsw.edu.au
Moustafa Youssef
Dept. of Comp. and Sys. Eng.,
AUC & Alexandria University, Egypt
moustafa-youssef@aucegypt.edu
Abstract—The development of the Internet of Things (IoT)
opens the doors for innovative solutions in indoor positioning
systems. Recently, light-based positioning has attracted much
attention due to the dense and pervasive nature of light sources
(e.g., Light-emitting Diode lighting) in indoor environments.
Nevertheless, most existing solutions necessitate carrying a high-
end phone at hand in a specific orientation to detect the light
intensity with the phone’s light sensing capability (i.e., light
sensor or camera). This limits the ease of deployment of these
solutions and leads to drainage of the phone battery. We propose
PVDeepLoc, a device-free light-based indoor localization system
that passively leverages photovoltaic currents generated by the
solar cells powering various digital objects distributed in the
environment. The basic principle is that the location of the
human interferes with the lighting received by the solar cells, thus
producing a location fingerprint on the generated photocurrents.
These fingerprints are leveraged to train a deep learning model
for localization purposes. PVDeepLoc incorporates different regu-
larization techniques to improve the deep model’s generalization
and robustness against noise and interference. Results show that
PVDeepLoc can localize at sub-meter accuracy for typical indoor
lighting conditions. This highlights the promise of the proposed
system for enabling device-free light-based localization systems.
Index Terms—solar panels, deep learning, indoor localization,
device-free localization
I. INTRODUCTION
The current advances in the sensing capabilities of IoT
devices open the door for the next generation of human-
centric applications [1]–[14]. Accurate energy-efficient indoor
localization comes on top of these applications. While GPS has
mostly solved the localization problem in outdoor scenarios,
it cannot work indoors due to the absence of the line of
sight to reference satellites. Therefore, industry and academia
have been devoting immense effort to finding a pervasive indoor
positioning system.
WiFi-based localization has been one of the main indoor
localization approaches due to the widespread use of WiFi
access points (APs) [4]–[7]. However, this solution suffers
from practical issues such as wireless channel dynamics,
fading, interference, and environmental noise that lead to
unstable performance. More recently, cellular signals have
been used for indoor tracking [8]–[14]. These systems are
designed to map the received signal strength from the cell
towers covering the area of interest to the corresponding user
location. Unlike WiFi-based networks, cell towers are located
outside buildings transmitting long-range signals. Therefore,
the received signals are noisy and highly affected by the
sensitivity of the measuring device (i.e., cell phone) [8], [11],
[12].
Light-emitting diodes (LEDs) are a new lighting technology
offering long lifetimes and energy savings. As LEDs are often
deployed at a much higher density compared to WiFi APs,
light-based localization can potentially achieve higher localiza-
tion accuracy. Current light-based localization techniques [15],
[16] are designed to locate the user based on the light intensity
received by the user smartphone. However, leveraging the
user smartphone limits its wide adoption to only users with
high-end phones (i.e., equipped with light sensor or camera).
Moreover, even with the availability of such high-end phones,
the localization system cannot work when the phone is not
exposed to the light source (e.g., the phone is in the user’s
pocket or bag). Additionally, the diversity of smartphones,
e.g., the sensor sensitivity, sensor placement, and sampling
rate, leads to a significant drop in the localization performance
when the testing phone is different from the ones used in the
calibration phase [8]. Finally, continuous light-sensing leads
to rapid battery drainage, especially with phones powered by
small batteries.
Recently, there has been a trend of fitting many indoor
Internet of Things (IoT) devices with solar cells to extend their
battery life or enable completely battery-free operation [17].
Inspired by this, we propose PVDeepLoc, a novel device-
free and energy-free light-based indoor localization system
that passively leverages photovoltaic currents generated by
the solar cells distributed in the environment. The basic
principle is that the location of the human interferes with the
lighting received by the solar cells, thus producing a location
fingerprint on the generated photocurrents. The fingerprints
collected at pre-defined reference locations are used for training
an efficient deep learning-based localization model that learns the
complex relationship between the photocurrent measurements of the
installed solar cells and the user location. Moreover, PVDeepLoc
employs different model regularization techniques to increase the
system's generalization ability and to select the model's
configurations optimally.

Fig. 1. The architecture of the PVDeepLoc system (offline: Fingerprint
Collector, Pre-processor, and Model Creator; online: Data Collector,
Pre-processor, and Location Predictor).
The rest of the paper is structured as follows. Section II
introduces the detailed implementation and discusses the role
played by each module of the proposed PVDeepLoc sys-
tem. Section III validates PVDeepLoc performance with the
experimental evaluations. Finally, we conclude the paper in
Section IV.
II. THE PVDeepLoc SYSTEM
Fig. 1 shows the PVDeepLoc system architecture.
PVDeepLoc works in two phases: an offline training phase
and an online tracking phase. During the offline phase, the
area of interest is partitioned into uniform virtual grids of
equal size. Then, while the user is located at a given grid in
the environment, the photocurrent measurements corresponding to
the light received by the solar cells are obtained and recorded.
The obtained readings are forwarded to the Pre-processor module
to prepare consistent-length feature vectors of photocurrent
measurements, enabling the training of a localization model.
Next, the Model Creator module constructs and
trains a deep neural network while also selecting the optimal
parameters for the model with provisions to avoid over-fitting.
Finally, the optimal model is then saved for later use by the
online Location Predictor module.
During the online phase, the user is at an unknown location
while the solar cells receive light intensities from the light
sources in the area of interest. The photocurrent feature vector
is obtained by the Pre-processor module. Finally, the Location
Predictor module feeds the processed input vector to the
localization model constructed in the offline phase to estimate
the likelihood of the user being at each of the grids defined in
the offline phase.
A. The pre-processor module
This module runs during both the offline and online phases.
It processes the measured photocurrent values from the in-
stalled solar cells. Specifically, the Pre-processor forms the
photocurrent values into a feature vector (where each entry
represents a reading from a corresponding solar cell) that fits
the input length of the localization model. Then, the feature
vector is re-scaled to the range [0, 1] due to the neural
network's sensitivity to the input scale. Finally, to handle the
noise that may accompany the received light due to transient
additional light or environment changes, PVDeepLoc employs the
data augmentation framework proposed in [18] and outlier
removal [19], [20]. The framework generates synthetic data from
samples collected over a short term that reflect the typical
variation in measurements. It offers the additional advantage of
combating the possible bias problem, which may occur due to
training with a small amount of data and affect the model's
generalization ability.

Fig. 2. Network structure. The input is the photocurrent feature vector and
the output is the probability distribution over the different reference grids.
Grey-shaded neurons represent examples of temporarily dropped-out neurons.
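A minimal sketch of these pre-processing steps, assuming a simple min-max rescaling and Gaussian-noise perturbation as a stand-in for the augmentation framework of [18] (the function names and noise parameters are illustrative, not from the paper):

```python
import random

def rescale(vector, lo, hi):
    """Re-scale a photocurrent feature vector to [0, 1] given the
    minimum (lo) and maximum (hi) photocurrent observed offline."""
    span = (hi - lo) or 1.0  # guard against a degenerate range
    return [(v - lo) / span for v in vector]

def augment(vector, n_synthetic, sigma=0.02):
    """Generate synthetic fingerprints by perturbing a measured
    vector with Gaussian noise, clamped back to [0, 1]."""
    return [[min(1.0, max(0.0, v + random.gauss(0, sigma))) for v in vector]
            for _ in range(n_synthetic)]

raw = [0.8, 1.4, 0.6, 1.1]            # readings from the four solar cells
features = rescale(raw, lo=0.0, hi=2.0)
synthetic = augment(features, n_synthetic=8)
```

The clamping keeps synthetic samples inside the scaled feature range so they remain valid network inputs.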
B. The localization model creator
This module is responsible for training a deep localization
model and finding its optimal parameters. The selected model
will be used during the online phase by the Location Pre-
dictor module to provide an estimate for the user location.
PVDeepLoc employs a deep fully-connected neural network
due to its hierarchical representational ability, enabling the
learning of complex patterns [21].
1) The network architecture: Fig. 2 shows our deep net-
work structure. We construct a deep fully connected neural
network consisting of cascaded hidden layers of nonlinear
processing neuronal units. We use the hyperbolic tangent
function (tanh) as the activation function for the hidden layers
due to its non-linearity, differentiability (i.e., having stronger
gradients and avoiding bias in the gradients), and consideration
of negative and positive inputs [22]. The network's input layer
is a vector of length k representing the photocurrent feature
vector. The output layer consists of a number of neurons
corresponding to the number of reference grids defined over the
area of interest. The network is trained to operate as a
multinomial (multi-class) classifier by leveraging a Softmax
activation function in the output layer. This leads to a
probability distribution over the expected grids given an input
feature vector.
During the offline phase, the ground-truth probability label
vector P(ai) = [p(ai1), p(ai2), ..., p(ain)] is formed using
one-hot encoding. This encoding has a probability of 1 for the
correct reference grid and 0 for the others. The model is trained
using the Adaptive Moment Estimation (Adam) optimizer to
minimize the mean cross-entropy between the estimated output
probability distribution P(ai) and the one-hot-encoded vector gi.
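The training objective can be made concrete with a small stdlib-only sketch (illustrative, not the paper's Keras implementation): a softmax output and the cross-entropy against the one-hot label.

```python
import math

def softmax(logits):
    """Convert raw output-layer activations to a probability
    distribution over the reference grids."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(p, one_hot):
    """Cross-entropy between the predicted distribution p and the
    one-hot ground-truth label of the correct grid."""
    eps = 1e-12  # avoid log(0)
    return -sum(t * math.log(q + eps) for q, t in zip(p, one_hot))

p = softmax([2.0, 0.5, 0.1])   # e.g., logits for 3 reference grids
label = [1, 0, 0]              # user stood in grid g1
loss = cross_entropy(p, label)
```

Minimizing this loss (averaged over the fingerprint batch) is exactly what the Adam optimizer does during the offline phase.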
2) Preventing over-fitting: To increase the model robust-
ness and further reduce over-fitting, PVDeepLoc employs two
regularization techniques: First, we use dropout regularization
during the network training (Fig. 2). We also adopt the early
stopping regularization method to automatically stop the training
process at the point when no further performance improvement
is gained.
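The early-stopping rule can be sketched with a patience counter (cf. the default patience of 40 in Table I); the validation-loss curve below is illustrative only:

```python
def train_with_early_stopping(val_losses, patience):
    """Return the epoch at which training stops: when the validation
    loss has not improved for `patience` consecutive epochs."""
    best, since_best = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, since_best = loss, 0   # new best: reset the counter
        else:
            since_best += 1
            if since_best >= patience:
                return epoch             # stop: no improvement
    return len(val_losses) - 1           # ran out of epochs

# illustrative loss curve: improves, then plateaus
losses = [1.0, 0.8, 0.7, 0.71, 0.72, 0.72, 0.73, 0.74]
stop_epoch = train_with_early_stopping(losses, patience=3)
```

In practice the best-so-far model weights are restored when the loop stops, so the plateau epochs do not degrade the saved model.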
C. Online phase
This phase aims to locate the user in real-time, after
deploying the system, using the measured light intensities from
the installed solar cells in the area of interest. This can be
done by calculating the corresponding photocurrent vector as
a feature vector as described previously. Thereafter, this vector
is fed to the trained localization model obtained to estimate the
user location as one of the grids defined at the configuration
phase. The grid g with the maximum probability given the
feature vector c is selected. That is, we want to find:

g = argmax_g [P(g | c)]    (1)
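Equation (1) amounts to an argmax over the model's output probability vector; a minimal sketch (the function name is illustrative, not from the paper):

```python
def predict_grid(probabilities):
    """Select the reference grid with the maximum posterior
    probability given the photocurrent feature vector (Eq. 1).
    Returns the index of the winning grid."""
    return max(range(len(probabilities)), key=probabilities.__getitem__)

# e.g., a model output over 4 reference grids
grid = predict_grid([0.05, 0.10, 0.70, 0.15])  # -> grid index 2
```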
We implemented our deep learning-based training using
the Keras learning library on top of the Google TensorFlow
framework.
III. PROOF-OF-CONCEPT IMPLEMENTATION
A. Experimental Setup
In this section, we describe the data collection setup in a real
room that spans an area of 2 m × 3 m in a residential building
(denoted the experimental testbed). The testbed is equipped with
four vertically installed solar cells, one on each of the four
walls of the room, as shown in Fig. 3. The figure shows the 3D
coordinates of the considered cells. Each solar cell has 15.5%
efficiency, with a length, width, and depth of 10 cm, 7 cm, and
0.15 cm, respectively. The room is illuminated with a chandelier
of 8 lamps of 40 W (450 lumens) each. This light source is hung
in the center of the room's ceiling at a height of 1.9 m from
the floor. The experiment area is uniformly partitioned into 24
different grids with 0.5 m spacing. The data is collected while
the user stands at the center of each grid cell (i.e., reference
point).
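For the 2 m × 3 m floor with 0.5 m spacing, the grid construction and the mapping from a grid index back to its center coordinates can be sketched as follows (a hypothetical helper, not from the paper):

```python
def grid_centers(width, length, spacing):
    """Partition a width x length floor into uniform square grid
    cells and return the (x, y) center of each cell, row by row."""
    nx = int(round(length / spacing))   # cells along x (room length)
    ny = int(round(width / spacing))    # cells along y (room width)
    return [((i + 0.5) * spacing, (j + 0.5) * spacing)
            for j in range(ny) for i in range(nx)]

centers = grid_centers(width=2.0, length=3.0, spacing=0.5)
# 6 x 4 = 24 cells, matching the 24 reference grids of the testbed
```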
TABLE I
DEFAULT PARAMETER VALUES USED IN THE EVALUATION.

Parameter                               Range          Default
Learning rate                           0.0001 - 0.2   0.001
Dropout rate (%)                        0 - 90         5
Early stopping patience                 1 - 100        40
Number of hidden neurons                20 - 1000      220
Number of layers                        2 - 30         6
Number of training samples per grid     1 - 640        640
TABLE II
THE LOCALIZATION ERRORS OF THE PROPOSED SYSTEM.

Min    25th Perc.   Median   75th Perc.   Avg.   Max
0.01   0.29         0.63     1.16         0.71   1.81
For capturing the photocurrent reading from a solar panel,
we connected the solar panel to an analog-to-digital converter
(ADC) whose output is fed to a Raspberry Pi (RPI) module.
The measurements are recorded using our Python implementa-
tion, which sends an HTTP request to the four installed RPIs to
get the response of photocurrent readings from their connected
solar panels. These readings are aggregated into one sample
stored in our fingerprint database. We collected 50 samples of
photocurrent readings while the user was standing at the center
of each reference grid. To enable the effective adoption of the
deep learning model, the number of samples captured at each
grid is increased to 400 using the proposed data augmentation
methods (Section II-A).
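The collection loop can be sketched as follows; `read_photocurrent` is a hypothetical stand-in for the HTTP request to each Raspberry Pi, since the actual endpoint and payload format are not given in the paper:

```python
def read_photocurrent(rpi_id):
    """Hypothetical stand-in for the HTTP request that returns one
    photocurrent reading from the solar panel attached to an RPI.
    Canned values replace the real network call for illustration."""
    return {0: 0.82, 1: 1.37, 2: 0.64, 3: 1.05}[rpi_id]

def collect_sample(n_panels=4):
    """Aggregate one fingerprint sample: one reading per panel."""
    return [read_photocurrent(i) for i in range(n_panels)]

def collect_fingerprints(n_samples=50):
    """Collect one grid's fingerprint set (50 samples in the paper),
    before augmentation expands it to 400."""
    return [collect_sample() for _ in range(n_samples)]

dataset = collect_fingerprints()
```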
B. Experimental results
Table II summarizes how PVDeepLoc performs in the considered
testbed. Specifically, PVDeepLoc's localization performance is
evaluated by calculating the Euclidean distance error between
the ground-truth location and the estimated user location. The
results reported in the table confirm the good performance of
the system, achieving errors of only 0.01 m, 0.29 m, 0.63 m,
0.71 m, 1.16 m, and 1.81 m for the minimum, 25th percentile,
50th percentile (median), average, 75th percentile, and maximum
(100th percentile), respectively. This confirms the validity of
PVDeepLoc as an accurate indoor localization system for
intelligent environments.
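The reported statistics are straightforward to compute from per-test-point Euclidean distances; a stdlib sketch with illustrative test points:

```python
import math
import statistics

def euclidean_error(truth, estimate):
    """Distance between ground-truth and estimated 2D locations (m)."""
    return math.dist(truth, estimate)

def summarize(errors):
    """Median and mean localization error, as reported in Table II."""
    return {"median": statistics.median(errors),
            "mean": statistics.fmean(errors)}

# illustrative (truth, estimate) pairs on the testbed grid
errors = [euclidean_error(t, e) for t, e in [
    ((0.25, 0.25), (0.25, 0.85)),
    ((1.75, 0.75), (1.15, 0.75)),
    ((2.75, 1.25), (2.75, 1.95)),
]]
stats = summarize(errors)
```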
1) Robustness to density variation of solar cells: In this
section, we study the robustness of the proposed system when
fewer solar cells are considered in the real-world testbed. Fig. 4
shows the effect of varying the density of the solar cells on
the PVDeepLoc localization accuracy. For this, we randomly
removed solar cells from the total of 4 solar cells installed in
the area of interest. The figure shows that the more solar panels
available, the richer the input information to the model and thus
the better the localization accuracy. However, even with a density
as low as only two cells, PVDeepLoc can achieve accurate
room-level localization with less than 2 m median error. This is
due to the light perturbation caused by the user's presence in
the room, which can be measured even by the few installed solar
cells.

Fig. 3. The experimental setup of the real testbed: a 3 m × 2 m room with
a 2.8 m ceiling; solar cells S1-S4 are mounted on the four walls at a height
of 0.4 m, and the light source hangs at the ceiling center (1.5, 1, 1.9).

Fig. 4. Effect of changing the number of the considered solar panels on the
median location error (m).

IV. CONCLUSION
We presented PVDeepLoc, an accurate and robust device-
free indoor localization system that uses photocurrent mea-
surements of solar cells to localize users passively without
requiring users to wear or carry any device. PVDeepLoc
trains a deep neural network to estimate the fine-grained
user location. The system employs different regularization
techniques to enable the deep network to generalize and avoid
over-fitting, leading to a more stable model in the case of
unseen/noisy data. We evaluated PVDeepLoc in a challenging
real-world testbed. The results show that PVDeepLoc achieves
a median localization error of 0.63 m.
Currently, we are extending the system in different directions,
including: exploring more advanced neural networks to improve
the accuracy with fewer and heterogeneous solar panels; studying
the variation in the number and type of light sources, including
dimmable lights and outdoor light coming in through windows;
improving the system's robustness against environmental changes
(e.g., furniture) and variation in the reflectivity of different
objects; and investigating the effect of ambient human activities
and tracking multiple subjects.
ACKNOWLEDGMENT
This work was partially funded by the Australian Research
Council Discovery Project DP210100904. The authors would
like to thank Rola Elbakly for her assistance in the data
collection process.
REFERENCES
[1] V. Erdélyi, H. Rizk, H. Yamaguchi, and T. Higashino, “Learn to see: A
microwave-based object recognition system using learning techniques,”
in Adjunct Proceedings of the 2021 International Conference on Dis-
tributed Computing and Networking, 2021, pp. 145–150.
[2] S. Yamada, H. Rizk, and H. Yamaguchi, “An accurate point cloud-based
human identification using micro-size lidar,” in International Conference
on Pervasive Computing and Communications Workshops and other
Affiliated Events (PerCom Workshops). IEEE, 2022.
[3] H. Rizk, T. Amano, H. Yamaguchi, and M. Youssef, “Smartwatch-
based face-touch prediction using deep representational learning,” in the
18th EAI International Conference on Mobile and Ubiquitous Systems:
Computing, Networking and Services, EAI. Springer, 2021.
[4] X. Wang, L. Gao, S. Mao, and S. Pandey, “DeepFi: Deep learning for
indoor fingerprinting using channel state information,” in Proceedings
of the International Conference on Wireless Communications and Net-
working. IEEE, 2015, pp. 1666–1671.
[5] H. Rizk, H. Yamaguchi, M. Youssef, and T. Higashino, “Gain without
pain: Enabling fingerprinting-based indoor localization using tracking
scanners,” in The 28th International Conference on Advances in Geo-
graphic Information Systems, 2020, pp. 550–559.
[6] M. Abbas, M. Elhamshary, H. Rizk, M. Torki, and M. Youssef,
“WiDeep: WiFi-based accurate and robust indoor localization system
using deep learning,” in Proceedings of the International Conference on
Pervasive Computing and Communications (PerCom). IEEE, 2019.
[7] I. Fahmy, S. Ayman, H. Rizk, and M. Youssef, “Monofi: Efficient indoor
localization based on single radio source and minimal fingerprinting,”
in Proceedings of the 29th International Conference on Advances in
Geographic Information Systems, 2021, pp. 674–675.
[8] H. Rizk, M. Abbas, and M. Youssef, “OmniCells: cross-device cellular-
based indoor location tracking using deep neural networks,” in the
International Conference on Pervasive Computing and Communications
(PerCom). IEEE, 2020.
[9] ——, “Device-independent cellular-based indoor location tracking using
deep learning,” Pervasive and Mobile Computing, p. 101420, 2021.
[10] H. Rizk and M. Youssef, “Monodcell: A ubiquitous and low-overhead
deep learning-based indoor localization with limited cellular infor-
mation,” in Proceedings of the 27th ACM SIGSPATIAL International
Conference on Advances in Geographic Information Systems. ACM,
2019, pp. 109–118.
[11] H. Rizk, “Device-invariant cellular-based indoor localization system
using deep learning,” in The ACM MobiSys 2019 on Rising Stars Forum,
ser. RisingStarsForum’19. ACM, 2019, pp. 19–23.
[12] H. Rizk, H. Yamaguchi, T. Higashino, and M. Youssef, “A ubiquitous
and accurate floor estimation system using deep representational learn-
ing,” in Proceedings of the 28th International Conference on Advances
in Geographic Information Systems, 2020, pp. 540–549.
[13] K. Alkiek, A. Othman, H. Rizk, and M. Youssef, “Deep learning-based
floor prediction using cell network information,” in Proceedings of the
28th International Conference on Advances in Geographic Information
Systems, 2020, pp. 663–664.
[14] H. Rizk, “Solocell: Efficient indoor localization based on limited cell
network information and minimal fingerprinting,” in Proceedings of
the 27th ACM SIGSPATIAL International Conference on Advances in
Geographic Information Systems, 2019, pp. 604–605.
[15] C. Zhang and X. Zhang, “Litell: robust indoor localization using
unmodified light fixtures,” in the 22nd Annual International Conference
on Mobile Computing and Networking. ACM, 2016, pp. 230–242.
[16] Y. Umetsu, Y. Nakamura, Y. Arakawa, M. Fujimoto, and H. Suwa,
“Ehaas: Energy harvesters as a sensor for place recognition on wear-
ables,” in 2019 IEEE International Conference on Pervasive Computing
and Communications (PerCom). IEEE, 2019, pp. 1–10.
[17] I. Mathews, S. N. Kantareddy, T. Buonassisi, and I. M. Peters, “Technol-
ogy and market perspective for indoor photovoltaic cells,” Joule, vol. 3,
no. 6, pp. 1415–1426, 2019.
[18] H. Rizk, A. Shokry, and M. Youssef, “Effectiveness of data augmentation
in cellular-based localization using deep learning,” in Proceedings of the
International Conference on Wireless Communications and Networking
Conference (WCNC). IEEE, 2019.
[19] H. Rizk, S. Elgokhy, and A. Sarhan, “A hybrid outlier detection
algorithm based on partitioning clustering and density measures,” in The
Tenth International Conference on Computer Engineering & Systems
(ICCES). IEEE, 2015, pp. 175–181.
[20] A. Elmogy, H. Rizk, and A. M. Sarhan, “Ofcod: On the fly clustering
based outlier detection framework,” Data, vol. 6, no. 1, p. 1, 2021.
[21] H. Rizk, M. Torki, and M. Youssef, “CellinDeep: Robust and Accurate
Cellular-based Indoor Localization via Deep Learning,” IEEE Sensors
Journal, 2018.
[22] Y. A. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller, “Efficient
backprop,” in Neural networks: Tricks of the trade. Springer, 2012,
pp. 9–48.
... The compatibility of the communication with the energy harvesting feature is also investigated, exploiting the DC component of the received optical signal that is normally discarded for communications tasks. Even though the photodiode is usually preferred to the solar cell as photodetector, several works exist in the literature treating VLP with solar cells [27][28][29][30][31], although most of the time the harvesting aspect is not accounted at all or, at most, is mentioned as a possibility without presenting a complete integrated system. The application in [27] exploits an RSS-based trilateration positioning method to recover the mutual distances between transmitting LEDs and receiver with a mean error of about 10 cm, which is the same approach presented in our work. ...
... Moreover, the LEDs transmit their identification numbers by on-off keying (OOK) instead of using a unique frequency allocation methodology as we propose, implying a much more complex customized lighting hardware since additional equipment as microprogram control unit (MCU), synchronization and code modulation circuits must be integrated. Different methodologies are presented in [30,31] where machine learning models are trained using several fingerprints to attain localization. In particular, in [30], the authors present a wearable prototype integrating three energy harvesters (i.e., solar cell, piezo and thermoelectric generators) and performing place recognition. ...
... Therefore, with respect to the proposed system, no exact coordinate estimation is achieved, in fact the prototype acts as a classifier recognizing places on the basis of the electricity generated and the user movement. In [31], the human localization is attained from the radiation changes measured by fixed solar cells and dependent on the human position in the room. The main disadvantage of machine learning approaches is that several fingerprints must be previously collected to train the algorithm, whilst our proposed method needs just one calibration measurement in the center of the measurement area subtended by each triad of LEDs. ...
Article
Full-text available
Nowadays, indoor positioning (IP) is a relevant aspect in several scenarios within the Internet of Things (IoT) framework, e.g., Industry 4.0, Smart City and Smart Factory, in order to track, amongst others, the position of vehicles, people or goods. This paper presents the realization and testing of a low power sensor node equipped with long range wide area network (LoRaWAN) connectivity and providing 2D Visible Light Positioning (VLP) features. Three modulated LED (light emitting diodes) sources, the same as the ones commonly employed in indoor environments, are used. The localization feature is attained from the received light intensities performing optical channel estimation and lateration directly on the target to be localized, equipped with a low-power microcontroller. Moreover, the node exploits a solar cell, both as optical receiver and energy harvester, provisioning energy from the artificial lights used for positioning, thus realizing an innovative solution for self-sufficient indoor localization. The tests performed in a ~1 m2 area reveal accurate positioning results with error lower than 5 cm and energy self-sufficiency even in case of radio transmissions every 10 min, which are compliant with quasi-real time monitoring tasks.
... Recent years have witnessed an increasing demand for precise and ubiquitous indoor positioning systems in many applications [5]. To realize this, cellular-based indoor localization has an attractive solution in academia and industry, especially with the current advancement in artificial intelligence techniques. ...
Preprint
Full-text available
In this paper, we demonstrate OmniCells: a cellular-based indoor localization system designed to combat the device heterogeneity problem. OmniCells is a deep learning-based system that leverages cellular measurements from one or more training devices to provide consistent performance across unseen tracking phones. In this demo, we show the effect of device heterogeneity on the received cellular signals and how this leads to performance deterioration of traditional localization systems. In particular, we show how OmniCells and its novel feature extraction methods enable learning a rich and device-invariant representation without making any assumptions about the source or target devices. The system also includes other modules to increase the deep model's generalization and resilience to unseen scenarios.
... This raises the need for developing the next generation of indoor positioning systems that are not only device-free but also energyefficient. Towards this end, a promising solution is to leverage energy harvesters to convert ambient energy into electrical energy to power the sensing devices [11][12][13]23]. With the continuous development of IoT and their pervasive applications [6,8,9,16,35], many manufacturers has already considered this solution. ...
... Over the years, several systems have been proposed to handle this issue, e.g., Zee [36] employs map matching to lesson the localization error while Unloc [38] and SemanticSLAM [39] opportunistically reset the error on encounters of the detected building landmarks. [40] uses solar cells as a light sensor for estimating the user location while harvesting energy. On the other hand, Headio [37] proposes a computer vision-based solution leveraging the smartphone's front camera to correct the location drift. ...
Preprint
Full-text available
The demand for safety-boosting systems is always increasing, especially to limit the rapid spread of COVID-19. Real-time social distance preserving is an essential application towards containing the pandemic outbreak. Few systems have been proposed which require infrastructure setup and high-end phones. Therefore, they have limited ubiquitous adoption. Cellular technology enjoys widespread availability and their support by commodity cellphones which suggest leveraging it for social distance tracking. However, users sharing the same environment may be connected to different teleco providers of different network configurations. Traditional cellular-based localization systems usually build a separate model for each provider, leading to a drop in social distance performance. In this paper, we propose CellTrace, a deep learning-based social distance preserving system. Specifically, CellTrace finds a cross-provider representation using a deep learning version of Canonical Correlation Analysis. Different providers' data are highly correlated in this representation and used to train a localization model for estimating the social distances. Additionally, CellTrace incorporates different modules that improve the deep model's generalization against overtraining and noise. We have implemented and evaluated CellTrace in two different environments with a side-by-side comparison with the state-of-the-art cellular localization and contact tracing techniques. The results show that CellTrace can accurately localize users and estimate the contact occurrence, regardless of the connected providers, with a sub-meter median error and 97% accuracy, respectively. In addition, we show that CellTrace has robust performance in various challenging scenarios.
... Over the years, several systems have been proposed to handle this issue, e.g., Zee [36] employs map matching to lesson the localization error while Unloc [38] and SemanticSLAM [39] opportunistically reset the error on encounters of the detected building landmarks. [40] uses solar cells as a light sensor for estimating the user location while harvesting energy. On the other hand, Headio [37] proposes a computer vision-based solution leveraging the smartphone's front camera to correct the location drift. ...
Article
Full-text available
The demand for safety-boosting systems is always increasing, especially to limit the rapid spread of COVID-19. Real-time social distance preservation is an essential application toward containing the pandemic outbreak. Few systems have been proposed which require infrastructure setup and high-end phones. Therefore, they have limited ubiquitous adoption. Cellular technology enjoys widespread availability and their support by commodity cellphones, which suggest leveraging it for social distance tracking. However, users sharing the same environment may be connected to different telecom providers of different network configurations. Traditional cellular-based localization systems usually build a separate model for each provider, leading to a drop in social distance performance. In this article, we propose CellTrace , a deep learning-based social distance preserving system. Specifically, CellTrace finds a cross-provider representation using a deep learning version of canonical correlation analysis. Different providers’ data are highly correlated in this representation and used to train a localization model for estimating the social distances. In addition, CellTrace incorporates different modules that improve the deep model’s generalization against overtraining and noise. We have implemented and evaluated CellTrace in two different environments with a side-by-side comparison with the state-of-the-art cellular localization and contact tracing techniques. The results show that CellTrace can accurately localize users and estimate the contact occurrence, regardless of the connected providers, with a submeter median error and 97% accuracy, respectively. In addition, we show that CellTrace has robust performance in various challenging scenarios.
Article
The World Health Organization reported that face touching is a primary source of infection transmission for viral diseases, including COVID-19, seasonal Influenza, Swine flu, the Ebola virus, etc. Thus, people have been advised to avoid this activity to break the viral transmission chain. However, empirical studies showed that it is either impossible or difficult to avoid, as it is unconsciously a human habit. This gives rise to the need for means of automatically predicting the occurrence of such activity. In this paper, we propose SafeSense , a cross-subject face-touch prediction system that combines the sensing capabilities of smartwatches and smartphones. The system includes innovative modules for automatically labeling the smartwatches’ sensor measurements using smartphones’ proximity sensors during normal phone use. Additionally, SafeSense uses a multi-task learning approach based on autoencoders for learning a subject-invariant representation without any assumptions about the target subjects. SafeSense also improves the deep model’s generalization ability and incorporates different modules to boost the per-subject system’s accuracy and robustness at run-time. We evaluated the proposed system on ten subjects using three different smartwatches and their connected phones. Results show that SafeSense can obtain as high as 97.9% prediction accuracy with an F1-score of 0.98. This outperforms the state-of-the-art techniques in all the considered scenarios without extra data collection overhead. These results highlight the feasibility of the proposed system for boosting public safety.
Conference Paper
The technology of 3D recognition is evolving rapidly, enabling unprecedented growth of applications towards human-centric intelligent environments. On top of these applications, human segmentation is a key technology for analyzing and understanding human mobility in those environments. However, existing segmentation techniques rely on deep learning models, which are computationally intensive and data-hungry. This hinders their practical deployment on edge devices in realistic environments. In this paper, we introduce a novel micro-size LiDAR device for understanding human mobility in the surrounding environment. The device includes a lightweight on-device human segmentation technique that applies density-based clustering to the captured 3D point cloud data. The proposed technique significantly reduces the computational complexity of the clustering algorithm by leveraging the spatio-temporal relation between consecutive frames. We implemented and evaluated the proposed technique in a real-world environment. The results show that the proposed technique achieves a human segmentation accuracy of 99% while drastically reducing processing time by 66%.
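The density-based clustering step described above can be sketched with a minimal DBSCAN-style clusterer over a toy 3D point cloud. This is an illustrative stand-in, not the paper's optimized on-device implementation, and it omits the spatio-temporal frame-to-frame reuse; the parameter values are assumptions.

```python
# Minimal DBSCAN-style density clustering over 3D points.
# Each point gets a cluster id; -1 marks noise. eps and min_pts are illustrative.
def dbscan(points, eps=1.0, min_pts=3):
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps * eps]

    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                 # too sparse: tentatively noise
            continue
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:                       # grow the cluster from core points
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster        # noise reachable from a core -> border
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:         # j is itself a core point
                seeds.extend(jn)
        cluster += 1
    return labels

cloud = [(0, 0, 0), (0.5, 0, 0), (0, 0.5, 0), (0.3, 0.3, 0),  # person 1
         (5, 5, 1), (5.4, 5, 1), (5, 5.4, 1),                 # person 2
         (20, 20, 0)]                                         # stray point
labels = dbscan(cloud, eps=1.0, min_pts=3)
print(labels)  # -> [0, 0, 0, 0, 1, 1, 1, -1]
```

Each dense group of points becomes one candidate human, while isolated returns are labeled noise, which is the segmentation behavior the abstract describes.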
Conference Paper
The demand for safety-boosting systems is increasing, especially nowadays, to limit the rapid spread of COVID-19. Real-time life-logging is an essential application for tracking infected cases and thus containing the pandemic outbreak. This application raises the need for an accurate human identification technology in settings where cameras cannot be adopted due to privacy. Recently, the LiDAR sensor has attracted attention due to its ability to represent the surrounding environment in the form of a 3D point cloud. In this paper, we introduce a novel wearable device with a micro-size LiDAR to build an onboard human identification system for life-logging. The system acquires 3D point cloud data of the surrounding environment, from which subject-discriminative signatures are extracted. This is achieved by removing noise and background using spatio-temporal density clustering and Fisher vector representations. The extracted features are then used to train a random forest classifier for subject identification. We have implemented and evaluated the proposed system on six different subjects. The results show that the proposed system can effectively remove noise and background and accurately identify subjects with 99.9% accuracy.
Conference Paper
The World Health Organization (WHO) reported that viruses, including COVID-19, can be transmitted by touching the face with contaminated hands and advised people to avoid touching their faces, especially the mouth, nose, and eyes. However, according to recent studies, people touch their faces unconsciously in their daily lives, and it is difficult to avoid such activities. Although many activity recognition methods have been proposed over the years, none of them target the prediction of face-touch (rather than detection) among other daily life activities. To address this problem, we propose TouchAlert: a system that automatically predicts the occurrence of face-touch activity and warns the user before its occurrence. Specifically, TouchAlert utilizes commodity wearable devices' sensors to train a deep learning-based model for predicting the variable-length face-touching of different users at an early stage of its occurrence. Our experimental results show a high F1-score of 0.98 and a prediction accuracy of 97.9%.
Conference Paper
Indoor localization is a key component of pervasive and mobile computing. Due to the widespread use of WiFi technology, WiFi fingerprinting is one of the most widely utilized approaches for indoor localization. Despite advancements in WiFi-based positioning approaches, existing solutions necessitate a dense deployment of access points, time-consuming manual fingerprinting, and/or special hardware. In this paper, we propose MonoFi, a novel WiFi-based indoor localization system relying only on the received signal strength from a single access point. To compensate for the low amount of information available for learning, the system trains a recurrent neural network with sequences of signal measurements. MonoFi incorporates different modules to reduce the data collection overhead, boost scalability, and improve the deep model's generalization. The proposed system is deployed and assessed in comparison to existing WiFi indoor localization systems. Our experiments with different mobile phones show that the system can achieve a median localization error of 0.80 meters, surpassing the state-of-the-art results by at least 140%.
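MonoFi's core idea — a sequence of RSS readings from a single AP carries more location information than a single reading — can be illustrated without a neural network. In this hypothetical sketch, a measured RSS window is matched against stored per-location windows by Euclidean distance; the paper itself trains a recurrent model instead, and all numbers and location names here are made up.

```python
# Toy stand-in for sequence-based single-AP localization: match a window
# of RSS readings against stored per-location windows (the paper uses an
# RNN; plain distance matching is used here purely for illustration).
def seq_dist(a, b):
    """Squared Euclidean distance between two equal-length RSS sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def localize_sequence(window, train_windows):
    """train_windows: list of (rss_sequence, location) pairs."""
    best = min(train_windows, key=lambda t: seq_dist(window, t[0]))
    return best[1]

train = [
    ([-40, -42, -41, -43], "near AP"),
    ([-60, -58, -61, -59], "corridor"),
    ([-75, -77, -80, -78], "far room"),
]
print(localize_sequence([-59, -60, -60, -58], train))  # -> corridor
```

A single reading of, say, -60 dBm is ambiguous across several positions, but the four-sample window disambiguates it, which is why the system feeds sequences rather than individual measurements to its model.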
Article
In data mining, outlier detection is a major challenge, as it plays an important role in many applications such as medical data, image processing, fraud detection, intrusion detection, and so forth. An extensive variety of clustering-based approaches have been developed to detect outliers. However, they are by nature time-consuming, which restricts their use in real-time applications. Furthermore, outlier detection requests are handled one at a time, which means that each request is initiated individually with a particular set of parameters. In this paper, the first clustering-based outlier detection framework, On-the-Fly Clustering-Based Outlier Detection (OFCOD), is presented. OFCOD enables analysts to effectively detect outliers on demand, even within huge datasets. The proposed framework has been tested and evaluated using two real-world datasets with different features and applications: one with 699 records, and another with five million records. The experimental results show that the proposed framework outperforms existing approaches across several evaluation metrics.
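The kind of outlier detection the abstract above describes can be sketched with a minimal distance-based stand-in: flag records whose distance to their k-th nearest neighbour exceeds a threshold. This is not OFCOD's clustering algorithm, only a small illustration of the outlier-flagging idea; the parameters and data are made up.

```python
# Hypothetical distance-based outlier flagging: a record is an outlier if
# its k-th nearest neighbour is farther than a threshold. k and threshold
# are illustrative and would be tuned per dataset.
def knn_outliers(points, k=2, threshold=3.0):
    flags = []
    for i, p in enumerate(points):
        dists = sorted(abs(p - q) for j, q in enumerate(points) if j != i)
        flags.append(dists[k - 1] > threshold)  # isolated -> outlier
    return flags

data = [1.0, 1.2, 0.9, 1.1, 9.5]  # 9.5 sits far from the dense group
print(knn_outliers(data))  # -> [False, False, False, False, True]
```

Records inside a dense group have close neighbours and are kept, while isolated records are flagged, which mirrors the clustering-based intuition (outliers are points that belong to no dense cluster) at a fraction of the code.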
Conference Paper
Robust and accurate indoor localization has been the goal of several research efforts over the past decade. Towards achieving this goal, WiFi fingerprinting-based indoor localization systems have been proposed. However, fingerprinting involves significant effort, especially when done at high density, and needs to be repeated with any change in the deployment area. While a number of recent systems have been introduced to reduce the calibration effort, these still trade overhead for accuracy. In this paper, we present LiPhi: an accurate system for enabling fingerprinting-based indoor localization without the associated data collection overhead. This is achieved by leveraging the sensing capability of transportable laser range scanners (LRSs) to automatically label WiFi signal scans, which can subsequently be used to build (and maintain) localization models. As part of its design, LiPhi has modules to associate WiFi scans with the unlabeled traces obtained from as few as one LRS, as well as provisions to train a robust deep learning model. Evaluation of LiPhi using Android phones in two realistic testbeds shows that it can match the performance of manual fingerprinting techniques under the same deployment conditions without the overhead associated with the traditional fingerprinting process. In addition, LiPhi improves upon the median localization accuracy obtained from crowdsourcing-based and fingerprinting-based systems by 181% and 297% respectively, when tested with data collected a few months later.
Conference Paper
Location-based services have undergone massive improvements over the last decade. Despite intense efforts in industry and academia, pervasive infrastructure-free localization is still elusive. Towards making this possible, cellular-based systems have recently been proposed due to the widespread availability of cellular networks and their support by commodity cellphones. However, these systems only consider locating the user in a 2D single-floor environment, which reduces their value when used in multi-story buildings. In this paper, we propose CellRise, a deep learning-based system for floor identification in multi-story buildings using ubiquitous cellular signals. Due to the inherent challenges of the large propagation range and the overlap in the signal space between horizontal and vertical user movements, CellRise provides a novel module to generate floor-discriminative representations. These representations are then fed to a recurrent neural network that learns the sequential changes in signals to estimate the user's floor level. Additionally, CellRise incorporates different modules that improve the deep model's generalization, avoiding overtraining and noise. These modules also permit CellRise to generalize to floors completely unseen during training. We have implemented and evaluated CellRise using two different buildings with a side-by-side comparison with the state-of-the-art floor estimation techniques. The results show that CellRise can accurately estimate the user's exact floor 97.7% of the time and within one-floor error 100% of the time. This is better than the state-of-the-art systems by at least 17.9% in floor identification accuracy. In addition, we show that CellRise has robust performance in various challenging conditions.
Article
The demand for a ubiquitous and accurate indoor localization service is continuously growing. Cellular-based systems are a good candidate to provide such a ubiquitous service due to their wide availability worldwide. One of the main barriers to the accuracy of such services is the large number of cell phone models, which results in variations of the measured received signal strength (RSS), even at the same location and time. In this paper, we propose OmniCells++, a deep learning-based system that leverages cellular measurements from one or more training devices to provide consistent performance across unseen tracking phones. Specifically, OmniCells++ uses a novel approach to multi-task learning based on LSTM encoder–decoder models that allows it to learn a rich and device-invariant RSS representation without any assumptions about the source or target devices. OmniCells++ also incorporates different modules to boost the system’s accuracy with RSS relative-difference-based features and to improve the deep model’s generalization and robustness. Evaluation of OmniCells++ in two realistic testbeds using different Android phones with different form factors and cellular radio hardware shows that OmniCells++ can achieve a consistent median localization accuracy when tested on different phones. This is better than the state-of-the-art indoor cellular-based systems by at least 148%.