Remote PhotoPlethysmoGraphy Using SPAD Camera
for Automotive Health Monitoring Application
Marco Paracchini, Lorenzo Marchesi, Klaus Pasquinelli,
Marco Marcon, Giulio Fontana, Alessandro Gabrielli and Federica Villa
Dipartimento di Elettronica, Informazione e Bioingegneria
Politecnico di Milano, Milan, Italy
Abstract—Remote PhotoPlethysmoGraphy (rPPG) applications
allow cardiac information to be extracted simply by analyzing
a video stream of a person's face. In this work we propose the
use of a Single-Photon Avalanche Diode (SPAD) camera in order
to perform rPPG with higher accuracy, especially in low
illumination conditions, exploiting the higher sensitivity of
SPAD sensors. In particular, we suggest the adoption of a
rPPG application in an automotive environment in order to
monitor, in a non-invasive fashion, the driver's health state and
potentially avoid accidents caused by acute illness states.
Quantitative results are reported, obtained in realistic situations
in which the SPAD camera was mounted inside a vehicle cockpit,
with the rPPG heart rate estimations compared against data
collected with a portable wearable ECG device.
1. Introduction
Remote PhotoPlethysmoGraphy (rPPG) applications aim
at solving the problem of estimating the heart rate (or
other bio-medical information) of a subject given a video
of his/her face [1]. Typically a signal, representing the time
variation of the light intensity reflected by the skin, is
extracted using a camera. This signal is related to the heart
activity, since the transit of blood in the vessels underneath
the skin varies the absorption and reflection coefficients of
the skin itself. The signal is then analyzed
in order to estimate the heart rate of the subject and/or
other bio-medical measurements. What is recorded by the
camera is actually the pulse signal since the camera can
detect the changes in blood flow linked to the contraction of
the heart. This signal is different from the electrocardiogram
since the sources of the two signals are physically different:
the former is mechanical while the latter is electrical. Pulse
rate and heart rate are not necessarily synchronized due to
mechanical delays, but they show the same frequencies and
frequency trends. For this reason, by analyzing the pulse wave
it is possible to measure the Heart Rate (HR) and the Heart
Rate Variability (HRV), and hence the sympatho-vagal balance,
obtaining quantitative information about the functioning
and the activation of the autonomic nervous system.
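As a concrete illustration of the last point, the following short Python sketch (not part of the paper's implementation; the interval values are invented) shows how the mean HR and a common HRV index (RMSSD) follow directly from a series of inter-beat intervals:

```python
import numpy as np

def hr_and_rmssd(ibi_ms):
    """Mean heart rate (bpm) and RMSSD (ms) from inter-beat intervals in ms."""
    ibi = np.asarray(ibi_ms, dtype=float)
    hr_bpm = 60000.0 / ibi.mean()                 # 60,000 ms in one minute
    rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2))   # successive-difference HRV index
    return hr_bpm, rmssd

# Hypothetical IBI series around 800 ms (roughly 75 bpm)
hr, rmssd = hr_and_rmssd([800, 820, 790, 810, 805])
```

RMSSD is only one of several HRV indices that can be derived from the tachogram; it is shown here because it needs nothing beyond the IBI series itself.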
In recent years, alternative methods able to remotely estimate
a subject's heart rate have been developed. Approaches
based on millimeter waves (mmW) [2] are one example.
Instead of analyzing an optical signal, these methods [3]
utilize a transmitter and receiver system to detect the
subject's slight movements caused by the heartbeat, exploiting
the Doppler effect. Although these techniques can produce
good results in a controlled environment, they generally
perform worse in the presence of movements and uncooperative
subjects [4].
The majority of rPPG applications [1] utilize a standard
RGB camera, based on CMOS or CCD technologies, in
order to acquire the video stream. The goal of this work is to
perform rPPG using a SPAD (i.e. Single-Photon Avalanche
Diode) camera. This kind of camera is capable of detecting
even a single photon [5], has an extremely high frame
rate [6] and has proved to be useful in a wide range of
applications [7], such as 3D optical ranging (LIDAR [8]),
Positron Emission Tomography (PET), quantum microscopy
and so on. In rPPG applications its high precision could be
useful in accurately measuring the skin intensity fluctuations
produced by the blood flow.
In this work we propose to use a rPPG method based
on a SPAD camera in an automotive application with the
aim of monitoring the health condition of the driver. The
idea is to develop an application, running in real time
on a computational unit installed in the car, that is able to
extract the pulse signal and analyze it in order to continuously
monitor the driver's health condition. These data could then
be used to enable particular features of the vehicle, such as
autonomous driving, that could take control of the vehicle
and avoid accidents in case of detected driver sickness
or altered emotional state. All the acquired parameters could
also be transmitted to a cloud based system in order to
constantly monitor the health condition and the emotional
state of the driver.
The rest of the paper is organized as follows: in Sec. 2
the state of the art of rPPG applications is analyzed; in
Sec. 3 the fundamental working principles of the SPAD
camera are outlined, describing also the actual camera used
during the tests. On the software side, in Sec. 4 the signal
processing steps needed to extract heart rate information
from the camera data stream are described. Furthermore, in
Sec. 5 some results, obtained in realistic working conditions,
are shown and compared with those of a commercial ECG device.
Finally in Sec. 6 the conclusions of this work are presented.
2. Related work
Contact photoplethysmography (PPG) is a simple technique,
dating back to the 1930s [9], in which a light source is used
to measure blood volume changes related to the pulsating
nature of circulatory systems [10]. In more recent years,
starting from 2008, it was demonstrated [11] that PPG could
be performed remotely using ambient light and since then
many studies focused on the extraction of heart rate using
videocameras were published [1], [12], [13], [14], [15],
[16], [17]. The goal of most recent studies is to obtain the
tachogram, which is defined as a chart reporting time on
the x-axis and the Inter-Beat Interval (IBI) on the y-axis, by
performing remote-photoplethysmography [12].
Although Verkruysse et al. [11], [16], [17], [18] have shown
that a video captured with a common RGB camera is
enough to obtain a plethysmographic signal from which HR
and respiration rate can be measured, the choice of the camera
is critical, especially in uncontrolled conditions. Cameras
used in the literature are commonly RGB CCD (Charge Coupled
Device) cameras; some studies use webcams integrated in
laptops [18], [19], while others record video using compact
cameras or gigabit-Ethernet cameras [15]. Particular attention
is commonly paid to acquisition frequency, but different
works provide different values: Poh considers 15 fps [18],
while others acquire at 20 to 60 fps. Some works consider
regions on which to extract the pulse signal by manually
choosing the pixels of the image corresponding to the skin
of the subject [1], [13], but this is not a feasible choice in
an automatic application. On the other hand, some modern
rPPG applications involve the use of face detection and
tracking algorithms [18], [20] in order to select an appro-
priate Region Of Interest (ROI). While using a RGB camera
instead of a monochrome one could lead to some benefits for
rPPG applications [12], Rouast et al. [13] arrived at the
conclusion that the G channel contains enough information,
and also recommend the use of the single G channel in
order to reduce computational costs and to enable online processing.
To the best of our knowledge, no study has previously
addressed performing rPPG using single-photon
cameras in order to achieve good results also in low illumination
conditions. Furthermore, we propose for the first time
the use of a camera without RGB Bayer filters, in order to
exploit near infrared (NIR) illumination.
3. SPAD camera
The main advantage of working with SPAD cameras,
instead of standard CMOS and CCD cameras,
is the ability to handle low-signal situations and dark
environments. This is made possible by using SPAD sensors
instead of conventional pixels, which proportionally convert
the incoming light into electric charge.
Figure 1. SPC3 SPAD camera commercialized by Micro Photon Devices
(MPD) for single-photon counting applications.
Due to their
design, SPADs are photodetectors able to detect even a single
photon. Considering a p-n junction biased above its breakdown
voltage, a single incident photon is able to trigger an avalanche;
this means that a digital output is obtained from a
single-photon event. After the photon has been detected, the
avalanche is stopped in order to avoid unnecessary power
dissipation and to rearm the SPAD, making it able to detect
another photon; dedicated hardware is required to achieve
these goals. A simple solution is the use of a properly sized
resistor in series with the SPAD: after a photon event the
parasitic capacitance of the photodetector is discharged, and
the resistor recharges it, rearming the SPAD. This is a simple
but slow approach. A better solution is the use of an
Active-Quenching Circuit (AQC) [21], which provides faster
quenching in a smaller area with respect to the passive
solution. When a photon triggers the avalanche, the AQC
powers down the SPAD in order to rearm it; this introduces
a holdoff time during which the photodetector is completely
blind. This period can be made adjustable: short holdoff
periods (in the order of 20 ns) are required in applications
where high photon fluxes are present, at the cost of high
afterpulsing [22] (i.e. the retriggering of the SPAD due to
trapped charges from the previous avalanche), whereas much
longer holdoff periods are required when weak signals are
present or when afterpulsing can heavily affect the
measurement and thus its reliability.
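The trade-off between holdoff time and count rate can be illustrated with the standard non-paralyzable dead-time model (a textbook detector-physics formula, not taken from this paper), in which a holdoff τ caps the measured rate at 1/τ:

```python
def measured_rate(true_cps, holdoff_s):
    """Non-paralyzable dead-time model: m = n / (1 + n * tau)."""
    return true_cps / (1.0 + true_cps * holdoff_s)

def true_rate(measured_cps, holdoff_s):
    """Invert the model to correct a measured count rate: n = m / (1 - m * tau)."""
    return measured_cps / (1.0 - measured_cps * holdoff_s)

tau = 20e-9                      # 20 ns holdoff, as in the text
m = measured_rate(10e6, tau)     # a 10 Mcps photon flux is under-counted
```

With a 20 ns holdoff a 10 Mcps incident flux is counted at roughly 8.3 Mcps, which is why longer holdoff values are preferred whenever the signal is weak enough to allow them.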
3.1. Counting, timing and other applications
A simple use of the SPAD is counting photons: a counter
is added at the output of the photodetector and each photon
increases the counter value by one. This behavior can be
compared to a conventional camera pixel that integrates
the light signal over time. The advantage of using a
SPAD in this way is the single-photon resolution of the
integrated signal. This technology is currently applied in the
observation of fluorescence [23], spectroscopy [24], night
vision [25] and other fields. A more advanced use is the
Figure 2. Block diagram (left) and micrograph (right) of the 64 × 32 SPAD array chip in a high-voltage 0.35 µm CMOS technology.
Time-Of-Flight (TOF) measurement, in which a pulsed
illuminator sends a pulsed signal to the target. Due to
the presence of background light, many repetitions are
needed in order to obtain a reliable measure, building a
histogram of the photon arrival times. Once the peak of the
histogram has been found, the TOF measurement is concluded.
TOF measurements performed in this way can be
used to map 3D scenes in dark conditions or to realize
LIDAR [8] (Light Detection and Ranging) systems in not
heavily illuminated conditions or with a proper Field Of View
(FOV) and proper optical filters.
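The histogram-based TOF principle described above can be sketched in a few lines of Python; the bin width and the toy histogram are invented for illustration, not taken from the paper:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_distance(hist_counts, bin_width_s):
    """Distance from the peak bin of a photon arrival-time histogram."""
    t_peak = np.argmax(hist_counts) * bin_width_s
    return C * t_peak / 2.0          # round trip, hence the factor 2

# Toy histogram: uniform background of 2 counts, signal peak at bin 100
hist = np.full(1024, 2)
hist[100] = 50
d = tof_distance(hist, 100e-12)      # hypothetical 100 ps bins
```

The repetitions mentioned in the text serve exactly to make the signal peak rise above the background-light floor of the histogram before the argmax is taken.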
3.2. SPC3 SPAD Camera
The camera used in this project is based on a SPAD array
developed by Politecnico di Milano [6]. The whole camera
has been developed and commercialized by Micro Photon
Devices (MPD) and belongs to the SPC3 SPAD camera
series. In Fig. 1, a picture of the SPC3 camera is shown. As
reported in the diagram in Fig. 2, the matrix is composed of
32×64 pixels; each pixel produces an unsigned 9-bit integer
output and contains a 30 µm SPAD, the AQC, counters
and the memories. The camera, connected through a USB 3.0
interface, can be used in counting mode and is capable of
reaching 96 kframe/s, which, for the purpose of this project,
is more than enough. To recover part of the efficiency
lost due to the low (3%) fill factor of the pixel (caused
by the presence of electronics in the pixel), the matrix
has been equipped with microlenses that provide a partial
enhancement of performance (80% equivalent fill factor
for parallel light beams). This camera has a maximum
Photon Detection Efficiency (PDE) of about 50% at around
400 nm; the readout is completely parallel for all the 2048
pixels of the matrix, which makes it possible to realize a global
shutter. Another important metric for SPAD cameras is the
detector intrinsic noise, called the Dark Counting Rate (DCR).
Dark counts are triggering events that are not associated
with photons but with other kinds of generation (such as
thermal generation); this parameter affects the signal-to-noise
ratio in low-signal regimes. In this project the camera has been
used at 100 fps; considering that the SPC3 SPAD has a
dark count rate of around 100 cps (counts per second), the DCR is
completely negligible.
An FPGA is used to read out the camera, to sum consecutive
frames in order to reduce the final frame rate while increasing
the count depth up to 16 bits, and to transfer the data to a PC
through the USB 3.0 interface. In order to acquire at 100
fps the camera is set to continuous acquisition mode; in
this mode a start command is given externally (by the
computational unit) and the frames are acquired and stored
in the FPGA internal memory, used as a buffer. Arrays of
contiguous frames are then transferred to the PC at 10 Hz
in order to perform the operations described in Sec. 4. Each
frame is obtained by summing in the FPGA the results of 500
acquisitions, each with an exposure time of 20 µs,
in order to collect all the incident photons while avoiding
saturation issues and increasing the dynamic range
of the internal counters.
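The acquisition arithmetic stated above can be checked with a few lines of Python (all the values are taken from the text; this is a consistency check, not part of the implementation):

```python
subframes_per_frame = 500                 # FPGA sums 500 acquisitions per output frame
exposure_s = 20e-6                        # 20 us exposure per acquisition
frame_period_s = subframes_per_frame * exposure_s   # 10 ms per output frame
fps = 1.0 / frame_period_s                          # -> 100 fps, as stated
frames_per_transfer = round(fps / 10)               # 10 Hz transfers -> 10 frames each
```

So 500 sub-frames of 20 µs fill exactly the 10 ms period of one 100 fps output frame, and each 10 Hz USB transfer carries an array of 10 contiguous frames.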
4. Algorithms
Like the majority of rPPG applications [1], the one proposed
in this paper is composed of two consecutive stages:
the signal extraction and the signal analysis. A diagram of
the complete algorithm pipeline is shown in Fig. 3.
4.1. Signal extraction
The signal extraction algorithm is composed of three
components (face detection, face tracking, and pixel selection
and signal creation), represented by the blue blocks in the
diagram in Fig. 3, which will be described in the next
paragraphs.
4.1.1. Face detection. The first task of the signal extrac-
tion stage is to localize the driver’s face in the live video
streamed by the SPAD camera. In order to perform this step,
the Viola-Jones method [26] is used. This is an accurate,
efficient and fast object detection method, widely used
in many Computer Vision applications, that can be applied
to low-resolution images such as those acquired by the SPAD
Figure 3. rPPG algorithm pipeline. The blue blocks are relative to the Signal Extraction stage, while the green ones belong to the Signal Processing stage.
camera. For these reasons it has been chosen as the face
detection method in the proposed rPPG application.
4.1.2. Face tracking. If the face was already detected in
the last iteration, a tracking algorithm is used instead of the
face detection one. First, some features are detected inside
the face region returned by the detection algorithm on the
previous frame, using Shi and Tomasi's Good Features
to Track algorithm [27]. These features are then
tracked forward to the current frame using the Kanade–Lucas–
Tomasi (KLT) algorithm [28]. From the previous and current
pixel positions of the tracked points a 2D rigid transformation
is estimated, and the face bounding box
is transformed accordingly.
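The rigid alignment step can be sketched with the standard least-squares (Kabsch) fit of a rotation and translation to point correspondences. This NumPy version is illustrative only (the paper's C++/OpenCV implementation is not reproduced here, and the test points are invented):

```python
import numpy as np

def estimate_rigid_2d(src, dst):
    """Least-squares rotation R and translation t with dst ~= src @ R.T + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)        # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Hypothetical tracked points, rotated by 30 degrees and shifted by (2, -1)
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
pts = np.array([[0., 0.], [10., 0.], [0., 5.], [7., 3.]])
moved = pts @ R_true.T + np.array([2., -1.])
R_est, t_est = estimate_rigid_2d(pts, moved)
```

Applying the estimated transform to the corners of the previous bounding box yields the updated box for the current frame.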
4.1.3. Pixel selection and signal creation. From the bounding
box containing the driver's face, a Region Of Interest
(ROI) centred on the subject's forehead is computed using
fixed proportions. For each frame, the signal sample is obtained
by averaging the light intensity measured by the camera
inside the ROI, yielding a 1D signal sampled at 100 Hz.
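A minimal Python sketch of this averaging step (the frame contents and the ROI coordinates are invented for illustration; the real frames are the 32×64 SPAD counts):

```python
import numpy as np

def extract_signal(frames, roi):
    """Mean intensity inside the ROI for each frame -> 1D signal at the frame rate."""
    y0, y1, x0, x1 = roi
    return np.array([f[y0:y1, x0:x1].mean() for f in frames])

# Five dummy 32x64 frames with constant values 0..4
frames = [np.full((32, 64), k, dtype=float) for k in range(5)]
sig = extract_signal(frames, (4, 10, 20, 40))   # hypothetical forehead ROI
```

With the camera running at 100 fps, one such mean per frame produces the 100 Hz pulse signal described in the text.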
4.1.4. Implementation details. Since the SPAD camera
output has a very small spatial resolution (32×64), the
output frames are scaled up by a factor of 10, using bicubic
interpolation, before applying the face detection and tracking
algorithms. A border padding of 50 pixels is also added in
order to detect faces very near the image borders or partially
outside them. The ROI coordinates are then scaled back
to the original resolution accordingly, and the signal is
extracted from the original-resolution frames. Although the
SPAD acquisition frame rate is set to 100 fps, the face
detection and tracking frequency is set to 10 fps and is
performed on a mean image obtained by averaging the pixel
values of the last 10 available frames. The signal extraction
part of the application is implemented in C++ exploiting the
OpenCV library [29].
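The preprocessing applied before detection can be sketched as follows. To stay dependency-free this illustrative Python version uses nearest-neighbour upscaling, whereas the paper's OpenCV implementation uses bicubic interpolation:

```python
import numpy as np

def preprocess(last10, scale=10, pad=50):
    """Average the last 10 frames, upscale x10 and zero-pad the borders."""
    mean_img = np.mean(last10, axis=0)                 # temporal mean image
    up = np.kron(mean_img, np.ones((scale, scale)))    # 32x64 -> 320x640
    return np.pad(up, pad)                             # 50-pixel border padding

imgs = [np.ones((32, 64)) * i for i in range(10)]      # dummy frame buffer
out = preprocess(imgs)
```

The detector then runs on the padded 420×740 image, and any detected ROI is divided by 10 (minus the padding offset) to index the original 32×64 frames.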
4.2. Signal Processing
Following the signal extraction stage, the signal processing
stage is performed. It is represented by the green part of
the diagram in Fig. 3, and is composed of three blocks
detailed in the following paragraphs.
4.2.1. Filtering. The signal extracted by the camera is
filtered with a Butterworth bandpass filter with bandwidth
between 0.4 Hz and 4 Hz, equivalent to 24 bpm and 240 bpm
respectively. This is done in order to remove any component
whose frequency is far from plausible heart rates.
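For illustration, the band-pass step can be emulated with an ideal FFT-domain mask; this is a stand-in for the paper's Butterworth filter, and the synthetic input (a 1.2 Hz "pulse" plus a DC offset and 20 Hz noise) is invented:

```python
import numpy as np

def bandpass_fft(signal, fs, lo=0.4, hi=4.0):
    """Ideal band-pass via FFT masking (stand-in for a Butterworth filter)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0        # zero out-of-band bins
    return np.fft.irfft(spectrum, n=len(signal))

fs = 100.0                                             # camera signal rate, as in the text
t = np.arange(0, 10, 1 / fs)
raw = np.sin(2 * np.pi * 1.2 * t) + 5.0 + 0.5 * np.sin(2 * np.pi * 20 * t)
clean = bandpass_fft(raw, fs)                          # only the 1.2 Hz component survives
```

A real-time implementation would prefer an IIR filter such as the Butterworth named in the text, since FFT masking requires a full buffered window.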
4.2.2. Average Heart Rate estimation. In order to estimate
the average heart rate from the camera signal, the following
two operations are performed. First, after applying the
preprocessing steps described in Sec. 4.2.1, the power
spectrum of the pulse signal is obtained by applying a Fast
Fourier Transform (FFT) to the filtered signal. Then, the
frequency corresponding to the peak of the power spectrum
is taken as the average Heart Rate estimate.
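A minimal Python sketch of this spectral estimator, run on a synthetic 75 bpm sinusoidal pulse (the test signal is invented; the 0.4–4 Hz search band matches the filter of Sec. 4.2.1):

```python
import numpy as np

def estimate_hr(signal, fs, lo=0.4, hi=4.0):
    """Average HR in bpm from the dominant peak of the power spectrum."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo) & (freqs <= hi)               # plausible HR band only
    return 60.0 * freqs[band][np.argmax(power[band])]  # Hz -> bpm

fs = 100.0
t = np.arange(0, 20, 1 / fs)            # 20 s window -> 0.05 Hz resolution
pulse = np.sin(2 * np.pi * 1.25 * t)    # synthetic pulse at 1.25 Hz = 75 bpm
hr = estimate_hr(pulse, fs)
```

Note that the frequency resolution, and hence the bpm granularity of this estimate, is set by the analysis window length (here 1/20 s = 0.05 Hz, i.e. 3 bpm).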
4.2.3. Tachogram estimation. Precisely detecting all the
heartbeat peaks inside the pulse signal is a fundamental
requirement for producing a good-quality tachogram. To
perform peak detection, the local maxima of the filtered
signal are detected by imposing two different thresholds,
one on the temporal distance between consecutive
maxima and one on the height of the maxima. Once all the
maxima are found, the average of the RR intervals is calculated,
providing information about the average distance between
two consecutive maxima. This value is used to adjust the
temporal threshold and perform a second run of the maxima-
searching function on the whole pulse wave, with the constraint
that there must be a maximum inside a window slightly
greater than the calculated average inter-beat interval.
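A simplified Python sketch of the first pass of such a peak detector (greedy selection with a height threshold and a minimum temporal distance; the adaptive refinement pass described above is omitted), producing the inter-beat intervals that form the tachogram:

```python
import numpy as np

def detect_peaks(signal, fs, min_dist_s=0.4, min_height=0.0):
    """Local maxima above min_height, kept greedily at least min_dist_s apart."""
    candidates = [i for i in range(1, len(signal) - 1)
                  if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]
                  and signal[i] > min_height]
    peaks = []
    for i in sorted(candidates, key=lambda i: -signal[i]):   # strongest first
        if all(abs(i - p) >= min_dist_s * fs for p in peaks):
            peaks.append(i)
    peaks.sort()
    return np.array(peaks)

fs = 100.0
t = np.arange(0, 10, 1 / fs)
pulse = np.sin(2 * np.pi * 1.0 * t)          # synthetic pulse, one beat per second
peaks = detect_peaks(pulse, fs, min_height=0.5)
ibi_ms = np.diff(peaks) / fs * 1000.0        # tachogram y-axis values, in ms
```

On real pulse signals the two thresholds and the second, average-adjusted pass are what prevent missed or spurious beats from corrupting the IBI series.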
5. Evaluation
In order to test the proposed rPPG system, we performed
tests both in controlled conditions and in an automotive
environment.
Figure 4. The rPPG visual output while running on a video acquired inside
the car. The estimated HR is reported in red.
Figure 5. HR estimation through FFT output maxima. Blue: rPPG PSD,
red: ECG PSD
5.1. Experiment setup
The first test involved 4 people and was conducted in
the laboratory, in controlled conditions, with subjects at rest
and with no head movements, recorded for 10 minutes. In the
second test we mounted the SPAD camera, equipped with an
8 mm lens, on the sun-shield holder of a car and acquired
some sequences. On the same support we also mounted an
LED illuminator producing infrared light (850 nm). For both
tests, the illuminator power was set taking into account
eye safety at the specific wavelength, considering that
the illuminator is positioned in the car cockpit around 50
cm from the driver. We also mounted on the lens an optical
filter centered at 850 nm (bandwidth ±40 nm) in order
to discard any other light source. For both tests, each
subject was also equipped with a portable ECG device in
order to collect ground-truth heart activity data.
5.2. Results
In all the 4 controlled steady acquisitions the average
heart rate estimation error was 0 bpm. On the other hand,
the root mean squared error between the tachograms obtained
with rPPG and with the ECG was 52.5 ms.
In Fig. 4 the output of the rPPG application running on
a driving sequence is shown. In particular, the red bounding
Figure 6. Tachogram estimation. Blue: rPPG tachogram, red: ECG
box represents the face detector output, the green area is
the set of pixels on which the signal is extracted (forehead)
while the green crosses are the tracked features. Below
the face bounding box the current heart rate estimation is
superimposed. In Figs. 5 and 6 the heart rate and tachogram
estimations for the same sequence of Fig. 4 are shown,
respectively. In both graphs the blue lines represent the
estimation obtained with the proposed rPPG application,
while the red ones are relative to the ECG ground truth. The
heart rate estimation coincides with the ground truth, while
the estimated tachogram is able to follow the trend of the
real one (with a root mean squared error of 77 ms between
the two curves).
The application is able to run in real time on an
ARM board (Hardkernel Odroid XU3, equipped with a
Samsung Exynos 5422 with Cortex™-A15 2.1 GHz quad-core
and Cortex™-A7 1.5 GHz quad-core CPUs), analyzing an
incoming SPAD input at 100 Hz and producing a new heart
rate estimation and tachogram every second.
6. Conclusions
In this work we presented a remote photoplethysmogra-
phy application with the final aim of checking in real time
the health and stress conditions of a driver. The application
runs in real time, receiving input video coming from a SPAD
camera. As described in Sec. 3, these particular photon-counting
cameras can work in dark environments and can detect
the small fluctuations in light intensity caused by the pulsing of
blood in the vessels underneath the skin. In Sec. 4 we described
the rPPG algorithm that is used in order to analyze each
frame, coming from the SPAD camera, and to extract the
pulse signal and consequently estimate the heart rate. In
Sec. 5 we assessed the accuracy of our system by testing it in
realistic conditions, with the camera mounted inside a car, and
by comparing the obtained results with those of a commercial
wearable ECG device, confirming that the rPPG system can
estimate the heart rate with high accuracy and is also able
to produce good-quality tachograms.
The benefits of the proposed application are numerous,
in particular in automatically detecting, without contact,
situations of acute driver illness. This could lead, for example,
to the activation of an autonomous driving mechanism
in order to potentially avoid car accidents.
Acknowledgments

This work has been supported by the DEIS project (Dependability
Engineering Innovation for automotive CPS),
funded by the European Union's Horizon 2020 research and
innovation programme under grant no. 732242.
References

[1] P. Rouast, M. Adam, R. Chiong, D. Cornforth, and E. Lux, “Remote
heart rate measurement using low-cost RGB face video: a technical
literature review,” Frontiers of Computer Science, pp. 1–15, 2017.
[2] P. Mehrotra, B. Chatterjee, and S. Sen, “Em-wave biosensors: A
review of rf, microwave, mm-wave and optical sensing,” Sensors,
vol. 19, no. 5, p. 1013, 2019.
[3] Z. Yang, P. H. Pathak, Y. Zeng, X. Liran, and P. Mohapatra, “Vital
sign and sleep monitoring using millimeter wave,” ACM Transactions
on Sensor Networks, vol. 13, no. 2, pp. 1–32, 2017.
[4] M. Fukunishi, K. Kurita, S. Yamamoto, and N. Tsumura, “Video
based measurement of heart rate and heart rate variability spec-
trogram from estimated hemoglobin information,” 2018 IEEE/CVF
Conference on Computer Vision and Pattern Recognition Workshops
(CVPRW), pp. 1405–14 057, 2018.
[5] D. Bronzi, F. Villa, S. Tisa, A. Tosi, and F. Zappa, “SPAD figures
of merit for photon-counting, photon-timing, and imaging applications:
A review,” IEEE Sensors Journal, vol. 16, pp. 3–12, 2016.
[6] D. Bronzi, F. Villa, S. Tisa, A. Tosi, F. Zappa, D. Durini, S. Weyers,
and W. Brockherde, “100 000 frames/s 64 × 32 single-photon detector
array for 2-D imaging and 3-D ranging,” IEEE Journal of Selected
Topics in Quantum Electronics, vol. 20, no. 6, 2014.
[7] D. Bronzi, Y. Zou, F. Villa, S. Tisa, A. Tosi, and F. Zappa, “Automo-
tive three-dimensional vision through a single-photon counting spad
camera,” IEEE Transactions on Intelligent Transportation Systems,
vol. 17, no. 3, pp. 782–795, March 2016.
[8] R. Lussana, F. Villa, A. D. Mora, D. Contini, A. Tosi, and F. Zappa,
“Enhanced single-photon time-of-flight 3D ranging,” Opt. Express,
vol. 23, no. 19, pp. 24962–24973, Sep 2015.
[9] A. B. Hertzman, “Photoelectric plethysmography of the fingers and
toes in man,” Proceedings of the Society for Experimental Biology
and Medicine, vol. 37, no. 3, pp. 529–534, 1937.
[10] Y. Sun, V. Azorin-Peris, R. Kalawsky, S. Hu, C. Papin,
and S. E. Greenwald, “Use of ambient light in remote
photoplethysmographic systems: comparison between a high-
performance camera and a low-cost webcam,” Journal of Biomedical
Optics, vol. 17, 2012.
[11] W. Verkruysse, L. Svaasand, and J. S. Nelson, “Remote plethysmo-
graphic imaging using ambient light,” Optics Express, vol. 16, no. 26,
pp. 63–86, 2008.
[12] J. Moreno, J. Ramos-Castro, J. Movellan, E. Parrado, G. Rodas, and
L. Capdevila, “Facial video-based photoplethysmography to detect
HRV at rest,” International Journal of Sports Medicine, vol. 36, no. 6,
pp. 474–480, 2015.
[13] P. V. Rouast, M. P. Adam, V. Dorner, and E. Lux, “Remote photo-
plethysmography: Evaluation of contactless heart rate measurement in
an information systems setting,” Applied Informatics and Technology
Innovation Conference, pp. 1–17, 2016.
[14] N. Docampo and P. Casas, “Heart rate estimation using facial video
information,” Ph.D. dissertation, 2011.
[15] L. Iozzia, L. Cerina, and L. Mainardi, “Relationships between heart-
rate variability and pulse-rate variability obtained from video-PPG
signal using ZCA,” Physiological Measurement, vol. 37, no. 11, pp.
1934–1944, 2016.
[16] E. Tasli, A. Gudi, and M. Uyl, “Remote PPG based vital sign
measurement using adaptive facial regions,” International
Conference on Image Processing (ICIP), pp. 1410–1414, 2014.
[17] G. De Haan and V. Jeanne, “Robust pulse rate from chrominance-
based rPPG,” IEEE Transactions on Biomedical Engineering, vol. 60,
no. 10, pp. 2878–2886, 2013.
[18] M. Poh, D. J. McDuff, and R. W. Picard, “Non-contact, automated
cardiac pulse measurements using video imaging and blind source
separation,” Optics Express, vol. 18, no. 10, p. 10762, 2010.
[19] S. Tulyakov, X. Alameda-Pineda, E. Ricci, L. Yin, J. F.
Cohn, and N. Sebe, “Self-Adaptive Matrix Completion for
Heart Rate Estimation from Face Videos under Realistic
Conditions,” 2016 IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), pp. 2396–2404, 2016.
[20] F. AL-Khalidi, R. Saatchi, D. Burke, and H. Elphick, “Facial tracking
method for noncontact respiration rate monitoring,” Communication
Systems Networks and Digital Signal Processing (CSNDSP), 2010
7th International Symposium on, pp. 751–754, 2010.
[21] S. Cova, M. Ghioni, A. Lacaita, C. Samori, and F. Zappa, “Avalanche
photodiodes and quenching circuits for single-photon detection,” Ap-
plied Optics, vol. 35, pp. 1956–1976, 1996.
[22] M. Anti, A. Tosi, F. Acerbi, and F. Zappa, “Modeling of afterpulsing
in single-photon avalanche diodes,” in Proc. SPIE, 2011,
pp. 79331R-1–79331R-8.
[23] G. Giraud, H. Schulze, D.-U. Li, T. Bachmann, J. Crain, D. Tyndall,
J. Richardson, R. Walker, D. Stoppa, E. Charbon, R. Henderson, and
J. Arlt, “Fluorescence lifetime biosensing with dna microarrays and
a cmos-spad imager,” Biomedical Optics Express, vol. 1, no. 5, pp.
1302–1308, 12 2010.
[24] X. Michalet, A. Ingargiola, R. A. Colyer, G. Scalia, S. Weiss,
P. Maccagnani, A. Gulinatti, I. Rech, and M. Ghioni, “Silicon photon-
counting avalanche diodes for single-molecule fluorescence spec-
troscopy,” IEEE Journal of Selected Topics in Quantum Electronics,
vol. 20, no. 6, pp. 248–267, Nov 2014.
[25] P. Seitz and A. J. P. Theuwissen, Single-Photon Imaging (Springer
Series in Optical Sciences). Springer, 2013.
[26] P. Viola and M. J. Jones, “Robust real-time face detection,” Int. J.
Comput. Vision, vol. 57, no. 2, pp. 137–154, May 2004.
[27] J. Shi and C. Tomasi, “Good features to track,” in Proc. IEEE Conference
on Computer Vision and Pattern Recognition, 1994, pp. 593–600.
[28] J.-Y. Bouguet, “Pyramidal implementation of the Lucas–Kanade
feature tracker,” Intel Corporation, Microprocessor Research Labs.
[29] G. Bradski, “The OpenCV Library,” Dr. Dobb’s Journal of Software
Tools, 2000.
... En effet, le but final est de pouvoir détecter une baisse de vigilance afin de prévenir le conducteur pour qu'il s'arrête. Paracchini et al. ont par exemple publié une étude portant sur cette application[105]. Ce capteur présentant l'intérêt d'être porté en continu et donc de pouvoir avoir une plus grande connaissance de l'utilisateur. ...
Le rythme respiratoire est une information importante dans le contexte médical puisqu'elle permet de prédire plusieurs complications potentiellement mortelles.Malgré cela, elle est souvent négligée par le personnel médical faute de temps ou de bien comprendre les enjeux associés.Dans ce contexte, les méthodes de mesure automatisées permettent d'améliorer le statu quo en fournissant en continu une mesure du rythme respiratoire.La plupart des méthodes actuelles comme la ceinture respiratoire ou l'ECG nécessitent un contact avec la personne pour pouvoir mesurer efficacement le rythme respiratoire.Malheureusement, cela introduit des problèmes qui peuvent empêcher la mesure dans certains cas ou la rendre contraignante lors d'une mesure en continu et au quotidien, là où il serait souhaitable que la mesure soit la plus discrète possible.Afin de pallier à ces problèmes, plusieurs méthodes de mesure du rythme respiratoire sans contact sont actuellement en développement.Parmi celles-ci, la photopléthysmographie sans contact utilise la variation de la couleur de la peau en fonction du volume sanguin présent dans les capillaires afin de trouver un signal cardiaque et respiratoire.Dans la thèse présentée, nous nous attachons à améliorer la qualité de la mesure du rythme respiratoire à l'aide de la photopléthysmographie sans contact en développant des méthodes dont le but est de combiner efficacement les signaux couleur extraits à partir d'une vidéo de manière à obtenir un seul signal maximisant l'information respiratoire.Dans un deuxième temps, une chaîne de traitement est mise en place de façon à utiliser ces méthodes de combinaison pour déterminer le rythme respiratoire en utilisant toutes les informations pouvant être extraites du signal photopléthysmographique.
... Being able to constantly check, in real time and without any contact, the health condition of a person could have a significant impact in many different situations. Possible applications include fitness assessments [1], medical diagnosis [1], and driver monitoring [2]. The act of extracting biomedical information analyzing video capture is called remote photoplethysmography (rPPG) or imaging photoplethysmography (iPPG) [1]. ...
Full-text available
The problem of performing remote biomedical measurements using just a video stream of a subject face is called remote photoplethysmography (rPPG). The aim of this work is to propose a novel method able to perform rPPG using single-photon avalanche diode (SPAD) cameras. These are extremely accurate cameras able to detect even a single photon and are already used in many other applications. Moreover, a novel method that mixes deep learning and traditional signal analysis is proposed in order to extract and study the pulse signal. Experimental results show that this system achieves accurate results in the estimation of biomedical information such as heart rate, respiration rate, and tachogram. Lastly, thanks to the adoption of the deep learning segmentation method and dependability checks, this method could be adopted in non-ideal working conditions—for example, in the presence of partial facial occlusions.
... This kind of cameras is capable to detect even a single photon (Bronzi et al., 2016a), has extremely high frame rate (Bronzi et al., 2014) and has proved to be useful in a very large range of applications (Bronzi et al., 2016b), such as 3D optical ranging (LIDAR), Positron Emission Tomography (PET) and many others. In some rPPG works (Paracchini et al., 2019) SPAD cameras are used instead of traditional ones, where their high precision are useful in measure accurately the skin intensity fluctuations produced by the blood flow. On the other hand, due to the complexity of the SPAD sensor, this kind of cameras has a very small spatial resolution, 64x32 in Bronzi et al. (2014), and produces grayscale intensity image, since the low spatial resolution does not allow the use of Bayer filters. ...
In this work we present a facial skin detection method, based on a deep learning architecture, that is able to precisely associate a skin label to each pixel of a given image depicting a face. This is an important preliminary step in many applications, such as remote photoplethysmography (rPPG), in which the heart rate of a subject needs to be estimated by analyzing a video of his/her face. The proposed method can detect skin pixels even in low-resolution grayscale face images (64 × 32 pixels). A dataset is also described and proposed in order to train the deep learning model. Given the small amount of data available, a transfer learning approach is adopted and validated in order to learn to solve the skin detection problem by exploiting a colorization network. Qualitative and quantitative results are reported testing the method on different datasets, in the presence of varying illumination, facial expressions and object occlusions; the method works regardless of the gender, age and ethnicity of the subject.
This article presents a broad review on optical, radio-frequency (RF), microwave (MW), millimeter wave (mmW) and terahertz (THz) biosensors. Biomatter-wave interaction modalities are considered over a wide range of frequencies, and applications such as detection of cancer biomarkers, biotin, neurotransmitters and heart rate are presented in detail. Biological tissue can be treated as a dielectric substance with a unique dielectric signature, characterized by frequency-dependent parameters such as permittivity and conductivity. By observing the unique permittivity spectrum, cancerous cells can be distinguished from healthy ones, and by measuring the changes in permittivity, concentrations of medically relevant biomolecules such as glucose, neurotransmitters, vitamins and proteins, as well as ailments and abnormalities, can be detected. In the case of optical biosensors, any change in permittivity is transduced to a change in optical properties such as photoluminescence, interference pattern, reflection intensity and reflection angle through techniques like quantum dots, interferometry, surface-enhanced Raman scattering or surface plasmon resonance. Conversely, in the case of RF, MW, mmW and THz biosensors, capacitive sensing is most commonly employed, where changes in permittivity are reflected as changes in capacitance through components like interdigitated electrodes, resonators and microstrip structures. In this paper, interactions of EM waves with biomatter are considered, with an emphasis on a clear demarcation of the various modalities, their underlying principles and applications.
As a source of valuable information about a person's affective state, heart rate data has the potential to improve both understanding and experience of human-computer interaction. Conventional methods for measuring heart rate use skin contact methods, where a measuring device must be worn by the user. In an Information Systems setting, a contactless approach without interference in the user's natural environment could prove to be advantageous. We develop an application that fulfils these conditions. The algorithm is based on remote photoplethysmography, taking advantage of the slight skin color variation that occurs periodically with the user's pulse. When evaluating this application in an Information Systems setting with various arousal levels and naturally moving subjects, we achieve an average root mean square error of 7.32 bpm for the best performing configuration. We find that a higher frame rate yields better results than a larger size of the moving measurement window. Regarding algorithm specifics, we find that a more detailed algorithm using the three RGB signals slightly outperforms a simple algorithm using only the green signal.
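The "simple algorithm using only the green signal" mentioned above amounts to averaging the green channel over the skin region in each frame to build a raw pulse signal. The sketch below is an illustration of that idea, not the cited paper's code; the frame format and ROI mask are assumptions.

```python
import numpy as np

def green_channel_signal(frames, roi_mask):
    """Raw pulse signal: the mean green intensity over the skin
    region of interest (ROI) in each video frame."""
    samples = []
    for frame in frames:
        green = frame[:, :, 1].astype(float)    # G channel of an RGB frame
        samples.append(green[roi_mask].mean())  # spatial average over skin
    return np.asarray(samples)

# Toy example: four 8x8 frames whose green level increases by 1 per frame
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True                           # fake skin ROI
frames = [np.full((8, 8, 3), 100 + k, dtype=np.uint8) for k in range(4)]
print(green_channel_signal(frames, mask))       # [100. 101. 102. 103.]
```

The more detailed three-channel algorithms the study evaluates combine all RGB means (e.g. via chrominance projections) instead of using the green trace alone.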
In this paper, classical time- and frequency-domain variability indexes obtained from pulse rate variability (PRV) series extracted from video-photoplethysmography signals (vPPG) were compared with heart rate variability (HRV) parameters extracted from ECG signals. The study focuses on the analysis of the changes observed during a rest-to-stand manoeuvre (a mild sympathetic stimulus) performed on 60 young, normal subjects (age: 24 ± 3 years). The objective is to evaluate whether video-derived PRV indexes may replace HRV in the assessment of autonomic responses to external stimulation. Video recordings were performed with a GigE Sony XCG-C30C camera and analyzed offline to extract the vPPG signal. A new method based on zero-phase component analysis (ZCA) was employed in combination with a fully-automatic method for detection and tracking of regions of interest (ROI) located on the forehead, the cheek and the nose. Results show an overall agreement between time- and frequency-domain indexes computed on HRV and PRV series. However, some differences exist between resting and standing conditions. During rest, all the indexes computed on HRV and PRV series were not statistically significantly different (p > 0.05) and showed high correlation (Pearson's r > 0.90). The agreement decreases during standing, especially for the high-frequency, respiration-related parameters such as RMSSD (r = 0.75), pNN50 (r = 0.68) and HF power (r = 0.76). Finally, the power in the LF band (n.u.) was observed to increase significantly during standing by both HRV (28 ± 14 versus 45 ± 16 n.u.; rest versus standing) and PRV (26 ± 12 versus 30 ± 13 n.u.; rest versus standing) analysis, but the increase was smaller in the PRV parameters than that observed in the HRV indexes. These results provide evidence that some differences exist between variability indexes extracted from HRV and video-derived PRV, mainly in the HF band during standing.
However, despite these differences, the video-derived PRV indexes were able to reveal the autonomic responses expected from the sympathetic stimulation induced by the rest-to-stand manoeuvre.
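The time-domain indexes compared in that study, RMSSD and pNN50, have standard definitions that can be computed directly from a series of inter-beat (RR) intervals. The following is an illustrative sketch with made-up interval values, not the study's analysis code.

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive differences of the RR intervals (ms)."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def pnn50(rr_ms):
    """Percentage of successive RR interval differences exceeding 50 ms."""
    diffs = np.abs(np.diff(np.asarray(rr_ms, dtype=float)))
    return float(100.0 * np.mean(diffs > 50.0))

rr = [800, 810, 790, 850, 795]  # inter-beat intervals in milliseconds
print(round(rmssd(rr), 1))  # 42.2
print(pnn50(rr))            # 50.0
```

In PRV analysis the intervals come from successive pulse-wave peaks in the vPPG signal rather than from ECG R-peaks, which is precisely where the HF-band discrepancies reported above arise.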
Remote Photoplethysmography (rPPG) allows remote measurement of the heart rate using low-cost RGB imaging equipment. In this paper, we review the development of the field since its emergence in 2008, classify existing approaches for rPPG, and derive a framework that provides an overview of modular steps. Based on this framework, practitioners can use the classification to orchestrate algorithms to an rPPG approach that suits their specific needs. Researchers can use the reviewed and classified algorithms as a starting point to improve particular features of an rPPG algorithm.
We present an optical 3-D ranging camera for automotive applications that is able to provide a centimeter depth resolution over a 40° × 20° field of view up to 45 m with just 1.5 W of active illumination at 808 nm. The enabling technology we developed is based on a CMOS imager chip of 64 × 32 pixels, each with a single-photon avalanche diode (SPAD) and three 9-bit digital counters, able to perform lock-in time-of-flight calculation of individual photons emitted by a laser illuminator, reflected by the objects in the scene, and eventually detected by the camera. Thanks to the SPAD single-photon sensitivity and the smart in-pixel processing, the camera provides state-of-the-art performance at both high frame rates and very low light levels without the need for scanning and with global-shutter benefits. Furthermore, the CMOS process is automotive certified.
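The time-of-flight ranging principle behind this SPAD imager reduces to a round-trip calculation: light covers the camera-to-target path twice, so distance is d = c·t/2, and depth resolution scales the same way with timing resolution. The numbers below are back-of-the-envelope illustrations consistent with the 45 m range quoted above, not figures from the paper; the actual lock-in processing is more involved.

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s):
    """Target distance from a photon's round-trip time of flight."""
    return C * round_trip_s / 2.0  # light covers the path twice

def depth_resolution_m(timing_resolution_s):
    """Depth resolution achievable with a given timing resolution."""
    return C * timing_resolution_s / 2.0

# A ~300 ns round trip corresponds to roughly the camera's 45 m maximum range
print(round(tof_distance_m(300e-9), 1))             # 45.0
# Centimeter depth resolution requires ~67 ps timing resolution
print(round(depth_resolution_m(67e-12) * 1000, 1))  # 10.0 (mm)
```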
Continuous monitoring of a person's breathing and heart rates is useful for maintaining better health and for the early detection of many health issues. Designing a technique that enables contactless and ubiquitous vital sign monitoring is a challenging research problem. This article presents mmVital, a system that uses 60 GHz millimeter wave (mmWave) signals for vital sign monitoring. We show that the mmWave signals can be directed at a human's body and the Received Signal Strength (RSS) of the reflections can be analyzed for accurate estimation of breathing and heart rates. We show how the directional beams of mmWave can be used to monitor multiple humans in an indoor space concurrently. mmVital also provides sleep monitoring with sleeping posture identification and detection of central apnea and hypopnea events. It relies on a novel human-finding procedure in which a human can be located within a room by reflection loss-based object/human classification. We evaluate mmVital using a 60 GHz testbed in home and office environments and show that it provides a mean estimation error of 0.43 breaths per minute (Bpm; breathing rate) and 2.15 beats per minute (bpm; heart rate). It can also locate the human subject with 98.4% accuracy within 100 ms of dwell time on reflection. We also demonstrate that mmVital is effective in monitoring multiple people in parallel and even behind a wall.
Recent studies in computer vision have shown that, while practically invisible to a human observer, skin color changes due to blood flow can be captured in face videos and, surprisingly, be used to estimate the heart rate (HR). While considerable progress has been made in the last few years, many issues remain open. In particular, state-of-the-art approaches are not robust enough to operate in natural conditions (e.g. in the case of spontaneous movements, facial expressions, or illumination changes). Unlike previous approaches that estimate the HR by processing all the skin pixels inside a fixed region of interest, we introduce a strategy to dynamically select face regions useful for robust HR estimation. Our approach, inspired by recent advances in matrix completion theory, allows us to predict the HR while simultaneously discovering the best regions of the face to be used for estimation. Thorough experimental evaluation conducted on public benchmarks suggests that the proposed approach significantly outperforms state-of-the-art HR estimation methods in naturalistic conditions.
Photoelectric plethysmographs for the fingers and toes are described which use electrocardiographs for the recording and which have definite advantages in routine clinical observations on the circulation. The validity of the technique is established (1) by comparison of the photoelectric records with simultaneous records obtained with transmission plethysmographs, (2) by comparison of the photoelectric records in instances of circulatory disturbances with independent directional confirmation by other methods in the literature.