Remote PhotoPlethysmoGraphy Using SPAD Camera
for Automotive Health Monitoring Application
Marco Paracchini, Lorenzo Marchesi, Klaus Pasquinelli,
Marco Marcon, Giulio Fontana, Alessandro Gabrielli and Federica Villa
Dipartimento di Elettronica, Informazione e Bioingegneria
Politecnico di Milano, Milan, Italy
Abstract—Remote PhotoPlethysmoGraphy (rPPG) applications
make it possible to extract cardiac information simply by
analyzing a video stream of a person's face. In this work we
propose the use of a Single-Photon Avalanche Diode (SPAD)
camera to perform rPPG with higher accuracy, especially in
low-illumination conditions, exploiting the higher sensitivity of
SPAD sensors. In particular, we suggest the adoption of an
rPPG application in an automotive environment in order to
monitor, in a non-invasive fashion, the driver's health state and
potentially avoid accidents caused by acute illness. Quantitative
results are shown, obtained in realistic situations in which the
SPAD camera was mounted inside a vehicle cockpit, and the
rPPG heart rate estimations are compared with data collected
with a portable wearable ECG device.
1. Introduction
Remote PhotoPlethysmoGraphy (rPPG) applications aim
at estimating the heart rate (or other bio-medical
information) of a subject given a video of
his/her face. Typically a signal, representing the time
variation of the light intensity reﬂected by the skin, is
extracted using a camera. This signal is related to the heart
activity since the transition of blood in the vessel underneath
the skin varies the absorption and reﬂection coefﬁcients of
the skin itself. The signal is then consequently analyzed
in order to estimate the heart rate of the subject and/or
other bio-medical measurements. What is recorded by the
camera is actually the pulse signal since the camera can
detect the changes in blood ﬂow linked to the contraction of
the heart. This signal is different from the electrocardiogram
since the sources of the two signals are physically different:
the former is mechanical while the latter is electrical. Pulse
rate and heart rate are not necessarily synchronized due to
mechanical delays, but they show the same frequencies and
frequency trends. For this reason, by analyzing the pulse
wave it is possible to measure the Heart Rate (HR) and the
Heart Rate Variability (HRV), and hence the sympatho-vagal
balance, obtaining quantitative information about the
functioning and the activation of the autonomic nervous system.
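As a concrete example of a time-domain HRV index derived from the inter-beat intervals, the sketch below computes RMSSD from a hypothetical IBI series; the paper itself does not prescribe a specific HRV measure, so both the index and the sample values are illustrative:

```python
import numpy as np

def rmssd(ibi_ms):
    """Root Mean Square of Successive Differences, a common
    time-domain HRV index computed from Inter-Beat Intervals (ms)."""
    diffs = np.diff(np.asarray(ibi_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

# Hypothetical IBI series (ms) around a 75 bpm resting rhythm.
ibi = [800.0, 810.0, 790.0, 805.0, 795.0]
print(round(rmssd(ibi), 1))  # 14.4
```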
In recent years alternative methods able to estimate a
subject's heart rate remotely have been developed. Approaches
based on millimeter waves (mmW) are one example.
Instead of analyzing an optical signal, these methods
utilize a transmitter and receiver system in order to detect the
subject's slight movements caused by the heartbeat, exploiting
the Doppler effect. Although these techniques can produce
good results in a controlled environment, they generally
perform worse in the presence of movements and uncooperative
subjects.
The majority of rPPG applications utilize a standard
RGB camera, based on CMOS or CCD technology, in
order to acquire the video stream. The goal of this work is to
perform rPPG using a SPAD (i.e. Single-Photon Avalanche
Diode) camera. This kind of camera is capable of detecting
even a single photon, has an extremely high frame
rate and has proved to be useful in a wide field of
applications, such as 3D optical ranging (LIDAR),
Positron Emission Tomography (PET), quantum microscopy
and so on. In rPPG applications its high precision could be
useful in accurately measuring the skin intensity fluctuations
produced by the blood flow.
In this work we propose to use a rPPG method based
on a SPAD camera in an automotive application with the
aim of monitoring the health condition of the driver. The
idea is to develop an application that could run in real time
on a computational unit equipped on the car that is able to
extract the pulse signal and analyze it in order to consistently
monitor the driver's health condition. These data could then
be used to enable particular features of the vehicle, such as
autonomous driving, that could take control of the vehicle
and avoid accidents in case of detected driver sickness
or altered emotional state. All the acquired parameters could
also be transmitted to a cloud-based system in order to
constantly monitor the health condition and the emotional
state of the driver.
The rest of the paper is organized as follows: in Sec. 2
the state of the art of rPPG applications is analyzed; in
Sec. 3 the fundamental working principles of the SPAD
camera are outlined, also describing the actual camera used
during the tests. On the software side, in Sec. 4 the signal
processing steps needed to extract heart rate information
from the camera data stream are described. Furthermore, in
Sec. 5 some results, obtained in realistic working conditions,
are shown and compared with a commercial ECG device.
Finally in Sec. 6 the conclusions of this work are presented.
2. Related work
Contact photoplethysmography (PPG) is a simple technique,
dating back to the 1930s, in which a light source is used
to measure blood volume changes related to the pulsating
nature of the circulatory system. In more recent years,
starting from 2008, it was demonstrated that PPG can
be performed remotely using ambient light, and since then
many studies focused on the extraction of heart rate from
video cameras have been published. The goal of the most
recent studies is to obtain the tachogram, which is defined
as a chart reporting time on the x axis and the Inter-Beat
Interval (IBI) on the y axis, by performing
remote photoplethysmography.
Although Verkruysse et al. showed that a video captured
with a common RGB camera is enough to obtain a
plethysmographic signal from which HR and respiration
rate can be measured, the choice of the camera is critical,
especially in uncontrolled conditions. Cameras used in the
literature are commonly RGB CCD (Charge Coupled Device)
cameras; some studies use webcams integrated in laptops,
while others record video using compact cameras or
Gigabit-Ethernet cameras. Particular attention is commonly
paid to the acquisition frequency, but different works provide
different values: Poh considers 15 fps, while others acquire
at 20 to 60 fps. Some works define the regions from which
to extract the pulse signal by manually choosing the pixels of
the image corresponding to the skin of the subject, but this
is not a feasible choice in an automatic application. On the
other hand, some modern rPPG applications involve the use
of face detection and tracking algorithms in order to select
an appropriate Region Of Interest (ROI). While using an
RGB camera instead of a monochrome one could bring some
benefits for rPPG applications, P. V. Rouast et al. arrived at
the conclusion that the G channel contains enough information,
and also recommend the use of the single G channel in
order to reduce computational costs and to enable online
processing.
To the best of our knowledge, no study has ever been
conducted with the aim of performing rPPG using single-photon
cameras in order to achieve good results also in low-illumination
conditions. Furthermore, we propose for the first time
the use of a camera without RGB Bayer filters, in order to
exploit near-infrared (NIR) illumination.
3. SPAD camera
The main advantage of working with SPAD cameras,
instead of standard CMOS and CCD cameras,
is the ability to handle low-signal situations and dark
environments. This is made possible by using SPAD sensors
instead of conventional pixels, which proportionally
convert the incoming light into electric charge. Due to their
Figure 1. SPC3 SPAD camera commercialized by Micro Photon Devices
(MPD) for single-photon counting applications.
design, SPADs are photodetectors able to detect even a single
photon. Considering a p-n junction biased above its breakdown
voltage, a single incident photon is able to trigger an avalanche;
this means that from a single-photon event a digital output is
obtained. After the photon has been detected, the avalanche is
then stopped in order to avoid unnecessary power dissipation
due to the avalanche itself, and the SPAD is rearmed, making it
able to detect another photon; dedicated hardware is implemented
to achieve these goals. A simple solution is the use of a
properly sized resistor in series with the SPAD: after a photon
event the parasitic capacitance of the photodetector is
discharged, and the resistor then recharges it, rearming the
SPAD. This is a simple but slow solution. A better one is
the use of an Active-Quenching Circuit (AQC), which provides
faster quenching in a smaller area with respect to the
previously mentioned solution. When a photon triggers the
avalanche, the AQC powers down the SPAD in order to rearm
it; this gives a hold-off time during which the photodetector
is completely blind. This period can be made adjustable:
short hold-off periods (in the order of 20 ns) are required in
applications where high photon fluxes are present, at the cost
of high afterpulsing (i.e. the retriggering of the SPAD due to
charges trapped during the previous avalanche), while much
longer hold-off periods are required when weak signals are
present or when afterpulsing can heavily affect the measurement
and thus its reliability.
3.1. Counting, timing and other applications
A simple use of the SPAD is photon counting: a counter
is added at the output of the photodetector and each photon
increases the counter value by one. This behavior can be
compared to a conventional camera pixel that integrates
the light signal over time. The advantage of using a
SPAD in this way is the single-photon resolution of the
integrated signal. This technology is currently applied to the
observation of fluorescence, spectroscopy, night
vision and other fields. A more advanced use is the
Figure 2. Block diagram (left) and micrograph (right) of the 64 x 32 SPAD array chip in an high-voltage CMOS 0.35 µm technology.
Time-Of-Flight (TOF) measurement, in which a pulsed
illuminator sends a pulsed signal to the target. Due to
the presence of background light, in order to obtain a reliable
measurement, many repetitions are needed, so as to build
a histogram of the arrival times of the photons. Once the
histogram peak has been found, the TOF measurement is
complete. TOF measurements done in this way can be
used to map 3D scenes in dark conditions or to realize
LIDAR (Light Detection and Ranging) systems in conditions
that are not heavily illuminated, or with a proper Field Of View
(FOV) and proper optical filters.
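The histogram-based TOF procedure described above can be sketched as follows; the pulse arrival time, bin width and photon statistics are illustrative assumptions, not values taken from this work:

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def tof_distance(arrival_times_ns, bin_width_ns=0.5):
    """Estimate target distance from repeated photon arrival times by
    histogramming them and taking the peak bin (round-trip time)."""
    t = np.asarray(arrival_times_ns, dtype=float)
    bins = np.arange(t.min(), t.max() + bin_width_ns, bin_width_ns)
    counts, edges = np.histogram(t, bins=bins)
    peak_ns = edges[np.argmax(counts)] + bin_width_ns / 2  # bin center
    return C * peak_ns * 1e-9 / 2  # half the round trip, in meters

rng = np.random.default_rng(0)
# Hypothetical data: signal photons around 20 ns plus uniform background.
signal = rng.normal(20.0, 0.2, 500)
background = rng.uniform(0.0, 50.0, 500)
d = tof_distance(np.concatenate([signal, background]))
print(round(d, 2))  # roughly 3 m (a 20 ns round trip)
```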
3.2. SPC3 SPAD Camera
The camera used in this project is based on a SPAD array
developed by Politecnico di Milano. The whole camera
has been developed and commercialized by Micro Photon
Devices (MPD) and belongs to the SPC3 SPAD camera
series. In Fig. 1, a picture of the SPC3 camera is shown. As
reported in the diagram in Fig. 2, the matrix is composed of
32×64 pixels; each pixel produces an unsigned 9-bit integer
output and contains a 30 µm SPAD, the AQC, counters
and memories. The camera, connected through a USB 3.0
interface, can be used in counting mode and is capable of
reaching 96 kframe/s, which, for the purpose of this project,
is more than enough. To recover part of the efficiency
lost due to the low (3%) fill factor of the pixel (caused
by the presence of electronics in the pixel), the matrix
has been equipped with microlenses that provide a partial
enhancement of performance (80% equivalent fill factor
for parallel light beams). This camera has a maximum
Photon Detection Efficiency (PDE) of about 50% at around
400 nm, and the readout is completely parallel for all the 2048
pixels of the matrix, which makes it possible to realize a global
shutter. Another important metric for SPAD cameras is the
detector intrinsic noise, called Dark Counting Rate (DCR).
Dark counts are triggering events that are not associated
with photons but are related to other kinds of generation (such
as thermal generation); this parameter affects the signal-to-noise
ratio in low-signal regimes. In this project the camera has been
used at 100 fps; considering that the SPC3 SPAD has a
dark count rate of around 100 cps (counts per second), the
DCR contribution is negligible with respect to the detected signal.
An FPGA is used to read out the camera, to sum consecutive
frames in order to reduce the final frame rate while increasing
the count depth up to 16 bits, and to transfer the data to a PC
through the USB 3.0 interface. In order to acquire at 100
fps the camera is set to continuous acquisition mode; in
this mode a start command is given externally (from the
computational unit) and the frames are acquired and stored
in the FPGA internal memory, used as a buffer. Arrays of
contiguous frames are then transferred to the PC at 10 Hz
in order to perform the operations described in Sec. 4. Each
frame is obtained by summing in the FPGA the results of 500
acquisitions, each obtained with an exposure time of 20 µs,
in order to collect all the incident photons while at the same
time avoiding saturation issues and increasing the dynamic range
of the internal counters.
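The accumulation scheme can be verified numerically: 500 sub-frames of 20 µs each yield one summed frame every 10 ms, i.e. the 100 fps output rate, with 9-bit per-exposure counts summed into 16-bit accumulators. The per-pixel photon rate below is a hypothetical value used only for the simulation:

```python
import numpy as np

N_ACQ, EXPOSURE_US = 500, 20           # 500 sub-frames of 20 µs each
frame_period_ms = N_ACQ * EXPOSURE_US / 1000
print(frame_period_ms)                  # 10.0 -> one summed frame every 10 ms (100 fps)

rng = np.random.default_rng(1)
# Hypothetical mean photon count per pixel per 20 µs sub-frame.
subframes = rng.poisson(2.0, size=(N_ACQ, 32, 64)).astype(np.uint16)
# Each sub-frame count fits in 9 bits; the sum fits comfortably in 16 bits.
frame = subframes.sum(axis=0, dtype=np.uint16)
print(frame.shape, int(frame.max()) < 2**16)
```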
4. rPPG algorithm

Like the majority of rPPG applications, the one proposed
in this paper is composed of two consecutive stages:
signal extraction and signal analysis. A diagram of
the complete algorithm pipeline is shown in Fig. 3.
4.1. Signal extraction
The signal extraction algorithm is composed of three
components (face detection, face tracking, and pixel selection
and signal creation), represented with blue blocks in the
diagram in Fig. 3, which will be described in the next
paragraphs.
4.1.1. Face detection. The first task of the signal extraction
stage is to localize the driver's face in the live video
streamed by the SPAD camera. In order to perform this step,
the Viola-Jones method is used. This is an accurate,
efficient and fast method for object detection, widely used
in many Computer Vision applications, and it can be applied
to low-resolution images such as those acquired by the SPAD
Figure 3. rPPG algorithm pipeline. The blue blocks belong to the Signal Extraction stage while the green ones belong to the Signal Processing stage.
camera. For these reasons it has been chosen as the face
detection method in the proposed rPPG application.
4.1.2. Face tracking. If the face was already detected in
the previous iteration, a tracking algorithm is used instead of
the face detection one. First, some features are detected inside
the face region returned by the detection algorithm on the
previous frame, using Shi and Tomasi's Good Features
to Track algorithm. Consequently, these features are
tracked forward to the current frame using the Kanade-Lucas-
Tomasi (KLT) algorithm. From the previous and current
pixel positions of the tracked points a 2D rigid transformation
is estimated and the face bounding box
is transformed accordingly.
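A minimal sketch of the final step, estimating a 2D rigid transform from the tracked point correspondences, is given below using the standard Kabsch/Procrustes solution; the point coordinates are invented for illustration:

```python
import numpy as np

def estimate_rigid_2d(prev_pts, curr_pts):
    """Least-squares 2D rigid transform (rotation R, translation t)
    mapping prev_pts onto curr_pts, via the Kabsch/Procrustes method."""
    P, Q = np.asarray(prev_pts, float), np.asarray(curr_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Tracked corners of a face box translated by (3, -2) between frames.
prev = [(10.0, 10.0), (30.0, 10.0), (30.0, 40.0), (10.0, 40.0)]
curr = [(x + 3.0, y - 2.0) for x, y in prev]
R, t = estimate_rigid_2d(prev, curr)
```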
4.1.3. Pixel selection and signal creation. From the bounding
box containing the driver's face a Region Of Interest
(ROI) centred around the subject's forehead is computed
using fixed proportions. For each frame the signal is extracted
by averaging, inside the ROI, the light intensity measured by
the camera, yielding a 1D signal sampled at 100 Hz.
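The ROI-averaging step can be sketched as follows; the paper does not specify the exact forehead proportions, so the ones used here are hypothetical:

```python
import numpy as np

def roi_from_bbox(x, y, w, h):
    """Hypothetical fixed-proportion forehead ROI: the central half
    of the upper quarter of the face bounding box."""
    return (x + w // 4, y, w // 2, h // 4)

def extract_sample(frame, roi):
    """One sample of the pulse signal: mean photon count inside the ROI."""
    rx, ry, rw, rh = roi
    return float(frame[ry:ry + rh, rx:rx + rw].mean())

frame = np.full((32, 64), 100.0)       # flat test frame of photon counts
roi = roi_from_bbox(10, 5, 20, 16)     # face bbox in pixel coordinates
print(extract_sample(frame, roi))      # 100.0 on a flat frame
```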
4.1.4. Implementation details. Since the SPAD camera
output has a very small spatial resolution (32×64), the
frames are upscaled by a factor of 10, using bicubic
interpolation, before applying the face detection and tracking
algorithms. A border padding of 50 pixels is also added in
order to detect faces very near the image borders or partially
outside of them. The ROI coordinates are then scaled back
to the original resolution and the signal is extracted from
the original-resolution frames. Although the SPAD acquisition
frame rate is set to 100 fps, face detection and tracking are
performed at 10 fps, on a mean image obtained by averaging
the pixel values of the last 10 available frames. The signal
extraction part of the application is implemented in C++
exploiting the OpenCV library.
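The coordinate mapping implied by the upscaling and padding above, together with the 10-frame averaging, can be sketched as:

```python
import numpy as np

SCALE, PAD = 10, 50  # upscale factor and border padding, as in the text

def to_detection_coords(x, y):
    """Map original 32x64 sensor coordinates to the upscaled,
    padded image on which detection and tracking run."""
    return x * SCALE + PAD, y * SCALE + PAD

def to_sensor_coords(x, y):
    """Map a detection-image coordinate back to the sensor resolution."""
    return (x - PAD) / SCALE, (y - PAD) / SCALE

# Averaging the last 10 frames before detection, as described above.
frames = np.random.default_rng(2).poisson(50.0, size=(10, 32, 64))
mean_image = frames.mean(axis=0)

print(to_detection_coords(8, 20))   # (130, 250)
print(to_sensor_coords(130, 250))   # (8.0, 20.0)
```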
4.2. Signal Processing
Following the signal extraction stage, the signal processing
stage is performed. It is represented by the green part of
the diagram in Fig. 3 and is composed of three blocks,
detailed in the following paragraphs.
4.2.1. Filtering. The signal extracted from the camera is
filtered with a Butterworth bandpass filter with passband
between 0.4 Hz and 4 Hz, equivalent to 24 bpm and 240 bpm
respectively. This is done in order to remove any component
with a frequency far from any plausible HR.
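A minimal sketch of this filtering step, using SciPy's Butterworth design; the filter order is an assumption, since the paper does not state it:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 100.0  # sampling frequency of the extracted signal (Hz)

def bandpass(signal, low=0.4, high=4.0, order=3):
    """Zero-phase Butterworth bandpass between 0.4 and 4 Hz (24-240 bpm).
    Second-order sections keep the low normalized cutoff numerically stable."""
    sos = butter(order, [low, high], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, signal)

t = np.arange(0, 10, 1 / FS)
# Synthetic pulse at 1.2 Hz (72 bpm) riding on a slow illumination drift.
raw = np.sin(2 * np.pi * 1.2 * t) + 5.0 * np.sin(2 * np.pi * 0.05 * t)
clean = bandpass(raw)  # the drift is suppressed, the pulse survives
```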
4.2.2. Average Heart Rate estimation. In order to estimate
the average heart rate from the camera signal, two
operations are performed. First, after applying the
preprocessing steps described in Sec. 4.2.1, the power
spectrum of the pulse signal is obtained by applying a Fast
Fourier Transform (FFT) to the filtered signal. Then the
frequency corresponding to the peak of the power spectrum
is taken as the average Heart Rate estimate.
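These two operations can be sketched as follows, restricting the peak search to the plausible HR band for robustness (an assumption, since the paper does not state how out-of-band FFT peaks are handled):

```python
import numpy as np

FS = 100.0  # sampling frequency (Hz)

def average_hr_bpm(pulse):
    """Average HR from the dominant frequency of the pulse spectrum."""
    spectrum = np.abs(np.fft.rfft(pulse)) ** 2       # power spectrum
    freqs = np.fft.rfftfreq(len(pulse), d=1 / FS)
    mask = (freqs >= 0.4) & (freqs <= 4.0)           # plausible HR band
    peak_freq = freqs[mask][np.argmax(spectrum[mask])]
    return 60.0 * peak_freq                          # Hz -> bpm

t = np.arange(0, 20, 1 / FS)
pulse = np.sin(2 * np.pi * 1.25 * t)  # synthetic 75 bpm pulse wave
print(average_hr_bpm(pulse))          # 75.0
```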
4.2.3. Tachogram estimation. Precisely detecting all the
heart beat peaks inside the pulse signal is a fundamental
requirement for producing a good-quality tachogram. In
order to perform peak detection, the local maxima of the
filtered signal are detected by imposing two different thresholds,
one related to the temporal distance between consecutive
maxima and one on the height of the maxima. Once all the
maxima are found, the average of the RR intervals is calculated,
providing information about the average distance between
two consecutive maxima. This value is used to adjust the
temporal threshold and to perform a second run of the maxima-
searching function over the whole pulse wave, with the constraint
that there must be a maximum inside a window slightly
greater than the calculated average inter-beat interval.
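A sketch of this two-pass peak detection using SciPy's find_peaks; the height threshold and the 0.8 window factor are illustrative assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

FS = 100.0  # sampling frequency (Hz)

def tachogram_ms(pulse, height=0.3):
    """Two-pass beat detection sketch: a loose first pass, then a second
    pass whose temporal threshold is derived from the average RR
    interval of the first pass. Both thresholds are assumptions."""
    # Pass 1: no plausible HR exceeds 240 bpm, so impose >= 0.25 s spacing.
    peaks, _ = find_peaks(pulse, height=height, distance=int(0.25 * FS))
    avg_rr = float(np.mean(np.diff(peaks)))            # in samples
    # Pass 2: roughly one maximum per average inter-beat interval.
    peaks, _ = find_peaks(pulse, height=height, distance=int(0.8 * avg_rr))
    return np.diff(peaks) / FS * 1000.0                # IBI series (ms)

t = np.arange(0, 10, 1 / FS)
ibi = tachogram_ms(np.sin(2 * np.pi * 1.0 * t))  # synthetic 60 bpm pulse
print(ibi)  # nine intervals of 1000 ms each
```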
5. Experimental results

In order to test the proposed rPPG system we performed
some tests both in controlled conditions and in a realistic
automotive scenario.
Figure 4. The rPPG visual output while running on a video acquired inside
the car. The estimated HR is reported in red.
Figure 5. HR estimation through FFT output maxima. Blue: rPPG PSD,
red: ECG PSD
5.1. Experiment setup
The first test involved 4 people and was conducted in the
laboratory in controlled conditions, with subjects at rest and
with no head movements, recorded for 10 minutes. In the
second test we mounted the SPAD camera, equipped with an
8 mm lens, on the sun shield holder of a car and acquired
some sequences. On the same support we also mounted a
LED illuminator producing infrared light (850 nm). For both
tests, the illuminator power was set taking into account
eye safety for the specific wavelength and considering that
the illuminator is positioned in the car cockpit around 50
cm from the driver. We also mounted an optical filter on
the lens, centered at 850 nm (bandwidth ±40 nm), in order
to discard any other light source. For both tests, each
subject was also equipped with a portable ECG device in
order to collect ground-truth heart activity data.
5.2. Results

In all the 4 controlled steady acquisitions the average
heart rate estimation error was 0 bpm. On the other hand,
the root mean squared error between the tachograms obtained
with rPPG and ECG was 52.5 ms.
In Fig. 4 the output of the rPPG application running on
a driving sequence is shown. In particular, the red bounding
Figure 6. Tachogram estimation. Blue: rPPG tachogram, red: ECG tachogram.
box represents the face detector output, the green area is
the set of pixels on which the signal is extracted (forehead)
while the green crosses are the tracked features. Below
the face bounding box the current heart rate estimation is
superimposed. In Figs. 5 and 6 the heart rate and tachogram
estimations for the same sequence of Fig. 4 are shown,
respectively. In both graphs the blue lines represent the
estimation obtained with the proposed rPPG application
while the red ones refer to the ECG ground truth. The
heart rate estimation coincides with the ground truth, while
the estimated tachogram follows the trend of the real one
(with a root mean squared error of 77 ms between the
two curves).
The application is able to run in real time on an
ARM board (Hardkernel Odroid XU3, equipped with a
Samsung Exynos 5422 with Cortex-A15 2.1 GHz quad-core
and Cortex-A7 1.5 GHz quad-core CPUs), analyzing the
incoming SPAD input at 100 Hz and producing a new heart
rate estimation and tachogram every second.
6. Conclusions

In this work we presented a remote photoplethysmography
application with the final aim of checking in real time
the health and stress conditions of a driver. The application
runs in real time, receiving input video from a SPAD camera.
As described in Sec. 3, these particular photon-counting
cameras can work in dark environments and can detect the
small fluctuations in light intensity caused by the pulsing of
blood in vessels underneath the skin. In Sec. 4 we described
the rPPG algorithm used to analyze each frame coming
from the SPAD camera, extract the pulse signal and
consequently estimate the heart rate. In Sec. 5 we demonstrated
the accuracy of our system by testing it in realistic conditions,
with the camera mounted inside a car, and comparing the
obtained results with a commercial wearable ECG device,
confirming that the rPPG system can estimate the heart rate
with high accuracy and is also able to produce good-quality
tachograms.
The benefits of the proposed application are numerous,
in particular in detecting, automatically and without contact,
situations of a driver's acute illness. This could lead, for
example, to the activation of autonomous driving mechanisms
in order to potentially avoid car accidents.
Acknowledgments

This work has been supported by the DEIS project (Dependability
Engineering Innovation for automotive CPS), funded by the
European Union's Horizon 2020 research and innovation
programme under grant no. 732242.
References

[1] P. Rouast, M. Adam, R. Chiong, D. Cornforth, and E. Lux, "Remote heart rate measurement using low-cost RGB face video: a technical literature review," Frontiers of Computer Science.
[2] P. Mehrotra, B. Chatterjee, and S. Sen, "EM-wave biosensors: A review of RF, microwave, mm-wave and optical sensing," Sensors, vol. 19, no. 5, p. 1013, 2019.
[3] Z. Yang, P. H. Pathak, Y. Zeng, X. Liran, and P. Mohapatra, "Vital sign and sleep monitoring using millimeter wave," ACM Transactions on Sensor Networks, vol. 13, no. 2, pp. 1–32, 2017.
[4] M. Fukunishi, K. Kurita, S. Yamamoto, and N. Tsumura, "Video based measurement of heart rate and heart rate variability spectrogram from estimated hemoglobin information," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2018.
[5] D. Bronzi, F. Villa, S. Tisa, A. Tosi, and F. Zappa, "SPAD figures of merit for photon-counting, photon-timing, and imaging applications: A review," IEEE Sensors Journal, vol. 16, pp. 3–12, 2016.
[6] D. Bronzi, F. Villa, S. Tisa, A. Tosi, F. Zappa, D. Durini, S. Weyers, and W. Brockherde, "100 000 frames/s 64×32 single-photon detector array for 2-D imaging and 3-D ranging," IEEE Journal of Selected Topics in Quantum Electronics, vol. 20, no. 6, 2014.
[7] D. Bronzi, Y. Zou, F. Villa, S. Tisa, A. Tosi, and F. Zappa, "Automotive three-dimensional vision through a single-photon counting SPAD camera," IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 3, pp. 782–795, March 2016.
[8] R. Lussana, F. Villa, A. D. Mora, D. Contini, A. Tosi, and F. Zappa, "Enhanced single-photon time-of-flight 3D ranging," Opt. Express, vol. 23, no. 19, pp. 24962–24973, Sep 2015.
[9] A. B. Hertzman, "Photoelectric plethysmography of the fingers and toes in man," Proceedings of the Society for Experimental Biology and Medicine, vol. 37, no. 3, pp. 529–534, 1937.
[10] Y. Sun, V. Azorin-Peris, R. Kalawsky, S. Hu, C. Papin, and S. E. Greenwald, "Use of ambient light in remote photoplethysmographic systems: comparison between a high-performance camera and a low-cost webcam," Journal of Biomedical Optics, vol. 17, 2012.
[11] W. Verkruysse, L. Svaasand, and J. S. Nelson, "Remote plethysmographic imaging using ambient light," Optics Express, vol. 16, no. 26, 2008.
[12] J. Moreno, J. Ramos-Castro, J. Movellan, E. Parrado, G. Rodas, and L. Capdevila, "Facial video-based photoplethysmography to detect HRV at rest," International Journal of Sports Medicine, vol. 36, no. 6, pp. 474–480, 2015.
[13] P. V. Rouast, M. P. Adam, V. Dorner, and E. Lux, "Remote photoplethysmography: Evaluation of contactless heart rate measurement in an information systems setting," Applied Informatics and Technology Innovation Conference, pp. 1–17, 2016.
[14] N. Docampo and P. Casas, "Heart rate estimation using facial video information," Ph.D. dissertation, 2011.
[15] L. Iozzia, L. Cerina, and L. Mainardi, "Relationships between heart-rate variability and pulse-rate variability obtained from video-PPG signal using ZCA," Physiological Measurement, vol. 37, no. 11.
[16] E. Tasli, A. Gudi, and M. Uyl, "Remote PPG based vital sign measurement using adaptive facial regions," International Conference on Image Processing (ICIP), pp. 1410–1414, 2014.
[17] G. De Haan and V. Jeanne, "Robust pulse rate from chrominance-based rPPG," IEEE Transactions on Biomedical Engineering, vol. 60, no. 10, pp. 2878–2886, 2013.
[18] M. Poh, D. J. McDuff, and R. W. Picard, "Non-contact, automated cardiac pulse measurements using video imaging and blind source separation," Optics Express, vol. 18, no. 10, p. 10762, 2010.
[19] S. Tulyakov, X. Alameda-Pineda, E. Ricci, L. Yin, J. F. Cohn, and N. Sebe, "Self-adaptive matrix completion for heart rate estimation from face videos under realistic conditions," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2396–2404, 2016.
[20] F. AL-Khalidi, R. Saatchi, D. Burke, and H. Elphick, "Facial tracking method for noncontact respiration rate monitoring," 7th International Symposium on Communication Systems Networks and Digital Signal Processing (CSNDSP), pp. 751–754, 2010.
[21] S. Cova, M. Ghioni, A. Lacaita, C. Samori, and F. Zappa, "Avalanche photodiodes and quenching circuits for single-photon detection," Applied Optics, vol. 35, pp. 1956–1976, 1996.
[22] M. Anti, A. Tosi, F. Acerbi, and F. Zappa, "Modeling of afterpulsing in single-photon avalanche diodes," in Proc. SPIE, 2011, p. 79331R.
[23] G. Giraud, H. Schulze, D.-U. Li, T. Bachmann, J. Crain, D. Tyndall, J. Richardson, R. Walker, D. Stoppa, E. Charbon, R. Henderson, and J. Arlt, "Fluorescence lifetime biosensing with DNA microarrays and a CMOS-SPAD imager," Biomedical Optics Express, vol. 1, no. 5, pp. 1302–1308, 2010.
[24] X. Michalet, A. Ingargiola, R. A. Colyer, G. Scalia, S. Weiss, P. Maccagnani, A. Gulinatti, I. Rech, and M. Ghioni, "Silicon photon-counting avalanche diodes for single-molecule fluorescence spectroscopy," IEEE Journal of Selected Topics in Quantum Electronics, vol. 20, no. 6, pp. 248–267, Nov 2014.
[25] P. Seitz and A. J. P. Theuwissen, Single-Photon Imaging (Springer Series in Optical Sciences). Springer, 2013.
[26] P. Viola and M. J. Jones, "Robust real-time face detection," Int. J. Comput. Vision, vol. 57, no. 2, pp. 137–154, May 2004.
[27] J. Shi and C. Tomasi, "Good features to track," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1994, pp. 593–600.
[28] J.-Y. Bouguet, "Pyramidal implementation of the Lucas Kanade feature tracker," Intel Corporation, Microprocessor Research Labs.
[29] G. Bradski, "The OpenCV Library," Dr. Dobb's Journal of Software Tools, 2000.