Figure 1 - uploaded by Naval Kishore Mehta
The right image captured using a neuromorphic camera visualizes the events triggered during the falling action, while the left one depicts a normal grayscale frame. It is quite evident that identifying the person is considerably more challenging in the right image than in the left.

Contexts in source publication

Context 1
... event-based camera systems like the dynamic vision sensor (DVS) can mitigate this issue. DVS captures video streams differently, registering only event changes and significantly reducing data redundancies, which is particularly beneficial for infrequent events like human falls (Figure 1). ...
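The event-generation principle described above can be sketched as thresholding the per-pixel log-intensity change between consecutive frames. This is a minimal two-frame simulation for illustration only (the `frames_to_events` helper and the threshold value are assumptions, not the sensor's actual asynchronous circuit, which compares against a per-pixel memorized intensity continuously):

```python
import numpy as np

def frames_to_events(prev, curr, threshold=0.2, eps=1e-6):
    """Emit DVS-style ON/OFF events wherever the log-intensity
    change between two frames exceeds `threshold`."""
    delta = np.log(curr + eps) - np.log(prev + eps)
    on = np.argwhere(delta >= threshold)    # brightness increased
    off = np.argwhere(delta <= -threshold)  # brightness decreased
    # Each event: (y, x, polarity); polarity +1 = ON, -1 = OFF
    events = [(int(y), int(x), +1) for y, x in on]
    events += [(int(y), int(x), -1) for y, x in off]
    return events

prev = np.full((4, 4), 0.5)
curr = prev.copy()
curr[1, 2] = 0.9   # pixel brightens -> ON event
curr[3, 0] = 0.1   # pixel darkens  -> OFF event
print(frames_to_events(prev, curr))  # [(1, 2, 1), (3, 0, -1)]
```

Pixels with no brightness change produce no events, which is why static backgrounds vanish and only the moving person appears in the event visualization of Figure 1.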
Context 2
... utilized DVS noise-filtered output events to train the SLAYER SNN. The performance of the SLAYER SNN was evaluated with different time bins (T) at 25 ms intervals, ranging from 600 ms to 1000 ms, corresponding to sample lengths of 1200 ms to 2000 ms at a sampling rate of 2, as illustrated in Figure 11; the best performance was obtained with a time bin of 725 ms on DVSFall Split-1. Hence, we chose a sample length of 1450 ms (time bin: 725 ms) to train on the other splits; the model achieved an average accuracy of 94.59% and sensitivity of 80.63% on the DVSFall test set, as provided in Table 4. ...
Context 3
... we chose a sample length of 1450 ms (time bin: 725 ms) to train on the other splits; the model achieved an average accuracy of 94.59% and sensitivity of 80.63% on the DVSFall test set, as provided in Table 4. The qualitative results of the best-performing model trained on Split-1 are shown in Figure 10. In the next set of experiments, we obtained results using a 3D-CNN network trained for 10 epochs on the DVSFall Split-1 training set. ...
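The time-binning step described above, where the event stream is grouped into fixed-duration bins before being fed to the SNN, can be sketched as follows. This is a hypothetical helper for illustration; the event tuple format `(t_ms, y, x, polarity)` and the `(bins, 2, H, W)` count-tensor layout are assumptions, not the paper's actual preprocessing pipeline:

```python
import numpy as np

def bin_events(events, sample_len_ms, bin_ms, height, width):
    """Accumulate (t_ms, y, x, polarity) events into a
    (num_bins, 2, H, W) count tensor: channel 0 = ON, channel 1 = OFF."""
    num_bins = sample_len_ms // bin_ms
    tensor = np.zeros((num_bins, 2, height, width), dtype=np.float32)
    for t, y, x, p in events:
        b = int(t // bin_ms)
        if 0 <= b < num_bins:          # drop events outside the sample window
            tensor[b, 0 if p > 0 else 1, y, x] += 1.0
    return tensor

# Three toy events over a 1450 ms sample with 725 ms bins -> 2 bins
events = [(10, 0, 0, +1), (700, 1, 1, -1), (900, 0, 1, +1)]
vox = bin_events(events, sample_len_ms=1450, bin_ms=725, height=2, width=2)
print(vox.shape)  # (2, 2, 2, 2)
```

With the chosen configuration (1450 ms sample, 725 ms bin), each sample yields two bins; shorter bins trade temporal resolution against sparser, noisier counts per bin.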

Citations

... Similarly, 17 healthy adults were recruited to simulate falls, recorded using RGB cameras [12]. Finally, the DVSFall [13] dataset includes simulated falls and diverse ADLs performed by 21 healthy subjects of varying ages, captured using multiple strategically positioned dynamic vision sensors. ...
Article
Full-text available
Despite extensive research in machine learning for fall detection, the early warning signs of falls have been largely overlooked. Current datasets mainly focus on fall mitigation rather than on the irregularities in movement and behavior patterns that precede a fall, which are useful for fall prevention. Identifying these early signs is crucial for enabling timely interventions to reduce injury severity and improve the quality of life for older adults. To address this gap, we present the Pre-VFall dataset, a novel resource designed to simulate early fall indicators using vision sensor technology. Since RGB cameras already exist in common areas of senior living facilities, the proposed vision-based sensors are naturally well-suited. This open dataset comprises over 22K simulated instances encompassing normal conditions, various abnormal states (including weakness, dizziness, delirium-confusion, and Normal Pressure Hydrocephalus (NPH)-confusion), and fall events, all recorded from nine healthy young adult participants. The dataset includes comprehensive data in the form of videos, images, and key gradient vector magnitude and direction features. These elements are crucial for advancing research into the pre-fall irregularities that signal potential falls, thereby supporting the development of more sophisticated and proactive fall detection systems.
... Dynamic Vision Sensors (DVS), with their exceptional temporal resolution of 1 µs and latency under 1 ms, offer a promising solution by capturing changes in brightness through binary ON and OFF events [7]. While DVS technology excels in scenarios requiring low latency and a high dynamic range [8], [9], its limited adoption in gaze tracking is due to the high cost and limited availability of the hardware. We address these concerns by simulating DVS-like event streams, replicating the high temporal precision needed for saccadic gaze prediction while overcoming the cost and accessibility barriers associated with DVS hardware. ...
... Furthermore, an active and growing research field is investigating the usage of event cameras coupled with other data sources to realize assistive devices, in particular for visual impairment. As already observed, when sensitive data are transmitted, it is clear that realizing privacy-preserving systems is crucial for the adoption of such sensors in real scenarios, as investigated in [198,199]. As a final note, the authors in [157] stated how there are no established studies that have delved into the utilization of event cameras for specific biomedical purposes. ...
Article
Full-text available
Traditional frame-based cameras, despite their effectiveness and widespread use in computer vision, exhibit limitations such as high latency, low dynamic range, high power consumption, and motion blur. For two decades, researchers have explored neuromorphic cameras, which operate differently from traditional frame-based types, mimicking biological vision systems for enhanced data acquisition and spatio-temporal resolution. Each pixel asynchronously registers intensity changes in the scene above a user-defined threshold, producing streams of events. However, the distinct characteristics of these sensors mean that traditional computer vision methods are not directly applicable, necessitating the investigation of new approaches before deployment in real applications. This work aims to fill existing gaps in the literature by providing a survey and a discussion centered on the different application domains, differentiating between computer vision problems and whether solutions are better suited for or have been applied to a specific field. Moreover, an extensive discussion highlights the major achievements and challenges, in addition to the unique characteristics, of each application field.