FIGURE 4 - uploaded by Mohamed Abul Hassan
AIM-2, an egocentric wearable camera with food intake detection sensors.

Source publication
Article
Full-text available
Automatic Ingestion Monitor v2 (AIM-2) is an egocentric camera and sensor that aids monitoring of individual diet and eating behavior by capturing still images throughout the day and using sensor data to detect eating. The images may be used to recognize the foods being eaten, the eating environment, and other behaviors and daily activities. At the same ti...

Context in source publication

Context 1
... The sensor system used for the method development is the Automatic Ingestion Monitor, version 2 (AIM-2), a second-generation egocentric wearable sensor for monitoring diet and eating behavior (Fig. 4). The AIM-2 may capture periodic images either only during food intake or throughout the whole day, at one image per 15 seconds, or about 5760 images per day. The AIM-2 comprises five main components: a 5-megapixel CMOS image sensor, a 3D accelerometer, an STM32 processing unit, an FPGA-based frame buffer, and a micro SD-based storage unit. The camera ...
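The daily image count quoted above follows directly from the 15-second capture interval; a minimal sketch of the arithmetic (nothing here is AIM-2 firmware, just the figure checked):

```python
# Verify the ~5760 images/day figure implied by one still image
# every 15 seconds over a full 24-hour day.

CAPTURE_INTERVAL_S = 15            # one image every 15 seconds
SECONDS_PER_DAY = 24 * 60 * 60     # 86,400 seconds in a day

images_per_day = SECONDS_PER_DAY // CAPTURE_INTERVAL_S
print(images_per_day)  # 5760
```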

Similar publications

Article
Full-text available
Introduction: This paper presents a novel Ear Canal Pressure Sensor (ECPS) for objective detection of food intake, chew counting, and food image capture in both controlled and free-living conditions. The contribution of this study is threefold: 1) Development and validation of a novel wearable sensor that uses changes in ear canal pressure and the...
Article
Full-text available
Upper limb impairment is one of the most common problems for people with neurological disabilities, affecting their activity, quality of life (QOL), and independence. Objective assessment of upper limb performance is a promising way to help patients with neurological upper limb disorders. By using wearable sensors, such as an egocentric camera, it...

Citations

... Egocentric vision has greatly increased the vulnerability of bystanders (Ferdous et al. 2017). Recent research has worked on preserving the visual privacy of third parties who did not give consent: Dimiccoli et al. (2018) analyzed how image degradation might preserve the privacy of persons appearing in the image while activities can still be recognized; Hassan and Sazonov (2020) proposed an image redaction approach for privacy protection by selective content removal using semantic segmentation-based deep learning. ...
Article
Full-text available
Population aging resulting from demographic changes requires some challenging decisions and necessary steps to be taken by different stakeholders to manage current and future demand for assistance and support. The consequences of population aging can be mitigated to some extent by assisting technologies that can support the autonomous living of older individuals and persons in need of care in their private environments as long as possible. A variety of technical solutions are already available on the market, but privacy protection is a serious, often neglected, issue when using such (assisting) technology. Thus, privacy needs to be thoroughly taken under consideration in this context. In a three-year project PAAL (‘Privacy-Aware and Acceptable Lifelogging Services for Older and Frail People’), researchers from different disciplines, such as law, rehabilitation, human-computer interaction, and computer science, investigated the phenomenon of privacy when using assistive lifelogging technologies. In concrete terms, the concept of Privacy by Design was realized using two exemplary lifelogging applications in private and professional environments. A user-centered empirical approach was applied to the lifelogging technologies, investigating the perceptions and attitudes of (older) users with different health-related and biographical profiles. The knowledge gained through the interdisciplinary collaboration can improve the implementation and optimization of assistive applications. In this paper, partners of the PAAL project present insights gained from their cross-national, interdisciplinary work regarding privacy-aware and acceptable lifelogging technologies.
... [2020] describe a low-cost distributed storage system architecture for video data to fend off geo-range attacks. In their work on discussing solutions to privacy concerns in wearable cameras [8], M. A. Hassan et al. ...
Preprint
Full-text available
The rapid advancement of technology has resulted in advanced camera capabilities coming to smaller form factors with improved energy efficiency. These improvements have led to more efficient and capable cameras on mobile devices like mobile phones, tablets, and even eyeglasses. Using these unobtrusive cameras, users can capture photographs and videos of almost any location where they have physical access. Unfortunately, the proliferation of highly compact cameras has threatened the privacy rights of individuals and even entire nations and governments. For example, governments may not want photographs or videos of sensitive installations or locations like airside operations of military bases or the inner areas of nuclear power plants to be captured for unapproved uses. In addition, solutions that obfuscate images in post-processing are subject to threats that could siphon unprocessed data. Our work proposes a Global Positioning System-based approach to restrict the ability of smart cameras to capture and store images of sensitive areas.
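The GPS-based restriction this abstract proposes amounts to a geofence test before capture; a toy sketch under that reading (the bounding-box coordinates and function names below are made up for illustration, not taken from the publication):

```python
# Toy geofence check: refuse image capture when the current GPS fix
# falls inside a restricted bounding box. Coordinates are hypothetical.

def inside_box(lat, lon, box):
    """box = (min_lat, min_lon, max_lat, max_lon)."""
    min_lat, min_lon, max_lat, max_lon = box
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

# Hypothetical sensitive area expressed as a lat/lon bounding box.
RESTRICTED = [(38.99, -77.03, 39.01, -77.01)]

def capture_allowed(lat, lon):
    """Allow capture only outside every restricted box."""
    return not any(inside_box(lat, lon, b) for b in RESTRICTED)

print(capture_allowed(39.000, -77.020))  # False: inside the restricted box
print(capture_allowed(40.000, -75.000))  # True: outside all boxes
```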
Article
Full-text available
The first step in any dietary monitoring system is the automatic detection of eating episodes. To detect eating episodes, either sensor data or images can be used, and either method can result in false-positive detection. This study aims to reduce the number of false positives in the detection of eating episodes by a wearable sensor, Automatic Ingestion Monitor v2 (AIM-2). Thirty participants wore the AIM-2 for two days each (pseudo-free-living and free-living). The eating episodes were detected by three methods: (1) recognition of solid foods and beverages in images captured by AIM-2; (2) recognition of chewing from the AIM-2 accelerometer sensor; and (3) hierarchical classification to combine confidence scores from image and accelerometer classifiers. The integration of image- and sensor-based methods achieved 94.59% sensitivity, 70.47% precision, and 80.77% F1-score in the free-living environment, which is significantly better than either of the original methods (8% higher sensitivity). The proposed method successfully reduces the number of false positives in the detection of eating episodes.
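The hierarchical combination of confidence scores described in this abstract can be illustrated with a simple weighted late-fusion rule (a generic sketch, assuming equal weights and a 0.5 decision threshold; the paper's actual classifier and parameters are not specified here):

```python
# Illustrative late fusion of image-based and chewing-based detector
# confidences into a single eating-episode decision. Weights and
# threshold are hypothetical, not the AIM-2 paper's values.

def fuse_confidences(image_conf: float, chew_conf: float,
                     w_image: float = 0.5, w_chew: float = 0.5,
                     threshold: float = 0.5) -> bool:
    """Return True when the weighted combined score signals eating."""
    combined = w_image * image_conf + w_chew * chew_conf
    return combined >= threshold

# A moment where the image classifier is unsure but chewing is strong:
print(fuse_confidences(0.3, 0.9))  # True (combined score = 0.6)
```

Combining the two modalities this way is what lets a weak detection in one sensor be confirmed or vetoed by the other, which is how the paper reports reducing false positives.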
Chapter
The rapid advancement of technology has resulted in advanced camera capabilities coming to smaller form factors with improved energy efficiency. These improvements have led to more efficient and capable cameras on mobile devices like mobile phones, tablets, and even eyeglasses. Using these unobtrusive cameras, users can capture photographs and videos of almost any location where they have physical access. Unfortunately, the proliferation of highly compact cameras has threatened the privacy rights of individuals and even entire nations and governments. For example, governments may not want photographs or videos of sensitive installations or locations like airside operations of military bases or the inner areas of nuclear power plants to be captured for unapproved uses. In addition, solutions that obfuscate images in post-processing are subject to threats that could siphon unprocessed data. Our work proposes a Global Positioning System-based approach to restrict the ability of smart cameras to capture and store images of sensitive areas.
Keywords: National security, Photography, Privacy, Bounding boxes