Chenshu Wu’s research while affiliated with The University of Hong Kong and other places


Publications (13)


CardioLive: Empowering Video Streaming with Online Cardiac Monitoring
  • Preprint

February 2025 · 17 Reads

Sheng Lyu · Ruiming Huang · Sijie Ji · [...] · Chenshu Wu

Online Cardiac Monitoring (OCM) emerges as a compelling enhancement for next-generation video streaming platforms. It enables various applications including remote health, online affective computing, and deepfake detection. Yet the physiological information encapsulated in video streams has long been neglected. In this paper, we present the design and implementation of CardioLive, the first online cardiac monitoring system for video streaming platforms. We leverage the naturally co-existing video and audio streams and devise CardioNet, the first audio-visual network to learn the cardiac series. It incorporates multiple unique designs to extract temporal and spectral features, ensuring robust performance under realistic video streaming conditions. To enable service-on-demand online cardiac monitoring, we implement CardioLive as a plug-and-play middleware service and develop systematic solutions to practical issues including changing FPS and unsynchronized streams. Extensive experiments demonstrate the effectiveness of our system: we achieve a Mean Absolute Error (MAE) of 1.79 BPM, outperforming video-only and audio-only solutions by 69.2% and 81.2%, respectively. The CardioLive service achieves average throughputs of 115.97 and 98.16 FPS when implemented in Zoom and YouTube, respectively. We believe our work opens up new applications for video streaming systems. We will release the code soon.
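The abstract above does not spell out CardioNet's architecture. As a rough illustration of the audio-visual fusion idea it describes, the following sketch (in PyTorch) runs two temporal branches over video-frame and audio features, resamples them to a common length, and fuses them to regress a cardiac series. All layer sizes, feature dimensions, and the fusion strategy are assumptions for illustration, not the authors' design.

import torch
import torch.nn as nn

class ToyAudioVisualNet(nn.Module):
    def __init__(self, out_len=150):  # length of the predicted cardiac series (assumed)
        super().__init__()
        # Video branch: temporal convs over per-frame embeddings (512-d assumed).
        self.video = nn.Sequential(
            nn.Conv1d(512, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 64, kernel_size=5, padding=2),
        )
        # Audio branch: temporal convs over log-mel frames (64 mel bins assumed).
        self.audio = nn.Sequential(
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 64, kernel_size=5, padding=2),
        )
        # Late fusion and regression head producing the cardiac waveform.
        self.head = nn.Sequential(
            nn.Conv1d(128, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 1, kernel_size=1),
        )
        self.out_len = out_len

    def forward(self, video_feats, audio_feats):
        # video_feats: (B, 512, T_video); audio_feats: (B, 64, T_audio).
        v = self.video(video_feats)
        a = self.audio(audio_feats)
        # Resample both branches to a common length before fusion -- a crude
        # stand-in for handling different (and changing) frame rates.
        v = nn.functional.interpolate(v, size=self.out_len, mode="linear", align_corners=False)
        a = nn.functional.interpolate(a, size=self.out_len, mode="linear", align_corners=False)
        return self.head(torch.cat([v, a], dim=1)).squeeze(1)  # (B, out_len)

# Smoke test: 30 video frames and 100 audio frames in, 150 samples out.
net = ToyAudioVisualNet()
print(net(torch.randn(2, 512, 30), torch.randn(2, 64, 100)).shape)  # torch.Size([2, 150])

The interpolation step is only a placeholder for CardioLive's systematic handling of changing FPS and unsynchronized streams described above.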


Tapor: 3D Hand Pose Reconstruction with Fully Passive Thermal Sensing for Around-device Interactions

January 2025 · 5 Reads

This paper presents the design and implementation of Tapor, a privacy-preserving, non-contact, and fully passive sensing system for accurate and robust 3D hand pose reconstruction for around-device interaction using a single low-cost thermal array sensor. Thermal sensing with inexpensive, miniature thermal arrays offers an excellent utility-privacy balance: an imaging resolution significantly lower than cameras yet far superior to RF signals like radar or WiFi. The design of Tapor is challenging, however, mainly because the captured temperature maps are low-resolution and textureless. To overcome these challenges, we investigate the thermo-depth and thermo-pose properties and present a novel physics-inspired neural network design that learns effective 3D spatial representations of potential hand poses. We then formulate the 3D pose reconstruction problem as a distinct retrieval task, enabling precise determination of the hand pose corresponding to the input temperature map. To deploy Tapor on IoT devices, we introduce an effective heterogeneous knowledge distillation method that reduces the computation by 377x. We fully implement Tapor and conduct comprehensive experiments in various real-world scenarios. The results demonstrate the remarkable performance of Tapor, further illustrated by four case studies of gesture control and finger tracking. We envision Tapor as a ubiquitous interface for around-device control and have released the dataset, software, firmware, and demo videos at https://github.com/IOT-Tapor/TAPOR.
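To make the "pose reconstruction as retrieval" formulation concrete, here is a minimal sketch, assuming an upstream encoder has already mapped the temperature map to an embedding: the query is matched against a gallery of candidate poses, and the result is a similarity-weighted blend of the top matches. The gallery, embedding size, and 21-joint hand model are hypothetical stand-ins, not Tapor's actual components.

import numpy as np

rng = np.random.default_rng(0)

# Gallery: N candidate hand poses, each with a precomputed embedding and
# its 21x3 joint coordinates (e.g., rendered offline). Random here.
N, D = 10000, 128
gallery_emb = rng.normal(size=(N, D)).astype(np.float32)
gallery_emb /= np.linalg.norm(gallery_emb, axis=1, keepdims=True)
gallery_pose = rng.normal(size=(N, 21, 3)).astype(np.float32)

def retrieve_pose(query_emb, k=5):
    """Return a 3D pose as the similarity-weighted blend of the top-k matches."""
    q = query_emb / np.linalg.norm(query_emb)
    sims = gallery_emb @ q                    # cosine similarity to every candidate
    top = np.argpartition(-sims, k)[:k]       # indices of the k best matches
    w = np.exp(sims[top] - sims[top].max())   # stable softmax-style weights
    w /= w.sum()
    return np.tensordot(w, gallery_pose[top], axes=1)  # (21, 3) joint coordinates

print(retrieve_pose(rng.normal(size=D).astype(np.float32)).shape)  # (21, 3)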


ASE: Practical Acoustic Speed Estimation Beyond Doppler via Sound Diffusion Field

December 2024 · 6 Reads

Passive human speed estimation plays a critical role in acoustic sensing. Despite extensive study, existing systems suffer from various limitations. First, previous acoustic speed estimation exploits Doppler Frequency Shifts (DFS) created by moving targets and relies on microphone arrays, making it capable of sensing only the radial speed within a constrained distance. Second, the channel measurement rate proves inadequate for estimating high moving speeds. To overcome these issues, we present ASE, an accurate and robust Acoustic Speed Estimation system on a single commodity microphone. We model sound propagation from the unique perspective of the acoustic diffusion field and infer speed from the acoustic spatial distribution, a completely different way of thinking about speed estimation beyond prior DFS-based approaches. We then propose a novel Orthogonal Time-Delayed Multiplexing (OTDM) scheme for acoustic channel estimation at a rate that was previously infeasible, making it possible to estimate high speeds. We further develop novel techniques for motion detection and signal enhancement to deliver a robust and practical system. We implement and evaluate ASE through extensive real-world experiments. Our results show that ASE reliably tracks walking speed, independently of target location and direction, with a mean error of 0.13 m/s, a 2.5x reduction from DFS, and a detection rate of 97.4% for large coverage, e.g., free walking in a 4 m × 4 m room. We believe ASE pushes acoustic speed estimation beyond the conventional DFS-based paradigm and will inspire exciting research in acoustic sensing.
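For intuition about diffusion-field speed estimation, consider the idealized model in which a diffuse sound field has spatial correlation sinc(k*d); a scatterer moving at speed v then yields a temporal channel autocorrelation of roughly sinc(k*v*tau), whose first zero at k*v*tau0 = pi gives v = lambda / (2*tau0). The sketch below implements this simplified estimator; the model, carrier frequency, and channel rate are illustrative assumptions, not ASE's exact formulation.

import numpy as np

C_SOUND = 343.0            # speed of sound in air, m/s
F_CARRIER = 20000.0        # ultrasonic carrier frequency, Hz (assumed)
LAM = C_SOUND / F_CARRIER  # wavelength, ~1.7 cm
RATE = 2000.0              # channel estimation rate, Hz (assumed; OTDM enables high rates)

def speed_from_acf(channel):
    """Estimate speed from the first zero crossing of the channel autocorrelation."""
    x = channel - channel.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]
    zero = np.argmax(acf < 0)    # first lag where the ACF goes negative
    assert zero > 0, "no zero crossing found (target too slow or window too short)"
    # Linearly interpolate the crossing between lags zero-1 and zero.
    frac = acf[zero - 1] / (acf[zero - 1] - acf[zero])
    tau0 = (zero - 1 + frac) / RATE
    return LAM / (2.0 * tau0)    # sinc(k*v*tau) first zero: v = lambda / (2*tau0)

# Synthetic check: a channel whose ACF follows sinc(k*v*tau) for v = 1.0 m/s.
tau = np.arange(2000) / RATE
channel = np.sinc(2.0 * 1.0 * tau / LAM)   # np.sinc(x) = sin(pi*x)/(pi*x)
print(f"estimated speed: {speed_from_acf(channel):.2f} m/s")   # ~1.00 m/s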


Unfolding Target Detection with State Space Model

October 2024 · 1 Read

Target detection is a fundamental task in radar sensing, serving as the precursor to any further processing for various applications. Numerous detection algorithms have been proposed. Classical methods based on signal processing, e.g., the most widely used CFAR, are challenging to tune and sensitive to environmental conditions. Deep learning-based methods can be more accurate and robust, yet usually lack interpretability and physical relevance. In this paper, we introduce a novel method that combines signal processing and deep learning by unfolding the CFAR detector with a state space model architecture. By preserving the CFAR pipeline yet turning its sophisticated configurations into trainable parameters, our method achieves high detection performance without manual parameter tuning, while preserving model interpretability. We implement a lightweight model of only 260K parameters and conduct real-world experiments for human target detection using FMCW radars. The results highlight the remarkable performance of the proposed method, outperforming CFAR and its variants by 10X in detection rate and false alarm rate. Our code is open-sourced at https://github.com/aiot-lab/NeuroDet.
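As a concrete illustration of "unfolding" a detector, the sketch below recasts 1D cell-averaging CFAR as a differentiable layer: the training-cell average becomes a learnable convolution initialized to uniform weights, the threshold factor becomes a trainable parameter, and a sigmoid stands in for the hard comparison. This is a minimal sketch of the unfolding idea, not the authors' NeuroDet model (see their repository above); window sizes and initial values are assumptions.

import torch
import torch.nn as nn

class UnfoldedCFAR(nn.Module):
    def __init__(self, train_cells=8, guard_cells=2):
        super().__init__()
        w = 2 * train_cells + 2 * guard_cells + 1
        # Initialize the kernel to exact CA-CFAR averaging: uniform weights over
        # training cells, zeros over the guard cells and the cell under test.
        kernel = torch.full((1, 1, w), 1.0 / (2 * train_cells))
        kernel[0, 0, train_cells : train_cells + 2 * guard_cells + 1] = 0.0
        self.noise_conv = nn.Conv1d(1, 1, w, padding=w // 2, bias=False)
        self.noise_conv.weight.data.copy_(kernel)
        self.log_alpha = nn.Parameter(torch.tensor(1.0))    # learnable threshold factor
        self.sharpness = nn.Parameter(torch.tensor(10.0))   # slope of the soft threshold

    def forward(self, power):              # power: (B, 1, range_bins)
        noise = self.noise_conv(power)     # local noise-level estimate per bin
        margin = power - torch.exp(self.log_alpha) * noise
        return torch.sigmoid(self.sharpness * margin)   # per-bin detection probability

det = UnfoldedCFAR()
print(det(torch.rand(4, 1, 256)).shape)   # torch.Size([4, 1, 256])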


USpeech: Ultrasound-Enhanced Speech with Minimal Human Effort via Cross-Modal Synthesis

October 2024 · 10 Reads

Speech enhancement is crucial in human-computer interaction, especially for ubiquitous devices. Ultrasound-based speech enhancement has emerged as an attractive choice because of its superior ubiquity and performance. However, inevitable interference from unexpected and unintended sources during audio-ultrasound data acquisition makes existing solutions rely heavily on human effort for data collection and processing. This leads to significant data scarcity that limits the full potential of ultrasound-based speech enhancement. To address this, we propose USpeech, a cross-modal ultrasound synthesis framework for speech enhancement with minimal human effort. At its core is a two-stage framework that establishes correspondence between visual and ultrasonic modalities by leveraging audible audio as a bridge. This approach overcomes challenges from the lack of paired video-ultrasound datasets and the inherent heterogeneity between video and ultrasound data. Our framework incorporates contrastive video-audio pre-training to project the modalities into a shared semantic space and employs an audio-ultrasound encoder-decoder for ultrasound synthesis. We then present a speech enhancement network that enhances speech in the time-frequency domain and recovers the clean speech waveform via a neural vocoder. Comprehensive experiments show that USpeech achieves remarkable performance using synthetic ultrasound data, comparable to using physical data, and significantly outperforms state-of-the-art ultrasound-based speech enhancement baselines. USpeech is open-sourced at https://github.com/aiot-lab/USpeech/.
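The contrastive video-audio pre-training step can be sketched with a symmetric InfoNCE (CLIP-style) objective, shown below. The encoders, embedding dimension, and temperature are placeholders; the real implementation is in the USpeech repository linked above.

import torch
import torch.nn.functional as F

def info_nce(video_emb, audio_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired video/audio clips."""
    v = F.normalize(video_emb, dim=1)
    a = F.normalize(audio_emb, dim=1)
    logits = v @ a.t() / temperature     # (B, B) similarity matrix
    targets = torch.arange(v.size(0))    # i-th video clip pairs with i-th audio clip
    # Cross-entropy pulls matched pairs together and pushes mismatches apart,
    # applied in both the video->audio and audio->video directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Dummy 256-d embeddings from hypothetical video and audio encoders.
print(info_nce(torch.randn(16, 256), torch.randn(16, 256)).item())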



TADAR: Thermal Array-based Detection and Ranging for Privacy-Preserving Human Sensing

September 2024 · 10 Reads

Human sensing has gained increasing attention in various applications. Among the available technologies, visual images offer high accuracy, while sensing on the RF spectrum preserves privacy, creating a conflict between imaging resolution and privacy preservation. In this paper, we explore thermal array sensors as an emerging modality that strikes an excellent resolution-privacy balance for ubiquitous sensing. To this end, we present TADAR, the first multi-user Thermal Array-based Detection and Ranging system that estimates the inherently missing range information, extending thermal array outputs from 2D thermal pixels to 3D depths and empowering them as a promising modality for ubiquitous privacy-preserving human sensing. We prototype TADAR using a single commodity thermal array sensor and conduct extensive experiments in different indoor environments. Our results show that TADAR achieves a mean F1 score of 88.8% for multi-user detection and a mean error of 32.0 cm for multi-user ranging, which further improves to 20.1 cm for targets located within 3 m. We conduct two case studies on fall detection and occupancy estimation to showcase the potential applications of TADAR. We hope TADAR will inspire the broader community to explore new directions of thermal array sensing, beyond wireless and acoustic sensing. TADAR is open-sourced on GitHub: https://github.com/aiot-lab/TADAR.
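As a toy illustration of thermal array detection and ranging, the sketch below segments warm blobs in a low-resolution temperature map and maps each blob's apparent temperature to a coarse range using a made-up linear calibration (the thermal radiation received by the sensor falls off with distance). TADAR's actual ranging model is more sophisticated; everything here, including the ambient temperature and calibration constants, is an assumption for illustration.

import numpy as np
from scipy import ndimage

def detect_and_range(temp_map, ambient=24.0, delta=2.0):
    """Return warm-blob detections as (centroid, estimated range in meters)."""
    mask = temp_map > ambient + delta      # pixels noticeably above ambient
    labels, n = ndimage.label(mask)        # connected-component segmentation
    detections = []
    for i in range(1, n + 1):
        blob = labels == i
        t_peak = float(temp_map[blob].max())
        centroid = ndimage.center_of_mass(blob)
        # Made-up calibration: ~33 C apparent at 0.5 m, cooling ~1.5 C per meter.
        est_range = max(0.5, (33.0 - t_peak) / 1.5 + 0.5)
        detections.append((centroid, est_range))
    return detections

# Toy 24x32 frame (typical thermal-array resolutions are this coarse)
# with one synthetic warm blob standing in for a person.
frame = np.full((24, 32), 24.0)
frame[8:16, 10:14] = 30.5
print(detect_and_range(frame))   # one detection at roughly 2.2 m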





Citations (3)


... Like RF ranging, thermal radiation attenuation, as described in Observation I in §2.3, can be utilized for range/depth estimation. However, direct depth estimation based solely on temperature values results in large errors, e.g., about ±20 for human bodies [42,83], which can be even worse for the significantly smaller hands. In Tapor, we exploit a complementary opportunity thanks to the 2D imaging capability of the thermal arrays. ...

Reference:

Tapor: 3D Hand Pose Reconstruction with Fully Passive Thermal Sensing for Around-device Interactions
TADAR: Thermal Array-based Detection and Ranging for Privacy-Preserving Human Sensing
  • Citing Conference Paper
  • October 2024

... We instruct GPIoT to develop a heartbeat (R-peak) detection algorithm and test it on the MIT-BIH dataset [44]. 2) Human Activity Recognition (HAR) [3, 7, 31-33] deployed on edge devices is important for real-time analysis of daily human activities. We instruct GPIoT to develop a WiFi-based HAR model using the WiAR dataset [23] and deploy it on a Jetson Nano board that has limited resources [40]. ...

HARGPT: Are LLMs Zero-Shot Human Activity Recognizers?
  • Citing Conference Paper
  • May 2024

... Zheng et al. [15] proposed BG-BERT, a self-supervised learning framework designed to predict blood glucose levels with a specific focus on adverse events such as hyperglycemia and hypoglycemia, which are often underrepresented in datasets. BG-BERT employs a masked autoencoder to capture contextual information from blood glucose records and uses the Synthetic Minority Oversampling Technique (SMOTE), a data augmentation technique, to enhance sensitivity to adverse events. ...

Predicting Adverse Events for Patients with Type-1 Diabetes Via Self-Supervised Learning
  • Citing Conference Paper
  • April 2024