October 2024 · 11 Reads
October 2024 · 43 Reads
August 2024 · 30 Reads · 1 Citation
Journal of Learning Analytics
Classroom sensing systems can capture data on teacher-student behaviours and interactions at a scale far greater than human observers can. These data, translated into multi-modal analytics, can provide meaningful insights to educational stakeholders. However, complex data can be difficult to make sense of. In addition, analyses of these data are often limited by the organization of the underlying sensing system, and translating sensing data into meaningful insights often requires custom analyses across different modalities. We present Edulyze, an analytics engine that processes complex, multi-modal sensing data and translates them into a unified schema that is agnostic to the underlying sensing system or classroom configuration. We evaluate Edulyze's performance by integrating three sensing systems (EduSense, ClassGaze, and Moodoo) and then present data analyses of five case studies of relevant pedagogical research questions across these sensing systems. We demonstrate how Edulyze's flexibility and customizability allow us to answer a broad range of research questions by translating a breadth of raw sensing data from different sensing systems into relevant classroom analytics.
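To make the idea of a sensor-agnostic schema concrete, here is a minimal sketch of what such a unified representation might look like. All class, field, and function names are illustrative placeholders, not Edulyze's actual schema:

```python
# Hypothetical sketch of a sensor-agnostic classroom-analytics schema,
# in the spirit of Edulyze's unified representation. Names are
# illustrative, not taken from the paper.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class GazeSample:
    timestamp: float   # seconds since session start
    person_id: str     # anonymized teacher/student ID
    target: str        # e.g., "board", "instructor", "peer"

@dataclass
class PositionSample:
    timestamp: float
    person_id: str
    x: float           # classroom-floor coordinates (meters)
    y: float

@dataclass
class Session:
    """One class session, regardless of which sensing system produced it."""
    session_id: str
    gaze: list[GazeSample] = field(default_factory=list)
    positions: list[PositionSample] = field(default_factory=list)

def load_session(raw: dict, source: str) -> Session:
    """Adapter stub: each sensing system (e.g., EduSense, ClassGaze,
    Moodoo) would get its own translation into the unified schema."""
    session = Session(session_id=raw.get("id", "unknown"))
    # ... per-source parsing elided ...
    return session
```

Downstream analyses (e.g., instructor movement patterns or gaze distribution) would then be written once against `Session`, rather than per sensing system.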
July 2024 · 15 Reads
SID Symposium Digest of Technical Papers
The flat, featureless nature of touch screen displays has meant that keyboard text entry is slower and more error-prone than on mechanical counterparts, and has also introduced negative side effects in demanding settings such as automotive interfaces. Ideally, we could maintain the graphical flexibility afforded by displays while carrying over the tactile benefits of physical keyboard keys and buttons. Flat-panel haptic displays have the potential to achieve this. We present a new type of shape-changing display using embedded electroosmotic pumps (EEOPs). Our pumps are 1.5 mm in thickness and allow complete stack-ups under 5 mm, with the opportunity to become much thinner. Our EEOPs can move their entire volume's worth of fluid in one second and apply pressures of ±50 kPa. This is enough to create dynamic, mm-scale tactile features on a surface.
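A quick back-of-envelope check shows why ±50 kPa is sufficient for keyboard-like tactility. The pad size below is an assumption for illustration, not a figure from the paper:

```python
# Back-of-envelope check (not from the paper): force an EEOP-driven
# tactile feature could exert, given the quoted +/-50 kPa and an
# assumed key-sized pad.
pressure_pa = 50e3          # +/-50 kPa quoted in the abstract
pad_side_m = 0.010          # assumed 10 mm x 10 mm key-sized pad
area_m2 = pad_side_m ** 2   # 1e-4 m^2

force_n = pressure_pa * area_m2
print(f"~{force_n:.1f} N over a {pad_side_m * 1e3:.0f} mm square pad")
# -> ~5.0 N, well above the ~0.5 N actuation force of a typical keyboard key
```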
April 2024 · 25 Reads · 1 Citation
December 2023 · 26 Reads
ACM Transactions on Computer-Human Interaction
Non-contact, mid-air haptic devices have been utilized for a wide variety of experiences, including those in the extended reality, public display, medical, and automotive domains. In this work, we explore the use of synthetic jets as a promising and under-explored mid-air haptic feedback method. We show how synthetic jets can scale from compact, low-powered devices all the way to large, long-range, and steerable devices. We built seven functional prototypes targeting different application domains, in order to illustrate the broad applicability of our approach. These example devices are capable of rendering complex haptic effects, varying in both time and space. We quantify the physical performance of our designs using spatial pressure and wind flow measurements, and validate their compelling effect on users with stimuli recognition and qualitative studies.
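Synthetic jets are conventionally produced by an oscillating diaphragm behind an orifice, so one simple way to render time-varying haptic effects is to amplitude-modulate the diaphragm's drive signal. The sketch below is illustrative only; the carrier and envelope frequencies are assumptions, not values from the paper:

```python
# Illustrative sketch (not the authors' code): generating an
# amplitude-modulated drive waveform for a diaphragm-based
# synthetic jet, yielding a pulsing tactile sensation.
import numpy as np

SAMPLE_RATE = 48_000   # audio-rate DAC driving the diaphragm
CARRIER_HZ = 200       # assumed diaphragm drive frequency; device-specific
ENVELOPE_HZ = 5        # slow pulsing felt as a distinct tactile rhythm

t = np.arange(0, 2.0, 1 / SAMPLE_RATE)            # 2 s of signal
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
envelope = 0.5 * (1 + np.sin(2 * np.pi * ENVELOPE_HZ * t))
drive = envelope * carrier                        # send to amplifier/DAC
```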
November 2023 · 61 Reads · 2 Citations
Figure 1: Pantoenna uses an antenna integrated on the bottom of a VR/AR headset (A). The mouth is dielectrically loaded to the antenna; changes in pose manifest as changes in the antenna's self-resonance frequency and performance, which we measure (B). Our machine-learning pipeline predicts 11 3D keypoints for the cheeks, lips and tongue (C). This data can then be used, for example, to pose an expressive avatar for telepresence uses (D). Our technique sidesteps privacy issues inherent in camera-based systems, while simultaneously supporting silent facial expressions that audio-based systems cannot detect.

Abstract: Methods for faithfully capturing a user's holistic pose have immediate uses in AR/VR, ranging from multimodal input to expressive avatars. Although body tracking has received the most attention, the mouth is also of particular importance, given that it is the channel for both speech and facial expression. In this work, we describe a new RF-based approach for capturing mouth pose using an antenna integrated into the underside of a VR/AR headset. Our approach sidesteps privacy issues inherent in camera-based methods, while simultaneously supporting silent facial expressions that audio-based methods cannot. Further, compared to bio-sensing methods such as EMG and EIT, our method requires no contact with the wearer's body and can be fully self-contained in the headset, offering a high degree of physical robustness and user practicality. We detail our implementation along with results from two user studies, which show a mean 3D error of 2.6 mm for 11 mouth keypoints across worn sessions without re-calibration.
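The core learning problem is a regression from a swept antenna measurement to 33 values (11 keypoints × 3 coordinates). Here is a minimal sketch of that mapping; the model architecture, sweep resolution, and training data are placeholders, not Pantoenna's actual pipeline:

```python
# Hypothetical sketch of the kind of regression Pantoenna's pipeline
# performs: mapping a swept antenna measurement (e.g., |S11| across
# frequency bins) to 11 mouth keypoints x 3 coordinates.
import numpy as np
from sklearn.neural_network import MLPRegressor

N_BINS = 256      # assumed sweep resolution (frequency bins)
N_OUT = 11 * 3    # 11 keypoints, (x, y, z) each

# Placeholder training data; in practice this would be paired antenna
# sweeps and ground-truth mouth capture from an external tracker.
X_train = np.random.rand(1000, N_BINS)
y_train = np.random.rand(1000, N_OUT)

model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=200)
model.fit(X_train, y_train)

sweep = np.random.rand(1, N_BINS)                 # one live measurement
keypoints = model.predict(sweep).reshape(11, 3)   # per-keypoint 3D estimate
```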
October 2023 · 16 Reads · 1 Citation
Proceedings of the ACM on Human-Computer Interaction
Pointing with one's finger is a natural and rapid way to denote an area or object of interest. It is routinely used in human-human interaction to increase both the speed and accuracy of communication, but it is rarely utilized in human-computer interactions. In this work, we use the recent inclusion of wide-angle, rear-facing smartphone cameras, along with hardware-accelerated machine learning, to enable real-time, infrastructure-free, finger-pointing interactions on today's mobile phones. We envision users raising their hands to point in front of their phones as a "wake gesture". This can then be coupled with a voice command to trigger advanced functionality. For example, while composing an email, a user can point at a document on a table and say "attach". Our interaction technique requires no navigation away from the current app and is both faster and more privacy-preserving than the current method of taking a photo.
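As an illustration of how such a wake gesture could be detected with off-the-shelf hardware-accelerated hand tracking, here is a sketch using MediaPipe Hands as a stand-in for the paper's pipeline. The pose heuristic and thresholds are assumptions for illustration, not the authors' implementation:

```python
# Illustrative sketch (not the paper's implementation): detecting a
# raised pointing hand as a "wake gesture" with MediaPipe Hands.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def is_pointing(landmarks) -> bool:
    """Heuristic: index finger extended (tip above its PIP joint) while
    the middle finger is curled. Image y grows downward."""
    index_extended = landmarks[8].y < landmarks[6].y   # index tip vs. PIP
    middle_curled = landmarks[12].y > landmarks[10].y  # middle tip vs. PIP
    return index_extended and middle_curled

cap = cv2.VideoCapture(0)  # stand-in for the wide-angle rear camera
with mp_hands.Hands(max_num_hands=1) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            if is_pointing(lm):
                print("wake gesture detected")  # hand off to voice command
cap.release()
```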
October 2023 · 160 Reads · 20 Citations
October 2023 · 38 Reads · 12 Citations
... The nine papers included in this issue demonstrate practical implementations and discuss the implications and possible future trajectories for learning analytics research, especially tailored for the Latin American academic community. Volume 11.2 included 15 regular research papers and three other types of papers, including those that fall into the categories of "data and tools reports" (Patidar et al., 2024), "extended conference papers" (Seidenberg et al., 2024), and "practical reports" (Wasson et al., 2024). Finally, the current issue (Vol. ...
August 2024
Journal of Learning Analytics
... However, their effectiveness comes at the cost of higher hardware costs and larger size, mainly due to the need for multiple antennas in mm-wave radar systems. Recently, innovative solutions have been explored using pairs of antennas with a near-field perturbation approach to monitor variations in reflection coefficient [29], [30] or impedance [31], yielding impressive results in short-range gesture recognition. If we could replicate these results with a single antenna, it would further reduce the complexity and cost of the system. ...
November 2023
... Currently, human action recognition (HAR) is the most extensively studied aspect in HIAU. HAR aims to identify and categorize discrete human actions by leveraging data from diverse modal sensors, such as smartwatches [24,57], smartphones [11,47], earbuds [37,83], smart glasses [25,74], and Wi-Fi signals [34,61,68]. With accurate HAR, systems can anticipate user needs, improve safety, and provide context-aware assistance, fostering a seamless interaction between individuals and their surroundings. ...
October 2023
... Some compact wearable systems, such as the Haptic Thimble [9] and others [10], [11], are not able to transmit high-fidelity surface information, relying on other qualities like force feedback or vibration for teletaction. Recently, Carnegie Mellon's Future Interfaces Group has developed the Fluid Reality haptic fingertip [12], which currently represents the smallest form factor among wearable haptic devices able to transmit depth and contact information. Even so, this solution is limited in tactile resolution, utilizing only two levels of depth actuation per pin. ...
October 2023
... IMU devices. The IMU device choice follows IMUPoser [36] and includes three commonly used devices in daily life: smartwatches, smartphones, and earbuds. Additionally, we introduce smart glasses, considering their promising potential in AR and the metaverse [85]. ...
April 2023
... The output resistance of the high voltage drive circuit is also important to consider when modeling EA dynamics. Output stages include common source amplifiers [7,8], push-pull output stages such as half bridges [38] and full bridges [11,12,24,39], and optocoupler relays [6,15]. Although the transistors in these circuits have response times far faster than EA's mechanical time constants, many small form factor DC-DC converters cannot handle the large current spikes that occur whenever these transistors flip on or off. ...
April 2023
... It is important to note that proper lighting is essential for maintaining the precision of image processing systems. Therefore, addressing lighting conditions is essential for enhancing accuracy and overall user satisfaction in facial authentication systems; measures like adaptive lighting controls [16] and notifications that inform users of unsuitable lighting conditions may become solutions to this issue. ...
March 2023
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
... However, challenges arise when the sensor module experiences significant accelerations or rotations, which can adversely affect the performance of the tracking algorithm and result in suboptimal accuracy. Furthermore, the fusion of electromyography and depth sensors allows for the accurate tracking of arm and hand poses, thereby supporting supplementary applications such as held object recognition and environment mapping [39]. Nevertheless, accurately estimating hand poses in the presence of significant self-occlusion remains a highly complex challenge. ...
October 2022
... A wristband serves as a convenient location for detecting finger movements based on force measurement [9], EM waves [19], inertial sensing [20], capacitive coupling [28], electromyography (EMG) [30], computer vision [37], etc. Because the wristband infers finger gestures at a distance from the fingers, the signal caused by subtle finger motions is inevitably small, making it challenging for the wristband to stably detect and accurately distinguish them [37,38]. ...
October 2022
... They found that flipping gestures performed with the wrist were the fastest, while bimanual gestures were the most preferred by the participants in their study. Shen and Harrison [34] proposed a design space of pull gestures for dual-screen laptops, structured according to the location of interaction (on-screen vs. off-screen) and the number of screens (one vs. two). Examples of on-screen gestures include drag, flick, pinch-to-zoom, double tap, click & hold, lasso, pull apart, dial, and knuckle taps, while cross-screen input is mainly represented by drag and drag & tap. ...
November 2022