Maitreyee Wairagkar’s research while affiliated with Imperial College London and other places


Publications (30)


Fig. 2 | In-home assessment scores per cluster group at baseline for activities of daily living, psychiatric behaviours and proxy-rater well-being. Baseline total activities of daily living and psychiatric behaviour assessment scores per cluster from the K-Means clustering model (Calinski-Harabasz score: 18.52). Clusters (Severe, Moderate and Mild) are labelled on the x-axis. Model features included, but were not limited to, baseline (a) activities of daily living assessment (n = 87), (b) total behaviour (n = 87) and (c) total behaviour proxy-rater distress scores (n = 87).
Fig. 5 | Average cumulative sum of healthcare-related data per cluster group. For each cluster group, Severe, Moderate and Mild, we calculated the average cumulative sum of: (a) comorbidities before baseline, (b) comorbidities after baseline, (c) healthcare events and encounters, and (d) behavioural observations, per cluster (n = 87). Distinct patterns of behaviour, comorbidities and healthcare events are illustrated for individual clusters.
Correlation of standardised mini-mental state examination with in-home assessments
Longitudinal study of care needs and behavioural changes in people living with dementia using in-home assessment data
  • Article
  • Full-text available

January 2025 · 60 Reads · Communications Medicine

Chloe Walsh · Alexander Capstick · Nan Fletcher-Lloyd · [...] · Andy Kenny

Background: People living with dementia often experience changes in independence and daily living, affecting their well-being and quality of life. Behavioural changes correlate with cognitive decline, functional impairment, caregiver distress, and care availability. Methods: We use data from a 3-year prospective observational study of 141 people with dementia living at home, using the Bristol Activities of Daily Living Scale, the Neuropsychiatric Inventory and cognitive assessments, alongside self-reported and healthcare-related data. Results: Here we show that psychiatric behavioural symptoms and difficulties in activities of daily living fluctuate alongside cognitive decline. 677 activities of daily living and 632 psychiatric behaviour questionnaires are available at 3-month intervals. Clustering reveals three severity-based groups. Mild cognitive decline is associated with higher caregiver anxiety, while the most severe group interacts more with community services but less with hospitals. Conclusions: We characterise behavioural symptoms and difficulties in activities of daily living in dementia, offering clinically relevant insights not commonly considered in current practice. We provide a holistic overview of participants' health during their progression of dementia.
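As a rough illustration of the clustering step described in the abstract and Fig. 2, the sketch below groups baseline assessment scores with K-Means (k = 3) and computes a Calinski-Harabasz score. It uses synthetic stand-in data and scikit-learn, not the authors' code; the paper reports a score of 18.52 on the real cohort.

```python
# Minimal sketch (not the authors' pipeline): cluster baseline in-home
# assessment scores into three severity groups.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical stand-in for n = 87 participants x 3 baseline features:
# ADL total, total behaviour, and proxy-rater distress scores.
X = rng.normal(size=(87, 3))

X_scaled = StandardScaler().fit_transform(X)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_scaled)

# Computed on synthetic data here, so the score will differ from 18.52.
score = calinski_harabasz_score(X_scaled, kmeans.labels_)
print(f"Cluster sizes: {np.bincount(kmeans.labels_)}, CH score: {score:.2f}")
```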


Simulating the psychological and neural effects of affective touch with soft robotics: an experimental study

November 2024 · 40 Reads · Frontiers in Robotics and AI

Human affective touch is known to be beneficial for social-emotional interactions and to have a therapeutic effect. For touch initiated by robotic entities, richer affective affordance is a critical enabler for unlocking its potential in social-emotional interactions, especially in care and therapeutic applications. Simulating the attributes of particular types of human affective touch to inform robotic touch design can be a beneficial step. Inspired by the scientific findings on CT-optimal affective touch (gentle skin stroking at velocities of 1–10 cm/s, evidenced to be pleasant and calming), we developed a proof-of-concept haptic rendering system, S-CAT, using pneumatic silicone soft robotic material to simulate the attributes (velocity, temperature and applied normal force) of CT-optimal affective touch. To investigate whether affective touch performed by the S-CAT system elicits psychological effects comparable to CT-optimal, manual affective touch, we conducted an experimental study comparing the effects of CT-optimal versus non-CT-optimal stimulation velocities in each of three stimulation modes (S-CAT device, skin-to-skin manual stroking, hairbrush manual stroking) and across them. Our measures included subjective ratings of touch pleasantness and intensity, neurophysiological responses (EEG), and qualitative comments. Our results showed that velocity modulated subjective and neurophysiological responses in each of these three stimulation modes and across them, and that CT-optimal stimulations from the S-CAT system and the manual method received similar ratings and verbal comments on pleasantness, suggesting that S-CAT touch can have effects comparable to manual stroking. We discuss the design insights learned and the design space that this study opens up to support well-being and healthcare.
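As a concrete illustration of rendering a CT-optimal stroke with a row of sequentially inflated chambers, the sketch below computes an actuation schedule from chamber spacing and target velocity. The chamber count and spacing are hypothetical assumptions; this is not the S-CAT control code.

```python
# Minimal sketch, assuming a linear row of pneumatic chambers: sequential
# inflation at interval spacing/velocity renders an apparent stroke at the
# target CT-optimal velocity (1-10 cm/s).
def actuation_schedule(n_chambers: int, spacing_cm: float, velocity_cm_s: float):
    """Return (chamber index, onset time in s) pairs for one stroke."""
    if not 1.0 <= velocity_cm_s <= 10.0:
        raise ValueError("CT-optimal stroking is ~1-10 cm/s")
    dt = spacing_cm / velocity_cm_s  # delay between adjacent chamber onsets
    return [(i, i * dt) for i in range(n_chambers)]

# Example: 8 chambers spaced 1 cm apart, stroked at 3 cm/s.
for chamber, onset in actuation_schedule(8, 1.0, 3.0):
    print(f"chamber {chamber}: inflate at t = {onset:.2f} s")
```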


Figure 2. Accurate cursor control and click. a. Trial-averaged firing rates (mean ± s.e.) recorded from four example electrodes during the Grid Evaluation Task. Activity is aligned to when the cursor entered the cued target (left), and then to when the click decoder registered a click (right). Firing rates were Gaussian smoothed (std 20 ms) before trial-averaging. b. The Grid Evaluation Task. The participant attempted to move the cursor (white circle) to the cued target (green square) and click on it. c. Location of every click that was performed, relative to the current trial's cued target (central black square), during blocks with the 6x6 grid (left) and with the 14x14 grid (right). Small gray squares indicate where the cursor began each trial, relative to the trial's cued target. d. Timeline of the seventeen 3-minute Grid Evaluation Task blocks. Each point represents a trial and indicates the trial length and trial result (success or failure). Each gray region is a single block. e. T15's online bitrate performance in the Grid Evaluation Task, compared to the highest-performing prior dPCG cursor control study. Circles are individual blocks (only shown for this study). Triangles are averages per participant (from this study and others).
Figure 3. The dorsal 6v array contributed the most to cursor velocity decoding. a. Zoomed-in view of T15's array locations shown in Fig. 1b. Triangles indicate arrays providing the best decoding performance for speech (orange) and for cursor control (crimson). The best speech arrays were identified in Card et al. 2024. b. Offline analysis of cursor decoders trained using neural features from
Figure 5. Participant T15 controlled his personal desktop computer with the cursor BCI. a. Over-the-shoulder view of T15 neurally controlling the mouse cursor on his personal computer. The red arrow points to the cursor. b-c. Screenshots of T15's personal computer usage, with cursor trajectories (pink lines) overlaid. Cursor position every 250 ms (circles) and clicks (stars) are also drawn. In b., T15 first opened the Settings application (left) and then switched his computer to Light Mode (right). In c., T15 opened Netflix from Chrome's New Tab menu (top) and then selected his Netflix user (bottom).
Speech motor cortex enables BCI cursor control and click

November 2024 · 27 Reads

Decoding neural activity from ventral (speech) motor cortex is known to enable high-performance speech brain-computer interface (BCI) control. It was previously unknown whether this brain area could also enable computer control via neural cursor and click, as is typically associated with dorsal (arm and hand) motor cortex. We recruited a clinical trial participant with ALS and implanted intracortical microelectrode arrays in ventral precentral gyrus (vPCG), which the participant used to operate a speech BCI in a prior study. We developed a cursor BCI driven by the participant's vPCG neural activity, and evaluated performance on a series of target selection tasks. The reported vPCG cursor BCI enabled rapidly-calibrating (40 seconds), accurate (2.90 bits per second) cursor control and click. The participant also used the BCI to control his own personal computer independently. These results suggest that placing electrodes in vPCG to optimize for speech decoding may also be a viable strategy for building a multi-modal BCI which enables both speech-based communication and computer control via cursor and click.
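The 2.90 bits per second figure is a throughput measure for grid target-selection tasks. A minimal sketch of the "achieved bitrate" definition commonly used in such studies (Nuyujukian et al., 2015) follows; whether this study uses exactly this formula is an assumption, and the example numbers are invented.

```python
# Minimal sketch: achieved bitrate for a grid task with N targets,
# penalizing incorrect selections. Illustrative numbers only.
import math

def achieved_bitrate(n_targets: int, correct: int, incorrect: int, seconds: float) -> float:
    """Bits/s: log2(N - 1) * max(correct - incorrect, 0) / time."""
    return math.log2(n_targets - 1) * max(correct - incorrect, 0) / seconds

# Example: a hypothetical 3-minute block on the 6x6 grid (36 targets).
print(f"{achieved_bitrate(36, 40, 4, 180.0):.2f} bits/s")
```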


A mosaic of whole-body representations in human motor cortex

September 2024 · 46 Reads · 2 Citations

Understanding how the body is represented in motor cortex is key to understanding how the brain controls movement. The precentral gyrus (PCG) has long been thought to contain largely distinct regions for the arm, leg and face (represented by the “motor homunculus”). However, mounting evidence has begun to reveal a more intermixed, interrelated and broadly tuned motor map. Here, we revisit the motor homunculus using microelectrode array recordings from 20 arrays that broadly sample PCG across 8 individuals, creating a comprehensive map of human motor cortex at single neuron resolution. We found whole-body representations throughout all sampled points of PCG, contradicting traditional leg/arm/face boundaries. We also found two speech-preferential areas with a broadly tuned, orofacial-dominant area in between them, previously unaccounted for by the homunculus. Throughout PCG, movement representations of the four limbs were interlinked, with homologous movements of different limbs (e.g., toe curl and hand close) having correlated representations. Our findings indicate that, while the classic homunculus aligns with each area’s preferred body region at a coarse level, at a finer scale, PCG may be better described as a mosaic of functional zones, each with its own whole-body representation.
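One way to quantify the "correlated representations" of homologous movements described above is to correlate condition-averaged firing-rate vectors across neurons. The sketch below does this on synthetic tuning data; it is illustrative only, not the study's analysis.

```python
# Minimal sketch (synthetic data, not the study's recordings): correlate
# per-neuron mean rate changes for two homologous movements, e.g. a
# "toe curl" condition vs. a "hand close" condition.
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 100

# Hypothetical tuning vectors with a shared whole-body component.
shared = rng.normal(size=n_neurons)
toe_curl = shared + 0.5 * rng.normal(size=n_neurons)
hand_close = shared + 0.5 * rng.normal(size=n_neurons)

r = np.corrcoef(toe_curl, hand_close)[0, 1]
print(f"Correlation between homologous movement representations: r = {r:.2f}")
```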


An instantaneous voice synthesis neuroprosthesis

August 2024 · 48 Reads · 2 Citations

Brain-computer interfaces (BCIs) have the potential to restore communication to people who have lost the ability to speak due to neurological disease or injury. BCIs have been used to translate the neural correlates of attempted speech into text. However, text communication fails to capture the nuances of human speech such as prosody, intonation and immediately hearing one's own voice. Here, we demonstrate a "brain-to-voice" neuroprosthesis that instantaneously synthesizes voice with closed-loop audio feedback by decoding neural activity from 256 microelectrodes implanted into the ventral precentral gyrus of a man with amyotrophic lateral sclerosis and severe dysarthria. We overcame the challenge of lacking ground-truth speech for training the neural decoder and were able to accurately synthesize his voice. Along with phonemic content, we were also able to decode paralinguistic features from intracortical activity, enabling the participant to modulate his BCI-synthesized voice in real-time to change intonation, emphasize words, and sing short melodies. These results demonstrate the feasibility of enabling people with paralysis to speak intelligibly and expressively through a BCI.
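Schematically, an instantaneous brain-to-voice pipeline bins neural features at a fixed interval and maps each bin to acoustic parameters for immediate synthesis. The sketch below shows that loop with a placeholder linear decoder and invented dimensions (the 25 acoustic parameters are an assumption); the paper's trained decoder and vocoder are not reproduced here.

```python
# Minimal sketch, under stated assumptions: 10-ms bins of neural features
# are mapped frame-by-frame to acoustic parameters, which in the real
# system would drive a low-latency vocoder played back to the participant.
import numpy as np

BIN_MS = 10
N_CHANNELS = 256          # electrodes, per the abstract
N_ACOUSTIC = 25           # hypothetical number of vocoder parameters

rng = np.random.default_rng(2)
W = rng.normal(scale=0.01, size=(N_ACOUSTIC, N_CHANNELS))  # placeholder decoder

def decode_bin(features: np.ndarray) -> np.ndarray:
    """Map one 10-ms bin of neural features to acoustic parameters."""
    return W @ features

# Simulated stream: 100 bins (1 s) of binned threshold-crossing counts.
for _ in range(100):
    neural_bin = rng.poisson(lam=2.0, size=N_CHANNELS).astype(float)
    acoustic = decode_bin(neural_bin)

print("last acoustic frame shape:", acoustic.shape)
```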


An Accurate and Rapidly Calibrating Speech Neuroprosthesis

August 2024 · 33 Reads · 38 Citations

The New England Journal of Medicine

Background: Brain-computer interfaces can enable communication for people with paralysis by transforming cortical activity associated with attempted speech into text on a computer screen. Communication with brain-computer interfaces has been restricted by extensive training requirements and limited accuracy. Methods: A 45-year-old man with amyotrophic lateral sclerosis (ALS) with tetraparesis and severe dysarthria underwent surgical implantation of four microelectrode arrays into his left ventral precentral gyrus 5 years after the onset of the illness; these arrays recorded neural activity from 256 intracortical electrodes. We report the results of decoding his cortical neural activity as he attempted to speak in both prompted and unstructured conversational contexts. Decoded words were displayed on a screen and then vocalized with the use of text-to-speech software designed to sound like his pre-ALS voice. Results: On the first day of use (25 days after surgery), the neuroprosthesis achieved 99.6% accuracy with a 50-word vocabulary. Calibration of the neuroprosthesis required 30 minutes of cortical recordings while the participant attempted to speak, followed by subsequent processing. On the second day, after 1.4 additional hours of system training, the neuroprosthesis achieved 90.2% accuracy using a 125,000-word vocabulary. With further training data, the neuroprosthesis sustained 97.5% accuracy over a period of 8.4 months after surgical implantation, and the participant used it to communicate in self-paced conversations at a rate of approximately 32 words per minute for more than 248 cumulative hours. Conclusions: In a person with ALS and severe dysarthria, an intracortical speech neuroprosthesis reached a level of performance suitable to restore conversational communication after brief training. (Funded by the Office of the Assistant Secretary of Defense for Health Affairs and others; BrainGate2 ClinicalTrials.gov number, NCT00912041.)
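The accuracy figures above are conventionally derived from word error rate (WER), with accuracy = 1 - WER, where WER is the word-level Levenshtein distance between the decoded and intended sentences divided by the intended sentence length. A minimal sketch:

```python
# Minimal sketch: word error rate via word-level edit distance.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[-1][-1] / len(ref)

# One missing word out of six: WER ~ 16.7%, accuracy ~ 83.3%.
print(word_error_rate("i want to watch a movie", "i want to watch movie"))
```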


Figure 1. Real-time neural decoding of attempted speech. a, Diagram of the brain-to-text speech BCI system. Neural activity is measured from the left ventral precentral gyrus using four 64-electrode Utah arrays and processed into neural features (threshold crossings and spikeband power), temporally binned, and
Figure 3. Offline decoding analyses indicate rapidly-calibrating, stable and generalizable decoding. a, Offline recreation of "day 1" performance for 50-word (red) and 125,000-word (blue) vocabularies with optimal decoding hyperparameters. Word error rate is plotted as a function of the number of training sentences. b, Decoding stability over time with no recalibration or model fine-tuning. Decoders were trained on data from 5 (black) or 10 (gray) sequential sessions, and then evaluated on all future evaluation blocks. Word error rate is
Figure 4. Decoding attempted speech during open conversations. a, Photograph of the participant's BCI interface during self-initiated speech. Sentence construction initiates when any phoneme's RNN output probability surpasses that of silence and concludes after 6 seconds of speech inactivity, or upon SP2's optional activation of an on-screen button via eye tracking. After the decoded sentence was finalized, SP2 used the on-screen confirmation buttons to indicate if the decoded sentence was correct. This photo has been cropped to not include the participant, as per medRxiv policy. b, Sample transcript of a conversation between SP2 and a family member, on the second day of use. c, Evaluating speech decoding accuracy in open conversations (n=925 sentences with known true labels). Average word error rate was 3.7% (95% CI: [3.3%, 4.3%]). d, Timeline of two example sentences showing the most probable phoneme at each time step, as indicated by RNN outputs. Gray intervals indicate the highest output probability is silence, while colored segments show the
An accurate and rapidly calibrating speech neuroprosthesis

December 2023 · 315 Reads · 5 Citations

Brain-computer interfaces (BCIs) can provide a rapid, intuitive way for people with paralysis to communicate by transforming the cortical activity associated with attempted speech into text. Despite recent advances, communication with BCIs has been restricted by requiring many weeks of training data and by inadequate decoding accuracy. Here we report a speech BCI that decodes neural activity from 256 microelectrodes in the left precentral gyrus of a person with ALS and severe dysarthria. This system achieves daily word error rates as low as 1% (2.66% average; 9 times fewer errors than previous state-of-the-art speech BCIs) using a comprehensive 125,000-word vocabulary. On the first day of system use, following only 30 minutes of attempted speech training data, the BCI achieved 99.6% word accuracy with a 50-word vocabulary. On the second day of use, we increased the vocabulary size to 125,000 words and, after an additional 1.4 hours of training data, the BCI achieved 90.2% word accuracy. At the beginning of subsequent days of use, the BCI reliably achieved 95% word accuracy, and adaptive online fine-tuning continuously improved this accuracy throughout the day. Our participant used the speech BCI in self-paced conversation for over 32 hours to communicate with friends, family, and colleagues (both in person and over video chat). These results indicate that speech BCIs have reached a level of performance suitable to restore naturalistic communication to people living with severe dysarthria.


Fig. 2. Top 5 words in each topic. These words were selected based on their proportion relative to all words within their respective topic. The x-axis measures this proportion. Note that the undefined topic contains many different utterances; thus, all words in this topic have low Term Frequency-Inverse Document Frequency (TF-IDF) scores.
Discovering Behavioral Patterns Using Conversational Technology for In-Home Health and Well-Being Monitoring

November 2023 · 119 Reads · 10 Citations

IEEE Internet of Things Journal

Advancements in conversational AI have created unparalleled opportunities to promote the independence and well-being of older adults, including people living with dementia (PLWD). However, conversational agents have yet to demonstrate a direct impact in supporting target populations at home, particularly with long-term user benefits and clinical utility. We introduce an infrastructure fusing in-home activity data captured by Internet of Things (IoT) technologies with voice interactions using conversational technology (Amazon Alexa). We collect 3103 person-days of voice and environmental data across 14 households with PLWD to identify behavioural patterns. Interactions include an automated well-being questionnaire and 10 topics of interest, identified using topic modelling. Although a significant decrease in conversational technology usage was observed after the novelty phase across the cohort, steady-state data acquisition for modelling was sustained. We analyse household activity sequences preceding or following Alexa interactions through pairwise similarity and clustering methods. Our analysis demonstrates the capability to identify individual behavioural patterns, changes in those patterns and the corresponding time periods. We further report that households with PLWD continued using Alexa following clinical events (e.g., hospitalisations), which offers a compelling opportunity for proactive health and well-being data gathering related to medical changes. Results demonstrate the promise of conversational AI in digital health monitoring for ageing and dementia support and offer a basis for tracking health and deterioration as indicated by household activity, which can inform healthcare professionals and relevant stakeholders for timely interventions. Future work will use the bespoke behavioural patterns extracted to create more personalised AI conversations.
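As an illustration of the topic-modelling step (cf. Fig. 2 above, which shows the top 5 words per topic), the sketch below fits latent Dirichlet allocation to a toy set of utterances and prints each topic's top words. The corpus and the choice of scikit-learn's LDA are assumptions; the paper does not specify its implementation here.

```python
# Minimal sketch (toy corpus, not the study's transcripts): LDA topic
# modelling of voice-assistant utterances, with top-5 words per topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

utterances = [
    "play some music", "play my favourite song", "what is the weather today",
    "weather forecast for tomorrow", "set a timer for ten minutes",
    "set an alarm for eight", "play the radio", "is it going to rain today",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(utterances)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

vocab = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[::-1][:5]  # top-5 words by within-topic weight
    print(f"topic {k}:", ", ".join(vocab[i] for i in top))
```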




Citations (23)


... The finding that the cursor BCI could be controlled with non-speech, non-orofacial imagery aligns well with accumulating evidence that the whole body is represented in a distributed fashion along the precentral gyrus. [24][25][26][27] Furthermore, this study exemplifies how to leverage this modern view of motor cortex to build multi-modal implanted BCIs using fewer arrays and implant sites. We note, however, that cursor velocity decoding relied heavily on a single array (in the dorsal aspect of area 6v). ...

Reference:

Speech motor cortex enables BCI cursor control and click
A mosaic of whole-body representations in human motor cortex
  • Citing Preprint
  • September 2024

... 5,6,[8][9][10][11][12] In recent years, speech BCIs have emerged as a viable path toward restoring fast, naturalistic communication for people with paralysis by instead decoding attempted speech movements. [18][19][20][21][22][23] In contrast to hand-based BCIs, speech BCIs have typically been driven by neural activity in sensorimotor cortical areas further ventral such as middle precentral gyrus (midPCG) and ventral precentral gyrus (vPCG) which are most often associated with production of orofacial movements and speech. 18,20-22 Speech BCIs far outperform cursor BCIs with regard to communication rate, 10,20 but are not as well-suited for general-purpose computer control. ...

An instantaneous voice synthesis neuroprosthesis

... Another breakthrough involved enabling an ALS patient with severe dysarthria to communicate using text-to-speech brain implant technology. By surgically implanting four microelectrode arrays into the patient's left ventral precentral gyrus, scientists restored the patient's ability to produce speech [5]. ...

An Accurate and Rapidly Calibrating Speech Neuroprosthesis
  • Citing Article
  • August 2024

The New England Journal of Medicine

... Intracortical brain-computer interfaces (iBCIs) can restore functional capabilities for people with paralysis by monitoring cortical neural activity and mapping it to an external variable [1,2], such as intended cursor movements, actuations of a robotic effector, handwritten characters, spoken words, and even muscle contractions [3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19]. These devices typically use implanted electrodes to measure spiking activity, which in this work refers to unsorted threshold crossing events consisting primarily of action potentials. ...

An accurate and rapidly calibrating speech neuroprosthesis
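The passage above defines spiking activity as unsorted threshold-crossing events. Below is a minimal sketch of that feature on one channel, using a common convention (threshold at a multiple of the signal's RMS); the multiplier and bin width are typical choices, not values taken from the cited work.

```python
# Minimal sketch: per-bin counts of downward threshold crossings on one
# synthetic voltage channel. Real systems apply this to filtered broadband
# voltage; counts are sparse at a -4.5 x RMS threshold.
import numpy as np

def threshold_crossings(voltage: np.ndarray, fs: float, bin_s: float = 0.02,
                        k: float = -4.5) -> np.ndarray:
    """Count per-bin downward crossings of k * RMS on one channel."""
    thresh = k * np.sqrt(np.mean(voltage ** 2))
    crossed = (voltage[1:] < thresh) & (voltage[:-1] >= thresh)
    bin_len = int(round(fs * bin_s))
    n_bins = crossed.size // bin_len
    return crossed[: n_bins * bin_len].reshape(n_bins, bin_len).sum(axis=1)

rng = np.random.default_rng(3)
v = rng.normal(size=30_000)  # 1 s of synthetic 30 kHz voltage
print(threshold_crossings(v, fs=30_000)[:10])
```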

... With consistent use, these technologies hold potential to become accessible and personalised tools that could ultimately track the trajectory of cognitive status over time through spoken language, alerting clinicians to individuals who may need more comprehensive diagnostic evaluation. Integrating additional in-home behavioural data, comorbidities, and individual health events such as hospitalisations or infections [48,49] could further improve prediction performance and enhance clinical applicability. ...

Discovering Behavioral Patterns Using Conversational Technology for In-Home Health and Well-Being Monitoring

IEEE Internet of Things Journal

... Previous work has demonstrated the feasibility of detecting [3][4][5][6] and classifying [5,[7][8][9][10][11] speech from electrocorticographic (ECoG) [6,[11][12][13] and microelectrode array (MEA) [14] implants in patients with intact speech. In addition, audible speech may also be synthesized from ECoG [15][16][17][18] and MEAs [19]. Recent work has begun to translate these impressive results into recognizing [20][21][22][23] and synthesizing [23] speech in patients who are nearly or entirely unable to speak. ...

Synthesizing Speech by Decoding Intracortical Neural Activity from Dorsal Motor Cortex
  • Citing Conference Paper
  • April 2023

... Acceleration and angular velocity data obtained with inertial motion units (IMUs) during sit-to-stand tests have also been proven useful in providing substantial information on mobility, including the duration of subphases of the test (Lummel et al., 2016), and analysis methods like dynamic time warping (DTW) to measure the variability of motor performance (Ghahramani et al., 2020). Several studies have explored the use of sensor-instrumented sit-to-stand to assess mobility in varied populations (Bochicchio et al., 2023;Forero et al., 2023;Ghahramani et al., 2020;Lummel et al., 2016;Meulemans et al., 2023;Tulipani et al., 2022;Van Lummel et al., 2013;Van Roie et al., 2019;Wairagkar et al., 2022;Zijlstra et al., 2010). Sensor-based performance metrics of monitored sit-to-stand transitions have been shown to better identify people with mobility impairments and fall risk than daily living monitoring (Tulipani et al., 2022). ...

A novel approach for modelling and classifying sit-to-stand kinematics using inertial sensors
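The citing passage above mentions dynamic time warping (DTW) for comparing sit-to-stand movement traces. Below is a minimal textbook DTW distance on synthetic 1-D angular-velocity profiles; it is not the cited authors' pipeline.

```python
# Minimal sketch: O(n*m) dynamic time warping distance between two
# 1-D movement traces, e.g. angular velocity during sit-to-stand trials.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 1, 100)
trial_1 = np.sin(np.pi * t)          # idealized transition profile
trial_2 = np.sin(np.pi * t ** 1.2)   # same movement, warped in time
print(f"DTW distance between trials: {dtw_distance(trial_1, trial_2):.3f}")
```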

... Finally, we also see the use of social robots to engage intergenerational users in various forms of asynchronous communication. Social robots can monitor the users' health condition and persuade users to lead healthier lifestyles (Lima et al., 2022) and learn from previous conversations and adapt their behavior over time (Wairagkar et al., 2021). Such monitoring, learning and adaptive capabilities can further enable the social robot to facilitate remote communication between family members over an extended period of time and whenever appropriate for the individuals. ...

Conversational artificial intelligence and affective social robot for monitoring health and well‐being of people with dementia
  • Citing Article
  • December 2021

... Communication robots (CRs), which provide interactive entertainment through conversations, singing, and other activities, have emerged as promising tools in this regard [8][9][10]. These robots not only engage users in enjoyable activities but also have the potential to encourage regular health monitoring [11,12]. The integration of CRs into daily life represents an innovative approach to addressing the lack of motivation often observed in this demographic in terms of maintaining traditional self-care practices. ...

Conversational Affective Social Robots for Ageing and Dementia Support

IEEE Transactions on Cognitive and Developmental Systems

... Miko is a robot designed for children that uses Generative Artificial Intelligence and can operate in multiple languages, including Spanish and English. Additionally, Miko displays emotive facial expressions in response to interactions and features a touchscreen (Wairagkar et al., 2022). In our case, we used Miko³. ...

Emotive Response to a Hybrid-Face Robot and Translation to Consumer Social Robots

IEEE Internet of Things Journal