Giovanni Marco Di Liberto

Trinity College Dublin | TCD · School of Computer Science and Statistics

PhD

About

49 Publications · 8,761 Reads
1,581 Citations
Additional affiliations
July 2021 - present
Trinity College Dublin
  • Assistant Professor
January 2021 - June 2021
University College Dublin
  • Postdoctoral Researcher
January 2021 - June 2021
Trinity College Dublin
  • Postdoctoral Researcher
Education
October 2013 - June 2017
October 2011 - October 2013
University of Padova
  • Engineering
October 2008 - July 2011
University of Padova
  • Engineering

Publications (49)
Article
Full-text available
Here we replicate a neural tracking paradigm, previously published with infants (aged 4 to 11 months), with adult participants, in order to explore potential developmental similarities and differences in entrainment. Adults listened and watched passively as nursery rhymes were sung or chanted in infant-directed speech. Whole-head EEG (128 channels)...
Article
An auditory-visual speech benefit, the benefit that visual speech cues bring to auditory speech perception, is experienced from early on in infancy and continues to be experienced to an increasing degree with age. While there is both behavioural and neurophysiological evidence for children and adults, only behavioural evidence exists for infants –...
Article
Full-text available
Driving a car requires high cognitive demands, from sustained attention to perception and action planning. Recent research investigated the neural processes reflecting the planning of driving actions, aiming to better understand the factors leading to driving errors and to devise methodologies to anticipate and prevent such errors by monitoring the...
Article
Full-text available
Cognitive neuroscience, in particular research on speech and language, has seen an increase in the use of linear modeling techniques for studying the processing of natural, environmental stimuli. The availability of such computational tools has prompted similar investigations in many clinical domains, facilitating the study of cognitive and sensory...
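The linear modelling techniques mentioned in this line of work typically take the form of temporal response functions (TRFs) fitted with regularised (ridge) regression. As a minimal illustrative sketch, not the authors' actual pipeline, a forward TRF relating a stimulus feature (e.g. the speech envelope) to a single recording channel could be estimated as follows (variable names and the simulated data are hypothetical):

```python
import numpy as np

def lagged_design(stim, lags):
    """Design matrix whose columns are time-shifted copies of the stimulus."""
    X = np.zeros((len(stim), len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stim[:len(stim) - lag]
        else:
            X[:lag, j] = stim[-lag:]
    return X

def fit_trf(stim, eeg, lags, lam=1.0):
    """Ridge-regression estimate of a forward TRF: eeg ~ X @ w."""
    X = lagged_design(stim, lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)

# Illustrative use: recover a known response kernel from simulated data.
rng = np.random.default_rng(0)
stim = rng.standard_normal(10_000)            # stand-in for a speech envelope
true_kernel = np.array([0.0, 0.5, 1.0, 0.3])  # hypothetical neural response shape
eeg = np.convolve(stim, true_kernel)[:len(stim)] + 0.1 * rng.standard_normal(10_000)
w = fit_trf(stim, eeg, lags=range(4), lam=1e-2)  # w should approximate true_kernel
```

In practice the same estimator generalises to multichannel recordings and multiple stimulus features; the ridge parameter is usually chosen by cross-validation.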
Article
The amplitude envelope of speech carries crucial low-frequency acoustic information that assists linguistic decoding at multiple time scales. Neurophysiological signals are known to track the amplitude envelope of adult-directed speech (ADS), particularly in the theta-band. Acoustic analysis of infant-directed speech (IDS) has revealed significantl...
Article
Musical imagery is the voluntary internal hearing of music in the mind without the need for physical action or external stimulation. Numerous studies have already revealed brain areas activated during imagery. However, it remains unclear to what extent imagined music responses preserve the detailed temporal dynamics of the acoustic stimulus envelop...
Article
During music listening, humans routinely acquire the regularities of the acoustic sequences and use them to anticipate and interpret the ongoing melody. Specifically, in line with this predictive framework, it is thought that brain responses during such listening reflect a comparison between the bottom-up sensory responses and top-down prediction s...
Article
Full-text available
Music perception requires the human brain to process a variety of acoustic and music-related properties. Recent research used encoding models to tease apart and study the various cortical contributors to music perception. To do so, such approaches study temporal response functions that summarise the neural activity over several minutes of data. Her...
Preprint
Cognitive neuroscience has seen an increase in the use of linear modelling techniques for studying the processing of natural, environmental stimuli. The availability of such computational tools has prompted similar investigations in many clinical domains, facilitating the study of cognitive and sensory deficits within an ecologically relevant conte...
Article
Full-text available
Seeing a speaker's face benefits speech comprehension, especially in challenging listening conditions. This perceptual benefit is thought to stem from the neural integration of visual and auditory speech at multiple stages of processing, whereby movement of a speaker's face provides temporal cues to auditory cortex, and articulatory information fro...
Article
Full-text available
Healthy ageing leads to changes in the brain that impact upon sensory and cognitive processing. It is not fully clear how these changes affect the processing of everyday spoken language. Prediction is thought to play an important role in language comprehension, where information about upcoming words is pre-activated across multiple representational...
Article
Full-text available
Acquiring a new language requires individuals to simultaneously and gradually learn linguistic attributes on multiple levels. Here, we investigated how this learning process changes the neural encoding of natural speech by assessing the encoding of the linguistic feature hierarchy in second-language listeners. Electroencephalography (EEG) signals w...
Preprint
Full-text available
The amplitude envelope of speech carries crucial low-frequency acoustic information that assists linguistic decoding at multiple time scales. Neurophysiological signals are known to track the amplitude envelope of adult-directed speech (ADS), particularly in the theta-band. Acoustic analysis of infant-directed speech (IDS) has revealed significantl...
Preprint
Full-text available
Acquiring a new language requires a simultaneous and gradual learning of multiple levels of linguistic attributes. Here, we investigated how this process changes the neural encoding of natural speech by assessing the encoding of the linguistic feature hierarchy in second-language listeners. Electroencephalography (EEG) signals were recorded from na...
Preprint
Full-text available
Seeing a speaker’s face benefits speech comprehension, especially in challenging listening conditions. This perceptual benefit is thought to stem from the neural integration of visual and auditory speech at multiple stages of processing, whereby movement of a speaker’s face provides temporal cues to auditory cortex, and articulatory information fro...
Preprint
Full-text available
Healthy ageing leads to changes in the brain that impact upon sensory and cognitive processing. It is not fully clear how these changes affect the processing of everyday spoken language. Prediction is thought to play an important role in language comprehension, where information about upcoming words is pre-activated across multiple representational...
Article
Full-text available
Human engagement in music rests on underlying elements such as the listeners’ cultural background and interest in music. These factors modulate how listeners anticipate musical events, a process inducing instantaneous neural responses as the music confronts these expectations. Measuring such neural correlates would represent a direct window into h...
Preprint
Full-text available
Human engagement in music rests on underlying elements such as the listeners' cultural background and general interest in music, all shaping the way music is processed in the brain and perceived. Crucially, these factors modulate how listeners anticipate musical events, a process inducing instantaneous neural responses as the music confronts these...
Article
Humans comprehend speech despite the various challenges such as mispronunciation and noisy environments. Our auditory system is robust to these thanks to the integration of the sensory input with prior knowledge and expectations built on language-specific regularities. One such regularity regards the permissible phoneme sequences, which determine t...
Article
Brain signals recorded with electroencephalography (EEG), magnetoencephalography (MEG) and related techniques often have poor signal-to-noise ratio due to the presence of multiple competing sources and artifacts. A common remedy is to average over repeats of the same stimulus, but this is not applicable for temporally extended stimuli that are pres...
Preprint
Full-text available
Perceptual processes can be probed by fitting stimulus-response models that relate measured brain signals such as electroencephalography (EEG) to the stimuli that evoke them. These models have also found application for the control of devices such as hearing aids. The quality of the fit, as measured by correlation, classification, or information ra...
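Fit quality for such stimulus-response models is commonly summarised as the Pearson correlation between the measured and predicted signals on held-out data. A minimal sketch of a backward (decoding) model scored this way, using simulated data and hypothetical variable names rather than the paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated multichannel recording: each "EEG" channel is a noisy copy of the stimulus.
n, n_ch = 6000, 8
stim = rng.standard_normal(n)
mixing = rng.standard_normal(n_ch)
eeg = np.outer(stim, mixing) + 2.0 * rng.standard_normal((n, n_ch))

# Fit a backward (decoding) model on the first half of the data.
half = n // 2
X_tr, X_te = eeg[:half], eeg[half:]
y_tr, y_te = stim[:half], stim[half:]
lam = 1.0  # ridge regularisation strength
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_ch), X_tr.T @ y_tr)

# Evaluate on the held-out second half: Pearson correlation between the
# reconstructed and actual stimulus is the usual fit-quality metric.
y_hat = X_te @ w
r = np.corrcoef(y_hat, y_te)[0, 1]
```

Evaluating on data not used for fitting matters here: in-sample correlation overstates model quality, which is why cross-validated correlation is the standard reporting choice.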
Article
Brain data recorded with electroencephalography (EEG), magnetoencephalography (MEG) and related techniques often have poor signal-to-noise ratios due to the presence of multiple competing sources and artifacts. A common remedy is to average responses over repeats of the same stimulus, but this is not applicable for temporally extended stimuli that...
Article
Full-text available
This study assessed cortical tracking of temporal information in incoming natural speech in seven-month-old infants. Cortical tracking refers to the process by which neural activity follows the dynamic patterns of the speech input. In adults, it has been shown to involve attentional mechanisms and to facilitate effective speech encoding. However, i...
Preprint
Humans comprehend speech despite the various challenges of real-world environments, such as loud noise and mispronunciation. Our auditory system is robust to these thanks to the integration of the upcoming sensory input with prior knowledge and expectations built on language-specific regularities. One such regularity regards the permissible phoneme...
Article
Humans comprehend speech despite the various challenges of real-world environments, such as loud noise and mispronunciation. Our auditory system is robust to these thanks to the integration of the upcoming sensory input with prior knowledge and expectations built on language-specific regularities. One such regularity regards the permissible phoneme...
Preprint
Full-text available
Brain signals recorded with electroencephalography (EEG), magnetoencephalography (MEG) and related techniques often have poor signal-to-noise ratio due to the presence of multiple competing sources and artifacts. A common remedy is to average over repeats of the same stimulus, but this is not applicable for temporally extended stimuli that are pres...
Article
Developmental dyslexia is a multifaceted disorder of learning primarily manifested by difficulties in reading, spelling, and phonological processing. Neural studies suggest that phonological difficulties may reflect impairments in fundamental cortical oscillatory mechanisms. Here we examine cortical mechanisms in children (6-12 years of age) with o...
Article
Full-text available
In real-world environments, humans comprehend speech by actively integrating prior knowledge and expectations with sensory input. Recent studies have revealed effects of prior information in temporal and frontal cortical areas and have suggested that these effects are underpinned by enhanced encoding of speech-specific features, rather than a b...
Article
Developmental dyslexia is a multifaceted disorder of learning primarily manifested by difficulties in reading, spelling, and phonological processing. Neural studies suggest that phonological difficulties may reflect impairments in fundamental cortical oscillatory mechanisms. Here we examine cortical mechanisms in children (6-12 years of age) with o...
Article
People routinely hear and understand speech at rates of 120-200 words per minute [1, 2]. Thus, speech comprehension must involve rapid, online neural mechanisms that process words' meanings in an approximately time-locked fashion. However, electrophysiological evidence for such time-locked processing has been lacking for continuous speech. Although...
Article
Full-text available
The relation between a stimulus and the evoked brain response can shed light on perceptual processes within the brain. Signals derived from this relation can also be harnessed to control external devices for Brain Computer Interface (BCI) applications. While the classic event-related potential (ERP) is appropriate for isolated stimuli, more sophist...
Preprint
Full-text available
The relation between a stimulus and the evoked brain response can shed light on perceptual processes within the brain. Signals derived from this relation can also be harnessed to control external devices for Brain Computer Interface (BCI) applications. While the classic event-related potential (ERP) is appropriate for isolated stimuli, more sophis...
Article
Speech perception may be underpinned by a hierarchical cortical system, which attempts to match "external" incoming sensory inputs with "internal" top-down predictions. Prior knowledge modulates internal predictions of an upcoming stimulus and exerts its effects in temporal and inferior frontal cortex. Here, we used source-space magnetoencephalogra...
Preprint
Full-text available
Understanding natural speech requires that the human brain convert complex spectrotemporal patterns of acoustic input into meaning in a rapid manner that is reasonably tightly time-locked to the incoming speech signal. However, neural evidence for such a time-locked process has been lacking. Here, we sought such evidence by using a computational mo...
Article
Speech is central to human life. As such, any delay or impairment in receptive speech processing can have a profoundly negative impact on the social and professional life of a person. Thus, being able to assess the integrity of speech processing in different populations is an important goal. Current standardized assessment is mostly based on psycho...
Article
Full-text available
Speech is a multisensory percept, comprising an auditory and visual component. While the content and processing pathways of audio speech have been well characterized, the visual component is less well understood. In this work, we expand current methodologies using system identification to introduce a framework that facilitates the study of visual s...
Article
Full-text available
Understanding how brains process sensory signals in natural environments is one of the key goals of 21st century neuroscience. While brain imaging and invasive electrophysiology will play key roles in this endeavor, there is also an important role to be played by noninvasive, macroscopic techniques with high temporal resolution such as electro- and...
Article
Full-text available
In many under-resourced languages it is possible to find text, and it is possible to find speech, but transcribed speech suitable for training automatic speech recognition (ASR) is unavailable. In the absence of native transcripts, this paper proposes the use of a probabilistic transcript: a probability mass function over possible phonetic transcri...
Article
Full-text available
Speech comprehension is improved by viewing a speaker's face, especially in adverse hearing conditions, a principle known as inverse effectiveness. However, the neural mechanisms that help to optimize how we integrate auditory and visual speech in such suboptimal conversational environments are not yet fully understood. Using human EEG...
Chapter
Full-text available
The human ability to understand speech across an enormous range of listening conditions is underpinned by a hierarchical auditory processing system whose successive stages process increasingly complex attributes of the acoustic input. In order to produce a categorical perception of words and phonemes, it has been suggested that, while earlier areas...
Article
The human ability to understand speech is underpinned by a hierarchical auditory system whose successive stages process increasingly complex attributes of the acoustic input. It has been suggested that to produce categorical speech perception, this system must elicit consistent neural responses to speech tokens (e.g., phonemes) despite variations i...
Article
Complete tree search is a highly effective method for tackling MIP problems, and over the years, a plethora of branching heuristics have been introduced to further refine the technique for varying problems. Recently, portfolio algorithms have taken the process a step further, trying to predict the best heuristic for each instance at hand. However,...

Projects (2)
Project
This project is part of Mangiacotti's PhD (Padua University) in collaboration with Middlesex University and in partnership with MHA (Methodist Homes Association). In this project, we investigate the cognitive, physiological (Cortisol/DHEA; respiratory sinus arrhythmia), and behavioural benefits of four months of one-to-one music therapy (MT) activities versus one-to-one storytelling activities. The storytelling protocol was developed to match the use of improvisational techniques and the mood-matching approach used in the MT activities. Preliminary results provided robust evidence confirming the effectiveness of MT interventions for ageing adults with cognitive decline living in care homes.
Project
A new project at the MCC Lab, funded by the Dunhill Medical Trust.
PI: F. Franco (Middlesex University London). Main Researcher: A. Mangiacotti (Middlesex University London). Co-Is: M. Biasutti (Padua University), E. Chinellato (Middlesex University London), G. Di Liberto (ENS, Paris), G. Gabai (Padua University), M.H. Hsu (MHA & Anglia Ruskin University), M. Van Puyvelde (Brussels), E. Ward (Middlesex University London).
We will shortly recruit a fully funded PhD student in robotics, working under the supervision of Dr Eris Chinellato (e.chinellato@mdx.ac.uk) for Study 3 (see below).
Overall, MusiCare aims to provide care homes, communities and policy-makers with clear guidelines concerning the utility, suitability and cost-effectiveness of 5-month Music Therapy (MT) interventions (one-to-one vs small-group) as a prevention/rehabilitation method suitable for social prescribing and support for positive ageing. It aims to:
[Ai] Provide music therapists with robust protocols, new tests specifically designed to work through musical tasks (Music Cognitive Test, Mangiacotti et al., 2020, under review), and a platform of robotic technology to enrich their practice and monitor clients in between sessions.
[Aii] Provide scholars and practitioners with a range of objective measures to select from, depending on their needs, in order to evaluate MT interventions in ageing.
[Aiii] Empower care-home staff with a new active role in assisting rehabilitative activities with robotic technologies.
[Aiv] Facilitate inter-generational communication between families and ageing relatives through interaction with robotic technologies stimulating cognition and well-being.
[Av] Increase public awareness about healthy ageing, and arts & wellbeing.
From the above, the following objectives are addressed:
[Oi] Identify a consistent set of convergent measures for the reliable assessment of cognition and well-being in MT studies, integrating psychological measures with biomarkers.
[Oii] Implement robust MT protocols benefitting cognitive function and well-being in ageing individuals with varying cognitive ability.
[Oiii] Compare the outcomes of one-to-one and small-group MT interventions as a function of participants’ cognitive abilities (ranging from healthy ageing to severe impairment).
[Oiv] Devise a robotic platform associated with MT to facilitate therapists’ and caregivers’ work through novel forms of interaction with ageing individuals, with potential translatability to communities.
Study 1: well-being, behavioural and biomarker measures in healthy 65+ adults attending one-to-one vs small-group music therapy vs control.
Study 2: well-being, behavioural and biomarker measures in 65+ adults living in care homes with mild vs moderate-severe cognitive decline, attending one-to-one vs small-group music therapy vs control.
Study 3: 65+ adults living in care homes, attending a programme of music therapy supplemented fortnightly by a socially assistive robotics (SAR)-enriched intervention supported by caregivers, with the robotic platform providing therapists with continuous monitoring.
The most useful measures that emerge in Studies 1 and 2 will be selected to describe change. In partnership with MHA (Methodist Homes) UK. QT robots from LuxAI.