The human voice is a powerful tool for social communication. In recent years, Artificial Intelligence (AI) has fostered the development of advanced voice systems able to infer considerable implicit information from a speaker’s voice, such as emotional and mental states, mood and personality traits. These systems are greatly increasing the usability of human–computer interfaces by using voice to infer speaker-related characteristics, with concrete application opportunities in the coming years in healthcare, robotics and multimodal interaction.
Individuals with schizophrenia (SZ) tend to present voice atypicalities in the form of poverty of speech, increased pauses, and distinctive tone and intensity of voice. These atypicalities are related to core clinical symptoms and to the social impairment associated with SZ, and could thus constitute a direct window into the cognitive, emotional and social components of the disorder. However, our present understanding of voice abnormalities in SZ remains poor, limited by the lack of comprehensive models and systematic approaches to the study of voice production (see https://www.biorxiv.org/content/10.1101/583815v4).
Recent advances in voice technology may lead the way to a revolution in the study of voice disorders. They may make it possible to disentangle the affective, cognitive and social mechanisms responsible for voice atypicalities, assist clinicians in the diagnosis and monitoring of these disorders, and enhance their capability to capture the complex relationship between vocal behaviour, emotion regulation and clinical features.
The project “Modeling vocal expression in schizophrenia (MOVES)” aims to provide a solid understanding of the implications of atypical voice patterns in SZ. Through the application of machine learning and signal processing technologies (AI), MOVES aims to provide a comprehensive account of the mechanisms underlying voice atypicalities, assess their impact on clinical evaluations, and lay the foundations for more reliable, evidence-based screening tools.
The project aims to foster multi-centric collaborations between the Interacting Minds Centre (AU), the University of Turin (UNITO), and a network of clinical researchers across Europe, in order to overcome important limitations of this research field, such as the need for cross-linguistic studies, larger datasets, and open, collaborative research.
MOVES pioneers a new area of research at the intersection between cognitive and clinical neuroscience, psychiatry, computational science and AI. An innovative aspect of the project is the intention to translate recent AI technological advances into clinical settings, to improve the way we conceptualize, assess and monitor voice disorders in SZ.