Lab

Tomasz Wolak's lab


Featured research (15)

Since childhood, we experience speech as a combination of audio and visual signals, with visual cues particularly beneficial in difficult auditory conditions. This study investigates an alternative multisensory context of speech, namely audio-tactile, which could prove beneficial for rehabilitation in the hearing-impaired population. We show improved understanding of distorted speech in background noise when it is combined with low-frequency, speech-extracted vibrotactile stimulation delivered to the fingertips. The quick effect might be related to the fact that both auditory and tactile signals contain the same type of information. Changes in functional connectivity due to audio-tactile speech training are primarily observed in the visual system, including early visual regions, the lateral occipital cortex, the middle temporal motion area, and the extrastriate body area. These effects, despite the lack of visual input during the task, possibly reflect automatic involvement of areas supporting lip-reading and spatial aspects of language, such as gesture observation, in difficult acoustic conditions. For audio-tactile integration, we show increased connectivity of a sensorimotor hub representing the entire body with the parietal system of motor planning based on multisensory inputs, as well as with several visual areas. After training, the sensorimotor connectivity increases with high-order and language-related frontal and temporal regions. Overall, the results suggest that the new audio-tactile speech task activates regions that partially overlap with the established brain network for audio-visual speech processing. This further indicates that neuronal plasticity related to perceptual learning is first built upon an existing structural and functional blueprint for connectivity. Further effects reflect task-specific behaviour related to body and spatial perception, as well as tactile signal processing.
Possibly, a longer training regime is required to strengthen direct pathways between the auditory and sensorimotor brain regions during audio-tactile speech processing.
The ability to identify and resolve conflicts between standard, well-trained behaviors and behaviors required by the current context is an essential feature of cognitive control. To date, no consensus has been reached on the brain mechanisms involved in exerting such control: while some studies identified diverse patterns of activity across different conflicts, other studies reported common resources across conflict tasks, or even across simple tasks devoid of a conflict component. The latter reports attributed the entire activity observed in the presence of conflict to longer time spent on the task (i.e., to the so-called time-on-task effects). Here we used an extended Multi-Source Interference Task (MSIT), which combines Simon and flanker types of interference, to determine shared and conflict-specific mechanisms of conflict resolution in fMRI, and their separability from time-on-task effects. Increases of activity in the dorsal attention network and decreases of activity in the default mode network were largely shared across the tasks and scaled in parallel with increasing reaction times. Importantly, activity in the sensory and sensorimotor cortices, as well as in the posterior medial frontal cortex (pMFC), a key region implicated in conflict processing, could not be exhaustively explained by the time-on-task effects.
Highlights
  • Flanker and Simon conflicts activate and deactivate largely the same brain regions.
  • Common activity in DAN and DMN is mostly explained by time-on-task effects.
  • Conflict-specific activity emerges mostly in the sensory cortices.
  • pMFC (incl. dACC and pre-SMA) shows both time-on-task and conflict-related activity.
Previous studies have suggested that parents may support the development of theory of mind (ToM) in their child by talking about mental states (mental state talk; MST). However, MST has not been sufficiently explored in deaf children with cochlear implants (CIs). This study investigated ToM and the availability of parental MST in deaf children with CIs (n = 39, Mage = 62.92, SD = 15.23) in comparison with their peers with typical hearing (TH; n = 52, Mage = 52.48, SD = 1.07). MST was measured during shared storybook reading. Parents' narratives were coded for cognitive, emotional, literal, and non-mental references. ToM was measured with a parental questionnaire. Children with CIs had lower ToM scores than their peers with TH, and their parents used more literal references during shared storybook reading. There were no significant differences in the frequencies of cognitive and emotional references between groups. Parental emotional references contributed positively to children's ToM scores, when controlling for the child's age and receptive grammar, only in the CI group. These results indicate distinctive features in the MST of parents of deaf children with CIs and highlight the role of MST in the development of ToM abilities in this group.
Subjective tinnitus is a prevalent, though heterogeneous, condition whose pathophysiological mechanisms are still under investigation. Based on animal models, changes in neurotransmission along the auditory pathway have been suggested to co-occur with tinnitus. It has not, however, been studied whether such effects can also be found in sites beyond the auditory cortex. Our MR spectroscopy study is the first to measure composite levels of glutamate and glutamine (Glx; and other central nervous system metabolites) in bilateral medial frontal and non-primary auditory temporal brain areas in tinnitus. We studied two groups of participants, with unilateral and bilateral tinnitus, and a control group without tinnitus, all three with a similar hearing profile. We found no metabolite level changes related to tinnitus status in either region of interest, except for a tendency toward an increased concentration of Glx in the left frontal lobe in people with bilateral vs. unilateral tinnitus. Slightly elevated depressive and anxiety symptoms are also shown in participants with tinnitus, as compared to healthy individuals, with the bilateral tinnitus group marginally more affected by the condition. We discuss the null effect in the temporal lobes, as well as the role of frontal brain areas in chronic tinnitus, with respect to hearing loss, attention mechanisms, and psychological well-being. We furthermore elaborate on the design-related and technical obstacles in using MR spectroscopy to elucidate the role of neurometabolites in tinnitus.
Mentalizing is a key socio-cognitive ability. Its heterogeneous structure may result from a variety of forms of mental state inference, which may be based on lower-level processing of cues encoded in the observable behavior of others, or may instead involve higher-level computations aimed at understanding another person's perspective. Here we aimed to investigate the representational content of the brain regions engaged in mentalizing. To this end, 61 healthy adults took part in an fMRI study. We explored ROI activity patterns associated with five well-recognized ToM tasks that induce either decoding of mental states from motion kinematics or belief-reasoning. Using multivariate representational similarity analysis, we examined whether these examples of lower- and higher-level forms of social inference induced common or distinct patterns of brain activity. Distinct patterns of brain activity related to decoding of mental states from motion kinematics and belief-reasoning were found in the lTPJp and the left IFG, but not in the rTPJp. This may indicate that the rTPJp supports a general mechanism for the representation of mental states. The divergent patterns of activation in the lTPJp and frontal areas likely reflect differences in the degree of involvement of the cognitive functions supporting the basic mentalizing processes engaged by the two task groups.

Lab head

Tomasz Wolak
Department
  • Bioimaging Research Center
About Tomasz Wolak
  • I have actively participated in many research projects using the fMRI technique to study sensory and cognitive processes in humans. Since 2009, I have been the head of the Bioimaging Research Center, which is equipped with a modern 3T magnetic resonance scanner. My main interests are functional magnetic resonance imaging, brain segmentation, image analysis and visualization, and language and auditory functional studies. I have worked with many clinical and scientific centers in Poland, where I introduced the fMRI technique into clinical practice. I have 20 years of experience in the field of neuroimaging, have carried out over 3,000 studies and analyses, and have participated in several research projects related to fMRI and EEG/fMRI.

Members (7)

Agnieszka Pluta
  • University of Warsaw
Katarzyna Ciesla
  • Reichman University
Hanna Cygan
  • Instytut Fizjologii i Patologii Słuchu
Karolina Golec
  • University of Warsaw
Bartosz Kochański
  • Instytut Fizjologii i Patologii Słuchu
Joanna Wysocka
  • University of Warsaw
Maciej Haman
  • Not confirmed yet
Jakub Wojciechowski
  • Not confirmed yet
Paulina Paluch
  • Not confirmed yet

Alumni (4)

Monika Lewandowska
  • Instytut Fizjologii i Patologii Słuchu
Patrycja Naumczyk
Sylwia Hyniewska
  • University of Zurich