Fig 5 - uploaded by Srikanth Damera
Neural dynamics in the concept processing network. (A) The three seeds mark the orthographic (red), superordinate (green, see Fig. 3), and basic-level tool-selective (purple, see Fig. 4) sensor groups. (B) Average ERP of all sensors that show an orthographic response (shown in red in (A)) shows significant adaptation of the ...
Source publication
A number of fMRI studies have provided support for the existence of multiple concept representations in areas of the brain such as the anterior temporal lobe (ATL) and inferior parietal lobule (IPL). However, the interaction among different conceptual representations remains unclear. To better understand the dynamics of how the brain extracts meani...
Contexts in source publication
Context 1
... human electrophysiology recordings (Cui et al., 2008; Gregoriou et al., 2009; Seth et al., 2015). In this work, we computed Granger Causality within each subject at the single-trial level between all pairs of channels in the left anterior temporal superordinate-classification, parietal basic-level tool, and left posterior orthographic sensors (see Fig. 5A). We used the BSMART toolbox (Cui et al., 2008) with a model order of 15 and a sliding temporal window of 60 ms. We only used artifact-free trials for which correct responses were given. Before computing the Granger Causality, we performed the standard preprocessing step of subtracting the temporal mean of each trial (in each channel) ...
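The analysis above (pairwise Granger causality between sensor time series, after per-trial demeaning) can be sketched with plain least squares. The BSMART toolbox itself is MATLAB, so the function below is an illustrative stand-in rather than the authors' code; the function name and the use of a single variance log-ratio (rather than BSMART's spectral decomposition or a sliding window) are simplifying assumptions.

```python
import numpy as np

def granger_causality(source, target, order=15):
    """Log-ratio Granger causality from `source` to `target` (a sketch).

    Fits two least-squares AR models for `target`: a restricted model using
    only the target's own past, and a full model that also uses the past of
    `source`. Returns ln(var_restricted / var_full); values > 0 mean the
    source's history improves prediction of the target.
    """
    n = len(target)
    # Lagged design matrix: rows are time points, columns are lags 1..order.
    lags = lambda s: np.column_stack(
        [s[order - k: n - k] for k in range(1, order + 1)])
    y = target[order:]
    X_r = lags(target)                    # restricted: target's own past
    X_f = np.hstack([X_r, lags(source)])  # full: plus the source's past
    resid_var = lambda X: np.mean(
        (y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    return np.log(resid_var(X_r) / resid_var(X_f))
```

As in the paper's preprocessing, each trial should be demeaned (`x = x - x.mean()`) before the model fit, since a nonzero mean biases the AR coefficients.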
Context 2
... representations - a key prediction of two-stage models of category learning. Specifically, we investigated whether activity at orthographically selective sensors directly modulated activity in dorsal tool-selective sensors. To do so, we first identified a high-level perceptual (orthography) selective cluster (Fig. 5A; see Methods), putatively identifying the "visual word form area", VWFA (Brem et al., Source estimation of within-tools classification shown in panels A and B. The locus of above-chance classification during the time window of classification was estimated in source space to the left parietal lobe (n = 10; α < 0.002; one-tailed p = .025). The three ...
Context 3
... Dehaene-Lambertz et al., 2018; Maurer et al., 2005) - the highest orthographically selective stage in the ventral visual pathway. This cluster showed a significant adaptation effect between 148 and 178 ms, with a peak difference at 162 ms (Fig. 5B; cluster-defining α = 0.002, two-tailed p = .034). This orthographic N170 cluster overlapped in both space and time with those reported in previous EEG studies (Maurer et al., 2005; Scholl et al., 2013). The ERP for these sensors showed an N170 response during the first word, with the negative deflection starting at 150 ms post-stimulus ...
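The cluster statistics quoted here (a cluster-defining α plus a cluster-level two-tailed p) are characteristic of a cluster-based permutation test over time. A minimal one-sample sign-flip version might look like the following sketch; it takes the cluster-defining threshold directly as a t-value rather than deriving it from α = 0.002, and clustering on |t| (rather than handling positive and negative clusters separately) is a simplification.

```python
import numpy as np

def _tvals(diff):
    """One-sample t-values across subjects at each time point."""
    n = diff.shape[0]
    return diff.mean(0) / (diff.std(0, ddof=1) / np.sqrt(n))

def _cluster_masses(abs_t, thresh):
    """Summed |t| of each contiguous run of supra-threshold time points."""
    masses, cur = [], 0.0
    for v in abs_t:
        if v > thresh:
            cur += v
        elif cur:
            masses.append(cur)
            cur = 0.0
    if cur:
        masses.append(cur)
    return masses

def cluster_perm_test(diff, t_thresh=3.5, n_perm=2000, seed=0):
    """Cluster-level p-values for a (n_subjects, n_times) difference array.

    Null distribution: the maximum cluster mass obtained under random
    per-subject sign flips of the difference waveforms.
    """
    rng = np.random.default_rng(seed)
    observed = _cluster_masses(np.abs(_tvals(diff)), t_thresh)
    null = np.empty(n_perm)
    for i in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(diff.shape[0], 1))
        null[i] = max(_cluster_masses(np.abs(_tvals(diff * flips)), t_thresh),
                      default=0.0)
    pvals = [(np.sum(null >= m) + 1) / (n_perm + 1) for m in observed]
    return observed, pvals
```

Comparing each observed cluster only against the maximum cluster mass per permutation is what controls the family-wise error rate across the whole time window, which is why no per-time-point correction is needed.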
Context 4
... 5B; cluster-defining α = 0.002, two-tailed p = .034). This orthographic N170 cluster overlapped in both space and time with those reported in previous EEG studies (Maurer et al., 2005; Scholl et al., 2013). The ERP for these sensors showed an N170 response during the first word, with the negative deflection starting at 150 ms post-stimulus onset (Fig. 5B, inset). The sensors in this cluster were used as our orthographic ...
Context 5
... bands and time points (see Materials and Methods), we found evidence that theta-frequency (but not alpha or beta) activity in orthography-selective sensors significantly modulated activity in parietal tool-selective sensors from 204 to 214 ms and from 244 to 284 ms post-stimulus onset - just prior to the onset of basic-level tool representations (Fig. 5C; two-tailed p < .05, FDR ...
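The FDR correction applied across time points here is presumably the standard Benjamini-Hochberg step-up procedure (the snippet is truncated before the method is named, so this is an assumption). A compact sketch:

```python
import numpy as np

def fdr_bh(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR: boolean mask of rejected hypotheses.

    Sort the m p-values, find the largest rank k with p_(k) <= alpha * k / m,
    and reject the k smallest p-values.
    """
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = len(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        kmax = np.max(np.nonzero(below)[0])
        reject[order[: kmax + 1]] = True
    return reject
```

Unlike a Bonferroni correction, this controls the expected fraction of false positives among the rejected time points, which keeps power reasonable when many neighboring time points carry a true effect.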
Citations
The existence of a neural representation for whole words (i.e., a lexicon) is a common feature of many models of speech processing. Prior studies have provided evidence for a visual lexicon containing representations of whole written words in an area of the ventral visual stream known as the “Visual Word Form Area” (VWFA). Similar experimental support for an auditory lexicon containing representations of spoken words has yet to be shown. Using fMRI rapid adaptation techniques, we provide evidence for an auditory lexicon in the “Auditory Word Form Area” (AWFA) in the human left anterior superior temporal gyrus that contains representations highly selective for individual spoken words. Furthermore, we show that familiarization with novel auditory words sharpens the selectivity of their representations in the AWFA. These findings reveal strong parallels in how the brain represents written and spoken words, showing convergent processing strategies across modalities in the visual and auditory ventral streams.
Highlights
Individual auditory word form areas (AWFA) were defined via an auditory localizer
The AWFA shows tuning for individual real words but not untrained pseudowords
The AWFA develops tuning for individual pseudowords after training