... In a recent proposal (Borghesani and Piazza, 2017), starting from the assumption that semantic representations of concrete words (e.g., "tomato") are points in multidimensional spaces where each dimension represents a specific characteristic of the object referred to by the word (e.g., red color, roundish shape, small size), we suggested that the different dimensions are coded both conjunctively (in convergence zones, and here demonstrated through spatial codes) and separately, in the same brain regions that respond to those features when the objects are physically presented. In support of this idea, we and others found that when subjects process words that refer to concrete objects, the implied average size or associated sound are indeed separately represented in visual or auditory areas, respectively (e.g., Borghesani et al., 2016; Borghesani et al., 2019; Coutanche, 2019; Kiefer et al., 2008). Here we looked at the whole-brain level to reveal whether and where separate representational maps of size and pitch existed. ...
... Motivated by recent models of how the different components of the meaning of words can be represented in different brain regions (Borghesani & Piazza, 2016), we applied the same distance analysis, but now considering only the distances between words along either the visual (size of the object) or the sound (pitch of the object) dimension, separately. On the left side we show the results of the analysis that focused on distances along size, revealing a significant cluster in the occipital cortex, at the level of secondary visual areas (BA18). ...
... Previous studies showed that when subjects process words that refer to common concrete objects, the implied average size or sound are separately represented in visual and auditory areas, respectively (e.g., Borghesani et al., 2016; Borghesani et al., 2019; Coutanche, 2019; Kiefer et al., 2008). These results are in line with a recent proposal (Borghesani and Piazza, 2017) suggesting that semantic representations of concrete words can be conceived as points in multidimensional spaces where different dimensions represent the different characteristics that define the meaning of a word. ...
Preprint
Full-text available
When mammals navigate in the physical environment, specific neurons such as grid cells, head-direction cells, and place cells activate to represent the navigable surface, the faced direction of movement, and the specific location the animal is visiting. Here we test the hypothesis that these codes are also activated when humans navigate abstract language-based representational spaces. Human participants learnt the meaning of novel words as arbitrary signs referring to specific artificial audiovisual objects varying in size and sound. Next, they were presented with sequences of words and asked to process them semantically while we recorded the activity of their brain using fMRI. Processing words in sequence could thus be conceived of as movement in the semantic space, enabling us to systematically search for the different types of neuronal coding schemes known to represent space during navigation. By applying a combination of representational similarity and fMRI-adaptation analyses, we found evidence of i) a grid-like code in the right postero-medial entorhinal cortex, representing the general bidimensional layout of the novel semantic space; ii) a head-direction-like code in parietal cortex and striatum, representing the faced direction of movements between concepts; and iii) a place-like code in medial prefrontal, orbitofrontal, and mid-cingulate cortices, representing the Euclidean distance between concepts. We also found evidence that the brain represents 1-dimensional distances between word meanings along individual sensory dimensions: implied size was encoded in secondary visual areas, and implied sound in Heschl's gyrus/Insula. These results reveal that mentally navigating between 2D word meanings is supported by a network of brain regions hosting a variety of spatial codes, partially overlapping with those recruited for navigation in physical space.
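The core distance-based RSA logic described in this abstract can be sketched in a few lines. Below is a toy reconstruction under my own assumptions (a hypothetical 3 x 3 grid of word coordinates and simulated voxel patterns; this is not the authors' pipeline): the model RDM is the pairwise distance between words in the 2D size x pitch space, or along one dimension at a time, and each model is rank-correlated with a neural RDM.

```python
# Minimal RSA sketch: 2D and single-dimension model RDMs vs. a neural RDM.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical coordinates of nine words in the learnt size x pitch space.
coords = np.array([[s, p] for s in range(3) for p in range(3)], dtype=float)
n_words = coords.shape[0]

# Model RDMs (condensed vectors): full 2D distance and the 1D projections.
rdm_2d = pdist(coords, metric="euclidean")
rdm_size = pdist(coords[:, [0]], metric="euclidean")   # size dimension only
rdm_pitch = pdist(coords[:, [1]], metric="euclidean")  # pitch dimension only

# Stand-in neural RDM: correlation distance between per-word voxel patterns
# (in a real analysis, one GLM beta pattern per word); here simulated noise.
patterns = rng.normal(size=(n_words, 200))
neural_rdm = pdist(patterns, metric="correlation")

# Rank-correlate each model with the neural data, as in standard RSA.
for name, model in [("2D", rdm_2d), ("size", rdm_size), ("pitch", rdm_pitch)]:
    rho, p = spearmanr(model, neural_rdm)
    print(f"{name:>5}: rho = {rho:+.3f}, p = {p:.3f}")
```

In the whole-brain analyses quoted above, the same comparison would be run within a searchlight, once with the 2D model and once per single dimension.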
... Embodied theories of knowledge suggest that conceptual retrieval is partly based on the simulation of sensorimotor experience (Binder & Desai, 2011; Barsalou, 1999). Part of the evidence supporting this claim comes from studies showing that visual areas in the occipital cortex are activated when people process the meaning of words referring to concrete entities (Borghesani et al., 2016; Fernandino et al., 2015; Saygin, McCullough, Alac, & Emmorey, 2010). For instance, the size of different animals (e.g., elephant vs. mouse) is encoded in the posterior occipital cortex (Borghesani et al., 2016); whether or not an object has a typical color is reflected in the activation of the color-sensitive area V4 (Fernandino et al., 2015); and sentences referring to motion activate the motion-sensitive area V5 (Saygin et al., 2010). ...
... However, it is still debated whether the activity in visual regions during conceptual retrieval reflects the simulation of perceptual experience (Barsalou, 2016) or, instead, abstract representations of semantic features (e.g., size, color, movement) that are largely innate (Leshinskaya & Caramazza, 2016) and that are active both during retrieval and perception (Stasenko, Garcea, Dombovy, & Mahon, 2014). ...
... Such visual features are usually encoded in the posterior portion of the occipital cortex (BA 18-BA 19, including the lingual gyrus, cuneus, V4 and V5) during visual perception of objects and scenes (Bracci & Op de Beeck, 2016; Connolly et al., 2012; Naselaris, Prenger, Kay, Oliver, & Gallant, 2009). Moreover, some of these posterior occipital regions seem to encode perceptual information (e.g., size, color, motion) also during conceptual retrieval from words (Borghesani et al., 2016; Fernandino et al., 2015; Saygin et al., 2010). ...
Article
Full-text available
If conceptual retrieval is partially based on the simulation of sensorimotor experience, people with a different sensorimotor experience, such as congenitally blind people, should retrieve concepts in a different way. However, studies investigating the neural basis of several conceptual domains (e.g., actions, objects, places) have shown a very limited impact of early visual deprivation. We approached this problem by investigating brain regions that encode the perceptual similarity of action and color concepts evoked by spoken words in sighted and congenitally blind people. First, and in line with previous findings, a contrast between action and color concepts (independently of their perceptual similarity) revealed similar activations in sighted and blind people for action concepts and partially different activations for color concepts, but outside visual areas. On the other hand, adaptation analyses based on subjective ratings of perceptual similarity showed compelling differences across groups. Perceptually similar colors and actions induced adaptation in the posterior occipital cortex of sighted people only, overlapping with regions known to represent low-level visual features of those perceptual domains. Early-blind people instead showed stronger adaptation for perceptually similar concepts in temporal regions, arguably indexing higher reliance on a lexical-semantic code to represent perceptual knowledge. Overall, our results show that visual deprivation does change the neural bases of conceptual retrieval, but mostly at specific levels of representation supporting perceptual similarity discrimination, reconciling apparently contrasting findings in the field.
... " Semantic similarity between concepts was characterized based on the number of features shared between concepts and these similarities were in turn compared to similarities between fMRI voxel patterns elicited by the concepts when processed by the participants in the scanner, a technique called "representational similarity analysis, " or RSA (Kriegeskorte et al., 2008). The anterior temporal lobe, specifically including the perirhinal cortex and the more anterior temporal pole, emerged as a particularly important area where semantic similarity between concepts matched similarity between voxel patterns elicited by conceptually processed pictures (Bruffaerts et al., 2013;Fairhall and Caramazza, 2013;Clarke and Tyler, 2014;Borghesani et al., 2016;Chen et al., 2016) and words (Bruffaerts et al., 2013;Fairhall and Caramazza, 2013;Liuzzi et al., 2015;Borghesani et al., 2016). Whereas these studies did not specifically address typicality, they support the claim that categories have stable central tendencies as the Roschian view suggests, and that these central tendencies are represented in the ATL. ...
... " Semantic similarity between concepts was characterized based on the number of features shared between concepts and these similarities were in turn compared to similarities between fMRI voxel patterns elicited by the concepts when processed by the participants in the scanner, a technique called "representational similarity analysis, " or RSA (Kriegeskorte et al., 2008). The anterior temporal lobe, specifically including the perirhinal cortex and the more anterior temporal pole, emerged as a particularly important area where semantic similarity between concepts matched similarity between voxel patterns elicited by conceptually processed pictures (Bruffaerts et al., 2013;Fairhall and Caramazza, 2013;Clarke and Tyler, 2014;Borghesani et al., 2016;Chen et al., 2016) and words (Bruffaerts et al., 2013;Fairhall and Caramazza, 2013;Liuzzi et al., 2015;Borghesani et al., 2016). Whereas these studies did not specifically address typicality, they support the claim that categories have stable central tendencies as the Roschian view suggests, and that these central tendencies are represented in the ATL. ...
... Peelen and Caramazza (2012) instructed participants to make semantic judgments about objects with orthogonal similarity patterns for shape, associated action, and associated location. Multivoxel pattern information about action and location, but not shape, was found in the anterior temporal lobes in locations similar to those observed in studies showing sensitivity to semantic feature similarity (Bruffaerts et al., 2013; Fairhall and Caramazza, 2013; Clarke and Tyler, 2014; Liuzzi et al., 2015; Borghesani et al., 2016; Chen et al., 2016). Finally, medial prefrontal cortex and retrosplenial cortex, two areas implicated in Bar's work as sensitive to object-scene associations, are also sensitive to the degree of match between an object and a background scene. ...
Article
Full-text available
Typicality effects are among the most well-studied phenomena in the study of concepts. The classical notion of typicality is that typical concepts share many features with category co-members and few features with members of contrast categories. However, this notion was challenged by evidence that typicality is highly context dependent and not always dependent on central tendency. Dieciuc and Folstein (2019) argued that there is strong evidence for both views and that the two types of typicality effects might depend on different mechanisms. A recent theoretical framework, the controlled semantic cognition (CSC) framework (Lambon Ralph et al., 2017), strongly emphasizes the classical view but includes mechanisms that could potentially account for both kinds of typicality. In contrast, the situated cognition (SitCog) framework (Barsalou, 2009b) articulates the context-dependent view. Here, we review evidence from cognitive neuroscience supporting the two frameworks. We also briefly evaluate the ability of computational models associated with the CSC to account for phenomena supporting SitCog (Rogers and McClelland, 2004). Many predictions of both frameworks are borne out by recent cognitive neuroscience evidence. While the CSC framework can at least potentially account for many of the typicality phenomena reviewed, challenges remain, especially with regard to ad hoc categories.
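The feature-overlap notion of similarity and typicality used in the RSA studies quoted above lends itself to a compact illustration. The toy feature norms below are invented for the example (not from any cited dataset): concepts are binary feature vectors, semantic similarity is the cosine over shared features, and classical typicality is a concept's mean similarity to its category co-members.

```python
# Toy feature-based similarity and typicality, in the spirit of feature-norm RSA.
import numpy as np

concepts = ["robin", "sparrow", "penguin", "dog"]
features = ["has_wings", "flies", "has_feathers", "has_fur", "barks"]
F = np.array([
    [1, 1, 1, 0, 0],   # robin
    [1, 1, 1, 0, 0],   # sparrow
    [1, 0, 1, 0, 0],   # penguin (atypical bird: does not fly)
    [0, 0, 0, 1, 1],   # dog
], dtype=float)

# Cosine similarity over feature vectors -> model similarity matrix,
# the kind of matrix that RSA compares to voxel-pattern similarities.
norms = np.linalg.norm(F, axis=1, keepdims=True)
sim = (F @ F.T) / (norms @ norms.T)

# Classical typicality of each bird = mean similarity to the other birds.
birds = [0, 1, 2]
for i in birds:
    others = [j for j in birds if j != i]
    print(f"{concepts[i]:>8} typicality: {sim[i, others].mean():.3f}")
```

Here the flightless penguin comes out as less typical than robin or sparrow, which is exactly the central-tendency notion the review contrasts with context-dependent typicality.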
... Perceptual similarity and semantic similarity effects for words were reported in adjacent but discrete regions in the left inferior frontal cortex, respectively Brodmann areas 44 and 45. For written words, pairwise word-length differences based on the number of characters can also be used (Borghesani et al., 2016), but in this case it is preferable to use monospaced fonts for stimulus presentation. ...
... Semantic models provide a new approach to test the representation of semantic information across different modalities. The left perirhinal cortex has already been proposed as a crucial region for the processing of semantic information using semantic models in controls (Borghesani et al., 2016; Bruffaerts et al., 2013b; Clarke and Tyler, 2014; Devereux et al., 2013; Liuzzi et al., 2015, 2019; Martin et al., 2018). However, no semantic similarity effect for spoken words was found in perirhinal cortex (Liuzzi et al., 2015). ...
... Using another semantic model also based on feature norms, this finding was confirmed independently by Clarke et al. (2014). The semantic effect in the left anterior temporal lobe has since been replicated several times using semantic models: a feature norm model consisting of only nonvisual features (Martin et al., 2018), Italian feature norms (Borghesani et al., 2016), and word association data (Liuzzi et al., 2019). Studies of semantic processing using electrocorticographic recordings in epilepsy patients (Chen et al., 2016; Rupp et al., 2017) and MEG have also made it possible to characterize the time course of the semantic effect. The semantic effect is maximal at 300-450 ms after stimulus onset (Chen et al., 2016), although the time course seems to vary between subjects (Rupp et al., 2017). ...
Article
Full-text available
The boundaries of our understanding of conceptual representation in the brain have been redrawn since the introduction of explicit models of semantics. These models are grounded in vast behavioural datasets acquired in healthy volunteers. Here, we review the most important techniques that have been applied to detect semantic information in neuroimaging data and argue why semantic models are possibly the most valuable addition to semantics research in recent years. First, using multivariate analysis, predictions based on patient lesion data have been confirmed during semantic processing in healthy controls. Second, this new method has given rise to new research avenues, e.g. the detection of semantic processing outside of the temporal cortex. As a future line of work, the same research strategy could be useful to study neurological conditions such as the semantic variant of primary progressive aphasia, which is characterized by pathological semantic processing.
... The findings suggest a common code in primate IT response patterns emphasizing behaviorally important categorical distinctions. Borghesani et al. (2016) utilized multivariate pattern analysis techniques, including decoding and representational similarity analysis, to analyze fMRI data from participants reading words related to animals and tools. The study identified a gradient of semantic coding along the ventral visual stream: early visual areas mainly encoded perceptual features, such as implied real-world size, while more anterior temporal regions encoded conceptual features, such as taxonomic categories. ...
... Applications by domain:
Agricultural science: Akos et al. (2021)
Biology: Dettling & Peter (2002); Yang & Wang (2003); Zhao & Karypis (2005); Guo et al. ( ); Yan et al. (2022); Alexandre et al. (2022); Riva et al. (2023)
Chemistry: Xiao et al. (2005); Livera et al. (2015); Doncheva et al. (2018); Sakamuru et al. (2021); Morishita & Kaneko (2022)
Physics: Tamarit et al. (2020); Visani et al. (2024)
Materials science: Rahnama & Sridhar (2019); Huang et al. (2020); Häse et al. (2021)
Neuroscience: Kriegeskorte et al. (2008); Borghesani et al. (2016); Nevado et al. (2021); Baenas et al. (2024)
Health sciences: Burgel et al. (2012); Vanfleteren et al. (2013); Ravishankar et al. (2013); Mamykina et al. (2016); Hose et al. ...
Preprint
Full-text available
The clustering of categorical data is a common and important task in computer science, with profound implications across a spectrum of applications. Unlike purely numerical datasets, categorical data often lack inherent ordering (as in nominal data) or have varying levels of order (as in ordinal data), thus requiring specialized methodologies for efficient organization and analysis. This review provides a comprehensive synthesis of categorical data clustering over the past twenty-five years, starting from the introduction of K-modes. It elucidates the pivotal role of categorical data clustering in diverse fields such as the health sciences, natural sciences, social sciences, education, engineering, and economics. Practical comparisons are conducted for algorithms with public implementations, highlighting distinguishing clustering methodologies and revealing the performance of recent algorithms on several benchmark categorical datasets. Finally, challenges and opportunities in the field are discussed.
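K-modes, the starting point of the review, adapts k-means to categorical attributes by swapping Euclidean distance for a count of mismatching attributes and the cluster mean for the per-attribute mode. The sketch below is an illustrative reimplementation of that basic loop (not code from the review or any cited library):

```python
# Compact K-modes: Hamming-style matching distance + per-cluster modes.
import numpy as np

def k_modes(X, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    modes = X[rng.choice(n, size=k, replace=False)].copy()
    labels = np.full(n, -1)
    for _ in range(n_iter):
        # Assign each row to the mode with the fewest mismatching attributes.
        dists = (X[:, None, :] != modes[None, :, :]).sum(axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break                          # converged: assignments stable
        labels = new_labels
        # Update each mode to the column-wise most frequent category.
        for c in range(k):
            members = X[labels == c]
            if len(members) == 0:
                continue
            for j in range(X.shape[1]):
                vals, counts = np.unique(members[:, j], return_counts=True)
                modes[c, j] = vals[counts.argmax()]
    return labels, modes

# Toy nominal data with two obvious groups.
X = np.array([
    ["red",  "round",  "small"],
    ["red",  "round",  "small"],
    ["red",  "oval",   "small"],
    ["blue", "square", "large"],
    ["blue", "square", "large"],
    ["blue", "round",  "large"],
])
labels, modes = k_modes(X, k=2)
print(labels)
print(modes)
```

Because distances and modes are computed attribute-wise, the algorithm never has to impose an artificial numeric ordering on nominal values.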
... Although previous studies have decoded linguistic representations in the left ventral occipitotemporal cortex (Borghesani et al., 2016; Fischer-Baum et al., 2017; Taylor et al., 2019; Zhao et al., 2017), they have at least three limitations. First, among studies decoding lexical representations in the left ventral occipitotemporal cortex, most defined the VWFA based on the coordinates reported in previous publications (Borghesani et al., 2016; Fischer-Baum et al., 2017; Taylor et al., 2019). The location and size of the VWFA vary across studies. ...
Article
Full-text available
As a key area in word reading, the left ventral occipitotemporal cortex has been proposed to support abstract orthographic processing, and its middle part has even been labeled the visual word form area (VWFA). Because the definition of the VWFA varies widely and the reading task differs across studies, the function of the left ventral occipitotemporal cortex in word reading is still debated: is this region specific to orthographic processing, or is it involved in an interactive framework? By using representational similarity analysis (RSA), this study examined information representation in the VWFA at the individual level and the modulatory effect of reading task. Twenty-four subjects were scanned while performing an explicit reading task (i.e., naming) and an implicit reading task (i.e., perceptual judgment). Activation analysis showed that the naming task elicited greater activation in regions related to phonological processing (e.g., the bilateral prefrontal cortex and temporoparietal cortex), while the perceptual task recruited greater activation in the visual cortex and default mode network (e.g., the bilateral middle frontal gyrus, angular gyrus, and the right middle temporal gyrus). More importantly, RSA showed that task modulated information representation in the bilateral anterior occipitotemporal cortex and the VWFA. Specifically, ROI-based RSA revealed enhanced orthographic and phonological representations in the bilateral anterior fusiform cortex and the VWFA in the naming task relative to the perceptual task. These results suggest that lexical representation in the VWFA is influenced by the demand for phonological processing, which supports the interactive account of the VWFA.
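ROI-based RSAs of this kind need model RDMs for orthography and phonology. A minimal way to build them, sketched below with toy words and invented pseudo-transcriptions (the study's actual models may differ), is an edit distance over spellings versus an edit distance over phonetic forms; words like "pint" and "mint" then come apart across the two models.

```python
# Orthographic vs. phonological model RDMs from edit distances.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr

def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two strings.
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return d[len(a), len(b)]

words = ["pint", "mint", "hint", "cough"]
phons = ["paInt", "mInt", "hInt", "kQf"]   # toy pseudo-transcriptions

def model_rdm(items):
    n = len(items)
    m = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            m[i, j] = m[j, i] = levenshtein(items[i], items[j])
    return squareform(m)                    # condensed pairwise-distance vector

orth_rdm, phon_rdm = model_rdm(words), model_rdm(phons)
# Each model RDM would be rank-correlated with the ROI's neural RDM;
# here we just check how much the two models themselves overlap.
print(spearmanr(orth_rdm, phon_rdm))
```

Comparing each model's fit to the neural RDM across tasks is then what licenses claims like "enhanced phonological representation in the naming task."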
... A gradient in the ventral stream from perceptual sensitivity to categorical or semantic sensitivity has been supported by both word reading (Borghesani et al., 2016) and semantic question tasks (Martin et al., 2018), which have shown that visual perceptual differentiation occurs in more posterior regions while categorical distinctions occur in more anterior regions as processing moves towards the anterior temporal lobe. Borghesani et al. (2016) demonstrated that although processing becomes more abstract and category sensitive in anterior regions, regions from the occipito-temporal cortex all the way back to the primary visual area of the occipital lobe are sensitive to the semantic category distinction between animals and tools. Martin et al. (2018) also found a shift from specific visual semantic distinction in the LOC to broad categorical semantic sensitivity in the anterior temporal lobe, and demonstrated that visual features of object referents of words were represented in the LOC during a semantic property verification task. ...
Preprint
Full-text available
Identifying printed words and pictures concurrently is ubiquitous in daily tasks, so it is important to consider the extent to which reading words and naming pictures share a cognitive-neurophysiological functional architecture. Two functional magnetic resonance imaging (fMRI) experiments examined whether reading along the left ventral occipitotemporal region (vOT; often referred to as the visual word form area, VWFA) yields activation that is overlapping with that for referent pictures (i.e., both conditions significant and shared, or one significantly more dominant) or unique (i.e., one condition significant, the other not), and whether picture naming along the right lateral occipital complex (LOC) yields overlapping or unique activation relative to referent words. Experiment 1 used familiar regular and exception words (to force lexical reading) and their corresponding pictures in separate naming blocks, and showed dominant activation for pictures in the LOC and shared activation in the VWFA for exception words and their corresponding pictures (regular words did not elicit significant VWFA activation). Experiment 2 controlled for visual complexity by superimposing the words and pictures and instructing participants to name either the word or the picture, and showed primarily shared activation in the VWFA and LOC regions for both word reading and picture naming, with some dominant activation for pictures in the LOC. Overall, these results highlight the importance of including exception words to force lexical reading when comparing with picture naming, and the significant shared activation in the VWFA and LOC challenges specialized models of reading or picture naming.
... This includes regions that respond more strongly to semantically richer stimuli: the angular gyrus, lateral and ventral temporal cortex, ventromedial prefrontal cortex, inferior frontal gyrus, dorsomedial prefrontal cortex and the precuneus/posterior cingulate gyrus (Binder et al., 2009). Studies employing multivariate pattern analysis (MVPA) have determined the non-perceptual sensitivity of elements of the semantic system to semantic content (Fairhall and Caramazza, 2013; Devereux et al., 2013; Clarke and Tyler, 2014; Simanova et al., 2014; Bruffaerts et al., 2013; Liuzzi et al., 2015; Borghesani et al., 2016; Martin et al., 2018). Nevertheless, all these studies adopted active semantic tasks: a naming task (Devereux et al., 2013; Clarke and Tyler, 2014), judgment of semantic consistency (Simanova et al., 2014), a property verification task (Bruffaerts et al., 2013; Liuzzi et al., 2015, 2017, 2019; Martin et al., 2018), semantic decision (Borghesani et al., 2016), or a typicality task (Fairhall and Caramazza, 2013; Liuzzi et al., 2020). ...
... Representations of word meaning have also been studied during the naturalistic presentation of narratives (Huth et al., 2016; Deniz et al., 2020). ...
Article
When we read a word or see an object, conceptual meaning is automatically accessed. However, previous research investigating non-perceptual sensitivity to semantic class has employed active tasks. In this fMRI study, we tested whether conceptual representations in regions constituting the semantic network are invoked during passive semantic access and whether these representations are modulated by the need to access deeper knowledge. Seventeen healthy subjects performed a semantically active typicality judgment task and a semantically passive phonetic decision task, in both the written and the spoken input modalities. Stimuli consisted of one hundred forty-four concepts drawn from six semantic categories. Multivariate Pattern Analysis (MVPA) revealed that the left posterior middle temporal gyrus (pMTG), posterior ventral temporal cortex (pVTC) and pars triangularis of the left inferior frontal gyrus (IFG) showed stronger sensitivity to semantic category when active rather than passive semantic access is required. Using a cross-task training/testing classifier, we determined that conceptual representations were not only active in these regions during passive semantic access but that the neural representation of these categories was common to both active and passive access. Collectively, these results show that while representations in the pMTG, pVTC and IFG are strongly modulated by active conceptual access, consistent representational patterns are present during active and passive conceptual access in these same regions.
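The cross-task training/testing logic is simple to sketch: fit a category classifier on patterns acquired during the active task and score it on patterns from the passive task; above-chance transfer implies a representation shared across tasks. The snippet below simulates that setup end to end (simulated ROI patterns and an ordinary scikit-learn classifier; the study's actual preprocessing and classifier may differ).

```python
# Cross-task decoding sketch: train on active-task patterns, test on passive.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_per_cat, n_vox, n_cat = 24, 150, 6

# Each category has a fixed "true" pattern shared by both tasks; trials add noise.
signal = rng.normal(size=(n_cat, n_vox))

def simulate(scale):
    X = np.vstack([signal[c] * scale + rng.normal(size=(n_per_cat, n_vox))
                   for c in range(n_cat)])
    y = np.repeat(np.arange(n_cat), n_per_cat)
    return X, y

X_active, y_active = simulate(scale=0.5)     # typicality-judgment task
X_passive, y_passive = simulate(scale=0.3)   # phonetic-decision task

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_active, y_active)
print(f"cross-task accuracy: {clf.score(X_passive, y_passive):.2f} "
      f"(chance = {1 / n_cat:.2f})")
```

If the passive task carried a category code in a different format, training on the active task would not transfer, which is what makes the cross-task test informative.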
... D-E: Whole-brain analyses for distances along size and pitch separately. Motivated by recent models of how the different components of the meaning of words can be represented in different brain regions (Borghesani & Piazza, 2016), we applied the same distance analysis, but now considering only the distances between words along either the visual (size of the object) or the sound (pitch of the object) dimension, separately. On the left side we show the results of the analysis that focused on distances along size, revealing a significant cluster in the occipital cortex, at the level of secondary visual areas (BA18). ...
... This finding has important implications for our understanding of how the human brain represents and compares the meaning of words. Previous studies showed that when subjects process words that refer to common concrete objects, the implied average size or sound are separately represented in visual and auditory areas, respectively (e.g., Borghesani et al., 2016; Coutanche, 2019; Kiefer et al., 2008). Borghesani and Piazza (2017) suggested that semantic representations of concrete words can be conceived as points in multidimensional spaces where different dimensions represent the different characteristics that define the meaning of a word. ...
Article
Full-text available
Relational information about items in memory is thought to be represented in our brain thanks to an internal comprehensive model, also referred to as a “cognitive map”. In the human neuroimaging literature, two signatures of bi-dimensional cognitive maps have been reported: the grid-like code and the distance-dependent code. While these kinds of representation were previously observed during spatial navigation and, more recently, during processing of perceptual stimuli, it is still an open question whether they also underlie the representation of the most basic items of language: words. Here we taught human participants the meaning of novel words as arbitrary labels for a set of audiovisual objects varying orthogonally in size and sound. The novel words were therefore conceivable as points in a navigable 2D map of meaning. While subjects performed a word comparison task, we recorded their brain activity using functional magnetic resonance imaging (fMRI). By applying a combination of representational similarity and fMRI-adaptation analyses, we found evidence of (i) a grid-like code, in the right postero-medial entorhinal cortex, representing the relative angular positions of words in the word space, and (ii) a distance-dependent code, in medial prefrontal, orbitofrontal, and mid-cingulate cortices, representing the Euclidean distance between words. Additionally, we found evidence that the brain also separately represents the single dimensions of word meaning: their implied size, encoded in visual areas, and their implied sound, in Heschl's gyrus/Insula. These results support the idea that the meaning of words, when they are organized along two dimensions, is represented in the human brain across multiple maps of different dimensionality. Significance statement: How do we represent the meaning of words and perform comparative judgements on them in our brain? According to influential theories, concepts are conceivable as points on an internal map (where distance represents similarity) that, like physical space, can be mentally navigated. Here we use fMRI to show that when humans compare newly learnt words, they recruit a grid-like and a distance code, the same types of neural codes that, in mammals, represent relations between locations in the environment and support physical navigation between them.
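The grid-like code reported here is typically tested as a six-fold (hexadirectional) modulation of the fMRI signal by the direction of movement through the space. A stripped-down sketch of that analysis follows, on simulated data and with assumptions of mine (quadrature regressors, plain least squares, no HRF or cross-validation), rather than the paper's actual pipeline:

```python
# Hexadirectional "grid code" sketch: fit cos/sin of 6*theta to a signal.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sequence of visited points in the 2D size x pitch word space.
pts = rng.uniform(0, 4, size=(200, 2))
vec = np.diff(pts, axis=0)
theta = np.arctan2(vec[:, 1], vec[:, 0])       # movement direction per step

# Simulated entorhinal response with a true grid orientation phi.
phi = np.deg2rad(15.0)
y = np.cos(6 * (theta - phi)) + rng.normal(scale=1.0, size=theta.shape)

# Estimate orientation from the two quadrature regressors:
# cos(6(theta-phi)) = cos(6 theta)cos(6 phi) + sin(6 theta)sin(6 phi).
X = np.column_stack([np.cos(6 * theta), np.sin(6 * theta), np.ones_like(theta)])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
phi_hat = np.arctan2(b[1], b[0]) / 6
amplitude = np.hypot(b[0], b[1])

print(f"estimated grid orientation: {np.rad2deg(phi_hat):.1f} deg "
      f"(true 15.0), six-fold amplitude: {amplitude:.2f}")
```

In a real analysis the orientation is estimated on one half of the data and the six-fold alignment is tested on the other half, with control checks that four-, five-, or seven-fold modulations do not fit as well.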
... Although prior research has established the left FG's pivotal role in word memory, the specific roles of its subregions have not been elaborated. Previous research has found that the anterior and middle parts of the left FG are responsible for high-level lexical processing, whereas the posterior part of the left FG is responsible for visuoperceptual processing (Borghesani et al., 2016; Bouhali et al., 2014; Lochy et al., 2018; Ludersdorfer et al., 2016; Mei et al., 2015; Seghier & Price, 2011; Vinckier et al., 2007; White et al., 2019). For example, using intracerebral recordings, Lochy et al. (2018) explored neural responses in the left ventral occipitotemporal cortex during the processing of letters and words. ...
... The results of Experiment 1 suggest that the anterior and middle fusiform regions support the successful encoding of novel logographic words. These results are in line with prior findings of the critical contribution of the left anterior and middle FG to visual word processing (Borghesani et al., 2016; Lochy et al., 2018; Ludersdorfer et al., 2016), and are also consistent with the neuronal recycling hypothesis. As discussed in the Introduction, it is not clear whether such brain-behavior associations are consistent across different writing systems. ...
Article
The left fusiform cortex has been identified as a crucial structure in visual word learning and memory. Nevertheless, the specific roles of the fusiform subregions in word memory and their consistency across different writing systems have not been elaborated. To address these questions, the present study performed two experiments using a study-test paradigm. Participants' brain activity was measured with fMRI while they memorized novel logographic words in Experiment 1 and novel alphabetic words in Experiment 2. A post-scan recognition memory test was then administered to assess memory performance. Results showed that neural responses in the left anterior and middle fusiform subregions during encoding were positively correlated with recognition memory of novel words. Moreover, the positive brain-behavior correlations in the left anterior and middle fusiform cortex were evident for both logographic and alphabetic writing systems. The present findings clarify the relationship between the left fusiform subregions and novel word memory.
... In our study, we demonstrate that VOTC reliably encodes the categorical membership of sounds from eight different categories in sighted and blind people, using a topography (Figure 1B). Previous studies using linguistic stimuli had already suggested that VOTC may actually represent categorical information in a more abstracted fashion than previously thought (Handjaras et al., 2016; Borghesani et al., 2016; Striem-Amit et al., 2018b; Peelen and Downing, 2017). However, even if the use of words is very useful for investigating pre-existing representations of concepts (Martin et al., 2017), it prevents the investigation of bottom-up perceptual processing. ...
... By orthogonalizing category membership and visual features of visual stimuli, previous studies reported a residual categorical effect in VOTC, highlighting how some of the variance in the neural data of VOTC might be explained by high-level categorical properties of the stimuli even when the contribution of basic low-level features has been controlled for (Bracci and Op de Beeck, 2016; Kaiser et al., 2016; Proklova et al., 2016). Category selectivity has also been observed in VOTC during semantic tasks when word stimuli were used, suggesting an involvement of the occipito-temporal cortex in the retrieval of category-specific conceptual information (Handjaras et al., 2016; Borghesani et al., 2016; Peelen and Downing, 2017). Moreover, previous research has shown that learning to associate semantic features (e.g., 'floats') and spatial contextual associations (e.g., 'found in gardens') with novel objects influences VOTC representations, such that objects with contextual connections exhibited higher pattern similarity after learning, in association with a reduction in pattern information about the objects' visual features (Clarke et al., 2016). ...
Article
Full-text available
Is vision necessary for the development of the categorical organization of the Ventral Occipito-Temporal Cortex (VOTC)? We used fMRI to characterize VOTC responses to eight categories presented acoustically in sighted and early blind individuals, and visually in a separate sighted group. We observed that VOTC reliably encodes sound categories in sighted and blind people using a representational structure and connectivity partially similar to the one found in vision. Sound categories were, however, more reliably encoded in the blind than the sighted group, using a representational format closer to the one found in vision. Crucially, VOTC in blind represents the categorical membership of sounds rather than their acoustic features. Our results suggest that sounds trigger categorical responses in the VOTC of congenitally blind and sighted people that partially match the topography and functional profile of the visual response, despite qualitative nuances in the categorical organization of VOTC between modalities and groups.
... By projecting the histological features of postmortem tissue into the spatial frame of the neocortex, we can understand how these axes are determined by intersecting cell-type specificity and cell-structure gradients. Here, we focus primarily on the first two major histological gradients of cortical tissue and find that the changes in SFC are related mainly to the first histological gradient (the anterior-posterior axis), which combines multiple local gradients and functional topologies, such as the ventral visual stream from the occipital pole to the temporal pole, which implements the sensory-semantic dimension of perceptual processing [79,80], and the rostrocaudal gradient in the prefrontal cortex, which describes the transition from cognitive processes supporting action preparation to cognitive processes tightly coupled with movement execution [81-84], indicating that we can further understand the abnormally high coupling of PSP in the visual network from a cellular histological perspective. The second gradient (the sensory-fugal axis) represents an overall organizational principle that combines these local processing streams, and this axis supports the separation of low- and high-order components in the assumed cortical hierarchy [38]. ...
Article
Full-text available
The anatomy of the brain supports inherent processes, fostering mental abilities and eventually facilitating adaptive behavior. Recent studies have shown that progressive supranuclear palsy (PSP) is accompanied by alterations in functional and structural networks. However, how structure and function change in a coordinated way in PSP is not clear, and the relationships between structural-functional coupling (SFC) and the gradients of hierarchical structure and cellular histology remain largely unknown. Here, we use neuroimaging data from two independent cohorts and a public histological dataset to investigate the relationships among the cellular histology, hierarchical structure, and SFC of PSP patients. We find that the SFC of the entire cortex in PSP is severely disrupted, with higher coupling in the visual network (VN). Moreover, coupling differences in PSP follow a macroscopic organizational principle from unimodal to transmodal gradients. Finally, we elucidate greater laminar differentiation in VN regions sensitive to SFC changes in PSP, which is related mainly to the higher cellular density and smaller cell size of the internal granular layer. In conclusion, our findings provide an interpretable framework for understanding SFC changes in PSP and provide new insights into the consistency of structural and functional changes in PSP with respect to hierarchical structure and cellular histology.
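Structural-functional coupling is commonly quantified region by region as the correlation between a region's structural and functional connectivity profiles. The sketch below shows that common construction on simulated matrices; it is an assumption-laden stand-in, not the paper's pipeline:

```python
# Regional structural-functional coupling (SFC) on simulated SC/FC matrices.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n = 100                                    # number of cortical parcels

# Simulated symmetric SC (e.g., streamline counts) and FC (correlations),
# with FC partly driven by SC so that coupling is positive on average.
sc = np.abs(rng.normal(size=(n, n)))
sc = (sc + sc.T) / 2
fc = np.tanh(0.5 * sc + 0.5 * rng.normal(size=(n, n)))
fc = (fc + fc.T) / 2

sfc = np.zeros(n)
for i in range(n):
    mask = np.arange(n) != i               # drop the self-connection
    sfc[i], _ = spearmanr(sc[i, mask], fc[i, mask])

print(f"mean regional SFC: {sfc.mean():.3f}")
```

Group differences in such regional SFC values, e.g. higher coupling in visual parcels of patients, are then what statements like "higher coupling in the VN" summarize.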
... Neuroimaging studies have revealed that distributed regions are coactivated in a variety of semantic tasks (e.g., narrative story comprehension (Branzi et al., 2020; Deniz et al., 2019; Huth et al., 2016), semantic judgment (Ala-Salomaki et al., 2021; Borghesani et al., 2016), typicality tasks (Fairhall & Caramazza, 2013; Liuzzi et al., 2019, 2020), naming tasks (Devereux et al., 2013), and property verification tasks (Martin et al., 2018; Bruffaerts et al., 2013; Liuzzi et al., 2019)); these regions are connected and form a unified network (the semantic network) that supports semantic processing. This network includes the bilateral anterior temporal lobe (ATL), middle temporal gyrus (MTG), fusiform gyrus, parahippocampal gyrus, angular gyrus (AG), portions of the supramarginal gyrus (SMG), dorsomedial prefrontal cortex (dmPFC), ventromedial prefrontal cortex (vmPFC), inferior frontal gyrus (IFG, mainly the pars orbitalis), posterior cingulate gyrus and precuneus (Pre) (Acunzo et al., 2022; Binder et al., 2009; Binder, 2016; Blank & Fedorenko, 2017; Humphreys et al., 2022; Patterson et al., 2007; Rogers et al., 2004; Xu et al., 2016). ...
Article
Full-text available
Semantic processing, a core of language comprehension, involves the activation of brain regions dispersed extensively across the frontal, temporal, and parietal cortices that compose the semantic network. To comprehend the functional structure of this semantic network and how it prepares for semantic processing, we investigated its intrinsic functional connectivity (FC) and the relation between this pattern and semantic processing ability in a large sample from the Human Connectome Project (HCP) dataset. We first defined a well-studied brain network for semantic processing, and then we characterized the within-network connectivity (WNC) and the between-network connectivity (BNC) within this network using a voxel-based global brain connectivity (GBC) method based on resting-state functional magnetic resonance imaging (fMRI). The results showed that 97.73% of the voxels in the semantic network displayed considerably greater WNC than BNC, demonstrating that the semantic network is a fairly encapsulated network. Moreover, multiple connector hubs in the semantic network were identified after applying the criterion of WNC > 1 SD above the mean WNC of the semantic network. More importantly, three of these connector hubs (i.e., the left anterior temporal lobe, angular gyrus, and orbital part of the inferior frontal gyrus) were reliably associated with semantic processing ability. Our findings suggest that the three identified regions use WNC as the central mechanism for supporting semantic processing and that task-independent spontaneous connectivity in the semantic network is essential for semantic processing.
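The voxel-wise global brain connectivity measure used here splits, for each voxel, its mean correlation into within-network connectivity (WNC, to voxels of the semantic network) and between-network connectivity (BNC, to all other voxels). A small simulation of that computation, with made-up time series and network labels, is given below:

```python
# WNC vs. BNC from a voxel-wise correlation matrix (simulated rest data).
import numpy as np

rng = np.random.default_rng(4)
n_vox, n_tp = 500, 200
in_network = np.zeros(n_vox, dtype=bool)
in_network[:150] = True                    # voxels of the semantic network

# Resting-state time series with extra shared signal inside the network.
shared = rng.normal(size=n_tp)
ts = rng.normal(size=(n_vox, n_tp))
ts[in_network] += 0.5 * shared

corr = np.corrcoef(ts)                     # voxel x voxel correlations
np.fill_diagonal(corr, np.nan)             # ignore self-correlations

# For each semantic-network voxel: mean r within vs. outside the network.
wnc = np.nanmean(corr[np.ix_(in_network, in_network)], axis=1)
bnc = np.nanmean(corr[np.ix_(in_network, ~in_network)], axis=1)
print(f"WNC > BNC for {(wnc > bnc).mean():.1%} of semantic-network voxels")
```

With the simulated shared signal, nearly all network voxels show WNC > BNC, which is the encapsulation property the study reports for 97.73% of semantic-network voxels.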
... A person-selective region anatomically consistent with the right FFA clustered with a region of the visual cortex more active for the scholastic knowledge domain. This latter region was centred on the calcarine sulcus and, while some studies suggest these regions encode conceptual information such as object size [39], it is possible that this cluster reflects unanticipated differences in the visual processing of stimuli associated with the scholastic domain. The second exception was connectivity between the object-selective parahippocampal gyrus and the place-selective network, which included the adjacent PPA. ...
Article
Full-text available
Our ability to know and access complex factual information has far-reaching effects, influencing our scholastic, professional and social lives. Here we employ functional MRI to assess the relationship between individual differences in semantic aptitude and both task-based activation and resting-state functional connectivity. Using psychometric and behavioural measures, we quantified the semantic and executive aptitude of individuals and had them perform a general-knowledge semantic-retrieval task (N = 41) and recorded resting-state data (N = 43). During the semantic-retrieval task, participants accessed general-knowledge facts drawn from four different knowledge domains (people, places, objects and ‘scholastic’). Individuals with greater executive capacity more strongly recruit anterior sections of prefrontal cortex (PFC) and the precuneus, and individuals with lower semantic capacity more strongly activate a posterior section of the dorsomedial PFC (dmPFC). The role of these regions in semantic processing was validated by analysis of independent resting-state data, where increased connectivity between a left anterior PFC region and the precuneus predicts higher semantic aptitude, and increased connectivity between the left anterior PFC and posterior dmPFC predicts lower semantic aptitude. The results suggest that coordination between core semantic regions in the precuneus and anterior prefrontal regions associated with executive processes supports greater semantic aptitude.
... In the case of word learning and processing, previous neuroimaging studies have revealed increased brain activation not only in high-level "semantic hubs", such as the inferior frontal cortex (Devlin et al., 2003) and the anterior temporal cortex (Mion et al., 2010), but also in early visual regions of the ventral visual stream (Pulvermüller, 2013). Moreover, a recent study by Borghesani et al. (2016) confirmed the "gradient semantic encoding" hypothesis: visuo-perceptual aspects of written words (such as real-world size) appear to be encoded primarily in posterior occipital regions, while conceptual aspects (such as semantic category) appear to be encoded primarily in anterior temporal areas. In this case, the activation of both the visual cortex (occipital region) and the "semantic hubs" (frontal region) might indicate the integrated encoding of different dimensions (perceptual and conceptual aspects) of word meaning. ...
Article
Full-text available
Individuals exhibit considerable variability in their capacity to learn and retain new information, including novel vocabulary. Prior research has established the importance of vigilance and electroencephalogram (EEG) alpha rhythm in the learning process. However, the interplay between vigilant attention, EEG alpha oscillations, and an individual's word learning ability (WLA) remains elusive. To address this knowledge gap, here we conducted two experiments with a total of 140 young and middle-aged adults who underwent resting EEG recordings prior to completing a paired-associate word learning task and a psychomotor vigilance test (PVT). The results of both experiments consistently revealed significant positive correlations between WLA and resting EEG alpha oscillations in the occipital and frontal regions. Furthermore, the association between resting EEG alpha oscillations and WLA was mediated by vigilant attention, as measured by the PVT. These findings provide compelling evidence supporting the crucial role of vigilant attention in linking EEG alpha oscillations to an individual's learning ability.
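The mediation claim here (alpha power relates to word learning through vigilant attention) is conventionally tested with the product-of-coefficients approach: path a (alpha to PVT) times path b (PVT to WLA, controlling for alpha), with a bootstrap confidence interval on a*b. Below is a bare-bones sketch of that test on simulated data; the study's exact mediation procedure is not given in this excerpt, so treat this as one standard possibility:

```python
# Product-of-coefficients mediation with a percentile bootstrap CI.
import numpy as np

rng = np.random.default_rng(5)
n = 140
alpha = rng.normal(size=n)                     # resting alpha power
pvt = 0.5 * alpha + rng.normal(size=n)         # vigilant attention (mediator)
wla = 0.4 * pvt + 0.1 * alpha + rng.normal(size=n)  # word learning ability

def indirect(idx):
    # Path a: simple-regression slope of PVT on alpha.
    a = np.cov(alpha[idx], pvt[idx], ddof=1)[0, 1] / np.var(alpha[idx], ddof=1)
    # Path b: slope of WLA on PVT, controlling for alpha.
    X = np.column_stack([np.ones(len(idx)), alpha[idx], pvt[idx]])
    beta, *_ = np.linalg.lstsq(X, wla[idx], rcond=None)
    return a * beta[2]

point = indirect(np.arange(n))
boot = [indirect(rng.integers(0, n, n)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

A confidence interval excluding zero is the usual evidence that the mediator carries part of the association.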
... Many studies have around 20 participants (Carota et al., 2020; Dong et al., 2021; Li et al., 2019; Meersmans et al., 2021; Staples & Graves, 2020; Wang et al., 2017), with a range from 9 (Anderson et al., 2016) to 51 (Guo et al., 2022). The median number of stimulus presentations was 4 (Borghesani et al., 2016; Bruffaerts et al., 2013; Liu et al., 2023; Meersmans et al., 2021), with a range from 1 (i.e., no repetitions) (Dong et al., 2021; Gao et al., 2022; Guo et al., 2022; Li et al., 2022; Staples & Graves, 2020) to 12 (Fischer-Baum et al., 2017). ...
Article
Full-text available
In studies using representational similarity analysis (RSA) of fMRI data, the reliability of the neural representational dissimilarity matrix (RDM) is a limiting factor in the ability to detect neural correlates of a model. A common strategy for boosting neural RDM reliability is to employ repeated presentations of the stimulus set across imaging runs or sessions. However, little is known about how the benefits of stimulus repetition are affected by repetition suppression, or how they compare with the benefits of increasing the number of participants. We examined the effects of these design parameters in two large data sets where participants performed a semantic decision task on visually presented words. We found that reliability gains from stimulus repetition were strongly affected by repetition suppression, both within and across scanning sessions separated by multiple weeks. The results provide new insights into these experimental design choices, particularly for item-level RSA studies of semantic cognition.
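The design trade-off the study quantifies can be previewed with a quick simulation: average more noisy repetitions per item before computing the RDM and watch split-half reliability climb. This toy version rests on my own assumptions (Gaussian item patterns, additive noise, no repetition suppression) and is only meant to make the reliability logic concrete:

```python
# Split-half reliability of a neural RDM vs. number of averaged repetitions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(6)
n_items, n_vox = 40, 120
true_pat = rng.normal(size=(n_items, n_vox))   # stable item patterns

def noisy_rdm(n_reps, noise=3.0):
    # Average n_reps noisy presentations per item, then compute the RDM.
    trials = true_pat[None] + noise * rng.normal(size=(n_reps, n_items, n_vox))
    return pdist(trials.mean(axis=0), metric="correlation")

for n_reps in (1, 2, 4, 8):
    # Reliability: rank-correlate RDMs from two independent "halves".
    r, _ = spearmanr(noisy_rdm(n_reps), noisy_rdm(n_reps))
    print(f"{n_reps} repetition(s) per half: split-half rho = {r:.2f}")
```

The paper's point is that repetition suppression makes later presentations behave differently from this additive-noise idealization, which is why the empirical gains need measuring.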
... controls following Transcranial Magnetic Stimulation (TMS) of the ATL (Woollams, 2012) showed that disruption of the ATL leads to impairments in naming tasks for more atypical concepts, and an fMRI study with healthy participants showed increased activation in the ATL with decreasing item typicality during a category verification task (Santi et al., 2016). On the other hand, studies using Representational Similarity Analysis (RSA; Kriegeskorte et al., 2008) have shown that, in the ATL region, the semantic similarity between concepts (as measured by feature norms, indexing their shared and distinctive features) matches the similarity between voxel patterns elicited by objects processed semantically (Borghesani et al., 2016; Bruffaerts et al., 2013; Chen et al., 2016; Clarke, 2020; Clarke & Tyler, 2014; Fairhall & Caramazza, 2013; Liuzzi et al., 2015; Martin et al., 2018). Taken together, these results support the idea that concepts are processed and represented in the ATL as unique complex entities according to the integration of their constituting features, both shared and distinctive ones (Bruett et al., 2020; Bruffaerts et al., 2019; Coutanche & Thompson-Schill, 2015). ...
Article
Full-text available
Concept typicality is a key semantic dimension supporting the categorical organization of items based on their features, such that typical items share more features with other members of their category than atypical items, which are more distinctive. Typicality effects manifest in better accuracy and faster response times during categorization tasks, but higher performance for atypical items in episodic memory tasks, due to their distinctiveness. At a neural level, typicality has been linked to the anterior temporal lobe (ATL) and the inferior frontal gyrus (IFG) in semantic decision tasks, but patterns of brain activity during episodic memory tasks remain to be understood. We investigated the neural correlates of typicality in semantic and episodic memory to determine the brain regions associated with semantic typicality and uncover effects arising when items are reinstated during retrieval. In an fMRI study, 26 healthy young subjects first performed a category verification task on words representing typical and atypical concepts (encoding), and then completed a recognition memory task (retrieval). In line with previous literature, we observed higher accuracy and faster response times for typical items in the category verification task, while atypical items were better recognized in the episodic memory task. During category verification, univariate analyses revealed a greater involvement of the angular gyrus for typical items and of the inferior frontal gyrus for atypical items. During the correct recognition of old items, regions belonging to the core recollection network were activated. We then compared the similarity of the representations from encoding to retrieval (encoding-retrieval similarity, ERS) using Representational Similarity Analysis. Results showed that typical items were reinstated more than atypical ones in several regions, including the left precuneus and left anterior temporal lobe (ATL). This suggests that the correct retrieval of typical items requires finer-grained processing, evidenced by greater item-specific reinstatement, which is needed to resolve their confusability with other members of the category due to their higher feature similarity. Our findings confirm the centrality of the ATL in the processing of typicality while extending it to memory retrieval.
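Encoding-retrieval similarity boils down to correlating each item's encoding pattern with its own retrieval pattern and contrasting that with the similarity to other items. A minimal simulated version (invented patterns, not the study's data) looks like this:

```python
# Encoding-retrieval similarity (ERS): same-item vs. different-item match.
import numpy as np

rng = np.random.default_rng(7)
n_items, n_vox = 30, 100
enc = rng.normal(size=(n_items, n_vox))              # encoding patterns
ret = 0.6 * enc + rng.normal(size=(n_items, n_vox))  # partial reinstatement

# Item x item encoding-retrieval correlation matrix (enc rows x ret rows).
ers = np.corrcoef(enc, ret)[:n_items, n_items:]

same = np.diag(ers).mean()                           # same-item reinstatement
diff = ers[~np.eye(n_items, dtype=bool)].mean()      # across-item baseline
print(f"same-item ERS = {same:.3f}, different-item ERS = {diff:.3f}")
```

Comparing the same-item vs. different-item gap between typical and atypical items is what licenses the claim of greater item-specific reinstatement for typical concepts.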
... The anterior-posterior axis is a key structure of cortical organization established in non-human primates and the human brain (Hagmann et al., 2008; Mesulam, 1998; Paquola et al., 2020). This axis consists of the rostrocaudal axis in the prefrontal cortex, which integrates multiple cognitive control processes, particularly action coupled with premotor processes (Badre & D'esposito, 2009; Braga et al., 2017; Nachev et al., 2008), and the ventral visual stream, which spans from the primary visual cortex to the ventral areas in the occipital and temporal cortices that implement perceptual processing (Borghesani et al., 2016; Goodale & Milner, 1992; Grill-Spector & Malach, 2004; Takemura et al., 2016). The superior-inferior axis resembles an established model of the sensory-transmodal hierarchy (Margulies et al., 2016), which extends from the sensorimotor area with higher myelination to heteromodal association areas with lower myelination. ...
Preprint
Full-text available
Autism spectrum disorder is a common neurodevelopmental condition that manifests as a disruption in sensory and social skills. Although it has been shown that the brain morphology of individuals with autism is asymmetric, how this differentially affects the structural connectome organization of each hemisphere remains under-investigated. We studied whole-brain structural connectivity-based brain asymmetry in 47 individuals with autism and 37 healthy controls using diffusion magnetic resonance imaging obtained from the Autism Brain Imaging Data Exchange initiative. By leveraging dimensionality reduction techniques, we constructed low-dimensional representations of structural connectivity and calculated their asymmetry index. We compared the asymmetry index between individuals with autism and neurotypical controls and found atypical structural connectome asymmetry in the sensory, default-mode, and limbic networks and the caudate in autism. Network communication provided topological underpinnings by demonstrating that the temporal and dorsolateral prefrontal regions showed reduced global network communication efficiency and decreased send-receive network navigation in the caudate region in individuals with autism. Finally, supervised machine learning revealed that structural connectome asymmetry is associated with communication-related autistic symptoms and nonverbal intelligence. Our findings provide insights into macroscale structural connectome alterations in autism and their topological underpinnings.
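For readers unfamiliar with asymmetry indices on low-dimensional connectivity representations, a minimal sketch follows. It uses one common convention, (L - R) normalized by the summed magnitudes, on simulated homotopic gradient loadings; the exact formula and data structures used in the study may differ.

```python
# Minimal sketch (one common convention, not necessarily the paper's exact
# formula): an asymmetry index over homotopic left/right gradient values.
import numpy as np

rng = np.random.default_rng(2)
n_regions = 100                                   # hypothetical parcellation
left = rng.standard_normal(n_regions) + 1.0       # left-hemisphere loadings
right = rng.standard_normal(n_regions) + 1.0      # homotopic right loadings

# (L - R) / (|L| + |R|): positive = leftward, negative = rightward bias.
asym = (left - right) / (np.abs(left) + np.abs(right))
print("mean asymmetry:", asym.mean().round(3))
```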
... More recently, imaging studies, often associated with computational approaches (Borghesani et al., 2016; Carota et al., 2017), have highlighted that semantic information is distributed across both modality-preferential sensorimotor and multimodal areas, the latter enabling the integration of motor and sensory information (Fernandino et al., 2016). ...
Article
Full-text available
Neuroscience research has provided evidence that semantic information is stored in a distributed brain network involved in sensorimotor and linguistic processing. More specifically, according to embodied cognition accounts, the representation of concepts is deemed to be grounded in our bodily states. For these reasons, normative measures of words should provide relevant information about the extent to which each word embeds perceptual and action properties. In the present study, we collected ratings for 959 Italian nouns and verbs from 398 volunteers, recruited via an online platform. The words were mostly taken from the Italian adaptation of the Affective Norms for English Words (ANEW). A pool of 145 verbs was added to the original set. All the words were rated on 11 sensorimotor dimensions: six perceptual modalities (vision, audition, taste, smell, touch, and interoception) and five effectors (hand-arm, foot-leg, torso, mouth, head). The new verbs were also rated on the ANEW dimensions. Results showed good reliability and consistency with previous studies. Relations between perceptual and motor dimensions are described and interpreted, along with relations between the sensorimotor and the affective dimensions. The dataset developed here represents an important novelty, as it includes different word classes, i.e., both nouns and verbs, and integrates ratings of both sensorimotor and affective dimensions, along with other psycholinguistic parameters, features that were only partially covered in previous studies.
... Previous fMRI studies have indicated that when processing word meanings there is increased activation not only in high-level "semantic hubs," such as the inferior frontal cortex (Devlin et al., 2003), the anterior temporal cortex (Mion et al., 2010), or the inferior parietal cortex (Bonner et al., 2013), but also in early visual regions of the ventral visual stream (Pulvermüller, 2013). Further, a recent study conducted by Borghesani et al. (2016) confirmed the "gradient semantic encoding" hypothesis, showing that visuo-perceptual aspects of written words (such as real-world size) appear to be encoded primarily in posterior occipital regions, while conceptual aspects (such as semantic category) appear to be encoded primarily in anterior temporal areas. In this case, the functional connectivity between the visual regions (occipital cortex) and the "semantic hubs" (across frontal and temporal cortex) might indicate the integrated encoding of different dimensions (perceptual and conceptual aspects) of word meaning. ...
Article
Full-text available
Adult language learners show distinct abilities in acquiring a new language, yet the underlying neural mechanisms remain elusive. Previous studies suggested that the resting-state brain connectome may contribute to individual differences in learning ability. Here, we recorded electroencephalography (EEG) in a large cohort of 106 healthy young adults (50 males) and examined the associations between the resting-state alpha band (8–12 Hz) connectome and individual learning ability during novel word learning, a key component of new language acquisition. Behavioral data revealed robust individual differences in performance on the novel word learning task, which correlated with performance on the language aptitude test. EEG data showed that individual resting-state alpha band coherence between occipital and frontal regions positively correlated with differential word learning performance (p = 0.001). The significant positive correlations between the resting-state occipito-frontal alpha connectome and differential word learning ability were replicated in an independent cohort of 35 healthy adults. These findings support the key role of the occipito-frontal network in novel word learning and suggest that the resting-state EEG connectome may be a reliable marker of individual ability during new language learning.
... By translating the approach previously formulated in adults (21) to typically developing adolescents in the current work, we demonstrated that the wiring space in youth overall resembles the one previously seen in adults. Indeed, the two principal dimensions of the wiring space differentiated unimodal from transmodal cortex and anterior from posterior regions, two major axes of adult macroscale cortical topography (71–75). On the other hand, we also showed how the structural networks increasingly reconfigure into those seen in adults. ...
Article
Adolescence is a time of profound changes in the physical wiring and function of the brain. Here, we analyzed structural and functional brain network development in an accelerated longitudinal cohort spanning 14 to 25 y (n = 199). Core to our work was an advanced in vivo model of cortical wiring incorporating MRI features of corticocortical proximity, microstructural similarity, and white matter tractography. Longitudinal analyses assessing age-related changes in cortical wiring identified a continued differentiation of multiple corticocortical structural networks in youth. We then assessed structure–function coupling using resting-state functional MRI measures in the same participants both via cross-sectional analysis at baseline and by studying longitudinal change between baseline and follow-up scans. At baseline, regions with more similar structural wiring were more likely to be functionally coupled. Moreover, correlating longitudinal structural wiring changes with longitudinal functional connectivity reconfigurations, we found that increased structural differentiation, particularly between sensory/unimodal and default mode networks, was reflected by reduced functional interactions. These findings provide insights into adolescent development of human brain structure and function, illustrating how structural wiring interacts with the maturation of macroscale functional hierarchies.
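The baseline structure-function coupling analysis summarized above can be approximated in a few lines: for each region, correlate its structural wiring profile with its functional connectivity profile. The snippet below is a schematic sketch on simulated matrices, not the authors' implementation.

```python
# Minimal sketch (assumed convention): region-wise structure-function
# coupling as the rank correlation between each region's structural wiring
# row and its functional connectivity row.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n = 60                                            # hypothetical regions
sc = rng.random((n, n)); sc = (sc + sc.T) / 2     # structural wiring matrix
fc = 0.5 * sc + 0.5 * rng.random((n, n)); fc = (fc + fc.T) / 2

mask = ~np.eye(n, dtype=bool)
coupling = np.array([
    spearmanr(sc[i][mask[i]], fc[i][mask[i]])[0]  # exclude self-connections
    for i in range(n)
])
print("median structure-function coupling:", np.median(coupling).round(3))
```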
... In this view, conceptual representations are perceptual in nature and depend on brain areas devoted to perceptual and motor processing (Barsalou, 1999; Binder & Desai, 2011; Gallese & Lakoff, 2005; Pulvermüller, 1999). In fact, many researchers have found that concrete words activate cortical areas that are also active when the corresponding concrete objects or experiences are physically present and being perceived, as if people were simulating the perceptual experience to understand concrete words (Borghesani et al., 2016; Martin & Chao, 2001; Saygin, McCullough, Alac, & Emmorey, 2010). Congenitally blind individuals offer a testbed for embodied cognition theories, because lack of sight should alter sensorimotor representations, such that conceptual representations should exhibit noticeable differences between sighted and congenitally blind individuals (Casasanto, 2011). ...
Article
In the property listing task (PLT), participants are asked to list properties for a concept (e.g., for the concept dog, "barks," and "is a pet" may be produced). In conceptual property norming (CPNs) studies, participants are asked to list properties for large sets of concepts. Here, we use a mathematical model of the property listing process to explore two longstanding issues: characterizing the difference between concrete and abstract concepts, and characterizing semantic knowledge in the blind versus sighted population. When we apply our mathematical model to a large CPN reporting properties listed by sighted and blind participants, the model uncovers significant differences between concrete and abstract concepts. Though we also find that blind individuals show many of the same processing differences between abstract and concrete concepts found in sighted individuals, our model shows that those differences are noticeably less pronounced than in sighted individuals. We discuss our results vis-a-vis theories attempting to characterize abstract concepts.
... This is interestingly in line with results from connectivity analyses showing that a cortical network comprising these same regions in left inferior frontal (BA 47), posterior inferior temporal, and inferior parietal cortex supports semantic processing functions specifically (e.g., Xiang et al., 2010). In particular, posterior inferior temporal cortex is known to store lexical information about conceptual features in memory (Mitchell et al., 2008; Tyler et al., 2013; Fairhall and Caramazza, 2013; Devereux et al., 2013; Carlson et al., 2014; Hagoort, 2019; Mitchell and Cusack, 2015; Ghio et al., 2016; Borghesani et al., 2016; Coutanche et al., 2016), thus supporting a semantic route to reading. ...
Preprint
Neuronal populations code similar concepts by similar activity patterns across the human brain's networks supporting language comprehension. However, it is unclear to what extent such meaning-to-symbol mapping reflects statistical distributions of symbol meanings in language use, as quantified by word co-occurrence frequencies, or, rather, experiential information thought to be necessary for grounding symbols in sensorimotor knowledge. Here we asked whether integrating distributional semantics with human judgments of grounded sensorimotor semantics better approximates the representational similarity of conceptual categories in the brain, as compared with each of these methods used separately. We examined the similarity structure of activation patterns elicited by action- and object-related concepts using multivariate representational similarity analysis (RSA) of fMRI data. The results suggested that a semantic vector integrating both sensorimotor and distributional information yields the best category discrimination at the cognitive-linguistic level, and explains the corresponding activation patterns in left posterior inferior temporal cortex. In turn, semantic vectors based on detailed visual and motor information uncovered category-specific similarity patterns in fusiform and angular gyrus for object-related concepts, and in motor cortex, left inferior frontal cortex (BA 44), and supramarginal gyrus for action-related concepts.
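A simple way to realize the feature-integration idea tested in this study is to z-score each feature space and concatenate them before building a model RDM. The code below sketches that step under hypothetical dimensions; the study's actual fusion method may differ.

```python
# Minimal sketch (illustrative assumption): fuse distributional and
# sensorimotor feature spaces by z-scoring and concatenating them, then
# build a model RDM for RSA from the combined vectors.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import zscore

rng = np.random.default_rng(4)
n_words = 50
distributional = rng.standard_normal((n_words, 300))  # e.g. word2vec-style
sensorimotor = rng.standard_normal((n_words, 11))     # e.g. modality ratings

# Standardize each feature space so neither dominates by scale alone.
combined = np.hstack([zscore(distributional, axis=0),
                      zscore(sensorimotor, axis=0)])

model_rdm = pdist(combined, metric="cosine")  # candidate model RDM for RSA
print("model RDM entries:", model_rdm.shape)
```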
... By translating this approach previously formulated in adults [21] to adolescents, we demonstrated that the wiring space in youth overall resembles the one previously seen in adults. Indeed, the two principal dimensions of the wiring space represented sensory-fugal and anterior-posterior gradients, both major axes of adult macroscale cortical topography [59–63]. On the other hand, we could obtain new insights into adolescent reconfigurations of structural networks via longitudinal analyses. ...
Preprint
Full-text available
Adolescence is a time of profound changes in the structural wiring of the brain and maturation of large-scale functional interactions. Here, we analyzed structural and functional brain network development in an accelerated longitudinal cohort spanning 14–25 years (n = 199). Core to our work was an advanced model of cortical wiring that incorporates multimodal MRI features of (i) cortico-cortical proximity, (ii) microstructural similarity, and (iii) diffusion tractography. Longitudinal analyses assessing age-related changes in cortical wiring during adolescence identified increases in cortical wiring within attention and default-mode networks, as well as between transmodal and attention, and sensory and limbic networks, indicative of a continued differentiation of cortico-cortical structural networks. Cortical wiring changes were statistically independent from age-related cortical thinning seen in the same subjects. Conversely, resting-state functional MRI analysis in the same subjects indicated an increasing segregation of sensory and transmodal systems during adolescence, with age-related reductions in their functional connectivity alongside an increase in structural wiring distance. Our findings provide new insights into adolescent brain network development, illustrating how the maturation of structural wiring interacts with the development of macroscale network function.
... The wiring space identified here captured both sensory-fugal and anterior-posterior processing streams, two core modes of cortical organisation and hierarchies established by seminal tract-tracing work in nonhuman primates [54,55]. The anterior-posterior axis combines multiple local gradients and functional topographies, such as the ventral visual stream running from the occipital pole to the anterior temporal pole that implements a sensory-semantic dimension of perceptual processing [56,57] and a rostro-caudal gradient in the prefrontal cortex that describes a transition from high-level cognitive processes supporting action preparation to those tightly coupled with motor execution [55,58–60]. The sensory-fugal axis represents an overarching organisational principle that unites these local processing streams. ...
Article
Full-text available
The vast net of fibres within and underneath the cortex is optimised to support the convergence of different levels of brain organisation. Here, we propose a novel coordinate system of the human cortex based on an advanced model of its connectivity. Our approach is inspired by seminal, but so far largely neglected models of cortico–cortical wiring established by postmortem anatomical studies and capitalises on cutting-edge in vivo neuroimaging and machine learning. The new model expands the currently prevailing diffusion magnetic resonance imaging (MRI) tractography approach by incorporation of additional features of cortical microstructure and cortico–cortical proximity. Studying several datasets and different parcellation schemes, we could show that our coordinate system robustly recapitulates established sensory-limbic and anterior–posterior dimensions of brain organisation. A series of validation experiments showed that the new wiring space reflects cortical microcircuit features (including pyramidal neuron depth and glial expression) and allowed for competitive simulations of functional connectivity and dynamics based on resting-state functional magnetic resonance imaging (rs-fMRI) and human intracranial electroencephalography (EEG) coherence. Our results advance our understanding of how cell-specific neurobiological gradients produce a hierarchical cortical wiring scheme that is concordant with increasing functional sophistication of human brain organisation. Our evaluations demonstrate the cortical wiring space bridges across scales of neural organisation and can be easily translated to single individuals.
... Indeed, recent work using electroencephalography has identified a reversal of information flow during object recall as compared with encoding (Linde-Domingo et al. 2019). Alternatively, other research has suggested a gradient within the neocortex that reflects a split of conceptual information represented anterior (or downstream) to perceptual information (Peelen and Caramazza 2012;Borghesani et al. 2016;Martin 2016). Although recent work shows highly detailed visual content within recalled memories (Bainbridge et al. 2019), it is possible recalled memories may be more abstracted and conceptual compared with their encoded representations. ...
Article
During memory recall and visual imagery, reinstatement is thought to occur as an echoing of the neural patterns during encoding. However, the precise information in these recall traces is relatively unknown, with previous work primarily investigating either broad distinctions or specific images, rarely bridging these levels of information. Using ultra-high-field (7T) functional magnetic resonance imaging with an item-based visual recall task, we conducted an in-depth comparison of encoding and recall along a spectrum of granularity, from coarse (scenes, objects) to mid (e.g., natural, manmade scenes) to fine (e.g., living room, cupcake) levels. In the scanner, participants viewed a trial-unique item, and after a distractor task, visually imagined the initial item. During encoding, we observed decodable information at all levels of granularity in category-selective visual cortex. In contrast, information during recall was primarily at the coarse level with fine-level information in some areas; there was no evidence of mid-level information. A closer look revealed segregation between voxels showing the strongest effects during encoding and those during recall, and peaks of encoding–recall similarity extended anterior to category-selective cortex. Collectively, these results suggest visual recall is not merely a reactivation of encoding patterns, displaying a different representational structure and localization from encoding, despite some overlap.
... Visual cortex is critical to our ability to recognise objects or words (Goodale & Milner, 1992; Patterson et al., 2007): visual cortex can gate meaning retrieval, since word and object recognition have both been shown to involve interactive-activation between the heteromodal hub in ATL and visual 'spoke' representations (Carreiras, Armstrong, Perea, & Frost, 2014; Clarke & Tyler, 2014; Nobre, Allison, & McCarthy, 1994; Pammer et al., 2004; Tyler et al., 2013). Visual cortex also plays a role in representing the perceptual features of concepts, and consequently allows the fully instantiated retrieval of a concept through visual imagery (Bannert & Bartels, 2013; Bergmann, Genç, Kohler, Singer, & Pearson, 2016; Borghesani et al., 2016; Dijkstra, Zeidman, Ondobaka, van Gerven, & Friston, 2017; Kan, Barsalou, Olseth Solomon, Minor, & Thompson-Schill, 2003; Mellet, Tzourio, Denis, & Mazoyer, 1998; Murphy et al., 2019; Peelen & Caramazza, 2012). These observations give rise to at least two potential explanations for the modulation of visual activation according to task knowledge. ...
Article
Full-text available
Semantic retrieval is flexible, allowing us to focus on subsets of features and associations that are relevant to the current task or context: for example, we use taxonomic relations to locate items in the supermarket (carrots are a vegetable), but thematic associations to decide which tools we need when cooking (carrot goes with peeler). We used fMRI to investigate the neural basis of this form of semantic flexibility; in particular, we asked how retrieval unfolds differently when participants have advanced knowledge of the type of link to retrieve between concepts (taxonomic or thematic). Participants performed a semantic relatedness judgement task: on half the trials, they were cued to search for a taxonomic or thematic link, while on the remaining trials, they judged relatedness without knowing which type of semantic relationship would be relevant. Left inferior frontal gyrus showed greater activation when participants knew the trial type in advance. An overlapping region showed a stronger response when the semantic relationship between the items was weaker, suggesting this structure supports both top-down and bottom-up forms of semantic control. Multivariate pattern analysis further revealed that the neural response in left inferior frontal gyrus reflects goal information related to different conceptual relationships. Top-down control specifically modulated the response in visual cortex: when the goal was unknown, there was greater deactivation to the first word, and greater activation to the second word. We conclude that top-down control of semantic retrieval is primarily achieved through the gating of task-relevant ‘spoke’ regions.
... One study, for instance, found higher activity in EVC associated with category-level target detection despite the need for upstream semantic systems to define these targets, providing evidence for these connections (Hon et al., 2009). Given that our effect was present at the superordinate level, it is likely not representative of bottom-up early visual processing of concepts (as otherwise it would also be observed at the item level), but instead likely reflects feedback mechanisms from higher-level regions (Hon et al., 2009; Luck et al., 1997), which are known to affect multi-voxel patterns, as shown by the ability to decode visual properties from associated words in EVC (Borghesani et al., 2016). Finally, the right AG was the only region to marginally predict memory on the day of the scan. ...
Article
The irregularities of the world ensure that each interaction we have with a concept is unique. In order to generalize across these unique encounters to form a high-level representation of a concept, we must draw on similarities between exemplars to form new conceptual knowledge that is maintained over a long time. Two neural similarity measures — pattern robustness and encoding-retrieval similarity — are particularly important for predicting memory outcomes. In this study, we used fMRI to measure activity patterns while people encoded and retrieved novel pairings between unfamiliar (Dutch) words and visually presented animal species. We address two underexplored questions: 1) whether neural similarity measures can predict memory outcomes, despite perceptual variability between presentations of a concept and 2) if pattern similarity measures can predict subsequent memory over a long delay (i.e., one month). Our findings indicate that pattern robustness during encoding in brain regions that include parietal and medial temporal areas is an important predictor of subsequent memory. In addition, we found significant encoding-retrieval similarity in the left ventrolateral prefrontal cortex after a month’s delay. These findings demonstrate that pattern similarity is an important predictor of memory for novel word-animal pairings even when the concept includes multiple exemplars. Importantly, we show that established predictive relationships between pattern similarity and subsequent memory do not require visually identical stimuli (i.e., are not simply due to low-level visual overlap between stimulus presentations) and are maintained over a month.
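Of the two measures named in this abstract, pattern robustness is the less standard; it can be operationalized as the mean correlation of an item's voxel pattern across its encoding repetitions. The sketch below illustrates one such operationalization on simulated data and is not claimed to match the authors' exact measure.

```python
# Minimal sketch (not the authors' exact measure): pattern robustness as the
# mean correlation of an item's voxel pattern across its encoding repetitions.
import numpy as np

rng = np.random.default_rng(5)
n_items, n_reps, n_voxels = 20, 3, 120            # hypothetical design
enc = rng.standard_normal((n_items, n_reps, n_voxels))

def robustness(item):                             # item: (n_reps, n_voxels)
    rs = np.corrcoef(item)                        # rep-by-rep correlations
    return rs[np.triu_indices(len(item), k=1)].mean()

scores = np.array([robustness(enc[i]) for i in range(n_items)])
# These per-item scores could then be related to subsequent-memory outcomes.
print("mean pattern robustness:", scores.mean().round(3))
```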
... Relationships reflected in similarity of responses can be due to shared perceptual features or cognitive factors such as agency (Connolly et al., 2012;Sha et al., 2015;Thorat et al., 2019) and action goals (Nastase et al., 2017). The relationships reflected in representational geometry differ by cortical field (Borghesani et al., 2016;Freiwald and Tsao, 2010;Guntupalli et al., 2017;Guntupalli et al., 2016;Visconti di Oleggio Castello et al., 2017), and these differences reflect processing or the disentangling of target information from confounds (DiCarlo et al., 2012;DiCarlo and Cox, 2007;Kriegeskorte and Kievit, 2013). The topographic organization of units with particular functional profiles may arise over the course of development and with varying experience (Arcaro et al., 2017). ...
Article
Full-text available
Information that is shared across brains is encoded in idiosyncratic fine-scale functional topographies. Hyperalignment captures shared information by projecting pattern vectors for neural responses and connectivities into a common, high-dimensional information space, rather than by aligning topographies in a canonical anatomical space. Individual transformation matrices project information from individual anatomical spaces into the common model information space, preserving the geometry of pairwise dissimilarities between pattern vectors, and model cortical topography as mixtures of overlapping, individual-specific topographic basis functions, rather than as contiguous functional areas. The fundamental property of brain function that is preserved across brains is information content, rather than the functional properties of local features that support that content. In this perspective, we present the conceptual framework that motivates hyperalignment, its computational underpinnings for joint modeling of a common information space and idiosyncratic cortical topographies, and discuss implications for understanding the structure of cortical functional architecture.
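The core computational step of hyperalignment, finding a transformation that maps one brain's response matrix into another's space while preserving pattern geometry, can be illustrated with an orthogonal Procrustes rotation. The snippet below shows only that single pairwise step on simulated data; full hyperalignment iterates over many subjects to derive a common model space.

```python
# Minimal sketch of a core hyperalignment building block: an orthogonal
# Procrustes rotation of one subject's response matrix onto another's.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(6)
n_samples, n_voxels = 200, 50                     # hypothetical dimensions
target = rng.standard_normal((n_samples, n_voxels))
true_rot, _ = np.linalg.qr(rng.standard_normal((n_voxels, n_voxels)))
source = target @ true_rot + 0.1 * rng.standard_normal((n_samples, n_voxels))

# An orthogonal transform preserves pairwise pattern geometry by construction.
R, _ = orthogonal_procrustes(source, target)
aligned = source @ R
print("fit improvement:",
      np.linalg.norm(source - target) - np.linalg.norm(aligned - target))
```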
... Single-trial responses are more commonly analyzed for classification analyses, but less so for RSA, in which exemplar-level responses (modeled across multiple trials featuring the same exemplar) are more commonly analyzed. Moreover, it is not uncommon for multivariate pattern analyses to be applied successfully to event-related designs with ITIs shorter than 2 s (1.7–1.9 s in Borghesani et al., 2016; 1.5 s in Bracci et al., 2017a), and ...
Article
Full-text available
A region in the posterior inferior temporal gyrus (pITG) is thought to be specialized for processing Arabic numerals, but fMRI studies that compared passive viewing of numerals to other character types (e.g., letters and novel characters) have not found evidence of numeral preference in the pITG. However, recent studies showed that the engagement of the pITG is modulated by attention and task contexts, suggesting that passive viewing paradigms may be ill-suited for examining numeral specialization in the pITG. It is possible, however, that even if the strengths of responses to different category types are similar, the distributed response patterns (i.e., neural representations) in a candidate numeral-preferring pITG region ("pITG-numerals") may reveal categorical distinctions, even during passive viewing. Using representational similarity analyses with three datasets that share the same task paradigm and stimulus sets (total N = 88), we tested whether the neural representations of digits, letters, and novel characters in pITG-numerals were organized according to visual form and/or conceptual categories (e.g., familiar versus novel, numbers versus others). Small-scale frequentist and Bayesian meta-analyses of our dataset-specific findings revealed that the organization of neural representations in pITG-numerals is unlikely to be described by differences in abstract shape, but can be described by a categorical "digits versus letters" distinction, or even a "digits versus others" distinction (suggesting greater numeral sensitivity). Evidence of greater numeral sensitivity during passive viewing suggests that pITG-numerals is likely part of a neural pathway that has been developed for the automatic processing of objects with potential numerical relevance. Given that numerals and letters do not differ categorically in terms of shape, categorical distinction in pITG-numerals during passive viewing must reflect ontogenetic differentiation of symbol set representations based on repeated usage of numbers and letters in differing task contexts.
... Attitudes and neutralizations are forms of semantic knowledge that cannot be learned through operant conditioning. Instead, semantic memory influences the mechanism of operant conditioning insofar as a person's attitudes or neutralizations affect the action plan one generates (Borghesani et al., 2016;Kalénine & Buxbaum, 2016;Quandt, Lee, & Chatterjee, 2017). Further, semantic memory also influences the mechanism of vicarious operant conditioning insofar as an individual's attitudes or neutralizations affect one's symbolic definitions or interpretations of others (Golkar, Castro, & Olsson, 2015;Golkar & Olsson, 2017) and their observed actions (Gerson, Meyer, Hunnius, & Bekkering, 2017;Hudson, Nicholson, Ellis, & Bach, 2016;Joiner, Piva, Turrin, & Chang, 2017;Yang, Rosenblau, Keifer, & Pelphrey, 2015). ...
... The relatively well-modeled voxels are typically located in the temporal, parietal, and frontal lobes. Studies show that context length [5], semantic category [2], [7], conceptual concreteness [8], and perceptual properties [9], [10] all differentially elicit semantic activity across lobes. Yet, despite an abundant literature on theoretical semantic network architecture [11], no conclusive evidence has established each cortical region's functional role in this processing. ...
Preprint
Full-text available
Word embeddings related to the paradigmatic and syntagmatic axes are applied in an fMRI encoding experiment to explore the human brain's activity patterns during story listening. This study constructs paradigmatic and syntagmatic semantic embeddings, respectively, by transforming WordNet-like knowledge bases and by subtracting paradigmatic information from a statistical word embedding. It evaluates the semantic embeddings with word-pair proximity ranking tasks and contrasts voxel encoding models trained on the two types of semantic features to reveal the brain's spatial pattern of semantic processing. Results indicate that in listening comprehension, paradigmatic and syntagmatic semantic operations both recruit the inferior (ITG) and middle temporal gyri (MTG), angular gyrus, superior parietal lobule (SPL), and inferior frontal gyrus. A non-continuous line of voxels with a predominance of paradigmatic processing is found in MTG. The ITG, middle occipital gyrus, and the surrounding primary and associative visual areas are more engaged by syntagmatic processing. The comparison of the two semantic axes' brain maps does not suggest a neuroanatomical segregation of paradigmatic and syntagmatic processing. The complex yet regular contrast pattern, running from the temporal pole along MTG to SPL, warrants further investigation.
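The voxel encoding models contrasted in this preprint follow a standard recipe: regress semantic features onto voxel responses and score held-out predictions per voxel. The following sketch shows that recipe with ridge regression on simulated data; the feature sizes and regularization strength are hypothetical.

```python
# Minimal sketch (illustrative) of a voxel-wise encoding model: ridge
# regression from semantic features to voxel responses, scored by
# held-out prediction correlation per voxel.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_trs, n_feats, n_voxels = 300, 50, 40            # hypothetical sizes
X = rng.standard_normal((n_trs, n_feats))         # semantic features per TR
W = rng.standard_normal((n_feats, n_voxels))
Y = X @ W + rng.standard_normal((n_trs, n_voxels))  # simulated BOLD

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, Y_tr)
pred = model.predict(X_te)

# Per-voxel encoding accuracy: correlation of predicted vs. observed response.
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print("mean held-out voxel correlation:", np.mean(r).round(3))
```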
... Other experiments used silent reading. During silent reading of concrete words, subcategory membership can be decoded from activity patterns in anterior occipitotemporal cortex (BA20 and BA38), lateral and anterior to BA35/36 (Borghesani, 2017; Borghesani et al., 2016). Using fMRI, Xu et al. (2018) presented 45 Chinese written words and performed a region-of-interest based RSA. ...
Article
Full-text available
Traditional neuroanatomical models of written word processing have proposed multiple parallel routes from the visual word form area to lateral temporal, inferior parietal and inferior frontal cortex. Here we hypothesize the existence of an alternative ventromedial occipitotemporal route that culminates in the left perirhinal cortex which codes for the learned association between a concrete written word and the entity it refers to. The hypothesis fits in a broader context that considers perirhinal cortex as a connector hub connecting sensory input with more widespread representations of its content. According to the hypothesis, perirhinal coding of the association between a concrete word and its referent relies on the same operational principles as the coding of paired associates by perirhinal neurons documented by electrophysiological recordings in nonhuman primates. The evidence for a role of human left perirhinal cortex in written word processing is primarily based on two sources: Direct electrophysiological recordings reveal responses to concrete written words compared to function words or nonword stimuli. Secondly, in humans, the conceptual similarity between concrete written words is reflected in the similarity of the activity patterns evoked by these words in perirhinal cortex. The hypothesis has clinical relevance: Patients with the semantic variant of primary progressive aphasia who have damage of the left perirhinal cortex among other anterior temporal regions, have surface alexia as one of their defining features, i.e. the inability to access meaning from written words. The hypothesis of an alternative, ventral occipitotemporal written word processing pathway aligns with the concept that written language processing builds upon pre-existing visual object processing mechanisms.
... The resulting neural similarity matrices depict how similar the different conditions are in terms of distributed patterns of activity. Values were Fisher r-to-z transformed, and partial correlation was then used to compute the correlation between the neural matrix and each of the predicted ones while controlling for the others (Borghesani et al., 2016; Clarke & Tyler, 2014). For each participant, we thus obtained four maps depicting the multivariate effect of (1) semantic ... (Supplementary Table 2B). ...
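The partial-correlation RSA step quoted above can be written compactly: residualize both the neural RDM and the target model RDM on the competing model RDMs, then correlate the residuals. The snippet below is a schematic version with hypothetical inputs.

```python
# Minimal sketch of partial-correlation RSA: correlate the neural RDM with
# one model RDM while regressing out competing model RDMs (all inputs
# below are hypothetical, vectorized RDM upper triangles).
import numpy as np

rng = np.random.default_rng(8)
n_pairs = 190                                     # e.g. 20 conditions -> C(20,2)
neural = rng.standard_normal(n_pairs)             # Fisher-z neural RDM
target_model = rng.standard_normal(n_pairs)       # model RDM of interest
nuisance = rng.standard_normal((n_pairs, 2))      # competing model RDMs

def residualize(y, X):
    """Residuals of y after least-squares regression on X (plus intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return y - X1 @ beta

r_partial = np.corrcoef(residualize(neural, nuisance),
                        residualize(target_model, nuisance))[0, 1]
print(f"partial correlation: {r_partial:.3f}")
```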
Article
Full-text available
Previous evidence from neuropsychological and neuroimaging studies suggests functional specialization for tools and related semantic knowledge in a left frontoparietal network. It is still debated whether these areas are involved in the representation of rudimentary movement-relevant knowledge regardless of semantic domains (animate vs. inanimate) or categories (tools vs. nontool objects). Here, we used fMRI to record brain activity while 13 volunteers performed two semantic judgment tasks on visually presented items from three different categories: animals, tools, and nontool objects. Participants had to judge two distinct semantic features: whether two items typically move in a similar way (e.g., a fan and a windmill move in circular motion) or whether they are usually found in the same environment (e.g., a seesaw and a swing are found in a playground). We investigated differences in overall activation (which areas are involved) as well as representational content (which information is encoded) across semantic features and categories. Results of voxel-wise mass univariate analysis showed that, regardless of semantic category, a dissociation emerges between processing information on prototypical location (involving the anterior temporal cortex and the angular gyrus) and movement (linked to left inferior parietal and frontal activation). Multivoxel pattern correlation analyses confirmed the representational segregation of networks encoding task- and category-related aspects of semantic processing. Taken together, these findings suggest that the left frontoparietal network is recruited to process movement properties of items (including both biological and nonbiological motion) regardless of their semantic category.
Article
Introduction: Autism spectrum disorder (ASD) is characterized by deficits in social behavior and executive function (EF), particularly in cognitive flexibility. Whether transcranial magnetic stimulation (TMS) can improve cognitive outcomes in patients with ASD remains an open question. We examined the acute effects of prefrontal TMS on cortical excitability and fluid cognition in individuals with ASD who underwent TMS for refractory major depression. Methods: We analyzed data from an open-label pilot study involving nine participants with ASD and treatment-resistant depression who received 30 sessions of accelerated theta burst stimulation of the dorsolateral prefrontal cortex, either unilaterally or bilaterally. Electroencephalography data were collected at baseline and 1, 4, and 12 weeks posttreatment and analyzed using a mixed-effects linear model to assess changes in regional cortical excitability using three models of spectral parametrization. Fluid cognition was measured using the National Institutes of Health Toolbox Cognitive Battery. Results: Prefrontal TMS led to a decrease in prefrontal cortical excitability and an increase in right temporoparietal excitability, as measured using spectral exponent analysis. This was associated with a significant improvement in the NIH Toolbox Fluid Cognition Composite score and the Dimensional Change Card Sort subtest from baseline to 12 weeks posttreatment (t = 3.79, p = 0.005, n = 9). Improvement in depressive symptomatology was significant (HDRS-17, F(3, 21) = 28.49, p < 0.001) and there was a significant correlation between cognitive improvement at week 4 and improvement in depression at week 12 (r = 0.71, p = 0.05). Conclusion: These findings link reduced prefrontal excitability in patients with ASD to improvements in cognitive flexibility. The degree to which these mechanisms can be generalized to ASD populations without Major Depressive Disorder remains a compelling question for future research.
Article
Taxonomic and thematic relations are major components of semantic representation but their neurocognitive underpinnings are still debated. We hypothesised that taxonomic relations preferentially activate parts of the anterior temporal lobe (ATL) because they rely more on colour and shape features, while thematic relations preferentially activate temporoparietal cortex (TPC) because they rely more on action and location knowledge. We first conducted an activation likelihood estimation (ALE) meta-analysis to assess evidence for neural specialisation in the existing fMRI literature (Study 1), then used a primed semantic judgement task to examine whether the two relations are primed by different feature types (Study 2). We find that taxonomic relations show minimal feature-based specialisation but preferentially activate the lingual gyrus. Thematic relations are more dependent on action and location features and preferentially engage TPC. The meta-analysis also showed that lateral ATL is preferentially engaged by thematic relations, which may reflect their greater reliance on verbal associations.
Article
Identifying printed words and pictures concurrently is ubiquitous in daily tasks, and so it is important to consider the extent to which reading words and naming pictures may share a cognitive-neurophysiological functional architecture. Two functional magnetic resonance imaging (fMRI) experiments examined whether reading along the left ventral occipitotemporal region (vOT; often referred to as a visual word form area, VWFA) has activation that is overlapping with referent pictures (i.e., both conditions significant and shared, or with one significantly more dominant) or unique (i.e., one condition significant, the other not), and whether picture naming along the right lateral occipital complex (LOC) has overlapping or unique activation relative to referent words. Experiment 1 used familiar regular and exception words (to force lexical reading) and their corresponding pictures in separate naming blocks, and showed dominant activation for pictures in the LOC, and shared activation in the VWFA for exception words and their corresponding pictures (regular words did not elicit significant VWFA activation). Experiment 2 controlled for visual complexity by superimposing the words and pictures and instructing participants to either name the word or the picture, and showed primarily shared activation in the VWFA and LOC regions for both word reading and picture naming, with some dominant activation for pictures in the LOC. Overall, these results highlight the importance of including exception words to force lexical reading when comparing to picture naming, and the significant shared activation in VWFA and LOC serves to challenge specialized models of reading or picture naming.
Article
Full-text available
Semantic representations are processed along a posterior-to-anterior gradient reflecting a shift from perceptual (e.g., it has eight legs ) to conceptual (e.g., venomous spiders are rare ) information. One critical region is the anterior temporal lobe (ATL): patients with semantic variant primary progressive aphasia (svPPA), a clinical syndrome associated with ATL neurodegeneration, manifest a deep loss of semantic knowledge. We test the hypothesis that svPPA patients perform semantic tasks by over-recruiting areas implicated in perceptual processing. We compared MEG recordings of svPPA patients and healthy controls during a categorization task. While behavioral performance did not differ, svPPA patients showed indications of greater activation over bilateral occipital cortices and superior temporal gyrus, and inconsistent engagement of frontal regions. These findings suggest a pervasive reorganization of brain networks in response to ATL neurodegeneration: the loss of this critical hub leads to a dysregulated (semantic) control system, and defective semantic representations are seemingly compensated via enhanced perceptual processing.
Article
Multivariate neuroimaging studies indicate that the brain represents word and object concepts in a format that readily generalises across stimuli. Here we investigated whether this was true for neural representations of simple events described using sentences. Participants viewed sentences describing four events in different ways. Multivariate classifiers were trained to discriminate the four events using a subset of sentences, allowing us to test generalisation to novel sentences. We found that neural patterns in a left-lateralised network of frontal, temporal and parietal regions discriminated events in a way that generalised successfully over changes in the syntactic and lexical properties of the sentences used to describe them. In contrast, decoding in visual areas was sentence-specific and failed to generalise to novel sentences. In the reverse analysis, we tested for decoding of syntactic and lexical structure, independent of the event being described. Regions displaying this coding were limited and largely fell outside the canonical semantic network. Our results indicate that a distributed neural network represents the meaning of event sentences in a way that is robust to changes in their structure and form. They suggest that the semantic system disregards the surface properties of stimuli in order to represent their underlying conceptual significance.
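The generalization test described in this abstract, training classifiers on a subset of sentences and testing on novel ones, is a cross-decoding scheme. A minimal sketch on simulated patterns follows; the design sizes and classifier choice are illustrative, not the authors'.

```python
# Minimal sketch (illustrative) of the cross-generalization test: train a
# classifier to discriminate events on one subset of sentences and test it
# on patterns from sentence variants never seen in training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n_events, n_sent_per_event, n_voxels = 4, 6, 80   # hypothetical design
event_means = rng.standard_normal((n_events, n_voxels))
X = np.repeat(event_means, n_sent_per_event, axis=0) \
    + rng.standard_normal((n_events * n_sent_per_event, n_voxels))
y = np.repeat(np.arange(n_events), n_sent_per_event)       # event labels
sentence_id = np.tile(np.arange(n_sent_per_event), n_events)

train = sentence_id < 4                           # sentences 0-3 for training
test = ~train                                     # held-out sentence variants
clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
print(f"generalization accuracy: {clf.score(X[test], y[test]):.3f}")
```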
Thesis
Full-text available
The ventral visual pathway, extending from occipital regions to anterior temporal regions, is specialized in recognizing, through the visual modality, the objects and people encountered in everyday life. Numerous functional magnetic resonance imaging studies have investigated the cerebral bases of visual recognition. However, the susceptibility of this technique to magnetic artifacts in regions of the anterior temporal lobe has led to an underestimation of the role of these regions within the ventral pathway. The aim of this thesis is to better understand the mechanisms of visual recognition within the ventral occipito-temporal cortex, and in particular to clarify the contribution of posterior and anterior temporal structures to the implementation of visual recognition mechanisms and their linking with semantic memory. To this end, we rely on a multimodal approach combining neuropsychology, fast periodic visual stimulation (FPVS), and scalp EEG and intracerebral EEG (SEEG) recordings in neurotypical and epileptic participants. We report five empirical studies in which we demonstrate that (1) patients with anterior temporal lobe epilepsy (i.e., the type of focal epilepsy most frequently concerned by an SEEG procedure) show typical performance in individual face discrimination, (2) electrical stimulation of the right anterior fusiform gyrus can lead to a transient deficit specific to face recognition, even when no naming is required, (3) discriminating familiar faces among unknown faces engages a large network of bilateral ventral structures including anterior and medial temporal regions, (4) certain structures of the left ventral anterior temporal lobe are involved in integrating a familiar face and its name into a unified representation, and (5) bilateral ventral anterior temporal regions are engaged in implementing semantic representations associated with written words. Overall, our work shows that (1) the visual recognition network is organized along the ventral visual pathway following a progressive hierarchy along the posterior-anterior axis, within which a gradual transition takes place from predominantly perceptual representations to increasingly abstract semantic representations, and (2) the regions involved in visual recognition are strongly lateralized in posterior ventral regions and become bilateral in anterior ventral temporal regions.
Preprint
Full-text available
How is conceptual knowledge organized and retrieved by the brain? Recent evidence points to the anterior temporal lobe (ATL) as a crucial semantic hub integrating both abstract and concrete conceptual features according to a dorsal-to-medial gradient. It is however unclear when this conceptual gradient emerges and how semantic information reaches the ATL during conceptual retrieval. Here we applied a multiple regression approach to magnetoencephalography responses to spoken words, combined with dimensionality reduction in concrete and abstract semantic feature spaces. Results showed that the dorsal-to-medial abstract-to-concrete ATL gradient emerges only in late stages of word processing: abstract and concrete semantic information are initially encoded in posterior temporal regions and travel along separate cortical pathways, eventually converging in the ATL. The present findings shed light on the neural dynamics of conceptual processing that shape the organization of knowledge in the anterior temporal lobe.
Article
Full-text available
Human visual cortex is partitioned into different functional areas that, from lower to higher, become increasingly selective and responsive to complex feature dimensions. Here we use a Representational Similarity Analysis (RSA) of fMRI-BOLD signals to make quantitative comparisons across LGN and multiple visual areas of the low-level stimulus information encoded in the patterns of voxel responses. Our stimulus set was picked to target the four functionally distinct subcortical channels that input visual cortex from the LGN: two achromatic sinewave stimuli that favor the responses of the high-temporal magnocellular and high-spatial parvocellular pathways, respectively, and two chromatic stimuli isolating the L/M-cone opponent and S-cone opponent pathways, respectively. Each stimulus type had three spatial extents to sample both foveal and para-central visual field. With the RSA, we compare quantitatively the response specializations for individual stimuli and combinations of stimuli in each area and how these change across visual cortex. First, our results replicate the known response preferences for motion/flicker in the dorsal visual areas. In addition, we identify two distinct gradients along the ventral visual stream. In the early visual areas (V1-V3), the strongest differential representation is for the achromatic high spatial frequency stimuli, suitable for form vision, and a very weak differentiation of chromatic versus achromatic contrast. Emerging in ventral occipital areas (V4, VO1 and VO2), however, is an increasingly strong separation of the responses to chromatic versus achromatic contrast and a decline in the high spatial frequency representation. These gradients provide new insight into how visual information is transformed across the visual cortex.
Article
Separated ventral and dorsal streams in auditory processing have been proposed to support sound identification and localization, respectively. Despite the popularity of the dual-pathway model, it remains controversial how much independence the two neural pathways enjoy and whether visual experience can influence this distinct cortical organizational scheme. In this study, representational similarity analysis (RSA) was used to explore the functional roles of distinct cortical regions that lie within either the ventral or dorsal auditory streams of sighted and early blind (EB) participants. We found functionally segregated auditory networks in both sighted and EB groups, where anterior superior temporal gyrus (aSTG) and inferior frontal junction (IFJ) were more related to sound identification, while posterior superior temporal gyrus (pSTG) and inferior parietal lobe (IPL) preferred sound localization. The findings indicated that visual experience may not influence this functional dissociation and that the human cortex may be organized according to task-specific, modality-independent principles. Meanwhile, partial overlap of spatial and non-spatial auditory information processing was observed, illustrating the existence of interaction between the two auditory streams. Furthermore, we investigated the effect of visual experience on the neural bases of auditory perception and observed cortical reorganization in EB participants, in whom the middle occipital gyrus was recruited to process auditory information. Our findings delineate the distinct cortical networks that abstractly encode sound identification and localization and confirm, from a multivariate perspective, the existence of interaction between the two streams. Moreover, the results suggest that visual experience might not impact the functional specialization of auditory regions.
Preprint
Full-text available
The vast net of fibres within and underneath the cortex is optimised to support the convergence of different levels of brain organisation. Here we propose a novel coordinate system of the human cortex based on an advanced model of its connectivity. Our approach is inspired by seminal, but so far largely neglected models of cortico-cortical wiring established by post mortem anatomical studies and capitalizes on cutting-edge neuroimaging and machine learning. The new model expands the currently prevailing diffusion MRI tractography approach by incorporation of additional features of cortical microstructure and cortico-cortical proximity. Studying several datasets, we could show that our coordinate system robustly recapitulates established sensory-limbic and anterior-posterior dimensions of brain organisation. A series of validation experiments showed that the new wiring space reflects cortical microcircuit features (including pyramidal neuron depth and glial expression) and allowed for competitive simulations of functional connectivity and dynamics across a broad range of contexts (based on resting-state fMRI, task-based fMRI, and human intracranial EEG coherence). Our results advance our understanding of how cell-specific neurobiological gradients produce a hierarchical cortical wiring scheme that is concordant with increasing functional sophistication of human brain organisation. Our evaluations demonstrate the cortical wiring space bridges across scales of neural organisation and can be easily translated to single individuals.
Article
Twenty years after Barsalou’s seminal perceptual-symbols article, embodied cognition, the notion that cognition involves simulations of sensory, motor, or affective states, has moved from an outlandish proposal to a mainstream position adopted by many researchers in the psychological and cognitive sciences (and neurosciences). Though it has generated productive work in the cognitive sciences as a whole, it has had a particularly strong impact on research into language comprehension. The view of a mental lexicon based on symbolic word representations, which are arbitrarily linked to sensory aspects of their referents, was generally accepted since the cognitive revolution in the 1950s. This has radically changed. Given the current status of embodiment as a main theory of cognition, it is somewhat surprising that a close look at the literature reveals that the debate about the nature of the processes involved in language comprehension is far from settled, and key questions remain unanswered. We present several suggestions for a productive way forward.
Article
Distributed sub-systems of the brain's semantic network have been shown to process semantics associated with visual features of objects (e.g., shape, colour) in the ventral visual processing stream, whereas semantics associated with actions are processed in the dorsal stream. Orthographic lexical processing has also been shown to occur in the ventral stream. Past research from our lab (Neudorf et al., 2019) has demonstrated a temporal (i.e., reaction time) priming advantage for object primes over action primes in the lexical decision task, consistent with ventral shared-stream processing of visual feature object semantics and orthographic lexical identification, whereby object primes produced larger priming effects than action primes. The current experiment explored this paradigm using functional magnetic resonance imaging (fMRI) and localized the potential loci of shared-stream processing to regions in the ventral stream just anterior to colour-sensitive visual area V4 cortex in the left fusiform gyrus and anterior to lexical and shape-sensitive regions in the left fusiform gyrus, as well as in cerebellar lobule VI. Action priming showed more activation than object priming in dorsal stream motion-related regions of the right parietal occipital junction, right superior occipital gyrus, and bilateral visual area V3. The fMRI activation observed in this experiment supports the theory that spatially shared-stream activation occurs in the ventral stream during object (but not action) priming of lexical identification, which is consistent with our earlier behavioural research showing that these processes are also temporally shared.
Preprint
Full-text available
The Ventral Occipito-Temporal Cortex (VOTC) shows reliable category-selective response to visual information. Do the development, topography and information content of this categorical organization depend on visual input or even visual experience? To further address this question, we used fMRI to characterize the brain responses to eight categories (4 living, 4 non-living) presented acoustically in sighted and early blind individuals, and visually in a separate sighted group. Using a combination of decoding and representational similarity analyses, we observed that VOTC reliably encodes sound categories in the sighted and blind groups, using a representational structure strikingly similar to the one found in vision. Moreover, we found that the representational connectivity between VOTC and large-scale brain networks was substantially similar across modalities and groups. Blind people, however, showed higher decoding accuracies and higher inter-subject consistency for the representation of sounds in VOTC, and the correlation between the representational structure of visual and auditory categories was almost double in the blind when compared to the sighted group. Crucially, we also demonstrate that VOTC represents the categorical membership of sounds rather than their acoustic features in both groups. Our results suggest that early visual deprivation triggers an extension of the intrinsic categorical organization of VOTC that is at least partially independent of vision.
Article
Full-text available
Wernicke (1900, as cited in G. H. Eggert, 1977) suggested that semantic knowledge arises from the interaction of perceptual representations of objects and words. The authors present a parallel distributed processing implementation of this theory, in which semantic representations emerge from mechanisms that acquire the mappings between visual representations of objects and their verbal descriptions. To test the theory, they trained the model to associate names, verbal descriptions, and visual representations of objects. When its inputs and outputs are constructed to capture aspects of structure apparent in attribute-norming experiments, the model provides an intuitive account of semantic task performance. The authors then used the model to understand the structure of impaired performance in patients with selective and progressive impairments of conceptual knowledge. Data from 4 well-known semantic tasks revealed consistent patterns that find a ready explanation in the model. The relationship between the model and related theories of semantic representation is discussed.
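The original implementation is an interactive parallel distributed processing network; as a loose, non-authoritative stand-in, a small feedforward regressor can illustrate the core idea of learning mappings between visual representations and verbal descriptions, with semantic structure emerging in an intermediate layer. All sizes and data below are invented toy values.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Invented toy setup: 40 objects, each with a 30-unit "visual" input and a
# 20-feature verbal description vector (e.g., from attribute norming).
rng = np.random.default_rng(1)
visual_reps = rng.standard_normal((40, 30))
verbal_attrs = (rng.random((40, 20)) > 0.5).astype(float)

# A single hidden layer stands in for the intermediating "semantic" layer
# from which structured representations emerge during learning.
net = MLPRegressor(hidden_layer_sizes=(15,), max_iter=5000, random_state=0)
net.fit(visual_reps, verbal_attrs)

# The hidden-layer activations (recomputed from the learned weights, with
# the default ReLU nonlinearity) are the model's emergent representations.
hidden = np.maximum(0, visual_reps @ net.coefs_[0] + net.intercepts_[0])
print(hidden.shape)  # (40, 15)
```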
Article
Full-text available
Science has changed many of our dearly held and commonsensical (but incorrect) beliefs. For example, few still believe the world is flat, and few still believe the sun orbits the earth. Few still believe humans are unrelated to the rest of the animal kingdom, and soon few will believe human thinking is computer-like. Instead, as with all animals, our thoughts are based on bodily experiences, and our thoughts and behaviors are controlled by bodily and neural systems of perception, action, and emotion interacting with the physical and social environments. We are embodied; nothing more. Embodied cognition is about cognition formatted in sensorimotor experience, and sensorimotor systems make those thoughts dynamic. Even processes that seem abstract, such as language comprehension and goal understanding, are embodied. Thus, embodied cognition is not limited to one type of thought or another: it is cognition.
Article
Full-text available
The thesis of embodied cognition has developed as an alternative to the view that cognition is mediated, at least in part, by symbolic representations. A useful testing ground for the embodied cognition hypothesis is the representation of concepts. An embodied view of concept representation argues that concepts are represented in a modality-specific format. I argue that questions about representational format are tractable only in the context of explicit hypotheses about how information spreads among conceptual representations and sensorimotor systems. When reasonable alternatives to the embodied cognition hypothesis are clearly defined, the available evidence does not distinguish between the embodied cognition hypothesis and those alternatives. Furthermore, I argue, the available data that are theoretically constraining indicate that concepts are more than just sensory and motor content. As such, the embodied/nonembodied debate is either largely resolved or at a point where the embodied and nonembodied approaches are no longer coherently distinct theories. This situation merits a reconsideration of what the available evidence can tell us about the structure of the conceptual system. I suggest that it is the independence of thought from perception and action that makes human cognition special, and that independence is made possible by the representational distinction between concepts and sensorimotor representations.
Article
Full-text available
While major advances have been made in uncovering the neural processes underlying perceptual representations, our grasp of how the brain gives rise to conceptual knowledge remains relatively poor. Recent work has provided strong evidence that concepts rely, at least in part, on the same sensory and motor neural systems through which they were acquired, but it is still unclear whether the neural code for concept representation uses information about sensory-motor features to discriminate between concepts. In the present study, we investigate this question by asking whether an encoding model based on five semantic attributes directly related to sensory-motor experience (sound, color, visual motion, shape, and manipulation) can successfully predict patterns of brain activation elicited by individual lexical concepts. We collected ratings on the relevance of these five attributes to the meaning of 820 words, and used these ratings as predictors in a multiple regression model of the fMRI signal associated with the words in a separate group of participants. The five resulting activation maps were then combined by linear summation to predict the distributed activation pattern elicited by a novel set of 80 test words. The encoding model predicted the activation patterns elicited by the test words significantly better than chance. As expected, prediction was successful for concrete but not for abstract concepts. Comparisons between encoding models based on different combinations of attributes indicate that all five attributes contribute to the representation of concrete concepts. Consistent with embodied theories of semantics, these results show, for the first time, that the distributed activation pattern associated with a concept combines information about different sensory-motor attributes according to their respective relevance. Future research should investigate how additional features of phenomenal experience contribute to the neural representation of conceptual knowledge.
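Under placeholder data and hypothetical shapes, the attribute-based encoding analysis described above reduces to one regression per voxel with the five ratings as predictors, followed by prediction of held-out words as a weighted (linear) summation of the five attribute maps. A minimal sketch:

```python
import numpy as np

# Hypothetical shapes: 740 training words rated on 5 attributes (sound,
# color, visual motion, shape, manipulation) and their fMRI patterns.
rng = np.random.default_rng(2)
n_train, n_test, n_vox = 740, 80, 1000
ratings_train = rng.random((n_train, 5))
patterns_train = rng.standard_normal((n_train, n_vox))

# One least-squares solve fits a regression per voxel: each voxel's
# response is modeled as a weighted sum of the five attribute ratings.
X = np.column_stack([np.ones(n_train), ratings_train])       # add intercept
betas, *_ = np.linalg.lstsq(X, patterns_train, rcond=None)   # (6, n_vox)

# Predict held-out words by linear summation of the five attribute maps,
# weighted by each test word's ratings; compare to observed patterns.
ratings_test = rng.random((n_test, 5))
X_test = np.column_stack([np.ones(n_test), ratings_test])
predicted_patterns = X_test @ betas                          # (n_test, n_vox)
print(predicted_patterns.shape)
```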
Article
Full-text available
Auditory cortex is the first cortical region of the human brain to process sounds. However, it has recently been shown that its neurons also fire in the absence of direct sensory input, during memory maintenance and imagery. This has commonly been taken to reflect neural coding of the same acoustic information as during the perception of sound. However, the results of the current study suggest that the type of information encoded in auditory cortex is highly flexible. During perception and memory maintenance, neural activity patterns are stimulus specific, reflecting individual sound properties. Auditory imagery of the same sounds evokes similar overall activity in auditory cortex as perception. However, during imagery abstracted, categorical information is encoded in the neural patterns, particularly when individuals are experiencing more vivid imagery. This highlights the necessity to move beyond traditional “brain mapping” inference in human neuroimaging, which assumes common regional activation implies similar mental representations.
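The perception-to-imagery comparison reported here is, at its core, a cross-decoding analysis: train a classifier on patterns evoked by heard sounds and test it on patterns evoked by imagined sounds. A minimal sketch on synthetic data, with all dimensions hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic auditory-cortex patterns; labels code sound category (0-2).
rng = np.random.default_rng(3)
X_perception = rng.standard_normal((120, 300))   # trials x voxels, heard
y_perception = rng.integers(0, 3, 120)
X_imagery = rng.standard_normal((60, 300))       # trials x voxels, imagined
y_imagery = rng.integers(0, 3, 60)

# Train on perception, test on imagery: above-chance transfer at the
# category level (but not at the item level) would indicate that imagery
# re-instantiates abstracted, categorical rather than stimulus-specific codes.
clf = LogisticRegression(max_iter=1000).fit(X_perception, y_perception)
print("category cross-decoding accuracy:", clf.score(X_imagery, y_imagery))
```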
Article
Full-text available
Semantic memory is a crucial higher cortical function that codes the meaning of objects and words; when it is impaired after neurological damage, patients are left with significant disability. Investigations of semantic dementia have implicated the anterior temporal lobe (ATL) region, in general, as crucial for multimodal semantic memory. The potentially crucial role of the ventral ATL subregion has been emphasized by recent functional neuroimaging studies, but the necessity of this precise area has not been selectively tested. The implantation of subdural electrode grids over this subregion, for the presurgical assessment of patients with partial epilepsy or brain tumor, offers the dual yet rare opportunities to record cortical local field potentials while participants complete semantic tasks and to stimulate the functionally identified regions in the same participants to evaluate the necessity of these areas in semantic processing. Across 6 patients, and utilizing a variety of semantic assessments, we evaluated and confirmed that the anterior fusiform/inferior temporal gyrus is crucial in multimodal, receptive and expressive semantic processing.
Article
Full-text available
Recent multi-voxel pattern classification (MVPC) studies have shown that in early visual cortex patterns of brain activity generated during mental imagery are similar to patterns of activity generated during perception. This finding implies that low-level visual features (e.g., space, spatial frequency, and orientation) are encoded during mental imagery. However, the specific hypothesis that low-level visual features are encoded during mental imagery is difficult to directly test using MVPC. The difficulty is especially acute when considering the representation of complex, multi-object scenes that can evoke multiple sources of variation that are distinct from low-level visual features. Therefore, we used a voxel-wise modeling and decoding approach to directly test the hypothesis that low-level visual features are encoded in activity generated during mental imagery of complex scenes. Using fMRI measurements of cortical activity evoked by viewing photographs, we constructed voxel-wise encoding models of tuning to low-level visual features. We also measured activity as subjects imagined previously memorized works of art. We then used the encoding models to determine if putative low-level visual features encoded in this activity could pick out the imagined artwork from among thousands of other randomly selected images. We show that mental images can be accurately identified in this way; moreover, mental image identification accuracy depends upon the degree of tuning to low-level visual features in the voxels selected for decoding. These results directly confirm the hypothesis that low-level visual features are encoded during mental imagery of complex scenes. Our work also points to novel forms of brain-machine interaction: we provide a proof-of-concept demonstration of an internet image search guided by mental imagery.
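The identification step described above (picking one imagined artwork out of thousands of candidates) amounts to selecting the candidate image whose model-predicted activity pattern best correlates with the measured activity. A toy sketch, with the target injected so the expected answer is known:

```python
import numpy as np

# Toy setup: an encoding model has already produced a predicted pattern for
# each of 1,000 candidate images; `measured` is the activity recorded while
# the subject imagined candidate 123 (injected here so the answer is known).
rng = np.random.default_rng(4)
predicted = rng.standard_normal((1000, 400))           # candidates x voxels
measured = predicted[123] + rng.standard_normal(400)   # noisy target pattern

# Identify the imagined image as the candidate whose predicted pattern
# correlates best (Pearson) with the measured activity.
z_pred = (predicted - predicted.mean(1, keepdims=True)) / predicted.std(1, keepdims=True)
z_meas = (measured - measured.mean()) / measured.std()
correlations = z_pred @ z_meas / measured.size
print("identified candidate:", int(np.argmax(correlations)))  # expect 123
```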
Article
Full-text available
To create subjective experience, our brain must translate physical stimulus input by incorporating prior knowledge and expectations. For example, we perceive color and not wavelength information, and this in part depends on our past experience with colored objects (Hansen et al. 2006; Mitterer and de Ruiter 2008). Here, we investigated the influence of object knowledge on the neural substrates underlying subjective color vision. In a functional magnetic resonance imaging experiment, human subjects viewed a color that lay midway between red and green (ambiguous with respect to its distance from red and green) presented on either typical red (e.g., tomato), typical green (e.g., clover), or semantically meaningless (nonsense) objects. Using decoding techniques, we could predict whether subjects viewed the ambiguous color on typical red or typical green objects based on the neural responses to veridical red and green. This shift of neural response for the ambiguous color did not occur for nonsense objects. The modulation of neural responses was observed in visual areas (V3, V4, VO1, lateral occipital complex) involved in color and object processing, as well as frontal areas. This demonstrates that object memory influences wavelength information relatively early in the human visual system to produce subjective color vision.
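The cross-classification logic here (train on veridical colours, test on the physically identical ambiguous colour in different object contexts) can be sketched as follows; all data and shapes are placeholders, not the study's:

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder V4/VO1 patterns: veridical red/green trials for training and
# ambiguous-colour trials (on typical-red vs typical-green objects) for test.
rng = np.random.default_rng(5)
X_veridical = rng.standard_normal((80, 200))
y_veridical = np.repeat([0, 1], 40)               # 0 = red, 1 = green
X_ambig_on_red_obj = rng.standard_normal((30, 200))
X_ambig_on_green_obj = rng.standard_normal((30, 200))

# Train on veridical colours, then ask how often the physically identical
# ambiguous colour is classified as "red" depending on the object context.
clf = SVC(kernel="linear").fit(X_veridical, y_veridical)
print("P(red | red object):  ", np.mean(clf.predict(X_ambig_on_red_obj) == 0))
print("P(red | green object):", np.mean(clf.predict(X_ambig_on_green_obj) == 0))
```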
Article
Full-text available
Major theories for explaining the organization of semantic memory in the human brain are premised on the often-observed dichotomous dissociation between living and nonliving objects. Evidence from neuroimaging has been interpreted to suggest that this distinction is reflected in the functional topography of the ventral vision pathway as lateral-to-medial activation gradients. Recently, we observed that similar activation gradients also reflect differences among living stimuli consistent with the semantic dimension of graded animacy. Here, we address whether the salient dichotomous distinction between living and nonliving objects is actually reflected in observable measured brain activity or whether previous observations of a dichotomous dissociation were the illusory result of stimulus sampling biases. Using fMRI, we measured neural responses while participants viewed 10 animal species with high to low animacy and two inanimate categories. Representational similarity analysis of the activity in ventral vision cortex revealed a main axis of variation with high-animacy species maximally different from artifacts and with the least animate species closest to artifacts. Although the associated functional topography mirrored activation gradients observed for animate–inanimate contrasts, we found no evidence for a dichotomous dissociation. We conclude that a central organizing principle of human object vision corresponds to the graded psychological property of animacy with no clear distinction between living and nonliving stimuli. The lack of evidence for a dichotomous dissociation in the measured brain activity challenges theories based on this premise.
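The graded-versus-dichotomous question maps naturally onto a comparison of two model RDMs against the neural RDM. In the sketch below the "neural" data are random, so neither model should fit; on real data, a better fit for the graded model would support an animacy continuum. Ranks and dimensions are invented:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Invented animacy ranks for 10 species (high to low) plus two inanimate
# categories; the "neural" RDM is random here, purely for illustration.
animacy = np.array([10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, 0], dtype=float)
rng = np.random.default_rng(6)
neural_rdm = pdist(rng.standard_normal((12, 200)), metric="correlation")

# Graded model: dissimilarity grows with distance along the animacy ranks.
graded_rdm = pdist(animacy[:, None], metric="euclidean")
# Dichotomous model: only the animate/inanimate boundary matters.
dichotomous_rdm = pdist((animacy > 0).astype(float)[:, None], metric="euclidean")

for name, model_rdm in [("graded", graded_rdm), ("dichotomous", dichotomous_rdm)]:
    rho, _ = spearmanr(model_rdm, neural_rdm)
    print(f"{name} model fit: rho = {rho:.2f}")
```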
Article
Full-text available
Neuroimaging studies have revealed strong selectivity for object categories in high-level regions of the human visual system. However, it is unknown whether this selectivity is truly based on object category, or whether it reflects tuning for low-level features that are common to images from a particular category. To address this issue, we measured the neural response to different object categories across the ventral visual pathway. Each object category elicited a distinct neural pattern of response. Next, we compared the patterns of neural response between object categories. We found a strong positive correlation between the neural patterns and the underlying low-level image properties. Importantly, this correlation was still evident when the within-category correlations were removed from the analysis. Next, we asked whether basic image properties could also explain variation in the pattern of response to different exemplars from one object category (faces). A significant correlation was also evident between the similarity of neural patterns of response and the low-level properties of different faces, particularly in regions associated with face processing. These results suggest that the appearance of category-selective regions at this coarse scale of representation may be explained by the systematic convergence of responses to low-level features that are characteristic of each category.
Article
Full-text available
Category-specificity has been demonstrated in the human posterior ventral temporal cortex for a variety of object categories. Although object representations within the ventral visual pathway must be sufficiently rich and complex to support the recognition of individual objects, little is known about how specific objects are represented. Here, we used representational similarity analysis to determine what different kinds of object information are reflected in fMRI activation patterns and uncover the relationship between categorical and object-specific semantic representations. Our results show a gradient of informational specificity along the ventral stream from representations of image-based visual properties in early visual cortex, to categorical representations in the posterior ventral stream. A key finding showed that object-specific semantic information is uniquely represented in the perirhinal cortex, which was also increasingly engaged for objects that are more semantically confusable. These findings suggest a key role for the perirhinal cortex in representing and processing object-specific semantic information that is more critical for highly confusable objects. Our findings extend current distributed models by showing coarse dissociations between objects in posterior ventral cortex, and fine-grained distinctions between objects supported by the anterior medial temporal lobes, including the perirhinal cortex, which serve to integrate complex object information.
Article
Full-text available
Conceptual knowledge reflects our multi-modal 'semantic database'. As such, it brings meaning to all verbal and non-verbal stimuli, is the foundation for verbal and non-verbal expression and provides the basis for computing appropriate semantic generalizations. Multiple disciplines (e.g. philosophy, cognitive science, cognitive neuroscience and behavioural neurology) have striven to answer the questions of how concepts are formed, how they are represented in the brain and how they break down differentially in various neurological patient groups. A long-standing and prominent hypothesis is that concepts are distilled from our multi-modal verbal and non-verbal experience such that sensation in one modality (e.g. the smell of an apple) not only activates the intramodality long-term knowledge, but also reactivates the relevant intermodality information about that item (i.e. all the things you know about and can do with an apple). This multi-modal view of conceptualization fits with contemporary functional neuroimaging studies that observe systematic variation of activation across different modality-specific association regions dependent on the conceptual category or type of information. A second vein of interdisciplinary work argues, however, that even a smorgasbord of multi-modal features is insufficient to build coherent, generalizable concepts. Instead, an additional process or intermediate representation is required. Recent multidisciplinary work, which combines neuropsychology, neuroscience and computational models, offers evidence that conceptualization follows from a combination of modality-specific sources of information plus a transmodal 'hub' representational system that is supported primarily by regions within the anterior temporal lobe, bilaterally.
Article
Full-text available
Understanding the meanings of words and objects requires the activation of underlying conceptual representations. Semantic representations are often assumed to be coded such that meaning is evoked regardless of the input modality. However, the extent to which meaning is coded in modality-independent or amodal systems remains controversial. We address this issue in a human fMRI study investigating the neural processing of concepts, presented separately as written words and pictures. Activation maps for each individual word and picture were used as input for searchlight-based multivoxel pattern analyses. Representational similarity analysis was used to identify regions correlating with low-level visual models of the words and objects and the semantic category structure common to both. Common semantic category effects for both modalities were found in a left-lateralized network, including left posterior middle temporal gyrus (LpMTG), left angular gyrus, and left intraparietal sulcus (LIPS), in addition to object- and word-specific semantic processing in ventral temporal cortex and more anterior MTG, respectively. To explore differences in representational content across regions and modalities, we developed novel data-driven analyses, based on k-means clustering of searchlight dissimilarity matrices and seeded correlation analysis. These revealed subtle differences in the representations in semantic-sensitive regions, with representations in LIPS being relatively invariant to stimulus modality and representations in LpMTG being uncorrelated across modality. These results suggest that, although both LpMTG and LIPS are involved in semantic processing, only the functional role of LIPS is the same regardless of the visual input, whereas the functional role of LpMTG differs for words and objects.
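The data-driven clustering step described here (k-means over searchlight dissimilarity matrices) can be sketched by vectorizing each local RDM and clustering those vectors, so that searchlights with similar representational geometry group together regardless of their location. Counts and data below are synthetic:

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.cluster import KMeans

# Synthetic stand-in: for each of 2,000 searchlight centres, an RDM over
# 16 concepts, vectorized to its condensed upper triangle (120 values).
rng = np.random.default_rng(7)
searchlight_rdms = np.stack([
    pdist(rng.standard_normal((16, 50)), metric="correlation")
    for _ in range(2000)
])  # shape: (n_searchlights, 120)

# Cluster searchlights by the shape of their representational geometry;
# members of one cluster carry similar information wherever they sit.
cluster_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(searchlight_rdms)
print(np.bincount(cluster_labels))
```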
Article
Full-text available
How verbal and nonverbal visuoperceptual input connects to semantic knowledge is a core question in visual and cognitive neuroscience, with significant clinical ramifications. In an event-related functional magnetic resonance imaging (fMRI) experiment we determined how cosine similarity between fMRI response patterns to concrete words and pictures reflects semantic clustering and semantic distances between the represented entities within a single category. Semantic clustering and semantic distances between 24 animate entities were derived from a concept-feature matrix based on feature generation by >1000 subjects. In the main fMRI study, 19 human subjects performed a property verification task with written words and pictures and a low-level control task. The univariate contrast between the semantic and the control task yielded extensive bilateral occipitotemporal activation from posterior cingulate to anteromedial temporal cortex. Entities belonging to a same semantic cluster elicited more similar fMRI activity patterns in left occipitotemporal cortex. When words and pictures were analyzed separately, the effect reached significance only for words. The semantic similarity effect for words was localized to left perirhinal cortex. According to a representational similarity analysis of left perirhinal responses, semantic distances between entities correlated inversely with cosine similarities between fMRI response patterns to written words. An independent replication study in 16 novel subjects confirmed these novel findings. Semantic similarity is reflected by similarity of functional topography at a fine-grained level in left perirhinal cortex. The word specificity excludes perceptually driven confounds as an explanation and is likely to be task dependent.
Article
Full-text available
Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects.
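A whole-brain searchlight decoding analysis of the kind mentioned in this abstract can be assembled with nilearn; that library choice is our assumption, since the abstract does not name its software, and the tiny synthetic volume below (with signal injected in one corner) exists only to make the sketch runnable:

```python
import numpy as np
import nibabel as nib
from nilearn.decoding import SearchLight
from sklearn.model_selection import KFold

# Synthetic stand-in data: 40 "trials" of 10x10x10 volumes plus a whole-
# volume mask; a real analysis would load preprocessed single-trial images.
rng = np.random.default_rng(8)
data = rng.standard_normal((10, 10, 10, 40)).astype("float32")
labels = np.tile([0, 1], 20)
data[:5, :5, :5, labels == 1] += 1.0  # inject class signal in one corner
affine = np.eye(4)
func_img = nib.Nifti1Image(data, affine)
mask_img = nib.Nifti1Image(np.ones((10, 10, 10), dtype="uint8"), affine)

# Fit a linear classifier inside a small sphere centred on every voxel;
# the resulting score map shows where local patterns discriminate classes.
searchlight = SearchLight(mask_img, radius=2.0, estimator="svc",
                          cv=KFold(n_splits=5), n_jobs=1)
searchlight.fit(func_img, labels)
print("peak cross-validated accuracy:", searchlight.scores_.max())
```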
Article
Full-text available
How brain structures and neuronal circuits mechanistically underpin symbolic meaning has recently been elucidated by neuroimaging, neuropsychological, and neurocomputational research. Modality-specific 'embodied' mechanisms anchored in sensorimotor systems appear to be relevant, as are 'disembodied' mechanisms in multimodal areas. In this paper, four semantic mechanisms are proposed and spelt out at the level of neuronal circuits: referential semantics, which establishes links between symbols and the objects and actions they are used to speak about; combinatorial semantics, which enables the learning of symbolic meaning from context; emotional-affective semantics, which establishes links between signs and internal states of the body; and abstraction mechanisms for generalizing over a range of instances of semantic meaning. Referential, combinatorial, emotional-affective, and abstract semantics are complementary mechanisms, each necessary for processing meaning in mind and brain.
Article
Full-text available
In the ventral visual pathway, early visual areas encode light patterns on the retina in terms of image properties, for example, edges and color, whereas higher areas encode visual information in terms of objects and categories. At what point does semantic knowledge, as instantiated in human language, emerge? We examined this question by studying whether semantic similarity in language relates to the brain's organization of object representations in inferior temporal cortex (ITC), an area of the brain at the crux of several proposals describing how the brain might represent conceptual knowledge. Semantic relationships among words can be viewed as a geometrical structure with some pairs of words close in their meaning (e.g., man and boy) and other pairs more distant (e.g., man and tomato). ITC's representation of objects similarly can be viewed as a complex structure with some pairs of stimuli evoking similar patterns of activation (e.g., man and boy) and other pairs evoking very different patterns (e.g., man and tomato). In this study, we examined whether the geometry of visual object representations in ITC bears a correspondence to the geometry of semantic relationships between word labels used to describe the objects. We compared ITC's representation to semantic structure, evaluated by explicit ratings of semantic similarity and by five computational measures of semantic similarity. We show that the representational geometry of ITC—but not of earlier visual areas (V1)—is reflected both in explicit behavioral ratings of semantic similarity and also in measures of semantic similarity derived from word usage patterns in natural language. Our findings show that patterns of brain activity in ITC not only reflect the organization of visual information into objects but also represent objects in a format compatible with conceptual thought and language.
Article
Full-text available
Interaction with everyday objects requires the representation of conceptual object properties, such as where and how an object is used. What are the neural mechanisms that support this knowledge? While research on semantic dementia has provided evidence for a critical role of the anterior temporal lobes (ATLs) in object knowledge, fMRI studies using univariate analysis have primarily implicated regions outside the ATL. In the present human fMRI study we used multivoxel pattern analysis to test whether activity patterns in ATLs carry information about conceptual object properties. Participants viewed objects that differed on two dimensions: where the object is typically found (in the kitchen or the garage) and how the object is commonly used (with a rotate or a squeeze movement). Anatomical region-of-interest analyses covering the ventral visual stream revealed that information about the location and action dimensions increased from posterior to anterior ventral temporal cortex, peaking in the temporal pole. Whole-brain multivoxel searchlight analysis confirmed these results, revealing highly significant and regionally specific information about the location and action dimensions in the anterior temporal lobes bilaterally. In contrast to conceptual object properties, perceptual and low-level visual properties of the objects were reflected in activity patterns in posterior lateral occipitotemporal cortex and occipital cortex, respectively. These results provide fMRI evidence that object representations in the anterior temporal lobes are abstracted away from perceptual properties, categorizing objects in semantically meaningful groups to support conceptual object knowledge.
Article
Full-text available
Two experiments provide evidence that information about the real-life size of objects is elicited by nouns. A priming paradigm was used with a category membership verification task. The results showed that targets were responded to faster when preceded by a same-size prime, and that large entities were processed faster than small ones. Overall, our results significantly extend previous work on perceptual information elicited by concepts (e.g., Zwaan & Yaxley, 2004) and, in particular, on size information (e.g., Rubinstein & Henik, 2002) by means of a size-unrelated paradigm.
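Behaviourally, such priming effects are typically tested with a paired comparison of participant-level mean reaction times for same-size versus different-size primes. A minimal sketch with invented RT values:

```python
import numpy as np
from scipy.stats import ttest_rel

# Invented per-participant mean RTs (ms) in the category-verification task,
# split by whether the prime implied the same or a different real-life size
# as the target noun.
rt_same_size = np.array([612, 598, 634, 580, 605, 621, 590, 615], float)
rt_diff_size = np.array([641, 622, 655, 603, 630, 648, 611, 637], float)

# A paired t-test over participants tests the size-priming effect:
# faster responses after same-size primes.
t, p = ttest_rel(rt_same_size, rt_diff_size)
print(f"priming effect: {np.mean(rt_diff_size - rt_same_size):.1f} ms, "
      f"t({len(rt_same_size) - 1}) = {t:.2f}, p = {p:.4f}")
```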