
Multivariate Pattern Analysis Reveals Category-Related Organization of Semantic Representations in Anterior Temporal Cortex

Abstract

The neural substrates of semantic representation have been the subject of much controversy. The study of semantic representations is complicated by the difficulty of disentangling perceptual and semantic influences on neural activity, as well as of identifying stimulus-driven, "bottom-up" semantic selectivity unconfounded by top-down, task-related modulations. To address these challenges, we trained human subjects to associate novel pseudowords with various animal and tool categories. To decode the semantic representations of these trained pseudowords (TPWs), we used multivariate pattern classification of fMRI data acquired while subjects performed a semantic oddball detection task. Crucially, the classifier was trained and tested on disjoint sets of TPWs, so that it had to use the semantic information from the training set to correctly classify the test set. Animal and tool TPWs were successfully decoded from fMRI activity in spatially distinct subregions of the medial left anterior temporal lobe (LATL). In addition, tools (but not animals) were successfully decoded from activity in the left inferior parietal lobule. The tool-selective LATL subregion showed greater functional connectivity with the left inferior parietal lobule and ventral premotor cortex, indicating that each LATL subregion exhibits a distinct pattern of connectivity. Our findings demonstrate a category-selective organization of semantic representations in LATL into spatially distinct subregions, continuing the lateral-medial segregation of activation previously observed in posterior temporal cortex in response to images of animals and tools, respectively. Together, our results provide evidence for segregated processing hierarchies for different classes of objects and for the existence of multiple, category-specific semantic networks in the brain. Significance statement: The location and specificity of semantic representations in the brain are still widely debated.
We trained human participants to associate specific pseudowords with various animal and tool categories, and used multivariate pattern classification of fMRI data to decode the semantic representations of the trained pseudowords. We found that: (1) animal and tool information was organized in category-selective subregions of medial left anterior temporal lobe (LATL); (2) tools, but not animals, were encoded in left inferior parietal lobe; and (3) LATL subregions exhibited distinct patterns of functional connectivity with category-related regions across cortex. Our findings suggest that semantic knowledge in LATL is organized in category-related subregions, providing evidence for the existence of multiple, category-specific semantic representations in the brain.
... For example, the anterior aspects of the superior temporal gyrus have been associated with social and emotional processes (Olson et al., 2007; Zahn et al., 2007; Simmons et al., 2010; Mellem et al., 2016; Wang et al., 2017), while anterior regions of the fusiform gyrus have been associated with representing abstract properties of objects (Peelen and Caramazza, 2012; Malone et al., 2016), and the temporal poles have been linked to representing and naming unique entities (Grabowski et al., 2001; Damasio et al., 2004; Mesulam et al., 2013). In addition, it has been argued that the anterior region of the fusiform gyri acts as a domain-general hub for all semantic knowledge (Mion et al., 2010; Hoffman et al., 2014; Binney et al., 2016). ...
... Our parcellation of the ATL into many distinct functional regions is consistent with prior task-based studies that report a wide range of cognitive functions localized to the ATL (e.g., Grabowski et al., 2001; Olson et al., 2007; Patterson et al., 2007; Simmons et al., 2010; Peelen and Caramazza, 2012; Mesulam et al., 2013; Malone et al., 2016; Mellem et al., 2016; Wang et al., 2017). However, while many of these studies report peak coordinates for a particular cognitive function, they all ultimately refer to the functional region as "the ATL". ...
... However, when we directly compared the connectivity patterns of a parcel that overlaps the anterior fusiform gyrus with another that overlaps the parahippocampal gyrus, we found that the anterior fusiform gyrus is connected to face-selective cortex, including the FFA and OFA in the ventral face stream (Duchaine and Yovel, 2015) and regions in the social cognitive network (Gotts et al., 2012), rather than to diverse functional and sensory systems across the brain (i.e., the spokes), while the parcel overlapping the parahippocampal gyrus is preferentially connected to regions that are involved in scene processing, including the PPA, OPA, and RSC (Epstein and Baker, 2019). Another recent study reported a lateral-to-medial organization in a region overlapping the left anterior fusiform gyrus, with successful decoding of animals in a lateral region and successful decoding of tools in a medial region (Malone et al., 2016). Thus, rather than the anterior fusiform gyrus being a domain-general semantic hub, patterns of connectivity between it and the rest of the brain suggest that it is more likely a continuation of the domain-specific ventral-visual pathway that begins in early visual cortex and continues into occipitotemporal cortex, potentially terminating in the temporal poles (Anzellotti et al., 2011; Skipper et al., 2011; Kravitz et al., 2013). ...
Article
Even though the anterior temporal lobe (ATL) comprises several anatomical and functional subdivisions, it is often reduced to a homogeneous theoretical entity, such as a domain-general convergence zone, or "hub", for semantic information. Methodological limitations are largely to blame for the imprecise mapping of function to structure in the ATL. There are two major obstacles to using fMRI to identify the precise functional organization of the ATL: the difficult choice of stimuli and tasks to activate, and dissociate, specific regions within the ATL; and poor signal quality due to magnetic field distortions near the sinuses. To circumvent these difficulties, we developed a data-driven parcellation routine using resting-state fMRI data (24 females, 64 males) acquired using a sequence that was optimized to enhance signal in the ATL. Focusing on patterns of functional connectivity between each ATL voxel and the rest of the brain, we found that the ATL comprises at least 34 distinct functional parcels that are arranged into bands along the lateral and ventral cortical surfaces, extending from the posterior temporal lobes into the temporal poles. In addition, the anterior region of the fusiform gyrus, most often cited as the location of the semantic hub, was found to be part of a domain-specific network associated with face and social processing, rather than a domain-general semantic hub. These findings offer a fine-grained functional map of the ATL and an initial step towards using more precise language to describe the locations of functional responses in this heterogeneous region of human cortex.
... Similar models have been proposed to underlie the reinstatement and instantiation of categories, including cognitive models such as the Token Model of category instantiation (Anderson & Bower, 1973) and neural models such as the hub-and-spoke model of semantic representation (Xi, Li, Gao, He, & Tang, 2019; Lambon Ralph, Jefferies, Patterson, & Rogers, 2017; Malone, Glezer, Kim, Jiang, & Riesenhuber, 2016). In the Token Model, incoming information is evaluated against a category token, or target, and a decision is made whether a match exists. ...
... In the Token Model, incoming information is evaluated against a category token, or target, and a decision is made whether a match exists. The hub-and-spoke model posits that the ATLs act as a transmodal hub that integrates all category-relevant information represented across the brain (Xi et al., 2019; Lambon Ralph et al., 2017; Malone et al., 2016). Similarly, models of schema processing highlight the vMPFC as the transmodal hub integrating context-related information. ...
Article
Prior knowledge, such as schemas or semantic categories, influences our interpretation of stimulus information. For this to transpire, prior knowledge must first be reinstated and then instantiated by being applied to incoming stimuli. Previous neuropsychological models implicate the ventromedial prefrontal cortex (vMPFC) in mediating these functions for schemas and the anterior/lateral temporal lobes and related structures for categories. vMPFC, however, may also affect processing of semantic category information. Here, the putative differential role of the vMPFC in the reinstatement and instantiation of schemas and semantic categories was examined by probing network-level oscillatory dynamics. Patients with vMPFC damage (n = 11) and healthy controls (n = 13) were instructed to classify words according to a given schema or category, while electroencephalography was recorded. As reinstatement is a preparatory process, we focused on oscillations occurring 500 msec prior to stimulus presentation. As instantiation occurs at stimulus presentation, we focused on oscillations occurring between stimulus presentation and 1000 msec poststimulus. We found that reinstatement was associated with prestimulus, theta and alpha desynchrony between vMPFC and the posterior parietal cortex for schemas, and between lateral temporal lobe and inferotemporal cortex for categories. Damage to the vMPFC influenced both schemas and categories, but patients with damage to the subcallosal vMPFC showed schema-specific deficits. Instantiation showed similar oscillatory patterns in the poststimulus time frame, but in the alpha and beta frequency bands. Taken together, these findings highlight distinct but partially overlapping neural mechanisms implicated in schema- and category-mediated processing.
... Second-order RSA was performed using a searchlight approach; semantic RSMs (i.e. the word2vec-based RSM and ELMo-based RSM) were compared with neural pattern similarity matrices (brain-based RSM) to test what semantic information was represented in different brain regions, see Fig. 2b. Neural pattern similarity was estimated for cubic regions of interest (ROIs) containing 125 voxels surrounding a central voxel, as many previous studies examining semantic representation used this approach successfully (Fairhall and Caramazza 2013; Malone et al. 2016; Stolier and Freeman 2016; Leshinskaya et al. 2017; Wang et al. 2017; Viganò and Piazza 2020). In each of these ROIs, we compared patterns of brain activity to derive a neural RSM from the pairwise Pearson correlations of each pair of trials. ...
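The second-order RSA step quoted above can be sketched in a few lines: build a model RSM from embedding vectors, build a neural RSM from voxel patterns, then rank-correlate their lower triangles. This is a toy illustration with synthetic data standing in for word2vec vectors and searchlight patterns:

```python
# Sketch of second-order RSA: correlate a model-based representational
# similarity matrix (e.g. from word embeddings) with a neural RSM
# computed from voxel patterns in an ROI. Synthetic data throughout;
# 'model_vectors' stands in for word2vec/ELMo embeddings.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_items, n_dims, n_voxels = 12, 20, 125  # e.g. a 125-voxel searchlight cube

model_vectors = rng.normal(size=(n_items, n_dims))
# Make neural patterns a noisy linear readout of the model space,
# so the two RSMs should agree well above chance.
readout = rng.normal(size=(n_dims, n_voxels))
neural_patterns = model_vectors @ readout + rng.normal(0, 1, (n_items, n_voxels))

def rsm(patterns):
    """First-order RSM: pairwise Pearson correlations across items."""
    return np.corrcoef(patterns)

# Compare the lower triangles of the two RSMs (the second-order step).
tri = np.tril_indices(n_items, k=-1)
model_sim = rsm(model_vectors)[tri]
neural_sim = rsm(neural_patterns)[tri]

rho, p = spearmanr(model_sim, neural_sim)
print(f"model-neural RSM correlation: rho={rho:.2f}, p={p:.3g}")
```

In a real searchlight analysis this correlation would be recomputed at every cube center and the resulting rho map tested across subjects; only the single-ROI computation is shown here.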
Article
How concepts are coded in the brain is a core issue in cognitive neuroscience. Studies have focused on how individual concepts are processed, but the way in which conceptual representation changes to suit the context is unclear. We parametrically manipulated the association strength between words, presented in pairs one word at a time using a slow event-related fMRI design. We combined representational similarity analysis and computational linguistics to probe the neurocomputational content of these trials. Individual word meaning was maintained in supramarginal gyrus (associated with verbal short-term memory) when items were judged to be unrelated, but not when a linking context was retrieved. Context-dependent meaning was instead represented in left lateral prefrontal gyrus (associated with controlled retrieval), angular gyrus, and ventral temporal lobe (regions associated with integrative aspects of memory). Analyses of informational connectivity, examining the similarity of activation patterns across trials between sites, showed that control network regions had more similar multivariate responses across trials when association strength was weak, reflecting a common controlled retrieval state when the task required more unusual associations. These findings indicate that semantic control and representational sites amplify contextually relevant meanings in trials judged to be related.
... The ATC is considered an extended part of the limbic system 41, including the OFC, and indeed has a strong connection with the OFC 42,43. A variety of cognitive functions, such as semantic memory 44,45, social cognition 46, and emotion 47,48, are associated with the ATC. Of these, the involvement in emotional processing is shared with the OFC 49, which also showed relatively strong genetic effects in our results (Fig. 5a). ...
Preprint
Natural sensory inputs in everyday situations induce unique experiences that vary between individuals, even when inputs are identical. This experiential uniqueness stems from the representations of sensory signals in each brain. We investigated whether genetic factors control individual differences in sensory representations in the brain by studying the brain representations of natural audiovisual signals in twin-pairs. We measured the brain response to natural movies in twins using functional magnetic resonance imaging and quantified the genetic influence on the multivoxel-pattern similarity of movie clip representations between each twin. The whole-brain analysis revealed a genetic influence on the multivoxel-pattern similarity in widespread brain regions, which included the occipitotemporal sensory cortices as well as the frontoparietal association cortices and subcortical structures. Our findings suggest that genetic factors exhibit an effect on natural audiovisual signaling by controlling audiovisual representations in the brain.
... Nevertheless, in line with our results, two recent studies using tool pictures also found that tool manipulation-related information can be decoded from the ATC (74,75). Clarifying the specific roles of the regions identified here will be an important next step in understanding how the brain achieves complex tool use and is well suited for connectivity approaches (76,77,78). ...
Preprint
Most neuroimaging experiments that investigate how tools and their actions are represented in the brain use visual paradigms where tools or hands are displayed as 2D images and no real movements are performed. These studies discovered selective visual responses in occipito-temporal and parietal cortices for viewing pictures of hands or tools, which are assumed to reflect action processing, but this has rarely been directly investigated. Here, we examined the responses of independently visually defined category-selective brain areas when participants grasped 3D tools. Using real action fMRI and multi-voxel pattern analysis, we found that grasp typicality representations (i.e., whether a tool is being grasped appropriately for use) were decodable from hand-selective areas in occipito-temporal and parietal cortices, but not from tool-, object-, or body-selective areas, even if partially overlapping. Importantly, these effects were exclusive for actions with tools, but not for biomechanically matched actions with control nontools. In addition, decoding of grasp typicality was significantly higher in hand- than tool-selective parietal regions. Notably, grasp typicality representations were automatically evoked even when there was no requirement for tool use and participants were naïve to object category (tools vs. nontools). Finding a specificity for typical tool grasping in hand-, rather than tool-, selective regions challenges the long-standing assumption that brain activation for viewing tool images reflects sensorimotor processing linked to tool manipulation. Instead, our results show that typicality representations for tool grasping are automatically evoked in visual regions specialised for representing the human hand, the brain's primary tool for interacting with the world. Significance statement: The unique ability of humans to manufacture and use tools is unsurpassed across the animal kingdom, with tool use considered a defining feature of our species.
Most neuroscientific studies that investigate the brain mechanisms that support tool use, record brain activity while people simply view images of tools or hands and not when people perform actual hand movements with tools. Here we show that specific areas of the human visual system that preferentially process hands automatically encode how to appropriately grasp 3D tools, even when no actual tool use is required. These findings suggest that visual areas optimized for processing hands represent fundamental aspects of tool grasping in humans, such as which side they should be grasped for correct manipulation.
Article
The existence of a neural representation for whole words (i.e., a lexicon) is a common feature of many models of speech processing. Prior studies have provided evidence for a visual lexicon containing representations of whole written words in an area of the ventral visual stream known as the “Visual Word Form Area” (VWFA). Similar experimental support for an auditory lexicon containing representations of spoken words has yet to be shown. Using fMRI rapid adaptation techniques, we provide evidence for an auditory lexicon in the “Auditory Word Form Area” (AWFA) in the human left anterior superior temporal gyrus that contains representations highly selective for individual spoken words. Furthermore, we show that familiarization with novel auditory words sharpens the selectivity of their representations in the AWFA. These findings reveal strong parallels in how the brain represents written and spoken words, showing convergent processing strategies across modalities in the visual and auditory ventral streams.
Article
A large body of literature supports theories positing a distributed, perceptually grounded semantic memory system. Prominent models have assumed distributed features are integrated into networks using either shallow or deep hierarchies. Previous behavioural tests of modality effects in shallow and deep hierarchies inspired by, but not implemented in, connectionist models support deep hierarchy architectures. We behaviourally replicate and model speeded dual feature verification in a sample of general-purpose modality-specific computational models of semantic memory trained on feature production norms for 541 concepts. The cross-modal advantage in semantic processing shown behaviourally and in simulations supports hierarchically organised distributed models of semantic memory and provides novel insight into the division of labour in these models. Analyses of the emergent model structure suggest animacy distinctions arise from the self-organisation of statistical co-occurrences among multisensory features but weakly among unisensory features. These findings suggest a privileged role of the multisensory convergence area for category representation.
Article
Confidence in a memory should have a neural basis, such as traces of stored memories. However, because false memories were never actually stored, the neural basis of false memory confidence remains unclear. Here we monitored brain activity in participants while they viewed learned or novel objects, subsequently decided whether each presented object was learned, and assessed their confidence levels. We found that when novel objects are presented, false memory confidence significantly depends on the shared representations with learned objects in the prefrontal cortex. However, such a tendency was not found in posterior regions, including the visual cortex, which may be involved in the processing of perceptual gist. Furthermore, the confidence-dependent shared representations were not observed when participants correctly identified novel objects as non-learned. These results demonstrate that false memory confidence is critically based on the reinstatement of the high-level semantic gist of stored memories in the prefrontal cortex.
Article
Humans quickly and accurately learn new visual concepts from sparse data, sometimes just a single example. The impressive performance of artificial neural networks that hierarchically pool afferents across scales and positions suggests that the hierarchical organization of the human visual system is critical to its accuracy. These approaches, however, require orders of magnitude more examples than human learners. We used a benchmark deep learning model to show that the hierarchy can also be leveraged to vastly improve the speed of learning. We specifically show how previously learned but broadly tuned conceptual representations can be used to learn visual concepts from as few as two positive examples; reusing visual representations from earlier in the visual hierarchy, as in prior approaches, requires significantly more examples to perform comparably. These results suggest techniques for learning even more efficiently and provide a biologically plausible way to learn new visual concepts from few examples.
Article
The nature of orthographic representations in the human brain is still subject of much debate. Recent reports have claimed that the visual word form area (VWFA) in left occipitotemporal cortex contains an orthographic lexicon based on neuronal representations highly selective for individual written real words (RWs). This theory predicts that learning novel words should selectively increase neural specificity for these words in the VWFA. We trained subjects to recognize novel pseudowords (PWs) and used fMRI rapid adaptation to compare neural selectivity with RWs, untrained PWs (UTPWs), and trained PWs (TPWs). Before training, PWs elicited broadly tuned responses, whereas responses to RWs indicated tight tuning. After training, TPW responses resembled those of RWs, whereas UTPWs continued to show broad tuning. This change in selectivity was specific to the VWFA. Therefore, word learning appears to selectively increase neuronal specificity for the new words in the VWFA, thereby adding these words to the brain's visual dictionary.
Article
Despite indications that regions within the anterior temporal lobe (ATL) might make a crucial contribution to pan-modal semantic representation, to date there have been no investigations of when during semantic processing the ATL plays a critical role. To test the timing of the ATL involvement in semantic processing, we studied the effect of double-pulse TMS on behavioral responses in semantic and difficulty-matched control tasks. Chronometric TMS was delivered over the left ATL (10 mm from the tip of the temporal pole along the middle temporal gyrus). During each trial, two pulses of TMS (40 msec apart) were delivered either at baseline (before stimulus presentation) or at one of the experimental time points 100, 250, 400, and 800 msec poststimulus onset. A significant disruption to performance was identified from 400 msec on the semantic task but not on the control assessment. Our results not only reinforce the key role of the left ATL in semantic representation but also indicate that its contribution is especially important around 400 msec poststimulus onset. Together, these facts suggest that the ATL may be one of the neural sources of the N400 ERP component.
Article
Recent years have seen neuroimaging data sets becoming richer, with larger cohorts of participants, a greater variety of acquisition techniques, and increasingly complex analyses. These advances have made data analysis pipelines complicated to set up and run (increasing the risk of human error) and time consuming to execute (restricting what analyses are attempted). Here we present an open-source framework, automatic analysis (aa), to address these concerns. Human efficiency is increased by making code modular and reusable, and managing its execution with a processing engine that tracks what has been completed and what needs to be (re)done. Analysis is accelerated by optional parallel processing of independent tasks on cluster or cloud computing resources. A pipeline comprises a series of modules that each perform a specific task. The processing engine keeps track of the data, calculating a map of upstream and downstream dependencies for each module. Existing modules are available for many analysis tasks, such as SPM-based fMRI preprocessing, individual and group level statistics, voxel-based morphometry, tractography, and multi-voxel pattern analyses (MVPA). However, aa also allows for full customization, and encourages efficient management of code: new modules may be written with only a small code overhead. aa has been used by more than 50 researchers in hundreds of neuroimaging studies comprising thousands of subjects. It has been found to be robust, fast, and efficient, from simple single-subject studies up to multimodal pipelines on hundreds of subjects. It is attractive to both novice and experienced users. aa can reduce the amount of time neuroimaging laboratories spend performing analyses and reduce errors, expanding the range of scientific questions it is practical to address.
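The dependency-tracking idea at the core of such a processing engine can be illustrated with a toy example. This sketch uses Python's standard-library graphlib and invented module names; it is not aa's actual API, only the general pattern of running modules in dependency order while skipping completed work:

```python
# Toy illustration of a pipeline processing engine: each module declares
# its upstream dependencies, and the engine executes modules in
# topological order, skipping stages that are already done.
# Module names and the 'done' cache are illustrative, not aa's API.
from graphlib import TopologicalSorter

# module -> set of upstream modules it depends on
pipeline = {
    "realign": set(),
    "coregister": {"realign"},
    "normalise": {"coregister"},
    "smooth": {"normalise"},
    "first_level_stats": {"smooth"},
    "mvpa": {"normalise"},  # MVPA often runs on unsmoothed data
}

done = {"realign"}  # pretend this stage completed in an earlier run
execution_order = []

for module in TopologicalSorter(pipeline).static_order():
    if module in done:
        continue  # the engine skips completed work rather than redoing it
    execution_order.append(module)
    done.add(module)

print(execution_order)
```

Because the engine consults the dependency map rather than a fixed script, independent branches (here, "smooth" and "mvpa") could equally be dispatched in parallel to a cluster, which is the acceleration strategy the abstract describes.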
Article
The multivariate analysis of brain signals has recently sparked a great amount of interest, yet accessible and versatile tools to carry out decoding analyses are scarce. Here we introduce The Decoding Toolbox (TDT) which represents a user-friendly, powerful and flexible package for multivariate analysis of functional brain imaging data. TDT is written in Matlab and equipped with an interface to the widely used brain data analysis package SPM. The toolbox allows running fast whole-brain analyses, region-of-interest analyses and searchlight analyses, using machine learning classifiers, pattern correlation analysis, or representational similarity analysis. It offers automatic creation and visualization of diverse cross-validation schemes, feature scaling, nested parameter selection, a variety of feature selection methods, multiclass capabilities, and pattern reconstruction from classifier weights. While basic users can implement a generic analysis in one line of code, advanced users can extend the toolbox to their needs or exploit the structure to combine it with external high-performance classification toolboxes. The toolbox comes with an example data set which can be used to try out the various analysis methods. Taken together, TDT offers a promising option for researchers who want to employ multivariate analyses of brain activity patterns.
Article
The present fMRI study used a spelling task to investigate the hypothesis that the left ventral occipitotemporal cortex (vOT) hosts neuronal representations of whole written words. Such an orthographic word lexicon is posited by cognitive dual-route theories of reading and spelling. In the scanner, participants performed a spelling task in which they had to indicate if a visually presented letter is present in the written form of an auditorily presented word. The main experimental manipulation distinguished between an orthographic word spelling condition in which correct spelling decisions had to be based on orthographic whole-word representations, a word spelling condition in which reliance on orthographic whole-word representations was optional and a phonological pseudoword spelling condition in which no reliance on such representations was possible. To evaluate spelling-specific activations the spelling conditions were contrasted with control conditions that also presented auditory words and pseudowords, but participants had to indicate if a visually presented letter corresponded to the gender of the speaker. We identified a left vOT cluster activated for the critical orthographic word spelling condition relative to both the control condition and the phonological pseudoword spelling condition. Our results suggest that activation of left vOT during spelling can be attributed to the retrieval of orthographic whole-word representations and, thus, support the position that the left vOT potentially represents the neuronal equivalent of the cognitive orthographic word lexicon. Hum Brain Mapp, 2015.
Article
Reading requires the interaction between multiple cognitive processes situated in distant brain areas. This makes the study of functional brain connectivity highly relevant for understanding developmental dyslexia. We used seed-voxel correlation mapping to analyse connectivity in a left-hemispheric network for task-based and resting-state fMRI data. Our main finding was reduced connectivity in dyslexic readers between left posterior temporal areas (fusiform, inferior temporal, middle temporal, superior temporal) and the left inferior frontal gyrus. Reduced connectivity in these networks was consistently present for 2 reading-related tasks and for the resting state, showing a permanent disruption which is also present in the absence of explicit task demands and potential group differences in performance. Furthermore, we found that connectivity between multiple reading-related areas and areas of the default mode network, in particular the precuneus, was stronger in dyslexic compared with nonimpaired readers.
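Seed-voxel correlation mapping, the connectivity method used in the study above, reduces to correlating the mean time course of a seed region with every other voxel's time course. A minimal sketch with synthetic data (all indices and signal parameters are illustrative):

```python
# Minimal sketch of seed-voxel correlation mapping: correlate the mean
# time course of a seed region with every voxel to obtain a
# connectivity map. Synthetic data; indices are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_timepoints, n_voxels = 200, 300
data = rng.normal(size=(n_timepoints, n_voxels))

# Inject a shared signal into the seed voxels and one "connected" voxel.
shared = rng.normal(size=n_timepoints)
seed_idx = np.arange(10)           # voxels defining the seed region
data[:, seed_idx] += shared[:, None]
data[:, 100] += shared             # a voxel functionally connected to the seed

seed_timecourse = data[:, seed_idx].mean(axis=1)

# Pearson correlation of the seed time course with every voxel,
# computed via z-scored time series.
z_data = (data - data.mean(axis=0)) / data.std(axis=0)
z_seed = (seed_timecourse - seed_timecourse.mean()) / seed_timecourse.std()
conn_map = z_data.T @ z_seed / n_timepoints

print(f"connected voxel r = {conn_map[100]:.2f}; "
      f"typical background |r| = {np.median(np.abs(conn_map[110:])):.2f}")
```

In practice the resulting per-subject correlation maps are Fisher z-transformed and compared between groups (here, dyslexic versus nonimpaired readers) at each voxel; only the single-subject map computation is shown.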
Article
In this review, we propose that the neural basis for the spontaneous, diversified human tool use is an area devoted to the execution and observation of tool actions, located in the left anterior supramarginal gyrus (aSMG). The aSMG activation elicited by observing tool use is typical of human subjects, as macaques show no similar activation, even after extensive training to use tools. The execution of tool actions, as well as their observation, requires the convergence upon aSMG of inputs from different parts of the dorsal and ventral visual streams. Non-semantic features of the target object may be provided by the posterior parietal cortex (PPC) for tool-object interaction, paralleling the well-known PPC input to anterior intraparietal (AIP) for hand-object interaction. Semantic information regarding tool identity, and knowledge of the typical manner of handling the tool, could be provided by inferior and middle regions of the temporal lobe. Somatosensory feedback and technical reasoning, as well as motor and intentional constraints, also play roles during the planning of tool actions and consequently their signals likewise converge upon aSMG. We further propose that aSMG may have arisen through duplication of monkey AIP and invasion of the duplicate area by afferents from PPC providing distinct signals depending on the kinematics of the manipulative action. This duplication may have occurred when Homo habilis or Homo erectus emerged, generating the Oldowan or Acheulean industrial complexes, respectively. Hence tool use may have emerged during hominid evolution between bipedalism and language. We conclude that humans have two parietal systems involved in tool behavior: a biological circuit for grasping objects, including tools, and an artifactual system devoted specifically to tool use. Only the latter allows humans to understand the causal relationship between tool use and obtaining the goal, and is likely to be the basis of all technological developments.
Article
Full-text available
Spatial frequency (SF) selection has long been recognized to play a role in global and local processing, though the nature of the relationship between SF processing and global/local perception is debated. Previous studies have shown that attention to relatively lower SFs facilitates global perception, and that attention to relatively higher SFs facilitates local perception. Here we recorded event-related brain potentials (ERPs) to investigate whether processing of low versus high SFs is modulated automatically during global and local perception, and to examine the time course of any such effects. Participants compared bilaterally presented hierarchical letter stimuli and attended to either the global or the local level. Irrelevant SF grating probes, flashed at the center of the display 200 ms after the onset of the hierarchical letter stimuli, could be either low or high in SF. ERPs elicited by the SF grating probes differed as a function of attended level (global versus local). ERPs elicited by low SF grating probes were more positive in the interval 196-236 ms during global than local attention, and this difference was greater over the right occipital scalp. In contrast, ERPs elicited by the high SF gratings were more positive in the interval 250-290 ms during local than global attention, and this difference was bilaterally distributed over the occipital scalp. These results indicate that directing attention to global versus local levels of a hierarchical display facilitates automatic perceptual processing of low versus high SFs, respectively, and that this facilitation is not limited to the locations occupied by the hierarchical display. The relatively long latency of these attention-related ERP modulations suggests that initial (early) SF processing is not affected by attention to hierarchical level, lending support to theories positing a higher-level mechanism underlying the relationship between SF processing and global versus local perception.
Article
Left perirhinal cortex has been previously implicated in associative coding. According to a recent experiment, the similarity of perirhinal fMRI response patterns to written concrete words is higher for words that are more similar in meaning. If left perirhinal cortex functions as an amodal semantic hub, one would predict that this semantic similarity effect would extend to the spoken modality. We conducted an event-related fMRI experiment and evaluated whether the same semantic similarity effect could be obtained for spoken as for written words. Twenty healthy subjects performed a property verification task in either the written or the spoken modality. Words corresponded to concrete animate entities for which extensive feature generation was available from more than 1000 subjects. From these feature generation data, a concept–feature matrix was derived, which formed the basis of a cosine similarity matrix between the entities reflecting their similarity in meaning (called the “semantic cossimilarity matrix”). Independently, we calculated a cosine similarity matrix between the left perirhinal fMRI activity patterns evoked by the words (called the “fMRI cossimilarity matrix”). Next, the similarity was determined between the semantic cossimilarity matrix and the fMRI cossimilarity matrix. This was done for written and spoken words pooled, for written words only, for spoken words only, as well as for crossmodal pairs. Only for written words did the fMRI cossimilarity matrix correlate with the semantic cossimilarity matrix. Contrary to our prediction, we did not find any such effect for auditory word input, nor did we find cross-modal effects in perirhinal cortex between written and auditory words. Our findings situate the contribution of left perirhinal cortex to word processing at the top of the visual processing pathway, rather than at an amodal stage where visual and auditory word processing pathways have already converged.
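The analysis logic described in this abstract (a second-order comparison of two cosine similarity matrices, one semantic and one neural) can be sketched as follows. This is a minimal illustration with randomly generated stand-in data; the matrix sizes, variable names, and the use of Pearson correlation over the off-diagonal entries are assumptions for the sketch, not the study's actual pipeline:

```python
import numpy as np

def cosine_similarity_matrix(X):
    """Pairwise cosine similarity between the rows of X (items x features)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

rng = np.random.default_rng(0)
n_items = 8                                   # e.g. 8 animate concepts (illustrative)
features = rng.random((n_items, 1000))        # stand-in concept-feature matrix
fmri = rng.standard_normal((n_items, 200))    # stand-in voxel activity patterns

sem_sim = cosine_similarity_matrix(features)  # "semantic cossimilarity matrix"
fmri_sim = cosine_similarity_matrix(fmri)     # "fMRI cossimilarity matrix"

# Compare the two similarity structures using only the off-diagonal pairs,
# since the diagonal (self-similarity) is 1 by construction.
iu = np.triu_indices(n_items, k=1)
r = np.corrcoef(sem_sim[iu], fmri_sim[iu])[0, 1]
print(f"second-order correlation: r = {r:.3f}")
```

With real data, a positive correlation here would indicate that words closer in meaning also evoke more similar activity patterns in the region of interest.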
Article
Successful learning involves integrating new material into existing memory networks. A learning procedure known as fast mapping (FM), thought to simulate the word-learning environment of children, has recently been linked to distinct neuroanatomical substrates in adults. This idea suggests the (never-before-tested) hypothesis that FM may promote rapid incorporation into cortical memory networks. We test this hypothesis here in two experiments. In the first, we introduced fifty participants to sixteen unfamiliar animals and their names through FM or explicit encoding (EE), and tested them on the training day and again after sleep. Learning through EE produced strong declarative memories without immediate lexical competition, as expected from slow-consolidation models. Learning through FM, however, led to almost immediate lexical competition, which continued to the next day. Additionally, the learned words began to prime related concepts on the day following FM (but not EE) training. In a second experiment, we replicated the lexical integration results and determined that presenting an already-known item during learning was crucial for rapid integration through FM. These findings indicate that learned items can be integrated into cortical memory networks at an accelerated rate through fast mapping, and that retrieval of a known related concept, in order to infer the target of learning, is critical for this effect.