Malte R. Henningsen-Schomers’s research while affiliated with Freie Universität Berlin and other places

Publications (9)


The power of words
  • Article

July 2024 · 102 Reads · The Project Repository Journal

Fynn Dobler · Malte Henningsen-Schomers

Compared to our closest living relatives, who typically use fewer than 100 words, humans can build vocabularies of tens to hundreds of thousands of words. The ERC-funded Advanced Grant project ‘Material Constraints enabling Human Cognition’, or ‘MatCo’, will find out why. It will use novel insights from human neurobiology, translating them into mathematically exact computational models to find new answers to long-standing questions in cognitive science, linguistics and philosophy. The project will also explore how semantic meaning is implemented for gestures and words and, more specifically, for referential and categorical terms. To identify human cognitive capacities, MatCo will develop models replicating structural differences between human and non-human primate brains. The results will shed light on the workings of these biologically constrained networks.


Illustrations of conceptual structure—full semantic feature overlap versus family resemblance—of relatively more concrete and abstract concepts and meanings. Panel A: Examples of overlapping and specific grounded experiential features of instances of concrete (left) and abstract (right) concepts. Below, a schematic illustration of the structural difference (full vs. partial semantic feature overlap) is depicted. Panel B: Specific grounding patterns used in this study. Colors indicate whether a neural element is part of one (blue, red, green), two (yellow, cyan, magenta), or all three instances (white).
Large‐scale network structure and simulation conditions. Panel A: Structure and connectivity of the neural network model. In total, 12 brain areas were modeled, including areas in frontal, temporal, and occipital cortex. Perisylvian areas comprise an inferior frontal articulatory system (red colors) and a superior temporal auditory system (blue colors), and extrasylvian areas comprise a lateral dorsal hand motor system (yellow/brown) and a visual “what” stream of object processing (green). Numbers refer to Brodmann areas, and the colored lines represent long distance cortico‐cortical connections as documented by neuroanatomical studies. Panel B: Schematic depiction of the brain areas modeled (using the same coloring for different brain areas as in Panel A), along with their connectivity structure. Arrows indicate between‐area connections. Panel C: Training regime used for concrete concepts and symbols. Colored dots indicate whether neural elements are related to instance‐specific (blue/green/red), semantic/conceptual (white), or symbol form (cyan) information. For each concept, three instance‐related grounding patterns were presented separately to the network in random order. For symbol learning, a wordform pattern was presented together with one of the instance‐related patterns of the related concept.
Temporal activity dynamics elicited by instances of concrete and abstract concepts after learning outside of and within symbol contexts. Panel A: Activity dynamics of an instance‐related cell assembly (CA), for either all neurons in the CA (left panel), only unique neurons (middle panel), or only conceptual neurons (right panel). The stimulation period is marked in gold. Depicted is the number of spikes per timestep, normalized for the number of neurons in the grounding pattern. Whereas unique neurons contribute to early stages of CA activity, including its ignition, sustained activity is primarily carried by conceptual neurons. Panel B: Reverberation time or working memory period (WMP) for concrete and abstract concepts and symbols. Without a symbol, abstract concepts barely show any prolonged activity or reverberation, whereas concrete concepts do. Abstract concepts benefit the most from learning concepts in context of symbols and become functionally similar to concrete ones. Panel C: The number of spikes at ignition. Through the addition of a symbol, the raw number of neurons involved in ignition increases. Additionally, relatively more conceptual than unique neurons contribute to ignition for both types of symbol.
Verbal Symbols Support Concrete but Enable Abstract Concept Formation: Evidence From Brain‐Constrained Deep Neural Networks
  • Article
  • Full-text available

May 2024 · 42 Reads · 5 Citations

Concrete symbols (e.g., sun, run) can be learned in the context of objects and actions, thereby grounding their meaning in the world. However, it is controversial whether a comparable avenue to semantic learning exists for abstract symbols (e.g., democracy). When we simulated the putative brain mechanisms of conceptual/semantic grounding using brain‐constrained deep neural networks, the learning of instances of concrete concepts outside of language contexts led to robust neural circuits generating substantial and prolonged activations. In contrast, the learning of instances of abstract concepts yielded much reduced and only short‐lived activity. Crucially, when conceptual instances were learned in the context of wordforms, circuit activations became robust and long‐lasting for both concrete and abstract meanings. These results indicate that, although the neural correlates of concrete conceptual representations can be built from grounding experiences alone, abstract concept formation at the neurobiological level is enabled by and requires the correlated presence of linguistic forms.
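
The reverberation measure described in this abstract and in the figure captions above (the 'working memory period' of a cell assembly) can be illustrated with a minimal Python sketch. This is not the authors' code; the array shapes, threshold and function name are illustrative assumptions only.

import numpy as np

def working_memory_period(spikes, stim_offset, threshold=0.05):
    # spikes: (timesteps, n_neurons) binary spike raster of one cell assembly
    # stim_offset: index of the last stimulation timestep
    # threshold: minimum fraction of assembly neurons firing per timestep
    rate = spikes.mean(axis=1)          # fraction of CA neurons firing at each step
    post = rate[stim_offset + 1:]       # activity after stimulus offset
    above = post > threshold
    if not above.any():
        return 0                        # no reverberation (e.g. an ungrounded abstract concept)
    below = np.flatnonzero(~above)
    # reverberation lasts until activity first drops below threshold
    return int(below[0]) if below.size else int(above.size)

Under such a toy measure, the paper's main result would show up as a longer post-stimulus count for concrete than for abstract instances when no symbol is learned, and comparably long counts for both once a wordform is co-trained.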


Causal influence of linguistic learning on perceptual and conceptual processing: A brain-constrained deep neural network study of proper names and category terms

January 2024 · 55 Reads · 5 Citations · The Journal of Neuroscience: The Official Journal of the Society for Neuroscience

Language influences cognitive and conceptual processing, but the mechanisms through which such causal effects are realized in the human brain remain unknown. Here, we use a brain-constrained deep neural network model of category formation and symbol learning and analyze the emergent model-internal mechanisms at the neural circuit level. In one set of simulations, the network was presented with similar patterns of neural activity indexing instances of objects and actions belonging to the same categories. Biologically realistic Hebbian learning led to the formation of instance-specific neurons distributed across multiple areas of the network and, in addition, to cell assembly circuits of ‘shared’ neurons responding to all category instances – the network correlates of conceptual categories. In two separate sets of simulations, the network learned the same patterns together with symbols for individual instances (‘proper names’) or symbols related to classes of instances sharing common features (‘category terms’). Learning category terms markedly increased the number of shared neurons in the network, thereby making category representations more robust, while reducing the number of instance-specific neurons. In contrast, proper-name learning prevented substantial reduction of instance-specific neurons and blocked the overgrowth of category-general cells. Representational Similarity Analysis further confirmed that the neural activity patterns of category instances became more similar to each other after category-term learning, relative to both learning with proper names and learning without any symbols. These network-based mechanisms for concepts, proper names and category terms explain why and how symbol learning changes object perception and memory, as revealed by experimental studies.

Significance Statement: How do verbal symbols for specific individuals (Micky Mouse) and object categories (house mouse) causally influence conceptual representation and processing? Category terms and proper names have been shown to promote category formation and instance learning, respectively, potentially by directing attention to category-critical and object-specific features. Yet the mechanisms underlying these observations at the neural circuit level remained unknown. Using a mathematically precise deep neural network model constrained by properties of the human brain, we show that category-term learning strengthens and solidifies conceptual representations, whereas proper names support object-specific mechanisms. Based on network-internal mechanisms and unsupervised correlation-based learning, this work offers neurobiological explanations for causal effects of symbol learning on concept formation, category building and instance representation in the human brain.
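
The Representational Similarity Analysis step mentioned in the abstract can be sketched as follows; the function below simply averages pairwise Pearson correlations between instance activation vectors of one category. It is a hedged illustration, not the study's analysis code, and all names are hypothetical.

import numpy as np

def within_category_similarity(patterns):
    # patterns: (n_instances, n_neurons) activation vectors evoked by the
    # instances of a single category after learning
    corr = np.corrcoef(patterns)                   # instance-by-instance correlation matrix
    pairs = corr[np.triu_indices_from(corr, k=1)]  # unique instance pairs only
    return pairs.mean()

# Hypothetical comparison across learning conditions (no symbol, proper
# names, category terms); higher values indicate more category-like coding:
# for cond, acts in {"none": acts_none, "proper name": acts_pn,
#                    "category term": acts_ct}.items():
#     print(cond, within_category_similarity(acts))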


Influence of language on perception and concept formation in a brain-constrained deep neural network model

December 2022 · 285 Reads · 16 Citations

A neurobiologically constrained model of semantic learning in the human brain was used to simulate the acquisition of concrete and abstract concepts, either with or without verbal labels. Concept acquisition and semantic learning were simulated using Hebbian learning mechanisms. We measured the network's category learning performance, defined as the extent to which it successfully (i) grouped partly overlapping perceptual instances into a single (abstract or concrete) conceptual representation, while (ii) still distinguishing representations for distinct concepts. Co-presence of linguistic labels with perceptual instances of a given concept generally improved the network's learning of categories, with a significantly larger beneficial effect for abstract than concrete concepts. These results offer a neurobiological explanation for causal effects of language structure on concept formation and on perceptuo-motor processing of instances of these concepts: supplying a verbal label during concept acquisition improves the cortical mechanisms by which experiences with objects and actions along with the learning of words lead to the formation of neuronal ensembles for specific concepts and meanings. Furthermore, the present results make a novel prediction, namely, that such ‘Whorfian’ effects should be modulated by the concreteness/abstractness of the semantic categories being acquired, with language labels supporting the learning of abstract concepts more than that of concrete ones. This article is part of the theme issue ‘Concepts in interaction: social engagement and inner experiences’.
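
The paper's two-part definition of category learning performance, grouping instances of one concept together while keeping different concepts apart, can be approximated by a simple contrast between within-concept and between-concept similarity. The sketch below is a hypothetical stand-in for the network's actual performance measure.

import numpy as np

def category_learning_score(reps, labels):
    # reps:   (n_instances, n_neurons) network responses to all instances
    # labels: (n_instances,) concept index of each instance
    labels = np.asarray(labels)
    sim = np.corrcoef(reps)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    within = sim[same & off_diag].mean()    # criterion (i): instances of one concept cohere
    between = sim[~same].mean()             # criterion (ii): different concepts stay distinct
    return within - between

On such a measure, the reported 'Whorfian' effect would appear as a larger gain from co-presenting a verbal label for abstract than for concrete categories.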



Modelling concrete and abstract concepts using brain-constrained deep neural networks

November 2021 · 399 Reads · 39 Citations · Psychological Research

A neurobiologically constrained deep neural network mimicking cortical areas relevant for sensorimotor, linguistic and conceptual processing was used to investigate the putative biological mechanisms underlying conceptual category formation and semantic feature extraction. Networks were trained to learn neural patterns representing specific objects and actions relevant to semantically ‘ground’ concrete and abstract concepts. Grounding sets consisted of three grounding patterns with neurons representing specific perceptual or action-related features; neurons were either unique to one pattern or shared between patterns of the same set. Concrete categories were modelled as pattern triplets overlapping in their ‘shared neurons’, thus implementing semantic feature sharing of all instances of a category. In contrast, abstract concepts had partially shared feature neurons common to only pairs of category instances, thus, exhibiting family resemblance, but lacking full feature overlap. Stimulation with concrete and abstract conceptual patterns and biologically realistic unsupervised learning caused formation of strongly connected cell assemblies (CAs) specific to individual grounding patterns, whose neurons were spread out across all areas of the deep network. After learning, the shared neurons of the instances of concrete concepts were more prominent in central areas when compared with peripheral sensorimotor ones, whereas for abstract concepts the converse pattern of results was observed, with central areas exhibiting relatively fewer neurons shared between pairs of category members. We interpret these results in light of the current knowledge about the relative difficulty children show when learning abstract words. Implications for future neurocomputational modelling experiments as well as neurobiological theories of semantic representation are discussed.
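
The grounding-set construction described above (full feature overlap for concrete concepts versus pairwise family resemblance for abstract ones) can be made explicit with a short sketch. Neuron counts and the random assignment below are illustrative assumptions, not the paper's exact parameters.

import numpy as np

def make_grounding_triplet(n_neurons, n_shared, n_unique, concrete=True, seed=0):
    # Returns three binary grounding patterns (instances) of one concept.
    # concrete=True : the same n_shared feature neurons appear in all three
    #                 instances (full semantic feature overlap).
    # concrete=False: shared neurons are split into three groups, each common
    #                 to exactly two instances (family resemblance, no neuron
    #                 shared by all three).
    rng = np.random.default_rng(seed)
    cells = rng.permutation(n_neurons)
    patterns = np.zeros((3, n_neurons), dtype=int)
    if concrete:
        patterns[:, cells[:n_shared]] = 1
        pool = cells[n_shared:]
    else:
        per_pair = n_shared // 3
        for k, (a, b) in enumerate([(0, 1), (1, 2), (0, 2)]):
            group = cells[k * per_pair:(k + 1) * per_pair]
            patterns[a, group] = 1
            patterns[b, group] = 1
        pool = cells[3 * per_pair:]
    for i in range(3):                      # instance-specific (unique) neurons
        patterns[i, pool[i * n_unique:(i + 1) * n_unique]] = 1
    return patterns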


Fig. 2 | Networks for modelling cognitive functions. a | A localist network model includes nodes representing cognitive entities, for example word forms (middle layer), phonemes (bottom layer) and semantic features (top layer). Lines indicate links between nodes. Nodes sum up their inputs linearly, such that the activation of the phoneme nodes of /d/, /o/ and /g/ activates the word node for 'dog', which, in turn, activates semantic feature units (filled circles at the top) characterizing the related concept. b | Auto-associative networks include connections between their neurons, such that reverberating activity is possible; they are inspired by the local connectivity between adjacent cortical pyramidal cells (refs. 20, 22). This panel shows the connectivity matrix between five artificial neurons, α to ε. These neurons make up an auto-associative network that includes two discrete representations indicated in magenta (neurons α to γ) and cyan (neurons γ to ε). Numbers specify the presence (1) or absence (0) of a connection from the neuron listed on the left of the matrix to the neuron indicated at the top. Each neuron becomes active if and only if it receives at least two simultaneous inputs, thus resulting in the discrete representations maintaining activity over time. c | In hetero-associative networks, neuron populations ordered in 'layers' project onto each other serially, resembling connectivity in some neural structures. The typical three-layer networks used in many parallel-distributed processing (PDP) models include input and output layers plus a 'hidden' layer in between. d | Deep neural networks include several hidden layers. The number of neurons per layer can vary substantially.
Fig. 4 | Model of evolutionary connectivity change in left fronto-temporal cortex and its functional consequences. Verbal working memory is a feature of humans that apes and monkeys apparently lack. a | 'Monkey' and 'human' models of six areas of fronto-temporal cortex involved in articulation and auditory perception (middle panel) were used to address this issue. Areas were modelled as sets of 625 mean-field excitatory neurons, each projecting randomly to local neighbourhoods of other excitatory units (coloured); each excitatory cell has a corresponding inhibitory 'cell' (in grey) projecting to a narrow local neighbourhood (left panel). Between-area connections implementing comparative DTI results (refs. 170, 171, 221, 222) included next-neighbour connections between areas (green) and second-next-area connections specific to human perisylvian cortex (violet links in the 'human' model only). Correlation-based Hebbian plasticity was applied to imitate early sound and sign learning and to interlink articulatory and auditory information. b | After stimulation with learnt auditory-articulatory patterns, the monkey model showed weak and short-lived sequential activation of the model areas: A1, primary auditory cortex; AB, auditory belt cortex; M1, primary articulatory motor cortex; PB, auditory parabelt cortex; PF, inferior prefrontal cortex; PM, premotor cortex. c | The same stimulation led to strong and long-lasting parallel activation in the areas of the human model. This prolonged activity can be interpreted as verbal working memory, a mechanism necessary for human language. The model applies all seven constraints discussed above. The left portion of part a is adapted with permission from ref. 211. Parts a-c are adapted from ref. 205.
Fig. 5 | Brain-constrained model of semantic grounding. a | For simulating the infant's learning of the meaning of object-related and action-related words, a 12-area model was created including the six inferior-frontal and superior-temporal perisylvian areas of Fig. 4 (A1, primary auditory cortex; AB, auditory belt cortex; M1, primary articulatory motor cortex; PB, auditory parabelt cortex; PF, inferior prefrontal cortex; PM, premotor cortex), plus a ventral temporo-occipital visual stream (in green: AT, anterior-temporal cortex; TO, temporo-occipital cortex; V1, primary visual cortex) and a dorsolateral frontal action stream (in yellow-brown: lateral PF (PFL), lateral PM (PML) and lateral M1 (M1L)). Between-area connectivity is shown by arrows. Semantic learning and grounding of object and action words was modelled by co-presenting acoustic and articulatory information along with either semantic-referential object-related information or action-related information. This was done by co-activating specific patterns of spiking neurons in the different 'primary' areas of the model (M1 and M1L, V1 and A1) and Hebbian correlation learning. After learning, 'auditory word comprehension' was simulated by presenting specific previously learned auditory patterns to area A1. As a result, specific circuits of neurons distributed across the network were activated, as indicated by the coloured dots in the insets (1 dot indicates 1 active model neuron; blue, object-word circuit; red, action-word circuit; yellow, both), shown in the black boxes representing areas. b,c | Distribution of circuit neurons across model areas. Bars give average numbers of neurons per area for object-word (dark grey) and action-word (light grey) circuits; whiskers give standard errors. Note the relatively stronger representation of object-word circuits in ventral-visual areas and that of action-word circuits in dorsolateral-frontal areas in part a (refs. 50, 213), which offer an explanation for well-known differences between the cortical mechanisms underlying action-related and object-related concepts (refs. 207, 223-226). All seven constraints discussed in the main text were implemented. Figure adapted, with permission, from ref. 185.
Biological constraints on neural network models of cognitive function

June 2021 · 1,086 Reads · 138 Citations · Nature Reviews Neuroscience

Neural network models are potential tools for improving our understanding of complex brain functions. To address this goal, these models need to be neurobiologically realistic. However, although neural networks have advanced dramatically in recent years and even achieve human-like performance on complex perceptual and cognitive tasks, their similarity to aspects of brain anatomy and physiology is imperfect. Here, we discuss different types of neural models, including localist, auto-associative, hetero-associative, deep and whole-brain networks, and identify aspects under which their biological plausibility can be improved. These aspects range from the choice of model neurons and of mechanisms of synaptic plasticity and learning to implementation of inhibition and control, along with neuroanatomical properties including areal structure and local and long-range connectivity. We highlight recent advances in developing biologically grounded cognitive theories and in mechanistically explaining, on the basis of these brain-constrained neural models, hitherto unaddressed issues regarding the nature, localization and ontogenetic and phylogenetic development of higher brain functions. In closing, we point to possible future clinical applications of brain-constrained modelling. Neural network models have potential for improving our understanding of brain functions. In this Perspective, Pulvermüller and colleagues examine various aspects of such models that may need to be constrained to make them more neurobiologically realistic and therefore better tools for understanding brain function.
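
One of the constraints discussed in this Perspective, correlation-based (Hebbian) synaptic plasticity with bounded weights, can be written down in a few lines. The rule below is a generic textbook-style sketch with hypothetical thresholds, not the specific learning rule used in the authors' models.

import numpy as np

def hebbian_step(w, pre, post, lr=0.01, w_max=1.0, theta=0.05):
    # w:    (n_post, n_pre) synaptic weight matrix
    # pre:  (n_pre,)  presynaptic activity (e.g. firing rates in [0, 1])
    # post: (n_post,) postsynaptic activity
    pre_on = (pre > theta).astype(float)
    post_on = (post > theta).astype(float)
    ltp = np.outer(post_on, pre_on)                                      # both active: potentiate
    ltd = np.outer(post_on, 1 - pre_on) + np.outer(1 - post_on, pre_on)  # one-sided: depress
    w = w + lr * (ltp - ltd)
    return np.clip(w, 0.0, w_max)                                        # keep weights bounded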


Analysis of continuous neuronal activity evoked by natural speech with computational corpus linguistics methods

August 2020 · 511 Reads · 40 Citations

Malte R. Henningsen-Schomers · [...]

In the field of neurobiology of language, neuroimaging studies are generally based on stimulation paradigms consisting of at least two different conditions. Designing those paradigms can be very time-consuming and this traditional approach is necessarily data-limited. In contrast, in computational and corpus linguistics, analyses are often based on large text corpora, which allow a vast variety of hypotheses to be tested by repeatedly re-evaluating the data set. Furthermore, text corpora also allow exploratory data analysis in order to generate new hypotheses. By drawing on the advantages of both fields, neuroimaging and computational corpus linguistics, we here present a unified approach combining continuous natural speech and MEG to generate a corpus of speech-evoked neuronal activity.
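
The core data structure of such a corpus of speech-evoked neuronal activity, word-locked epochs cut from a continuous MEG recording using a time-aligned transcript, can be sketched as follows. The function is a hypothetical illustration; real pipelines would additionally handle filtering, artefact rejection and channel metadata.

import numpy as np

def build_word_epoch_corpus(meg, word_onsets_s, sfreq, tmin=-0.1, tmax=0.6):
    # meg:           (n_channels, n_samples) continuous recording
    # word_onsets_s: word onset times in seconds from a time-aligned transcript
    # sfreq:         sampling frequency in Hz
    start = int(round(tmin * sfreq))
    stop = int(round(tmax * sfreq))
    epochs = []
    for t in word_onsets_s:
        s = int(round(t * sfreq))
        if s + start >= 0 and s + stop <= meg.shape[1]:
            epochs.append(meg[:, s + start:s + stop])
    # (n_words, n_channels, n_times): queryable like a text corpus, e.g. by
    # word frequency, lemma or part of speech, without re-recording data
    return np.stack(epochs)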


Analysis of ongoing neuronal activity evoked by continuous speech with computational corpus linguistics methods

April 2020 · 440 Reads

In the field of neurobiology of language, neuroimaging studies are generally based on stimulation paradigms consisting of at least two different conditions. Depending on the desired evaluation, these conditions, in turn, have to contain dozens of items to achieve a good signal-to-noise ratio. Designing those paradigms can be very time-consuming. Subsequently, a group of participants is stimulated with the new paradigm, while brain activity is assessed, e.g. with EEG/MEG. The measured data are then pre-processed and finally contrasted according to the different stimulus conditions. In this way, only a limited number of analyses and hypothesis tests can be performed, while for alternative or further analyses, completely new paradigms usually need to be designed. This traditional approach is necessarily data-limited, and the cost-benefit ratio is therefore rather poor. In contrast, in computational linguistics, analyses are based on text corpora, which allow a vast variety of hypotheses to be tested by repeatedly re-evaluating the data set. Furthermore, text corpora also allow exploratory data analysis in order to generate new hypotheses. Combining the two approaches, we here present a unified approach using continuous natural speech and MEG to generate a corpus-like database of speech-evoked neuronal activity.

Citations (6)


... The word "beauty", on the other hand, functions as a perceptual representation that glues these perceptions together (Pulvermüller, 2013). This starring role of language is also supported by recent work in computational modeling (Dobler et al., 2024; Pulvermüller, 2018). In PP this functions via the root node which can selectively activate only certain parts of its sub-network, and which is easier to access when it has a single identifying label (Michel, 2022). ...

Reference:

Higher-Level Cognition under Predictive Processing: Structural Representations, Grounded Cognition, and Conceptual Spaces
Verbal Symbols Support Concrete but Enable Abstract Concept Formation: Evidence From Brain‐Constrained Deep Neural Networks

... This approach allows us to explore both fine-grained and population-level neural dynamics in our simulations. The detailed mathematical equations governing each neuron type are provided in the Appendix; the model parameters were adopted from previous successful simulations (Pulvermüller 2016; Tomasello et al. 2017; Tomasello et al. 2018; Constant et al. 2023; Nguyen et al. 2024), ensuring consistency and comparability with previously established simulation work. For a more in-depth description of the parameters and their impact on CA formation, see Garagnani et al. (2008). ...

Causal influence of linguistic learning on perceptual and conceptual processing: A brain-constrained deep neural network study of proper names and category terms

The Journal of Neuroscience : The Official Journal of the Society for Neuroscience

... Our findings can be understood through the lens of language's influence on perceptual and cognitive processes, much like other sensory inputs (Kray et al., 2004; Lupyan, 2016; Radulescu et al., 2022). Language has been shown to play a pivotal role in increasing categorical clarity, reducing ambiguity, and organizing knowledge, which in turn shapes perception, concept formation, and task representations (Davis & Yee, 2021; Dove et al., 2022; Henningsen-Schomers et al., 2023; Lupyan, 2012b; Mikolov, Joulin, & Baroni, 2018). Lupyan and Thompson-Schill (2012) demonstrated that verbal labels activate familiar knowledge in a manner that differs significantly from nonverbal cues. ...

Influence of language on perception and concept formation in a brain-constrained deep neural network model

... In addition, a wide body of evidence shows people need more contextual information to understand abstract words compared to that needed for concrete words (Schwanenflugel & Stowe, 1989; Schwanenflugel, Akin, & Luh, 1992). Consistently, research shows that abstract concepts are processed and represented differently from concrete concepts (Binder, Westbury, McKiernan, Possing, & Medler, 2005; Bolognesi & Steen, 2018; Borghi, 2023; Conca, Borsa, Cappa, & Catricalà, 2021; Dove, 2022; Henningsen-Schomers & Pulvermüller, 2022; Mazzuca et al., 2021). These differences might be the result of differing acquisition pathways, and the scientific literature in this domain has emphasised distinct aspects that might drive abstract word acquisition, such as emotional valence, interoception, and social interaction (Wauters, Tellings, Van Bon, & Van Haaften, 2003; Ponari, Norbury, & Vigliocco, 2018; Connell, Lynott, & Banks, 2018; Borghi, 2023; Dove, 2022; Reilly et al., 2024). ...

Modelling concrete and abstract concepts using brain-constrained deep neural networks

Psychological Research

... These results indicate that MGF can effectively alleviate neuronal injury. The incidence of cognitive dysfunction caused by ischemic stroke is approximately 30%, and it manifests mainly as learning and memory impairment, which seriously affects the prognosis of patients (Pulvermüller et al., 2021). It has been previously shown that MGF can improve the learning and memory ability of animal models of various nervous system diseases (Zhang et al., 2024). ...

Biological constraints on neural network models of cognitive function

Nature Reviews Neuroscience

... neurolinguistic research has increasingly adopted more realistic, continuous speech stimuli, including excerpts from audio-books, to better reflect real-world language processing [Sch+21a; KSK23; Gar+22; Sch+23b; Sch+24]. Such more realistic, context-enriched stimuli offer a promising way to immerse participants in the complexities of naturalistic language processing [Sch+21a]. ...

Analysis of continuous neuronal activity evoked by natural speech with computational corpus linguistics methods