Project

Neurocognitive Poetics

Goal: Generate highly cross-disciplinary, open-minded and open-ended research on how literature, and poetry in particular, is constructed in our mindbrains on the basis of artful verbal stimuli (cf. Jacobs, 2015, Front. Hum. Neurosci., doi: 10.3389/fnhum.2015.00186)

Methods: Computational Linguistics, Computational Neuroscience, Questionnaire-Based Surveys, Modeling, Reaction Time, EEG Analysis, fMRI Analysis, Eye Tracking, Digital Humanities, Neural Networks, Computational Stylistics, Digital Literary Studies


Project log

Arthur M Jacobs
added an update
here's a new piece on how to simulate effects of reading age on semantic and syntactic skills.
cheers
art
 
Markus J. Hofmann
added a research item
Though there is a strong consensus that word length and frequency are the most important single-word features determining visual-orthographic access to the mental lexicon, there is less agreement as to how best to capture syntactic and semantic factors. The traditional approach in cognitive reading research assumes that word predictability from sentence context is best captured by cloze completion probability (CCP) derived from human performance data. We review recent research suggesting that probabilistic language models provide deeper explanations for syntactic and semantic effects than CCP. Then we compare CCP with three probabilistic language models for predicting word viewing times in an English and a German eye tracking sample: (1) Symbolic n-gram models consolidate syntactic and semantic short-range relations by computing the probability that a word occurs given the two preceding words. (2) Topic models rely on subsymbolic representations to capture long-range semantic similarity by word co-occurrence counts in documents. (3) In recurrent neural networks (RNNs), the subsymbolic units are trained to predict the next word, given all preceding words in the sentences. To examine lexical retrieval, these models were used to predict single fixation durations and gaze durations to capture rapidly successful and standard lexical access, and total viewing time to capture late semantic integration. The linear item-level analyses showed greater correlations of all language models with all eye-movement measures than CCP. Then we examined non-linear relations between the different types of predictability and the reading times using generalized additive models. N-gram and RNN probabilities of the present word more consistently predicted reading performance compared with topic models or CCP. For the effects of last-word probability on current-word viewing times, we obtained the best results with n-gram models.
Such count-based models seem to best capture short-range access that is still underway when the eyes move on to the subsequent word. The prediction-trained RNN models, in contrast, better predicted early preprocessing of the next word. In sum, our results demonstrate that the different language models account for differential cognitive processes during reading. We discuss these algorithmically concrete blueprints of lexical consolidation as theoretically deep explanations for human reading.
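The n-gram approach described above can be sketched in a few lines. This is only a toy illustration of a trigram model with add-one smoothing over an invented two-sentence corpus, not the models used in the study:

```python
from collections import defaultdict

def train_trigram(sentences):
    """Count bigram contexts and trigrams from tokenized sentences."""
    tri, bi = defaultdict(int), defaultdict(int)
    for toks in sentences:
        padded = ["<s>", "<s>"] + toks + ["</s>"]
        for i in range(2, len(padded)):
            bi[(padded[i - 2], padded[i - 1])] += 1
            tri[(padded[i - 2], padded[i - 1], padded[i])] += 1
    return tri, bi

def trigram_prob(tri, bi, w1, w2, w3, vocab_size, alpha=1.0):
    """P(w3 | w1, w2) with add-alpha smoothing."""
    return (tri[(w1, w2, w3)] + alpha) / (bi[(w1, w2)] + alpha * vocab_size)

# Invented toy corpus; a real model would be trained on millions of sentences.
corpus = [["the", "eyes", "move", "on"], ["the", "eyes", "fixate", "words"]]
tri, bi = train_trigram(corpus)
vocab = {w for s in corpus for w in s} | {"</s>"}
p = trigram_prob(tri, bi, "the", "eyes", "move", len(vocab))  # 2/9
```

Such per-word probabilities (or their logarithms, i.e. surprisal values) are what get regressed against fixation durations in studies like the one above.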
Arthur M Jacobs
added an update
hi folks,
if you're interested in how to compute the beauty of entire books, you can find a proposal in this new paper of ours.
cheers
art
 
Arthur M Jacobs
added 15 research items
This perspective paper discusses four general desiderata of current computational stylistics and (neuro-)cognitive poetics concerning the development of (a) appropriate databases/training corpora, (b) advanced qualitative-quantitative narrative analysis (Q2NA) and machine learning tools for feature extraction, (c) ecologically valid literary test materials, and (d) open-access reader-response data banks. In six explorative computational stylistics studies, it introduces a number of tools that provide QNA indices of the foregrounding potential at the sublexical, lexical, inter- and supralexical levels for poems by Shakespeare, Blake, or Dickens. These concern lexical diversity and aesthetic potential, sentiment analysis, sublexical sonority scores or phrase structure, and topics analysis. The results illustrate the complex interplay of stylistic features and the necessity for theoretical guidance and interdisciplinary cooperation in selecting adequate training corpora, QNA tools, test texts, and response measures.
Two computational studies provide different sentiment analyses for text segments (e.g., “fearful” passages) and figures (e.g., “Voldemort”) from the Harry Potter books (Rowling, 1997, 1998, 1999, 2000, 2003, 2005, 2007) based on a novel simple tool called SentiArt. The tool uses vector space models together with theory-guided, empirically validated label lists to compute the valence of each word in a text by locating its position in a 2d emotion potential space spanned by the words of the vector space model. After testing the tool's accuracy with empirical data from a neurocognitive poetics study, it was applied to compute emotional figure and personality profiles (inspired by the so-called “big five” personality theory) for main characters from the book series. The results of comparative analyses using different machine-learning classifiers (e.g., AdaBoost, Neural Net) show that SentiArt performs very well in predicting the emotion potential of text passages. It also produces plausible predictions regarding the emotional and personality profile of fiction characters which are correctly identified on the basis of eight character features, and it achieves a good cross-validation accuracy in classifying 100 figures into “good” vs. “bad” ones. The results are discussed with regard to potential applications of SentiArt in digital literary, applied reading and neurocognitive poetics studies such as the quantification of the hybrid hero potential of figures.
In this paper, we compute the affective-aesthetic potential (AAP) of literary texts by using a simple sentiment analysis tool called SentiArt. In contrast to other established tools, SentiArt is based on publicly available vector space models (VSMs) and requires no emotional dictionary, thus making it applicable in any language for which VSMs have been made available (>150 so far) and avoiding issues of low coverage. In a first study, the AAP values of all words of a widely used lexical databank for German were computed and the VSM’s ability in representing concrete and more abstract semantic concepts was demonstrated. In a second study, SentiArt was used to predict ~2800 human word valence ratings and shown to have a high predictive accuracy (R² > 0.5, p < 0.0001). A third study tested the validity of SentiArt in predicting emotional states over (narrative) time using human liking ratings from reading a story. Again, the predictive accuracy was highly significant: R²adj = 0.46, p < 0.0001, establishing the SentiArt tool as a promising candidate for lexical sentiment analyses at both the micro- and macrolevels, i.e., short and long literary materials. Possibilities and limitations of lexical VSM-based sentiment analyses of diverse complex literary texts are discussed in the light of these results.
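The core SentiArt idea, scoring a word by its vector-space proximity to positive versus negative label words, can be illustrated with a minimal sketch. The tiny 3-d vectors and label lists below are invented for demonstration only; the actual tool uses published high-dimensional VSMs and empirically validated label lists:

```python
import math

# Invented toy "embeddings"; a real application loads a published VSM.
vecs = {
    "joy":   [0.9, 0.1, 0.0],
    "love":  [0.8, 0.2, 0.1],
    "grief": [0.1, 0.9, 0.0],
    "fear":  [0.2, 0.8, 0.1],
    "smile": [0.7, 0.2, 0.0],
    "pain":  [0.15, 0.85, 0.05],
}

def cos(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def valence(word, pos_labels, neg_labels):
    """Mean cosine to positive labels minus mean cosine to negative labels."""
    v = vecs[word]
    pos = sum(cos(v, vecs[p]) for p in pos_labels) / len(pos_labels)
    neg = sum(cos(v, vecs[n]) for n in neg_labels) / len(neg_labels)
    return pos - neg

v_smile = valence("smile", ["joy", "love"], ["grief", "fear"])  # positive
v_pain = valence("pain", ["joy", "love"], ["grief", "fear"])    # negative
```

Averaging such word-level scores over a sentence or text segment then yields the segment-level emotion potential reported in the studies above.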
Arthur M Jacobs
added an update
hi folks,
just in time for the election results, here's our latest piece for Neurocognitive Poetics; whether electoral programs QUALIfy for the label 'poetic' is up to you to decide. we provide the methods to QUANTIfy their comprehensibility and likeability and look at how similar they are according to different measures.
have fun
annette & art
 
Arthur M Jacobs
added an update
hi, here's a fun paper discussing how words and their associations can be turned into 'personalities', real or fictive. Hope you enjoy it!
cheers
art
 
Arthur M Jacobs
added an update
hi folks,
if you wanna learn more about how children's brains respond to positive affective words (a neural positivity superiority effect), read this freshly published paper!
 
Arthur M Jacobs
added an update
hey folks,
after many years of hard interdisciplinary work we finally reaped what we sowed: the FAM or Foregrounding Assessment Matrix. It's still work in progress, but a useful tool to start with serious Q2NA (qualitative-quantitative narrative analysis), the only promising approach to a better scientific understanding of complex literary texts.
enjoy
art
 
Markus J. Hofmann
added a research item
The corpus, from which a predictive language model is trained, can be considered the experience of a semantic system. We recorded everyday reading of two participants for two months on a tablet, generating individual corpus samples of 300/500K tokens. Then we trained word2vec models from individual corpora and a 70 million-sentence newspaper corpus to obtain individual and norm-based long-term memory structure. To test whether individual corpora can make better predictions for a cognitive task of long-term memory retrieval, we generated stimulus materials consisting of 134 sentences with uncorrelated individual and norm-based word probabilities. In a subsequent eye tracking study 1-2 months later, regression analyses revealed that individual, but not norm-corpus-based, word probabilities account for first-fixation duration and first-pass gaze duration. Word length additionally affected gaze duration and total viewing duration. The results suggest that corpora representative for an individual’s long-term memory structure can better explain reading performance than a norm corpus, and that recently acquired information is lexically accessed rapidly.
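The logic of relating corpus-derived word probabilities to viewing times can be sketched as follows. The mini "individual corpus" and gaze durations here are invented, and probabilities are estimated from raw frequencies rather than the word2vec-based models the study actually used:

```python
import math
from collections import Counter

def log_probs(tokens):
    """Maximum-likelihood log word probabilities from a token list."""
    counts = Counter(tokens)
    total = len(tokens)
    return {w: math.log(c / total) for w, c in counts.items()}

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical individual reading diet and gaze durations (ms).
individual = "the cat sat on the mat the cat slept".split()
lp = log_probs(individual)
words = ["the", "cat", "mat"]
gaze = [180, 210, 260]  # invented: rarer words read longer
r = pearson([lp[w] for w in words], gaze)  # strongly negative
```

A negative correlation of this kind, i.e. higher individual word probability going with shorter fixations, is the pattern the study reports for first-fixation and gaze durations.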
Arthur M Jacobs
added an update
hi folks,
finally a new 'positive', i.e. Pollyannaish, piece to chew on in this project; have fun!
 
Arthur M Jacobs
added an update
Hi again,
here's something to check your own personality assessment of Harry Potter characters against...
have fun!
 
Arthur M Jacobs
added an update
Hi everybody,
there's been little movement in the project for some time, but now we restart with a very nice analysis of readers' eye movements while enjoying one of the most famous French poems: Baudelaire's 'Les chats'.
 
Markus J. Hofmann
added a research item
While word frequency and predictability effects have been examined extensively, any evidence on interactive effects as well as parafoveal influences during whole sentence reading remains inconsistent and elusive. Novel neuroimaging methods utilize eye movement data to account for the hemodynamic responses of very short events such as fixations during natural reading. In this study, we used the rapid sampling frequency of near-infrared spectroscopy (NIRS) to investigate neural responses in the occipital and orbitofrontal cortex to word frequency and predictability. We observed increased activation in the right ventral occipital cortex when the fixated word N was of low frequency, which we attribute to an enhanced cost during saccade planning. Importantly, unpredictable (in contrast to predictable) low frequency words increased the activity in the left dorsal occipital cortex at the fixation of the preceding word N-1, presumably due to an upcoming breach of top-down modulated expectation. Opposite to studies that utilized a serial presentation of words (e.g. Hofmann et al., 2014), we did not find such an interaction in the orbitofrontal cortex, implying that top-down timing of cognitive subprocesses is not required during natural reading. We discuss the implications of an interactive parafoveal-on-foveal effect for current models of eye movement.
Markus J. Hofmann
added a research item
Computational models are often challenged to explain empirical findings while remaining biologically plausible. A recent interactive activation model, the associative read-out model (AROM), uses computationally calculated word co-occurrences to predict semantic processing during visual word recognition. Its semantic layer is hierarchically interconnected to orthographic word form, letter and visual feature layers, therefore proposing connectivity from the inferior frontal gyrus along the ventral visual stream. Direct empirical evidence for its connectivity assumptions is so far missing. In this study, we employed psychophysiological interaction analysis on the neuroimaging data of a semantic priming experiment, targeting the left inferior frontal gyrus (LIFG) as main region to resolve semantic conflicts. We further manipulated the prime and target word by their direct association and semantic similarity. At a low semantic similarity, we observed increased functional connectivity of the LIFG to the fusiform gyrus, the hippocampus, the anterior cingulate cortex and the orbitofrontal cortex, indicating a connective pattern analogous to the semantic layer of the AROM. Surprisingly, a high direct association showed no influence on brain activation, which raises the question about the diverging cognitive processes of the two priming types.
Arthur M Jacobs
added an update
Hi folks,
to start the golden twenties, here's a new piece on SentiArt testing its predictive validity.
have a happy year
art
 
Arthur M Jacobs
added a research item
As a part of a larger interdisciplinary project on Shakespeare sonnets’ reception (Jacobs et al., 2017; Xue et al., 2017), the present study analyzed the eye movement behavior of participants reading three of the 154 sonnets as a function of seven lexical features extracted via Quantitative Narrative Analysis (QNA). Using a machine learning-based predictive modeling approach, five ‘surface’ features (word length, orthographic neighborhood density, word frequency, orthographic dissimilarity and sonority score) were detected as important predictors of total reading time and fixation probability in poetry reading. The fact that one phonological feature, i.e., sonority score, also played a role is in line with current theorizing on poetry reading. Our approach opens new ways for future eye movement research on reading poetic texts and other complex literary materials (cf. Jacobs, 2015c).
Arthur M Jacobs
added an update
If you want to compute the EXTRAVERSION or EMOTIONAL INSTABILITY scores for your preferred character from 'Harry Potter', 'GoT' or Homer's Iliad, then here's a new promising sentiment analysis tool (SentiArt): https://techxplore.com/news/2019-07-sentiart-sentiment-analysis-tool-profiling.html
 
Arthur M Jacobs
added an update
here's the newest on how the brain processes figurative language!
 
Arthur M Jacobs
added an update
hi folks,
it's finally accepted, the presumably 1st machine learning-assisted QNA-based predictive modeling paper on eye movements (in poetry reading)! We authors really hope that more eye tracking studies using longer natural texts will follow. The time is ripe for this and the computational methods are ready to assist.
 
Arthur M Jacobs
added a research item
This perspective paper discusses four general desiderata of current computational stylistics and (neuro-)cognitive poetics concerning the development of (a) appropriate databases/training corpora, (b) advanced qualitative-quantitative narrative analysis (Q2NA) and machine learning tools for feature extraction, (c) ecologically valid literary test materials, and (d) open-access reader-response data banks.
Arthur M Jacobs
added an update
Hi folks,
anyone interested in my new blog presenting work in progress on Computational Poetics can now go to: sentiart.de
You can find examples regarding the computation of the Emotion, Immersion and Aesthetic Potential of literary texts or the Emotional Figure Profile and Figure Personality Profile of main characters in novels.
enjoy!
 
Arthur M Jacobs
added an update
Do Words Stink? Neural Reuse as a Principle for Understanding Emotions in Reading
Johannes C. Ziegler, Marie Montant, Benny B. Briesemeister, Tila T. Brink, Bruno Wicker, Aurélie Ponz, Mireille Bonnard, Arthur M. Jacobs, and Mario Braun
Journal of Cognitive Neuroscience, 30(7), pp. 1023–1032, doi:10.1162/jocn_a_01268
 
Arthur M Jacobs
added 2 research items
What determines human ratings of association? We planned this paper as a test for association strength (AS) that is derived from the log likelihood that two words co‐occur significantly more often together in sentences than is expected from their single word frequencies. We also investigated the moderately correlated interactions of word frequency, emotional valence, arousal, and imageability of both words (r's ≤ .3). In three studies, linear mixed effects models revealed that AS and valence reproducibly account for variance in the human ratings. To understand further correlated predictors, we conducted a hierarchical cluster analysis and examined the predictors of four clusters in competitive analyses: Only AS and word2vec skip‐gram cosine distances reproducibly accounted for variance in all three studies. The other predictors of the first cluster (number of common associates, (positive) point‐wise mutual information, and word2vec CBOW cosine) did not reproducibly explain further variance. The same was true for the second cluster (word frequency and arousal); the third cluster (emotional valence and imageability); and the fourth cluster (consisting of joint frequency only). Finally, we discuss emotional valence as an important dimension of semantic space. Our results suggest that a simple definition of syntagmatic word contiguity (AS) and a paradigmatic measure of semantic similarity (skip‐gram cosine) provide the most general performance‐independent explanation of association ratings.
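Association strength of the kind described above is commonly computed as a log-likelihood ratio (Dunning's G²) over a 2x2 co-occurrence table; here is a minimal sketch with invented counts, not the corpus statistics used in the studies:

```python
import math

def g2(k11, k12, k21, k22):
    """Dunning's log-likelihood ratio for a 2x2 co-occurrence table.
    k11: sentences containing both words; k12/k21: only the first/second
    word; k22: sentences containing neither word."""
    n = k11 + k12 + k21 + k22
    r1, r2 = k11 + k12, k21 + k22   # row marginals
    c1, c2 = k11 + k21, k12 + k22   # column marginals
    # Expected counts under independence of the two words.
    e11, e12 = r1 * c1 / n, r1 * c2 / n
    e21, e22 = r2 * c1 / n, r2 * c2 / n

    def ll(k, e):
        return k * math.log(k / e) if k > 0 else 0.0

    return 2 * (ll(k11, e11) + ll(k12, e12) + ll(k21, e21) + ll(k22, e22))

# Invented counts: two words co-occurring far above chance ...
strong = g2(30, 70, 70, 9830)
# ... versus exactly at chance level (observed == expected).
chance = g2(1, 99, 99, 9801)
```

Word pairs with a high G² co-occur in sentences far more often than their single-word frequencies predict, which is the syntagmatic contiguity measure (AS) tested against the association ratings above.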
In this theoretical note we compare different types of computational models of word similarity and association in their ability to predict a set of about 900 rating data. Using regression and predictive modeling tools (neural net, decision tree) the performance of a total of 28 models using different combinations of both surface and semantic word features is evaluated. The results present evidence for the hypothesis that word similarity ratings are based on more than only semantic relatedness. The limited cross-validated performance of the models asks for the development of psychological process models of the word similarity rating task.
Arash Aryani
added a research item
A similarity between the form and meaning of a word (i.e., iconicity) may help language users to more readily access its meaning through direct form-meaning mapping. Previous work has supported this view by providing empirical evidence for this facilitatory effect in sign language, as well as for onomatopoetic words (e.g., cuckoo) and ideophones (e.g., zigzag). Thus, it remains largely unknown whether the beneficial role of iconicity in making semantic decisions can be considered a general feature in spoken language applying also to “ordinary” words in the lexicon. By capitalizing on the affective domain, and in particular arousal, we organized words in two distinctive groups of iconic vs. non-iconic based on the congruence vs. incongruence of their lexical (meaning) and sublexical (sound) arousal. In a two-alternative forced choice task, we asked participants to evaluate the arousal of printed words that were lexically either high or low arousing. In line with our hypothesis, iconic words were evaluated more quickly and more accurately than their non-iconic counterparts. These results indicate a processing advantage for iconic words, suggesting that language users are sensitive to sound-meaning mappings even when words are presented visually and read silently.
Arthur M Jacobs
added 2 research items
The development of theories and computational models of reading requires an understanding of processing constraints, in particular of timelines related to word recognition and oculomotor control. Timelines of word recognition are usually determined with event-related potentials (ERPs) recorded under conditions of serial visual presentation (SVP) of words; timelines of oculomotor control are derived from parameters of eye movements (EMs) during natural reading. We describe two strategies to integrate these approaches. One is to collect ERPs and EMs in separate SVP and natural reading experiments for the same experimental material (but different subjects). The other strategy is to co-register EMs and ERPs during natural reading from the same subjects. Both strategies yield data that allow us to determine how lexical properties influence ERPs (e.g., the N400 component) and EMs (e.g., fixation durations) across neighboring words. We review our recent research on the effects of frequency and predictability of words on both EM and ERP measures with reference to current models of eye-movement control during reading. Results are in support of the proposition that lexical access is distributed across several fixations and across brain-electric potentials measured on neighboring words.
The long history of poetry and the arts, as well as recent empirical results suggest that the way a word sounds (e.g., soft vs. harsh) can convey affective information related to emotional responses (e.g., pleasantness vs. harshness). However, the neural correlates of the affective potential of the sound of words remain unknown. In an fMRI study involving passive listening, we focused on the affective dimension of arousal and presented words organized in two discrete groups of sublexical (i.e., sound) arousal (high vs. low), while controlling for lexical (i.e., semantic) arousal. Words sounding high arousing, compared to their low arousing counterparts, resulted in an enhanced BOLD signal in bilateral posterior insula, the right auditory and premotor cortex, and the right supramarginal gyrus. This finding provides first evidence on the neural correlates of affectivity in the sound of words. Given the similarity of this neural network to that of nonverbal emotional expressions and affective prosody, our results support a unifying view that suggests a core neural network underlying any type of affective sound processing.
Arthur M Jacobs
added an update
Answers to the above question can be found in a series of three papers by Arash Aryani that all investigate the secrets of sound-meaning interaction at the basic (sub-)lexical level. These findings are highly relevant for the development of the Neurocognitive Poetics Model of literary reading and a better (quantitative) understanding of Jakobson's famous 'poetic function'.
 
Arthur M Jacobs
added 2 research items
To investigate whether second language processing is characterized by the same sensitivity to the emotional content of language - as compared to native language processing - we conducted an EEG study manipulating word emotional valence in a visual lexical decision task. Two groups of late bilinguals - native speakers of German and Spanish with sufficient proficiency in their respective second language - performed each a German and a Spanish version of the task containing identical semantic material: translations of words in the two languages. In contrast to theoretical proposals assuming attenuated emotionality of second language processing, a highly similar pattern of results was obtained across L1 and L2 processing: event related potential waves generally reflected an early posterior negativity plus a late positive complex for words with positive or negative valence compared to neutral words regardless of the respective test language and its L1 or L2 status. These results suggest that the coupling between cognition and emotion does not qualitatively differ between L1 and L2 although latencies of respective effects differed about 50-100 ms. Only Spanish native speakers currently living in the L2 country showed no effects for negative as compared to neutral words presented in L2 - potentially reflecting a predominant positivity bias in second language processing when currently being exposed to a new culture.
There is an ongoing debate whether deaf individuals access phonology when reading, and if so, what impact the ability to access phonology might have on reading achievement. However, the debate so far has been theoretically unspecific on two accounts: (a) the phonological units deaf individuals may have of oral language have not been specified and (b) there seem to be no explicit cognitive models specifying how phonology and other factors operate in reading by deaf individuals. We propose that deaf individuals have representations of the sublexical structure of oral–aural language which are based on mouth shapes and that these sublexical units are activated during reading by deaf individuals. We specify the sublexical units of deaf German readers as 11 “visemes” and incorporate the viseme set into a working model of single-word reading by deaf adults based on the dual-route cascaded model of reading aloud by Coltheart, Rastle, Perry, Langdon, and Ziegler (2001. DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108, 204–256. doi: 10.1037//0033-295x.108.1.204). We assessed the indirect route of this model by investigating the “pseudo-homoviseme” effect using a lexical decision task in deaf German reading adults. We found a main effect of pseudo-homovisemy, suggesting that at least some deaf individuals do automatically access sublexical structure during single-word reading.
Arthur M Jacobs
added a research item
Emotions guide our actions and have a profound influence on how we approach, experience and later remember information. This chapter provides an overview of the role of emotions in reading and describes how emotional processes differ across digital and print texts. After introducing what emotions are and how they can be measured, we discuss the influence of text genre on inducing emotions. We further explore the effects of digital texts on the emotional experiences in narrative and expository text reading. We then consider the importance of the reader’s motivational orientation, as it may influence preferences for digital vs. print texts. Finally, we discuss the impact of the communicative and collaborative aspects of digital reading environments on the emotions emerging during reading. It is evident that little is known about the impact of digitalisation on emotional processes during reading, and more theory-based research is needed.
Arthur M Jacobs
added 2 research items
Background / Purpose: Emotion regulation has been mostly investigated by taking an individualistic point of view emphasizing voluntary control mechanisms such as cognitive reappraisal or suppression. In the present study, we aimed at investigating effects of empathic comments in the context of extrinsic emotion regulation (e.g., when applying the techniques of paraphrasing or active listening in client-centered therapy). Main conclusion: The results of our study provide evidence for a neural basis of extrinsic/interpersonal emotion regulation mechanisms.First, we showed a modulation of neural activity in emotion processing brain regions following empathic comments given by a conversational partner.Second, receiving verbal comments expressing empathic concern (emotional empathy) involves different neural networks and processes than receiving comments reflecting perspective taking (cognitive empathy).Third, the results indicate that most brain regions involved in feeling empathy are also active when one is being treated in an empathic manner.
In this article we investigate structural differences between “literary” metaphors created by renowned poets and “nonliterary” ones imagined by non-professional authors from Katz et al.’s 1988 corpus. We provide data from quantitative narrative analyses (QNA) of the altogether 464 metaphors on over 70 variables, including surface features like metaphor length, phonological features like sonority score, or syntactic-semantic features like sentence similarity. In a first computational study using machine learning tools (i.e., a classifier of the decision tree family) we show that Katz et al.’s literary metaphors can be successfully discriminated from their nonliterary ones on the basis of response measures (10 ratings), in particular the ratings for familiarity, ease of interpretation, semantic relatedness, and comprehensibility. A second computational study then shows that the classifier can reliably detect and predict between-group differences on the basis of five QNA features generalizing from a training to a test corpus. Our results shed light on surface and semantic features that co-determine the reception of metaphors and raise important questions about their literariness, aptness or poetic potential. They tentatively suggest a set of 11 features that could influence the “literariness” of metaphors, including their sonority score, length and surprisal value.
Arthur M Jacobs
added 2 research items
How do we understand the emotional content of written words? Here, we investigate the hypothesis that written words that carry emotions are processed through phylogenetically ancient neural circuits that are involved in the processing of the very same emotions in nonlanguage contexts. This hypothesis was tested with respect to disgust. In an fMRI experiment, it was found that the same region of the left anterior insula responded whether people observed facial expressions of disgust or whether they read words with disgusting content. In a follow-up experiment, it was found that repetitive TMS over the left insula in comparison with a control site interfered with the processing of disgust words to a greater extent than with the processing of neutral words. Together, the results support the hypothesis that the affective processes we experience when reading rely on the reuse of phylogenetically ancient brain structures that process basic emotions in other domains and species.
Contents: Introduction; A two-dimensional affective space: valence and arousal effects in word processing; Higher-dimensional affective space: a role for discrete emotions in word processing?; A direct comparison of the affective space models; References
Arthur M Jacobs
added a research item
Our life is full of stories: some of them depict real-life events and were reported, e.g. in the daily news or in autobiographies, whereas other stories, as often presented to us in movies and novels, are fictional. However, we have only little insights in the neurocognitive processes underlying the reading of factual as compared to fictional contents. We investigated the neurocognitive effects of reading short narratives, labeled to be either factual or fictional. Reading in a factual mode engaged an activation pattern suggesting an action-based reconstruction of the events depicted in a story. This process seems to be past-oriented and leads to shorter reaction times at the behavioral level. In contrast, the brain activation patterns corresponding to reading fiction seem to reflect a constructive simulation of what might have happened. This is in line with studies on imagination of possible past or future events.
Arthur M Jacobs
added 2 research items
This paper describes a corpus of about 3,000 English literary texts with about 250 million words extracted from the Gutenberg project that span a range of genres from both fiction and non-fiction written by more than 130 authors (e.g., Darwin, Dickens, Shakespeare). Quantitative narrative analysis (QNA) is used to explore a cleaned subcorpus, the Gutenberg English Poetry Corpus (GEPC), which comprises over 100 poetic texts with around two million words from about 50 authors (e.g., Keats, Joyce, Wordsworth). Some exemplary QNA studies show author similarities based on latent semantic analysis, significant topics for each author or various text-analytic metrics for George Eliot’s poem “How Lisa Loved the King” and James Joyce’s “Chamber Music,” concerning, e.g., lexical diversity or sentiment analysis. The GEPC is particularly suited for research in Digital Humanities, Computational Stylistics, or Neurocognitive Poetics, e.g., as training and test corpus for stimulus development and control in empirical studies.
Arthur M Jacobs
added 4 research items
An alphabetic decision task was used to study effects of form priming on letter recognition at very short prime durations (20 to 80 msec). The task required subjects to decide whether a stimulus was a letter or a nonletter. Experiment 1 showed clear facilitatory effects of primes being either physically or nominally identical to the targets, with a stable advantage for the former. Experiment 2 demonstrated that uppercase letters are classified more rapidly as letters (vs. non-letters) when they are preceded by a briefly exposed, forward- and backward-masked, visually similar uppercase letter than when they are preceded by a visually dissimilar uppercase letter. Finally, Experiment 3 demonstrated that nominally identical and visually similar primes facilitate processing more than do nominally identical, visually dissimilar primes. The alphabetic decision task proved to produce sensitive and stable priming effects at the feature, letter, and response-choice level. The present results on letter-letter priming thus constitute a solid data base against which to evaluate other priming effects, such as word-letter priming. The results are discussed in light of current activation models of letter and word recognition and are compared with data simulated by the interactive activation model (McClelland & Rumelhart, 1981).
The theoretical “difficulty in separating association strength from [semantic] feature overlap” has resulted in inconsistent findings of either the presence or absence of “pure” associative priming in recent literature (Hutchison, 2003, Psychonomic Bulletin & Review, 10(4), p. 787). The present study used co-occurrence statistics of words in sentences to provide a full factorial manipulation of direct association (strong/no) and the number of common associates (many/no) of the prime and target words. These common associates were proposed to serve as semantic features for a recent interactive activation model of semantic processing (i.e., the associative read-out model; Hofmann & Jacobs, 2014). With stimulus onset asynchrony (SOA) as an additional factor, our findings indicate that associative and semantic priming are indeed dissociable. Moreover, the effect of direct association was strongest at a long SOA (1,000 ms), while many common associates facilitated lexical decisions primarily at a short SOA (200 ms). This response pattern is consistent with previous performance-based accounts and suggests that associative and semantic priming can be evoked by computationally determined direct and common associations.
Arthur M Jacobs
added an update
I hope this corpus helps advance the Neurocognitive Poetics project and motivates follow-ups in other languages!
 
Arthur M Jacobs
added 2 research items
A quantitative, coordinate-based meta-analysis combined data from 354 participants across 22 fMRI studies and one positron emission tomography (PET) study to identify the differences in neural correlates of figurative and literal language processing, and to investigate the role of the right hemisphere (RH) in figurative language processing. Studies that reported peak activations in standard space contrasting figurative vs. literal language processing at whole brain level in healthy adults were included. The left and right IFG, large parts of the left temporal lobe, the bilateral medial frontal gyri (medFG) and an area around the left amygdala emerged for figurative language processing across studies. Conditions requiring exclusively literal language processing did not activate any selective regions in most cases; when they did, activation was found in the cuneus/precuneus, right MFG and the right IPL. No general RH advantage for metaphor processing could be found. On the contrary, significant clusters of activation for metaphor conditions were mostly lateralized to the left hemisphere (LH). Subgroup comparisons between experiments on metaphors, idioms, and irony/sarcasm revealed shared activations in left frontotemporal regions for idiom and metaphor processing. Irony/sarcasm processing was correlated with activations in midline structures such as the medFG, ACC and cuneus/precuneus. To test the graded salience hypothesis (GSH, Giora, 1997), novel metaphors were contrasted against conventional metaphors. In line with the GSH, RH involvement was found for novel metaphors only. Here we show that more analytic, semantic processes are involved in metaphor comprehension, whereas irony/sarcasm comprehension involves theory of mind processes.
The comprehension of stories requires the reader to imagine the cognitive and affective states of the characters. The content of many stories is unpleasant, as they often deal with conflict, disturbance or crisis. Nevertheless, unpleasant stories can be liked and enjoyed. In this fMRI study, we used a parametric approach to examine (1) the capacity of increasing negative valence of story contents to activate the mentalizing network (cognitive and affective theory of mind, ToM), and (2) the neural substrate of liking negatively valenced narratives. A set of 80 short narratives was compiled, ranging from neutral to negative emotional valence. For each story mean rating values on valence and liking were obtained from a group of 32 participants in a prestudy, and later included as parametric regressors in the fMRI analysis. Another group of 24 participants passively read the narratives in a three Tesla MRI scanner. Results revealed a stronger engagement of affective ToM-related brain areas with increasingly negative story valence. Stories that were unpleasant, but simultaneously liked, engaged the medial prefrontal cortex (mPFC), which might reflect the moral exploration of the story content. Further analysis showed that the more the mPFC becomes engaged during the reading of negatively valenced stories, the more coactivation can be observed in other brain areas related to the neural processing of affective ToM and empathy.
Arthur M Jacobs
added 21 research items
In contrast to standard models of emotional valence, which assume a bipolar valence dimension ranging from negative to positive valence with a neutral midpoint, the evaluative space model (ESM) proposes two independent positivity and negativity dimensions. Previous imaging studies suggest higher predictive power of the ESM when investigating the neural correlates of verbal stimuli. The present study investigates further assumptions on the behavioral level. A rating experiment on more than 600 German words revealed 48 emotionally ambivalent stimuli (i.e., stimuli with high scores on both ESM dimensions), which were contrasted with neutral stimuli in two subsequent lexical decision experiments. Facilitative processing for emotionally ambivalent words was found in Experiment 2. In addition, controlling for emotional arousal and semantic ambiguity in the stimulus set, Experiment 3 still revealed a speed-accuracy trade-off for emotionally ambivalent words. Implications for future investigations of lexical processing and for the ESM are discussed.
It is argued that although the present target article falls short of presenting a computational theory in the sense of Marr (1982), its potential for the important task of constructing multilevel, multitask models of higher cognitive functions is considerable. Word recognition is taken as an example illustrating the point.
Arthur M Jacobs
added 4 research items
This paper presents a neuroscientific study of aesthetic judgments on written texts. In an fMRI experiment participants read a number of proverbs without explicitly evaluating them. In a post-scan rating they rated each item for familiarity and beauty. These individual ratings were correlated with the functional data to investigate the neural correlates of implicit aesthetic judgments. We identified clusters in which BOLD activity was correlated with individual post-scan beauty ratings. This indicates that some spontaneous aesthetic evaluation takes place during reading, even if not required by the task. Positive correlations were found in the ventral striatum and in medial prefrontal cortex, likely reflecting the rewarding nature of sentences that are aesthetically pleasing. On the contrary, negative correlations were observed in the classic left frontotemporal reading network. Midline structures and bilateral temporo-parietal regions correlated positively with familiarity, suggesting a shift from the task-network towards the default network with increasing familiarity.
Neuroscientific investigations regarding aspects of emotional experiences usually focus on one stimulus modality (e.g., pictorial or verbal). Similarities and differences in the processing between the different modalities have rarely been studied directly. The comparison of verbal and pictorial emotional stimuli often reveals a processing advantage of emotional pictures in terms of larger or more pronounced emotion effects evoked by pictorial stimuli. In this study, we examined whether this picture advantage refers to general processing differences or whether it might partly be attributed to differences in visual complexity between pictures and words. We first developed a new stimulus database comprising valence and arousal ratings for more than 200 concrete objects representable in different modalities including different levels of complexity: words, phrases, pictograms, and photographs. Using fMRI we then studied the neural correlates of the processing of these emotional stimuli in a valence judgment task, in which the stimulus material was controlled for differences in emotional arousal. No superiority for the pictorial stimuli was found in terms of emotional information processing with differences between modalities being revealed mainly in perceptual processing regions. While visual complexity might partly account for previously found differences in emotional stimulus processing, the main existing processing differences are probably due to enhanced processing in modality specific perceptual regions. We would suggest that both pictures and words elicit emotional responses with no general superiority for either stimulus modality, while emotional responses to pictures are modulated by perceptual stimulus features, such as picture complexity.
Arthur M Jacobs
added 3 research items
Facial expressions are used by humans to convey various types of meaning in various contexts. The range of meanings spans basic possibly innate socio-emotional concepts such as "surprise" to complex and culture specific concepts such as "carelessly." The range of contexts in which humans use facial expressions spans responses to events in the environment to particular linguistic constructions within sign languages. In this mini review we summarize findings on the use and acquisition of facial expressions by signers and present a unified account of the range of facial expressions used by referring to three dimensions on which facial expressions vary: semantic, compositional, and iconic.
Basic research has established a strong relationship between stimulus induced human motivation for approach-related behavior and left-frontal electrophysiological activity in the alpha band, i.e. frontal alpha asymmetry (FAA). Since approach motivation is also of interest for various fields of applied research, several recent studies investigated the usefulness of FAA as a diagnostic tool of stimulus induced motivational changes. The present review introduces the theory and the methods commonly used in approach/withdrawal motivation research, and summarizes work on applied FAA with a focus on product design, marketing, brain-computer communication and mental health studies, where approach motivation is of interest. Studies investigating and developing the application of FAA training in the treatment of affective disorders such as major depressive disorder and anxiety disorder are also introduced, highlighting some of the future possibilities.
We administered German and Spanish versions of the Neuroticism-Extraversion-Openness Five-Factor Inventory (NEO-FFI) to two groups of late bilinguals (second-language learners) of these two languages. Regardless of individuals' first language, both groups scored higher on Extraversion and Neuroticism when Spanish was the test language. In turn, scores on Agreeableness were higher when German was used as the test language. The results are interpreted as evidence for cultural frame shifts consistent with cultural norms associated with the presently used language. Beyond the acquisition of linguistic skills, learning a second language seems to provide individuals with a new range of perceiving and displaying their own personality.
Arthur M Jacobs
added 2 research items
Talking about emotion and putting feelings into words has been hypothesized to regulate emotion in psychotherapy as well as in everyday conversation. However, the exact dynamics of how different strategies of verbalization regulate emotion and how these strategies are reflected in characteristics of the voice has received little scientific attention. In the present study, we showed emotional pictures to 30 participants and asked them to verbally admit or deny an emotional experience or a neutral fact concerning the picture in a simulated conversation. We used a 2 × 2 factorial design manipulating the focus (on emotion or facts) as well as the congruency (admitting or denying) of the verbal expression. Analyses of skin conductance response (SCR) and voice during the verbalization conditions revealed a main effect of the factor focus. SCR and pitch of the voice were lower during emotion compared to fact verbalization, indicating lower autonomic arousal. In contradiction to these physiological parameters, participants reported that fact verbalization was more effective in down-regulating their emotion than emotion verbalization. These subjective ratings, however, were in line with voice parameters associated with emotional valence. That is, voice intensity showed that fact verbalization reduced negative valence more than emotion verbalization. In sum, the results of our study provide evidence that emotion verbalization as compared to fact verbalization is an effective emotion regulation strategy. Moreover, based on the results of our study we propose that different verbalization strategies influence valence and arousal aspects of emotion selectively.
This study investigates neural correlates of music-evoked fear and joy with fMRI. Studies on neural correlates of music-evoked fear are scant, and there are only a few studies on neural correlates of joy in general. Eighteen individuals listened to excerpts of fear-evoking, joy-evoking, as well as neutral music and rated their own emotional state in terms of valence, arousal, fear, and joy. Results show that BOLD signal intensity increased during joy, and decreased during fear (compared to the neutral condition) in bilateral auditory cortex (AC) and bilateral superficial amygdala (SF). In the right primary somatosensory cortex (area 3b) BOLD signals increased during exposure to fear-evoking music. While emotion-specific activity in AC increased with increasing duration of each trial, SF responded phasically in the beginning of the stimulus, and then SF activity declined. Psychophysiological Interaction (PPI) analysis revealed extensive emotion-specific functional connectivity of AC with insula, cingulate cortex, as well as with visual, and parietal attentional structures. These findings show that the auditory cortex functions as a central hub of an affective-attentional network that is more extensive than previously believed. PPI analyses also showed functional connectivity of SF with AC during the joy condition, taken to reflect that SF is sensitive to social signals with positive valence. During fear music, SF showed functional connectivity with visual cortex and area 7 of the superior parietal lobule, taken to reflect increased visual alertness and an involuntary shift of attention during the perception of auditory signals of danger.
Arthur M Jacobs
added 2 research items
Studies comparing verbal and pictorial stimuli with emotional content often revealed a picture advantage in terms of larger or more pronounced emotional valence effects evoked by pictorial stimuli. This picture advantage usually is accounted for by their heightened biological relevance compared to symbolic word stimuli. However, physical differences in terms of number of features and discriminability between lexical and pictorial stimuli might also account for this pattern. The present study used event-related potentials (ERPs) to examine the hypothesis that the picture advantage is associated with the pictures' heightened complexity compared to words. In a valence judgment task participants assessed the emotional impact of positive and neutral words and pictograms. It was expected that the differences in the emotion effects for these two types of stimulus modalities were diminished, as a result of the reduced complexity of the pictograms. The results show that both types of stimuli elicited significant and comparable positive-going emotional valence effects around 240–300 ms post-stimulus. However, around 340 ms after stimulus onset the valence effects evoked by pictograms were restricted to posterior regions and smaller in magnitude whereas those evoked by words were characterized by a larger and more widespread scalp distribution, possibly due to their heightened potential to exalt imagination. Furthermore, amplitudes in the late time windows evoked by pictograms over posterior regions were significantly more positive than ERP amplitudes evoked by words, suggesting that the processing of pictograms requires cognitive capacity and effort to a much greater extent than the processing of words. In conclusion, the previously reported picture superiority in emotion elicitation was not replicated using pictograms, suggesting that it can at least partially be explained by the pictures' heightened complexity and spatial distinctiveness.
While watching movies, the brain integrates the visual information and the musical soundtrack into a coherent percept. Multisensory integration can lead to emotion elicitation on which soundtrack valences may have a modulatory impact. Here, dynamic kissing scenes from romantic comedies were presented to 22 participants (13 females) during functional magnetic resonance imaging scanning. The kissing scenes were either accompanied by happy music, sad music or no music. Evidence from cross-modal studies motivated a predefined three-region network for multisensory integration of emotion, consisting of fusiform gyrus (FG), amygdala (AMY) and anterior superior temporal gyrus (aSTG). The interactions in this network were investigated using dynamic causal models of effective connectivity. This revealed bilinear modulations by happy and sad music with suppression effects on the connectivity from FG and AMY to aSTG. Non-linear dynamic causal modeling showed a suppressive gating effect of aSTG on fusiform–amygdalar connectivity. In conclusion, fusiform to amygdala coupling strength is modulated via feedback through aSTG as region for multisensory integration of emotional material. This mechanism was emotion-specific and more pronounced for sad music. Therefore, soundtrack valences may modulate emotion elicitation in movies by differentially changing preprocessed visual information to the amygdala.
Arthur M Jacobs
added an update
here's a recent piece I submitted to Frontiers in Digital Humanities in November, but they still haven't found an action editor. Since I'm already using this corpus in published research, I'm making it available here (and on arXiv). I hope it'll be of use for future research on Neurocognitive Poetics.
 
Arthur M Jacobs
added a research item
Many studies have shown that behavioral measures are affected by manipulating the imageability of words. Though imageability is usually measured by human judgment, little is known about what factors underlie those judgments. We demonstrate that imageability judgments can be largely or entirely accounted for by two computable measures that have previously been associated with imageability, the size and density of a word's context and the emotional associations of the word. We outline an algorithmic method for predicting imageability judgments using co-occurrence distances in a large corpus. Our computed judgments account for 58% of the variance in a set of nearly two thousand imageability judgments, for words that span the entire range of imageability. The two factors account for 43% of the variance in lexical decision reaction times (LDRTs) that is attributable to imageability in a large database of 3697 LDRTs spanning the range of imageability. We document variances in the distribution of our measures across the range of imageability that suggest that they will account for more variance at the extremes, from which most imageability-manipulating stimulus sets are drawn. The two predictors account for 100% of the variance that is attributable to imageability in newly-collected LDRTs using a previously-published stimulus set of 100 items. We argue that our model of imageability is neurobiologically plausible by showing it is consistent with brain imaging data. The evidence we present suggests that behavioral effects in the lexical decision task that are usually attributed to the abstract/concrete distinction between words can be wholly explained by objective characteristics of the word that are not directly related to the semantic distinction. We provide computed imageability estimates for over 29,000 words.
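The core of the approach above is an ordinary two-predictor regression: imageability judgments are modeled from a context measure and an emotional-association measure. The sketch below shows that regression step only, with invented toy feature values in place of the study's corpus-derived predictors; the `fit_ols` helper is our own minimal solver, not the authors' code.

```python
# Minimal sketch of the two-factor regression idea: predict imageability
# ratings from (1) a context-size measure and (2) an emotional-association
# measure. All numbers below are invented toy data.

def fit_ols(X, y):
    """Ordinary least squares for a small design matrix via normal equations."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    # Gauss-Jordan elimination with partial pivoting on the augmented matrix.
    A = [row[:] + [b] for row, b in zip(XtX, Xty)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(k):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][k] / A[i][i] for i in range(k)]

# Toy rows: [intercept, context_size, emotionality] -> imageability rating
X = [[1, 0.2, 0.1], [1, 0.5, 0.4], [1, 0.8, 0.2], [1, 0.9, 0.7], [1, 0.3, 0.6]]
y = [2.1, 3.4, 4.0, 4.9, 3.0]
b0, b1, b2 = fit_ols(X, y)
pred = [b0 + b1 * x1 + b2 * x2 for _, x1, x2 in X]
print([round(p, 2) for p in pred])
```

In the study itself the predictors come from co-occurrence distances in a large corpus; the variance-explained figures quoted above correspond to the R² of such fits.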
Arthur M Jacobs
added 4 research items
In this study, we verify the observation that signs for emotion related concepts are articulated with the congruent facial movements in German Sign Language using a corpus. We propose an account for the function of these facial movements in the language that also explains the function of mouthings and other facial movements at the lexical level. Our data, taken from 20 signers in three different conditions, show that for the disgust related signs, a disgust related facial movement with temporal scope only over the individual sign occurred in most cases. These movements often occurred in addition to disgust related facial movements that had temporal scope over the entire clause. Using the Facial Action Coding System, we found some variability in how exactly the facial movement was instantiated, but most commonly, it consisted of tongue protrusion and an open mouth. We propose that these lexically related facial movements be regarded as an additional layer of communication with both phonological and morphological properties, and we extend this proposal to mouthings as well. The relationship between this layer and manual lexical items is analogous in some ways to the gesture-word relationship, and the intonation-word relationship.
We present the German adaptation of the Affective Norms for English Words (ANEW; Bradley & Lang in Technical Report No. C-1. Gainesville: University of Florida, Center for Research in Psychophysiology). A total of 1,003 words (German translations of the ANEW material) were rated on a total of six dimensions: the classic ratings of valence, arousal, and dominance (as in the ANEW corpus) were extended with additional arousal ratings using a slightly different scale (see BAWL: Võ et al. in Behavior Research Methods 41: 531-538, 2009; Võ, Jacobs, & Conrad in Behavior Research Methods 38: 606-609, 2006), along with ratings of imageability and potency. Measures of several objective psycholinguistic variables (different types of word frequency counts, grammatical class, number of letters, number of syllables, and number of orthographic neighbors) for the words were also added, so as to further facilitate the use of this new database in psycholinguistic research. These norms can be downloaded as supplemental materials with this article.
Arthur M Jacobs
added 2 research items
The arbitrariness of the linguistic sign is a fundamental assumption in modern linguistic theory. In recent years, however, a growing amount of research has investigated the nature of non-arbitrary relations between linguistic sounds and semantics. This review aims at illustrating the amount of findings obtained so far and to organize and evaluate different lines of research dedicated to the issue of phonological iconicity. In particular, we summarize findings on the processing of onomatopoetic expressions, ideophones, and phonaesthemes, relations between syntactic classes and phonology, as well as sound-shape and sound-affect correspondences at the level of phonemic contrasts. Many of these findings have been obtained across a range of different languages suggesting an internal relation between sublexical units and attributes as a potentially universal pattern.
A dual read-out model of context effects in letter perception is described that predicts forced-choice accuracy in the Reicher paradigm and its relation to word reportability. It is hypothesized that a correct choice to a letter in a word context is made when either the correct letter representation or a word representation containing the correct letter in the correct position reaches a response threshold (a criterion level of activation). This hypothesis was implemented using the basic architecture of the interactive activation model (J. L. McClelland & D. E. Rumelhart, 1981) in its semistochastic variant (A. M. Jacobs and J. Grainger, 1992). The model successfully captures the data of J. C. Johnston (1978), otherwise thought to be critically damaging for this type of model, and accurately predicts performance in a series of new experiments using the Reicher paradigm.
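The dual read-out hypothesis above lends itself to a simple race simulation: a trial is scored from whichever channel, letter or word, first reaches the response criterion. The sketch below is a bare-bones stochastic race, not the semistochastic interactive activation implementation of the paper; drift rates, noise, and the criterion are illustrative values, not fitted parameters.

```python
# Minimal stochastic sketch of a dual read-out race: a forced choice is
# resolved by whichever accumulator (target-letter unit vs. word unit
# containing the letter) first reaches a response criterion.
# All parameter values are illustrative, not fitted.
import random

def trial(letter_drift, word_drift, criterion=1.0, noise=0.1, rng=random):
    """Race two noisy accumulators; return (winning channel, n_steps)."""
    letter_act = word_act = 0.0
    steps = 0
    while letter_act < criterion and word_act < criterion:
        letter_act += letter_drift + rng.gauss(0, noise)
        word_act += word_drift + rng.gauss(0, noise)
        steps += 1
    # Ties (both cross on the same step) are credited to the letter channel.
    return ("letter" if letter_act >= criterion else "word"), steps

random.seed(1)
outcomes = [trial(letter_drift=0.08, word_drift=0.03)[0] for _ in range(500)]
print(outcomes.count("letter") / len(outcomes))  # share won by the faster channel
```

Varying the two drift rates (e.g. boosting the word channel for high-frequency word contexts) is one way such a race can reproduce context effects on forced-choice accuracy.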
Arthur M Jacobs
added a research item
In this paper I would like to pave the ground for future studies in Computational Stylistics and (Neuro-)Cognitive Poetics by describing procedures for predicting the subjective beauty of words. A set of eight tentative word features is computed via Quantitative Narrative Analysis (QNA) and a novel metric for quantifying word beauty, the aesthetic potential, is proposed. Application of machine learning algorithms fed with this QNA data shows that a classifier of the decision tree family excellently learns to split words into beautiful vs. ugly ones. The results shed light on surface and semantic features theoretically relevant for affective-aesthetic processes in literary reading and generate quantitative predictions for neuroaesthetic studies of verbal materials.
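The simplest member of the decision-tree family is a one-split stump, which already conveys the idea of learning a beautiful/ugly split from word features. The sketch below uses two invented toy features and labels in place of the paper's eight QNA features, and a hand-rolled stump instead of the full classifier.

```python
# Minimal decision-stump sketch: pick the single feature/threshold split
# that best separates "beautiful" from "ugly" words. Features and labels
# below are invented toys, not the study's eight QNA features.

def best_stump(rows, labels):
    """Return (feature_index, threshold, accuracy) of the best 1-split rule."""
    n_feat = len(rows[0])
    best = (0, 0.0, 0.0)
    for f in range(n_feat):
        for t in sorted({r[f] for r in rows}):
            pred = [r[f] <= t for r in rows]
            acc = sum(p == l for p, l in zip(pred, labels)) / len(labels)
            acc = max(acc, 1 - acc)  # allow flipping the rule's direction
            if acc > best[2]:
                best = (f, t, acc)
    return best

# Toy features per word: [arousal, valence]; label True = "beautiful"
words  = ["shimmer", "melody", "sludge", "grit"]
rows   = [[0.3, 0.9], [0.2, 0.8], [0.6, 0.2], [0.7, 0.3]]
labels = [True, True, False, False]
f, t, acc = best_stump(rows, labels)
print(f, t, acc)  # -> 0 0.3 1.0 (low arousal separates the toy classes)
```

A full decision tree recursively applies such splits to the resulting subsets; the paper reports that a tree-family classifier over the eight QNA features separates the classes very well.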
Arthur M Jacobs
added an update
Arthur M Jacobs
added 2 research items
We investigated the interplay between arousal and valence in the early processing of affective words. Event-related potentials (ERPs) were recorded while participants read words organized in an orthogonal design with the factors valence (positive, negative, neutral) and arousal (low, medium, high) in a lexical decision task. We observed faster reaction times for words of positive valence and for those of high arousal. Data from ERPs showed increased early posterior negativity (EPN) suggesting improved visual processing of these conditions. Valence effects appeared for medium and low arousal and were absent for high arousal. Arousal effects were obtained for neutral and negative words but were absent for positive words. These results suggest independent contributions of arousal and valence at early attentional stages of processing. Arousal effects preceded valence effects in the ERP data suggesting that arousal serves as an early alert system preparing a subsequent evaluation in terms of valence.
Although poetry reception is often considered to be a highly emotional process, psychological research on emotional experiences when reading poetry is anything but abundant. Most research on poetry reception addresses the influence of poetic devices like meter and rhyme on readers' comprehension, appreciation, and the recollection of the poem. Empathic reactions and emotional involvement as described, for example, in studies on reading narratives are rarely discussed. Here, we propose that poetry is predisposed to induce a variety of different kinds of affective responses and feelings. In particular, we put forward the mood empathy hypothesis, according to which poems expressing moods of persons, situations, or objects should engage readers to mentally simulate and affectively resonate with the depicted state of affairs. We report evidence that this resonance can lead to the experience of the depicted mood itself, or some feeling closely associated with it, a process similar to empathy as a kind of Einfühlung or feeling in. Our results are interpreted in the larger frame of the neurocognitive poetics model of literary reading (Jacobs, 2011, 2014). In line with the model's postulate that backgrounding elements facilitate emotional involvement while foregrounding features promote aesthetic evaluation, we identified different predictors for both processes: familiarity and situational embedding were the main factors mediating mood empathy, and aesthetic liking was best predicted from foregrounding features like style and form. By supporting the mood empathy hypothesis, these findings open new perspectives for future studies on literary reading and poetics.
Arthur M Jacobs
added an update
Here's a freshly accepted piece in FNHUM (https://www.frontiersin.org/articles/10.3389/fnhum.2017.00622/abstract) that paves the ground for future studies in Computational Stylistics and (Neuro-)Cognitive Poetics by describing procedures for predicting the subjective beauty of words. A set of eight tentative word features is computed via Quantitative Narrative Analysis (QNA) and a novel metric for quantifying word beauty, the aesthetic potential, is proposed. The results shed light on surface and semantic features theoretically relevant for affective-aesthetic processes in literary reading and generate quantitative predictions for neuroaesthetic studies of verbal materials.
 
Arthur M Jacobs
added 6 research items
Immersion in reading, described as a feeling of 'getting lost in a book', is a ubiquitous phenomenon widely appreciated by readers. However, it has been largely ignored in cognitive neuroscience. According to the fiction feeling hypothesis, narratives with emotional contents invite readers to empathize with the protagonists more strongly, and thus engage the affective empathy network of the brain, the anterior insula and mid-cingulate cortex, more than do stories with neutral contents. To test the hypothesis, we presented participants with text passages from the Harry Potter series in a functional MRI experiment and collected post-hoc immersion ratings, comparing the neural correlates of passage mean immersion ratings when reading fear-inducing versus neutral contents. Results for the conjunction contrast of baseline brain activity of reading irrespective of emotional content against baseline were in line with previous studies on text comprehension. In line with the fiction feeling hypothesis, immersion ratings were significantly higher for fear-inducing than for neutral passages, and activity in the mid-cingulate cortex correlated more strongly with immersion ratings of fear-inducing than of neutral passages. Descriptions of protagonists' pain or personal distress featured in the fear-inducing passages apparently caused increasing involvement of the core structure of pain and affective empathy the more readers were immersed in the text. The predominant locus of effects in the mid-cingulate cortex seems to reflect that the immersive experience was particularly facilitated by the motor component of affective empathy for our stimuli from the Harry Potter series, which feature particularly vivid descriptions of the behavioural aspects of emotion.
Ever since Aristotle discussed the issue in Book II of his Rhetoric, humans have attempted to identify a set of “basic emotion labels”. In this paper we propose an algorithmic method for evaluating sets of basic emotion labels that relies upon computed co-occurrence distances between words in a 12.7-billion-word corpus of unselected text from USENET discussion groups. Our method uses the relationship between human arousal and valence ratings collected for a large list of words, and the co-occurrence similarity between each word and emotion labels. We assess how well the words in each of 12 emotion label sets—proposed by various researchers over the past 118 years—predict the arousal and valence ratings on a test and validation dataset, each consisting of over 5970 items. We also assess how well these emotion labels predict lexical decision residuals (LDRTs), after co-varying out the effects attributable to basic lexical predictors. We then demonstrate a generalization of our method to determine the most predictive “basic” emotion labels from among all of the putative models of basic emotion that we considered. As well as contributing empirical data towards the development of a more rigorous definition of basic emotions, our method makes it possible to derive principled computational estimates of emotionality—specifically, of arousal and valence—for all words in the language.
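The method described above rests on a simple core idea: a word's similarity to each putative basic-emotion label, computed from corpus co-occurrence vectors, serves as a predictor of its human valence and arousal ratings. A minimal sketch of that feature-extraction step, using tiny made-up co-occurrence vectors (the paper itself derives them from a 12.7-billion-word USENET corpus), might look like this; all names and values here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two co-occurrence vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy co-occurrence vectors (columns = arbitrary context dimensions).
# In the actual method these would be distributional vectors estimated
# from a very large corpus.
vectors = {
    "fear":    np.array([0.9, 0.1, 0.2]),
    "joy":     np.array([0.1, 0.9, 0.3]),
    "funeral": np.array([0.8, 0.2, 0.1]),
    "party":   np.array([0.2, 0.8, 0.4]),
}

# One candidate set of "basic emotion labels" under evaluation.
emotion_labels = ["fear", "joy"]

def label_similarities(word):
    """Similarity profile of a word against each emotion label."""
    return [cosine(vectors[word], vectors[lab]) for lab in emotion_labels]

# These similarity profiles would then enter a regression predicting
# human arousal/valence ratings (and LDRTs); here we only build features.
feats = {w: label_similarities(w) for w in ["funeral", "party"]}
```

Under this sketch, a label set is scored by how well such similarity profiles predict the rating data held out for testing; comparing that score across the 12 proposed label sets is what identifies the most predictive "basic" emotions.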
Arthur M Jacobs
added 2 research items
With the aim of improving ecological validity when studying real-life phenomena, research has increasingly employed more complex and realistic materials, from pictorial and verbal (e.g., movies vs. pictures, narratives vs. single words) to interactive or virtual settings. This article has the objective of understanding the emotional impact of these different types of media. It first summarizes neuroimaging findings on emotional processing, focusing on the development toward more realistic and complex materials. The presented literature shows that all media types, whether simple words or complex movies, may induce consistent emotional responses, mirrored in activations in core emotion regions. Regions related to the (embodied) simulation of another's bodily state, and to mentalizing, the cognitive representation of another's mental state, are particularly reported in response to more complex, narrative or social materials. Other media-specific responses are described in sensory or language brain regions, while dynamic and multimodal stimuli are reported to yield behavioral advantages together with increased emotional brain responses. Finally, the article discusses the role of immersive processes for emotional engagement in different media settings. The potential of different media types to make the viewer immerse into fictitious or artificial worlds is proposed as a crucial modulator for emotional responses in different media types, leading to the formulation of open questions and implications for future research on emotion processing.
Arthur M Jacobs
added 2 research items
Stories can elicit powerful emotions. A key emotional response to narrative plots (e.g., novels, movies, etc.) is suspense. Suspense appears to build on basic aspects of human cognition such as processes of expectation, anticipation, and prediction. However, the neural processes underlying emotional experiences of suspense have not been previously investigated. We acquired functional magnetic resonance imaging (fMRI) data while participants read a suspenseful literary text (E.T.A. Hoffmann's "The Sandman") subdivided into short text passages. Individual ratings of experienced suspense obtained after each text passage were found to be related to activation in the medial frontal cortex, bilateral frontal regions (along the inferior frontal sulcus), lateral premotor cortex, as well as posterior temporal and temporo-parietal areas. The results indicate that the emotional experience of suspense depends on brain areas associated with social cognition and predictive inference.
Arthur M Jacobs