Language, Cognition and Neuroscience


Published by Taylor & Francis

Online ISSN: 2327-3801 · Print ISSN: 0169-0965


Top-read articles

How lexical frequency, language dominance and noise affect listening effort - insights from pupillometry

November 2024



Producing non-basic word orders in (in)felicitous contexts: Evidence from pupillometry and functional near-infrared spectroscopy (fNIRS)

September 2024

Keiyu Niikuni · Ruri Shimura · [...]

Aims and scope


Publishes studies of the brain and language, and language function and learning from a cognitive neuroscience perspective.

  • Language, Cognition and Neuroscience is an international peer-reviewed journal promoting integrated cognitive theoretical studies of language and its neural bases.
  • The journal takes an interdisciplinary approach to the study of brain and language, aiming to integrate excellent cognitive science and neuroscience to answer key questions about the nature of language and cognition in the mind and the brain.
  • It aims to engage researchers and practitioners alike in how to better understand cognitive language function, including: Language cognition; Neuroscience; Brain and language
  • The journal publishes high-quality, theoretically-motivated cognitive behavioral studies of language function, and papers which integrate cognitive theoretical accounts of language with…

For a full list of the subject areas this journal covers, please visit the journal website.

Recent articles


Sub-word orthographic processing and semantic activation as revealed by ERPs
  • Article

November 2024





Figure 1. Baseline corrected pupil response to high and low frequency words presented in quiet and in noise in L1 Hebrew and L2 English. Note: Baseline-corrected pupil response as a function of time averaged over all trials by listening condition, language (Hebrew L1, English L2), and lexical frequency. Frequency was divided into high and low based on a median split. The shaded areas represent the Standard Error of the Mean. A.u. = arbitrary units.
Figure 2. Baseline corrected pupil response to high and low frequency words presented in quiet in noise for the Arabic-Hebrew speakers in their L2 Hebrew. Note: Baseline-corrected pupil response as a function of time averaged over all trials by listening condition and lexical frequency. Frequency was divided into high and low based on a median split. Participants were Arabic native speakers tested in their L2 Hebrew. The shaded areas represent the Standard Error of the Mean.
Accuracy data for word repetition.
Results of the cross-validation and bin analysis.
How lexical frequency, language dominance and noise affect listening effort - insights from pupillometry
  • Article
  • Full-text available

November 2024


Acoustic, listener, and stimulus-related factors modulate speech-in-noise processes. This study examined how noise, listening experience (manipulated at two levels: native [L1] vs. second language [L2]) and lexical frequency impact listening effort. Forty-seven participants, tested in their L1 Hebrew and L2 English, completed a word recognition test in quiet and noisy conditions while pupil size was recorded to assess listening effort. Results showed that listening in L2 was overall more effortful than in L1, with frequency effects modulated by language and noise. In L1, pupil responses to high and low frequency words were similar in both conditions. In L2, low frequency words elicited a larger pupil response, indicating greater effort, but this effect vanished in noise. A time-course analysis of the pupil response suggests that L1-L2 processing differences occur during lexical selection, indicating that L2 listeners may struggle to match acoustic-phonetic signals to long-term memory representations.



Neural correlates of masked priming: Only morphologically derived words facilitate lexical decisions

November 2024


The visual processing of morphologically complex words has been studied for decades. One influential account proposes initial sublexical parsing, based on surface structure, before semantic information comes in (form-first models; Rastle & Davis, 2008, Morphological decomposition based on the analysis of orthography. Language and Cognitive Processes, 23(7-8), 942–971. https://doi.org/10.1080/01690960802069730). We tested this account in German, a morphologically rich language, in a masked-priming lexical decision study with (pseudo-)derived German words. Behavioural data showed masked priming for truly morphologically complex primes (e.g. farmer – FARM), but not for pseudo-complex (e.g. corner – CORN) or merely form-related primes (e.g. cartel – CAR). MEG data revealed an early sensitivity to form (140–330 ms), followed by a differentiation between derived and other primes (260–480 ms), challenging the notion of early blind decomposition. The results align with the AUSTRAL model (Taft, 2023, Localist lexical representation of polymorphemic words. In D. Crepaldi (Ed.), Linguistic morphology in the mind and brain (1st ed., pp. 152–166). Routledge. https://doi.org/10.4324/9781003159759-11) and suggest that German speakers have a heightened sensitivity to morpho-semantic information, unlike English speakers, likely due to differences in morphological complexity between the two languages.


Exploring the flexibility of word position encoding in Chinese reading: the role of transposition effects

October 2024


Previous research in the alphabetic writing system has demonstrated that transposition distance (adjacent vs. non-adjacent) modulates word position encoding during sentence reading. To examine the generality of this pattern within a more holistic model of sentence processing, we investigated this effect in the logographic Chinese writing system. We manipulated the number of words intervening between the transposed words, creating four conditions: none, one, two, and three intervening words. Participants performed a rapid grammaticality judgment task. Results showed longer response latencies and higher error rates when words were transposed adjacent to each other compared to non-adjacent transpositions. Furthermore, more errors and longer response times were observed when one word intervened between the transposed words compared to two or three intervening words. However, no significant differences emerged between the two- and three-word interval conditions. These findings suggest that word position encoding exhibits graded, flexible tolerance to transposition distance, constrained by proximity.


Producing non-basic word orders in (in)felicitous contexts: Evidence from pupillometry and functional near-infrared spectroscopy (fNIRS)

September 2024


The present study examined why speakers of languages with flexible word orders are more likely to use syntactically complex non-basic word orders when they provide discourse-given information earlier in sentences. This may be because such orders are more efficient for speakers to produce (the Speaker Economy Hypothesis). Alternatively, speakers may produce them to help listeners understand sentences more efficiently (the Listener Economy Hypothesis), given that previous studies showed that the processing of non-basic word orders was facilitated when a felicitous context was provided (i.e. when a displaced object refers to discourse-given information). We addressed this issue by conducting a picture-description experiment, in which participants uttered sentences with syntactically basic Subject-Object-Verb (SOV) or non-basic Object-Subject-Verb (OSV) order in felicitous or infelicitous contexts while cognitive load was tracked using pupillometry and functional near-infrared spectroscopy. The results showed that the felicitous context facilitated the filler-gap dependency formation of OSVs in production, supporting the Speaker Economy Hypothesis.


The prediction of segmental and tonal information in Mandarin Chinese: An eye-tracking investigation

August 2024


There is controversy about the extent to which people predict phonology during comprehension. In three visual-world experiments, we ask whether it occurs in Mandarin, a tonal language. Participants heard sentences containing a target word that was highly predictable (Cloze 80.2%, Experiment 1) or very highly predictable (Cloze 93.9%, Experiments 2-3) and saw an array of objects containing one whose name matched the target word (Experiments 1-2), was unrelated to the target word (Experiments 1-3), or matched the target word in segment and tone (Experiments 1-3), in segment only (Experiments 1-3), or tone only (Experiment 3). In comparison to the unrelated object, participants looked at the segment+tone object more (Experiments 1-3), and sometimes looked at the segment object more (Experiments 1 and 3), but there was no evidence that they looked at the tone object more. We conclude that participants predict segmental information, and that they do so independently of tone.


Figure 1. The pitch contours of the target sentences.
Figure 2. The mean acceptability of the sentences across conditions. Error bars represent the standard error of the mean. (cf. Table 1 for the summary of the conditions).
Figure 3. The illustration of each trial.
Figure 5: The time course of the mean pupil size.
The x-axis indicates time (ms) from the onset of the target sentence, while the y-axis indicates the relative change in pupil size (a.u). The shaded area of the lines shows standard errors. The horizontal lines on the bottom show time-windows with significant clusters of the main effect and interactions. The vertical lines show the mean onset of each phrase.
Figure 6: The time course of the mean pupil size in all conditions.
The x-axis indicates time (ms) from the onset of the target sentence, and the y-axis indicates the relative change in pupil size (a.u). The shaded area of the lines shows standard errors. The horizontal lines on the bottom show time-windows with significant clusters of the main effect and interactions. The vertical lines show the mean onset of each phrase. (cf. Table 1 for the summary of the conditions)
Role of prosody and word order in identifying focus: Evidence from pupillometry

August 2024


This study investigated the role of prosody and word order in identifying the focus of sentences in Japanese. Native Japanese speakers listened to sentences with different types of word order (subject–object–verb (SOV) vs. object–subject–verb (OSV)), prosody (whether the first noun phrase is stressed or not) and preceding contexts (object- vs. subject-wh questions), while processing costs were measured using pupillometry. Although syntactically non-basic OSV was more difficult to process than basic SOV, this processing difficulty was considerably reduced when the supportive context (the subject-wh question) required S to be focused. The time–course analysis of pupillometry revealed that the Japanese speakers immediately used prosodic cues to determine the focus of sentences, but the effect of word order cues for focus was delayed until the sentence-final verb was encountered. This study advances our understanding of the temporal dynamics of focus processing and the interplay between syntactic and information structures in sentence comprehension.


Generic masculine role nouns interfere with the neural processing of female referents: evidence from the P600

August 2024


The masculine form in German is used to refer to male people specifically and to people of any gender generically. While behavioural research has demonstrated that this dual function leads to male-biased responses, the neural underpinnings of this bias are still underexplored. In the present EEG study, we investigated how the presentation of generically intended masculine role nouns (vs. role nouns in the feminine–masculine and masculine–feminine pair form) affected the neural processing of references to men and women. Referring to women after generic masculine role nouns induced difficulties during perceptual processing in the P200 range and, crucially, also during high-level reference resolution, as indicated by an enhanced P600 amplitude over posterior sites. In contrast, no significant processing conflicts were observed after the pair form. These findings illuminate the neural consequences of grammatical gender and support the notion that the generic masculine does not represent different genders equally well.


Enhanced prosody adds to morpho-syntactic cues in the interpretation of structural ambiguities in German

May 2024


This study investigated the effects of syntactically marked and enhanced prosody on local ambiguity resolution in German SVO and OVS sentences. In a visual-world experiment, thirty younger and thirty elderly healthy participants performed a sentence-picture matching task. Response accuracy, reaction times and fixation proportions to the target picture were analysed using linear mixed models. We found no support for beneficial effects of syntactically marked prosody; however, results suggested a facilitative role of enhanced prosodic cues (i.e. increased f0 maximum) prior to the point of disambiguation in SVO structures, as well as beneficial effects of enhanced prosody adding to morpho-syntactic cues in OVS structures. Both age groups showed comparable cue use but inter-individual variability in prosodic cue processing. Overall, our study replicates and extends previous findings, demonstrating the importance of examining variability in prosodic cue processing in future research.


Working memory training yields improvements in L2 morphosyntactic processing

May 2024


The present study investigated whether working memory (WM) training enhances morphosyntactic processing in the second language (L2). L2 learners of Spanish in the treatment group completed WM updating pre/post tasks and a moving window task containing sentences with gender agreement. The treatment group additionally trained with two WM tasks for five consecutive days and completed a posttest, as well as a delayed posttest two months later. The results show that the treatment group presented transfer to untrained WM tasks at both posttests, while the control group did not. While neither group showed sensitivity to gender disagreement at pretest, the treatment group was sensitive to violations after training, and this improvement in morphosyntactic processing was sustained at the delayed posttest. The control group, however, remained insensitive at posttest. Taken together, the findings suggest that WM is a malleable system and that WM training may be used as a cognitive tool to facilitate L2 morphosyntax. Access to eprint: https://www.tandfonline.com/eprint/I2QKYBVXFWPGDU7ARKYF/full?target=10.1080/23273798.2024.2359560


Figure 1. 
a) Example of a target word and its corresponding distractors with decreasing visual resemblance. The letters inducing critical visual variations are underlined. b) Timeline description of a trial.
Figure 2. 
a) Mean percentage of responses for target words and the different distractors. Error bars correspond to a confidence interval of 95%. (b) Raincloud plot showing the mean percentage of accuracy for each participant for each word type. Boxplots and distribution density plots are shown for Target, Same Viseme, Different Viseme, and Different Vowel.
Figure 3 A) Response rates for target words and each type of distractor, where the red line represents chance level. B) The effect of the place of articulation on lip-reading performance and error rate, where the red line represents no effect; positive values represent a significant effect towards bilabial and negative values a significant effect towards alveolar places of articulation. The coloured areas represent the posterior distribution of the variables, obtained through Bayesian analysis using Markov Chain Monte Carlo (MCMC) methods. This distribution reflects the updated probability distribution of parameters after incorporating observed data, providing a probabilistic representation of the variable's uncertainty.
Sub-visemic discrimination and the effect of visual resemblance on silent lip-reading

May 2024


Relevant visual information available in speakers' faces during face-to-face interactions improves speech perception. There is an ongoing debate, however, about how phonemes and their visual counterparts, visemes, are mapped. An influential hypothesis claims that several phonemes can be mapped onto a single visemic category (many-to-one phoneme-viseme mapping). In contrast, recent findings have challenged this view, reporting evidence for sub-visemic syllable discrimination. We aimed to investigate whether Spanish words from the same visemic category can be identified. We designed a lip-reading task in which participants had to identify target words presented in silent video clips among 3 distractors differing in their visual resemblance to the target. Target words were identified above chance and significantly more often than distractors from the same visemic category. Moreover, the error rate for distractors significantly decreased with decreasing visemic resemblance to the target. These results challenge the many-to-one phoneme-viseme mapping hypothesis.


Do bilingual adults gesture when they are disfluent?: Understanding gesture-speech interaction across first and second languages

May 2024


People are more disfluent in their second language (L2) than in their first language (L1). Gesturing facilitates cognitive processes, including speech production. This study investigates speech disfluency and representational gesture production across Turkish-English bilinguals' L1 (Turkish) and L2 (English) through a narrative retelling task (N = 27). Results showed that people were more disfluent and used more representational gestures in English. Controlling for L2 proficiency, people were still more disfluent in English. The more proficient people were in L2, the more they used gestures in that language. Similarly, disfluency-gesture co-occurrences were more common in English. L2 proficiency was positively correlated with the likelihood of a disfluency being accompanied by a gesture. These findings suggest that gestures may not necessarily compensate for weak language skills. Rather, people might gesture during disfluent moments if they can detect their errors, suggesting that the ability to benefit from gestures when disfluent is closely linked to language competency.


The role of semantically related gestures in the language comprehension of simultaneous interpreters in noise

April 2024


Manual co-speech gestures can facilitate language comprehension, especially in adverse listening conditions. However, we do not know whether gestures influence simultaneous interpreters' language comprehension in adverse listening conditions, and if so, whether this influence is modulated by interpreting experience, or by active simultaneous interpreting (SI). We exposed 24 interpreters and 24 bilinguals without interpreting experience to utterances with semantically related gestures, semantically unrelated gestures, or without gestures while engaging in comprehension (interpreters and bilinguals) or in SI (interpreters only). Tasks were administered in clear and noisy speech. Accuracy and reaction time were measured, and participants' gaze was tracked. During comprehension, semantically related gestures facilitated both groups' processing in noise. Facilitation was not modulated by interpreting experience. However, when interpreting noisy speech, interpreters did not benefit from gestures. This suggests that the comprehension component of SI, and specifically its crossmodal information processing, differs from other forms of language comprehension.


What can we learn about integration of novel words into semantic memory from automatic semantic priming?

April 2024


According to the Complementary Learning Systems model of word learning, only integrated novel words can interact with familiar words during lexical selection. The pre-registered study reported here is the first to examine behavioural and electrophysiological markers of integration in a task that relies primarily on automatic semantic processing. Seventy-one young adults learned novel names for two sets of novel concepts, one set on each of two consecutive days. On Day 2, learning was followed by a continuous primed lexical decision task with EEG recording. In the N400 window, novel names trained immediately before testing differed from both familiar and untrained novel words, and, in the 500-800 ms window post-onset, they also differed from novel names that had undergone a 24-hour consolidation, for which a small behavioural priming effect was observed. We develop an account that attributes the observed effects to processes rooted in episodic, rather than semantic, memory.


From breaking bread to breaking hearts: embodied simulation and action language comprehension

March 2024


In this study, we conducted a behavioural experiment using literal, idiomatic, conventional and novel metaphorical action sentences. Participants viewed an action video immediately after a sentence containing a verb that did (matching modality) or did not (mismatching modality) match the observed action. All the sentences were presented both in the matching modality and the mismatching modality. Participants had to indicate whether the sentence made sense or not by pressing a designated response key. We recorded participants' reaction times and accuracy. We found no significant differences between the matching and mismatching modality in the idiomatic condition. Instead, we found a facilitation effect for the literal and the metaphorical conventional condition in the matching modality compared to the mismatching modality, and an interference effect for the metaphorical novel condition in the matching modality compared to the mismatching modality. We interpret these findings in light of the Embodied Cognition approach to language.


Effects of social interactions on the neural representation of emotional words in late bilinguals

January 2024


This fMRI study explored the relationship between social interactions and neural representations of emotionality in a foreign language (LX). Forty-five late learners of Japanese performed an auditory Japanese lexical decision task involving positive and negative words. The intensity of their social interactions with native Japanese speakers was measured using the Study Abroad Social Interaction Questionnaire. Activity in the left ventral striatum significantly correlated with social interaction intensity for positive words, while the right amygdala showed a significant correlation for negative words. These results indicate that neural representations of LX emotional words are linked with the intensity of social interactions. Furthermore, LX negative words activated the left inferior frontal gyrus more than positive and neutral words, suggesting greater cognitive effort for processing negative words, aligning with a bias in adult social interactions towards more positively-valenced language. Overall, our findings underscore the importance of social interaction experiences in the processing of LX emotional words.



Figure 2. Pairwise comparisons of ERP differences between conditions at the critical verb position. Figures in the first row (A, B, C) show the comparison between the Ambiguous Agent and Ambiguous Patient conditions. The second row (D, E, F) compares the Ambiguous Patient and Unambiguous Patient conditions. The third row compares the Ambiguous Agent and Unambiguous Agent conditions (G, H, I), and the last row (J, K, L) compares the Unambiguous Patient and Unambiguous Agent conditions. In each row, figures on the left (A, D, G and J) show grand mean ERP time courses at Cz. Figures in the centre (B, E, H, K) show grand mean topographical distributions of the differences between conditions. Figures on the right (C, F, I, L) show estimated GAMM difference surfaces of the topographic distribution of differences between conditions in the 300-500 ms time window. Non-shaded (bright) areas indicate where the 95% CIs exclude 0. Differences in the centre and right columns are always µV in the upper minus µV in the lower condition according to the legends in the left column.
Figure 3. Comparison of power differences in individually defined alpha band in the 300-500 ms time window after critical word (verb) presentation. Large plots show estimated (GAMM) difference surfaces of the topographic distribution of differences between conditions (A-H). Adjacent small plots show the corresponding observed grand average differences.
Figure 5. Pairwise comparisons of ERP differences between conditions at the critical verb position for the late (500-700 ms) time window. Plotting conventions are identical to Figure 2.
Figure 6. Comparison of power differences in individually defined alpha band in the 500-700 ms time window after critical word (verb) presentation. Large panels show estimated (GAMM) difference surfaces of the topographic distribution of differences between conditions (A-D). Adjacent small plots show the corresponding observed grand average differences.
Incremental sentence processing is guided by a preference for agents: EEG evidence from Basque

August 2023


Comprehenders across languages tend to interpret role-ambiguous arguments as the subject or the agent of a sentence during parsing. However, the evidence for such a subject/agent preference rests on the comprehension of transitive, active-voice sentences where agents/subjects canonically precede patients/objects. The evidence is thus potentially confounded by the canonical order of arguments. Transitive sentence stimuli additionally conflate the semantic agent role and the syntactic subject function. We resolve these two confounds in an experiment on the comprehension of intransitive sentences in Basque. When exposed to sentence-initial role-ambiguous arguments, comprehenders preferentially interpreted these as agents and had to revise their interpretation when the verb disambiguated to patient-initial readings. The revision was reflected in an N400 component in ERPs and a decrease in power in the alpha and lower beta bands. This finding suggests that sentence processing is guided by a top-down heuristic to interpret ambiguous arguments as agents, independently of word order and independently of transitivity.


Do readers misassign thematic roles? Evidence from a trailing boundary-change paradigm

February 2023


We report an eye-tracking experiment with a trailing boundary-change paradigm as people read subject- and object-relative clauses that were either plausible or implausible. We sought to determine whether readers sometimes misassign thematic roles to arguments in implausible, noncanonical sentences. In some sentences, argument nouns were reversed after participants had read them. Thus, implausible noncanonical sentences like “The bird that the worm ate yesterday was small” changed to plausible “The worm that the bird ate was small.” If initial processing generates veridical representations, all changes should disrupt rereading, irrespective of plausibility or syntactic structure. Misinterpretation effects should only arise in offline comprehension. If misassignment of thematic roles occurs during initial processing, differences should be apparent in first-pass reading times, and rereading should be differentially affected by the direction of the text change. Results provide evidence that readers sometimes misassign roles during initial processing and sometimes fail to revise misassignments during rereading.


Multivariate analysis of brain activity patterns as a tool to understand predictive processes in speech perception

January 2023


Speech perception is heavily influenced by our expectations about what will be said. In this review, we discuss the potential of multivariate analysis as a tool to understand the neural mechanisms underlying predictive processes in speech perception. First, we discuss the advantages of multivariate approaches and what they have added to the understanding of speech processing, from the acoustic-phonetic form of speech, over syllable identity and syntax, to its semantic content. Second, we suggest that using multivariate techniques to measure informational content across the hierarchically organised speech-sensitive brain areas might enable us to specify the mechanisms by which prior knowledge and sensory speech signals are combined. Specifically, this approach might allow us to decode how different priors, e.g. about a speaker's voice or about the topic of the current conversation, are represented at different processing stages and how incoming speech is, as a result, represented differently.


Processing argument structure complexity in Basque-Spanish bilinguals

December 2022


Previous research on argument structure (AS) has shown that verb processing costs scale with the number of arguments and as a result of non-canonical thematic mapping. The Basque language has unique AS: Basque unergatives and transitives select transitive auxiliary and ergative subject case markings, while unaccusatives are syntactically less complex. We studied the contribution of these syntactic factors in seventy-one simultaneous Basque-Spanish bilinguals, measuring their performance on unergative, unaccusative, and transitive verbs in a lexical decision and a sentence production task. We observed no differences between verb groups in the lexical decision task. In the production task, Basque unergatives elicited more ungrammatical sentences, while Spanish unaccusatives, in line with previous findings, elicited longer speech onset times. Our results indicate that AS processing can differ across languages, calling for further cross-linguistic investigation.


The effect of input sensory modality on the multimodal encoding of motion events

November 2022


Each sensory modality has different affordances: vision has higher spatial acuity than audition, whereas audition has better temporal acuity. This may have consequences for the encoding of events and their subsequent multimodal language production, an issue that has received relatively little attention to date. In this study, we compared motion events presented as audio-only, visual-only, or multimodal (visual + audio) input and measured speech and co-speech gesture depicting PATH and MANNER of motion in Turkish. Input modality affected speech production. Speakers with audio-only input produced more PATH descriptions and fewer MANNER descriptions in speech compared to speakers who received visual input. In contrast, the type and frequency of gestures did not change across conditions. Path-only gestures dominated throughout. Our results suggest that while speech is more susceptible to auditory vs. visual input in encoding aspects of motion events, gesture is less sensitive to such differences.


Journal metrics


2.3 (2022)

Journal Impact Factor™


33%

Acceptance rate


4.7 (2022)

CiteScore™


13 days

Submission to first decision

Editors