Yuki Kamide’s research while affiliated with University of Dundee and other places


Publications (29)


Figure 1. Example visual array paired with spoken discourses ((1)–(4)) in Experiment 1. (1) Together condition: The piano and the trumpet are in the bar. The carrot and the
Figure 2. Mean proportion of fixations (shaded bands show ±1 SE of the mean) on the target (piano), competitor (trumpet) and distractors (carrot, lantern) in the Apart (A) and Together (B) conditions in Experiment 1. Vertical broken lines indicate points at which fixations were resynchronised in the discourse.
Figure 3. Example visual array paired with spoken discourses ((5)–(8)) in Experiment 2.
Figure 4. Mean proportion of fixations (shaded bands show ±1 SE of the mean) on the target (bat), competitor (cigarette) and distractors (melon, shirt) in the Apart (A) and Together (B) conditions in Experiment 2. Vertical broken lines indicate points at which fixations were resynchronised in the discourse.
Parameter estimates, standard errors (SE), and 95% confidence intervals (95% CI) for the pairwise comparisons exploring the effect of object within each condition during the critical noun region ('piano') + 300 ms in Experiment 1

Spatial narrative context modulates semantic (but not visual) competition during discourse processing
  • Article
  • Full-text available

October 2019 · 138 Reads · 4 Citations

Journal of Memory and Language

Yuki Kamide

Recent research highlights the influence of (e.g., task) context on conceptual retrieval. To assess whether conceptual representations are context-dependent rather than static, we investigated the influence of spatial narrative context on accessibility for lexical-semantic information by exploring competition effects. In two visual world experiments, participants listened to narratives describing semantically related (piano-trumpet; Experiment 1) or visually similar (bat-cigarette; Experiment 2) objects in the same or separate narrative locations while viewing arrays displaying these (‘target’ and ‘competitor’) objects and other distractors. Upon re-mention of the target, we analysed eye movements to the competitor. In Experiment 1, we observed semantic competition only when targets and competitors were described in the same location; in Experiment 2, we observed visual competition regardless of context. We interpret these results as consistent with context-dependent approaches, such that spatial narrative context dampens accessibility for semantic but not visual information in the visual world.


The Influence of Globally Ungrammatical Local Syntactic Constraints on Real‐Time Sentence Comprehension: Evidence From the Visual World Paradigm and Reading

October 2018 · 42 Reads · 8 Citations

Cognitive Science: A Multidisciplinary Journal

We investigated the influence of globally ungrammatical local syntactic constraints on sentence comprehension, as well as the corresponding activation of global and local representations. In Experiment 1, participants viewed visual scenes with objects like a carousel and motorbike while hearing sentences with noun phrase (NP) or verb phrase (VP) modifiers like “The girl who likes the man (from London/very much) will ride the carousel.” In both cases, “girl” and “ride” predicted carousel as the direct object; however, the locally coherent combination “the man from London will ride…” in NP cases alternatively predicted motorbike. During “ride,” local constraints, although ruled out by the global constraints, influenced prediction as strongly as global constraints: While motorbike was fixated less than carousel in VP cases, it was fixated as much as carousel in NP cases. In Experiment 2, these local constraints likewise slowed reading times. We discuss implications for theories of sentence processing.


Spatial narrative context modulates semantic (but not visual) competition during discourse processing

July 2018 · 28 Reads



Event Processing in the Visual World: Projected Motion Paths During Spoken Sentence Comprehension

October 2015 · 571 Reads · 18 Citations

Journal of Experimental Psychology: Learning, Memory, and Cognition

Motion events in language describe the movement of an entity to another location along a path. In two eye-tracking experiments we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In the first experiment, participants listened to sentences describing the movement of an agent to a goal location with verbs suggesting a more upwards (e.g., “jump”) or more downwards oriented path (e.g., “crawl”) while concurrently viewing a visual scene depicting the agent, the goal, and some ‘empty space’ in between. We found that in the rare event of fixating the empty space region between agent and goal, visual attention was biased upwards or downwards depending on the kind of verb. In Experiment 2, the sentences were presented concurrently with scenes featuring a central ‘obstruction’ which would not only impose further constraints on verb-related motion paths, but also increase the likelihood of fixating the area in-between the agent and the goal. The results from this experiment corroborated and refined the previous findings. Specifically, eye-movement effects started immediately after hearing the verb and were in line with data from an additional mouse tracking task which encouraged a more explicit spatial re-enactment of the motion event. In revealing how event comprehension operates in the visual world, these findings suggest a mental simulation process whereby spatial details of motion events are mapped onto the world through visual attention. The strength and detectability of such effects in overt eye-movements is constrained by the visual world and the fact that perceivers rarely fixate regions of empty space.

Figure 2: Mean saccade launch latencies per condition (in ms), relative to spoken word onset (the words ended ca. 715 ms after word onset on average). Error bars represent 95% confidence intervals for the means derived from the by-subject analysis. 
Hearing "moon" and looking up: Word-related spatial associations facilitate saccades to congruent locations

July 2014 · 165 Reads · 11 Citations

In the experiment reported here, 30 participants made a lexical decision on 120 spoken words and 120 spoken non-words. The words had either an upward (e.g. 'moon') or downward (e.g. 'sewer') spatial association, or they were neutral in this respect (e.g. 'letter'). Participants made their lexical decisions by fixating a target located either above or below the centre of the screen, counterbalanced across participants. Saccade launch latencies to targets in a congruent spatial location (e.g., hearing 'moon' and looking up to confirm that the stimulus is a word) were significantly faster than those to targets in an incongruent location (e.g., hearing 'moon' and looking down to confirm that it is a word). Crucially, saccade launch latencies to incongruent target locations did not differ from those launched after hearing neutral words. Our results extend earlier findings (Dudschig et al., 2013) by showing that language-related spatial associations facilitate eye movements towards congruent locations rather than inhibiting eye movements towards incongruent locations.


Citations (16)


... Beyond the question of pronoun interpretation, the results are compatible with findings showing that listeners and speakers spontaneously update mental representations based on past events, and that referential expressions are interpreted relative to these dynamic representations (see, e.g., Altmann & Kamide, 2007; Chambers & San Juan, 2008; Ibarra & Tanenhaus, 2016; Kukona et al., 2014; Williams, Kukona, & Kamide, 2019). It is also interesting to note the connection between these (and our) findings and work in visual cognition, where the indexical pointers used for tracking entities (e.g., Pylyshyn, 1989) may be separable from the information associated with these entities in working memory (Thyer et al., 2022). ...

Reference:

How do Antecedent Semantics Influence Pronoun Interpretation? Evidence from Eye Movements
Spatial narrative context modulates semantic (but not visual) competition during discourse processing

Journal of Memory and Language

... Kukona et al. (2011) observed predictive eye movements during "arrest" to both the crook, which was related to the verb and a predictable direct object, and policeman, which was related to the verb but an unpredictable direct object. Thus, participants' predictions were not extinguished by conflicting information (e.g., see also Kamide & Kukona, 2018). Similarly, comprehenders' predictions may not be extinguished by exposure to unexpected sentences (e.g., which conflict with their predictions), which may also account for evidence showing that comprehenders do not adapt. ...

The Influence of Globally Ungrammatical Local Syntactic Constraints on Real‐Time Sentence Comprehension: Evidence From the Visual World Paradigm and Reading
  • Citing Article
  • October 2018

Cognitive Science: A Multidisciplinary Journal

... This paucity of evidence on predictive language processing in ASD is in contrast to substantial evidence on incremental processing in language comprehension among neurotypical adults (e.g., Altmann & Kamide, 1999; Altmann & Kamide, 2007; Altmann & Mirković, 2009; Kamide et al., 2003; Kamide et al., 2016; Kang et al., 2020; Kang & Ge, 2022; Knoeferle et al., 2005; Knoeferle & Crocker, 2007; Kukona et al., 2014) and children (e.g., Borovsky et al., 2012; Gambi et al., 2018; Gambi et al., 2016; Huang & Snedeker, 2011; Reuter et al., 2021; Trueswell et al., 1999). The ability to process language incrementally is also associated with individual differences in language skills and nonverbal cognitive abilities, such as vocabulary size (Borovsky et al., 2012; Lew-Williams & Fernald, 2007). ...

Event Processing in the Visual World: Projected Motion Paths During Spoken Sentence Comprehension

Journal of Experimental Psychology: Learning, Memory, and Cognition

... However, analogues of the graded view have been explored in the language processing literature. Prior studies have found that world knowledge violations elicit similar patterns of neural activity as SRVs (Hagoort et al., 2004; Matsuki et al., 2011), and event-based plausibility plays a rapid and important role in online sentence comprehension (e.g., Garnsey et al., 1997; McRae et al., 1998; Kamide et al., 2003; Van Berkum et al., 2005). Beyond this, the lexical/world knowledge distinction might also be dispreferred on the basis of parsimony. ...

The time-course of prediction in incremental sentence processing: Evidence from anticipatory eye movements
  • Citing Article
  • July 2003

Journal of Memory and Language

... Such detrimental effects have been explained by arguing that the process of identifying a target in the cue-congruent location may first require an inhibition of spatial features activated by the cue (Estes et al., 2015). Similarly, it has also been hypothesized that the inhibition effect in those studies is due to the feature overlap being manipulated between the cue and the location of the target, rather than the cue and the target itself (Dunn et al., 2014), or due to the fact that the cueing effect interfered with the more demanding discrimination rather than a simpler detection task (Dudschig et al., 2012). However, given that the two studies that reported the cue-induced spatial facilitation effect used spatially congruent but cue-irrelevant targets (Dudschig et al., 2012) and a discrimination task (Ostarek & Vigliocco, 2017), more research is needed in order to understand the conditions and mechanisms behind the inhibition effects of cues spatially congruent with the target location. ...

Hearing "moon" and looking up: Word-related spatial associations facilitate saccades to congruent locations

... This finding aligns with theoretical predictions concerning human memory capacity (Abney & Johnson, 1991; Resnik, 1992). In the case of a head-final language like Japanese, the construction of a left-branching CCG structure requires establishing the relationship between arguments before the verb, which is supported by findings in psycholinguistics (e.g., Mazuka & Itoh, 1995; Kamide & Mitchell, 1999; Isono & Hirose, 2022). By employing CCG as a theory of grammar, we can examine how much distinct syntactic operations with distinct syntactic and semantic properties contribute to predicting human reading times. ...

Incremental Pre-head Attachment in Japanese Parsing
  • Citing Article
  • October 1999

... This paucity of evidence on predictive language processing in ASD is in contrast to substantial evidence on incremental processing in language comprehension among neurotypical adults (e.g., Altmann & Kamide, 1999; Altmann & Kamide, 2007; Altmann & Mirković, 2009; Kamide et al., 2003; Kamide et al., 2016; Kang et al., 2020; Kang & Ge, 2022; Knoeferle et al., 2005; Knoeferle & Crocker, 2007; Kukona et al., 2014) and children (e.g., Borovsky et al., 2012; Gambi et al., 2018; Gambi et al., 2016; Huang & Snedeker, 2011; Reuter et al., 2021; Trueswell et al., 1999). The ability to process language incrementally is also associated with individual differences in language skills and nonverbal cognitive abilities, such as vocabulary size (Borovsky et al., 2012; Lew-Williams & Fernald, 2007). ...

Knowing what, where, and when: Event comprehension in language processing
  • Citing Article
  • June 2014

Cognition

... Indeed, the computational properties of actual language comprehension closely resemble those expected under such a model. This includes evidence from brain potentials (DeLong, Urbach, & Kutas, 2005; Dikker & Pylkkänen, 2013; Kutas & Hillyard, 1984; Van Berkum, Brown, Zwitserlood, Kooijman, & Hagoort, 2005; for recent reviews, see Kuperberg, 2013; Van Petten & Luka, 2012), eye-movements during reading (Boston, Hale, Kliegl, Patil, & Vasishth, 2008; Demberg & Keller, 2008; Staub & Clifton, 2006) and spoken sentence comprehension (Altmann & Kamide, 1999; Kamide, Altmann, & Haywood, 2003; Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995), and self-paced reading time data (Garnsey, Pearlmutter, Myers, & Lotocky, 1997; MacDonald, Pearlmutter, & Seidenberg, 1994; N. J. Smith & Levy, 2013; Trueswell, Tanenhaus, & Kello, 1993). All these works point to a language comprehension system that heavily relies on prediction of the signal (see also Farmer, Brown, & Tanenhaus, 2013; Kuperberg, 2013; MacDonald, 2013; Pickering & Garrod, 2013). ...

Corrigendum to “The time-course of prediction in incremental sentence processing: Evidence from anticipatory eye movements” [Journal of Memory and Language 49 (2003) 133–156]
  • Citing Article
  • January 2004

Journal of Memory and Language

... For example, dwell time and fixation counts on the depicted protagonist and destination were shorter and fewer for sentences describing fast actions compared to those describing slow actions (e.g., A man dashed/sauntered into the supermarket). In other words, the speed of a motion described in a sentence influenced the amount of gaze directed at the protagonist (i.e., the man) and the destination (i.e., the supermarket) relevant to the motion (Lindsay et al., 2013;Speed and Vigliocco, 2014). The changes in dwell time and fixation counts on the depicted protagonist and the destination indicate that indexing occurs in two directions. ...

To Dash or to Dawdle: Verb-Associated Speed of Motion Influences Eye Movements during Spoken Sentence Comprehension

... Furthermore, unlike Baccino et al. (2000), De Vincenzi and Job (1995), and Kamide and Mitchell (1997) who found differences between offline attachment preferences and online data regarding the processing cost of either RC attachment, we did not find such a difference in our work. ...

Relative Clause Attachment: Nondeterminism in Japanese Parsing
  • Citing Article
  • March 1997

Journal of Psycholinguistic Research