Intentional Binding: Merely a Procedural Confound? (May 2023)
Published by American Psychological Association. Online ISSN: 1939-1277; Print ISSN: 0096-1523.
Associations Between Musical Expertise and Auditory Processing (March 2025)
Many studies have linked musical expertise with nonmusical abilities such as speech perception, memory, or executive functions. Far fewer have examined associations with basic auditory skills. Here, we asked whether psychoacoustic thresholds predict four aspects of musical expertise: music training, melody perception, rhythm perception, and self-reported musical abilities and behaviors (other than training). A total of 138 participants completed nine psychoacoustic tasks, as well as the Musical Ear Test (melody and rhythm subtests) and the Goldsmiths Musical Sophistication Index. We also measured and controlled for demographics, general cognitive abilities, and personality traits. The psychoacoustic tasks assessed discrimination thresholds for pitch and temporal perception (both assessed with three tasks), and for timbre, intensity, and backward masking (each assessed with one task). Both music training and melody perception predicted better performance on the pitch-discrimination tasks. Rhythm perception was associated with better performance on several temporal and nontemporal tasks, although none had unique associations when the others were held constant. Self-reported musical abilities and behaviors were associated with performance on one of the temporal tasks: duration discrimination. The findings indicate that basic auditory skills correlate with individual differences in musical expertise, whether expertise is defined as music training or musical ability.
Reinvestigating Endogenous Attention and Perceived Duration of Peripheral Stimuli: Differential Effects for Neutral Versus Valid and Invalid Cues (February 2025)
Salience Effects on Attentional Selection Are Enabled by Task Relevance (September 2024)
Exposure to Second-Language Accent Prompts Recalibration of Phonemic Categories (March 2025)
The Journal of Experimental Psychology: Human Perception and Performance® publishes studies on perception, control of action, perceptual aspects of language processing, and related cognitive processes. All sensory modalities and motor systems are within its purview. The journal also encourages studies with a neuroscientific perspective that contribute to the functional understanding of perception and performance.
April 2025
When humans experience pain during a movement, they can develop fear and avoid this movement afterward; these responses likely play a role in chronic pain. Previous experiments have investigated the underlying learning mechanisms by pairing movements with painful stimuli but, usually, other visuospatial cues were concurrently presented during the learning context. Therefore, participants might have primarily avoided these visuospatial rather than the movement-related cues, potentially invalidating related interpretations of pain-induced movement avoidance. Here, we separated kinesthetic from visuospatial cues to investigate their respective contribution to avoidance. Participants used a hand-held robotic manipulandum and, during an acquisition phase, received painful stimuli during center-out movements. Pain stimuli could be avoided by choosing curved rather than direct movement trajectories. To distinguish the contribution of kinesthetic versus visuospatial cues we tested two generalization contexts: either participants executed novel movements that passed through the same location at which pain had previously been presented in the acquisition phase; or they were reseated and then executed identical movements as those that had been associated with pain, but without passing through the pain-associated spatial location. Avoidance generalization was comparable in both contexts, and remarkably, highly correlated between them. Our findings suggest that both visuospatial and kinesthetic cues available during acquisition were associated with pain and led to avoidance. Our research corroborates previous studies’ findings that pain can become associated with movements. However, visuospatial cues also play a critical role for avoidance acquisition. Future studies should distinguish movement-related and space-related associations in pain-related avoidance.
April 2025
The self-prioritization effect (SPE) reflects the ability to efficiently discern self-relevant information. The self-voice emerges as a crucial identity marker because of its inherent self-relevance, and previous work has demonstrated the perceptual and cognitive advantages of the self-voice over other voices. Yet the extent to which humans prioritize the self-voice because it is self-similar (“That sounds like my voice”) versus because it is self-generated (“I said that”) remains understudied. Here, we examined the impacts of self-similarity and self-generation on the SPE through three experiments. In each experiment, participants learned associations between three voices and three identities (self, friend, and other), and then performed a task requiring them to perceptually match the heard voices with visual labels (“you,” “friend,” and “stranger”). Experiment 1 revealed an augmented SPE when the self-associated voice in the task was the participant’s own self-similar and self-generated voice. In Experiment 2, the SPE was diminished when the self-voice was associated with the “stranger” label—here, the other-associated, but self-similar and self-generated, voice was similarly prioritized to a self-associated but unfamiliar voice. In Experiment 3, we investigated the role of self-generation, by associating the self with a self-similar but machine-generated audio clone of the participant. The SPE was again enhanced. In sum, we demonstrate that listeners show flexibility in their mental representation of self, where multiple sources of self-related information in the voice can be jointly and severally prioritized, independently of self-generation. These findings have implications for the application of self-voice cloning within voice-mediated technologies.
April 2025
Abrupt onsets are commonly assumed to be a class of stimuli with high physical salience. This high salience has been used to explain past findings showing abrupt onsets captured attention more strongly compared to other types of distractors, such as color singletons. However, there has been a lack of consensus about the definition and measurement of physical salience. As a result, it is unclear whether abrupt onsets capture attention more strongly simply because they are more salient than other types of stimuli. Using a psychophysical technique recently developed by Stilwell et al. (2023), we explicitly quantified the level of physical salience of abrupt onsets, color singletons, and color singleton onsets. Surprisingly, abrupt onsets were the least salient among the three types of items examined. Despite this, only abrupt onsets captured attention in a subsequent visual search task, whereas color singletons and color singleton onsets were both suppressed. Thus, abrupt onsets tend to capture attention more strongly than color singletons, but apparently not because of high physical salience. Indeed, high physical salience may make an object easier to suppress during visual search.
April 2025
Among several items held in working memory, an item can be prioritized by focusing attention on it. Some studies found that an item in the focus of attention is better protected from interference than other items in working memory. Others have found that a prioritized item is particularly vulnerable to interference. These two groups of studies have used different ways to study information in the focus of attention in working memory. Protection for the prioritized item has been found when a retro-cue has been used to direct attention to this item, whereas particular vulnerability has been observed for the last-presented item of a serially presented list, which is often assumed to remain in the focus of attention during the retention interval. As these two methods might represent distinct forms of prioritization, we examined whether these two prioritization modes result in opposing results. To do so, we sequentially presented four to-be-memorized colored shapes and probed memory with a recall task. We varied the presentation of interfering visual stimuli following the last list item. In half of the trials, we indicated which item was most likely to be probed using a retro-cue (Experiments 1 and 5) or a precue (Experiments 2–4). We observed some evidence for the last-presented item being particularly vulnerable to visual interference but only in specific task situations. Generally, we observed that memory items were equally vulnerable to visual interference regardless of their priority state in working memory and regardless of the prioritization mode used.
April 2025
In this 50th anniversary special commentary, I reflect on my journey as an early career researcher moving from skepticism about cognitive psychology to embracing its importance in understanding embodied social behavior and downstream outcomes for marginalized group members. I describe how the work published in Journal of Experimental Psychology: Human Perception and Performance (JEP:HPP) has contributed to the diversification of social cognition research both directly and indirectly and highlight opportunities for JEP:HPP to play a key role in diversifying the literature on human perception in the coming 50 years.
April 2025
In a seminal article, Hönekopp set up rigorous criteria to understand when aesthetic values had individual versus shared bases. Using these criteria, he showed that the dichotomy between private and shared values was in balance. With this result, he gave a scientific answer to a debate that raged on for millennia. Unsurprisingly, therefore, his methods and results influenced scholars across a variety of fields, including psychology, cognitive and computational neuroscience, artificial intelligence, arts, fashion, and architecture. Later studies revealed that shared values were in part genetic. Their other components included, among others, social biases and interpersonal relations. Interestingly, the social basis of shared values extended even to social polarization, something that our sense of beauty had in common with other domains of society. In turn, individual aesthetic values also had genetic components. Similarly, learning played a role in the individuation of aesthetic values in part by using signals from our bodies, which are so different across individuals. Another source of individuation stemmed from natural learning being stochastic and chaotic, and having a high-dimensional space of values, allowing for multiple outcomes. Thus, Hönekopp’s influential results of balance between individual versus shared values extended to the similarity of their underlying mechanisms.
March 2025
People learn to associate external (predictive) cues (e.g., pictures; colors) with the attentional demands (e.g., the likelihood of conflict) that tend to accompany these cues. Such learning supports item-specific control, the reactive triggering of control settings associated with predictive cues (e.g., high level of focus triggered by a cue predicting high attentional demands). Item-specific control is assumed to operate with a degree of automaticity that allows for efficient processing even in the presence of competing demands. In three experiments, we investigated whether the unpredictable appearance of another salient stimulus (external distractor) presented along with the predictive cue would interfere with the triggering of item-specific control settings. The first two blocks of each experiment (i.e., acquisition phase) allowed participants to learn associations between different pictures and their likelihood of conflict in a picture–word Stroop task without external distraction. In the last two blocks (i.e., test phase), we introduced a random visual distractor (Experiments 1 and 2) or a combined visual and auditory distractor (i.e., multisensory; Experiment 3), with Experiment 2 additionally manipulating the timing of the distractor onset. Overall, the item-specific proportion congruence effect remained intact in both distractor-present and distractor-absent trials in all experiments, suggesting that item-specific control is robust to the presence of external distraction. We consider the theoretical implications of the results, with a focus on the automaticity of item-specific control and future investigations of potential boundary conditions.
March 2025
Semantic context effects in picture naming and categorization are central to word production theories. However, unlike naming studies, categorization studies have shown inconsistent results. Recently, Wöhner, Luckow, et al. (2024) replicated the inconsistent pattern in blocked categorization in a within-participant and within-item design. Pictures were presented in a semantically homogeneous or heterogeneous context. In the homogeneous context, there was interference for naming and facilitation for naturalness categorization, but no context effect for size categorization. The authors concluded that the inconsistent categorization findings in their own and previous studies could be due either to the use of tasks based on different kinds of features (stored in semantic memory [natural vs. man-made] vs. ad hoc [smaller vs. larger than a standard]) or to a difference in the response mapping for the exemplars from the semantic categories creating the context (convergent vs. divergent). The present study again contrasted the Wöhner, Luckow et al. tasks, but used materials that resulted in convergent response mapping for both categorization tasks. There was semantic interference in naming and semantic facilitation in both naturalness and size categorization. This pattern suggests that convergent response mapping, not the use of a task based on a stored semantic feature, is critical for obtaining facilitation in blocked semantic categorization. Our result provides further support for the notion that semantic interference in blocked word production has its locus at the lexical level and its origin at the semantic level. This conclusion does not depend any longer on data from only a single categorization task.
March 2025
People integrate “what” and “where” information to recognize objects. Even when irrelevant or uninformative, location information can influence object identity judgments. When two sequential stationary objects occupy the same location, people are faster and more accurate to respond (sensitivity effects) and are more likely to judge the objects as identical (spatial congruency bias [SCB]). Other paradigms using moving objects highlight spatiotemporal contiguity’s role in object processing. To bridge these gaps, we conducted two preregistered experiments asking how moving objects’ locations (trajectories) affect identity judgments, both at fixation and across eye movements. In Experiment 1, subjects fixated a constant location and judged whether two sequentially presented moving stimuli were the same or different object identities. The first stimulus moved linearly from behind one occluder to another. The second stimulus reappeared (still moving) continuing along the same spatiotemporal trajectory (Predictable trajectory), or from the same initial location (Same Exact trajectory), or a different location (Different trajectory). We found the strongest sensitivity and SCB for Same Exact trajectory, with a smaller but significant SCB for Predictable trajectory. In Experiment 2, subjects performed a saccade during occlusion, revealing a robust SCB for Same Exact trajectory in retinotopic coordinates, with a smaller SCB for Predictable trajectory in both retinotopic and spatiotopic coordinates. Our findings strengthen prior reports that object-location binding is primarily retinotopic after both object and eye movements, but the presence of concurrent weak SCB effects along predictable and spatiotopic trajectories suggests more ecologically relevant information may also be incorporated when objects are moving more continuously.
March 2025
The way we perceive the movement of two intersecting discs can be influenced by auditory information. When a brief tone is played while these discs overlap, people tend to report that the discs bounce off each other instead of streaming past each other. This is known as the auditory-induced bouncing/streaming illusion. Both perceptual/attentional and decisional processes have been discussed as explanations for the bouncing/streaming illusion. In four experiments, we study how the abruptness of tone onsets and offsets affects the bouncing/streaming illusion. We found that tones with more abrupt onsets and offsets resulted in a higher proportion of bouncing impressions than those with smoother ones (Experiment 1). This effect was not due to differences in loudness between the tones (Experiment 2). Additionally, we found that the abruptness of the tone onset, rather than the offset, caused the increase in bouncing impressions (Experiment 3). This effect was observed regardless of the temporal alignment of the tones with the moment of visual overlap (onset-aligned vs. centered vs. offset-aligned; Experiment 4). In sum, our results revealed evidence in favor of a chain of perceptual as well as decisional processes contributing to the reported bouncing/streaming impressions, and we discuss how both might interact during the resolution of the ambiguous bouncing/streaming display.
March 2025
Previous studies have demonstrated that auditory cues are integrated with other sensory cues for navigation. However, the extent to which auditory cues are used remains an open question, particularly in the context of reduced availability of visual landmarks. Sensory cue-combination paradigms have used homing tasks to quantify how much a single cue contributes to spatial updating. These paradigms have tested whether multisensory cue integration fits a model of optimal integration, or the reduction of multisensory variability in the form of a maximum likelihood function based on the variability of a single sensory cue. Here, we test the extent to which individuals rely on spatial auditory landmarks relative to body-based self-motion cues in the absence of useful visual landmarks. Twenty-seven participants with normal sensory acuity completed a homing task in virtual reality with auditory landmarks, self-motion cues, or both. Furthermore, a condition with a covert spatial conflict was introduced to test how much participants rely on either auditory landmarks or self-motion information. As a group, participants relied more on body-based self-motion cues than on auditory landmarks; however, there was a wide range of sensory cue weighting strategies. We found some support for optimal combination of these two sets of sensory cues, a novel pairing in the absence of visual spatial landmarks. Overall, these data indicate that the provision of auditory landmarks may complement spatial updating during navigation. This finding may be of particular value to individuals with visual impairments who struggle with effective spatial updating.
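The "optimal integration" benchmark tested above has a standard closed form: the maximum-likelihood combination of two independent Gaussian cues weights each cue inversely to its variance, and the combined estimate is predicted to be less variable than either cue alone. A minimal sketch of that model (function and variable names are ours, not the authors'):

```python
def mle_combine(est_a, var_a, est_b, var_b):
    """Maximum-likelihood combination of two independent Gaussian
    cue estimates (e.g., auditory landmarks vs. self-motion).
    Each cue's weight is proportional to its reliability 1/variance."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    w_b = 1.0 - w_a
    combined = w_a * est_a + w_b * est_b
    # Predicted variance of the combined estimate: never larger than
    # the variance of the more reliable single cue.
    combined_var = (var_a * var_b) / (var_a + var_b)
    return combined, combined_var
```

Covert-conflict conditions like the one described above probe exactly these weights: the shift of the combined response toward one cue's estimate reveals how strongly that cue is weighted.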
March 2025
This study concerns motor representations acquired in learning to write. Most theorists assume that at the highest levels of the motor programming hierarchy, learned motor programs for writing characters (e.g., “A”) are effector-independent, specifying the order and trajectories of writing strokes in a form not tied to specific effectors (e.g., right hand). On this view, once a high-level motor program has been learned with one effector, that same program will be used for writing with other effectors. However, in experiments conducted during 2018–2024, we found a clear qualitative difference between dominant and nondominant hands for participants writing in uppercase print: the direction of horizontal writing strokes (rightward or leftward) varied systematically with the hand used for writing. We interpret this phenomenon as evidence against the standard effector independence hypothesis and offer two alternatives. The first proposes that even the highest level motor programs are effector-specific. The second assumes that high-level motor programs learned with one effector can drive writing with other effectors, yet may be nonoptimal for a novel effector, in which case a new motor program may be generated. Both hypotheses imply a dual-route conception in which a high-level motor program may be activated either by retrieving a previously learned program from memory, or by generating a new program on the fly.
March 2025
Standard investigations of contextual facilitation typically use invariant distractor arrangements predicting a fixed target location. In the real world, however, invariant spatial contexts are not always predictive. We examined how facilitation is influenced by uncertainty in target location prediction: comparing conditions where old contexts were 100% versus minimally (3%) predictive (Experiment 1), 80% predictive (20% nonpredictive) versus 20% predictive (Experiment 2), or a trial-wise mixed condition where 80% predicted a fixed location and 20% a random location (Experiment 3). New-context displays with matching target-location probabilities served as baselines. The results revealed both fully predictive and minimally predictive old contexts to expedite the search, but facilitation was larger for the former (Experiment 1). This held even when the display types were randomly intermixed at an 80:20 cross-trial uncertainty ratio (Experiment 3). However, when old displays predicted the target location in 80% of trials (Experiment 2), facilitation dropped to the level of minimally predictive displays. This indicates only fully predictive old displays support acquiring contextual cues that guide attention. The facilitation seen with 80% predictive contexts likely involves a less efficient process: singling out the target by context suppression. These findings can be incorporated into a neural-network model of context effects: When distractor representations are suppressed, the formation of facilitative links between distractor representations and the target location on the priority map becomes unlikely.
March 2025
Adolescence is a critical period for developing adaptive cognitive control, including the ability to selectively switch attention in response to changes in the environment (cognitive flexibility) and regulate attention (metacognition), through monitoring performance and employing adaptive control strategies. However, little is known about how individual differences in adolescent metacognition impact the spontaneous use of strategies for improving cognitive flexibility. In a sample of 141 participants aged 11–15 years (collected between July 2022 and February 2023), adolescents spontaneously controlled their own preparation time in a cued task-switching paradigm. Adolescents spontaneously adopted the strategy of increasing preparation time for switch trials relative to repeat trials. This strategy use differed for individuals in distinct metacognitive profiles and was positively related to subjectively and objectively scored self-report measures of metacognition. Therefore, individual differences in adolescent metacognitive ability predict the adoption of spontaneous strategy adjustment to enhance cognitive flexibility, suggesting that improving metacognition may encourage the adaptive direction of capacity-limited attention resources among adolescents. Participants were largely from high socioeducational advantage schools in Australia, which should be taken into account when generalizing the present results.
March 2025
A persistent belief holds that humans can imagine visual content but not odors. While visual imagery is regarded as recreating a perceptual representation, it is unknown whether olfactory mental imagery shares a perceptual format. Visual imagery studies have demonstrated this perceptual formatting using distance and shape similarity judgments, whereas olfactory studies often use single-odor vividness ratings, complicating the establishment of perceptual formatting for odors. Using odor pair similarity scores from two experiments (odor-based: 8,880 ratings from 37 participants, including 20 women; label-based: 129,472 ratings from 2,023 participants, including 1,164 women), we observed a strong correlation (r = .71) between odor-based and label-based odor pairs. The correlation was unaffected by gender and age and was present in a wide range of self-perceived olfactory functions. Pleasantness similarity was the main determinant of overall similarity for both odor-based (r = −.63) and label-based (r = −.45) odor pairs. We then used a large language model to derive semantic similarity scores for the labels of all odor pairs. Semantic similarity only mediated a small part of the observed correlation, further supporting our conclusions that odor imagery shares a perceptual formatting with vision, that odor percepts may be elicited from verbal labels alone, and that odor pair pleasantness may be a dominant and accessible feature in this regard.
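The headline result above (r = .71 between odor-based and label-based pair similarities) is a Pearson correlation computed across odor pairs. As a hedged illustration of that statistic only (not the authors' analysis pipeline), a dependency-free version:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences,
    e.g., odor-based vs. label-based similarity scores per odor pair."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Each element of `x` and `y` here would be one odor pair's mean similarity rating in the two experiments; r near 1 indicates the two rating formats order the pairs almost identically.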
March 2025
We examine how first-language (L1) Spanish listeners with varying levels of experience with English recalibrate their phonemic category boundaries following exposure to second-language (L2), American-English-accented Spanish. Specifically, we examine changes to voice onset time boundaries, which are often positively shifted when produced by American-English-accented Spanish speakers (as compared to L1 Spanish speakers). Our results demonstrate that listeners make adjustments to their phonemic category boundaries following exposure to accented words with the critical sounds in onset position (e.g., “bailar” and “parir,” meaning “to dance” and “to give birth,” for the /b/ and /p/ phonemic categories). In many cases, generalization of phonemic learning was also observed, such that boundaries for categories that were not presented in training were also adjusted. Surprisingly, however, there were cases in which boundaries for trained categories did not show adjustments; for example, listeners trained with items for all places of articulation showed recalibration of their bilabial boundary but not their alveolar and velar boundaries. Also notable was the role of the Spanish listeners’ experience with English: More experienced listeners showed more positively shifted (English-like) boundaries in the pretest session. This suggests that more experienced listeners may have rapidly identified the American-English-accented Spanish and applied their English category boundaries accordingly. We conclude that listener accommodation of L2 accent is supported by a phonemic recalibration mechanism and that experience with the L1 of an L2-accented speaker facilitates rapid recalibration of phonemic categories.
March 2025
Visual mental imagery is a core topic of cognitive psychology and cognitive neuroscience. Several early behavioral contributions were published in the Journal of Experimental Psychology: Human Perception and Performance, and they continue to influence the field despite the advent of new technologies and statistical models that are used in contemporary research on mental imagery. Future research will lead to new discoveries showing a broader importance of mental imagery, ranging from consciousness, problem-solving, expectations, perception, and reality monitoring.
March 2025
Humans are so sensitive to faces and face-like patterns in the environment that sometimes we mistakenly see a face where none exists—a common illusion called “face pareidolia.” Examples of face pareidolia, “illusory faces,” occur in everyday objects such as trees and food and contain two identities: an illusory face and an object. In this study, we studied illusory faces in a rapid serial visual presentation paradigm over three experiments to explore the detectability of illusory faces under various task conditions and presentation speeds. The first experiment revealed the rapid and reliable detection of illusory faces even with only a glimpse, suggesting that face pareidolia arises from an error in rapidly detecting faces. Experiment 2 demonstrated that illusory facial structures within food items did not interfere with the recognition of the object’s veridical identity, affirming that examples of face pareidolia maintain their objecthood. Experiment 3 directly compared behavioral responses to illusory faces under different task conditions. The data indicate that, with extended viewing time, the object identity dominates perception. From a behavioral perspective, the findings revealed that illusory faces have two distinct identities as both faces and objects that may be processed in parallel. Future research could explore the neural representation of these unique stimuli under varying circumstances and attentional demands, providing deeper insights into the encoding of visual stimuli for detection and recognition.
March 2025
Recent empirical findings demonstrate that, in visual search for a target in an array of distractors, observers exploit information about object relations to increase search efficiency. We investigated how people searched for interacting people in a crowd, and how the eccentricity of the target affected this search (Experiments 1–3). Participants briefly viewed crowded arrays and had to search for an interacting dyad (two bodies face-to-face) among noninteracting dyads (back-to-back distractors), or vice versa, with the target presented in the attended central location or at a peripheral location. With central targets, we found a search asymmetry, whereby interacting people among noninteracting people were detected better than noninteracting people among interacting people. With peripheral targets, the advantage disappeared, or even tended to reverse in favor of noninteracting dyads. In Experiments 4–5, we asked whether the search asymmetry generalized to object pairs whose spatial relations did or did not form a functionally interacting set (a computer screen above a keyboard vs. a computer screen below a keyboard). We found no advantage for interacting over noninteracting sets either in central or peripheral locations for objects, but, if anything, evidence for the opposite effect. Thus, the effect of relational information on visual search is contingent on both stimulus category and attentional focus: The presentation of social interaction—but not of nonsocial interaction—at the attended (central) location readily captures an individual’s attention.
March 2025 · 23 Reads
The relative timing between sensory signals strongly determines whether they are integrated in the brain. Two classical measures of temporal integration are provided by simultaneity judgments, where one judges whether cross-modal stimuli are synchronous, and violations of the race model inequality (RMI) due to faster responses to cross-modal than unimodal stimuli. While simultaneity judgments are subject to trial history effects (rapid temporal recalibration) and long-term experience (musical training), it is unknown whether RMI violations are similarly affected. Musicians and nonmusicians made simultaneity judgments and speeded responses to brief auditory-visual stimuli with varying onset asynchronies. We derived a so-called temporal integration window for both measures, via an observer model for simultaneity judgments and a nonparametric test for detecting observer-level RMI violations. Simultaneity judgments were subject to rapid recalibration and musicians were less likely than nonmusicians to perceive stimuli as synchronous. Proportionally, twice as many musicians as nonmusicians exhibited RMI violations within a temporal window spanning −33 to 100 ms. Response times (and RMI violations) were unaffected by rapid recalibration and modality shift costs, suggesting that rapid recalibration is not caused by changes in early sensory latency. Our findings show that perception- and action-based measures of multisensory temporal processing are affected differently by experience.
March 2025 · 25 Reads
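The race model inequality tested in the study above is Miller's bound: at every latency t, the redundant-signals CDF may not exceed the sum of the unimodal CDFs, F_AV(t) ≤ F_A(t) + F_V(t). A minimal sketch of checking this bound on empirical CDFs is shown below (the function name, grid, and data are illustrative; this is not the authors' observer-level nonparametric test):

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Check Miller's race model inequality at each point of t_grid:
    F_AV(t) <= min(F_A(t) + F_V(t), 1).
    Returns a boolean array marking grid points where the
    redundant-signals (audiovisual) CDF exceeds the race-model bound."""
    def ecdf(samples, t):
        # Empirical CDF: proportion of response times at or below t.
        samples = np.sort(np.asarray(samples))
        return np.searchsorted(samples, t, side="right") / len(samples)

    t_grid = np.asarray(t_grid)
    f_a = ecdf(rt_a, t_grid)
    f_v = ecdf(rt_v, t_grid)
    f_av = ecdf(rt_av, t_grid)
    bound = np.minimum(f_a + f_v, 1.0)  # probabilities cannot exceed 1
    return f_av > bound
```

Violations of the bound at any t indicate that responses to bimodal stimuli are faster than any race between independent unimodal channels could produce, which is the usual behavioral signature of multisensory integration.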
Whether low-level perceptual processes are involved in language comprehension remains an open question. Here, we introduce a promising paradigm in which the role of motion perception in phrase understanding may be causally inferred without interpretational ambiguity. After participants had been adapted to either leftward or rightward drifting motion, resulting in reduced responsiveness of motion neurons coding for the adapted direction, they were asked to indicate whether a subsequent verb phrase denoted leftward or rightward motion. When the adapting stimulus was blocked from visual awareness under continuous flash suppression, so that only low-level perceptual processes could exert an influence, we found inhibited responses in the adapted direction across diverse verb phrases, indicating that desensitization of motion perception impaired the understanding of verb phrases. Our findings provide evidence for the functional relevance of motion perception to phrase understanding. However, when the adapting stimulus was consciously perceived, so that low-level perceptual and high-level cognitive influences coexisted and counteracted each other, we found different results across verb phrases. Our findings highlight the importance of considering the influence of conscious awareness on how visual perception affects language comprehension.
March 2025 · 23 Reads
Motivated by the compositional semantics perspective (Marelli, 2023), which regards the meaning-combination process as playing an important role in the recognition of polymorphemic words, the present study revisited a study by Crepaldi et al. (2013, Experiment 1) to reevaluate the role of semantic transparency in the processing of nonwords comprising existing morphemes. We replicated the transposed compound interference effect, namely, the greater difficulty in rejecting a nonword generated by reversing the order of the morpheme constituents (e.g., SIDELAKE from lakeside). Contrary to the claim of the original study, here we found evidence that this interference effect is greater if the original compound word was semantically transparent (e.g., lakeside) than opaque (e.g., hallmark). Importantly, we also show that this effect of baseword semantic transparency is in fact an effect of compositionality (the ease of generating a meaningful compound from the constituents). We discuss the implications of this finding for the processing of polymorphemic words, with particular regard to the experimental conditions that are favorable for finding a role for semantics.
March 2025 · 20 Reads
The study of attentional allocation due to external stimulation has a long history in psychology. Early research by Yantis and Jonides suggested that abrupt onsets constitute a unique class of stimuli that captures attention in a stimulus-driven fashion unless attention is proactively directed elsewhere. Since then, the study of visual attention has evolved significantly. This article revisits the core conclusions of Yantis and Jonides in light of subsequent findings and highlights emerging issues for future investigation. These issues include clarifying key concepts of visual attention, adopting measures with greater spatiotemporal precision, exploring how past experiences modulate the effects of abrupt onsets, and understanding individual differences in attentional allocation. Addressing these issues is challenging but crucial, and we offer some perspectives on how one might study them going forward. Finally, we call for more investigation into abrupt onsets. Perhaps because of their strong potential to capture attention, abrupt onsets are often set aside in pursuit of other conditions that show attenuation of distractor interference. Given their real-world relevance, however, abrupt onsets are precisely the type of stimuli we need to study further to connect laboratory attention research to real life.
Editor in Chief
Vanderbilt University, USA