Stephanie Badde’s research while affiliated with Tufts University and other places

Publications (62)


Precision-based causal inference modulates audiovisual temporal recalibration
  • Preprint

January 2025 · 5 Reads

Fangfang Hong · Stephanie Badde

Cross-modal temporal recalibration guarantees stable temporal perception across ever-changing environments. Yet, the mechanisms of cross-modal temporal recalibration remain unknown. Here, we conducted an experiment to measure how participants’ temporal perception was affected by exposure to audiovisual stimuli with consistent temporal delays. Consistent with previous findings, recalibration effects plateaued with increasing audiovisual asynchrony and varied by which modality led during the exposure phase. We compared six observer models that differed in how they update the audiovisual temporal bias during the exposure phase and whether they assume modality-specific or modality-independent precision of arrival latency. The causal-inference observer shifts the audiovisual temporal bias to compensate for perceived asynchrony, which is inferred by considering two causal scenarios: when the audiovisual stimuli have a common cause or separate causes. The asynchrony-contingent observer updates the bias to achieve simultaneity of auditory and visual measurements, modulating the update rate by the likelihood of the audiovisual stimuli originating from a simultaneous event. In the asynchrony-correction model, the observer first assesses whether the sensory measurement is asynchronous; if so, she adjusts the bias proportionally to the magnitude of the measured asynchrony. Each model was paired with either modality-specific or modality-independent precision of arrival latency. A Bayesian model comparison revealed that both the causal-inference process and modality-specific precision in arrival latency are required to capture the nonlinearity and asymmetry observed in audiovisual temporal recalibration. Our findings support the hypothesis that audiovisual temporal recalibration relies on the same causal-inference processes that govern cross-modal perception.
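
As a concrete illustration of the model family described in this abstract, below is a minimal sketch of a causal-inference recalibration update. The function names, prior parameters, and learning-rate form are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def norm_pdf(x, mu, sigma):
    """Gaussian density, used for the measurement likelihoods."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def perceived_asynchrony(m, bias, sigma_a, sigma_v, p_common=0.5, sigma_sep=0.3):
    """Causal-inference percept of audiovisual asynchrony (hypothetical sketch).

    m:       measured asynchrony in seconds
    bias:    current audiovisual temporal bias
    sigma_a, sigma_v: modality-specific arrival-latency noise; a
             modality-independent variant would set sigma_a == sigma_v
    """
    sigma_m = np.sqrt(sigma_a**2 + sigma_v**2)  # noise of the asynchrony measurement
    x = m - bias                                # bias-corrected measurement
    # Likelihood of x if the stimuli share a cause (true asynchrony ~ 0)
    like_common = norm_pdf(x, 0.0, sigma_m)
    # Likelihood if they have separate causes (broad prior over asynchrony)
    like_separate = norm_pdf(x, 0.0, np.sqrt(sigma_m**2 + sigma_sep**2))
    post_common = (p_common * like_common) / (
        p_common * like_common + (1 - p_common) * like_separate)
    # Model averaging: asynchrony attributed to a common cause is explained
    # away; under separate causes the estimate shrinks toward the prior mean.
    shrink = sigma_sep**2 / (sigma_sep**2 + sigma_m**2)
    return (1 - post_common) * shrink * x

def update_bias(bias, m, lr=0.01, **noise):
    """Shift the temporal bias by a fraction of the perceived asynchrony
    (the sign convention and learning rate are assumptions)."""
    return bias + lr * perceived_asynchrony(m, bias, **noise)
```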


Humans use overconfident estimates of auditory spatial and temporal uncertainty for perceptual inference

December 2024 · 20 Reads

Making decisions based on noisy sensory information is a crucial function of the brain. Various decisions take each sensory signal's uncertainty into account. Here, we investigated whether perceptual inferences rely on accurate estimates of sensory uncertainty. Participants completed a set of auditory, visual, and audiovisual spatial as well as temporal tasks. We fitted Bayesian observer models of each task to every participant's complete dataset. Crucially, in some model variants the uncertainty estimates employed for perceptual inferences were independent of the actual uncertainty associated with the sensory signals. Model comparisons and analysis of the best-fitting parameters revealed that, in unimodal and bimodal contexts, participants' perceptual decisions relied on overconfident estimates of auditory spatial and audiovisual temporal uncertainty. These findings challenge the ubiquitous assumption that human behavior optimally accounts for sensory uncertainty regardless of sensory domain.
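
The key model feature, decoupling the uncertainty estimate used for inference from the actual measurement noise, can be sketched as follows; the parameter names and the simple Gaussian prior are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def localization_estimates(stimulus, sigma_true, sigma_assumed,
                           prior_mu=0.0, prior_sigma=10.0, n_trials=10_000):
    """Bayesian posterior-mean localization in which the observer's assumed
    sensory noise (sigma_assumed) may differ from the generative noise
    (sigma_true). Overconfidence corresponds to sigma_assumed < sigma_true:
    the prior is underweighted relative to the optimal observer."""
    m = stimulus + rng.normal(0.0, sigma_true, n_trials)      # noisy measurements
    w = prior_sigma**2 / (prior_sigma**2 + sigma_assumed**2)  # weight on the senses
    return w * m + (1.0 - w) * prior_mu

# An overconfident observer (assumed noise half the true noise) relies more
# heavily on the measurement than an observer with an accurate noise estimate:
accurate = localization_estimates(5.0, sigma_true=2.0, sigma_assumed=2.0)
overconfident = localization_estimates(5.0, sigma_true=2.0, sigma_assumed=1.0)
```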


Figure: Visual and auditory stimuli. (A) Low-contrast images of cars and human hands (two exemplars shown; six exemplars per image category were used in total) were presented, typically, to the non-dominant eye. Car engine or finger-snapping sounds were delivered concurrently with image presentation. (B) Six combinations of image and sound categories were tested. (C) Dichoptic screen viewed through a stereoscope. A high-contrast flickering mask was presented to the dominant eye. Suppression from awareness with continuous flash suppression was achieved by presenting the car or finger image to the other, non-dominant eye (82% of trials). In the non-suppressed control condition (18% of trials), the image was presented on top of the flickering mask and thus clearly visible.
Figure: Trial structure. After a mandatory fixation period of 200 ms, an image of a car or a hand was gradually faded in for 250 ms and presented for 1,000 ms to the non-dominant eye while a flickering mask was continuously presented to the dominant eye. Simultaneously with the onset of the visual image, a sound lasting for the whole duration of image presentation was played (except in the no-sound condition). The mask was displayed for an additional 200 ms to prevent aftereffects of the target image. After stimulus presentation, participants indicated the location and category of the target image, guessing if the image had been successfully suppressed, and rated the visibility of the target image.
Figure: Objective and subjective measures of awareness of target images suppressed from visual awareness using continuous flash suppression. (A) Proportion of successfully suppressed trials, i.e., trials rated as 'not seen at all', separately for each combination of visual target stimulus category (car vs. finger) and sound category (no sound vs. car sound vs. finger-snapping sound). Small markers show subject-level averages, large markers group averages. (B) Proportion of correct responses in the position task in successfully suppressed trials for each of the six stimulus categories. The dashed line indicates chance-level performance.
Figure: Spatial distribution of eye gaze during stimulus presentation. (A) Percentage of time gaze rested in each of the four spatial quadrants during the time interval starting 500 ms after full onset of the target image and ending at trial offset. Gaze coordinates were aligned such that the target center (gray dot) is in the upper right quadrant (orange square). (B) Dwell-time bias toward the target quadrant per target image and sound condition. The bias in gaze position was quantified by subtracting the mean percentage of time the gaze rested in one of the three non-target quadrants from the percentage of time the gaze rested on the target quadrant.
Auditory guidance of eye movements toward threat-related images in the absence of visual awareness
  • Article
  • Full-text available

August 2024 · 14 Reads

The human brain is sensitive to threat-related information even when we are not aware of this information. For example, fearful faces attract gaze in the absence of visual awareness. Moreover, information in different sensory modalities interacts in the absence of awareness; for example, the detection of suppressed visual stimuli is facilitated by simultaneously presented congruent sounds or tactile stimuli. Here, we combined these two lines of research and investigated whether threat-related sounds could facilitate visual processing of threat-related images suppressed from awareness such that they attract eye gaze. We suppressed threat-related images of cars and neutral images of human hands from visual awareness using continuous flash suppression and tracked observers’ eye movements while presenting congruent or incongruent sounds (finger snapping and car engine sounds). Indeed, threat-related car sounds guided the eyes toward suppressed car images: participants looked longer at the hidden car images than at any other part of the display. In contrast, neither congruent nor incongruent sounds had a significant effect on eye responses to suppressed finger images. Overall, our results suggest that only in a danger-related context do semantically congruent sounds modulate eye movements to images suppressed from awareness, highlighting the prioritisation of eye responses to threat-related stimuli in the absence of visual awareness.
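
The dwell-time bias defined in the last figure caption above is a simple difference score; here is a minimal sketch (the array layout and function name are hypothetical):

```python
import numpy as np

def dwell_time_bias(pct_per_quadrant, target_idx):
    """Percentage of time gaze rested in the target quadrant minus the mean
    percentage across the three non-target quadrants."""
    pct = np.asarray(pct_per_quadrant, dtype=float)  # length 4, sums to ~100
    nontarget_mean = np.delete(pct, target_idx).mean()
    return pct[target_idx] - nontarget_mean

# e.g., gaze split 40/20/20/20 across quadrants with the target first:
bias = dwell_time_bias([40.0, 20.0, 20.0, 20.0], target_idx=0)  # -> 20.0
```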


Uncertainty-based causal inference modulates audiovisual temporal recalibration

June 2024 · 5 Reads

Cross-modal temporal recalibration is crucial for maintaining coherent perception in a multimodal environment. The classic view suggests that cross-modal temporal recalibration aligns the perceived timing of sensory signals from different modalities, such as sound and light, to compensate for physical and neural latency differences. However, this view cannot fully explain the nonlinearity and asymmetry observed in audiovisual recalibration effects: the amount of recalibration plateaus with increasing audiovisual asynchrony and varies depending on the leading modality of the asynchrony during exposure. To address these discrepancies, our study examines the mechanism of audiovisual temporal recalibration through the lens of causal inference, considering the brain’s capacity to determine whether multimodal signals come from a common source and should be integrated, or else kept separate. In a three-phase recalibration paradigm, we manipulated the adapter stimulus-onset asynchrony in the exposure phase across nine sessions, introducing asynchronies up to 0.7 s of either auditory or visual lead. Before and after the exposure phase in each session, we measured participants’ perception of audiovisual relative timing using a temporal-order-judgment task. We compared models that assumed observers recalibrate to approach either the physical synchrony or the causal-inference-based percept, with uncertainties specific to each modality or comparable across them. Modeling results revealed that a causal-inference model incorporating modality-specific uncertainty captures both the nonlinearity and asymmetry of audiovisual temporal recalibration. Our results indicate that human observers employ causal-inference-based percepts to recalibrate cross-modal temporal perception.
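
Recalibration in this paradigm is read out as a shift of the point of subjective simultaneity in the temporal-order-judgment task; below is a minimal sketch of the standard cumulative-Gaussian psychometric function (the response mapping and parameter names are illustrative, not the authors' exact model):

```python
from math import erf, sqrt

def p_vision_first(soa, pss, sigma):
    """Probability of reporting 'vision first' for a given stimulus-onset
    asynchrony (soa; visual lead positive), modeled as a cumulative Gaussian
    centered on the point of subjective simultaneity (pss). Recalibration
    appears as a pre- to post-exposure change in pss."""
    return 0.5 * (1.0 + erf((soa - pss) / (sigma * sqrt(2.0))))
```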





Citations (35)


... Twenty participants (17 females, mean age 23.5 years) were included in the final analysis. We determined the sample size based on simulations of our statistical model (see Cary et al., 2024 for an example). Data of three additional participants were excluded from all analyses; for one of them, continuous flash suppression worked in only 37% of trials, resulting in sparse eye data (<75%). ...

Reference:

Auditory guidance of eye movements toward threat-related images in the absence of visual awareness
Audiovisual simultaneity windows reflect temporal sensory uncertainty

Psychonomic Bulletin & Review

... Thus, models must infer the causal structure of the sensory signals [27,32,49,51–60] to draw conclusions about the estimates of sensory uncertainty underlying perceptual inference (Fig. 2B). Sensory biases affect causal inference [61], and priors over stimulus properties influence integration, stressing the need to account for potential tradeoffs between model parameters when capturing the complexity of the observer's world model. ...

Multisensory causal inference is feature-specific, not object-based

... Accordingly, semantically congruent information in different modalities is more likely to be integrated than incongruent information (Dolan et al., 2001; Doehrmann and Naumer, 2008; Noppeney et al., 2008). Although multisensory integration per se requires awareness of the integrated information (Palmer and Ramsey, 2012; Montoya and Badde, 2023), aware information in one modality can facilitate the breakthrough into awareness for suppressed, congruent information presented in another modality. For example, congruent sounds can facilitate the detection and identification of visual stimuli suppressed from awareness (Chen and Spence, 2010; Conrad et al., 2010; Chen et al., 2011; Alsius and Munhall, 2013; Lunghi et al., 2014; Aller et al., 2015; Lee et al., 2015; Delong and Noppeney, 2021), as do congruent haptic stimuli (Lunghi et al., 2010; Hense et al., 2019). ...

Only visible flicker helps flutter: Tactile-visual integration breaks in the absence of visual awareness
  • Citing Article
  • June 2023

Cognition

... During the interaction with external objects, we also rely on an intrinsic model of the body structure to mediate an understanding of position 28,29. Tactile perception relies on prior knowledge rather than accurate spatiotopic representations based on current sensory input 30. Other evidence suggests that the brain employs a standard posture or a Bayesian prior for guiding body-space perception and action 31. ...

The hands’ default location guides tactile spatial selectivity
  • Citing Article
  • April 2023

Proceedings of the National Academy of Sciences

... However, in these models, sensory uncertainty and priors trade off. Even if the prior is learned during the experiment, it might be learned imprecisely [25–27], and its final representation is inaccessible to the experimenter, as are estimates of sensory uncertainty employed for perceptual inference. ... sound, you spot movement behind a nearby bush. ...

Repeated exposure to either consistently spatiotemporally congruent or consistently incongruent audiovisual stimuli modulates the audiovisual common-cause prior

... Similar to our finding, observers take their sensory uncertainty into account when setting subjective criteria in tasks in which they indicate their confidence in their own perceptual performance (Denison et al., 2018; Fleming & Daw, 2017; Locke et al., 2022; Mamassian, 2011). And observers optimally account for their sensory (Badde, Navarro, et al., 2020b; Ernst & Banks, 2002; Hong et al., 2021; Körding et al., 2007; Trommershäuser et al., 2011) and motor (Faisal & Wolpert, 2009; Hudson et al., 2010; Zhang et al., 2013) uncertainty in a multitude of perceptual tasks that do not contain a subjective component. Theoretical models suggest that sensory uncertainty is encoded in the population-level responses of neurons in sensory cortices (Ma et al., 2006; Ma & Jazayeri, 2014), and newer fMRI methods decode sensory uncertainty in the BOLD signals from early sensory cortices (Van Bergen et al., 2015). ...

Causal inference regulates audiovisual spatial recalibration via its influence on audiovisual perception

... 29 A study found that ionic edema, a subtype of cerebral edema, also occurs in the brains of healthy individuals exposed to 16 h of normobaric hypoxia, mimicking a sudden ascent to a high altitude of 4500 meters. 30 In addition, inflammatory activation may also be involved in HACE. 31 Hypoxia-activated microglia upregulate NRF1, which induces an inflammatory response by transcriptionally activating NF-κB, p65, and mitochondrial transcription factor A. This process compromises BBB integrity and releases pro-inflammatory factors, ultimately inducing HACE. 1 Maintaining osmotic pressure within the BBB, neurons, and glial cells is an energetically demanding process. ...

Exposure to 16 h of normobaric hypoxia induces ionic edema in the healthy brain

... Both plausible scenarios, doubts about the shared origin of the sensory signals and employing inaccurate estimates of sensory uncertainty, will lead to sub-optimal cue combination. Thus, models must infer the causal structure of the sensory signals [27,32,49,51–60] to draw conclusions about the estimates of sensory uncertainty underlying perceptual inference (Fig. 2B). Sensory biases affect causal inference [61], and priors over stimulus properties influence integration, stressing the need to account for potential tradeoffs between model parameters when capturing the complexity of the observer's world model. ...

Causal inference and the evolution of opposite neurons

Proceedings of the National Academy of Sciences

... A study by Puckett et al. (2019) used somatosensory stimulation of the four digit-tips together with a Bayesian analysis framework to estimate pRF digit maps (n = 6). In that study, a 1D Gaussian profile of spatial tuning across the digit-tips was assumed to estimate the location and pRF size of voxels in S1. Schellekens et al. (2021) also stimulated the digit-tips and used a 1D Gaussian model to assess pRF size within Brodmann areas defined from a Freesurfer atlas (Fischl 2012). They showed pRF sizes were smallest in BA3 (rostral wall of the postcentral gyrus), increased slightly towards BA1 (crown of the postcentral gyrus), and were largest in BA2 (caudal wall at base of the postcentral gyrus) (n = 8). ...

A touch of hierarchy: population receptive fields reveal fingertip integration in Brodmann areas in human primary somatosensory cortex

Brain Structure and Function
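
For context, the 1D Gaussian pRF profile described in the snippet above reduces to a single tuning curve over digit positions; here is a minimal sketch under that assumption (the names and example values are illustrative):

```python
import numpy as np

def prf_response(digit_positions, prf_center, prf_size):
    """Predicted response of one voxel to stimulation of each digit-tip,
    given its preferred digit (prf_center) and Gaussian tuning width
    (prf_size), as in a 1D Gaussian pRF model."""
    d = np.asarray(digit_positions, dtype=float)
    return np.exp(-0.5 * ((d - prf_center) / prf_size) ** 2)

# e.g., a voxel tuned to digit 2 with width 0.8, probed with digits 1-4:
resp = prf_response([1, 2, 3, 4], prf_center=2.0, prf_size=0.8)
```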

... While the exact computational rules that govern the magnitude of crossmodal shifts (βp, βv) remain an active area of research (Hong et al., 2020), we assume that the perceptual shifts follow three general principles based on observations reported in the previous literature: ...

Audiovisual Recalibration and Stimulus Reliability
  • Citing Article
  • October 2020

Journal of Vision