Figure 12.1 Schematic representation of the likelihood functions of the individual visual and haptic size estimates Ŝ_V and Ŝ_H and of the combined visual-haptic size estimate Ŝ_VH, which is a weighted average according to Eq. 12.1. The variance associated with the visual-haptic distribution is less than that of either of the two individual estimates (Eq. 12.3). (Adapted from Ernst & Banks, 2002.)
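Eqs. 12.1 and 12.3 are not reproduced on this page, but the rule the caption describes is the standard reliability-weighted average. A minimal sketch, assuming Gaussian likelihoods; the variable names are illustrative, not taken from the chapter:

```python
def integrate(s_v, var_v, s_h, var_h):
    """Reliability-weighted combination of visual and haptic size estimates."""
    r_v, r_h = 1.0 / var_v, 1.0 / var_h          # reliability = inverse variance
    w_v = r_v / (r_v + r_h)                      # visual weight (the Eq. 12.1 form)
    s_vh = w_v * s_v + (1.0 - w_v) * s_h         # combined estimate
    var_vh = (var_v * var_h) / (var_v + var_h)   # combined variance (the Eq. 12.3 form)
    return s_vh, var_vh

# Example: var_vh is ~0.27, below both 0.4 and 0.8, as the caption notes.
s_vh, var_vh = integrate(s_v=5.2, var_v=0.4, s_h=4.6, var_h=0.8)
```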

Source publication
Chapter (full-text available)
The brain receives information about the environment from all the sensory modalities, including vision, touch, and audition. To interact efficiently with the environment, this information must eventually converge to form a reliable and accurate multimodal percept. This process is often complicated by the existence of noise at every level of signal...

Citations

... In contrast, reweighting pertains to the idea that the contribution of a given modality to a common, integrated location estimate is modified, resulting in a bias towards or away from that modality after assigning it more or less weight, respectively (Limanowski, 2021). These mechanisms are not mutually exclusive because they can occur simultaneously or one after the other (Ernst & Di Luca, 2011). ...
... In line with the principle of bidirectional influence, the visual stimulus is also perceptually biased towards the auditory (Alais & Burr, 2004) or tactile stimulus (Samad & Shams, 2018), respectively. Explanations of these effects have drawn both on sensory recalibration (Ernst & Di Luca, 2011) and on optimal integration (Alais & Burr, 2004), bearing obvious conceptual analogies to experiments involving bodily illusions such as the RHI (Kilteni et al., 2015). ...
Preprint (full-text available)
When concurrent visual and tactile stimuli are repeatedly presented with a spatial offset, even unisensory tactile stimuli are afterwards perceived with a spatial bias towards the previously presented visual stimuli. This so-called visuotactile ventriloquism aftereffect reflects visuotactile recalibration. It is unknown whether this recalibration occurs within a bodily map and interacts with perceived features like shape and size of body parts. Here, we applied tactile stimuli to participants' hidden left hand and simultaneously presented visual stimuli with spatial offsets that, if integrated with the tactile stimuli, implied an enlarged hand size. We either used a fixed spatial mapping between tactile and visual positions ("congruent"), or a scrambled ("incongruent") mapping. We assessed implicitly perceived hand size via two independent behavioral assessments: pointing movements to unisensory tactile stimuli and tactile distance judgments. Moreover, we assessed explicitly perceived change in hand size with perceptual self-reports. Especially after congruent recalibration, participants localized unimodal tactile stimuli as if they were aiming at an enlarged hand. They also reported tactile distance as shorter after congruent than incongruent recalibration. These modulations resemble those obtained after using tools that prolong the arm and extend reaching space; they suggest that recalibration affected a common, implicit hand representation that underlies both tasks. In contrast, explicit perceptual self-reports did not differ significantly between congruent and incongruent recalibration. Thus, simple visuotactile stimuli are sufficient to modify implicitly perceived body size, indicating a tight link between low-level multisensory processes such as the visuotactile ventriloquism aftereffect and body representation.
... Furthermore, recalibration emerges both in the presence and the absence of explicit feedback about the accuracy of individual judgements (Adams et al., 2010; Ernst and Di Luca, 2011; Zaidel et al., 2011; Zaidel et al., 2013). Importantly, the presence or absence of feedback may also differentiate functionally distinct forms of recalibration. ...
Preprint (full-text available)
Multisensory integration and recalibration are two processes by which perception deals with discrepant multisensory signals. In the lab, integration can be probed via the presentation of simultaneous spatially discrepant audio-visual stimuli, i.e., in a spatial ventriloquism paradigm. Recalibration here manifests as an aftereffect bias in unisensory judgements following immediate or long-term exposure to discrepant audio-visual stimuli. Despite many studies exploring the ventriloquism effect (VE) and the immediate aftereffect (VAE), it remains unclear whether the VAE is a direct consequence of the integration of preceding discrepant multisensory signals. Here we provide evidence that these two biases are not strictly related. First, we analysed data from ten experiments probing the dependence of each bias on audio-visual discrepancies experienced in multiple preceding trials. This revealed a seven-fold stronger dependence of the VAE on preceding discrepancies compared to the VE. Second, we analysed data from an experiment deviating from the typical trial context in which these biases are probed and found that the VAE can vanish despite the VE being present. We argue that integration may help maintain a stable percept by reducing immediate sensory discrepancies, whereas recalibration may help maintain an accurate percept by accounting for consistent discrepancies. Hence, the immediate VAE is not a direct and necessary consequence of the integration of discrepant signals, and these two well-studied multisensory response biases can be experimentally dissociated.
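To make the two measures concrete, here is a hypothetical sketch of how the VE and the immediate VAE are commonly quantified from localization responses; the array names and the normalization by the discrepancy are illustrative assumptions, not the authors' exact analysis:

```python
import numpy as np

def ventriloquism_effect(resp_av, loc_a, disc):
    """VE: shift of auditory localization toward vision on audio-visual trials,
    expressed as a fraction of the audio-visual discrepancy on each trial."""
    return np.mean((resp_av - loc_a) / disc)

def ventriloquism_aftereffect(resp_post, resp_pre, disc):
    """VAE: shift of unisensory auditory localization after exposure,
    relative to the discrepancy experienced on the preceding trials."""
    return np.mean((resp_post - resp_pre) / disc)
```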
... [22][23][24][25] Intermittent exposure may result in nontraditional forms of adaptation such as cue reweighting, with multiple exposures leading the visual system to reinterpret the trustworthiness or reliability of cues. [26][27][28][29][30][31][32][33] Indeed, intermittent and continuous adaptation pose different challenges to the visual system that may necessitate different mechanisms of adaptation. ...
Article (full-text available)
Purpose: To examine perceptual adaptation when people wear spectacles that produce unequal retinal image magnification. Methods: Two groups of 15 participants (10 male; mean age 25.6 ± 4.9 years) wore spectacles with a 3.8% horizontal magnifier over one eye. The continuous-wear group wore the spectacles for 5 hours straight. The intermittent-wear group wore them for five 1-hour intervals. To measure slant and shape distortions produced by the spectacles, participants adjusted visual stimuli until they appeared frontoparallel or equiangular, respectively. Adaptation was quantified as the difference in responses at the beginning and end of wearing the spectacles. Aftereffects were quantified as the difference before and after removing the spectacles. We hypothesized that intermittent wear may lead to visual cue reweighting, so we fit a cue combination model to the data and examined changes in weights given to perspective and binocular disparity slant cues. Results: Both groups experienced significant shape adaptation and aftereffects. The continuous-wear group underwent significant slant adaptation and the intermittent group did not, but there was no significant difference between groups, suggesting that the difference in adaptation was negligible. There was no evidence for cue reweighting in the intermittent wear group, but unexpectedly, the weight given to binocular disparity cues for slant increased significantly in the continuous-wear group. Conclusions: We did not find strong evidence that adaptation to spatial distortions differed between the two groups. However, there may be differences in the cue weighting strategies employed when spectacles are worn intermittently or continuously.
... The image of the two footprints on the right is a 180-degree rotation of the image on the left, but the perception of the shape of the footprints is markedly different. Adapted from Ernst & Di Luca (2011). ...
Preprint (full-text available)
Predictive coding is a unifying framework for understanding perception, action and neocortical organization. In predictive coding, different areas of the neocortex implement a hierarchical generative model of the world that is learned from sensory inputs. Cortical circuits are hypothesized to perform Bayesian inference based on this generative model. Specifically, the Rao-Ballard hierarchical predictive coding model assumes that the top-down feedback connections from higher to lower order cortical areas convey predictions of lower-level activities. The bottom-up, feedforward connections in turn convey the errors between top-down predictions and actual activities. These errors are used to correct current estimates of the state of the world and generate new predictions. Through the objective of minimizing prediction errors, predictive coding provides a functional explanation for a wide range of neural responses and many aspects of brain organization.
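A minimal sketch may make the inference step of a Rao-Ballard-style linear predictive coding model concrete; the dimensions, the linear generative model, and the learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)                # sensory input (e.g., an image patch)
W = rng.normal(size=(16, 4)) / 4.0     # top-down generative weights: prediction = W @ r
r = np.zeros(4)                        # latent state estimated by the higher area

lr = 0.1
for _ in range(200):
    e = x - W @ r                      # bottom-up (feedforward) prediction error
    r += lr * (W.T @ e - r)            # error-driven correction with a Gaussian prior on r
# After inference, W itself can be learned Hebbian-style, e.g. W += lr_w * np.outer(e, r).
```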
... Thus, the best prediction these models could produce for either monotonically or non-monotonically increasing auditory recalibration effects with decreasing visual reliability was no influence of stimulus reliability (Fig 8B, right panel). The observed influences of cue reliability on recalibration are also at odds with models of recalibration that assume the amount of recalibration relies only on the identity of the two modalities in conflict [61,64]. These models predict no influence of stimulus reliability on recalibration. ...
Article (full-text available)
To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.
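To contrast the three candidate models, here is a hedged sketch of the per-exposure auditory shift each would predict. The functional forms, the learning rate alpha, and passing p_common in as a precomputed posterior probability of a common cause (which would itself follow from Bayesian causal inference over the measurements) are all illustrative assumptions:

```python
def reliability_based(m_a, m_v, var_a, var_v, alpha=0.1):
    """(a) Less reliable cues are recalibrated more."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)   # auditory reliability weight
    return alpha * (1 - w_a) * (m_v - m_a)        # auditory shift per exposure

def fixed_ratio(m_a, m_v, alpha=0.1, ratio=1.0):
    """(b) A fixed fraction of the discrepancy, independent of reliability."""
    return alpha * ratio * (m_v - m_a)

def causal_inference(m_a, m_v, var_a, var_v, p_common, alpha=0.1):
    """(c) Recalibrate toward the final causal-inference estimate of the
    auditory location (model-averaged over common/independent causes)."""
    s_int = (m_a / var_a + m_v / var_v) / (1 / var_a + 1 / var_v)  # integrated estimate
    s_final = p_common * s_int + (1 - p_common) * m_a              # model average
    return alpha * (s_final - m_a)                                 # shift toward final estimate
```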
... As a result, it is now accepted that bias needs to be accounted for in any explanation of how observers combine cues to make perceptual estimates (Ernst & Di Luca, 2011; Scarfe & Hibbard, 2011). The etiology of perceptual bias and how this might be incorporated into models of sensory cue combination is an area of active debate (Di Luca et al., 2010; Domini & Caudek, 2009; Domini et al., 2006; Saunders & Chen, 2015; Scarfe & Hibbard, 2011; Tassinari & Domini, 2008; Todd, 2015; Todd et al., 2010). ...
Article (full-text available)
When we move, the visual direction of objects in the environment can change substantially. Compared with our understanding of depth perception, the problem the visual system faces in computing this change is relatively poorly understood. Here, we tested the extent to which participants' judgments of visual direction could be predicted by standard cue combination rules. Participants were tested in virtual reality using a head-mounted display. In a simulated room, they judged the position of an object at one location, before walking to another location in the room and judging, in a second interval, whether an object was at the expected visual direction of the first. By manipulating the scale of the room across intervals, which was subjectively invisible to observers, we put two classes of cue into conflict, one that depends only on visual information and one that uses proprioceptive information to scale any reconstruction of the scene. We find that the sensitivity to changes in one class of cue while keeping the other constant provides a good prediction of performance when both cues vary, consistent with the standard cue combination framework. Nevertheless, by comparing judgments of visual direction with those of distance, we show that judgments of visual direction and distance are mutually inconsistent. We discuss why there is no need for any contradiction between these two conclusions.
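The "standard cue combination framework" prediction referenced here can be stated compactly: if each cue alone yields a discrimination threshold proportional to its estimator's standard deviation, optimal combination predicts the two-cue threshold by inverse-variance summation. A minimal sketch with illustrative numbers:

```python
def predicted_combined_threshold(t1, t2):
    """Optimal-combination prediction for the two-cue discrimination threshold."""
    return (t1 ** -2 + t2 ** -2) ** -0.5

print(predicted_combined_threshold(2.0, 3.0))  # ~1.66, below both single-cue thresholds
```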
... Thus, the best prediction these models could produce for either monotonically or non-monotonically increasing recalibration gain with decreasing stimulus reliability of the other modality was no influence of stimulus reliability (Fig 5B, right panel). The observed recalibration gains are also at odds with models of recalibration that assume the amount of recalibration relies only on the identity of the two modalities in conflict [61,68]. These models predict no influence of stimulus reliability on recalibration. ...
Preprint (full-text available)
To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying reliability. Visual spatial reliability was smaller than, comparable to, and greater than that of the auditory stimuli. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During audiovisual recalibration, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the recalibration phase. We compared participants' behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability; less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its final estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, first increased and then decreased, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.

Author summary: Audiovisual recalibration of spatial perception occurs when we receive audiovisual stimuli with a systematic spatial discrepancy. The brain must determine to what extent both modalities should be recalibrated. In this study, we scrutinized the mechanisms the brain employs to do so. To this end, we conducted a classical recalibration task in which participants were adapted to spatially discrepant audiovisual stimuli. The visual component of the bimodal stimulus was either less, equally, or more reliable than the auditory component. We measured the amount of recalibration by computing the difference between participants' unimodal localization responses before and after the recalibration task. Across participants, the influence of visual reliability on auditory recalibration varied fundamentally. We compared three models of recalibration. Only a causal-inference model of recalibration captured the diverse influences of cue reliability on recalibration found in our study, and this model is able to replicate contradictory results found in previous studies. In this model, recalibration depends on the discrepancy between a cue and its final estimate. Cue reliability, perceptual biases, and the degree to which participants infer that the two cues come from a common source govern audiovisual perception and therefore audiovisual recalibration.
... To minimize uncertainty, the brain needs to make maximal use of the available information, such as knowledge about previously experienced events and the present sensory inputs. The uncertainty can be optimally reduced (to a minimum) when this information is integrated according to its reliability (Ernst & Di Luca, 2011; Taubert et al., 2016). The Bayesian-Estimator model proposed in the current study makes two adjustments in evaluating the sources of uncertainty arising from both stages of the task (duration production and reproduction), according to different temporal contexts: First, based on the fact that subjective duration can differ between different modalities (e.g., Wearden et al., 1998) and temporal context, we assume that the prior itself can be biased. ...
Article (full-text available)
The coefficient of variation (CV), also known as relative standard deviation, has been used to measure the constancy of the Weber fraction, a key signature of efficient neural coding in time perception. It has long been debated whether duration judgments follow Weber's law, with arguments based on examinations of the CV. However, what has been largely ignored in this debate is that the observed CVs may be modulated by temporal context and decision uncertainty, thus questioning conclusions based on this measure. Here, we used a temporal reproduction paradigm to examine the variation of the CV with two types of temporal context: full-range mixed vs. sub-range blocked intervals, separately for intervals presented in the visual and auditory modalities. We found a strong contextual modulation of both interval-duration reproductions and the observed CVs. We then applied a two-stage Bayesian model to predict those variations. Without assuming a violation of the constancy of the Weber fraction, our model successfully predicted the central-tendency effect and the variation in the CV. Our findings and modeling results indicate that both the accuracy and precision of our timing behavior are highly dependent on the temporal context and decision uncertainty. Critically, they advise caution in using variations of the CV to reject the constancy of the Weber fraction in duration estimation.
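The two-stage Bayesian account can be illustrated with a small simulation: a noisy measurement whose variance scales with duration (a constant Weber fraction) is combined with a context-dependent prior, and reproduction adds a second noise stage; the resulting CVs then vary with the prior even though the underlying Weber fraction is fixed. All parameter values and the Gaussian forms below are assumptions for the sketch, not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)
w_m, w_p = 0.15, 0.10                              # measurement/production Weber fractions
durations = np.linspace(0.4, 1.2, 5)               # sample intervals (s)
prior_mu, prior_var = durations.mean(), 0.2 ** 2   # context-dependent prior

cvs = []
for d in durations:
    var_m = (w_m * d) ** 2                   # scalar (Weber-like) measurement noise
    m = rng.normal(d, w_m * d, size=2000)    # stage 1: noisy measurements
    # Posterior mean shrinks toward the prior -> central-tendency effect
    post = (m / var_m + prior_mu / prior_var) / (1 / var_m + 1 / prior_var)
    repro = rng.normal(post, w_p * post)     # stage 2: noisy reproduction
    cvs.append(repro.std() / repro.mean())
# cvs varies across durations although w_m is constant: the observed CV is
# shaped by temporal context and decision noise, not a changing Weber fraction.
```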
... It is less clear, however, according to which principles these different spatial codes are employed. Both bottom-up features such as the availability of sensory information (Bernier and Grafton, 2010) and the spatial reliability of a sensory channel (Ernst and Banks, 2002; van Beers et al., 2002), as well as top-down information such as task constraints (Schubert et al., 2017), action context (Mueller and Fiehler, 2014b), and cognitive load (Badde et al., 2014) can affect the relative contributions of different reference frames, presumably in a weighted manner (Angelaki et al., 2009; Atsma et al., 2016; Ernst and Di Luca, 2011; Kayser and Shams, 2015; Lohmann and Butz, 2017; Tramper and Medendorp, 2015). Yet, whereas there is widespread consensus that each spatial code can have more or less influence depending on the specific situation, it is currently not known whether all putative codes are always constructed, or whether they are only computed based on demand. ...
Article (full-text available)
When humans indicate on which hand a tactile stimulus occurred, they often err when their hands are crossed. This finding seemingly supports the view that the automatically determined touch location in external space affects limb assignment: the crossed right hand is localized in left space, and this conflict presumably provokes hand assignment errors. Here, participants judged on which hand the first of two stimuli, presented during a bimanual movement, had occurred, and then indicated its external location by a reach-to-point movement. When participants incorrectly chose the hand stimulated second, they pointed to where that hand had been at the correct, first time point, though no stimulus had occurred at that location. This behavior suggests that stimulus localization depended on hand assignment, not vice versa. It is, thus, incompatible with the notion of automatic computation of external stimulus location upon occurrence. Instead, humans construct external touch location post-hoc and on demand.