Mark Wexler's research while affiliated with Université Paris Nanterre and other places

Publications (65)

Article
Multiple studies have shown that certain visual stimuli are perceived in accordance with strong biases that are both robust within individuals and highly variable from one individual to the next. These biases undergo small changes over time that demonstrate that they constitute latent states of the visual system. The literature to date indicates th...
Preprint
Probes flashed within a moving frame are dramatically displaced (Özkan et al., 2021; Wong & Mack, 1981). The effect is much larger than that seen on static or moving probes (induced motion; Duncker, 1929; Wallach et al., 1978). These flashed probes are often perceived with the separation they have in frame coordinates, a 100% effect. Here we explor...
Article
To capture where things are and what they are doing, the visual system may extract the position and motion of each object relative to its surrounding frame of reference [K. Duncker, Routledge and Kegan Paul, London 161–172 (1929) and G. Johansson, Acta Psychol (Amst.) 7, 25–79 (1950)]. Here we report a particularly powerful example where a paradoxi...
Article
Full-text available
An outstanding challenge for consciousness research is to characterize the neural signature of conscious access independently of any decisional processes. Here we present a model-based approach that uses inter-trial variability to identify the brain dynamics associated with stimulus processing. We demonstrate that, even in the absence of any task o...
Preprint
Full-text available
To capture where things are and what they are doing, the visual system may extract the position and motion of each object relative to its surrounding frame of reference (e.g., Johansson, 1950; Duncker, 1929). Here we report a particularly powerful example where a paradoxical stabilization is produced by a moving frame. We first take a frame that mo...
Article
Eye blinks strongly attenuate visual input, yet we perceive the world as continuous. How this visual continuity is achieved remains a fundamental and unsolved problem. A decrease in luminance sensitivity has been proposed as a mechanism but is insufficient to mask the even larger decrease in luminance because of blinks. Here we put forward a differ...
Article
In the visual quartet, alternating diagonal pairs of dots produce apparent motion horizontally or vertically, depending on proximity. Here, we studied a tactile quartet where vibrating tactors were attached to the thumbs and index fingers of both hands. Apparent motion was felt either within hands (from index finger to thumb) or between hands. Part...
Article
Full-text available
The shape of objects is typically identified through active touch. The accrual of spatial information by the hand over time requires the continuous integration of tactile and movement information. Sensory inputs arising from a single sensory source give rise to an infinite number of possible touched locations in space. This observation...
Article
Full-text available
Saccades are crucial to visual information intake by re-orienting the fovea to regions of interest in the visual scene. However, they cause drastic disruptions of the retinal input by shifting the retinal image at very high speeds. The resulting motion and smear are barely noticed, a phenomenon known as saccadic omission. Here, we studied the perce...
Article
Full-text available
The extraction of spatial information by touch often involves exploratory movements, with tactile and kinesthetic signals combined to construct a spatial haptic percept. However, the body has many tactile sensory surfaces that can move independently, giving rise to the source binding problem: when there are multiple tactile signals originating from...
Article
Full-text available
Static visual stimuli are smeared across the retina during saccades, but in normal conditions this smear is not perceived. Instead, we perceive the visual scene as static and sharp. However, retinal smear is perceived if stimuli are shown only intrasaccadically, but not if the stimulus is additionally shown before a saccade begins, or after the sac...
Article
Studies of perception usually emphasize processes that are largely universal across observers and, except for short-term fluctuations, stationary over time. Here we test the universality and stationarity assumptions with two families of ambiguous visual stimuli. Each stimulus can be perceived in two different ways, parameterized by two opposite direc...
Article
Full-text available
We continually move our body and our eyes when exploring the world, causing our sensory surfaces, the skin and the retina, to move relative to external objects. In order to estimate object motion consistently, an ideal observer would transform estimates of motion acquired from the sensory surface into fixed, world-centered estimates, by taking the...
Article
Visual perception appears nearly continuous despite frequent disruptions of the input caused by saccades and blinks. While research has focused on perception around saccades, blinks also constitute a drastic perturbation of vision, leading to a complete interruption of the visual input that lasts about 100 ms but that we barely notice. Does the visu...
Article
Observers often do not notice when a visual target is displaced during a saccade (Bridgeman et al., 1975). Instead, if the displacement is repeated on several saccades, the oculomotor system adapts so that saccades land near the displaced target (McLaughlin, 1967). Other disruptions of the visual input may also require a recalibration of gaze. Spec...
Article
We have previously (Wexler & Mamassian, VSS 2014) reported that there exist two independent biases related to the perception of 3D shape and motion from optic flow. These biases are both robust within observers, and highly variable across observers. Here we present a quantitative analysis of time series of these two variables, sampled once a day ov...
Article
Full-text available
Although motor actions can profoundly affect the perceptual interpretation of sensory inputs, it is not known whether the combination of sensory and movement signals occurs only for sensory surfaces undergoing movement or whether it is a more general phenomenon. In the haptic modality, the independent movement of multiple sensory surfaces poses a c...
Article
Observers show idiosyncratic biases in the perception of multistable stimuli, such as in the perception of tilt from structure-from-motion stimuli and in the perception of depth order in motion transparency. By measuring biases across a sampling of all surface tilts and motion directions, we show the perception of these stimuli in a vast majority o...
Presentation
Full-text available
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 8619, pp. 427-429
Article
Although the retinal position of objects changes with each saccadic eye movement, we perceive the visual world to be stable. How this visual stability or constancy arises is debated. Cancellation accounts propose that the retinal consequences of eye movements are compensated for by an equal-but-opposite eye movement signal. Assumption accounts prop...
Poster
When we move a finger along an object with the eyes closed, we can sometimes identify its shape, size and orientation in space. However, the information available at every moment only includes the sensation corresponding to a small part of the object. To perceive the spatial properties of the entire object the brain must match the information about...
Article
Full-text available
When human observers are exposed to even slight motion signals followed by brief visual transients (stimuli containing no detectable coherent motion signals), they perceive large and salient illusory jumps. This visually striking effect, which we call "high phi," challenges well-entrenched assumptions about the perception of motion, namely the minimal...
Article
Full-text available
Perceiving three-dimensional object motion while moving through the world is hard: not only must optic flow be segmented and parallax resolved into shape and motion, but also observer motion needs to be taken into account in order to perceive absolute, rather than observer-relative motion. In order to simplify the last step, it has recently been su...
Article
Full-text available
Different attention and saccade control areas contribute to space constancy by remapping target activity onto their expected post-saccadic locations. To visualize this dynamic remapping, we used a technique developed by Honda (2006) where a probe moved vertically while participants made a saccade across the motion path. Observers do not report any...
Article
Full-text available
People often perform spontaneous body movements during spatial tasks such as giving complex directions or orienting themselves on maps. How are these spontaneous gestures related to spatial problem-solving? We measured spontaneous movements during a perspective-taking task inspired by map reading. Analyzing the motion data to isolate rotation and t...
Poster
In vision, spatial constancy is the phenomenon that when our eyes or body move, we perceive objects in an external or spatiotopic reference frame, independent of our own movement, even though visual information is initially retinotopic. Spatial constancy seems to require some sort of compensation of retinotopic signals to take into account the obse...
Conference Paper
Full-text available
We propose extending the concept of spatial constancy to haptic perception. In vision, spatial constancy refers to the conversion of retinotopic signals into spatiotopic representations, allowing the observer to perceive space independently of his or her own eye movements, or at least partly so. The problem would seem at least as important in hapti...
Poster
How does a moving observer perceive the motion of 3D objects in an external reference frame? Even in the case of a stationary observer, one can perceive object motion either by estimating the movements of each object independently or by using the heuristic of considering the background as stationary [Rushton and Warren, 2005 Current Biology 15(14)...
Article
Optic flow has traditionally been assumed to be the input to structure-from-motion (SFM). To test this assumption we compared the perception of 3D structure from optic flow, actively generated by observer head translation around a stationary object, to the same optic flow, but as experienced by a non-moving observer. For stimuli composed of an Ames...
Article
Whenever an eye movement is made, several visual areas (e.g., LIP, FEF, CS) help maintain visual stability by shifting the activation for currently attended targets to the locations these targets will have after a saccade. The temporal dynamics of this anticipatory activation, known as remapping, can be visualized with a continuous motion stimulus...
Article
Full-text available
Where we look when we scan visual scenes is an old question that continues to inspire both fundamental and applied research. Recently, it has been reported that depth is an important variable in driving eye movements: the directions of spontaneous saccades tend to follow depth gradients, or, equivalently, surface tilts (L. Jansen, S. Onat, & P. Kön...
Article
Full-text available
We make eye movements continuously to obtain an understanding of the 3D world around us, but how does the visual system plan these scanning movements of the environment? Here we study what 3D visual information is used to plan saccades. Cue conflict studies have shown that the visual system combines cues in a statistically optimal way for perceptio...
Article
Smooth pursuit adds a velocity field to the retinal image in the direction opposite the eye's movement. To correctly perceive an object's physical motion, the visual system must compensate for this self-induced motion. Compensation is usually assumed to involve combining the retinal signal with an extra-retinal eye velocity estimate. According to the linear mode...
Article
It is commonly assumed that size constancy (invariance of the perceived size of objects as their retinal size changes with distance) depends solely on retinal stimulation and vergence, but on no other action-related signals. Distance to an object can change through displacement of either the observer or the object. The common assumption pr...
Article
Full-text available
Perceptual aftereffects provide a sensitive tool to investigate the influence of eye and head position on visual processing. There have been recent indications that the TAE is remapped around the time of a saccade to remain aligned to the adapting location in the world. Here, we investigate the spatial frame of reference of the TAE by independently...
Article
To perceive object motion when the eyes themselves undergo smooth movement, we can either perceive motion directly, by extracting motion relative to a background presumed to be fixed, or through compensation, by correcting retinal motion with information about eye movement. To isolate compensation, we created stimuli in which, while the eye undergoes s...
Article
Understanding how we spontaneously scan the visual world through eye movements is crucial for characterizing both the strategies and inputs of vision. Despite the importance of the third or depth dimension for perception and action, little is known about how the specifically three-dimensional aspects of scenes affect looking behavior. Here we show...
Chapter
We use multiple sensory modalities to perceive our environment. One of these is optic flow, the displacement and deformation of the image on the retina. It is generally caused by a relative motion between an observer and the objects in the visual scene. As optic flow depends largely on three-dimensional (3D ) shapes and motions, it can be used to e...
Article
Full-text available
Human observers can perceive the three-dimensional (3-D) structure of their environment using various cues, an important one of which is optic flow. The motion of any point's projection on the retina depends both on the point's movement in space and on its distance from the eye. Therefore, retinal motion can be used to extract the 3-D structure of...
Article
The connection between perception and action has classically been studied in one direction only: the effect of perception on subsequent action. Although our actions can modify our perceptions externally, by modifying the world or our view of it, it has recently become clear that even without this external feedback the preparation and execution of a...
Article
Rapid eye movements called saccades give rise to sudden, enormous changes in optic information arriving at the eye; how the world nonetheless appears stable is known as the problem of spatial constancy. One consequence of saccades is that the directions of all visible points shift uniformly; directional or 2D constancy, the fact that we do not perc...
Article
To perceive the real motion of objects in the world while moving the eyes, retinal motion signals must be compensated by information about eye movements. Here we study when this compensation takes place in the course of visual processing, and whether uncompensated motion signals are ever available. We used a paradigm based on asymmetry in motion de...
Article
integration of information [10]. Furthermore, points flashed around the time of a saccade are systematically mislocalized in a way that suggests slow build-up of compensation for retinal shifts [11-17]. The dynamic properties of neurons that remap their receptive fields in anticipation of saccades may be closely connected with these distortion...
Article
The target article distinguishes between modal and amodal emulators (the former predict future sensory states from current sensory states and motor actions, the latter operate on more abstract descriptions of the environment), and motor and environment emulators (the former predict the results of one's own actions, the latter predict all changes in...
Article
Although visual input is egocentric, at least some visual perceptions and representations are allocentric, that is, independent of the observer's vantage point or motion. Three experiments investigated the visual perception of three-dimensional object motion during voluntary and involuntary motion in human subjects. The results show that the motor...
Article
Does self-motion affect object recognition? Researchers have long studied how 3-D objects are recognized from different points of view, but have disregarded the observers' own movement by keeping them motionless. A recent study by Simons et al. shows that self-motion cannot be ignored, as it changes the way that objects are recognized.
Article
We measured the ability to report the tilt (direction of maximal slope) of a plane under monocular viewing conditions, from static depth cues (square grid patterns) and motion parallax (small rotations of the plane about a frontoparallel axis). These two cues were presented separately, or simultaneously. In the latter case they specified tilts that...
Article
Although visual input depends on the position of the eye, at least some neural representations of space in mammals are allocentric, i.e., independent of the observer's vantage point or motion. By comparing the visual perception of 3D object motion in actively and passively moving human subjects, I show that the motor command contributes to the pe...
Article
Having long considered that extraretinal information plays little or no role in spatial vision, the study of structure from motion (SfM) has confounded a moving observer perceiving a stationary object with a non-moving observer perceiving a rigid object undergoing equal and opposite motion. However, recently it has been shown that extraretinal info...
Article
External representation is the use of the physical world for cognitive ends, the enlargement of the mechanisms of representation to include the action-perception cycle. It has recently been observed that such representation is pervasive in human activity in both pragmatic and more abstract tasks. It is argued here that by forcing an artificial lear...
Article
There are two ways in which moving observers could judge whether three-dimensional objects are stationary in an earth-fixed, rather than observer-relative, reference frame. An observer may determine the relative motion of an object as well as his or her own motion relative to an earth-fixed reference frame, and then compare the two: the extraretin...
Article
The prediction of future positions of moving objects occurs in cases of actively produced and passively observed movement. Additionally, the moving object may or may not be tracked with the eyes. The authors studied the difference between active and passive movement prediction by asking observers to estimate displacements of an occluded moving targ...
Article
One of the ways that we perceive shape is through seeing motion. Visual motion may be actively generated (for example, in locomotion), or passively observed. In the study of the perception of three-dimensional structure from motion, the non-moving, passive observer in an environment of moving rigid objects has been used as a substitute for an activ...
Article
Much indirect evidence supports the hypothesis that transformations of mental images are at least in part guided by motor processes, even in the case of images of abstract objects rather than of body parts. For example, rotation may be guided by processes that also prime one to see results of a specific motor action. We directly test the hypothesis...
Article
The semiconductive property of the anodic oxide films on titanium formed by potential sweep oxidation at various sweep rates is investigated by AC impedance in 0.1 mol dm−3 sulfuric acid solution. The dielectric constant and donor density of the oxide films change with the sweep rate. The dielectric constant increases and the donor density decrease...
Article
The problem of inductive learning is hard, and, despite much work, no solution is in sight, from neural networks or other AI techniques. I suggest that inductive reasoning may be grounded in sensorimotor capacity. If an artificial system is to generalize in ways that we find intelligent, it should be appropriately embodied. This is illustrated with a n...

Citations

... The global neuronal workspace model distinguishes components of consciousness based on the global availability of information within cognitive and action systems, and self-monitoring or metacognition (Dehaene, 2014). Corroborating this distinction, a recent paper suggests that the network that subtends such global availability during conscious perception takes a different form according to whether participants are requested to decide on their perception or not (Sergent et al., 2021). However, attempts have been made to explain the role of metacognition within this framework (Shea & Frith, 2019) by suggesting that confidence is a key feature of the representations held in the global workspace, which affords a common currency to integrate information from different sensory systems (de Gardelle & Mamassian, 2014;Faivre et al., 2018) and cognitive processes that may be re-used to guide subsequent behavior and mental function. ...
... Where do we see objects? The perceived position of flashed targets does not depend entirely on their physical location but is also strongly influenced by their context, in particular by the presence of moving frames that can drag flashed objects along with them (Cavanagh & Anstis, 2013; Özkan et al., 2021; Whitney, 2002; Whitney & Cavanagh, 2000). Thus, in Movie 1, a vertical bar flashes repetitively, always in the same position. ...
... What can act as a frame? Here we use an outline square, but we have also found similar results with a moving cloud of random dots, where even a few dots are enough to trigger the position shift of the probes [10]. This suggests that anything that moves as a group probably engages motion discounting and the paradoxical stabilization it produces. ...
... The first example of an apparent motion stimulus that leads to perception of motion in different directions is the "apparent motion quartet" stimulus realized in both the visual and tactile domains (Carter et al., 2008). In different studies, pairs of vibrotactile stimuli were attached to the tip of a participant's index finger (Carter et al., 2008), at locations on both index fingers (Conrad et al., 2012), to the thumbs and index fingers (Haladjian et al., 2020), or to both forearms (Liaci et al., 2016). The position of each consecutive stimulus pair alternated between the opposing diagonal corners of an imaginary rectangle which leads to switches between the perception of motion traveling either horizontally or vertically (Fig. 1a). ...
... Furthermore, when subjects are instructed to position, with their hands but without support of vision, two bars in parallel in external space, a systematic deviation of Euclidian parallelism is observed (Kappers, 1999;Kappers & Liefers, 2012;Kappers & Viergever, 2006). In addition, some of these spatial distortions have been found to be related to the use of different reference frames (e.g., body-, head-or hand-centered deviations) in the coding of movement (Cohen & Andersen, 2002;Dupin, Hayward, & Wexler, 2018;Ghafouri & Lestienne, 2005) or somatosensory information (Ho & Spence, 2007;Pritchett, Carnevale, & Harris, 2012). Altogether, this suggests a strong dependence between body position and perception and action, as well as the presence of distortions in spatial body representations. ...
... Microsaccades, gaze shifts so small that the attended stimulus remains within the foveola, are the most frequent saccades when examining a distant face (7), threading a needle (8), or reading fine print (9), tasks in which they shift the line of sight with surprising precision. Because of their minute amplitudes, microsaccades pose specific challenges to the mechanisms traditionally held responsible for saccadic suppression (10)(11)(12)(13)(14)(15)(16)(17)(18)(19). These movements yield broadly overlapping pre- and postsaccadic images within the fovea, which would appear to provide little masking in visual stimulation (20). ...
... In analogy to other sensory systems, the information conveyed by the haptic system is subject to several top-down processes and biases. For example, the perception can be modified by expectations of tissue softness according to past experiences (61), visual signals, or the movement of another part of the operator's body (62). Hence, the haptic perception has to be refined to exclude every possible distraction: if every operator can feel subtle volume changes, training will help to interpret them better (50). ...
... The perceptual consequences of saccades depend on saccadic peak velocity (Ostendorf, Fischer, Finke, & Ploner, 2007) and the timing of post-saccadic visual information (Balsdon, Schweitzer, Watson, & Rolfs, 2018; Castet, Jeanjean, & Masson, 2002). In addition, even though pre- and intra-saccadic stimuli are routinely omitted from conscious perception (Campbell & Wurtz, 1978; Duyck, Collins, & Wexler, 2016), visual processing remains effective during omission (Watson & Krekelberg, 2009) and serves fundamental visuomotor functions (Schweitzer & Rolfs, 2020a). Changes in vigor should thus have immediate consequences for visual processing during and around the time of saccades. ...
... Ono, Rivest, & H. Ono, 1986; Ribeiro-Filho, De Souza & Da Silva, 1996; Rivest, Ono, & Saida, 1989; Rogers & Collett, 1989; Rogers & Graham, 1979, 1982; S. Rogers & B.J. Rogers, 1992; Sakurai & Ono, 2000; O.W. Smith & P.C. Smith, 1963; Steinbach, Ono, & Wolf, 1991; Ujike & Ono, 2001; Wallach & O'Leary, 1979; Wexler, Panerai, & Droulez, 2000). The main findings were that motion parallax is dependent on absolute distance information, may share stages of processing with binocular disparity, and is more effective when generated by observer self-movements. ...
... Recently it has become clear that the appearance of many suprathreshold visual stimuli is highly observer-dependent, as revealed both by formal experiments (Afraz et al., 2010;Boeykens et al., 2019;Carter & Cavanagh, 2007;Cretenoud et al., 2021;Drissi-Daoudi et al., 2020;Finlayson et al., 2020;Goutcher, 2016;Greenwood et al., 2017;Grzeczkowski et al., 2017;Houlsby et al., 2013;Hwang & Schütz, 2020;Kosovicheva & Whitney, 2017;Mamassian & Wallace, 2010;Moutsiana et al., 2016;Schütz, 2014;Schütz & Mamassian, 2016;Schwarzkopf et al., 2011;Schwarzkopf & Rees, 2013;Wang et al., 2020;Wexler, 2018;Wexler et al., 2015;Witzel et al., 2017) as well as by popular phenomena such as #TheDress and #TheShoe. Similar idiosyncratic phenomena exist in auditory perception (Deutsch, 1986;Pressnitzer et al., 2018), in temporal processing (Grabot & Kayser, 2020;Grabot & van Wassenhove, 2017), or in motor control (Schütz, 2014;Lebovich et al., 2019). ...