Article

Simultaneous adaptation to non-collinear retinal motion and smooth pursuit eye movement


Abstract

Simultaneously adapting to retinal motion and non-collinear pursuit eye movement produces a motion aftereffect (MAE) that moves in a direction different from either of the individual adapting motions. Mack, Hill and Kahn (1989, Perception, 18, 649-655) suggested that the MAE was determined by the perceived motion experienced during adaptation. We tested the perceived-motion hypothesis by having observers report perceived direction during simultaneous adaptation. For both central and peripheral retinal motion adaptation, perceived direction did not predict the direction of the subsequent MAE. To explain the findings we propose that the MAE is based on the vector sum of two components, one corresponding to a retinal MAE opposite to the adapting retinal motion and the other corresponding to an extra-retinal MAE opposite to the eye movement. A vector model of this component hypothesis showed that the MAE directions reported in our experiments were the result of an extra-retinal component that was substantially larger in magnitude than the retinal component when the adapting retinal motion was positioned centrally. However, when retinal adaptation was peripheral, the model suggested the magnitudes of the components should be about the same. These predictions were tested in a final experiment that used a magnitude estimation technique. Contrary to the predictions, the results showed no interaction between type of adaptation (retinal or pursuit) and the location of the adapting retinal motion. Possible reasons for the failure of the component hypothesis to fully explain the data are discussed.
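The component hypothesis lends itself to a compact vector formulation: the predicted MAE is the sum of a retinal component pointing opposite the adapting retinal motion and an extra-retinal component pointing opposite the pursuit. The sketch below is a minimal numerical illustration of the idea (the gains and adapting directions are made-up values, not the paper's fitted parameters):

```python
import math

def mae_direction(retinal_dir_deg, pursuit_dir_deg,
                  retinal_gain, extra_retinal_gain):
    """Predict MAE direction as the vector sum of a retinal component
    (opposite the adapting retinal motion) and an extra-retinal
    component (opposite the pursuit eye movement)."""
    r = math.radians(retinal_dir_deg + 180)   # retinal MAE opposes adaptation
    e = math.radians(pursuit_dir_deg + 180)   # extra-retinal MAE opposes pursuit
    x = retinal_gain * math.cos(r) + extra_retinal_gain * math.cos(e)
    y = retinal_gain * math.sin(r) + extra_retinal_gain * math.sin(e)
    return math.degrees(math.atan2(y, x)) % 360

# Upward retinal motion (90 deg) adapted with rightward pursuit (0 deg):
# a larger extra-retinal gain pulls the MAE toward leftward (180 deg).
print(round(mae_direction(90, 0, 1.0, 2.0), 1))  # 206.6
print(round(mae_direction(90, 0, 1.0, 1.0), 1))  # 225.0
```

With equal gains the predicted MAE bisects the two opposing directions; increasing the extra-retinal gain rotates the prediction toward the direction opposite the pursuit, the pattern the vector model attributes to centrally positioned adapting motion.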


... The MAE was determined by the relative motion of the central grating with respect to the flanking gratings. Other extraretinal oculomotor elements that can be incorporated at different stages in the motion processing pathway can also induce a MAE (Chaudhuri, 1990a, 1991; Davies & Freeman, 2011; Freeman, 2007; Freeman & Sumnall, 2005; Freeman, Sumnall, & Snowden, 2003; Haarmeier, Bunjes, Lindner, Berret, & Thier, 2001). The integration of such nonvisual signals with retinal motion information may result in the representation of veridical motion, such that it allows discriminating between movement caused by gaze shifts (e.g., smooth pursuit) and movement in the real world. ...
... The perceptual registration of this efferent signal produces the MAE. Others demonstrated that an efference copy of the eye movements can induce a MAE which can be combined with the retinally induced MAE at subcortical or cortical levels (Davies & Freeman, 2011; Freeman, 2007; Freeman & Sumnall, 2005; Freeman et al., 2003; Haarmeier et al., 2001). ...
... The MAE could possibly result from a pursuit oculomotor signal per se, without any important role of the visual random-dot stimulus in the background (Chaudhuri, 1990a, 1991; Davies & Freeman, 2011; Freeman, 2007; Freeman & Sumnall, 2005). To rule this out, we conducted another control experiment, the Pursuit Only (PO) adaptation condition, which was performed by eight of the original subjects. ...
Article
Accurately perceiving the velocity of an object during smooth pursuit is a complex challenge: although the object is moving in the world, it is almost still on the retina. Yet we can perceive the veridical motion of a visual stimulus in such conditions, suggesting a nonretinal representation of the motion vector. To explore this issue, we studied the frames of representation of the motion vector by evoking the well-known motion aftereffect during smooth-pursuit eye movements (SPEM). In the retinotopic configuration, due to an accompanying smooth pursuit, a stationary adapting random-dot stimulus was actually moving on the retina. Motion adaptation could therefore only result from motion in retinal coordinates. In contrast, in the spatiotopic configuration, the adapting stimulus moved on the screen but was practically stationary on the retina due to a matched SPEM. Hence, adaptation here would suggest a representation of the motion vector in spatiotopic coordinates. We found that exposure to spatiotopic motion led to significant adaptation. Moreover, the degree of adaptation in that condition was greater than the adaptation induced by viewing a random-dot stimulus that moved only on the retina. Finally, pursuit of the same target, without a random-dot array background, yielded no adaptation. Thus, in our experimental conditions, adaptation is not induced by the SPEM per se. Our results suggest that motion computation is likely to occur in parallel in two distinct representations: a low-level, retinal-motion dependent mechanism and a high-level representation, in which the veridical motion is computed through integration of information from other sources.
Article
According to the traditional inferential theory of perception, percepts of object motion or stationarity stem from an evaluation of afferent retinal signals (which encode image motion) with the help of extraretinal signals (which encode eye movements). Direct perception theory, on the other hand, assumes that the percepts derive from retinally conveyed information only. Neither view is compatible with a special perceptual phenomenon which occurs during visually induced sensations of ego-motion (vection). A modified version of inferential theory yields a model in which the concept of an extraretinal signal is replaced by that of a reference signal. Reference signals do not encode how the eyes move in their orbits, but how they move in space. Hence reference signals are produced not only during eye movements but also during ego-motion (i.e., in response to vestibular stimulation and to retinal image flow, which may induce vection). The present theory describes how self-motion and object motion percepts interface. Empirical tests (using an experimental paradigm that allows quantitative measurement of the magnitude and gain of reference signals and the size of the Just Noticeable Difference (JND) between retinal and reference signals) reveal that the distinction between direct and inferential theories largely depends on: (1) a mistaken belief that perceptual veridicality is evidence that extraretinal information is not involved, and (2) a failure to distinguish between (the perception of) absolute object motion in space and relative motion of objects with respect to each other. The new model corrects these errors, thus providing a new, unified framework for interpreting many phenomena in the field of motion perception.
Article
During smooth pursuit eye movement, observers often misperceive velocity. Pursued stimuli appear slower (Aubert-Fleischl phenomenon [1, 2]), stationary objects appear to move (Filehne illusion [3]), the perceived direction of moving objects is distorted (trajectory misperception [4]), and self-motion veers away from its true path (e.g., the slalom illusion [5]). Each illusion demonstrates that eye speed is underestimated with respect to image speed, a finding that has been taken as evidence of early sensory signals that differ in accuracy [4, 6-11]. Here we present an alternative Bayesian account, based on the idea that perceptual estimates are increasingly influenced by prior expectations as signals become more uncertain [12-15]. We show that the speeds of pursued stimuli are more difficult to discriminate than fixated stimuli. Observers are therefore less certain about motion signals encoding the speed of pursued stimuli, a finding we use to quantify the Aubert-Fleischl phenomenon based on the assumption that the prior for motion is centered on zero [16-20]. In doing so, we reveal an important property currently overlooked by Bayesian models of motion perception. Two Bayes estimates are needed at a relatively early stage in processing, one for pursued targets and one for image motion.
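For a Gaussian likelihood combined with a zero-centered Gaussian prior, the Bayesian account above reduces to a posterior mean that shrinks the measured speed toward zero in proportion to measurement noise. A toy illustration of that shrinkage rule (the noise and prior widths below are invented, not the study's estimates):

```python
def perceived_speed(true_speed, likelihood_sd, prior_sd):
    """Posterior-mean estimate with a Gaussian likelihood centred on the
    measured speed and a Gaussian prior centred on zero: the noisier
    the measurement, the stronger the shrinkage toward zero."""
    w = prior_sd ** 2 / (prior_sd ** 2 + likelihood_sd ** 2)
    return w * true_speed

# Pursued stimuli yield noisier speed signals than fixated ones, so the
# same physical speed is perceived as slower during pursuit, which is
# the Aubert-Fleischl phenomenon.
fixated = perceived_speed(10.0, likelihood_sd=1.0, prior_sd=4.0)
pursued = perceived_speed(10.0, likelihood_sd=3.0, prior_sd=4.0)
print(fixated > pursued)  # True
```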
Article
During pursuit eye movements, the world around us remains perceptually stable despite the retinal-image slip induced by the eye movement. It is commonly held that this perceptual invariance is achieved by subtracting an internal reference signal, reflecting the eye movement, from the retinal motion signal. However, if the reference signal is too small or too large, a false eye-movement-induced motion of the external world, the Filehne illusion (FI), will be perceived. A reference signal of inadequate size can be simulated experimentally by asking human subjects to pursue a target across backgrounds with externally added motion that are perceived as moving. In the present study we asked whether non-human primates respond to such manipulation in a way comparable to humans. Using psychophysical methods, we demonstrate that Rhesus monkeys do indeed experience a percept of pursuit-induced background motion and that an FI can be predictably induced in them. The monkey FI shows dependencies on the size and direction of background movement that are very similar to those characterizing the human FI. This congruence suggests that the perception of self-induced visual motion is based on similar inferential mechanisms in non-human and human primates.
Article
One way the visual system estimates object motion during pursuit is to combine estimates of eye velocity and retinal motion. This questions whether observers need direct access to retinal motion during pursuit. We tested this idea by varying the correlation between retinal motion and objective motion in a two-interval speed discrimination task. Responses were classified according to three motion cues: retinal speed (based on measured eye movements), objective speed, and the relative motion between pursuit target and stimulus. In the first experiment, feedback was based on relative motion and this cue fit the response curves best. In the second experiment, simultaneous relative motion was removed but observers still used the sequential relative motion between pursuit target and dot pattern to make their judgements. In a final experiment, feedback was given explicitly on the retinal motion, using online measurements of eye movements. Nevertheless, sequential relative motion still provided the best account of the data. The results suggest that observers do not have direct access to retinal motion when making perceptual judgements about movement during pursuit.
Article
The question as to how the visual motion generated during eye movements can be 'canceled' to prevent an apparent displacement of the external world has a long history. The most popular theories (R. W. Sperry, 1950; E. von Holst & H. Mittelstaedt, 1950) lack specifics concerning the neural mechanisms involved and their loci. Here we demonstrate that a form of vector subtraction can be implemented in a biologically plausible way using cosine distributions of activity from visual motion sensors and from an extraretinal source such as a pursuit signal. We show that the net result of applying an 'efference copy/corollary discharge signal' in the form of a cosine distribution is a motion signal that is equivalent to that produced by vector subtraction. This vector operation provides a means of 'canceling' the effect of eye movements. It enables the extraretinal generated image motion to be correctly removed from the combined retinal-extraretinal motion, even in cases where the two motions do not share the same direction. In contrast to the established theories (efference copy and corollary discharge), our new model makes specific testable predictions concerning the location (the MT-MST/VIP areas) and nature of the eye-rotation cancellation stage (neural-based vector subtraction).
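The cosine-distribution scheme exploits the fact that the sum or difference of two cosine tuning profiles is itself a cosine whose amplitude and phase correspond to the vector sum or difference of the underlying motion vectors. The sketch below illustrates this principle with a uniform population and a population-vector readout (both are illustrative assumptions, not the authors' implementation):

```python
import math

N = 360
prefs = [2 * math.pi * i / N for i in range(N)]  # preferred directions

def cosine_activity(vx, vy):
    """Cosine-tuned population response to motion vector (vx, vy):
    amplitude encodes speed, phase encodes direction."""
    speed = math.hypot(vx, vy)
    direction = math.atan2(vy, vx)
    return [speed * math.cos(p - direction) for p in prefs]

def decode(activity):
    """Population-vector readout back to a motion vector (vx, vy)."""
    x = sum(a * math.cos(p) for a, p in zip(activity, prefs)) * 2 / N
    y = sum(a * math.sin(p) for a, p in zip(activity, prefs)) * 2 / N
    return x, y

# Retinal motion = world motion + eye-induced motion (non-collinear case).
world = (1.0, 2.0)   # arbitrary units
eye = (3.0, 0.0)     # horizontal pursuit
combined = cosine_activity(world[0] + eye[0], world[1] + eye[1])
efference = cosine_activity(*eye)

# Subtracting the efference-copy cosine recovers the world-motion vector.
recovered = decode([c - e for c, e in zip(combined, efference)])
print(tuple(round(v, 6) for v in recovered))  # (1.0, 2.0)
```

Because each cosine profile is linear in its underlying vector, subtracting the pursuit-driven profile from the combined profile implements exactly the vector subtraction the model requires, even when the retinal and extra-retinal motions are not collinear.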
Article
The primate visual system faces a difficult problem whenever it encounters the motion of an object moving over a patch of the retina. Objects typically contain a number of edges at different orientations and so a range of image velocities are generated within the receptive field of a neuron processing the object movement. It is still a mystery as to how these different velocities are combined into one unified and correct velocity. Neurons in area MT (V5) are considered to be the neural substrate for this motion integration process. Some MT neurons (pattern type) respond selectively to the correct global motion of an object, whereas others respond primarily to the individual components making up the pattern (component type). Recent findings from MT pattern cells tested with small patches of motion (N. J. Majaj, M. Carandini, & J. A. Movshon, 2007) have put further constraints on the possible mechanisms underlying MT pattern motion integration. We tested and refined an existing model of MT pattern neurons (J. A. Perrone, 2004) using these same small patch stimuli and found that it can accommodate these new findings. We also discovered that the speed of the test stimuli may have had an impact on the N. J. Majaj et al. (2007) results and that MT direction and speed tuning may be more closely linked than previously thought.
Article
When the eyes track a moving object, the image of a stationary target shifts on the retina collinearly with the eye movement. A compensation process called position constancy prevents this image shift from causing perceived target motion commensurate with the image shift. The target either appears stationary or seems to move in the direction opposite to the eye movement, but much less than the image shift would warrant. Our work is concerned with the question of whether position constancy operates when the image shift and the eye movement are not collinear. That can occur when, during the eye movement, the target undergoes a motion of its own. Evidence is reported that position constancy fails to operate when the direction of the target motion forms an angle with the direction of the eye movement.
Article
The movement after-effect (MAE) is caused by inspecting a pattern in which many stimulus elements in the visual field are in coherent movement; after inspection, stationary elements seem to move in the opposite direction. By far the commonest cause of such a retinal stimulus is movement of the observer, not movement of the environment. We suggest here, therefore, that the usual laboratory stimulus for inducing the MAE presents the observer with conflicting sensory cues. The optical input is normally associated with self motion, but other cues such as the vestibular input simultaneously tell the observer that he is stationary. In these circumstances a recalibration of the relationship between optical and other information might occur and we suggest that the after-effect may be at least in part a consequence of this recalibration, rather than being entirely due to a passive fatigue-like process.
Article
The aim of this study was to test the hypothesis that an extra-retinal signal combines with retinal velocity in a linear manner, as described by existing models, to determine perceived velocity. To do so, we utilized a method that allowed the determination of the relative contributions of the retinal-velocity and extra-retinal signals to the perception of stimulus velocity. We determined the velocity (speed and direction) of a stimulus viewed with stationary eyes that was perceptually the same as the velocity of the stimulus viewed with moving eyes. Eye movements were governed by the tracking (or pursuit) of a separate pursuit target. The velocity-matching data could not be fit by a model that linearly combined a retinal-velocity signal and an extra-retinal signal. A model that was successful in explaining the data was one that takes the difference between two simple saturating non-linear functions, g and f, each symmetric about the origin, but one having an interaction term. That is, the function g has two arguments: retinal velocity, R, and eye velocity, E. The only argument to f is retinal velocity, R. Each argument has a scaling parameter. A comparison of the goodness of fits between models demonstrated that the success of the model lies in the interaction term, i.e. the modification of the compensating eye-velocity signal by the retinal velocity prior to combination.
Article
By adding retinal and pursuit eye-movement velocity one can determine the motion of an object with respect to the head. It would seem likely that the visual system carries out a similar computation by summing extra-retinal, eye-velocity signals with retinal motion signals. Perceived head-centred motion may therefore be determined by differences in the way these signals encode speed. For example, if extra-retinal signals provide the lower estimate of speed then moving objects will appear slower when pursued (Aubert-Fleischl phenomenon) and stationary objects will move opposite to an eye movement (Filehne illusion). Most previous work proposes that these illusions exist because retinal signals encode retinal motion accurately while extra-retinal signals under-estimate eye speed. A more general model is presented in which both signals could be in error. Two types of input/output speed relationship are examined. The first uses linear speed transducers and the second non-linear speed transducers, the latter based on power laws. It is shown that studies of the Aubert-Fleischl phenomenon and Filehne illusion reveal the gain ratio or power ratio alone. We also consider general velocity-matching and show that in theory matching functions are limited by gain ratio in the linear case. However, in the non-linear case individual transducer shapes are revealed albeit up to an unknown scaling factor. The experiments show that the Aubert-Fleischl phenomenon and Filehne illusion are adequately described by linear speed transducers with a gain ratio less than one. For some observers, this is also the case in general velocity-matching experiments. For other observers, however, behaviour is non-linear and, according to the transducer model, indicates the existence of expansive non-linearities in speed encoding. 
This surprising result is discussed in relation to other theories of head-centred motion perception and the possible strategies some observers might adopt when judging stimulus motion during an eye movement.
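With linear transducers, the model above predicts that the Aubert-Fleischl phenomenon and the Filehne illusion both reflect only the ratio of the extra-retinal gain to the retinal gain. A toy version of that model (the gain values are illustrative, not fitted to any observer):

```python
def head_centred_speed(retinal_speed, eye_speed, g_r, g_e):
    """Linear-transducer model: perceived head-centred speed is the sum
    of a scaled retinal estimate and a scaled eye-velocity estimate."""
    return g_r * retinal_speed + g_e * eye_speed

g_r, g_e = 1.0, 0.8  # gain ratio below one, as the experiments suggest

# Aubert-Fleischl: a pursued target (little or no retinal slip) appears
# slower than the same target viewed with stationary eyes.
target = 10.0
pursued = head_centred_speed(0.0, target, g_r, g_e)   # 8.0
fixated = head_centred_speed(target, 0.0, g_r, g_e)   # 10.0

# Filehne: a stationary background sweeps the retina opposite to the eye
# movement and is only partially cancelled, so it appears to move.
filehne = head_centred_speed(-target, target, g_r, g_e)
print(pursued / fixated, filehne)
```

Note that the pursued-to-fixated speed ratio equals g_e / g_r, so any pair of gains with the same ratio predicts identical illusions; this is why, as the abstract states, these paradigms reveal the gain ratio alone.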
Article
It has long been known that ocular pursuit of a moving target has a major influence on its perceived speed (Aubert, 1886; Fleischl, 1882). However, little is known about the effect of smooth pursuit on the perception of target direction. Here we compare the precision of human visual-direction judgments under two oculomotor conditions (pursuit vs. fixation). We also examine the impact of stimulus duration (200 ms vs. ~800 ms) and absolute direction (cardinal vs. oblique). Our main finding is that direction discrimination thresholds in the fixation and pursuit conditions are indistinguishable. Furthermore, the two oculomotor conditions showed oblique effects of similar magnitudes. These data suggest that the neural direction signals supporting perception are the same with or without pursuit, despite remarkably different retinal stimulation. During fixation, the stimulus information is restricted to large, purely peripheral retinal motion, while during steady-state pursuit, the stimulus information consists of small, unreliable foveal retinal motion and a large efference-copy signal. A parsimonious explanation of our findings is that the signal limiting the precision of direction judgments is a neural estimate of target motion in head-centered (or world-centered) coordinates (i.e., a combined retinal and eye motion signal) as found in the medial superior temporal area (MST), and not simply an estimate of retinal motion as found in the middle temporal area (MT).
Article
Pursuit eye movements alter retinal motion cues to depth. For instance, the sinusoidal retinal velocity profile produced by a translating, corrugated surface resembles a sinusoidal shear during pursuit. One way to recover the correct spatial phase of the corrugation's profile (i.e. which part is near and which part is far) is to combine estimates of shear with extra-retinal estimates of translation. In support of this hypothesis, we found the corrugation's spatial phase appeared ambiguous when retinal shear was viewed without translation, but unambiguous when translated and viewed with or without a pursuit eye movement. The eyes lagged the sinusoidal translation by a small but persistent amount, raising the possibility that retinal slip could serve as the disambiguating cue in the eye-moving condition. A yoked control was therefore performed in which measured horizontal slip was fed back into a fixated shearing stimulus on a trial-by-trial basis. The results showed that the corrugation's phase was only seen unambiguously during the real eye movement. This supports the idea that extra-retinal estimates of eye velocity can help disambiguate ordinal depth structure within moving retinal images.
Article
Although many studies have been devoted to motion perception during smooth pursuit eye movements, relatively little attention has been paid to the question of whether the compensation for the effects of these eye movements is the same across different stimulus directions. The few studies that have addressed this issue provide conflicting conclusions. We measured the perceived motion direction of a stimulus dot during horizontal ocular pursuit for stimulus directions spanning the entire range of 360 degrees. The stimulus moved at either 3 or 8 degrees/s. Constancy of the degree of compensation was assessed by fitting the classical linear model of motion perception during pursuit. According to this model, the perceived velocity is the result of adding an eye movement signal that estimates the eye velocity to the retinal signal that estimates the retinal image velocity for a given stimulus object. The perceived direction depends on the gain ratio of the two signals, which is assumed to be constant across stimulus directions. The model provided a good fit to the data, suggesting that compensation is indeed constant across stimulus direction. Moreover, the gain ratio was lower for the higher stimulus speed, explaining differences in results in the literature.
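The classical linear model fitted here states that perceived velocity equals retinal velocity plus a gain-scaled eye-velocity signal, so during horizontal pursuit the perceived direction of an oblique stimulus tilts against the pursuit whenever the gain ratio is below one. A small sketch (the gain-ratio value is illustrative, not the study's fit):

```python
import math

def perceived_direction(stim_speed, stim_dir_deg, pursuit_speed, gain_ratio):
    """Classical linear model: perceived velocity = retinal velocity +
    gain_ratio * eye velocity, for rightward horizontal pursuit. A gain
    ratio below one means the eye movement is under-compensated."""
    sx = stim_speed * math.cos(math.radians(stim_dir_deg))
    sy = stim_speed * math.sin(math.radians(stim_dir_deg))
    rx, ry = sx - pursuit_speed, sy            # retinal velocity
    px, py = rx + gain_ratio * pursuit_speed, ry
    return math.degrees(math.atan2(py, px)) % 360

# A dot moving straight up (90 deg) at 3 deg/s during 8 deg/s rightward
# pursuit is perceived as tilted against the pursuit direction.
print(round(perceived_direction(3.0, 90.0, 8.0, 0.8), 1))  # 118.1
# Full compensation (gain ratio of one) restores the true direction.
print(round(perceived_direction(3.0, 90.0, 8.0, 1.0), 1))  # 90.0
```

Because the gain ratio enters only through the residual horizontal term, a single ratio predicts the direction bias for every stimulus direction, which is the constancy-of-compensation property the study tests.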
Article
Repetitive eye movement produces a compelling motion aftereffect (MAE). One mechanism thought to contribute to the illusory movement is an extra-retinal motion signal generated after adaptation. However, extra-retinal signals are also generated during pursuit. They modulate activity within cortical motion-processing area MST, helping transform retinal motion into motion in the world during an eye movement. Given the evidence that MST plays a key role in generating MAE, it may also become indirectly adapted by prolonged pursuit. To differentiate between these two extra-retinal mechanisms we examined storage of the MAE across a period of darkness. In one condition observers were told to stare at a moving pattern, an instruction that induces a more reflexive type of eye movement. In another they were told to deliberately pursue it. We found equally long MAEs when testing immediately after adaptation but not when the test was delayed by 40 s. In the case of the reflexive eye movement the delay almost completely extinguished the MAE, whereas the illusory motion following pursuit remained intact. This suggests pursuit adapts cortical motion-processing areas whereas unintentional eye movement does not. A second experiment showed that cortical mechanisms cannot be the sole determinant of pursuit-induced MAE. Following oblique pursuit, we found MAE direction changes from oblique to vertical. Perceived MAE direction appears to be influenced by a subcortical mechanism as well, one based on the relative recovery rate of horizontal and vertical eye-movement processes recruited during oblique pursuit.
Article
Smooth pursuit eye movements change the retinal image velocity of objects in the visual field. In order to change from a retinocentric frame of reference into a head-centric one, the visual system has to take the eye movements into account. Studies on motion perception during smooth pursuit eye movements have measured either perceived speed or perceived direction during smooth pursuit to investigate this frame-of-reference transformation, but never both at the same time. We devised a new velocity matching task, in which participants matched both perceived speed and direction during fixation to that during pursuit. In Experiment 1, the velocity matches were determined for a range of stimulus directions, with the head-centric stimulus speed kept constant. In Experiment 2, the retinal stimulus speed was kept approximately constant, with the same range of stimulus directions. In both experiments, the velocity matches for all directions were shifted against the pursuit direction, suggesting an incomplete transformation of the frame of reference. The degree of compensation was approximately constant across stimulus direction. We fitted the classical linear model, the model of Turano and Massof (2001), and that of Freeman (2001) to the velocity matches. The model of Turano and Massof fitted the velocity matches best, but the differences between the model fits were quite small. Evaluation of the models and comparison to a few alternatives suggests that further specification of the potential effect of retinal image characteristics on the eye movement signal is needed.
Article
This experiment showed movement after-effects following presentation of moving stripes under various conditions of eye movement. After-effects only occur when the retinal image moves systematically across the retina, though movement may be observed when this is not the case. The after-effects are due to specifically retinal stimulation, not to perception of movement per se.
Article
When an object increases in size and its retinal image expands, it is perceived to grow. But image expansion caused by one's approaching an object of constant size does not result in perceived growth of the object. This is due, in part, to correct size perception, which takes the distance of the object into account. But perceived growth may have another component, the perceived expanding motion of the object's contours. Failure of growth to be perceived when the image expansion is caused by approaching an object may, in addition, be the result of a compensating process that prevents expanding motion from being perceived when the image expansion occurs during a subject's forward movement. That such a compensating process operates was demonstrated in an indirect manner. We made use of the fact that prolonged exposure of a retinal area to the same motion process leads to a decrease in the speed of the resulting perceived motion and to a motion aftereffect. If a compensating process operates, it might affect these two consequences of motion perception, and such a result was obtained. Under conditions that would bring the compensation into effect, namely, when prolonged exposure to an expanding motion occurred only during the subject's forward movements, the subsequent speed decrease was significantly diminished and motion aftereffects occurred substantially less often than in the control conditions.
Article
We studied the effects of horizontal smooth pursuit on the ocular tracking responses to brief perturbations of a textured background in humans. When the subject was fixating a stationary spot, a brief perturbation (60 degrees/s, 40 ms) of the background in any one of four directions (right, left, up, down) elicited a small tracking response. When the subject was pursuing a target moving against the stationary background, the same background perturbation elicited a larger response when in the same direction as the pursuit, but a smaller response when its direction was opposite to the pursuit; the response to vertical background perturbations was also enhanced during pursuit. When the subject was pursuing while the target and background were moving together, the same background perturbations elicited the larger responses regardless of their direction. These results indicate that the sensitivity to background motion is increased during smooth pursuit. However, when pursuit is executed against a stationary background--the usual situation in everyday life--the system is selectively insensitive to the reafferent visual input associated with pursuit, thereby reducing the potentially adverse effect of the background on pursuit performance.
Article
There is little direct psychophysical evidence that the visual system contains mechanisms tuned to head-centered velocity when observers make a smooth pursuit eye movement. Much of the evidence is implicit, relying on measurements of bias (e.g., matching and nulling). We therefore measured discrimination contours in a space dimensioned by pursuit target motion and relative motion between target and background. Within this space, lines of constant head-centered motion are parallel to the main negative diagonal, so judgments dominated by mechanisms that combine individual components should produce contours with a similar orientation. Conversely, contours oriented parallel to the cardinal axes of the space indicate judgments based on individual components. The results provided evidence for mechanisms tuned to head-centered velocity: discrimination ellipses were significantly oriented away from the cardinal axes, toward the main negative diagonal. However, ellipse orientation was considerably less steep than predicted by a pure combination of components. This suggests that observers used a mixture of two strategies across trials, one based on individual components and another based on their sum. We provide a model that simulates this type of behavior and is able to reproduce the ellipse orientations we found.
Article
The vast majority of research on optic flow (retinal motion arising because of observer movement) has focused on its use in heading recovery and guidance of locomotion. Here we demonstrate that optic flow processing has an important role in the detection and estimation of scene-relative object movement during self movement. To do this, the brain identifies and globally discounts (i.e., subtracts) optic flow patterns across the visual scene, a process called flow parsing. Remaining motion can then be attributed to other objects in the scene. In two experiments, stationary observers viewed radial expansion flow fields and a moving probe at various onscreen locations. Consistent with global discounting, perceived probe motion had a significant component toward the center of the display and the magnitude of this component increased with probe eccentricity. The contribution of local motion processing to this effect was small compared to that of global processing (experiment 1). Furthermore, global discounting was clearly implicated because these effects persisted even when all the flow in the hemifield containing the probe was removed (experiment 2). Global processing of optic flow information is shown to play a fundamental role in the recovery of object movement during ego movement.
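Flow parsing as described above can be caricatured as global subtraction. The sketch below uses an assumed toy geometry (a linear radial expansion field with a made-up gain `k`), not the authors' stimuli: the self-motion component at each image position is estimated and subtracted, and the residual is attributed to object motion.

```python
import numpy as np

# Toy flow-parsing sketch (illustrative assumptions throughout): the
# self-motion component at image position p is a radial expansion k * p;
# subtracting it from the observed retinal motion of a probe leaves the
# scene-relative (object) component.

def expansion_flow(p, k=0.5):
    """Radial expansion field: flow points away from the centre."""
    return k * np.asarray(p, dtype=float)

def parse_flow(observed, p, k=0.5):
    """Discount the global expansion to recover object motion."""
    return np.asarray(observed, dtype=float) - expansion_flow(p, k)

# A probe at eccentric position (4, 0) moving straight up on the retina:
p = np.array([4.0, 0.0])
retinal = np.array([0.0, 1.0])
object_motion = parse_flow(retinal, p)

# After discounting, the recovered motion gains a component toward the
# display centre (negative x) that grows with eccentricity, matching
# the pattern of perceived probe motion the abstract reports.
assert object_motion[0] < 0
assert parse_flow(retinal, [8.0, 0.0])[0] < object_motion[0]
```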
Article
When observers tracked moving stripes across a background either of stationary stripes, or of stripes moving in the opposite direction, they saw a clear motion aftereffect when the stripes stopped moving. The direction of this aftereffect was opposite to that of the previously tracked stripes, and was thus the same as the direction of the retinal movement of the non-tracked stripes. This aftereffect of tracking was shown not to depend upon slippage of the tracked contours on the retina during tracking, or upon the saccadic phase of optokinetic nystagmus. The effect showed storage over a period of time with the eyes shut. It appears that the effect is due to induced movement, and arises originally from stimulation of the retina by background contours in the tracking phase. This was shown by confining the view of the moving target to one eye, while permitting both eyes to be exposed to background stimulation during tracking. After such stimulation the magnitude of the aftereffect was equal in the two eyes.
Article
With accurate measurement of eye position during smooth tracking, comparison of the retinal and perceived paths of spots of light moving in harmonic motion indicates little compensation for smooth pursuit eye movements by the perceptual system. The data suggest that during smooth pursuit, the perceptual system has access to information about direction of tracking, and assumes a relatively low speed, almost irrespective of the actual speed of the eye. It appears, then, that the specification of innervation to the extraocular muscles for smooth tracking is predominantly peripheral, i.e. it occurs beyond the stage in the efferent command process monitored by perception.
Article
A new method, using phase-reversing sinusoidal gratings to cancel perceived motion, was developed to measure the motion aftereffect (MAE). This technique was used to show the existence of a remote MAE, i.e. an MAE in areas that were not directly stimulated during adaptation. In several experiments, this remote MAE was compared to the local MAE. The remote effect was generally weaker and of shorter duration. It showed no directional tuning within the investigated range, as compared to a tuning of +/- 60 deg of the local MAE. There was no adaptation effect to the component gratings of a plaid, indicating that the plaid was treated as a coherent pattern. The local MAE showed clear spatial frequency tuning, whereas the remote MAE varied little with spatial frequency difference, although there was a tendency towards frequencies lower than the adaptation frequency. The possibility is considered that both local and remote MAEs are generated in extrastriate areas.
Article
The motion aftereffect (MAE) was measured with retinally moving vertical gratings positioned above and below (flanking) a retinally stationary central grating (experiments 1 and 2). Motion over the retina was produced by leftward motion of the flanking gratings relative to the stationary eyes, and by rightward eye or head movements tracking the moving (but retinally stationary) central grating relative to the stationary (but retinally moving) surround gratings. In experiment 1 the motion occurred within a fixed boundary on the screen, and oppositely directed MAEs were produced in the central and flanking gratings with static fixation; but with eye or head tracking MAEs were reported only in the central grating. In experiment 2 motion over the retina was equated for the static and tracking conditions by moving blocks of grating without any dynamic occlusion and disclosure at the boundaries. Both conditions yielded equivalent leftward MAEs of the central grating in the same direction as the prior flanking motion, i.e. an MAE was consistently produced in the region that had remained retinally stationary. No MAE was recorded in the flanking gratings, even though they moved over the retina during adaptation. When just two gratings were presented, MAEs were produced in both, but in opposite directions (experiments 3 and 4). It is concluded that the MAE is a consequence of adapting signals for the relative motion between elements of a display.
Article
Previous work has demonstrated a difference in human sensitivity to compressive and shearing speed gradients. This raises the possibility that the ability to estimate the slant of a surface may vary with its direction of tilt. No such variance was found here, which may indicate that slant estimation depends upon deformation rather than upon compression or shear.
Article
It has been previously reported that prolonged unidirectional smooth pursuit often produces a negative motion aftereffect (MAE). This was believed to be caused by retinal image motion of stationary environmental contours during pursuit which subsequently produced a primary motion aftereffect in the tracking direction. The peripheral MAE then induced motion in the stationary tracking target resulting in illusory movement in the opposite direction. We have found that a negative MAE is also produced when the adapting field is devoid of any contours. Furthermore, the presence of a moving textured background in conjunction with smooth pursuit produced an MAE whose direction was inconsistent with the induced motion hypothesis. Since all examples of motion aftereffects in this study were associated with the pursuit aspect of the experiment rather than any interactions with background contours, it was proposed that the illusory motion had an oculomotor determinant. A scheme was tentatively outlined in which fixation suppression of an unregistered ocular drift following prolonged pursuit adaptation (pursuit after-nystagmus) produced the post-adaptive motion illusions.
Article
After a period of prolonged unidirectional smooth pursuit, the tracking target is seen to drift in the opposite direction when it is stopped, even though its retinal image is stationary. If, however, the tracking target is extinguished during the post-adaptive period, the eyes continue to drift in the tracking direction, a phenomenon known as pursuit afternystagmus. It is proposed that the visual system, in an effort to maintain fixation upon the target, produces a motor signal in the opposite direction in order to offset the residual afternystagmus. The perceptual registration of this efferent signal may then produce the motion illusion.
Article
Two experiments are described in which it was investigated whether the adaptation on which motion aftereffects (MAEs) are based is a response to retinal image motion alone or to the motion signal derived from the process which combines the image motion signal with information about eye movement (corollary discharge). In both experiments observers either fixated a stationary point or tracked a vertically moving point while a pattern (in experiment 1, a grating; in experiment 2, a random-dot pattern) drifted horizontally across the field. In the tracking condition the adapting retinal motion was oblique. In the fixation condition it was horizontal. In every case in both conditions the MAE was horizontal, in the direction opposite to that of pattern motion. These results are consistent with the hypothesis that the adaptation is a response to the motion signal derived from the comparison of eye and image motion rather than to retinal motion per se. An alternative explanation is discussed.
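The logic of that experiment can be put in vector terms. This is a sketch under our own simplified assumptions (made-up velocities), not the paper's analysis: during vertical tracking of a point while a pattern drifts horizontally, the retinal adapting motion is oblique, yet the compensated signal (retinal motion plus eye motion, the corollary-discharge comparison) remains horizontal, consistent with the horizontal MAE observed.

```python
import numpy as np

# Vector sketch (illustrative, not the paper's analysis).  With a
# pattern drifting horizontally and the eye tracking a vertically
# moving point, retinal motion = pattern motion - eye motion.

pattern = np.array([3.0, 0.0])   # horizontal pattern drift (deg/s)
eye     = np.array([0.0, 2.0])   # vertical pursuit (deg/s)

retinal = pattern - eye          # oblique motion on the retina
compensated = retinal + eye      # comparison with the eye-movement signal

assert not np.allclose(retinal, pattern)   # retinal input is oblique
assert np.allclose(compensated, pattern)   # compensated signal is horizontal
```

A horizontal MAE (opposite to the pattern motion) in both fixation and tracking conditions is thus consistent with adaptation of the compensated signal rather than of retinal motion per se, which is the hypothesis the abstract supports.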
Article
If physical movements are to be seen veridically, it is necessary to distinguish between displacements over the retina due to self-motion and those due to object motion. When target motion is in a different direction from that of a pursuit eye movement, the perceived motion of the target is known to be shifted in direction toward the retinal path, indicating a partial failure of compensation for eye movements (Becklen, Wallach, & Nitzberg, 1984). The experiments reported here compared the perception of target motion when the head and/or eyes were moving in a direction different from that of the target. In three experiments, target motion was varied in direction, phase, and extent with respect to pursuit movements. In all cases, the compensation was less effective for head than for eye movements, although this difference was least when the extent of the tracked and target motions was the same. Compensation for pursuit eye movements was better than that reported in previous studies.
Article
During a pursuit eye movement made in darkness across a small stationary stimulus, the stimulus is perceived as moving in the opposite direction to the eyes. This so-called Filehne illusion is usually explained by assuming that during pursuit eye movements the extraretinal signal (which informs the visual system about eye velocity so that retinal image motion can be interpreted) falls short. A study is reported in which the concept of an extraretinal signal is replaced by the concept of a reference signal, which serves to inform the visual system about the velocity of the retinae in space. Reference signals are evoked in response to eye movements, but also in response to any stimulation that may yield a sensation of self-motion, because during self-motion the retinae also move in space. Optokinetic stimulation should therefore affect reference signal size. To test this prediction the Filehne illusion was investigated with stimuli of different optokinetic potentials. As predicted, with briefly presented stimuli (no optokinetic potential) the usual illusion always occurred. With longer stimulus presentation times the magnitude of the illusion was reduced when the spatial frequency of the stimulus was reduced (increased optokinetic potential). At very low spatial frequencies (strongest optokinetic potential) the illusion was inverted. The significance of the conclusion, that reference signal size increases with increasing optokinetic stimulus potential, is discussed. It appears to explain many visual illusions, such as the movement aftereffect and center-surround induced motion, and it may bridge the gap between direct Gibsonian and indirect inferential theories of motion perception.
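The reference-signal account above lends itself to a one-line linear sketch. The parameterization below is our own, not the paper's: perceived background velocity is retinal slip plus a reference signal proportional to eye velocity, with gain `g` standing in for reference-signal size. A gain below one gives the classic illusion, a gain of one abolishes it, and a gain above one (the strongest optokinetic potential) inverts it.

```python
# Toy linear model of the Filehne illusion (our own parameterization,
# not the paper's).  For a stationary background during rightward
# pursuit at velocity e, retinal slip is -e, and perceived background
# velocity is  v = -e + g * e  for reference-signal gain g.

def perceived_background(eye_velocity, gain):
    retinal_slip = -eye_velocity           # stationary world sweeps opposite
    return retinal_slip + gain * eye_velocity

e = 12.0                                   # deg/s rightward pursuit
assert perceived_background(e, 0.8) < 0    # undersized reference: classic illusion
assert perceived_background(e, 1.0) == 0   # perfect compensation: stability
assert perceived_background(e, 1.2) > 0    # oversized reference: inverted illusion
```

On this reading, the stimulus manipulations in the abstract (presentation time, spatial frequency) modulate `g`, moving the illusion from classic through zero to inverted.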
Article
Adapting to a drifting grating (temporal frequency 4 Hz, contrast 0.4) in the periphery gave rise to a motion aftereffect (MAE) when the grating was stopped. A standard unadapted foveal grating was matched to the apparent velocity of the MAE, and the matching velocity was approximately constant regardless of the visual field position and spatial frequency of the adapting grating. On the other hand, when the MAE was measured by nulling with real motion of the test grating, nulling velocity was found to increase with eccentricity. The nulling velocity was constant when scaled to compensate for changes in the spatial 'grain' of the visual field. Thus apparent velocity of MAE is constant across the visual field, but requires a greater velocity of real motion to cancel it in the periphery. This confirms that the mechanism underlying MAE is spatially-scaled with eccentricity, but temporally homogeneous. A further indication of temporal homogeneity is that when MAE is tracked, by matching or by nulling, the time course of temporal decay of the aftereffect is similar for central and for peripheral stimuli.
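The spatial-scaling claim above can be sketched with a magnification-style factor. The constants below are hypothetical, chosen only to illustrate the argument, not fitted to the paper's data: if the nulling velocity needed to cancel the MAE grows with eccentricity in proportion to the local spatial grain, then dividing by that grain should make it constant across the visual field.

```python
# Sketch of spatial scaling (hypothetical constants, not the paper's
# data).  Raw nulling velocity grows with eccentricity E; dividing by
# a grain factor (1 + E/E2) flattens it.

E2 = 2.0  # assumed scaling constant (deg); hypothetical value

def nulling_velocity(ecc, v0=1.0):
    """Raw nulling velocity, growing linearly with eccentricity."""
    return v0 * (1.0 + ecc / E2)

def scaled(ecc, v0=1.0):
    """Nulling velocity after compensating for spatial grain."""
    return nulling_velocity(ecc, v0) / (1.0 + ecc / E2)

# Raw values rise with eccentricity; grain-scaled values are flat,
# mirroring the finding that the MAE mechanism is spatially scaled
# but temporally homogeneous.
assert nulling_velocity(10.0) > nulling_velocity(0.0)
assert scaled(10.0) == scaled(0.0) == 1.0
```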
Article
Contrary to an earlier report [Anstis and Gregory, Q. Jl exp. Psychol. 17, 173-174 (1965)], we find that the sustained retinal motion caused by tracking a moving target over a stationary grating does not result in a motion aftereffect (MAE) which is equivalent to that resulting from comparable retinal motion caused by actual motion of a grating. The MAE associated with tracking generally occurs in elements falling on areas not previously exposed to retinal motion. It is in the same direction as the previous retinal motion in the display and is apparently an induced MAE caused by a weak, below threshold MAE in the elements stimulating areas that were previously exposed to retinal motion. Based on an analysis of eye movement records, we do not believe that the weakness of the tracking MAE is primarily a function of the poor quality of the tracking eye movements. Other possible reasons for the weakness of the MAE are suggested.
Article
As a mechanism to detect differential motion, we have proposed a model of 'a motion contrast detector' and have shown that it can explain the perceptual change from motion capture to induced motion with increasing stimulus size and decreasing eccentricity. To further test the feasibility of the model, we examined the effect of surround motion on the motion aftereffect (MAE) elicited in the center. Using a drifting grating surrounded by another drifting grating, the duration of MAE in the center after adaptation was measured for various surround velocities (Expt 1). MAE was stronger when the surround moved oppositely to, than together with, the center. This finding was consistent with some previous reports. Using similar stimuli, MAE was measured at various stimulus sizes and eccentricities by the cancellation technique (Expt 2). The effect of surround modulation turned out to vary with both size and eccentricity. We examined if the apparent dependence on eccentricity could reflect a simpler effect of cortical size when the data were rescaled according to a linear scaling factor. We interpret our results in terms of motion contrast detectors, possibly located in the area MT.
Article
We investigated the effects of stationary and moving textured backgrounds on ocular and manual pursuit of a discrete target that suddenly starts to move at constant speed (ramp motion). When a stationary textured background was superimposed on the target displacement, the gain of the steady-state eye smooth pursuit velocity was significantly reduced, while the latency of pursuit initiation did not vary significantly, as compared to a dark background condition. The initial velocity of the eye smooth pursuit was also lowered. Both the initial acceleration and the steady-state manual tracking angular velocity were slightly, but not significantly, lowered when compared to a dark background condition. Detrimental effects of the stationary textured background were of comparable amplitude (approximately 10%) for ocular and manual pursuit. In a second condition, we compared ocular and manual pursuit when the textured background was either stationary or drifting. Initial and steady-state eye velocities increased when the textured background moved in the same direction as the target. Conversely, when the background moved in the opposite direction, both velocities were decreased. Eye displacement gain remained, however, close to unity due to an increase in the occurrence of catch-up corrective saccades. The effects of the moving backgrounds on the initial and steady-state forearm velocities were opposite to those reported for smooth pursuit eye movements. Neither manual nor ocular smooth pursuit latencies were affected.
Article
During smooth pursuit eye movements made across a stationary background an illusory motion of the background is perceived (Filehne illusion). The present study was undertaken in order to test if the Filehne illusion can be influenced by information unrelated to the retinal image slip prevailing and to the eye movement being executed. The Filehne illusion was measured in eight subjects by determining the amount of external background motion required to compensate for the illusory background motion induced by 12 deg/sec rightward smooth pursuit. Using a two-alternative forced-choice method, test trials, which yielded the estimate of the Filehne illusion, were randomly interleaved with conditioning trials, in which high retinal image slip was created by background stimuli moving at a constant horizontal velocity. There was a highly reproducible monotonic relationship between the size and direction of the Filehne illusion and the velocity of the background stimulus in the conditioning trials with the following extremes: large Filehne illusions with illusory motion to the right occurred for conditioning stimuli moving to the left, i.e. opposite to the direction of eye movement in the test trials, while conversely, conditioning stimuli moving to the right yielded Filehne illusions close to zero. Additional controls suggest that passive motion aftereffects are unlikely to account for the modulation of the Filehne illusion by the conditioning stimulus. We hypothesize that this modification might reflect the dynamic character of the networks elaborating spatial constancy.
Article
The visual motion aftereffect (MAE) typically occurs when stationary contours are presented to a retinal region that has previously been exposed to motion. It can also be generated following observation of a stationary grating when two gratings (above and below it) move laterally: the surrounding gratings induce motion in the opposite direction in the central one. Following adaptation, the centre appears to move in the direction opposite to the previously induced motion, but little or no MAE is visible in the surround gratings [Swanston & Wade (1992) Perception, 21, 569-582]. The stimulus conditions that generate the MAE from induced motion were examined in five experiments. It was found that: the central MAE occurs when tested with stationary centre and surround gratings following adaptation to surround motion alone (Expt 1); no MAEs in either the centre or surround can be measured when the test stimulus is the centre alone or the surround alone (Expt 2); the maximum MAE in the central grating occurs when the same surround region is adapted and tested (Expt 3); the duration of the MAE is dependent upon the spatial frequency of the surround but not the centre (Expt 4); MAEs can be observed in the surround gratings when they are themselves surrounded by stationary gratings during test (Expt 5). It is concluded that the linear MAE occurs as a consequence of adapting restricted retinal regions to motion but it can only be expressed when nonadapted regions are also tested.
Article
We investigated the effects of stationary and moving textured backgrounds on the initiation and steady state of ocular pursuit using horizontally moving targets. We found that the initial eye acceleration was slightly reduced when a stationary textured background was employed, as compared to experiments with a homogeneous background. When a moving textured background was introduced, the initial eye acceleration was significantly larger when the target and the background moved in opposite directions than when the target and the background moved in the same direction. The use of stationary and moving textured backgrounds resulted in comparable effects on the initial eye acceleration when they were presented either as a large field or as a narrow, horizontal small field, only covering the trajectory of the target. Moreover, small-field stationary backgrounds slightly reduced the eye velocity during steady state pursuit. A small-field background moving in the opposite direction to the target distinctly reduced eye velocity, while a target and a background moving in the same direction sometimes even improved pursuit performance, when compared with a homogeneous background. The influences of small-field textured backgrounds on steady state pursuit were comparable with those of large-field backgrounds in both stationary and moving conditions.
Article
Electrophysiological recording from the extrastriate cortex of non-human primates has revealed neurons that have large receptive fields and are sensitive to various components of object or self movement, such as translations, rotations and expansion/contractions. If these mechanisms exist in human vision, they might be susceptible to adaptation that generates motion aftereffects (MAEs). Indeed, it might be possible to adapt the mechanism in one part of the visual field and reveal what we term a 'phantom MAE' in another part. The existence of phantom MAEs was probed by adapting to a pattern that contained motion in only two non-adjacent 'quarter' segments and then testing using patterns that had elements in only the other two segments. We also tested for the more conventional 'concrete' MAE by testing in the same two segments that had adapted. The strength of each MAE was quantified by measuring the percentage of dots that had to be moved in the opposite direction to the MAE in order to nullify it. Four experiments tested rotational motion, expansion/contraction motion, translational motion and a 'rotation' that consisted simply of the two segments that contained only translational motions of opposing direction. Compared to a baseline measurement where no adaptation took place, all subjects in all experiments exhibited both concrete and phantom MAEs, with the size of the latter approximately half that of the former. Adaptation to two segments that contained upward and downward motion induced the perception of leftward and rightward motion in another part of the visual field. This strongly suggests there are mechanisms in human vision that are sensitive to complex motions such as rotations.
Article
When we make a smooth eye movement to track a moving object, the visual system must take the eye's movement into account in order to estimate the object's velocity relative to the head. This can be done by using extra-retinal signals to estimate eye velocity and then subtracting expected from observed retinal motion. Two familiar illusions of perceived velocity (the Filehne illusion and the Aubert-Fleischl phenomenon) are thought to be the consequence of the extra-retinal signal underestimating eye velocity. These explanations assume that retinal motion is encoded accurately, which is questionable because perceived retinal speed is strongly affected by several stimulus properties. We develop and test a model of head-centric velocity perception that incorporates errors in estimating eye velocity and in retinal-motion sensing. The model predicts that the magnitude and direction of the Filehne illusion and Aubert-Fleischl phenomenon depend on spatial frequency and this prediction is confirmed experimentally.
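A model of that kind combines two error sources, and a minimal numeric caricature is easy to write down. The gains below are our own toy parameters, not the paper's fitted values: perceived head-centric speed is a possibly biased retinal estimate plus a possibly biased eye-velocity estimate.

```python
# Caricature of a head-centric velocity model (our own toy parameters,
# not the paper's fits).  Perceived head-centric motion combines a
# biased retinal-speed estimate with a biased eye-velocity estimate:
#     v_perceived = r_gain * retinal + e_gain * eye

def head_centric(retinal, eye, r_gain=1.0, e_gain=0.8):
    return r_gain * retinal + e_gain * eye

eye = 10.0  # deg/s pursuit

# Filehne illusion: a stationary background (retinal = -eye) appears
# to move against the pursuit when the eye signal is underestimated.
filehne = head_centric(-eye, eye)
assert filehne < 0

# Aubert-Fleischl: a pursued target (retinal ~ 0) appears slower than
# the same target viewed with stationary eyes (retinal = target speed).
pursued = head_centric(0.0, eye)
fixated = head_centric(eye, 0.0)
assert pursued < fixated

# Retinal-speed misperception (r_gain != 1) shifts both effects, which
# is the spatial-frequency dependence the abstract describes.
assert head_centric(-eye, eye, r_gain=0.7) > filehne
```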
Article
Eye movements introduce retinal motion to the image and so affect motion cues to depth. For instance, the slant of a plane moving at right angles to the observer is specified by translation and a component of relative motion such as shear. To a close approximation, the translation disappears from the image when the eye tracks the surface accurately with a pursuit eye movement. However, both translation and relative-motion components are needed to estimate slant accurately and unambiguously. During pursuit, therefore, an extra-retinal estimate of translation must be used by the observer to estimate surface slant. Extra-retinal and retinal estimates of translation speed are known to differ: a classic Aubert-Fleischl phenomenon was found for our stimuli. The decrease in perceived speed during pursuit predicts a corresponding increase in perceived slant when the eye tracks the surface. This was confirmed by comparing perceived slant in pursuit and eye-stationary conditions using slant-matching and slant-estimation techniques. Moreover, the increase in perceived slant could be quantified solely on the basis of the perceived-speed data. We found no evidence that relative-motion estimates change between the two eye-movement conditions. A final experiment showed that perceived slant decreases when a fixed retinal shear is viewed with increasing pursuit speed, as predicted by the model. The implication of the results for recovering metric depth estimates from motion-based cues is discussed.
Article
We usually perceive a stationary, stable world and we are able to correctly estimate the direction of heading from optic flow despite coherent visual motion induced by eye movements. This astonishing example of perceptual invariance results from a comparison of visual information with internal reference signals predicting the visual consequences of an eye movement. Here we demonstrate that the reference signal predicting the consequences of smooth-pursuit eye movements is continuously calibrated on the basis of direction-selective interactions between the pursuit motor command and the rotational flow induced by the eye movement, thereby minimizing imperfections of the reference signal and guaranteeing an ecologically optimal interpretation of visual motion.
Article
In principle, information for 3-D motion perception is provided by the differences in position and motion between left- and right-eye images of the world. It is known that observers can precisely judge between different 3-D motion trajectories, but the accuracy of binocular 3-D motion perception has not been studied. The authors measured the accuracy of 3-D motion perception. In 4 different tasks, observers were inaccurate, overestimating trajectory angle, despite consistently choosing similar angles (high precision). Errors did not vary consistently with target distance, as would be expected had inaccuracy been due to misestimates of viewing distance. Observers appeared to rely strongly on the lateral position of the target, almost to the exclusion of the use of depth information. For the present tasks, these data suggest that neither an accurate estimate of 3-D motion direction nor one of passing distance can be obtained using only binocular cues to motion in depth.
Article
Repetitive eye movements are known to produce motion aftereffect (MAE) when made to track a moving stimulus. Explanations typically centre on the retinal motion created in the peripheral visual field by the eye movement. This retinal motion is thought to induce perceived motion in the central test, either through the interaction between peripheral MAE and central target or by adaptation of mechanisms sensitive to the relative motion created between centre and surround. Less attention has been paid to possible extra-retinal contributions to MAE following eye movement. Prolonged eye movement leads to afternystagmus which must be suppressed in order to fixate the stationary test. Chaudhuri (1991, Vision Research, 31, 1639-1645) proposed that nystagmus-suppression gives rise to an extra-retinal motion signal that is incorrectly interpreted as movement of the target. Chaudhuri's demonstration of extra-retinal MAE depended on repeated pursuit to induce the aftereffect. Here we describe conditions for an extra-retinal MAE that follows more reflexive, nystagmus-like eye movement. The MAE is extra-retinal in origin because it occurs in part of the visual field that received no retinal motion stimulation during adaptation. In an explicit test of the nystagmus-suppression hypothesis, we find extra-retinal MAE fails to store over a 30s delay between adaptation and test. Implications for our understanding of motion aftereffects are discussed.
Article
Motion aftereffects are normally tested in regions of the visual field that have been directly exposed to motion (local or concrete MAEs). We compared concrete MAEs with remote or phantom MAEs, in which motion is perceived in regions not previously adapted to motion. Our aim was to study the spatial dependencies and spatiotemporal tuning of phantom MAEs generated by radially expanding stimuli. For concrete and phantom MAEs, peripheral stimuli generated stronger aftereffects than central stimuli. Concrete MAEs display temporal frequency tuning, while phantom MAEs do not show categorical temporal frequency or velocity tuning. We found that subjects may use different response strategies to determine motion direction when presented with different stimulus sizes. In some subjects, as adapting stimulus size increased, phantom MAE strength increased while the concrete MAE strength decreased; in other subjects, the opposite effects were observed. We hypothesise that these opposing findings reflect interplay between the adaptation of global motion sensors and local motion sensors with inhibitory interconnections.
Article
Smooth pursuit eye movements change the retinal image motion of objects in the visual field. To enable an observer to perceive the motion of these objects veridically, the visual system has to compensate for the effects of the eye movements. The occurrence of the Filehne illusion (illusory motion of a stationary object during smooth pursuit) shows that this compensation is not always perfect. The amplitude of the illusion appears to decrease with increasing presentation durations of the stationary object. In this study we investigated whether presentation duration has the same effect when an observer views a vertically moving object during horizontal pursuit. In this case, the pursuit eye movements cause the perceived motion path to be oblique instead of vertical; this error in perceived motion direction should decrease with higher presentation durations. In Experiment 1, we found that the error in perceived motion direction indeed decreased with increasing presentation duration, especially for higher pursuit velocities. The results of Experiment 2 showed that the error in perceived motion direction did not depend on the moment during pursuit at which the stimulus was presented, suggesting that the degree of compensation for eye movements is constant throughout pursuit. The results suggest that longer presentation durations cause the eye movement signal that is used by the visual system to increase more than the retinal signal.
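The oblique-path error described above follows directly from partial compensation, and can be sketched with a single gain parameter. The parameterization is our own, not the authors' model: for an object moving vertically at speed `v` during horizontal pursuit at speed `e`, a compensation gain `g` leaves a perceived horizontal component of `(g - 1) * e`, tilting the perceived path away from vertical.

```python
import math

# Sketch (our own parameterization): a vertically moving object viewed
# during horizontal pursuit at velocity e.  Retinal horizontal motion
# is -e; with compensation gain g the perceived horizontal component is
# (g - 1) * e, so the perceived path tilts from vertical by
# atan((1 - g) * e / v).

def direction_error_deg(v, e, g):
    """Tilt of the perceived path away from vertical, in degrees."""
    return math.degrees(math.atan2((1.0 - g) * e, v))

v, e = 4.0, 8.0  # deg/s object and pursuit speeds (made-up values)

assert direction_error_deg(v, e, 1.0) == 0.0   # full compensation: vertical path
assert direction_error_deg(v, e, 0.9) < direction_error_deg(v, e, 0.6)
```

On this reading, longer presentation durations that raise the eye-movement signal (higher `g`) shrink the direction error, which is the pattern found in Experiment 1.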
Article
When the eyes follow a target that is moving directly towards the head they make a vergence eye movement. Accurate perception of the target's motion requires adequate compensation for the movements of the eyes. The experiments in this paper address the issue of how well the visual system compensates for vergence eye movements when viewing moving targets. We show that there are small but consistent biases across observers: When the eyes follow a target that is moving in depth, it is typically perceived as slower than when the eyes are kept stationary. We also analysed the eye movements that were made by observers. We found that there are considerable differences between observers and between trials, but we did not find evidence that the gains and phase lags of the eye movements were related to psychophysical performance.