Article

The nonlinear structure of motion perception during smooth eye movements


Abstract

To perceive object motion when the eyes themselves undergo smooth movement, we can either perceive motion directly, by extracting motion relative to a background presumed to be fixed, or through compensation, by correcting retinal motion with information about eye movement. To isolate compensation, we created stimuli in which only one object is visible while the eye undergoes smooth movement due to inertia, and the motion of this stimulus is decoupled from that of the eye. Using a wide variety of stimulus speeds and directions, we rule out a linear model of compensation, in which stimulus velocity is estimated as a linear combination of retinal velocity and eye velocity multiplied by a constant gain. In fact, we find that when the stimulus moves in the same direction as the eyes there is little compensation, but when it moves in the opposite direction, compensation grows nonlinearly with speed. We conclude that eye movement is estimated from a combination of extraretinal and retinal signals, the latter based on an assumption of stimulus stationarity. Two simple models, in which the direction of eye movement is computed from the extraretinal signal and its speed from the retinal signal, account well for our results.
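To make the rejected and proposed models concrete, here is a minimal numerical sketch (ours, not the authors' fitted code; the function names and the two-dimensional velocity encoding are assumptions):

```python
import numpy as np

def linear_compensation(v_retinal, v_eye, gain):
    """The linear model the study rules out: perceived stimulus velocity
    is retinal velocity plus eye velocity scaled by a constant gain."""
    return np.asarray(v_retinal) + gain * np.asarray(v_eye)

def nonlinear_compensation(v_retinal, v_eye_extraretinal):
    """Sketch of the proposed scheme: the direction of the eye-movement
    estimate comes from the extraretinal signal, while its speed comes
    from the retinal signal under an assumption of stimulus stationarity
    (a stationary stimulus makes retinal slip mirror eye speed)."""
    e = np.asarray(v_eye_extraretinal, dtype=float)
    direction = e / (np.linalg.norm(e) + 1e-12)
    speed = np.linalg.norm(v_retinal)
    return np.asarray(v_retinal) + speed * direction
```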


... Filehne (1922) found that a stationary flash during pursuit of another object appears to move in the direction opposite to pursuit. Since then, many others have found that pursuit causes inaccuracies in the perception of speed and direction (Festinger et al., 1976; Freeman and Banks, 1998; Haarmeier and Thier, 1998; Morvan and Wexler, 2009; Souman et al., 2005; Wertheim and Van Gelder, 1990). ...
Article
Full-text available
Motion perception can be distorted (Filehne, 1922) or enhanced (Spering, Schütz, Braun, & Gegenfurtner, 2011) by smooth pursuit. Here we investigated the role of smooth pursuit in discriminating curvature of motion trajectories. Subjects viewed a white 0.5-deg-diameter Gaussian blob on a black background in total darkness as it moved along an arc of constant curvature for 1 s (standard). Then, after a 1.5-s delay, a second motion trajectory (comparison) was viewed for 1 s, after which subjects used a button press to report which path appeared "flatter". No feedback was given. Viewing condition was blocked. Subjects either fixated a point at the center of the motion trajectory or were instructed to smoothly follow the target as it moved across the screen. In order to prevent the use of the fixation point as a spatial reference, the comparison trajectory was randomly rotated away from the standard by ±0, 5, 20, or 90 degrees. Psychophysical discrimination thresholds were lower during pursuit (M = 1.09, SD = 0.27°) compared to fixation (M = 1.33, SD = 0.44°), where M and SD represent differences in radius of curvature between standard and comparison. We evaluated oculomotor curvature discrimination by calculating oculometric functions, with curvature judgments derived from the average signed distance between each de-saccaded unfiltered position sample in a particular interval and a line connecting the first and last points of the interval. Larger distances indicated more curvature. The results indicated that a window of 300 ms beginning at the start of steady state was required to successfully decode curvature from eye positions. Even then, pursuit thresholds never reached psychophysical thresholds and on average were larger by a factor of 3. These results indicate that smooth pursuit may be useful in discriminating curvature in motion, even though the pursuit itself indicates little curvature. Meeting abstract presented at VSS 2016.
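The oculometric curvature measure described above reduces, in essence, to a signed point-to-chord distance; a minimal sketch (variable names and array format are our assumptions, not the authors' code):

```python
import numpy as np

def oculometric_curvature(x, y):
    """Average signed distance between de-saccaded gaze samples and the
    line connecting the first and last samples of the interval; larger
    magnitudes indicate a more curved eye trajectory."""
    p0 = np.array([x[0], y[0]], dtype=float)
    p1 = np.array([x[-1], y[-1]], dtype=float)
    chord = p1 - p0
    normal = np.array([-chord[1], chord[0]]) / np.linalg.norm(chord)
    samples = np.column_stack([x, y]) - p0
    return float(np.mean(samples @ normal))
```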
... Although an essential measure of the velocity and direction of our linear motion (heading), optic flow alone may lead to perceptual ambiguity: whether the body is moving, the environment is moving, or both are moving. Nonvisual cues from the vestibular system critically resolve such uncertainty and improve the fidelity of the optic flow information [2][3][4][5][6]. ...
Article
Full-text available
Perception of our linear motion, or heading, relies on convergence from multiple sensory systems utilizing visual and vestibular signals. Multisensory convergence takes place in the visuo-vestibular areas of the cerebral cortex and the posterior cerebellar vermis. The latter, closely connected with the inferior olive, may malfunction in disorders of olivo-cerebellar hypersynchrony, such as the syndrome of oculopalatal tremor (OPT). We recently showed an impairment of vestibular heading perception in subjects with OPT. Here we asked whether the hypersynchrony in the inferior olive-cerebellar circuit also affects visual perception of heading, and whether the impairment is coupled with the deficits in vestibular heading perception. Three subjects with OPT and 11 healthy controls performed a two-alternative forced-choice task in two separate experiments: in the first, they were moved en bloc in a straight-ahead forward direction or at multiple heading angles to the right or the left; in the second, wearing virtual-reality goggles, they viewed the movement of a star cloud producing the percept of heading straight ahead, to the left, or to the right at heading angles similar to those used in the vestibular task. The resultant psychometric function curves, derived from the two-alternative forced-choice task, revealed an abnormal threshold for perceiving heading direction, abnormal sensitivity to changes in heading direction relative to straight ahead, and a bias towards one side. Although the impairment was present in both visual and vestibular heading perception, the deficits were not coupled.
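Thresholds and biases of this kind are typically read off a psychometric function fitted to the two-alternative forced-choice data; a hedged sketch with made-up numbers (the cumulative-Gaussian parameterization is a common choice, not necessarily the authors'):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(theta_deg, bias, sigma):
    # Probability of a "rightward" report as a function of heading angle;
    # bias = point of subjective straight ahead, sigma indexes sensitivity.
    return norm.cdf(theta_deg, loc=bias, scale=sigma)

# Hypothetical data: heading angles (deg) and proportion "rightward" reports.
angles = np.array([-20.0, -10.0, -5.0, 0.0, 5.0, 10.0, 20.0])
p_right = np.array([0.02, 0.10, 0.30, 0.55, 0.75, 0.92, 0.99])
(bias, sigma), _ = curve_fit(psychometric, angles, p_right, p0=[0.0, 5.0])
print(f"bias = {bias:.2f} deg, sigma = {sigma:.2f} deg")
```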
Article
Full-text available
Nativists have postulated fundamental geometric knowledge that predates linguistic and symbolic thought. Central to these claims is the proposal for an isolated cognitive system dedicated to processing geometric information. Testing such hypotheses presents challenges due to difficulties in eliminating the combination of geometric and non-geometric information through language. We present evidence using a modified matching interference paradigm that an incongruent shape word interferes with identifying a two-dimensional geometric shape, but an incongruent two-dimensional geometric shape does not interfere with identifying a shape word. This asymmetry in interference effects between two-dimensional geometric shapes and their corresponding shape words suggests that shape words activate spatial representations of shapes but shapes do not activate linguistic representations of shape words. These results appear consistent with hypotheses concerning a cognitive system dedicated to processing geometric information isolated from linguistic processing and provide evidence consistent with hypotheses concerning knowledge of geometric properties of space that predates linguistic and symbolic thought.
Article
Full-text available
Is forgetting from working memory (WM) better explained by decay or interference? The answer to this question is the topic of an ongoing debate. Recently, a number of studies showed that performance in tests of visual WM declines with an increasing unfilled retention interval. This finding was interpreted as revealing decay. Alternatively, it can be explained by interference theories as an effect of temporal distinctiveness. According to decay theories, forgetting depends on the absolute time elapsed since the event to be retrieved. In contrast, temporal distinctiveness theories predict that memory depends on relative time, that is, the time since the to-be-retrieved event relative to the time since other, potentially interfering events. In the present study, we contrasted the effects of absolute time and relative time on forgetting from visual WM, using a continuous color recall task. To this end, we varied the retention interval and the inter-trial interval. The error in reporting the target color was a function of the ratio of the retention interval to the inter-trial interval, as predicted by temporal distinctiveness theories. Mixture modeling revealed that lower temporal distinctiveness produced a lower probability of reporting the target, but no changes in its precision in memory. These data challenge the role of decay in accounting for performance in tests of visual WM, and show that the relative spacing of events in time determines the degree of interference.
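The contrast between the two accounts can be expressed as toy link functions (our own illustrative choices, not the fitted models): decay predicts error rising with the absolute retention interval (RI), while temporal distinctiveness predicts error rising with the ratio of RI to the inter-trial interval (ITI):

```python
import numpy as np

def decay_error(ri_s, k=0.1):
    # Decay account: error depends only on absolute elapsed time.
    return 1.0 - np.exp(-k * ri_s)

def distinctiveness_error(ri_s, iti_s):
    # Temporal-distinctiveness account: error depends on relative time;
    # a short ITI makes the previous trial a less distinct competitor.
    return ri_s / (ri_s + iti_s)

# Same RI, different ITI: decay predicts identical error,
# distinctiveness does not.
print(decay_error(2.0), distinctiveness_error(2.0, 1.0), distinctiveness_error(2.0, 8.0))
```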
Article
Purpose: This study investigated how aberration-controlling, customised soft contact lenses corrected higher-order ocular aberrations and visual performance in keratoconic patients compared to other forms of refractive correction (spectacles and rigid gas-permeable lenses). Methods: Twenty-two patients (16 rigid gas-permeable contact lens wearers and six spectacle wearers) were fitted with standard toric soft lenses and customised lenses (designed to correct 3rd-order coma aberrations). In the rigid gas-permeable lens-wearing patients, ocular aberrations were measured without lenses, with the patient's habitual lenses and with the study lenses (Hartmann-Shack aberrometry). In the spectacle-wearing patients, ocular aberrations were measured both with and without the study lenses. LogMAR visual acuity (high-contrast and low-contrast) was evaluated with the patient wearing their habitual correction (either spectacles or rigid gas-permeable contact lenses) and with the study lenses. Results: In the contact lens wearers, the habitual rigid gas-permeable lenses and customised lenses provided significant reductions in 3rd-order coma root-mean-square (RMS) error, 3rd-order RMS error and higher-order RMS error (p ≤ 0.004). In the spectacle wearers, the standard toric lenses and customised lenses significantly reduced 3rd-order RMS and higher-order RMS errors (p ≤ 0.005). The spectacle wearers showed no significant differences in visual performance between their habitual spectacles and the study lenses. However, in the contact lens wearers, the habitual rigid gas-permeable lenses and standard toric lenses provided significantly better high-contrast acuities than the customised lenses (p ≤ 0.006). Conclusions: The customised lenses provided substantial reductions in ocular aberrations in these keratoconic patients; however, the poor visual performance achieved with these lenses is most likely due to small, on-eye lens decentrations.
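The RMS aberration metrics reported here follow the standard Zernike definition: the root sum of squares of the relevant coefficients. A short sketch (the coefficient values are hypothetical):

```python
import numpy as np

def rms(coefficients):
    """RMS wavefront error: root sum of squares of Zernike coefficients (microns)."""
    return float(np.sqrt(np.sum(np.square(coefficients))))

# Hypothetical coefficients for one eye.
coma = [0.45, -0.20]                # Z(3,-1), Z(3,1): 3rd-order coma RMS
third_order = coma + [0.10, 0.05]   # plus Z(3,-3), Z(3,3) trefoil: 3rd-order RMS
print(rms(coma), rms(third_order))
```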
Article
Full-text available
We review the features of the S-cone system that appeal to the psychophysicist and summarize the celebrated characteristics of S-cone mediated vision. Two factors are emphasized: first, the fine stimulus control that is required to isolate putative visual mechanisms and, second, the relationship between physiological data and psychophysical approaches. We review convergent findings from physiology and psychophysics with respect to asymmetries in the retinal wiring of S-ON and S-OFF visual pathways, and the associated treatment of increments and decrements in the S-cone system. Beyond the retina, we consider the lack of S-cone projections to the superior colliculus and the use of S-cone stimuli in experimental psychology, for example to address questions about the mechanisms of visually driven attention. Careful selection of stimulus parameters enables psychophysicists to produce entirely reversible, temporary "lesions" and to assess behavior in the absence of specific neural subsystems.
Article
At the request of the authors, the following two research articles will be retracted from the Journal of Cognitive Neuroscience: 1. Anderson, D. E., Ester, E. F., Klee, D., Vogel, E. K., & Awh, E. (2014). Electrophysiological evidence for failures of item individuation in crowded visual displays. Journal of Cognitive Neuroscience, 26(10), 2298–2309. https://dx.doi.org/10.1162/jocn_a_00649. 2. Anderson, D. E., Bell, T. A., & Awh, E. (2012). Polymorphisms in the 5-HTTLPR gene mediate storage capacity of visual working memory. Journal of Cognitive Neuroscience, 24(5), 1069–1076. https://dx.doi.org/10.1162/jocn_a_00207. On August 1, 2015, the Office of Research Integrity (ORI) announced a settlement agreement with David E. Anderson, the Respondent (http://ori.hhs.gov/content/case-summary-anderson-david). On the basis of the Respondent's admission and an analysis by the University of Oregon, ORI concluded that the Respondent had engaged in research misconduct by falsifying and/or fabricating data in four publications. Those publications were retracted immediately after the release of the ORI findings. Since that time, additional problems have been discovered with Article 1 above. Data points shown in Figure 8 were removed without justification and in contradiction to the analytic approach described in the methods and results. In light of this discovery and of the previous ORI findings, authors Bell and Awh no longer have confidence in the integrity of the data in Article 2. For these reasons, all authors on both articles (including the Respondent) have agreed to the retraction of Articles 1 and 2 above.
Article
We studied the functional connectivity of cells in the lateral geniculate nucleus (LGN) with the primary visual cortex (V1) in anesthetized marmosets (Callithrix jacchus). The LGN sends signals to V1 along parallel visual pathways called parvocellular (P), magnocellular (M), and koniocellular (K). To better understand how these pathways provide inputs to V1, we antidromically activated relay cells in the LGN by electrically stimulating V1 and measuring the conduction latencies of P (n = 7), M (n = 14), and the "Blue-ON" (n = 5) subgroup of K cells (K-BON cells). We found that the antidromic latencies of K-BON cells were similar to those of P cells. We also measured the response latencies to high contrast visual stimuli for a subset of cells. We found the LGN cells that have the shortest latency of response to visual stimulation also have the shortest antidromic latencies. We conclude that Blue color signals are transmitted directly to V1 from the LGN by K-BON cells.
Article
This study examined the relation between psychopathic traits and the brain response to facial emotion by analyzing the N170 component of the ERP. Fifty-four healthy participants were assessed for psychopathic traits and exposed to images of emotional and neutral faces with varying spatial frequency content. The N170 was modulated by the emotional expressions, irrespective of psychopathic traits. Fearless dominance was associated with a reduced N170, driven by the low spatial frequency components of the stimuli, and dependent on the tectopulvinar visual pathway. Conversely, coldheartedness was related to overall enhanced N170, suggesting mediation by geniculostriate processing. Results suggest that different dimensions of psychopathy are related to distinct facial emotion processing mechanisms and support the existence of both amygdala deficits and compensatory engagement of cortical structures for emotional processing in psychopathy.
Article
This study tested two hypotheses: (1) that non-cardinal color mechanisms may be due to individual differences: some subjects have them (or have stronger ones), while other subjects do not; and (2) that non-cardinal mechanisms may be stronger in the isoluminant plane of color space than in the two planes with luminance. Five to six subjects per color plane were tested on three psychophysical paradigms: adaptation, noise masking, and plaid coherence. There were no consistent individual differences in non-cardinal mechanism strength across the three paradigms. In group-averaged data, non-cardinal mechanisms appear to be weaker in the two planes with luminance than in the isoluminant plane.
Article
Full-text available
Retinal ganglion cell (RGC) isodensity maps indicate important regions in an animal's visual field. These maps can also be combined with measures of focal length to estimate theoretical visual acuity. Here we present the RGC isodensity maps and anatomical spatial resolving power in three budgerigars (Melopsittacus undulatus) and two Bourke's parrots (Neopsephotus bourkii). Because RGCs were stacked in several layers, we modified the Nissl staining procedure to assess the cell number in the whole-mounted and cross-sectioned tissue of the same retinal specimen. The retinal topography showed surprising variation; however, both parrot species had an area centralis without a discernible fovea. Budgerigars also had a putative area nasalis never before reported in birds. The peak RGC density was 22,300-34,200 cells/mm² in budgerigars and 18,100-38,000 cells/mm² in Bourke's parrots. The maximum visual acuity based on RGC density and focal length was 6.9 cyc/deg in budgerigars and 9.2 cyc/deg in Bourke's parrots. These results are lower than earlier behavioural estimates. Our findings illustrate that retinal topography is not a very fixed trait and that theoretical visual acuity estimates based on RGC density can be lower than the behavioural performance of the bird.
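The acuity estimate combines the retinal magnification factor (from focal length) with the Nyquist limit of the cell mosaic; a minimal sketch assuming a hexagonal mosaic, a standard comparative-vision calculation rather than the authors' exact code:

```python
import numpy as np

def anatomical_acuity(peak_density_mm2, focal_length_mm):
    """Theoretical acuity (cycles/deg) from peak RGC density and posterior
    nodal distance, assuming a hexagonal mosaic sampled at its Nyquist limit."""
    rmf_mm_per_deg = 2 * np.pi * focal_length_mm / 360.0       # retinal magnification factor
    nyquist_cyc_per_mm = np.sqrt(2 * peak_density_mm2 / np.sqrt(3)) / 2.0
    return nyquist_cyc_per_mm * rmf_mm_per_deg

# Peak density from the study; the focal length here is hypothetical.
print(anatomical_acuity(34200, 4.0))
```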
Article
Full-text available
Recent evidence from neuroimaging and psychophysics suggests common neural and representational substrates for visual perception and visual short-term memory (VSTM). Visual perception is adapted to a rich set of statistical regularities present in the natural visual environment. Common neural and representational substrates for visual perception and VSTM suggest that VSTM is adapted to these same statistical regularities too. This article discusses how the study of VSTM can be extended to stimuli that are ecologically more realistic than those commonly used in standard VSTM experiments and what the implications of such an extension could be for our current view of VSTM. We advocate for the development of unified models of visual perception and VSTM, probabilistic and hierarchical in nature, incorporating prior knowledge of natural scene statistics.
Article
Although amblyopia typically manifests itself as a monocular condition, its origin has long been linked to unbalanced neural signals from the two eyes during early postnatal development, a view confirmed by studies conducted on animal models in the last 50 years. Despite recognition of its binocular origin, treatment of amblyopia continues to be dominated by a period of patching of the non-amblyopic eye that necessarily hinders binocular co-operation. This review summarizes evidence from three lines of investigation conducted on an animal model of deprivation amblyopia to support the thesis that treatment of amblyopia should instead focus upon procedures that promote and enhance binocular co-operation. First, experiments with mixed daily visual experience, in which episodes of abnormal visual input were pitted against normal binocular exposure, revealed that short exposures of the latter offset much longer periods of abnormal input to allow normal development of visual acuity in both eyes. Second, experiments on the use of part-time patching revealed that purposeful introduction of episodes of binocular vision each day could be very beneficial. Periods of binocular exposure that represented 30-50% of the daily visual exposure, included with daily occlusion of the non-amblyopic eye, could allow recovery of normal vision in the amblyopic eye. Third, very recent experiments demonstrate that a short 10-day period of total darkness can promote very fast and complete recovery of visual acuity in the amblyopic eye of kittens and may represent an example of a class of artificial environments that have similar beneficial effects. Finally, an approach is described to allow timing of events in kitten and human visual system development to be scaled to optimize the ages for therapeutic interventions.
Article
The aim of the study was to investigate the sensitivity of the visual mismatch negativity (vMMN) component of event-related potentials (ERPs) to the perceptual experience of brightness changes. The percept could be based on either real contrast or illusory brightness changes. In the illusory condition, we used Craik-Cornsweet-O'Brien (CCOB) stimuli. CCOB stimuli comprised grey, equiluminant areas and Cornsweet edges that separated the areas. These edges, containing opposed darkening and lightening gradients, modify the perceived brightness of the flanking areas: areas next to the darkening part of an edge appear darker, while areas next to the lightening part appear lighter. Reversing the gradients induces illusory brightness changes. The normal and reversed stimuli were delivered according to a passive oddball paradigm. In another condition (REAL condition), we used stimuli with a real contrast difference. The perceived brightness of the stimuli in this sequence was matched to the normal and reversed CCOB stimuli. In a third condition (CONTROL condition), we tested the ERP effect of reversing the Cornsweet edge; in this condition, the changes did not induce illusory brightness changes. We obtained vMMNs with double peaks to both real and illusory brightness changes; furthermore, no vMMN emerged in the CONTROL condition. The vMMNs fell in the same latency range in the two conditions; nevertheless, the components differed slightly in scalp distribution. Since the perceptual experience (i.e., brightness changes) was similar in the two conditions, we argue that the vMMN is primarily sensitive to the perceptual experience, and the physical attributes of the stimulation have only a moderate effect on the elicitation of the vMMN.
Article
Full-text available
Working memory is widely considered to be limited in capacity, holding a fixed, small number of items, such as Miller's 'magical number' seven or Cowan's four. It has recently been proposed that working memory might better be conceptualized as a limited resource that is distributed flexibly among all items to be maintained in memory. According to this view, the quality rather than the quantity of working memory representations determines performance. Here we consider behavioral and emerging neural evidence for this proposal.
Article
While viewing ambiguous figures, such as the Necker cube, the available perceptual interpretations alternate with one another. The role of higher-level mechanisms in such reversals remains unclear. We tested whether perceptual reversals of discontinuously presented Necker cube pairs depend on working memory resources by manipulating cognitive load while recording event-related potentials (ERPs). The ERPs showed early enhancements of negativity in response to the first cube, approximately 500 ms before perceived reversals. We found that working memory load influenced reversal-related brain responses to the second cube over occipital areas at 150-300 ms post-stimulus and over central areas in the P3 time window (300-500 ms), suggesting that it modulates intermediate visual processes. Interestingly, reversal rates remained unchanged by the working memory load. We propose that perceptual reversals in discontinuous presentation of ambiguous stimuli are governed by an early mechanism (well preceding pending reversals), while the effects of load on the reversal-related ERPs may reflect general top-down influences on visual processing, possibly mediated by the prefrontal cortex.
Article
Full-text available
Purpose: Amblyopia is a developmental disorder that results in both monocular and binocular deficits. Although traditional treatment in clinical practice (i.e., refractive correction, or occlusion by patching and penalization of the fellow eye) is effective in restoring monocular visual acuity, there is little information on how binocular function, especially stereopsis, responds to traditional amblyopia treatment. In the current study, we aimed to evaluate the effects of perceptual learning on stereopsis in observers with amblyopia. Methods: Eleven observers (21.1 ± 5.1 years, six females) with anisometropic or ametropic amblyopia were trained to judge depth in 10 to 13 sessions. Red-green glasses were used to present three different texture anaglyphs with different disparities but a fixed exposure duration. Stereoacuity was assessed with the Fly Stereo Acuity Test and visual acuity with the Chinese Tumbling E Chart before and after training. Results: Averaged across observers, training significantly reduced the disparity threshold from 776.7″ to 490.4″ (P < 0.01) and improved stereoacuity from 200.3″ to 81.6″ (P < 0.01). Interestingly, visual acuity also improved significantly, from 0.44 to 0.35 logMAR (approximately 0.9 lines, P < 0.05), in the amblyopic eye after training. Moreover, the learning effects in two of the three retested observers were largely retained over a 5-month period. Conclusions: Perceptual learning is effective in improving stereo vision in observers with amblyopia. These results, together with previous evidence, suggest that structured monocular and binocular training might be necessary to fully recover degraded visual functions in amblyopia.
Article
Full-text available
Previous studies often revealed a right-hemisphere specialization for processing the global level of compound visual stimuli. Here we explore whether a similar specialization exists for the detection of intersected contours defined by a chain of local elements. Subjects were presented with arrays of randomly oriented Gabor patches that could contain a global path of collinearly arranged elements in the left or in the right visual hemifield. As expected, the detection accuracy was higher for contours presented to the left visual field/right hemisphere. This difference was absent in two control conditions where the smoothness of the contour was decreased. The results demonstrate that the contour detection, often considered to be driven by lateral coactivation in primary visual cortex, relies on higher-level visual representations that differ between the hemispheres. Furthermore, because contour and non-contour stimuli had the same spatial frequency spectra, the results challenge the view that the right-hemisphere advantage in global processing depends on a specialization for processing low spatial frequencies.
Article
Simultaneous tracking of multiple moving objects is essential in tasks such as traffic control, automobile driving, and scene surveillance. Recently, an increasing number of studies have focused on the roles of object identity and location binding in unique-target tracking tasks, but contradictory results have been reported. In the present study, we introduced, for the first time, Stroop stimuli composed of Chinese characters into the multiple-identity tracking paradigm, taking advantage of the ease of controlling the overall size, familiarity, and visual complexity of Chinese characters. The results showed that when observers were asked to track unique objects bearing two distinct features, the feature conflict disrupted tracking performance, even when the object had a distinctive identity. Our data revealed that internal discord between the semantic and physical features of an object can disturb the identity-location binding process in a multiple-identity tracking task, but does not significantly affect location information.
Article
The spatial summation of excitation and inhibition determines the final output of neurons in the cat V1. To characterize the spatial extent of the excitatory classical receptive field (CRF) and inhibitory non-classical receptive field (nCRF) areas, we examined the spatial summation properties of 169 neurons in cat V1 at high (20-90%) and low (5-15%) stimulus contrasts. Three categories were classified based on the difference in the contrast dependency of the surround suppression. We discovered that the three categories significantly differed in CRF size, peak firing rate, and the proportion of simple/complex cell number. The classification of simple and complex cells was determined at both high and low contrasts. While the majority of V1 neurons had stable modulation ratios in their responses, 10 cells (6.2%) in our sample crossed the classification boundary under different stimulus contrasts. No significant difference was found in the size of the CRF between simple and complex cells. Further comparisons in each category determined that the CRFs for complex cells were significantly larger than those for simple cells in category type I neurons, with no significant differences between simple and complex cells in category type II and type III neurons. In addition, complex cells have higher peak firing rates than simple cells.
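The simple/complex classification mentioned here conventionally uses the modulation ratio F1/F0 of the response to a drifting grating (simple if greater than 1); a sketch under assumed inputs (PSTH in spikes/s, bin width in s; names are ours):

```python
import numpy as np

def modulation_ratio(psth, bin_s, drift_hz):
    """F1/F0 ratio used to classify simple (>1) versus complex (<1) cells:
    response amplitude at the grating drift frequency divided by the
    mean firing rate."""
    t = np.arange(len(psth)) * bin_s
    f0 = np.mean(psth)
    f1 = 2.0 * np.abs(np.mean(psth * np.exp(-2j * np.pi * drift_hz * t)))
    return f1 / f0
```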
Article
Full-text available
Streetscapes are basic urban elements which play a major role in the livability of a city. The visual complexity of streetscapes is known to influence how people behave in such built spaces. However, how and which characteristics of a visual scene influence our perception of complexity have yet to be fully understood. This study proposes a method to evaluate the complexity perceived in streetscapes based on the statistics of local contrast and spatial frequency. Here, 74 streetscape images from four cities, including daytime and nighttime scenes, were ranked for complexity by 40 participants. Image processing was then used to locally segment contrast and spatial frequency in the streetscapes. The statistics of these characteristics were extracted and later combined to form a single objective measure. The direct use of statistics revealed structural or morphological patterns in streetscapes related to the perception of complexity. Furthermore, in comparison to conventional measures of visual complexity, the proposed objective measure exhibits a higher correlation with the opinion of the participants. Also, the performance of this method is more robust regarding different time scenarios.
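A simplified sketch of the kind of local segmentation the method describes: per-patch RMS contrast and a spectral-centroid estimate of spatial frequency (the patch size, names, and exact statistics are our assumptions, not the study's pipeline):

```python
import numpy as np

def local_statistics(image, patch=32):
    """Per-patch RMS contrast and spectral-centroid spatial frequency
    (cycles/pixel) for a grayscale image as a 2-D array."""
    fy = np.fft.fftfreq(patch)[:, None]
    fx = np.fft.fftfreq(patch)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    contrasts, frequencies = [], []
    h, w = image.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = image[y:y + patch, x:x + patch].astype(float)
            contrasts.append(p.std() / (p.mean() + 1e-9))   # local RMS contrast
            amp = np.abs(np.fft.fft2(p - p.mean()))
            frequencies.append((radius * amp).sum() / (amp.sum() + 1e-9))
    return np.array(contrasts), np.array(frequencies)

# A single complexity score could then combine the two sets of statistics,
# e.g. a weighted sum of their means and dispersions (illustrative only).
```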
Article
Full-text available
We tested the hypothesis that fixational stability of the amblyopic eye in strabismics improves when viewing provides both bifoveal fixation and reduced interocular suppression through reduced contrast to the fellow eye. Seven strabismic amblyopes (age: 29.2 ± 9 years; five esotropes and two exotropes) showing clinical characteristics of central suppression were recruited. Interocular suppression was measured with a global motion task. For each participant, a balance point was determined, defining the contrast levels for each eye at which binocular combination was optimal (interocular suppression minimal); participants for whom a balance point could not be determined were excluded. Bifoveal fixation was established by ocular alignment using a haploscope. Participants dichoptically viewed similar targets (a cross of 2.3° surrounded by a square of 11.3°) at 40 cm. Target contrasts presented to each eye were either high contrast (100% to both eyes) or balanced contrast (attenuated contrast in the fellow fixing eye). Fixation stability was measured over a 5-min period and quantified using bivariate contour ellipse areas in four binocular conditions: unaligned/high contrast, unaligned/balance point, aligned/high contrast and aligned/balance point. Fixation stability was also measured in six control subjects (age: 25.3 ± 4 years). Bifoveal fixation in the strabismics was transient (58.15 ± 15.7 s); accordingly, fixational stability was analysed over the first 30 s using repeated-measures ANOVA. Post hoc analysis revealed that, for the amblyopic subjects, the fixational stability of the amblyopic eye was significantly improved in the aligned/high contrast (p = 0.01) and aligned/balance point (p < 0.01) conditions. Fixational stability of the fellow fixing eye did not differ statistically across conditions. Bivariate contour ellipse areas of the amblyopic and fellow fixing eyes were therefore averaged for each amblyope in the four conditions and compared with normals. This averaged bivariate contour ellipse area was significantly greater (reduced fixational stability, p = 0.04) in amblyopes compared to controls except in the aligned and balanced-contrast condition (aligned/balance point, p = 0.19). Fixation stability of the amblyopic eye thus appears to improve with bifoveal fixation and reduced interocular suppression. However, once initiated, bifoveal fixation is transient, with the strabismic eye drifting away from foveal alignment, thereby increasing the angle of strabismus.
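The bivariate contour ellipse area (BCEA) used to quantify fixation stability has a standard closed form; a minimal sketch (gaze samples in degrees; the 68% probability value is a common convention, not necessarily the one used here):

```python
import numpy as np

def bcea(x_deg, y_deg, p=0.682):
    """Bivariate contour ellipse area (deg^2) containing proportion p of
    gaze samples: BCEA = 2*pi*k*sigma_x*sigma_y*sqrt(1 - rho^2),
    with k = -ln(1 - p)."""
    sx = np.std(x_deg, ddof=1)
    sy = np.std(y_deg, ddof=1)
    rho = np.corrcoef(x_deg, y_deg)[0, 1]
    k = -np.log(1.0 - p)
    return 2.0 * np.pi * k * sx * sy * np.sqrt(1.0 - rho ** 2)
```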
Article
Objective: To provide new and original images of the anterior segment (AS) of the eye of selected Ophidian, Chelonian, and Saurian species and to compare the AS architecture among and within these three groups. Animals studied: 17 Saurians, 14 Ophidians, and 11 Chelonians with no concurrent systemic or eye disease were included in the study. Procedure: Age, weight, nose-cloaca distance (NCD), and pupil shape were collected for each animal. The AS was examined by optical coherence tomography (OCT). After gross description of the appearance of the AS, the central and peripheral corneal thickness (CCT, PCT) and anterior chamber depth (ACD) were measured using the software provided with the OCT device. The ratio CCT/ACD was then calculated for each animal. Results: Pupil shape was a vertical slit in all the crepuscular or nocturnal animals (except for one chelonian and one ophidian). Each group had its own particular AS architecture. Saurians had a regularly thin cornea with a flat anterior lens capsule and a deep anterior chamber. Ophidians had a thick cornea with a narrow anterior chamber due to a very anteriorly anchored spherical lens. The spectacle was difficult to identify in all ophidians except Python molurus bivittatus, in which it was more obvious. Chelonians displayed an intermediate architecture which more closely resembled the Saurian type than the Ophidian type. Conclusion: Despite grossly similar AS architecture, the three groups of reptiles in the study demonstrated differences that are suggestive of a link between anatomical disparities and variations in environment and lifestyle.
Article
Full-text available
In the present study we investigate the rules governing the perception of audiovisual synchrony within spatio-temporally cluttered visual environments. Participants viewed a ring of 19 discs modulating in luminance while hearing an amplitude-modulating tone. Each disc modulated with a unique temporal phase (40 ms intervals), with only one synchronized to the tone. Participants searched for the synchronized disc, whose spatial location varied randomly across trials. Square-wave modulation facilitated search: the synchronized disc was frequently chosen, with tight response distributions centred near zero phase lag. In the sinusoidal condition, responses were equally distributed over the 19 discs regardless of phase. To investigate whether subjective synchrony in the square-wave condition was limited by spatial or temporal factors, we repeated the experiment with either reduced spatial density (9 discs) or reduced temporal density (80 ms phase intervals). Reduced temporal density greatly facilitated synchrony perception but left the synchrony bandwidth unchanged, while no influence of spatial density was found. We conclude that audiovisual synchrony is not strongly constrained by the spatial or temporal density of the visual display, but by a temporal window within which audiovisual events are perceived as synchronous, with a full bandwidth of ~185 ms.
Chapter
The human brain uses information from various sensory systems to gauge the orientation of the body with respect to the external environment. Our perception of space is based on the image of the external world as registered by various senses and continuously updated and stabilized through sensory feedback from motor activities. In this process, multisensory integration can resolve ambiguities associated with the inherent "noise" of discrete sensory modalities. Accordingly, convergence of visual and vestibular inputs plays a significant role in our perceptions of spatial orientation and motion, which are essential for motor planning and interaction with the external environment. Once movements are generated, visual-vestibular integration is imperative for optimizing vision and stabilizing the line of sight during movements of the head (i.e., gaze stabilization). Such visual-vestibular interactions are vital for maintaining a coherent perception of spatial orientation during static or dynamic changes in the position of the head and body. In this chapter, we discuss the basic principles of visual-vestibular interaction within the frameworks of heading (e.g., walking or running) and head tilt with relation to gravity (e.g., a lateral tilt of the head on the body). We first describe the fundamental aspects of multisensory integration in these processes along with the underlying physiological and anatomical correlates. We then discuss experimental hypotheses and research findings related to visual-vestibular interaction and outline their clinical applications in human diseases.
Article
The visual system summarizes average properties of ensembles of similar objects. We demonstrated an adaptation aftereffect of one such property, mean size, suggesting it is encoded along a single visual dimension (Corbett et al., 2012), in a similar manner as basic stimulus properties like orientation and direction of motion. To further explore the fundamental nature of ensemble encoding, here we mapped the evolution of mean size adaptation over the course of visually guided grasping. Participants adapted to two sets of dots with different mean sizes. After adaptation, two test dots replaced the adapting sets. Participants first reached to one of these dots, and then judged whether it was larger or smaller than the opposite dot. Grip apertures were inversely dependent on the average dot size of the preceding adapting patch during the early phase of movements, and this aftereffect dissipated as reaches neared the target. Interestingly, perceptual judgments still showed a marked aftereffect, even though they were made after grasping was completed more-or-less veridically. This effect of mean size adaptation on early visually guided kinematics provides novel evidence that mean size is encoded fundamentally in both perception and action domains, and suggests that ensemble statistics not only influence our perceptions of individual objects but can also affect our physical interactions with the external environment.
Article
Full-text available
When searching for a target in a natural scene, both the target's visual properties and its similarity to the background influence whether (and how fast) humans are able to find it. However, thus far it has been unclear whether searchers adjust the dynamics of their eye movements (e.g., fixation durations, saccade amplitudes) to the target they search for. In our experiment, participants searched natural scenes for six artificial targets of different spatial frequencies across eight consecutive sessions. High-spatial-frequency targets led to smaller saccade amplitudes and shorter fixation durations than low-spatial-frequency targets if target identity was known before the trial. If a saccade was programmed in the same direction as the previous saccade (saccadic momentum), fixation durations and successive saccade amplitudes were not influenced by target type. Visual saliency and empirical density at the endpoints of saccadic-momentum saccades were comparatively low, indicating that these saccades were less selective. Our results demonstrate that searchers adjust their eye-movement dynamics to the search target in a sensible fashion, since low spatial frequencies are visible farther into the periphery than high spatial frequencies. Additionally, the direction specificity of our effects suggests a separation of saccades into a default scanning mechanism and a selective, target-dependent mechanism.
Article
Smooth pursuit and motion perception have mainly been investigated with stimuli moving along linear trajectories. Here we studied the quality of pursuit movements to curved motion trajectories in human observers and examined whether pursuit responses are sensitive enough to discriminate various degrees of curvature. In a two-interval forced-choice task, subjects pursued a Gaussian blob moving along a curved trajectory and then indicated in which interval the curve was flatter. We also measured discrimination thresholds for the same curvatures during fixation. Motion curvature had specific effects on smooth pursuit properties: trajectories with larger amounts of curvature elicited lower open-loop acceleration, lower pursuit gain and larger catch-up saccades compared to less curved trajectories. Initially, target motion curvatures were underestimated; however, around 300 ms after pursuit onset, pursuit responses closely matched the actual curved trajectory. We calculated perceptual thresholds for curvature discrimination, which were on the order of 1.5 degrees of visual angle (deg) for a 7.9-deg curvature standard. Oculometric sensitivity to curvature discrimination based on the whole pursuit trajectory was quite similar to perceptual performance; oculometric thresholds based on smaller time windows were higher. Thus, smooth pursuit can quite accurately follow moving targets with curved trajectories, but temporal integration over longer periods is necessary to reach perceptual thresholds for curvature discrimination.
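One way to turn a pursuit trace into a curvature estimate, consistent in spirit with (though not necessarily identical to) the analysis above, is an algebraic least-squares circle fit whose radius indexes curvature:

```python
import numpy as np

def radius_of_curvature(x, y):
    """Algebraic (Kasa) least-squares circle fit to a sampled trajectory;
    the fitted radius indexes curvature (smaller radius = more curved)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = coef[0] / 2.0, coef[1] / 2.0
    return np.sqrt(coef[2] + cx ** 2 + cy ** 2)
```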
Article
Full-text available
Most gesture-elicitation studies of gestural interaction have focused on hand gestures, and few have considered the involvement of other body parts. Moreover, most of the relevant studies used the frequency of the proposed gesture as the main index, and the participants were not familiar with the design space. In this study, we developed a gesture set that includes hand and non-hand gestures by combining the indices of gesture frequency, subjective ratings, and physiological risk ratings. We first collected candidate gestures in Experiment 1 through a user-defined method, requiring participants to perform gestures of their choice for the 15 most commonly used commands, without any body-part limitations. In Experiment 2, a new group of participants evaluated the representative gestures obtained in Experiment 1. We finally obtained a gesture set that included gestures made with the hands and other body parts. Three user characteristics were exhibited in this set: a preference for one-handed movements, a preference for gestures with social meaning, and a preference for dynamic gestures over static gestures.
Conference Paper
In a wearable camera video, we see what the camera wearer sees. While this makes it easy to know roughly what he is looking at, it does not immediately reveal when that content actually engaged him. Specifically, at what moments did his focus linger, as he paused to gather more information about something he saw? Knowing this answer would benefit various applications in video summarization and augmented reality, yet prior work focuses solely on the "what" question (estimating saliency, gaze) without considering the "when" (engagement). We propose a learning-based approach that uses long-term egomotion cues to detect engagement, specifically in browsing scenarios where one frequently takes in new visual information (e.g., shopping, touring). We introduce a large, richly annotated dataset for ego-engagement that is the first of its kind. Our approach outperforms a wide array of existing methods. We show engagement can be detected well independent of both scene appearance and the camera wearer's identity.
Article
Full-text available
We examined the effects of spatial frequency similarity and dissimilarity on human contour integration under various conditions of uncertainty. Participants performed a temporal 2AFC contour detection task. Spatial frequency jitter of up to 3.0 octaves was applied either to background elements, to both contour and background elements, or to neither. Results converge on four major findings. (1) Contours defined by spatial frequency similarity alone are only scarcely visible, suggesting the absence of specialized cortical routines for shape detection based on spatial frequency similarity. (2) When orientation collinearity and spatial frequency similarity are combined along a contour, performance improves far beyond probability summation when compared to the fully heterogeneous condition, but only to a margin compatible with probability summation when compared to the fully homogeneous case. (3) Psychometric functions are steeper but not shifted for homogeneous contours in heterogeneous backgrounds, indicating an advantageous signal-to-noise ratio. The additional similarity cue therefore does not so much improve contour detection performance as primarily reduce observer uncertainty about whether a potential candidate is a contour or just a false positive. (4) Contour integration is a broadband mechanism which is only moderately impaired by spatial frequency dissimilarity.
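The probability-summation benchmark against which the cue combination is judged can be computed as follows (a standard high-threshold formulation; the guessing correction for 2AFC is our assumption about the exact variant used):

```python
def probability_summation_2afc(p1, p2, guess=0.5):
    """Expected 2AFC proportion correct if two cues (e.g., collinearity and
    spatial-frequency similarity) are detected by independent mechanisms."""
    d1 = (p1 - guess) / (1.0 - guess)   # true detection probability, cue 1
    d2 = (p2 - guess) / (1.0 - guess)   # true detection probability, cue 2
    d = 1.0 - (1.0 - d1) * (1.0 - d2)   # detect via either mechanism
    return guess + (1.0 - guess) * d

# e.g. two cues each yielding 70% correct predict ~82% under summation alone.
print(probability_summation_2afc(0.70, 0.70))
```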
Article
Full-text available
A recent study showed that adaptation to causal events (collisions) in adults caused subsequent events to be less likely perceived as causal. In this study, we examined if a similar negative adaptation effect for perceptual causality occurs in children, both typically developing and with autism. Previous studies have reported diminished adaptation for face identity, facial configuration and gaze direction in children with autism. To test whether diminished adaptive coding extends beyond high-level social stimuli (such as faces) and could be a general property of autistic perception, we developed a child-friendly paradigm for adaptation of perceptual causality. We compared the performance of 22 children with autism with 22 typically developing children, individually matched on age and ability (IQ scores). We found significant and equally robust adaptation aftereffects for perceptual causality in both groups. There were also no differences between the two groups in their attention, as revealed by reaction times and accuracy in a change-detection task. These findings suggest that adaptation to perceptual causality in autism is largely similar to typical development and, further, that diminished adaptive coding might not be a general characteristic of autism at low levels of the perceptual hierarchy, constraining existing theories of adaptation in autism.
Article
Full-text available
Working memory plays a key role in cognition, and yet its mechanisms remain much debated. Human performance on memory tasks is severely limited; however, the two major classes of theory explaining the limits leave open questions about key issues such as how multiple simultaneously-represented items can be distinguished. We propose a palimpsest model, with the occurrent activity of a single population of neurons coding for several multi-featured items. Using a probabilistic approach to storage and recall, we show how this model can account for many qualitative aspects of existing experimental data. In our account, the underlying nature of a memory item depends entirely on the characteristics of the population representation, and we provide analytical and numerical insights into critical issues such as multiplicity and binding. We consider representations in which information about individual feature values is partially separate from the information about binding that creates single items out of multiple features. An appropriate balance between these two types of information is required to capture fully the different types of error seen in human experimental data. Our model provides the first principled account of misbinding errors. We also suggest a specific set of stimuli designed to elucidate the representations that subjects actually employ.
Article
Full-text available
It is vitally important for humans to detect living creatures in the environment and to analyze their behavior to facilitate action understanding and high-level social inference. The current study employed naturalistic point-light animations to examine the ability of human observers to spontaneously identify and discriminate socially interactive behaviors between two human agents. Specifically, we investigated the importance of global body form, intrinsic joint movements, extrinsic whole-body movements, and critically, the congruency between intrinsic and extrinsic motions. Motion congruency is hypothesized to be particularly important because of the constraint it imposes on naturalistic action due to the inherent causal relationship between limb movements and whole body motion. Using a free response paradigm in Experiment 1, we discovered that many naïve observers (55%) spontaneously attributed animate and/or social traits to spatially-scrambled displays of interpersonal interaction. Total stimulus motion energy was strongly correlated with the likelihood that an observer would attribute animate/social traits, as opposed to physical/mechanical traits, to the scrambled dot stimuli. In Experiment 2, we found that participants could identify interactions between spatially-scrambled displays of human dance as long as congruency was maintained between intrinsic/extrinsic movements. Violating the motion congruency constraint resulted in chance discrimination performance for the spatially-scrambled displays. Finally, Experiment 3 showed that scrambled point-light dancing animations violating this constraint were also rated as significantly less interactive than animations with congruent intrinsic/extrinsic motion. These results demonstrate the importance of intrinsic/extrinsic motion congruency for biological motion analysis, and support a theoretical framework in which early visual filters help to detect animate agents in the environment based on several fundamental constraints. Only after satisfying these basic constraints could stimuli be evaluated for high-level social content. In this way, we posit that perceptual animacy may serve as a gateway to higher-level processes that support action understanding and social inference.
Article
Full-text available
When we look at the world, or a graphical depiction of the world, we perceive surface materials (e.g. a ceramic black and white checkerboard) independently of variations in illumination (e.g. shading or shadow) and atmospheric media (e.g. clouds or smoke). Such percepts are partly based on the way physical surfaces and media reflect and transmit light and partly on the way the human visual system processes the complex patterns of light reaching the eye. One way to understand how these percepts arise is to assume that the visual system parses patterns of light into layered perceptual representations of surfaces, illumination and atmospheric media, one seen through another. Despite a great deal of previous experimental and modelling work on layered representation, however, a unified computational model of key perceptual demonstrations is still lacking. Here we present the first general computational model of perceptual layering and surface appearance, based on a broader theoretical framework called gamut relativity, that is consistent with these demonstrations. The model (a) qualitatively explains striking effects of perceptual transparency, figure-ground separation and lightness, (b) quantitatively accounts for the role of stimulus- and task-driven constraints on perceptual matching performance, and (c) unifies two prominent theoretical frameworks for understanding surface appearance. The model thereby provides novel insights into the remarkable capacity of the human visual system to represent and identify surface materials, illumination and atmospheric media, which can be exploited in computer graphics applications.
Article
The body schema is a key component in accomplishing egocentric mental transformations, which rely on bodily reference frames. These reference frames are based on a plurality of different cognitive and sensory cues among which the vestibular system plays a prominent role. We investigated whether a bottom-up influence of vestibular stimulation modulates the ability to perform egocentric mental transformations. Participants were significantly faster to make correct spatial judgments during vestibular stimulation as compared to sham stimulation. Interestingly, no such effects were found for mental transformation of hand stimuli or during mental transformations of letters, thus showing a selective influence of vestibular stimulation on the rotation of whole-body reference frames. Furthermore, we found an interaction with the angle of rotation and vestibular stimulation demonstrating an increase in facilitation during mental body rotations in a direction congruent with rightward vestibular afferents. We propose that facilitation reflects a convergence in shared brain areas that process bottom-up vestibular signals and top-down imagined whole-body rotations, including the precuneus and tempero-parietal junction. Ultimately, our results show that vestibular information can influence higher-order cognitive processes, such as the body schema and mental imagery.
Article
Full-text available
Visual short-term memory (VSTM) is a capacity-limited system for maintaining visual information across brief durations. Limits in the amount of information held in memory reflect processing constraints in the intraparietal sulcus (IPS), a region of the frontoparietal network also involved in visual attention. During VSTM and visual attention, areas of IPS demonstrate hemispheric asymmetries. Whereas the left hemisphere represents information in only the right hemifield, the right hemisphere represents information across the visual field. In visual attention, hemispheric asymmetries are associated with differences in behavioral performance across the visual field. In order to assess the degree of hemifield asymmetries in VSTM, we measured memory performance across the visual field for both single- and two-feature objects. Consistent with theories of right-hemisphere dominance, there was a memory benefit for single-feature items in the left visual hemifield. However, when the number of features increased, the behavioral bias reversed, demonstrating a benefit for remembering two-feature objects in the right hemifield. On an individual basis, the cost of remembering an additional feature in the hemifields was correlated, suggesting that the shift in hemifield biases reflected a redistribution of resources across the visual field. Furthermore, we demonstrate that these results cannot be explained by differences in perceptual or decision-making load. Our results are consistent with a flexible resource model of VSTM in which attention and/or working memory demands result in representation of items in the right hemifield by both the left and right hemispheres.
Article
Full-text available
Previous research has shown that adults with dyslexia (AwD) are disproportionately impacted by close spacing of stimuli and increased numbers of distractors in a visual search task compared to controls [1]. Using an orientation discrimination task, the present study extended these findings to show that, even in conditions where target search was not required: (i) AwD were adversely affected by both crowding and increased numbers of distractors; (ii) AwD had more pronounced difficulty with distractor exclusion in the left visual field; and (iii) measures of crowding and distractor exclusion correlated significantly with literacy measures. Furthermore, these difficulties were not accounted for by the presence of covarying symptoms of ADHD in the participant groups. These findings provide further evidence that the ability to exclude distracting stimuli likely contributes to the reported visual attention difficulties in AwD and to the aetiology of literacy difficulties. The pattern of results is consistent with weaker and asymmetric attention in AwD.
Article
Full-text available
This study investigated whether training-related improvements in facial expression categorization are facilitated by spontaneous changes in gaze behaviour in adults and nine-year-old children. Four sessions of a self-paced, free-viewing training task required participants to categorize happy, sad and fearful expressions of varying intensities. No instructions about eye movements were given. Eye movements were recorded in the first and fourth training sessions. New faces were introduced in session four to establish transfer effects of learning. Adults focused most on the eyes in all sessions, and increased expression categorization accuracy after training coincided with a strengthening of this eye bias in gaze allocation. In children, training-related behavioural improvements coincided with an overall shift in gaze focus towards the eyes (resulting in more adult-like gaze distributions) and, for happy faces, towards the mouth in the second fixation. Gaze distributions were not influenced by expression intensity or by the introduction of new faces. It was proposed that training enhanced the use of a uniform, predominantly eyes-biased gaze strategy in children in order to optimise extraction of relevant cues for discrimination between subtle facial expressions.
Article
The retinal positions of objects in the world change with each eye movement, but we seem to have little trouble keeping track of spatial information from one fixation to the next. We examined the role of attention in trans-saccadic localization by asking participants to localize targets while performing an attentionally demanding secondary task. In the first experiment, attentional load decreased localization precision for a remembered target, but only when a saccade intervened between target presentation and report. We then repeated the experiment and included a salient landmark that shifted on half the trials. The shifting landmark had a larger effect on localization under high load, indicating that observers rely more on landmarks to make localization judgments under high than under low attentional load. The results suggest that attention facilitates trans-saccadic localization judgments based on spatial updating of gaze-centered coordinates when visual landmarks are not available. The availability of reliable landmarks (present in most natural circumstances) can compensate for the effects of scarce attentional resources on trans-saccadic localization.
Over 20 distinct cerebral cortical areas contain spatial map representations of the visual field. These retinotopic, or visuotopic, cortical areas occur not only in the occipital lobe but also in the parietal, temporal, and frontal lobes. The cognitive influences of visuospatial attention operate via these cortical maps and can support selection of multiple objects at the same time. In early visual cortical areas, spatial attention enhances responses of selected items and diminishes the responses to distracting items. In higher order cortex, the maps support a spatial indexing role, keeping track of the items to be attended. These maps also support visual short-term memory (VSTM) representations. In each hemisphere, all the known maps respond selectively to stimuli presented within the contralateral visual field. However, a hemispheric asymmetry emerges when the attentional or VSTM demands of a task become significant. In the parietal lobe, the right hemisphere visuotopic maps switch from coding only contralateral visual targets to coding memory and attention targets across the entire visual field. This emergent asymmetry has important implications for understanding hemispatial neglect syndrome, and supports a dynamic network form of the representational model of neglect. WIREs Cogn Sci 2013, 4:327–340. doi: 10.1002/wcs.1230
Article
Can participants make use of the large number of response alternatives of visual analogue scales (VAS) when reporting their subjective experience of motion? In a new paradigm, participants adjusted a comparison according to random dot kinematograms with the direction of motion varying between 0° and 360°. After each discrimination response, they reported how clearly they experienced the global motion either using a VAS or a discrete scale with four scale steps. We observed that both scales were internally consistent and were used gradually. The visual analogue scale was more efficient in predicting discrimination error but this effect was mediated by longer report times and was no longer observed when the VAS was discretized into four bins. These observations are consistent with the interpretation that VAS and discrete scales are associated with a comparable degree of metacognitive sensitivity, although the VAS provides a greater amount of information.
Article
The Animate Monitoring Hypothesis proposes that humans and animals were the most important categories of visual stimuli for ancestral humans to monitor, as they presented important challenges and opportunities for survival and reproduction; however, it remains unknown whether animal faces are located as efficiently as human faces. We tested this hypothesis by examining whether human, primate, and mammal faces elicit similar searches, or whether human faces are privileged. In the first three experiments, participants located a target (human, primate, or mammal face) among distractors (non-face objects). We found fixations on human faces were faster and more accurate than fixations on primate faces, even when controlling for search category specificity. A final experiment revealed that, even when task-irrelevant, human faces slowed searches for non-faces, suggesting some bottom-up processing may be responsible for the human face search efficiency advantage.
Article
Full-text available
Purpose: To measure binocular interaction in amblyopes using a rapid and patient-friendly computer-based method, and to test the feasibility of the assessment in the clinic. Methods: Binocular interaction was assessed in subjects with strabismic amblyopia (n = 7), anisometropic amblyopia (n = 6), strabismus without amblyopia (n = 15) and normal vision (n = 40). Binocular interaction was measured with a dichoptic phase matching task in which subjects matched the position of a binocular probe to the cyclopean perceived phase of a dichoptic pair of gratings whose contrast ratios were systematically varied. The resulting effective contrast ratio of the weak eye was taken as an indicator of interocular imbalance. Testing was completed in an ophthalmology clinic in under 8 minutes. We examined the relationships between our binocular interaction measure and standard clinical measures indicating abnormal binocularity, such as interocular acuity difference and stereoacuity. The test-retest reliability of the testing method was also evaluated. Results: Compared to normally-sighted controls, amblyopes exhibited significantly reduced effective contrast (∼20%) of the weak eye, suggesting a higher contrast requirement for the amblyopic eye compared to the fellow eye. We found that the effective contrast ratio of the weak eye covaried with standard clinical measures of binocular vision. Our results showed a high correlation between the first and second measurements (r = 0.94, p < 0.001), without any significant bias between the two. Conclusions: Our findings demonstrate that abnormal binocular interaction can be reliably captured by measuring the effective contrast ratio of the weak eye, and that quantitative assessment of binocular interaction is a quick and simple test that can be performed in the clinic. We believe that reliable and timely assessment of deficits in binocular interaction may improve the detection and treatment of amblyopia.
Article
Recently, we showed that salience affects initial saccades only in a static stimulus environment; subsequent saccades were unaffected by salience but, instead, were directed in line with task requirements (Siebold, van Zoest, & Donk, PLoS ONE 6(9): e23552, 2011). Yet multiple studies have shown that people tend to fixate salient regions more often than nonsalient ones when they are looking at images-in particular, when salience is defined by dynamic changes. The goal of the present study was to investigate how oculomotor selection beyond an initial saccade is affected by salience as derived from changing, as opposed to static, stimuli. Observers were presented with displays containing two fixation dots, one target, one distractor, and multiple background elements. They were instructed to fixate on one of the fixation dots and make a speeded eye movement to the target, either directly or preceded by an initial eye movement to the other fixation dot. In Experiment 1, target and distractor differed in orientation contrast relative to the background, such that one was more salient than the other, whereas in Experiments 2 and 3, the orientation contrast between the two elements was identical. Here, salience was implemented by a continuous luminance flicker or by a difference in luminance contrast, respectively, which was presented either simultaneously with display onset or contingent upon the first saccade. The results showed that in all experiments, initial saccades were strongly guided by salience, whereas second saccades were consistently goal directed if the salience manipulation was present from display onset. However, if the flicker or luminance contrast was presented contingent upon the initial saccade, salience effects were reinstated. We argue that salience effects are short-lived but can be reinstated if new information is presented, even when this occurs during an eye movement.
Article
Full-text available
Our ability to actively maintain information in visual memory is strikingly limited. There is considerable debate about why this is so. As with many questions in psychology, the debate is framed dichotomously: Is visual working memory limited because it is supported by only a small handful of discrete “slots” into which visual representations are placed, or is it because there is an insufficient supply of a “resource” that is flexibly shared among visual representations? Here, we argue that this dichotomous framing obscures a set of at least eight underlying questions. Separately considering each question reveals a rich hypothesis space that will be useful for building a comprehensive model of visual working memory. The questions regard (1) an upper limit on the number of represented items, (2) the quantization of the memory commodity, (3) the relationship between how many items are stored and how well they are stored, (4) whether the number of stored items completely determines the fidelity of a representation (vs. fidelity being stochastic or variable), (5) the flexibility with which the memory commodity can be assigned or reassigned to items, (6) the format of the memory representation, (7) how working memories are formed, and (8) how memory representations are used to make responses in behavioral tasks. We reframe the debate in terms of these eight underlying questions, placing slot and resource models as poles in a more expansive theoretical space.
Article
Full-text available
Continuous flash suppression (CFS) is a powerful interocular suppression technique, which is often described as an effective means to reliably suppress stimuli from visual awareness. Suppression through CFS has been assumed to depend upon a reduction in (retinotopically specific) neural adaptation caused by the continual updating of the contents of the visual input to one eye. In this study, we started from the observation that suppressing a moving stimulus through CFS appeared to be more effective when using a mask that was actually more prone to retinotopically specific neural adaptation, but in which the properties of the mask were more similar to those of the to-be-suppressed stimulus. In two experiments, we find that using a moving Mondrian mask (i.e., one that includes motion) is more effective in suppressing a moving stimulus than a regular CFS mask. The observed pattern of results cannot be explained by a simple simulation that computes the degree of retinotopically specific neural adaptation over time, suggesting that this kind of neural adaptation does not play a large role in predicting the differences between conditions in this context. We also find some evidence consistent with the idea that the most effective CFS mask is the one that matches the properties (speed) of the suppressed stimulus. These results question the general importance of retinotopically specific neural adaptation in CFS, and potentially help to explain an implicit trend in the literature to adapt one's CFS mask to match one's to-be-suppressed stimuli. Finally, the results should help to guide the methodological development of future research where continuous suppression of moving stimuli is desired.
Article
The hippocampus creates distinct episodes from highly similar events through a process called pattern separation and can retrieve memories from partial or degraded cues through a process called pattern completion. These processes have been studied in humans using tasks where participants must distinguish studied items from perceptually similar lure items. False alarms to lures (incorrectly reporting a perceptually similar item as previously studied) are thought to reflect pattern completion, a retrieval-based process. However, false alarms to lures could also result from insufficient encoding of studied items, leading to impoverished memory of item details and a failure to correctly reject lures. The current study investigated the source of lure false alarms by comparing eye movements during the initial presentation of items to eye movements made during the later presentation of item repetitions and similar lures in order to assess mnemonic processing at encoding and retrieval, respectively. Relative to other response types, lure false alarms were associated with fewer fixations to the initially studied items, suggesting that false alarms result from impoverished encoding. Additionally, lure correct rejections and lure false alarms garnered more fixations than hits, denoting additional retrieval-related processing. The results suggest that measures of pattern separation and completion in behavioral paradigms are not process-pure.
Article
Full-text available
In normal illumination, retrograde motion of the background (the Filehne illusion) can be seen during ocular pursuit, in contrast to stability seen during a saccade. In the present experiment, two stimuli were presented sequentially under three conditions: (I) at disparate physical locations such that the pursuit movement of the eye caused them to excite the same retinal location, (II) at the same physical location, with pursuit movement causing disparate retinal excitations, and (III) with stationary fixation but with disparate physical locations such that the retinal excitation was identical to that of Condition II. Optimal movement was never reported for Condition I but was reported with essentially equal frequency in Conditions II and III. These results indicate a failure of compensation for pursuit movement, as does the Filehne illusion. The nature of the pursuit extraretinal signal was discussed, and it was argued that a distinctly different extraretinal signal is necessary for perceived stability during the saccade.
Article
Full-text available
When the eyes are engaged in pursuit movements, the image of a stationary object shifts on the retina, but such a target is either perceived as stationary or seems to move only a little. This is the result of a compensation process called position constancy, which takes the eye movements into account. Becklen, Wallach, and Nitzberg (1984) reported that position constancy does not operate when the target undergoes a motion of its own, in a direction that differs from the direction of the eye movements. Other findings have indicated that position constancy has an effect when the target motion is colinear with the eye movements, but the accuracy with which it then operates has not been known. We measured how correctly motions that were colinear with eye movements were perceived and found that the extents of target motions were accurately perceived when they were in the same direction as the eye movement, but that position constancy showed a small, but distinct, lag when eye and target motions were in opposite directions.
Article
Full-text available
Experiments were performed to investigate the Filehne illusion, the apparent movement of the background during pursuit eye movements. In a dark room subjects tracked a luminous target as it moved at 3°/s or 10.5°/s in front of an illuminated background which was either stationary or moved at a fraction of the target speed in the same or opposite direction. Subjects reported whether the background appeared to move and the direction of the movement. Results reveal only a partial loss of position constancy for the background during tracking. The stationary background is perceived to move slightly in the direction opposite to that in which the tracked target is moving. These results seemed best described as an instance of perceptual underconstancy and led to the speculation that the source of the illusion is an underestimation of the rate of pursuit eye movements. An experimental test of this hypothesis which produced supporting evidence is reported.
Article
Full-text available
When the eyes track a moving object, the image of a stationary target shifts on the retina colinearly with the eye movement. A compensation process called position constancy prevents this image shift from causing perceived target motion commensurate with the image shift. The target either appears stationary or seems to move in the direction opposite to the eye movement, but much less than the image shift would warrant. Our work is concerned with the question of whether position constancy operates when the image shift and the eye movement are not colinear. That can occur when, during the eye movement, the target undergoes a motion of its own. Evidence is reported that position constancy fails to operate when the direction of the target motion forms an angle with the direction of the eye movement.
Article
Full-text available
Our tendency to constantly shift our gaze and to pursue moving objects with our eyes introduces obvious problems for judging objects' velocities. The present study examines how we deal with these problems. Specifically, we examined when information on rotations (such as eye movements) is obtained from retinal, and when from extra-retinal sources. Subjects were presented with a target moving across a textured background. Moving the background allowed us to manipulate the retinal information on rotation independently of the extra-retinal information. The subjects were instructed to pursue the target with their eyes. At some time during the presentation the target's velocity could change. We determined how various factors influence a subject's perception of such changes in velocity. Under more or less natural conditions, there was no change in perceived target velocity as long as the relative motion between target and background was maintained. However, experiments using conditions that are less likely to occur outside the laboratory reveal how extra-retinal signals are involved in velocity judgements.
Article
Full-text available
A shaky hand holding a video camera invariably turns a treasured moment into an annoying, jittery memento. More recent consumer cameras thoughtfully offer stabilization mechanisms to compensate for our unsteady grip. Our eyes face a similar challenge in that they are constantly making small movements even when we try to maintain a fixed gaze. What should be substantial, distracting jitter passes completely unseen. Position changes from large eye movements (saccades) seem to be corrected on the basis of extraretinal signals such as the motor commands sent to the eye muscle, and the resulting motion responses seem to be simply switched off. But this approach is impracticable for incessant, small displacements, and here we describe a novel visual illusion that reveals a compensation mechanism based on visual motion signals. Observers were adapted to a patch of dynamic random noise and then viewed a larger pattern of static random noise. The static noise in the unadapted regions then appeared to 'jitter' coherently in random directions. Several observations indicate that this visual jitter directly reflects fixational eye movements. We propose a model that accounts for this illusion as well as the stability of the visual world during small and/or slow eye movements such as fixational drift, smooth pursuit and low-amplitude mechanical vibrations of the eyes.
Article
Full-text available
It has long been known that ocular pursuit of a moving target has a major influence on its perceived speed (Aubert, 1886; Fleischl, 1882). However, little is known about the effect of smooth pursuit on the perception of target direction. Here we compare the precision of human visual-direction judgments under two oculomotor conditions (pursuit vs. fixation). We also examine the impact of stimulus duration (200 ms vs. ~800 ms) and absolute direction (cardinal vs. oblique). Our main finding is that direction discrimination thresholds in the fixation and pursuit conditions are indistinguishable. Furthermore, the two oculomotor conditions showed oblique effects of similar magnitudes. These data suggest that the neural direction signals supporting perception are the same with or without pursuit, despite remarkably different retinal stimulation. During fixation, the stimulus information is restricted to large, purely peripheral retinal motion, while during steady-state pursuit, the stimulus information consists of small, unreliable foveal retinal motion and a large efference-copy signal. A parsimonious explanation of our findings is that the signal limiting the precision of direction judgments is a neural estimate of target motion in head-centered (or world-centered) coordinates (i.e., a combined retinal and eye motion signal) as found in the medial superior temporal area (MST), and not simply an estimate of retinal motion as found in the middle temporal area (MT).
Article
Full-text available
Although many studies have been devoted to motion perception during smooth pursuit eye movements, relatively little attention has been paid to the question of whether the compensation for the effects of these eye movements is the same across different stimulus directions. The few studies that have addressed this issue provide conflicting conclusions. We measured the perceived motion direction of a stimulus dot during horizontal ocular pursuit for stimulus directions spanning the entire range of 360 degrees. The stimulus moved at either 3 or 8 degrees/s. Constancy of the degree of compensation was assessed by fitting the classical linear model of motion perception during pursuit. According to this model, the perceived velocity is the result of adding an eye movement signal that estimates the eye velocity to the retinal signal that estimates the retinal image velocity for a given stimulus object. The perceived direction depends on the gain ratio of the two signals, which is assumed to be constant across stimulus directions. The model provided a good fit to the data, suggesting that compensation is indeed constant across stimulus direction. Moreover, the gain ratio was lower for the higher stimulus speed, explaining differences in results in the literature.
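To make the classical linear model concrete, here is a minimal sketch in Python. The gains g_r and g_e are illustrative placeholders, not the fitted values from the study; as the abstract notes, it is their ratio that determines the perceived direction.

```python
import numpy as np

def perceived_velocity(stim_vel, eye_vel, g_r=1.0, g_e=0.8):
    """Classical linear model of motion perception during pursuit:
    the perceived (head-centered) velocity is a weighted sum of the
    retinal signal and an eye-velocity signal. Only the ratio g_e/g_r
    matters for perceived direction; values here are illustrative."""
    stim_vel = np.asarray(stim_vel, dtype=float)
    eye_vel = np.asarray(eye_vel, dtype=float)
    retinal_vel = stim_vel - eye_vel          # image motion on the retina
    return g_r * retinal_vel + g_e * eye_vel  # perceived stimulus velocity

# Example: a 3 deg/s upward stimulus during 8 deg/s rightward pursuit.
print(perceived_velocity([0.0, 3.0], [8.0, 0.0]))  # approx. [-1.6  3.0]
```

With g_e/g_r < 1, the perceived direction is rotated toward the retinal direction (here, tilted opposite to the pursuit), the kind of systematic error the model is fitted to.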
Article
Full-text available
It is known that people misperceive scenes they see during rapid eye movements called saccades. It has been suggested that some of these misperceptions could be an artifact of neurophysiological processes related to the internal remapping of spatial coordinates during saccades. Alternatively, we have recently suggested, based on a computational model, that transsaccadic misperceptions result from optimal inference. As one of the properties of the model, sudden object displacements that occur in sync with a saccade should be perceived as contracted in a non-linear fashion. To explore this model property, here we use computer simulations and psychophysical methods first to test how robust the model is to close-to-optimal approximations and second to test two model predictions: (a) contracted transsaccadic perception should be dimension-specific with more contraction for jumps parallel to the saccade than orthogonal to it, and (b) contraction should rise as a function of visuomotor noise. Our results are consistent with these predictions. They support the idea that human transsaccadic integration is governed by close-to-optimal inference.
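As a toy illustration of the contraction property, consider a one-dimensional Gaussian sketch (an assumption for exposition, far simpler than the paper's model): with a prior centered on zero displacement, the posterior mean shrinks the sensed jump, and the shrinkage grows with visuomotor noise.

```python
def perceived_jump(sensed_jump, sigma_noise, sigma_prior=1.0):
    """Close-to-optimal inference sketch: combining a sensed saccade-
    synchronous jump with a zero-displacement prior contracts the
    percept by sigma_prior^2 / (sigma_prior^2 + sigma_noise^2), so
    contraction rises with visuomotor noise. Illustrative only."""
    shrink = sigma_prior**2 / (sigma_prior**2 + sigma_noise**2)
    return shrink * sensed_jump

print(perceived_jump(2.0, sigma_noise=1.0))  # 1.0 deg perceived
print(perceived_jump(2.0, sigma_noise=2.0))  # 0.4 deg perceived
```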
Article
Full-text available
When observers pursue a moving target with their eyes, they use predictions of future target positions in order to keep the target within the fovea. It was suggested that these predictions of smooth pursuit (SP) eye movements are computed only from the visual feedback of the target characteristics. As a consequence, if the target vanishes unexpectedly, the eye movements do not stop immediately, but they overshoot the vanishing point. We compared the spatial and temporal features of such predictive eye movements in a task with or without intentional control over the target vanishing point. If the observers stopped the target with a button press, the overshoot of the eyes was reduced compared to a condition where the offset was computer generated. Accordingly, the eyes started to decelerate well before the target offset and lagged further behind the target when it disappeared. The involvement of intentionally-generated expectancies in eye movement control was also obvious in the spatial trajectories of the eyes, which showed a clear flexion in anticipation of the circular motion path we used. These findings are discussed together with neurophysiological mechanisms underlying the SP eye movements.
Article
Examined position constancy in human vision with 86 college students. When the eyes track a moving object, the image of a stationary target shifts on the retina colinearly with the eye movement. A compensation process called position constancy prevents this image shift from causing perceived target motion commensurate with the image shift. Four experiments were conducted to investigate whether position constancy operates when the image shift and the eye movement are not colinear. Exp I investigated the induced motion of a target that, while subject to induction, moved perpendicularly to its induced motion. In Exp II, the perceived target motion differed significantly from the objective target motion. In Exps III and IV, the perceived target motions were also in close agreement with the presumed image paths. Overall, the results indicate that when the target undergoes a motion of its own during the eye movement, position constancy fails to operate whenever the direction of the target motion forms an angle with the direction of the eye movement.
Chapter
It has been realised since the early work of Dallos and Jones (1963) that predictive behaviour must be present in human ocular pursuit for three main reasons: (1) performance is better in response to predictable periodic target motion than it is to more random stimuli; (2) when pursuing sinusoids, phase errors at frequencies above 0.5 Hz are much less than would be expected from the time delay (100 ms) in visual feedback (Carl and Gellman 1987); (3) in a linear velocity error feedback system a combination of high gain and a large time delay would lead to an unstable system. But evidence of predictive behaviour is not readily apparent because anticipatory smooth movements cannot normally be generated at will. Over recent years we have carried out experiments designed to facilitate the generation of anticipatory movements and investigate their role in predictive pursuit (Barnes et al. 1987; Barnes and Asselman 1991; Wells and Barnes 1998). The model presented here was developed on the basis of the results from these experiments and attempts to demonstrate how predictive processes reduce phase errors during periodic tracking through the short-term storage of pre-motor drive information and its subsequent release to form anticipatory smooth movements.
Article
Various visual cues provide information about depth and shape in a scene. When several of these cues are simultaneously available in a single location in the scene, the visual system attempts to combine them. In this paper, we discuss three key issues relevant to the experimental analysis of depth cue combination in human vision: cue promotion, dynamic weighting of cues, and robustness of cue combination. We review recent psychophysical studies of human depth cue combination in light of these issues. We organize the discussion and review as the development of a model of the depth cue combination process termed modified weak fusion (MWF). We relate the MWF framework to Bayesian theories of cue combination. We argue that the MWF model is consistent with previous experimental results and is a parsimonious summary of these results. While the MWF model is motivated by normative considerations, it is primarily intended to guide experimental analysis of depth cue combination in human vision. We describe experimental methods, analogous to perturbation analysis, that permit us to analyze depth cue combination in novel ways. In particular these methods allow us to investigate the key issues we have raised. We summarize recent experimental tests of the MWF framework that use these methods.
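As a pointer to the Bayesian scheme that MWF is related to, here is a minimal inverse-variance weighting sketch; it is illustrative only, not the authors' MWF implementation (which adds cue promotion, dynamic reweighting, and robustness).

```python
import numpy as np

def combine_cues(estimates, variances):
    """Reliability-weighted cue combination: each cue's estimate is
    weighted in proportion to its inverse variance; the fused variance
    is smaller than that of any single cue."""
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    fused = np.dot(weights, np.asarray(estimates, dtype=float))
    fused_var = 1.0 / np.sum(1.0 / variances)
    return fused, fused_var

# Example: a reliable cue says 50 cm, a noisier cue says 60 cm.
print(combine_cues([50.0, 60.0], [1.0, 4.0]))  # (52.0, 0.8)
```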
Article
The perception of motion of physically moving points of light was investigated in terms of the distinction between absolute and relative motion cues and the change in the effectiveness of the latter as a function of the frontoparallel separation between the points. In situations in which two competing relative motion cues were available to determine the perceived path of motion of a point of light, it was found that the relative motion cue between more adjacent points was more effective than the relative motion cue between more separated points. In situations in which only one relative motion cue was available to determine the perceived motion of a point it was found that the effectiveness of this cue as compared with the absolute motion cue decreased with increased separation. These results are predictable from the adjacency principle which states that the effectiveness of cues between objects is an inverse function of object separation. Some consequences of the study for the theory of motion perception are discussed.
Article
When each S receives a number of conditions in a balanced or random order, an unwanted range effect can sometimes reverse the rank order of the experimental results. With a range effect, responses are influenced by the range of stimuli, the range of responses used by the S, or both. Range effects generally involve a central tendency but not always. There is no way of discovering whether a within-S design has introduced an unwanted range effect, except by repeating parts of the experiment using a separate-groups design. It is suggested that textbooks and courses on experimental design in psychology should emphasize the dangers of within-S designs. Conflicts between experimental results can sometimes be resolved by discarding the results of within-S designs. Revisions of theory may then be necessary.
Article
During smooth pursuit eye movements, an illusory motion of background objects is often perceived. This so-called Filehne illusion has been quantified and explored by Mack and Herman [Q. J. Exp. Psychol., 25, 71–84 (1973); Vision Res., 18, 55–62 (1978)]. According to them, two independent factors contribute to the Filehne illusion: (1) a subject-relative factor, viz. the underregistration of pursuit eye movements by the perceptual system, and (2) an object-relative factor, viz. adjacency of the pursued fixation point and the background stimulus. The evidence of the present experiment supports the former but rejects the latter as a contributing factor. Instead of the concept of adjacency, an alternative theoretical extension of the subject-relative factor is offered.
Chapter
Statistics is a subject of many uses and surprisingly few effective practitioners. The traditional road to statistical knowledge is blocked, for most, by a formidable wall of mathematics. The approach in An Introduction to the Bootstrap avoids that wall. It arms scientists and engineers, as well as statisticians, with the computational techniques they need to analyze and understand complicated data sets.
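The book's central computational idea fits in a few lines; below is a minimal percentile-bootstrap sketch, with n_boot, alpha, and the sample data chosen purely for illustration.

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample with replacement, recompute the
    statistic, and take percentiles of the resampled distribution as
    an approximate (1 - alpha) confidence interval."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    boots = np.array([stat(rng.choice(data, size=data.size, replace=True))
                      for _ in range(n_boot)])
    return tuple(np.percentile(boots, [100 * alpha / 2,
                                       100 * (1 - alpha / 2)]))

print(bootstrap_ci([2.1, 2.9, 3.4, 1.8, 2.6, 3.1]))  # 95% CI for the mean
```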
Article
Experiments were performed on the delay times involved in smooth eye movements when a tracked target disappears. The results are difficult to explain on the basis of the largely accepted "servomotor" theory of smooth pursuit eye movements, and an alternative point of view, entirely different from that theory, is proposed.
Article
With accurate measurement of eye position during smooth tracking, comparison of the retinal and perceived paths of spots of light moving in harmonic motion indicates little compensation for smooth pursuit eye movements by the perceptual system. The data suggest that during smooth pursuit, the perceptual system has access to information about direction of tracking, and assumes a relatively low speed, almost irrespective of the actual speed of the eye. It appears, then, that the specification of innervation to the extraocular muscles for smooth tracking is predominantly peripheral, i.e. it occurs beyond the stage in the efferent command process monitored by perception.
Article
Saccades, elicited by an identical visual stimulus in repeated trials, exhibit a certain amount of amplitude and direction scatter. The present paper illustrates how this scatter may be used to discern various properties of the subsystem that determines the metrics of a saccade. It is found in humans that scatter along the eccentricity axis is consistently more pronounced than along the direction axis. The ratio of amplitude scatter and direction scatter is approximately constant for all target positions tested. In addition, the absolute amount of scatter increases roughly linearly with target eccentricity but does not depend on target direction. We have explored whether these findings may reflect noisy variations in the neural representation of the saccade vector at the level of the collicular motor map. There are good reasons to assume that the motor map, at least in the monkey, (1) is organized in polar coordinates, (2) has a nonhomogeneous (roughly logarithmic) representation of saccade amplitude and (3) is anisotropic in nature (Robinson, 1972; Ottes, Van Gisbergen & Eggermont, 1986; Van Gisbergen, Van Opstal & Tax, 1987). To account for the intertrial variability in saccades, we have slightly extended an existing model for the collicular role in the coding of saccade metrics (Van Gisbergen et al., 1987) by allowing small variations in both the total amount and the location of the collicular population activity. We discuss how such noisy variations at the level of the motor map would be expressed in the metrics of saccadic responses and consider alternative models which could explain our data.
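A toy simulation consistent with these findings (not the authors' collicular model) might draw amplitude noise proportional to eccentricity and a fixed angular jitter in direction; this reproduces both the roughly constant amplitude/direction scatter ratio and the roughly linear growth of scatter with eccentricity. The constants below are illustrative, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_saccade_endpoint(ecc_deg, dir_deg, k_amp=0.10, sigma_dir_deg=3.0):
    """Toy endpoint-scatter model: amplitude noise grows linearly with
    target eccentricity, while directional noise is a fixed angular
    jitter, so cross-axis scatter also scales with eccentricity and the
    amplitude scatter stays larger than the direction scatter."""
    amp = ecc_deg + rng.normal(0.0, k_amp * ecc_deg)
    ang = np.radians(dir_deg + rng.normal(0.0, sigma_dir_deg))
    return amp * np.cos(ang), amp * np.sin(ang)

endpoints = np.array([noisy_saccade_endpoint(10.0, 0.0) for _ in range(1000)])
print(endpoints.std(axis=0))  # x (amplitude) scatter > y (direction) scatter
```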
Article
If physical movements are to be seen veridically, it is necessary to distinguish between displacements over the retina due to self-motion and those due to object motion. When target motion is in a different direction from that of a pursuit eye movement, the perceived motion of the target is known to be shifted in direction toward the retinal path, indicating a partial failure of compensation for eye movements (Becklen, Wallach, & Nitzberg, 1984). The experiments reported here compared the perception of target motion when the head and/or eyes were moving in a direction different from that of the target. In three experiments, target motion was varied in direction, phase, and extent with respect to pursuit movements. In all cases, the compensation was less effective for head than for eye movements, although this difference was least when the extent of the tracked and target motions was the same. Compensation for pursuit eye movements was better than that reported in previous studies.
Article
Subjects adjusted the path of moving stimuli to produce apparent slopes of 45 degrees with respect to horizontal. The stimulus was either a single moving dot or a vertical or horizontal bar. In separate experiments either the stimuli were tracked or fixation was maintained on a stationary fixation target positioned 8 deg to the right of the center of stimulus motion. In both experiments the selected path slopes were in general more horizontal than 45 degrees. This pattern indicates that subjects overestimate the vertical component of motion along an oblique path, and is interpreted as a manifestation of the spatial anisotropy generally termed the 'horizontal-vertical illusion'. Additionally, paths selected for horizontal bars were more vertical than those for vertical bars. This finding is interpreted in the context of a previous report of the influence of stimulus orientation on perceived velocity.
Article
Eye movements were recorded in human subjects who tracked a target spot which moved horizontally at constant speeds. At random times during its trajectory, the target disappeared for variable periods of time and the subjects attempted to continue tracking the invisible target. The smooth pursuit component of their eye movements was isolated and averaged. About 190 ms after the target disappeared, the smooth pursuit velocity began to decelerate rapidly. The time course of this deceleration was similar to that in response to a visible target whose velocity decreased suddenly. After a deceleration lasting about 280 ms, the velocity stabilized at a new, reduced level which we call the residual velocity. The residual velocity remained more or less constant or declined only slowly even when the target remained invisible for 4 s. When the same target velocity was used in all trials of an experiment, the subjects' residual velocity amounted to 60% of their normal pursuit velocity. When the velocity was varied randomly from trial to trial, the residual velocity was smaller; for target velocities of 5, 10, and 20 deg/s it reached 55, 47, and 39% respectively. The subjects needed to see targets of unforeseeable velocity for no more than 300 ms in order to develop a residual velocity that was characteristic of the given target velocity. When a target of unknown velocity disappeared at the very moment the subject expected it to start, a smooth movement developed nonetheless and reached within 300 ms a peak velocity of 5 deg/s which was independent of the actual target velocity and reflected a "default" value for the pursuit system. Thereafter the eyes decelerated briefly and then continued with a constant or slightly decreasing velocity of 2-4 deg/s until the target reappeared. Even when the subjects saw no moving target during an experiment, they could produce a smooth movement in the dark and could grade its velocity as a function of that of an imagined target. We suggest that the residual velocity reflects a first order prediction of target movement which is attenuated by a variable gain element. When subjects are pursuing a visible target, the gain of this element is close to unity. When the target disappears but continued tracking is attempted, the gain is reduced to a value between 0.4 and 0.6.
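The suggested mechanism reduces to a stored velocity command passed through a variable gain element; a minimal sketch follows, with the occluded gain set to the midpoint of the 0.4-0.6 range reported above.

```python
def pursuit_command(stored_target_velocity, target_visible,
                    gain_visible=1.0, gain_occluded=0.5):
    """First-order prediction of target motion attenuated by a variable
    gain element: near-unity gain while the target is visible, roughly
    0.4-0.6 during transient disappearance. Values are illustrative."""
    gain = gain_visible if target_visible else gain_occluded
    return gain * stored_target_velocity

print(pursuit_command(10.0, target_visible=False))  # ~5 deg/s residual
```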
Article
Smooth pursuit eye movements allow primates to keep gaze pointed at small objects moving across stationary surroundings. In monkeys trained to track a small moving target, we have injected brief perturbations of target motion under different initial conditions as probes to read out the state of the visuo-motor pathways that guide pursuit. A large eye movement response was evoked if the perturbation was applied to a moving target the monkey was tracking. A small response was evoked if the same perturbation was applied to a stationary target the monkey was fixating. The gain of the response to the perturbation increased as a function of the initial speed of target motion and as a function of the interval from the onset of target motion to the time of the perturbation. The response to the perturbation also was direction selective. Gain was largest if the perturbation was along the axis of ongoing target motion and smallest if the perturbation was orthogonal to the axis of target motion. We suggest that two parallel sets of visual motion pathways through the extrastriate visual cortex may mediate, respectively, the visuo-motor processing for pursuit and the modulation of the gain of transmission through those pathways.
Article
During smooth pursuit eye movements made across a stationary background an illusory motion of the background is perceived (Filehne illusion). The present study was undertaken in order to test if the Filehne illusion can be influenced by information unrelated to the retinal image slip prevailing and to the eye movement being executed. The Filehne illusion was measured in eight subjects by determining the amount of external background motion required to compensate for the illusory background motion induced by 12 deg/sec rightward smooth pursuit. Using a two-alternative forced-choice method, test trials, which yielded the estimate of the Filehne illusion, were randomly interleaved with conditioning trials, in which high retinal image slip was created by background stimuli moving at a constant horizontal velocity. There was a highly reproducible monotonic relationship between the size and direction of the Filehne illusion and the velocity of the background stimulus in the conditioning trials with the following extremes: large Filehne illusions with illusory motion to the right occurred for conditioning stimuli moving to the left, i.e. opposite to the direction of eye movement in the test trials, while conversely, conditioning stimuli moving to the right yielded Filehne illusions close to zero. Additional controls suggest that passive motion aftereffects are unlikely to account for the modulation of the Filehne illusion by the conditioning stimulus. We hypothesize that this modification might reflect the dynamic character of the networks elaborating spatial constancy.
Article
When we make a smooth eye movement to track a moving object, the visual system must take the eye's movement into account in order to estimate the object's velocity relative to the head. This can be done by using extra-retinal signals to estimate eye velocity and then subtracting expected from observed retinal motion. Two familiar illusions of perceived velocity, the Filehne illusion and the Aubert-Fleischl phenomenon, are thought to be the consequence of the extra-retinal signal underestimating eye velocity. These explanations assume that retinal motion is encoded accurately, which is questionable because perceived retinal speed is strongly affected by several stimulus properties. We develop and test a model of head-centric velocity perception that incorporates errors in estimating eye velocity and in retinal-motion sensing. The model predicts that the magnitude and direction of the Filehne illusion and Aubert-Fleischl phenomenon depend on spatial frequency, and this prediction is confirmed experimentally.
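A hedged one-dimensional sketch of such a model (gain values chosen only for illustration; in the paper the retinal term additionally depends on stimulus properties such as spatial frequency):

```python
def head_centric_velocity(retinal_vel, eye_vel,
                          retinal_gain=1.2, eye_gain=0.8):
    """Head-centric velocity as the sum of a possibly misestimated
    retinal-motion signal and an underestimating extra-retinal
    eye-velocity signal (velocities in deg/s along one axis)."""
    return retinal_gain * retinal_vel + eye_gain * eye_vel

# Filehne-type case: a stationary background during 10 deg/s pursuit
# produces -10 deg/s retinal motion; imperfect gains leave an illusory
# perceived velocity opposite to the pursuit direction.
print(head_centric_velocity(-10.0, 10.0))  # -4.0 deg/s
```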
Article
Eye movements add a constant displacement to the visual scene, altering the retinal-image velocity. Therefore, in order to recover the real world motion, eye-movement effects must be compensated. If full compensation occurs, the perceived speed of a moving object should be the same regardless of whether the eye is stationary or moving. Using a pursue-fixate procedure in a perceptual matching paradigm, we found that eye movements systematically bias the perceived speed of the distal stimulus, indicating a lack of compensation. Speed judgments depended on the interaction between the distal stimulus size and the eye velocity relative to the distal stimulus motion. When the eyes and distal stimulus moved in the same direction, speed judgments of the distal stimulus approximately matched its retinal-image motion. When the eyes and distal stimulus moved in the opposite direction, speed judgments depended on the stimulus size. For small sizes, perceived speed was typically overestimated. For large sizes, perceived speed was underestimated. Results are explained in terms of retinal-extraretinal interactions and correlate with recent neurophysiological findings.
Article
Using Sphaeroides spengleri (Bloch), the southern swell-fish, visual inversion of one eye was secured by 180° surgical rotation. The other eye was blinded. Visual inversion was accompanied by forced circling movements, which survived bilateral ablation of the forebrain, the cerebellum, or the inferior lobes of the infundibulum. Circling was not eliminated by bilateral labyrinthectomy and severance of the extraocular muscles, but was abolished if the eye was returned to its normal orientation, and if the optic lobe of the rotated eye was removed.
Article
To perceive the external environment our brain uses multiple sources of sensory information derived from several different modalities, including vision, touch and audition. All these different sources of information have to be efficiently merged to form a coherent and robust percept. Here we highlight some of the mechanisms that underlie this merging of the senses in the brain. We show that, depending on the type of information, different combination and integration strategies are used and that prior knowledge is often required for interpreting the sensory signals.
Article
It is an essential feature for the visual system to keep track of self-motion to maintain space constancy. Therefore the saccadic system uses extraretinal information about previous saccades to update the internal representation of memorized targets, an ability that has been identified in behavioral and electrophysiological studies. However, a smooth eye movement induced in the latency period of a memory-guided saccade yielded contradictory results. Indeed some studies described spatially accurate saccades, whereas others reported retinal coding of saccades. Today, it is still unclear how the saccadic system keeps track of smooth eye movements in the absence of vision. Here, we developed an original two-dimensional behavioral paradigm to further investigate how smooth eye displacements could be compensated to ensure space constancy. Human subjects were required to pursue a moving target and to orient their eyes toward the memorized position of a briefly presented second target (flash) once it appeared. The analysis of the first orientation saccade revealed a bimodal latency distribution related to two different saccade programming strategies. Short-latency (<175 ms) saccades were coded using the only available retinal information, i.e., position error. In addition to position error, longer-latency (>175 ms) saccades used extraretinal information about the smooth eye displacement during the latency period to program spatially more accurate saccades. Sensory parameters at the moment of the flash (retinal position error and eye velocity) influenced the choice between both strategies. We hypothesize that this tradeoff between speed and accuracy of the saccadic response reveals the presence of two coupled neural pathways for saccadic programming. A fast striatal-collicular pathway might only use retinal information about the flash location to program the first saccade. The slower pathway could involve the posterior parietal cortex to update the internal representation of the flash once extraretinal smooth eye displacement information becomes available to the system.
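The two strategies identified here can be sketched as a simple vector computation (a simplification of the paradigm, with hypothetical inputs; 2-D positions in deg):

```python
import numpy as np

def first_saccade_vector(flash_retinal_error, smooth_eye_displacement,
                         long_latency):
    """Short-latency saccades (<175 ms) use only the flash's retinal
    position error; longer-latency saccades also subtract the smooth
    eye displacement accumulated since the flash, yielding spatially
    more accurate responses."""
    error = np.asarray(flash_retinal_error, dtype=float)
    if long_latency:
        return error - np.asarray(smooth_eye_displacement, dtype=float)
    return error

print(first_saccade_vector([5.0, 0.0], [2.0, 0.0], long_latency=True))   # [3. 0.]
print(first_saccade_vector([5.0, 0.0], [2.0, 0.0], long_latency=False))  # [5. 0.]
```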
Article
To perceive the real motion of objects in the world while moving the eyes, retinal motion signals must be compensated by information about eye movements. Here we study when this compensation takes place in the course of visual processing, and whether uncompensated motion signals are ever available. We used a paradigm based on asymmetry in motion detection: Fast-moving objects are found easier among slow-moving distractors than are slow objects among fast distractors. By coupling object motion to eye motion, we created stimuli that moved fast on the retina but slowly in an eye-independent reference frame, or vice versa. In the 100 ms after stimulus onset, motion detection is dominated by retinal motion, uncompensated for eye movements. As early as 130 ms, compensated signals become available: objects that move slowly on the retina but fast in an eye-independent frame are detected as easily as those that move fast on the retina.
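The coupling of object motion to eye motion described here amounts to velocity addition between reference frames; a one-line sketch with illustrative numbers (one axis, deg/s):

```python
def screen_velocity_for(desired_retinal_vel, eye_vel):
    """Gaze-contingent construction: retinal velocity equals screen
    velocity minus eye velocity, so the screen velocity that yields a
    desired retinal velocity is their sum."""
    return desired_retinal_vel + eye_vel

# Fast on the retina but slow on the screen: during 9 deg/s pursuit,
# a -1 deg/s screen velocity produces -10 deg/s retinal motion.
print(screen_velocity_for(-10.0, 9.0))  # -1.0
```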