Perception of coherent motion, biological motion and form-from-motion under dim-light conditions

Grossman ED, Blake R. Department of Psychology, Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN 37240, USA.
Vision Research (Impact Factor: 1.82). 12/1999; 39(22):3721-7. DOI: 10.1016/S0042-6989(99)00084-X
Source: PubMed


Three experiments investigated several aspects of motion perception at high and low luminance levels. Detection of weak coherent motion in random dot cinematograms was unaffected by light level over a range of dot speeds. The ability to judge form from motion was, however, impaired at low light levels, as was the ability to discriminate normal from phase-scrambled biological motion sequences. The difficulty distinguishing differential motions may be explained by increased spatial pooling at low light levels.
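
For readers unfamiliar with the stimulus, the following sketch illustrates how a random dot cinematogram with a given proportion of coherently moving dots can be generated. It assumes NumPy, and every name and value in it (n_dots, coherence, speed, direction_deg) is illustrative rather than taken from the experiments reported above.

    # Minimal sketch of a random dot cinematogram (RDK) generator.
    # Parameter names and values are illustrative, not those of the study.
    import numpy as np

    def rdk_frames(n_frames=30, n_dots=100, coherence=0.1,
                   direction_deg=0.0, speed=0.02, seed=None):
        """Return a list of (n_dots, 2) dot-position arrays in the unit square.

        A proportion `coherence` of the dots steps in `direction_deg` on every
        frame; the remaining dots step in random directions (dot lifetime and
        other details of the published stimuli are deliberately simplified).
        """
        rng = np.random.default_rng(seed)
        pos = rng.random((n_dots, 2))                      # random start positions
        n_signal = int(round(coherence * n_dots))
        step = speed * np.array([np.cos(np.deg2rad(direction_deg)),
                                 np.sin(np.deg2rad(direction_deg))])
        frames = [pos.copy()]
        for _ in range(n_frames - 1):
            noise_angles = rng.uniform(0.0, 2.0 * np.pi, n_dots - n_signal)
            noise_step = speed * np.column_stack([np.cos(noise_angles),
                                                  np.sin(noise_angles)])
            pos[:n_signal] += step                         # coherent (signal) dots
            pos[n_signal:] += noise_step                   # incoherent (noise) dots
            pos %= 1.0                                     # wrap around the edges
            frames.append(pos.copy())
        return frames

    # Example: a weakly coherent (10%) rightward-drifting cinematogram.
    frames = rdk_frames(coherence=0.10, direction_deg=0.0)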

    • "Results that are consistent with the dissociation between visual acuity and motion coherence thresholds in patients with amblyopia have also been found in observers with normal vision. For example, motion coherence thresholds are unaffected by stimulus manipulations that significantly impair visual acuity such as low lighting conditions (Grossman & Blake, 1999) and optical defocus (Trick & Silverman, 1991; Trick, Steinman, & Amyot, 1995). Furthermore, no relationship between visual acuity and motion coherence thresholds was found in a group of 2-year old children born at risk of neonatal hypoglycemia (Yu et al., 2013). "
    ABSTRACT: Global motion processing depends on a network of brain regions that includes extrastriate area V5 in the dorsal visual stream. For this reason, psychophysical measures of global motion perception have been used to provide a behavioral measure of dorsal stream function. This approach assumes that global motion is relatively independent of visual functions that arise earlier in the visual processing hierarchy such as contrast sensitivity and visual acuity. We tested this assumption by assessing the relationships between global motion perception, contrast sensitivity for coherent motion direction discrimination (henceforth referred to as contrast sensitivity) and habitual visual acuity in a large group of 4.5-year-old children (n=117). The children were born at risk of abnormal neurodevelopment because of prenatal drug exposure or risk factors for neonatal hypoglycemia. Motion coherence thresholds, a measure of global motion perception, were assessed using random dot kinematograms. The contrast of the stimuli was fixed at 100% and coherence was varied. Contrast sensitivity was measured using the same stimuli by fixing motion coherence at 100% and varying dot contrast. Stereoacuity was also measured. Motion coherence thresholds were not correlated with contrast sensitivity or visual acuity. However, lower (better) motion coherence thresholds were correlated with finer stereoacuity (ρ=0.38, p=0.004). Contrast sensitivity and visual acuity were also correlated (ρ=-0.26, p=0.004) with each other. These results indicate that global motion perception for high contrast stimuli is independent of contrast sensitivity and visual acuity and can be used to assess motion integration mechanisms in children. Copyright © 2015. Published by Elsevier Ltd.
    Vision research 08/2015; 115. DOI:10.1016/j.visres.2015.08.007 · 1.82 Impact Factor
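    The motion coherence thresholds described in the abstract above are typically estimated by fitting a psychometric function to accuracy at several coherence levels. The sketch below shows one common way to do this; it assumes SciPy, and the Weibull form, the data points and the starting values are illustrative assumptions rather than the study's actual procedure.

        # Minimal sketch of estimating a motion coherence threshold from
        # direction-discrimination data. The Weibull form, the data points and
        # the starting values are illustrative, not the study's procedure.
        import numpy as np
        from scipy.optimize import curve_fit

        def weibull(coherence, alpha, beta, guess=0.5, lapse=0.02):
            """Proportion correct as a function of motion coherence (2AFC)."""
            return guess + (1.0 - guess - lapse) * (1.0 - np.exp(-(coherence / alpha) ** beta))

        coherence = np.array([0.05, 0.10, 0.20, 0.40, 0.80])   # proportion of signal dots
        p_correct = np.array([0.52, 0.61, 0.78, 0.93, 0.97])   # hypothetical performance

        (alpha, beta), _ = curve_fit(weibull, coherence, p_correct,
                                     p0=[0.2, 2.0], bounds=([1e-3, 0.5], [1.0, 10.0]))
        print(f"coherence threshold (alpha, roughly 80% correct here): {alpha:.2f}")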
    • "Indeed, evidence shows that cortical area STS combines motion and form information, mediating the extraction of structure from motion (SFM) in humans (Beer et al., 2009; Orban, 2011). Although this has been investigated extensively using rigid-motion in experiments concerned with object recognition, it has also been implicated in the perception of non-rigid biological motion in pointlight displays (Beintema et al., 2006b; Grossman and Blake, 1999; Thirkettle et al., 2009b). Obtaining SFM in point-light stimuli requires an analysis of points to extract their patterns of accelerations and velocities to the extent they reveal three-dimensional structural information. "
    ABSTRACT: Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues, two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli and using audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays derived via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective local motion information, or rendered with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and the contrast-based luminance conditions compared to the auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015. Published by Elsevier Ltd.
    Neuropsychologia 06/2015; 75. DOI:10.1016/j.neuropsychologia.2015.06.025 · 3.30 Impact Factor
    • "Another common test used for spatial perception is the “Benton judgment for line orientation” [8], in which the subjects compare straight lines oriented in different directions with a pull of reference lines and they have to find the line in the pull that matches the orientation of the line under test. Motion blindness is often tested with Random Dot Cinematograms [9], patterns of dots moving in a direction that has to be recognized by the subjects. In all these tests, the results are usually the number of items correctly addressed at the different tasks and the diagnosis is given based on this number and on a comparison with normative data. "
    ABSTRACT: Background: Higher visual functions can be defined as cognitive processes responsible for object recognition, color and shape perception, and motion detection. People with impaired higher visual functions after a unilateral brain lesion are often tested with paper-and-pencil tests, but such tests do not assess the degree of interaction between the healthy brain hemisphere and the impaired one. Hence, visual functions are not tested separately in the contralesional and ipsilesional visual hemifields. Methods: A new measurement setup, involving real-time comparisons of the shape and size of objects, the orientation of lines, and the speed and direction of moving patterns presented in the right or left visual hemifield, has been developed. The setup was implemented in an immersive, hemispheric environment to take into account the effects of peripheral and central vision and any visual field losses. Because the screen of the hemisphere is not flat, a distortion algorithm was needed to adapt the projected images to the surface. Several approaches were studied and, based on a comparison between projected images and original ones, the best one was used for the implementation of the test. Fifty-seven healthy volunteers were then tested in a pilot study. A Satisfaction Questionnaire was used to assess the usability of the new measurement setup. Results: The distortion algorithm yielded a structural similarity between the warped images and the original ones higher than 97%. The pilot study showed an accuracy in comparing images across the two visual hemifields of 0.18° and 0.19° of visual angle for size and shape discrimination, respectively; 2.56° for line orientation; 0.33°/s for speed perception; and 7.41° for recognition of motion direction. The outcome of the Satisfaction Questionnaire showed high acceptance of the battery by the participants. Conclusions: A new method to measure higher visual functions in an immersive environment was presented. The study focused on the usability of the developed battery rather than on performance at the visual tasks. A battery of five subtasks to study the perception of size, shape, orientation, speed and motion direction was developed. The test setup is now ready to be tested in neurological patients.
    BioMedical Engineering OnLine 07/2014; 13(1):104. DOI:10.1186/1475-925X-13-104 · 1.43 Impact Factor
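    The structural similarity figure quoted above (higher than 97%) corresponds to a standard image-quality comparison between each warped image and its original. Below is a minimal sketch, assuming scikit-image; the file names are placeholders, and the call is not claimed to match the authors' exact implementation.

        # Minimal sketch of comparing a warped projection image with its original
        # via the structural similarity index (SSIM). File names are placeholders.
        from skimage.io import imread
        from skimage.metrics import structural_similarity

        original = imread("original_stimulus.png", as_gray=True)   # float image in [0, 1]
        warped = imread("warped_stimulus.png", as_gray=True)       # must match original's shape

        score = structural_similarity(original, warped, data_range=1.0)
        print(f"structural similarity: {score:.1%}")               # the paper reports > 97%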