3-D shape perception.
ABSTRACT: In this paper, we analyze and test three theories of 3-D shape perception: (1) Helmholtzian theory, which assumes that perception of the shape of an object involves reconstructing the Euclidean structure of the object (up to size scaling) from the object's retinal image after taking into account the object's orientation relative to the observer; (2) Gibsonian theory, which assumes that shape perception involves invariants (projective or affine) computed directly from the object's retinal image; and (3) perspective invariants theory, which assumes that shape perception involves a new kind of invariant of perspective transformations. Predictions of these three theories were tested in four experiments. In the first experiment, we showed that reliable discrimination between a perspective and a nonperspective image of a random polygon is possible even when only information about the contour of the image is present. In the second experiment, we showed that discrimination performance did not benefit from the presence of a textured surface providing information about the 3-D orientation of the polygon, and that the subjects could not reliably discriminate between the 3-D orientation of the textured surface and that of the shape. In the third experiment, we compared discrimination for solid shapes that either had flat contours (cuboids) or did not have visible flat contours (cylinders). The discrimination was very reliable in the case of cuboids but not in the case of cylinders. In the fourth experiment, we tested the effectiveness of planar motion in the perception of distances and showed that the discrimination threshold was large and similar to thresholds obtained when other cues to 3-D orientation were used. All these results support perspective invariants as a model of 3-D shape perception.
- Source available from: Zygmunt Pizlo
Article: Curve detection in a noisy image.
ABSTRACT: In this paper we propose a new theory of the Gestalt law of good continuation. In this theory, perceptual processes are modeled by an exponential pyramid algorithm. To test the new theory we performed three experiments. The subject's task was to detect a target (a set of dots arranged along a straight or curved line) among background dots. Detectability was high when: (a) the target was long; (b) the density of target dots relative to the density of background dots was large; (c) the local change of angle was small along the entire line; (d) local properties of the target were known to the subject. These results are consistent with our new model and they contradict prior models.
Vision Research 06/1997; 37(9):1217-41. Impact Factor: 2.14.
- ABSTRACT: When a planar shape is viewed obliquely, it is deformed by a perspective deformation. If the visual system were to pick up geometrical invariants from such projections, these would necessarily be invariant under the wider class of projective transformations. To what extent can the visual system tell the difference between perspective and nonperspective, but still projective, deformations of shapes? To investigate this, observers were asked to indicate which of two test patterns most resembled a standard pattern. The test patterns were related to the standard pattern by a perspective or projective transformation, or they were completely unrelated. Performance was slightly better in a matching task with perspective and unrelated test patterns (92.6%) than in a projective-random matching task (88.8%). In a direct comparison, participants had a small preference (58.5%) for the perspectively related patterns over the projectively related ones. Preferences were based on the values of the transformation parameters (slant and shear). Hence, perspective and projective transformations yielded perceptual differences, but they were not treated in a categorically different manner by the human visual system.
Psychonomic Bulletin & Review 06/1997; 4(2):248-253. Impact Factor: 2.99.
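The distinction tested in this study can be made concrete: a perspective image of a planar shape is a homography of a restricted form (fixed by the plane's slant and viewing distance), whereas a general projective deformation has eight free parameters that need not correspond to any single rigid view. A minimal illustrative sketch (NumPy); the slant, distance, and matrix entries below are made-up parameters, not values from the study:

```python
import numpy as np

def apply_homography(H, pts):
    """Apply a 3x3 homography to an (N, 2) array of 2D points."""
    h = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return h[:, :2] / h[:, 2:3]

# A square "standard pattern" in the picture plane.
square = np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]], float)

# Perspective deformation: rotate the plane by a slant about the
# vertical axis, then project from viewing distance d (pinhole model).
slant, d = np.deg2rad(40.0), 5.0
cs, sn = np.cos(slant), np.sin(slant)
H_persp = np.array([[cs,      0, 0],
                    [0,       1, 0],
                    [-sn / d, 0, 1]])

# General projective deformation: same 8-parameter family, but these
# arbitrary entries correspond to no single slanted rigid view.
H_proj = np.array([[1.10, 0.30,  0.10],
                   [0.00, 0.90, -0.20],
                   [0.05, 0.08,  1.00]])

print(apply_homography(H_persp, square))
print(apply_homography(H_proj, square))
```

Note the asymmetric foreshortening in the perspective case: the vertices nearer the viewer are magnified relative to the farther ones, which is exactly the regularity a perspective-sensitive observer could exploit.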
- ABSTRACT: The process of surface perception is complex and depends on several factors, e.g., shading, silhouettes, occluding contours, and top-down cognition. The accuracy of surface perception can be measured, and the influencing factors can be modified in order to decrease the error in perception. This paper presents a novel concept of how a perceptual evaluation of a visualization technique can contribute to its redesign, with the aim of improving the match between the distal and the proximal stimulus. During analysis of data from previous perceptual studies, we observed that the slant of 3D surfaces visualized on 2D screens is systematically underestimated. The visible trends in the error allowed us to create a statistical model of the perceived surface slant. Based on this statistical model, obtained from user experiments, we derived a new shading model that uses adjusted surface normals and aims to reduce the error in slant perception. The result is a shape enhancement of visualization driven by an experimentally founded statistical model. To assess the efficiency of the statistical shading model, we repeated the evaluation experiment and confirmed that the error in perception was decreased. The results of both user experiments are publicly available datasets.
IEEE Transactions on Visualization and Computer Graphics 01/2012; 18(12):2265-2274. Impact Factor: 1.90.
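The normal-adjustment idea described in this abstract can be sketched as follows. A hypothetical linear underestimation factor (0.8) stands in for the paper's fitted statistical model; none of the numbers below come from the study. The sketch inverts the perception model, tilting each shading normal further from the view direction so that the slant the viewer perceives matches the true slant:

```python
import numpy as np

UNDERESTIMATION = 0.8  # hypothetical: perceived slant = 0.8 * displayed slant

def adjusted_normal(n, view=np.array([0.0, 0.0, 1.0])):
    """Tilt a shading normal away from the view direction so that the
    underestimated perceived slant comes out equal to the true slant."""
    n = n / np.linalg.norm(n)
    true_slant = np.arccos(np.clip(n @ view, -1.0, 1.0))
    # Invert the (assumed) perception model: display the slant whose
    # perceived value equals the true slant, capped below 90 degrees.
    displayed = min(true_slant / UNDERESTIMATION, np.pi / 2 - 1e-6)
    # Rotate n away from the view direction within the plane they span.
    axis = n - (n @ view) * view
    if np.linalg.norm(axis) < 1e-9:   # normal faces the viewer: no tilt
        return n
    axis = axis / np.linalg.norm(axis)
    return np.cos(displayed) * view + np.sin(displayed) * axis

print(adjusted_normal(np.array([0.0, 0.5, 1.0])))
```

Shading computed with the adjusted normal then depicts a steeper surface, compensating for the systematic underestimation the authors measured.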