Vision-for-action: the effects of object property discrimination and action state on affordance compatibility effects.

Centre for Clinical and Cognitive Neuroscience, School of Psychology, University of Wales, Bangor, Gwynedd LL57 2AS, Wales.
Psychonomic Bulletin & Review. 07/2006; 13(3):493-8. DOI: 10.3758/BF03193875
Source: PubMed

ABSTRACT When a person views an object, the action the object evokes appears to be activated independently of the person's intention to act. We demonstrate two further properties of this vision-to-action process. First, it is not completely automatic but is determined by which stimulus properties of the object are attended: when a person discriminates an object's shape, action affordance effects are observed, but when a person discriminates an object's color, no affordance effects are observed. The former property, shape, is associated with action, such as how an object might be grasped; the latter, color, is irrelevant to action. Second, we show that the action state of an object influences the evoked action: active objects, with which current action is implied, produce larger affordance effects than passive objects, with which no action is implied. We suggest that the active object activates action simulation processes similar to those proposed in mirror systems.

  • ABSTRACT: Use of current health care equipment for medical procedures (e.g., central line insertions and central line care) is primarily dependent on the cognition of the health care worker. That is, the present design of equipment (typically numerous, separately packaged individual items) provides minimal information about the optimal order of procedure steps and no defenses against human error, such as omitting steps in a procedure. In this article, we propose that patient safety may be improved by redesigning equipment to integrate a “checklist” using sequencing, color coding, and visual icons. We hypothesize that this reduces cognitive demand by off-loading knowledge into the world, creating affordances that provide guidance, reducing the likelihood of errors and promoting adherence to best practices. © 2011 Wiley Periodicals, Inc.
    Human Factors and Ergonomics in Manufacturing 01/2012; 22(1). DOI:10.1002/hfm.20289
  • ABSTRACT: A right-hand preference for visually guided grasping has been shown in numerous studies. Grasping an object requires the integration of both visual and motor components of visuomotor processing. It has been suggested that the left hemisphere plays an integral role in visuomotor functions. The present study investigates whether the visual processing of graspable objects, without any actual reaching or grasping movements, yields a right-hand (left-hemisphere) advantage. Further, we aim to address whether such an advantage is automatically evoked by motor affordances. Two groups of right-handed participants were asked to categorize objects presented on a computer monitor by responding on a keypad. The first group was asked to categorize visual stimuli as graspable (e.g. apple) or non-graspable (e.g. car). A second group categorized the same stimuli but as nature-made (e.g. apple) or man-made (e.g. car). Reaction times were measured in response to the visually presented stimuli. Results showed a right-hand advantage for graspable objects only when participants were asked to respond to the graspable/non-graspable categorization. When participants were asked to categorize objects as nature-made or man-made, a right-hand advantage for graspable objects did not emerge. The results suggest that motor affordances may not always be automatic and might require conscious representations that are appropriate for object interaction. Copyright © 2014 Elsevier Inc. All rights reserved.
    Brain and Cognition 12/2014; 93C:18-25. DOI:10.1016/j.bandc.2014.11.003
  • ABSTRACT: Tipper, Paul and Hayes found object-based correspondence effects for door-handle stimuli for shape judgments but not colour judgments. They reasoned that a grasping affordance is activated when judging dimensions related to a grasping action (shape), but not other dimensions (colour). Cho and Proctor, however, found the effect with respect to handle position when the bases of the door handles were centred (so the handles were positioned left or right; the base-centred condition) but not when the handles themselves were centred (the object-centred condition), suggesting that the effect is driven by object location, not grasping affordance. We conducted an independent replication of Cho and Proctor's design, but with behavioural and event-related potential measures. Participants made shape judgments in Experiment 1 and colour judgments in Experiment 2 on the same door-handle objects. Correspondence effects on response time and errors were obtained in both experiments for the base-centred condition but not the object-centred condition. Effects were absent in the P1 and N1 data, which is consistent with the hypothesis of little binding between visual processing of the grasping component and action. These findings question the grasping-affordance view but support a spatial-coding view, suggesting that correspondence effects are modulated primarily by object location.
    Journal of Cognitive Psychology 07/2014; 26(6). DOI:10.1080/20445911.2014.940959
