Vision-for-Action: The effects of object property discrimination and action state on affordance compatibility effects

Centre for Clinical and Cognitive Neuroscience, School of Psychology, University of Wales, Bangor, Gwynedd LL57 2AS, Wales.
Psychonomic Bulletin & Review (Impact Factor: 2.99). 07/2006; 13(3):493-8. DOI: 10.3758/BF03193875
Source: PubMed


When a person views an object, the action the object evokes appears to be activated independently of the person's intention to act. We demonstrate two further properties of this vision-to-action process. First, it is not completely automatic, but is determined by which stimulus properties of the object are attended. Thus, when a person discriminates the shape of an object, action affordance effects are observed; but when a person discriminates an object's color, no affordance effects are observed. The former property, shape, is associated with action, such as how an object might be grasped; the latter, color, is irrelevant to action. Second, we also show that the action state of an object influences evoked action. Thus, active objects, with which current action is implied, produce larger affordance effects than passive objects, with which no action is implied. We suggest that the active object activates action simulation processes similar to those proposed in mirror systems.

    • "shape decision) but did not affect affordance-irrelevant information (e.g. colour decision, Tipper et al. 2006). Kritikos et al. (2001) showed that action execution kinematics were affected by the volumetric properties of the distracters (e.g. their size) but not by their semantic properties (e.g. an apple distracter for a green bean target). "
    ABSTRACT: Perception is linked to action via two routes: a direct route based on affordance information in the environment and an indirect route based on semantic knowledge about objects. The present study explored the factors modulating the recruitment of the two routes, in particular which factors affect the selection of paired objects. In Experiment 1, we presented real objects among semantically related or unrelated distracters. Participants had to select two objects that could interact. Selection times were affected by the presence of distracters, but not by the semantic relations between the objects and the distracters. Furthermore, participants first selected the active object (e.g. teaspoon) with their right hand, followed by the passive object (e.g. mug), often with their left hand. In Experiment 2, we presented pictures of the same objects with no hand grip, a congruent hand grip, or an incongruent hand grip. Participants had to decide whether the two objects could interact. Action decisions were faster when the presentation of the active object preceded the presentation of the passive object, and when the grip was congruent. Interestingly, participants were slower when the objects were semantically but not functionally related; this effect increased with congruently gripped objects. Our data showed that action decisions in the presence of strong affordance cues (real objects, pictures of congruently gripped objects) relied on sensory-motor representations, supporting the direct route from perception to action that bypasses semantic knowledge. However, in the case of weak affordance cues (pictures), semantic information interfered with action decisions, indicating that semantic knowledge impacts action decisions. The data support the dual-route account from perception to action.
    Experimental Brain Research 05/2015; 233(8). DOI:10.1007/s00221-015-4296-7 · 2.04 Impact Factor
    • "There is evidence supporting this hypothesis. A number of studies using a S–R compatibility as a paradigm have shown that the handle affordance of a viewed object automatically influences responseselection processes (e.g., McBride, Sumner, & Husain, 2012; Phillips & Ward, 2002; Tipper, Paul, & Hayes, 2006). Originally, this effect was reported by Tucker and Ellis (1998) whose participants were asked to decide whether a common graspable object, presented in a computer monitor, was upright or inverted and to respond as fast as possible with their left or right hand according to these categories. "
    ABSTRACT: Behavioural evidence has shown that the perception of an object's handle automatically activates the corresponding action representation. The activation appears to be inhibited if the object is a task-irrelevant prime mug that is presented very briefly prior to responding to the target arrow. The present study uses an electrophysiological indicator of automatic response priming, the lateralized readiness potential (LRP), to investigate the mechanisms of this inhibition effect. We presumed that this effect would reflect motor self-inhibition processes. The self-inhibition explanation would assume that the effect reflects activation followed by inhibition, observed rapidly after the offset of the prime at the primary motor cortex. However, the results showed that the effect is not associated with modulation of the early LRP deflections. In contrast, the inhibition manifested itself in the later LRP deflections, which we assume to be linked to interference in the processing of response-related aspects of the target. We propose that the LRP pattern is similar to what would be predicted from the negative priming explanation of the effect. The study sheds light on the inhibition mechanisms associated with automatically activated affordance representations.
    Quarterly Journal of Experimental Psychology 11/2013; 67(9). DOI:10.1080/17470218.2013.868007 · 2.13 Impact Factor
    • "Also in conflict with the notion of strictly automatic activation of affordances, attention to the semantic properties (e.g., goal-directed use) of the graspable objects was shown to reliably modulate the affordance effect. When the experimental task is relevant to the grasp-related potential of the perceived object, the resulting affordance effect is stronger (Creem and Proffitt, 2001; Shuch et al., 2010; Tipper, Paul, and Hayes, 2006). "
    ABSTRACT: Two experiments investigated (1) how activation of manual affordances is triggered by visual and linguistic cues to manipulable objects and (2) whether graspable object parts play a special role in this process. Participants pressed a key to categorize manipulable target objects copresented with manipulable distractor objects on a computer screen. Three factors were varied in Experiment 1: (1) the congruency of the target's handle orientation with the lateral manual response, (2) the congruency of the distractor's handle orientation with that response, and (3) the visual focus on one of the objects. In Experiment 2, a linguistic cue factor was added to these three factors: participants heard the name of one of the two objects prior to the target display onset. Analysis of participants' motor and oculomotor behaviour confirmed that perceptual and linguistic cues potentiated activation of grasp affordances. Both target- and distractor-related affordance effects were modulated by the presence of visual and linguistic cues. However, a differential visual attention mechanism subserved activation of compatibility effects associated with target and distractor objects. We also registered an independent implicit attention attraction effect from objects' handles, suggesting that graspable parts automatically attract attention during object viewing. This effect was further amplified by visual but not linguistic cues, thus providing initial evidence for a recent hypothesis about the differential roles of visual and linguistic information in potentiating stable and variable affordances (Borghi in Language and action in cognitive neuroscience. Psychology Press, London, 2012).
    Experimental Brain Research 08/2013; 229(4):545-559. DOI:10.1007/s00221-013-3616-z · 2.04 Impact Factor