
Abstract

The acquisition of bidirectional action-effect associations plays a central role in the ability to intentionally control actions. Humans learn about actions not only through active experience, but also through observing the actions of others. In Experiment 1, we examined whether action-effect associations can be acquired by observational learning. To this end, participants observed how a model repeatedly pressed two buttons during an observation phase. Each of the buttonpresses led to a specific tone (action effect). In a subsequent test phase, the tones served as target stimuli to which the participants had to respond with buttonpresses. Reaction times were shorter if the stimulus-response mapping in the test phase was compatible with the action-effect association in the observation phase. Experiment 2 excluded the possibility that the impact of perceived action effects on own actions was driven merely by an association of spatial features with the particular tones. Furthermore, we demonstrated that the presence of an agent is necessary to acquire novel action-effect associations through observation. Altogether, the study provides evidence for the claim that bidirectional action-effect associations can be acquired by observational learning. Our findings are discussed in the context of the idea that the acquisition of action-effect associations through observation is an important cognitive mechanism subserving the human ability for social learning.
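The compatibility logic at the heart of this paradigm (shorter reaction times when the test-phase stimulus-response mapping matches the observed action-effect pairing) can be illustrated with a brief analysis sketch. The snippet below is a hypothetical illustration, not the authors' analysis code: it assumes a CSV file of test-phase reaction times with columns participant, group ("compatible" vs. "incompatible") and rt_ms, and simply compares mean RTs between the two mapping groups.

```python
# Minimal analysis sketch (not the authors' script); assumes a hypothetical
# CSV "test_phase_rts.csv" with columns "participant",
# "group" ("compatible" / "incompatible") and "rt_ms".
import pandas as pd
from scipy import stats

data = pd.read_csv("test_phase_rts.csv")

# Mean reaction time per participant, then aggregate per mapping group.
per_participant = (
    data.groupby(["group", "participant"])["rt_ms"].mean().reset_index()
)
print(per_participant.groupby("group")["rt_ms"].agg(["mean", "std", "count"]))

# Between-groups t-test: shorter RTs in the acquisition-compatible group
# would indicate that observationally acquired action-effect associations
# primed the associated responses.
compatible = per_participant.loc[per_participant["group"] == "compatible", "rt_ms"]
incompatible = per_participant.loc[per_participant["group"] == "incompatible", "rt_ms"]
t, p = stats.ttest_ind(compatible, incompatible)
print(f"t = {t:.2f}, p = {p:.3f}")
```

Under this sketch, a reliably shorter mean RT in the compatible group would mirror the pattern reported in the abstract.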
... These authors used a variant of the two-stage procedure introduced by Elsner and Hommel (2001) (see above). In Paulus, van Dam, et al.'s (2011) experiment, participants first underwent an observation phase in which they observed an actor pressing two buttons (left or right), each action triggering a specific effect (a high-pitched or low-pitched tone). The acquisition of A-E associations was probed in a subsequent test phase in which the participants had to discriminate the tones used in the acquisition phase by pressing a left or right button, according to a consistent (acquisition-compatible group) or inconsistent (acquisition-incompatible group) S-R mapping. ...
... Thus, the aim of the present study was twofold: (i) to provide further evidence for the acquisition of A-E associations through observation and (ii) to generalize this learning to the observation of virtual actions (here, a photo of a real hand). To this end and in line with Paulus, van Dam, et al.'s (2011) work, we followed a two-stage procedure inspired by Elsner and Hommel (2001). During an acquisition phase, we had participants watch a hand on a screen performing actions (index- or little-finger lifting), which triggered either a high-pitched or low-pitched tone. ...
... Auditory stimuli consisted of 400-Hz and 800-Hz tones, presented for 200 ms at a comfortable volume through headphones (Elsner & Hommel, 2001; Paulus, van Dam, et al., 2011). ...
Article
Full-text available
A core assumption of ideomotor theory is that learned bidirectional associations between actions and their effects enable agents to select and initiate actions by anticipating their sensory consequences. Although the acquisition of bidirectional action–effect (A-E) associations built on the experience of one’s own movements has received considerable empirical support, the available evidence for A–E learning through the observation of others’ actions and their effects remains limited. In two experiments, we tested whether A–E associations could be acquired through social learning in an experimental setup involving observation of virtual actions. In an acquisition phase, participants repeatedly observed finger movements on a screen, and each movement was consistently followed by a specific effect tone. In the subsequent test phase, tones were presented as imperative stimuli in a reaction-time task. In both experiments, reaction times were shorter when tones required the same response with which they had been linked in the preceding observation phase, compared with when they required a different response, revealing the impact of A–E associations acquired through observation. Similar results were obtained whether the movements observed during the acquisition phase were spatially aligned (Experiment 1) or not (Experiment 2) with participants’ responses in the test phase, ruling out the possibility that the results merely reflect spatial compatibility effects. Our findings add new evidence for an acquisition of A–E associations through observation. Importantly, we generalize this acquisition process to the observation of virtual actions. These findings further confirm effect-based action control, as proposed by ideomotor theory.
... For example, non-action and its action-effect, as well as action and its action-effect, can be learned ideomotorically in a similar way. Compatibility effects that do not require voluntariness or efference copies can be interpreted as resulting from the learning history between action and action-effect, typically acquired in the acquisition phase of the variable-outcome task (Paulus et al., 2011). Ideomotor theory is a species of (associative) learning theory focusing exclusively on motor action contexts ...
Thesis
Full-text available
The Sense of agency (SoA) as conceived in experimental paradigms adheres to "cognitive penetration" and "cognitive phenomenology." Cognitive penetrability is the assumption that agency states penetrate sensory modalities like time perception – the Intentional binding (IB) hypothesis – and auditory, visual and tactile perceptions – the Sensory attenuation (SA) hypothesis. Cognitive phenomenology, on the other hand, assumes that agency states are perceptual or experiential, akin to sensory states. I critically examine these operationalizations and argue that the SoA is a judgment effect rather than a perceptual/phenomenal state. My thesis criticizes the experimentally operationalized implicit SoA (in chapter 2), explicit SoA (in chapter 3) and cue-integrated SoA (in chapter 4) by arguing that: (a) There is uncertainty in the SoA experimental operationalization (making the participants prone to judgment effects); (b) There are inconsistencies and incoherence between different findings and reports in the SoA domain; (c) The SoA reports are influenced by prior as well as online-generated beliefs (under uncertainty); (d) The SoA operationalizations applied an inaccuracy or approximation standard for measuring the perception/experience of agency; (e) Under a certainty and accuracy standard (for perception), the (biased or non-veridical) SoA reports might not have occurred at all; and (f) The reported inconsistencies and the effects of beliefs can be parsimoniously accounted for by the compositional nature of judgment. Thus, my thesis concludes that SoA reports are not instances of feelings/perceptions but are judgments.
Article
How does one know that one is the causal agent of one's motor actions? Earlier theories of the sense of agency attributed the capacity for perceiving self-agency to the comparator process of the motor-control/action system. However, with findings implying a role of non-motor cues (like affective states, beliefs, primed concepts, and social instructions or previews of actions) in the sense-of-agency literature, the perception of self-agency is hypothesized to be generated even by non-motor cues (based on their relative reliability or weighting estimate); this theory has come to be known as the cue-integration theory of the sense of agency. However, the cue-integration theory motivates skepticism about whether it is falsifiable and whether it is plausible that non-motor cues that are sensorily unrelated to the typical sensory processes of self-agency can produce a perception of self-agency. To substantiate this skepticism, I critically analyze the experimental operationalizations of cue-integration—with the (classic) vicarious agency experiment as a case study—to show that (1) the participants in these experiments are uncertain about their causal agency over motor actions, (2) these participants therefore resort to reports of self-agency as heuristic judgments (under ambiguity) rather than through cue-integration per se, and (3) cue-integration-based self-agency reports might not have occurred at all if these experimental operationalizations had eliminated the ambiguity about causal agency. Thus, I conclude that the reports of self-agency observed in typical non-motor-cue cue-integration experiments are not instances of a perceptual effect—the effect hypothesized to be produced by non-motor cues—but of a heuristic judgment effect.
... The second stage (test phase) tests whether associations have been formed. In line with the assumption that such associations are bidirectional, exposing participants to previously encountered effects has been found to facilitate the respective associated actions (e.g., Elsner & Hommel, 2001; Paulus et al., 2011; Pfister, 2019; Shin et al., 2010). In the present research, we tested the idea that action-effect associations are not limited to actual behavior but can also be acquired through verbal instructions. ...
Article
Full-text available
Action–effect learning is based on a theoretical concept that actions are associated with their perceivable consequences through bidirectional associations. Past research has mostly investigated how these bidirectional associations are formed through actual behavior and perception of the consequences. The present research expands this idea by investigating how verbally formulated action–effect instructions contribute to action–effect learning. In two online experiments (Exp. 1, N = 41, student sample; Exp. 2, N = 349, non-student sample), participants memorized a specific action–effect instruction before completing a speeded categorization task. We assessed the consequences of the instructions by presenting the instructed effect as an irrelevant stimulus in the classification task and compared response errors and response times for instruction-compatible and instruction-incompatible responses. Overall, we found evidence that verbal action–effect instructions led to associations between an action and perception (effect) that are automatically activated upon encountering the previously verbally presented effect. In addition, we discuss preliminary evidence suggesting that the order of the action–effect components plays a role; only instructions in a perception–action order showed the expected effect. The present research contributes evidence to the idea that action–effect learning is not exclusively related to actual behavior but also achievable through verbally formulated instructions, thereby providing a flexible learning mechanism that does not rely on specific actual experiences.
... Knowledge of tool use can originate from observation (e.g., Want and Harris, 2002; Flynn, 2008; Paulus et al., 2011). This is usually explained by the observer forming associations between the observed tool changes and the actions of the observed person triggering these changes (Paulus, 2012, 2014). ...
Article
Full-text available
Objects which a human agent controls by efferent activities (such as real or virtual tools) can be perceived by the agent as belonging to his or her body. This suggests that what an agent counts as “body” is plastic, depending on what she or he controls. Yet there are possible limitations for such momentary plasticity. One of these limitations is that sensations stemming from the body (e.g., proprioception) and sensations stemming from objects outside the body (e.g., vision) are not integrated if they do not sufficiently “match”. What “matches” and what does not is conceivably determined by long–term experience with the perceptual changes that body movements typically produce. Children have accumulated less sensorimotor experience than adults have. Consequently, they express higher flexibility to integrate body-internal and body-external signals, independent of their “match” as suggested by rubber hand illusion studies. However, children’s motor performance in tool use is more affected by mismatching body-internal and body-external action effects than that of adults, possibly because of less developed means to overcome such mismatches. We review research on perception-action interactions, multisensory integration, and developmental psychology to build bridges between these research fields. By doing so, we account for the flexibility of the sense of body ownership for actively controlled events and its development through ontogeny. This gives us the opportunity to validate the suggested mechanisms for generating ownership by investigating their effects in still developing and incomplete stages in children. We suggest testable predictions for future studies investigating both body ownership and motor skills throughout the lifespan.
... Of course, the dyadic observational SR binding task is not the only paradigm which reflects close parallels between merely observing versus self-performing an action. Additional evidence stems from a variety of other tasks like observational acquisition of response-effect bindings (Paulus et al., 2011), imitation tasks (Brass et al., 2001), behavioral mimicry (Van Baaren et al., 2009) or joint action tasks (Sebanz et al., 2003). Importantly, many of the effects investigated in these tasks are also moderated by social relevance. ...
Article
Full-text available
Merely observing how another person responds to a stimulus results in incidental stimulus-response (SR) bindings in memory. These observationally acquired SR bindings can be retrieved on a later occasion. Retrieval will bias current behavioral response tendencies towards re-execution of the observed response. Previous demonstrations of this effect employed a dyadic interaction paradigm in which two co-actors respond in alternating fashion. The present paper investigates a video-based version of the observational SR binding task in which videotaped responses are observed on screen. Whereas findings from the dyadic paradigm indicate that retrieval of observationally acquired SR bindings is modulated by social relevance, the video-based paradigm is not influenced by social moderators. Data from four experiments show that manipulations of visual perspective and of natural and artificial group membership had no modulatory effect on retrieval of observationally acquired SR bindings in the video-based paradigm. The absence of any socially modulated effect in the video-based paradigm is supported by Bayesian statistics in favor of the null hypothesis. Data from a fifth experiment suggest that observational SR binding and retrieval effects in the video-based paradigm reflect the influence of spatial attention allocated towards response keys of observed responses. Implications for the suitability of both paradigms to study observational learning and joint action phenomena are discussed.
... According to the ideomotor theory, the effects of voluntariness or self-generatedness are not unique, but belong to a family of (associatively learned) ideomotor effects (Hommel et al., 2017b; Janczyk et al., 2012; Pfister et al., 2011). Compatibility effects that do not require voluntariness or efference copies can be interpreted as resulting from the learning history between action and action-effect, typically established in the acquisition phase of the variable action-outcome contingency task (Paulus et al., 2011). ...
Article
Full-text available
The explicit sense of agency (SoA) is characterized as the unique and exclusive feeling generated by action or agency states (or the comparator process of the motor control system, to be specific), and (thus) this characterization assumes "cognitive phenomenology," the assumption that non-sensory states like actions or agency states, all by themselves, generate a unique feeling akin to typical sensory processes. However, the assumption of cognitive phenomenology is questionable as it fails to account for the necessity of causal interaction between the sensory organ and the phenomenal object in the production of phenomenology or experience. Thus, this paper criticizes the explicit SoA — as operationalized in experiments — by arguing that: (a) there is uncertainty in the explicit SoA operationalization (making the participants prone to judgment effects), (b) there are non-correlations or dissociations between agency states and explicit SoA reports, (c) explicit SoA reports are influenced by prior beliefs or online-generated heuristics, and (d) were the participants not uncertain about their agency (or the causal contingency between their actions and action-effects), they might not have produced non-veridical explicit SoA reports at all. Thus, this paper concludes that explicit SoA reports are not instances of (cognitive or agentive) phenomenology but are instances of heuristic judgment (under uncertainty).
... At the same time, the observer might recruit her cortical motor system, mentally simulating the observed action [69]. Once the action-sound association is formed, merely hearing the sound might already activate motor areas in the brain [102]. ...
Preprint
Full-text available
When newly born into this world, there is an overwhelming multitude of things to learn, ranging from learning to speak to learning how to solve a mathematical equation. Amidst this abundance, action perception is developing already in the first months of life. Why would learning about others’ actions be among the first items to acquire? What is the relevance of action perception for young infants? Part of the answer probably lies in the strong dependence on others. Newborn human infants need caretakers even for fulfilling their basic needs. Weak neck muscles make it hard for them to lift up their head, and most of their movements come across as uncoordinated. Clearly, getting themselves a drink or dressing themselves is not part of their repertoire. Their reliance on their caregivers makes these caregivers and their actions important for the young infant. Seeing that the caregiver responds to their calls can already reduce some of the stress that comes with being so dependent. As such, it is helpful for an infant to learn to distinguish different actions of the caregiver. Not only are the caregivers’ actions focused on the infant’s physical needs, but also on helping the infant to regulate her emotions. Parents typically comfort a baby by softly rocking them, and by talking and smiling to them. Social interaction between caregiver and infant thus starts immediately after birth, and these interactions help them to bond. In the context of social interaction, it is useful to be able to distinguish a smile from a frown. Interpreting the facial actions of others is vital to successful communication. Moreover, in the period in which infants are still very limited in their own actions, observing others’ actions forms a main resource for learning about the world. Making sense of others’ actions is therefore of central importance already during early development.
Article
Human actions sometimes aim at preventing an event from occurring. How these to-be-prevented events are represented, however, is poorly understood. Recent proposals in the literature point to a possible divide between effect-producing, operant actions, and effect-precluding, prevention actions, suggesting that the control of operant actions relies on codes of environment-related effects whereas prevention actions do not. Here we report two experiments on this issue, showing that spatial features (Experiment 1) as well as temporal features (Experiment 2) of to-be-prevented events influence actions in the same way as corresponding features of to-be-produced effects. This implies that selecting and executing prevention actions relies on anticipated environmental changes, comparable to operant actions.
Article
Full-text available
The Theory of Event Coding (TEC) has influenced research on action and perception across the past two decades. It integrates several seminal empirical phenomena and it has continued to stimulate novel experimental approaches on the representational foundations of action control and perceptual experience. Yet, many of the most notable results surrounding TEC originate from an era of psychological research that relied on rather small sample sizes as judged by today’s standards. This state hampers future research aiming to build on previous phenomena. We, therefore, provide a multi-lab re-assessment of the following six classical observations: response-effect compatibility, action-induced blindness, response-effect learning, stimulus–response binding, code occupation, and short-term response-effect binding. Our major goal is to provide precise estimates of corresponding effect sizes to facilitate future scientific endeavors. These effect sizes turned out to be considerably smaller than in the original reports, thus allowing for informed decisions on how to address each phenomenon in future work. Of note, the most relevant results of the original observations were consistently obtained in the present experiments as well.
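Because the re-assessment centers on effect-size estimates, a compact sketch of how such an estimate could be computed for one of the listed phenomena (response-effect compatibility) may be useful. It is a hypothetical illustration under assumed file and column names, not the project's actual analysis pipeline.

```python
# Minimal sketch (hypothetical, not the replication project's pipeline): a
# within-participant standardized effect size (Cohen's d_z) for a
# response-effect compatibility difference in reaction times.
# File and column names are assumptions.
import numpy as np
import pandas as pd

data = pd.read_csv("rec_trials.csv")  # columns: participant, condition, rt_ms

# Mean RT per participant and condition, then the per-participant difference.
cell_means = data.pivot_table(
    index="participant", columns="condition", values="rt_ms", aggfunc="mean"
)
diff = cell_means["incompatible"] - cell_means["compatible"]

d_z = diff.mean() / diff.std(ddof=1)        # standardized paired difference
se = diff.std(ddof=1) / np.sqrt(len(diff))  # standard error of the mean difference
print(f"Compatibility effect: {diff.mean():.1f} ms (SE = {se:.1f} ms), d_z = {d_z:.2f}")
```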
Article
Full-text available
Human actions may be driven endogenously (to produce desired environmental effects) or exogenously (to accommodate to environmental demands). There is a large body of evidence indicating that these two kinds of action are controlled by different neural substrates. However, only little is known about what happens—in functional terms—on these different “routes to action”. Ideomotor approaches claim that actions are selected with respect to their perceptual consequences. We report experiments that support the validity of the ideomotor principle and that, at the same time, show that it is subject to a far-reaching constraint: It holds for endogenously driven actions only! Our results suggest that the activity of the two “routes to action” is based on different types of learning: The activity of the system guiding stimulus-based actions is accompanied by stimulus–response (sensorimotor) learning, whereas the activity of the system controlling intention-based actions results in action–effect (ideomotor) learning.
Article
Full-text available
There is evidence indicating that an individual can learn a motor skill by observing a model practising it. In the present study we wanted to determine whether observation would permit one to learn the relative timing pattern required to perform a new motor skill. Also, we wanted to determine the joint effects of observation and of physical practice on the learning of that relative timing pattern. Finally, we were interested in finding whether there was an optimal type of model, advanced or beginner, which would better support observational learning. Data from two experiments indicated that observation of either a beginner or an advanced model resulted in modest learning of a constrained relative timing pattern. Observation also resulted in significant parameterization learning. However, a combination of observation followed by physical practice resulted in significant learning of the constrained relative timing pattern. These results suggest that observation engages one in cognitive processes similar to those occurring during physical practice.
Article
Full-text available
According to the authors' 2-phase model of action control, people first incidentally acquire bidirectional associations between motor patterns and movement-contingent events and then intentionally use these associations for goal-directed action. The authors tested the model in 4 experiments, each comprising an acquisition phase, in which participants experienced co-occurrences between left and right keypresses and low- and high-pitched tones, and a test phase, in which the tones preceded the responses in forced- and free-choice designs. Both reaction time and response frequency in the test phase depended on the learned associations, indicating that presenting a tone activated the associated response. Results are interpreted as evidence for automatic action–outcome integration and automatic response priming through learned action effects. These processes may be basic for the control of voluntary action by the anticipation of action goals.
Article
Four factors are essential to learning: drives, cues, responses, and rewards. Social motivations, which are secondary drives, include imitativeness, a process by which matched acts are evoked in two people and connected to appropriate cues. "In matched-dependent behavior, the leader is able to read the relevant environmental cue, but the follower is not; the latter must depend upon the leader for the signal as to what act is to be performed and where and when." In copying behavior "the copier must slowly bring his response to approximate that of a model and must know, when he has done so, that his act is an acceptable reproduction of the model act." The authors present not only a theoretical analysis of these problems but also experiments on rats and children where the problem has been to teach the subject to imitate. There is a discussion of crowd behavior, an analysis of a case of lynching, and a discussion of the diffusion of culture. Appendices present a revision of Holt's theory of imitation and a historical review of the general topic. "Our position is that if there are any innate connections between stimuli and responses of the imitative type, they are few and isolated." "In summary, imitation can greatly hasten the process of independent learning by enabling the subject to perform the first correct response sooner than he otherwise would… . In order for imitation to elicit the first correct response, the essential units of copying or matched-dependent behavior must already have been learned."
Article
Motor imagery is viewed as a window to cognitive motor processes and particularly to motor control. Mental simulation theory [Jeannerod, M., 2001. Neural simulation of action: a unifying mechanism for motor cognition. NeuroImage 14, 103–109] stresses that cognitive motor processes such as motor imagery and action observation share the same representations as motor execution. This article presents an overview of motor imagery studies in cognitive psychology and neuroscience that support and extend predictions from mental simulation theory. In general, behavioral data as well as fMRI and TMS data demonstrate that motor areas in the brain play an important role in motor imagery. After discussing results on a close overlap between mental and actual performance durations, the review focuses specifically on studies reporting an activation of primary motor cortex during motor imagery. This focus is extended to studies on motor imagery in patients. Motor imagery is also analyzed in more applied fields such as mental training procedures in patients and athletes. These findings support the notion that mental training procedures can be applied as a therapeutic tool in rehabilitation and in applications for power training.
Article
Debates about the evolution of the ‘mirror neuron system’ imply that it is an adaptation for action understanding. Alternatively, mirror neurons may be a byproduct of associative learning. Here I argue that the adaptation and associative hypotheses both offer plausible accounts of the origin of mirror neurons, but the associative hypothesis has three advantages. First, it provides a straightforward, testable explanation for the differences between monkeys and humans that have led some researchers to question the existence of a mirror neuron system. Second, it is consistent with emerging evidence that mirror neurons contribute to a range of social cognitive functions, but do not play a dominant, specialised role in action understanding. Finally, the associative hypothesis is supported by recent data showing that, even in adulthood, the mirror neuron system can be transformed by sensorimotor learning. The associative account implies that mirror neurons come from sensorimotor experience, and that much of this experience is obtained through interaction with others. Therefore, if the associative account is correct, the mirror neuron system is a product, as well as a process, of social interaction.
Article
This paper investigates a two-stage model of infants' imitative learning from observed actions and their effects. According to this model, the observation of another person's action activates the corresponding motor code in the infants' motor repertoire (i.e. leads to motor resonance). The second process guiding imitative behavior results from the observed action effects. If the modeled action is followed by a salient action effect, the representation of this effect (i.e. perceptual code) will be associated with the activated motor code. If the infant later aims to obtain the same effect, the corresponding motor program will be activated and the model's action will therefore be imitated. Accordingly, the model assumes that for the imitation of novel actions the modeled action needs to elicit sufficient motor resonance and must be followed by a salient action effect. Using the head touch imitation paradigm, we tested these two assumptions derived from the model. To this end, we manipulated whether the actions demonstrated to the infants were or were not in the motor repertoire, i.e. elicited stronger or less strong motor resonance, and whether they were followed by salient action effects or not. The results were in line with the proposed two-stage model of infants' imitative learning and suggest that motor resonance is necessary, but not sufficient for infants' imitative learning from others' actions and their effects.
Article
A new framework for the understanding of functional relationships between perception and action is discussed. According to this framework, perceived events and planned actions share a common representational domain (common-coding approach). Supporting evidence from two classes of experimental paradigms is presented: induction paradigms and interference paradigms. Induction paradigms study how certain stimuli induce certain actions by virtue of similarity. Evidence from two types of induction tasks is reviewed: sensorimotor synchronisation and spatial compatibility tasks. Interference paradigms study the mutual interference between the perception of ongoing events and the preparation and control of ongoing action. Again, evidence from two types of such tasks is reviewed, implying interference in either direction. It is concluded that the evidence available supports the common-coding principle. A further general principle emerging from these studies is the action-effect principle, that is, the principle that cognitive representations of action effects play a critical role in the planning and control of these actions.