Article

Forward Models for Physiological Motor Control

Authors: R. C. Miall, D. M. Wolpert

Abstract

Based on theoretical and computational studies it has been suggested that the central nervous system (CNS) internally simulates the behaviour of the motor system in planning, control and learning. Such an internal “forward” model is a representation of the motor system that uses the current state of the motor system and motor command to predict the next state. We will outline the uses of such internal models for solving several fundamental computational problems in motor control and then review the evidence for their existence and use by the CNS. Finally we speculate how the location of an internal model within the CNS may be identified. Copyright © 1996 Elsevier Science Ltd.
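
As a rough illustration of the idea in the abstract, the following sketch (the dynamics and all parameter values are assumptions for illustration only) implements a forward model for a one-dimensional point-mass effector: given the current state and an efference copy of the motor command, it returns the predicted next state, and iterating it simulates a movement ahead of delayed sensory feedback.

```python
import numpy as np

# Toy forward model for a 1-D point mass (all values are illustrative assumptions).
# State x = [position, velocity]; motor command u = applied force. The model
# predicts the next state from the current state and an efference copy of u.

DT, MASS = 0.01, 1.0                 # 10 ms time step, 1 kg effector
A = np.array([[1.0, DT],
              [0.0, 1.0]])           # passive state transition
B = np.array([0.0, DT / MASS])       # effect of the motor command

def forward_model(x, u):
    """Predict the next state given the current state and motor command."""
    return A @ x + B * u

x = np.array([0.0, 0.0])             # start at rest
for _ in range(100):                 # mentally simulate 1 s of a constant 2 N push
    x = forward_model(x, u=2.0)
print(x)                             # predicted position and velocity after 1 s
```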

... To measure motor prediction, we used motor imagery, in which participants performed the same 9 m locomotor path mentally. Forward models are believed to be involved in mental movement simulation (Miall & Wolpert, 1996;Wolpert & Flanagan, 2001) and temporal features of mental movements emerge from sensorimotor predictions (Papaxanthis et al., 2012;Sirigu et al., 1996). Finally, we manipulated action observation (locomotion inference) and prediction (mental locomotion) processes, by creating sensory illusions via peripheral mechanical muscle vibration on leg muscles, which has provided consistent results in the literature (Ivanenko et al., 2000). ...
... An important feature of motor behavior is the ability to predict the consequences of actions before sensory feedback is available. Evidence supports the hypothesis that motor prediction is generated by internal forward models, which are neural networks that mimic the causal flow of the physical process by predicting the future sensorimotor state (e.g., position, velocity) given the efferent copy of the motor command and the current state (Miall & Wolpert, 1996; Wolpert & Flanagan, 2001). Motor prediction is useful in motor learning through mental practice (Guillot et al., 2021; Ruffino et al., 2017a, 2017b, 2021). It is proposed that the temporal features of mental movements emerge from sensorimotor predictions of the forward model (Papaxanthis et al., 2012; Sirigu et al., 1996). ...
Preprint
The mirror neuron network in the human brain is activated both during the observation of action and the execution of the same action, thus facilitating the transformation of visual information into motor representations in order to understand the actions and intentions of others. How this transformation takes place, however, is still under debate. One prevailing theory, direct matching, assumes a direct correspondence between the visual information of the actor's movement and the activation of the motor representations in the observer's motor cortex that would produce the same movement. Alternatively, the predictive coding theory postulates that, during action observation, motor predictions (e.g., position, velocity) are generated and compared to the visual information of the actor's movement. Here, we experimentally interrogate these two hypotheses during a locomotion task. The motor prediction process was assessed by measuring the timing of imagined movements: the participants had to imagine walking, forward or backward, for 9 m (linear path). Action observation was assessed by measuring time estimation in an inference locomotor task (the same 9 m linear path): after perceiving an actor walking forward or backward for 3 m, the vision of the observer was occluded and he/she had to estimate when the actor would reach the end of the 9 m path. We manipulated the timing processes during the two tasks by creating sensory illusions via peripheral mechanical muscle vibration on leg muscles, which has provided consistent results in the literature (acceleration of forward and deceleration of backward locomotion). We found that sensory illusions specifically affected the timing processes of both locomotion inference and mental locomotion, suggesting the involvement of sensorimotor predictions common to both tasks. These findings seem to support the predictive coding hypothesis.
... A possible interpretation relies on the internal model's framework [26,27], as MI involves the internal forward model [28]. It predicts the future sensorimotor state through the expected consequences of motor execution, by integrating the efferent copy of the motor command and the current state of the sensorimotor system [27]. In this case, while imagining pointing movements under prism exposure, the participants integrated the efferent copy of their successful pointing movement as well as the initial sensory information, i.e., the visually perceived position of the target and the perception of their hand through visual and proprioceptive information. ...
Article
Full-text available
Prism adaptation (PA) is a useful method to investigate short-term sensorimotor plasticity. Following active exposure to prisms, individuals show consistent after-effects, proving that they have adapted to the perturbation. Whether after-effects are transferable to another task or remain specific to the task performed under exposure is of crucial interest for understanding the adaptive processes at work. Motor imagery (MI, i.e., the mental representation of an action without any concomitant execution) offers an original opportunity to investigate the role of cognitive aspects of motor command preparation disregarding actual sensory and motor information related to its execution. The aim of the study was to test whether prism adaptation through MI led to transferable after-effects. Forty-four healthy volunteers were exposed to a rightward prismatic deviation while performing actual (Active group) versus imagined (MI group) pointing movements, or while being inactive (inactive group). Upon prism removal, in the MI group, only participants with the highest MI abilities (MI+ group) showed consistent after-effects on pointing and, crucially, a significant transfer to throwing. This was not observed in participants with lower MI abilities and in the inactive group. However, a direct comparison of pointing after-effects and transfer to throwing between MI+ and the control inactive group did not show any significant difference. Although this interpretation requires caution, these findings suggest that exposure to intersensory conflict might be responsible for sensory realignment during prism adaptation which could be transferred to another task. This study paves the way for further investigations into MI’s potential to develop robust sensorimotor adaptation.
... Here, individual differences in error perception and attention reorientation for error correction actions are argued to be different between experts and beginners (Walia et al., 2022a). In particular, errors can be preemptively corrected by prediction mechanisms based on the forward model (Wolpert and Miall, 1996), which are suggested to improve with expertise. Therefore, tDCS modulation of error-related brain activation measured using portable brain imaging, viz. ...
... functional near-infrared spectroscopy (fNIRS) (Nemani et al., 2018) and electroencephalography (EEG) (Ciechanski et al., 2019), and relating the brain activation changes to behavior can provide mechanistic insights. In particular, a distinction can be made between internal and external monitoring of error based on a predictive forward modeling framework (Wolpert and Miall, 1996) that can be modeled as error perception to corrective action coupling (Kamat et al., 2022a; Walia et al., 2022a). Then, tDCS may facilitate the comparator system in the cerebellum (Tanaka et al., 2020; Welniarz et al., 2021) when an error event is sensed, to drive the attention reorientation for skilled corrective action (Walia et al., 2022a). ...
Preprint
Full-text available
Transcranial direct current stimulation (tDCS) has been shown to facilitate surgical training and performance when compared to sham tDCS; however, the potency may be improved by selecting appropriate brain targets based on neuroimaging and mechanistic insights. Published studies have shown the feasibility of portable brain imaging in conjunction with tDCS during Fundamentals of Laparoscopic Surgery (FLS) tasks for concurrently monitoring the cortical activations via functional near-infrared spectroscopy (fNIRS). Then, fNIRS can be combined with electroencephalogram (EEG), where EEG band power changes have been shown to correspond to the changes in oxyhemoglobin (HbO) concentration found from the fNIRS. In accordance with these prior works, our current study aimed to investigate multi-modal imaging of the brain response to cerebellar (CER) and ventrolateral prefrontal cortex (PFC) tDCS that may facilitate the most complex FLS suturing with intracorporeal knot tying task. Our healthy human study on twelve novices (age: 22-28 years, 2 males, 1 female with left-hand dominance) from medical/premedical backgrounds aimed for mechanistic insights from neuroimaging of brain areas that are related to error-based learning – one of the basic skill acquisition mechanisms. We found that right CER tDCS of the posterior lobe facilitated a statistically significant (q<0.05) brain response at the bilateral prefrontal areas at the start of the FLS task that was higher than sham tDCS. Also, right CER tDCS significantly (p<0.05) improved FLS score when compared to sham tDCS. In contrast, left PFC tDCS failed to facilitate a significant brain response and FLS performance improvement. Moreover, the finding that right CER tDCS facilitated activation of the bilateral prefrontal brain areas related to FLS performance improvement provided mechanistic insight into the CER tDCS effects. These mechanistic insights motivate future investigation of CER tDCS effects on error-related perception-action coupling based on directed functional connectivity studies.
... individual differences in the error perception and attention reorientation for corrective action are postulated to differ between experts and novices. Notably, the error can be preemptively corrected by a predictive mechanism based on a forward model [103] that is postulated to improve with expertise. The brain can be considered an information processing system during skill acquisition. ...
... In that case, the investigation of the error-related states of the system in the experts and novices can provide insights into how the error event drives the attention reorientation for skilled corrective action. Notably, a distinction can be made between "internal monitoring" of error based on a predictive forward modeling framework [103] and "external monitoring" of error based on the action in the environment. In our prior work [46], we presented a perception-action model for brain-behavior analysis of laparoscopic surgical skill training. ...
Article
Full-text available
Error-based learning is one of the basic skill acquisition mechanisms that can be modeled as a perception–action system and investigated based on brain–behavior analysis during skill training. Here, the error-related chain of mental processes is postulated to depend on the skill level leading to a difference in the contextual switching of the brain states on error commission. Therefore, the objective of this paper was to compare error-related brain states, measured with multi-modal portable brain imaging, between experts and novices during the Fundamentals of Laparoscopic Surgery (FLS) “suturing and intracorporeal knot-tying” task (FLS complex task)—the most difficult among the five psychomotor FLS tasks. The multi-modal portable brain imaging combined functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG) for brain–behavior analysis in thirteen right-handed novice medical students and nine expert surgeons. The brain state changes were defined by quasi-stable EEG scalp topography (called microstates) changes using 32-channel EEG data acquired at 250 Hz. Six microstate prototypes were identified from the combined EEG data from experts and novices during the FLS complex task that explained 77.14% of the global variance. Analysis of variance (ANOVA) found that the proportion of the total time spent in different microstates during the 10-s error epoch was significantly affected by the skill level (p
... The 'model state estimate' in Miall and Wolpert's (1996) predictive comparator model is in fact a pure descriptive, since it is the result of transforming the (directive) motor program into another representation. Fig. 7.1: A predictive comparator model from Miall and Wolpert (1996). ...
... The PFC colour/motion system (§4.6b) splits into some pure descriptives and some pure directives. ...
Book
Full-text available
The representational theory of mind (RTM) has given us the powerful insight that thinking consists of the processing of mental representations. Behaviour is the result of these cognitive processes and makes sense in the light of their contents. There is no widely accepted account of how representations get their content – of the metaphysics of representational content. That question, usually asked about representations at the personal level like beliefs and conscious states, is equally pressing for the subpersonal representations that pervade our best explanatory theories in cognitive science. This book argues that well-understood naturalistic resources can be combined to provide an account of subpersonal representational content. It shows how contents arise in a series of detailed case studies in cognitive science. The account is pluralistic, allowing that content is constituted differently in different cases. Building on insights from previous theories, especially teleosemantics, the accounts combine an appeal to correlational information and structural correspondence with an expanded notion of etiological function, which captures the kinds of stabilizing processes that give rise to content. The accounts support a distinction between descriptive and directive content. They also allow us to see how representational explanation gets its distinctive explanatory purchase.
... where F(s) is given by (17). Noting that w^m_N = w_N + n together with (22) yields (19). ...
... How does the cerebellum compensate for delays? It is proposed that the cerebellum could function through a state estimation process to compensate for the delay [7], [22], [23]. ...
Preprint
Full-text available
Given a plant subject to delayed sensor measurement, there are several approaches to compensate for the delay. An obvious approach is to address this problem in state space, where the n-dimensional plant state is augmented by an N-dimensional (Padé) approximation to the delay, affording (optimal) state estimate feedback vis-à-vis the separation principle. Using this framework, we show: (1) Feedback of the estimated plant states partially inverts the delay; (2) The optimal (Kalman) estimator decomposes into N (Padé) uncontrollable states, and the remaining n eigenvalues are the solution to a reduced-order Kalman filter problem. Further, we show that the tradeoff of estimation error (of the full state estimator) between plant disturbance and measurement noise only depends on the reduced-order Kalman filter (which can be constructed independently of the delay); (3) A subtly modified version of this state-estimation-based control scheme bears close resemblance to a Smith predictor. This modified state-space approach shares several limitations with its Smith predictor analog (including the inability to stabilize most unstable plants), limitations that are alleviated when using the unmodified state estimation framework.
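
The following toy sketch is not the preprint's derivation (the plant, gains, and delay are assumed values); it only illustrates the general idea the abstract formalizes: a forward model is rolled over the motor commands issued since the delayed measurement was taken, and feedback acts on the resulting up-to-date state estimate, much as in a Smith-predictor-like scheme.

```python
import numpy as np
from collections import deque

# Toy delay compensation (illustrative values; not the preprint's derivation):
# roll a forward model over the commands issued since the delayed measurement
# was taken, then close the feedback loop on the predicted current state.

DT, DELAY_STEPS = 0.01, 10                       # 10 ms steps, 100 ms sensor delay
A = np.array([[1.0, DT], [0.0, 1.0]])            # point-mass plant
B = np.array([0.0, DT])

def predict_current_state(delayed_state, recent_commands):
    """Advance the delayed measurement through the commands applied since."""
    x = delayed_state.copy()
    for u in recent_commands:
        x = A @ x + B * u
    return x

command_buffer = deque(maxlen=DELAY_STEPS)       # efference copies of recent commands
x_true = np.array([1.0, 0.0])                    # start 1 unit away from the target
history = deque([x_true.copy()], maxlen=DELAY_STEPS + 1)

for _ in range(500):
    delayed = history[0]                         # what the sensor reports right now
    x_hat = predict_current_state(delayed, command_buffer)
    u = -4.0 * x_hat[0] - 3.0 * x_hat[1]         # feedback on the *predicted* state
    x_true = A @ x_true + B * u
    history.append(x_true.copy())
    command_buffer.append(u)

print(x_true)                                    # settles near the target despite the delay
```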
... outcomes are predicted based on action goals and/or motor actions (e.g., Hommel, 2013; James, 1890/1981; Koch, Keller, & Prinz, 2004; Miall & Wolpert, 1996; Shin, Proctor, & Capaldi, 2010; Waszak, Cardoso-Leite, & Hughes, 2012; Wolpert, 1997; Wolpert & Flanagan, 2001). ...
... Based on this sensory state, a motor command is then chosen in order to produce an action which should result in the intended sensory goal state. The activation of this motor command, however, not only triggers the efferent activity itself but also an additional simulation of it, a so-called "efference copy" (e.g., Miall & Wolpert, 1996; Wolpert, 1997; Wolpert & Flanagan, 2001). According to these models, this efference copy is essentially the prediction of the intended sensory goal state, which is subsequently compared with the actually observed sensory states after the action has been performed. ...
Thesis
The “active self” approach suggests that any object we manipulate voluntarily and foreseeably becomes part of our “self” in the sense that we feel control over this object (sense of agency) and experience it as belonging to our own body (sense of ownership). While there is considerable evidence that we can indeed experience both a sense of agency and a sense of ownership over a broad variety of objects when we control these through our actions, the approach has also been criticized for exaggerating the flexibility of the human self. In this thesis, I investigate the influence that the relationship between the body movements controlling an object and the movements of the object itself has on the process of integrating an object into the self. I demonstrate that fully controlling an object is not sufficient for it to be integrated into the self since both explicit and implicit measures of the sense of agency and the sense of ownership indicate less or no integration when body movements are transformed into inverted object movements. Furthermore, I show that such inversions lead to the downregulation of sensory signals either from the body or from the controlled object in order to deal with the conflicting multisensory information when performing such actions. I argue that this downregulation is the underlying factor behind the diminished or eliminated integration of inverted body and object movements and I discuss further pathways for possible future studies building up on these findings.
... Furthermore, another crucial aspect that has only recently been considered is that motor-based sensory prediction can change within a short period of time, consistent with the adaptive nature of internal models (Miall & Wolpert, 1996). For instance, Kilteni et al. (2019) demonstrated that exposure to a systematic delay (100 ms) between the execution and reception of a self-generated touch led to a decrease in perceptual attenuation for immediately delivered self-initiated touch and an increase in attenuation for the delayed touch, representing a retuning of the internal (forward) model. ...
Article
Full-text available
It has been suggested that during action observation, a sensory representation of the observed action is mapped onto one’s own motor system. However, it is largely unexplored what this may imply for the early processing of the action’s sensory consequences, whether the observational viewpoint exerts influence on this and how such a modulatory effect might change over time. We tested whether the event-related potential of auditory effects of actions observed from a first- versus third-person perspective show amplitude reductions compared with externally generated sounds, as revealed for self-generated sounds. Multilevel modeling on trial-level data showed distinct dynamic patterns for the two viewpoints on reductions of the N1, P2, and N2 components. For both viewpoints, an N1 reduction for sounds generated by observed actions versus externally generated sounds was observed. However, only during first-person observation, we found a temporal dynamic within experimental runs (i.e., the N1 reduction only emerged with increasing trial number), indicating time-variant, viewpoint-dependent processes involved in sensorimotor prediction during action observation. For the P2, only a viewpoint-independent reduction was found for sounds elicited by observed actions, which disappeared in the second half of the experiment. The opposite pattern was found in an exploratory analysis concerning the N2, revealing a reduction that increased in the second half of the experiment, and, moreover, a temporal dynamic within experimental runs for the first-person perspective, possibly reflecting an agency-related process. Overall, these results suggested that the processing of auditory outcomes of observed actions is dynamically modulated by the viewpoint over time.
... Their results showed that some users passed through a few common states during teleoperation, but the intermediate behavior was mostly random among the participants. Physiological studies on control and planning in the central nervous system further showed that humans use mental forward models [36] to predict the response to their motor commands [37]. ...
Preprint
Full-text available
The programming of robotic assembly tasks is a key component in manufacturing and automation. Force-sensitive assembly, however, often requires reactive strategies to handle slight changes in positioning and unforeseen part jamming. Learning such strategies from human performance is a promising approach, but faces two common challenges: the handling of low part clearances, which is difficult to capture from demonstrations, and learning intuitive strategies offline without access to the real hardware. We address these two challenges by learning probabilistic force strategies from data that are easily acquired offline, in a robot-less simulation, from human demonstrations with a joystick. We combine a Long Short Term Memory (LSTM) and a Mixture Density Network (MDN) to model human-inspired behavior in such a way that the learned strategies transfer easily onto real hardware. The experiments show a UR10e robot completing a plastic assembly with clearances of less than 100 micrometers, using strategies that were demonstrated solely in simulation.
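
A minimal sketch of the LSTM-plus-MDN architecture named in the abstract (input/output dimensions, layer sizes, and the mixture count are assumptions, not the authors' settings): the recurrent network summarizes a demonstration sequence, and the mixture density head outputs a multimodal distribution over the next force/pose command.

```python
import torch
import torch.nn as nn

# Minimal LSTM + Mixture Density Network head in the spirit of the architecture
# described above; dimensions, layer sizes and the number of mixture components
# are assumptions, not the authors' settings.

class LSTMMDN(nn.Module):
    def __init__(self, input_dim=6, hidden_dim=64, output_dim=6, n_mixtures=5):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.pi = nn.Linear(hidden_dim, n_mixtures)                      # mixture weights
        self.mu = nn.Linear(hidden_dim, n_mixtures * output_dim)         # component means
        self.log_sigma = nn.Linear(hidden_dim, n_mixtures * output_dim)  # component spreads
        self.n_mixtures, self.output_dim = n_mixtures, output_dim

    def forward(self, x):
        h, _ = self.lstm(x)                           # (batch, time, hidden)
        h_last = h[:, -1]                             # summary of the demonstration so far
        pi = torch.softmax(self.pi(h_last), dim=-1)
        mu = self.mu(h_last).view(-1, self.n_mixtures, self.output_dim)
        sigma = torch.exp(self.log_sigma(h_last)).view(-1, self.n_mixtures, self.output_dim)
        return pi, mu, sigma                          # multimodal distribution over the next command

model = LSTMMDN()
demos = torch.randn(8, 50, 6)                         # 8 demonstrations, 50 steps, 6-D input
pi, mu, sigma = model(demos)
```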
... According to motor control theories, the motor system issues the commands necessary to transition from the current to the intended body position to achieve a goal (Blakemore, Wolpert, & Frith, 2002). An efference copy of these commands is used to evaluate the accuracy of the movement performed and apply any necessary corrections, which can happen in the absence of awareness (Fourneret & Jeannerod, 1998;Miall & Wolpert, 1996;Wolpert & Flanagan, 2001;Wolpert, Ghahramani, & Jordan, 1995). ...
Article
We can monitor our intentional movements and form explicit representations about our movements, allowing us to describe how we move our bodies. But it is unclear which information this metacognitive monitoring relies on. For example, when throwing a ball to hit a target, we might use the visual information about how the ball flew to metacognitively assess our performance. Alternatively, we might disregard the ball trajectory - given that it is not directly relevant to our goal - and metacognitively assess our performance based solely on whether we reached the goal of hitting the target. In two experiments we aimed to distinguish between these two alternatives and asked whether the distal outcome of a goal-directed action (hitting or missing a target) informs the metacognitive representations of our own movements. Participants performed a semi-virtual task where they moved their arm to throw a virtual ball at a target. After each throw, participants discriminated which of two ball trajectories displayed on the screen corresponded to the flight path of their throw and then rated their confidence in this decision. The task included two conditions that differed on whether the distal outcome of the two trajectories shown matched (congruent) or differed (incongruent). Participants were significantly more accurate in discriminating between the two trajectories, and responded faster in the incongruent condition and, accordingly, were significantly more confident on these trials. Crucially, we found significant differences in metacognitive performance (measured as meta-d'/d') between the two conditions only on successful trials, where the virtual ball had hit the target. These results indicate that participants successfully incorporated information about the outcome of the movement into both their discrimination and confidence responses. However, information about the outcome selectively sharpened the precision of confidence ratings only when the outcome of their throw matched their intention. We argue that these findings underline the separation between the different levels of information that may contribute to body monitoring, and we provide evidence that intentions might play a central role in metacognitive motor representations.
... Attenuation for self-generated sounds has been observed in the N1 component (Bäß et al., 2008;Elijah et al., 2018;Mifsud et al., 2016;Neszmélyi & Horváth, 2017;Oestreich et al., 2016;Pinheiro et al., 2019;van Elk et al., 2014), the P2 component (Horváth & Burgyán, 2013;Knolle et al., 2012), and the Tb component (Paraskevoudi & SanMiguel, 2022;SanMiguel et al., 2013;Saupe et al., 2013). The nature of these effects is often assumed to be predictive, since efference copies of motor commands are thought to serve as a basis for precise anticipation of sensory stimulation (Miall & Wolpert, 1996). Correctly predicted sensory stimulation is thought to elicit smaller neural responses than wrongly predicted or surprising input, in line with the predictive coding theory of neural processing (Blakemore et al., 1998;Kilner et al., 2007). ...
Preprint
Active engagement improves learning and memory, and self- vs. externally generated stimuli are processed differently: perceptual intensity and neural responses are attenuated. Whether the attenuation is linked to memory formation remains to be understood. This study investigates whether active oculomotor control over auditory stimuli – controlling for movement and stimulus predictability – benefits associative learning, and studies the underlying neural mechanisms. Using EEG and eyetracking we explored the impact of control during learning on the processing and memory recall of arbitrary oculomotor-auditory associations. Participants (N=23) learned associations through active exploration or passive observation, using a gaze-controlled interface to generate sounds. Our results show faster learning progress in the active condition. ERPs time-locked to the onset of sound stimuli showed that learning progress was linked to an attenuation of the P3a component. The detection of matching movement-sound pairs triggered a target-matching P3b response. There was no general modulation of ERPs through active learning. However, participants could be divided into different learner types: those who benefited strongly from active control during learning and those who did not. The strength of the N1 attenuation effect for self-generated stimuli was correlated with memory gain in active learning. Our results show that control helps learning and memory and modulates sensory responses. Individual differences during sensory processing predict the strength of the memory benefit. Taken together, these results help to disentangle the effects of agency, unspecific motor-based neuromodulation, and stimulus predictability on ERP components and establish a link between self-generation effects and active learning memory gain.
... Internal forward models are theorized to support the ability to simulate actions mentally. Specifically, it is held that the brain uses forward modeling to predict the consequences of action (Miall and Wolpert, 1996;Wolpert, 1997;Wolpert and Flanagan, 2001;Imamizu and Kawato, 2009). In this framework, when motor commands are sent to the motor system to achieve a particular intended endstate, an efference copy is issued in parallel. ...
Article
Full-text available
Studies have shown that motor expertise induces improvements in language processing. Grounded and situated approaches attribute this effect to an underlying automatic simulation of the motor experience elicited by action words, similar to motor imagery (MI), and suggest shared representations of action conceptualization. Interestingly, recent results also suggest that the mental simulation of action through MI training induces motor-system modifications and improves motor performance. Consequently, we hypothesize that, since MI training can induce motor-system modifications, it could be used to reinforce the functional connections between the motor and language systems, and could thus lead to improved language performance. Here, we explore these potential interactions by reviewing recent fundamental and clinical literature in the action-language and MI domains. We suggest that exploiting the link between action language and MI could open new avenues for complementary language improvement programs. We summarize the current literature to evaluate the rationale behind this novel training and to explore the mechanisms underlying MI and its impact on language performance.
... Such constant computation from a tabula rasa would require a constant expenditure of energy. Furthermore, such a conceptualisation would contradict the existence of anticipatory and predictive mechanisms, which have been found to play a crucial role in optimising the processing of the external world (see Kawato et al., 1987; Miall & Wolpert, 1996; Wolpert et al., 2003). ...
Thesis
The peripersonal space (PPS) has been defined as the action space immediately surrounding the body where individuals can easily interact with objects and people. PPS acts as a perception-action interface that allows a multisensory encoding of nearby stimuli and plays a crucial role in the organisation and guiding of goal-directed or defensive actions. PPS is thought to be composed of multiple response-fields. Each response-field consists of a portion of space endowed with a given functional value that determines the most pertinent action to be potentially executed. Within this context, the aim of the present thesis was to assess whether and how social and motor factors are integrated when constructing such a functional representation of space. Specifically, I tested the general hypothesis that when motor and social factors are concurrently involved, social factors modulate the influence of motor factors on the construction of PPS. The two facets of PPS construction were examined: PPS representation (i.e., the way individuals represent their near-body space) and PPS exploitation (i.e., the way individuals act within their near-body space). Five studies were conducted in the present thesis. Study 1 showed that during a collaborative motor task with a confederate, individuals extend their PPS representation. However, they tend to avoid exploiting space when this coincides with the confederate's PPS, even when it is associated with a higher possibility of obtaining a reward following a motor action. Study 2 showed that this effect is modulated by individuals' motor involvement in the task (i.e., acting vs. observing). Studies 3, 4 and 5 focused specifically on PPS exploitation and showed, respectively, that the use of space during social interaction is modulated by the features of the final spatial target of the motor action, the availability of gaze, and the sharing of a physical space. Therefore, while Studies 1 and 2 showed that social factors modulate the effect of motor factors, Studies 3, 4 and 5 suggested that the reverse effect is also possible. These findings suggest that social and motor factors are hierarchically taken into account when individuals represent and exploit PPS, determining whether and how they prioritise a given portion of space during their interactions with the environment. In light of the present findings, and in order to offer an integrative view of PPS construction, the present thesis proposes a functional model of PPS, including three interconnected and mutually influencing layers (a perceptual priority map, a motor priority map and an action execution stage). From a wider perspective, the present thesis defends the idea that PPS construction is not stable, but computed in a specific instant as a function of the task demands, stimuli features and the physical and social context.
... Specifically, the theory proposes that the emulator's sensory predictions are actively compared to the actual sensory results of action during motor execution, and that the differences between prediction and outcome are continuously used to adjust the emulator's future predictions. Although prior work on forward models had already proposed the concept that motor learning was based on an active comparison process with sensory predictions (Jordan and Rumelhart, 1992;Miall and Wolpert, 1996), the novel contribution of MET is that the forward modeling process and the motor imagery process are one and the same, with imagery being the conscious experience of a forward model's sensory predictions. ...
Article
Full-text available
Over the past few decades, researchers have become interested in the mechanisms behind motor imagery (i.e., the mental rehearsal of action). During this time several theories of motor imagery have been proposed, offering diverging accounts of the processes responsible for motor imagery and its neural overlap with movement. In this review, we summarize the core claims of five contemporary theories of motor imagery: motor simulation theory, motor emulation theory, the motor-cognitive model, the perceptual-cognitive model, and the effects imagery model. Afterwards, we identify the key testable differences between them as well as their various points of overlap. Finally, we discuss potential future directions for theories of motor imagery.
... This view assumes that motor imagery, as a form of neural re-use (e.g., Anderson, 2010), draws upon the same neuronal networks and cognitive processes that underlie action execution itself (Jeannerod, 1994). As a potential mechanism, it has been proposed that the brain predicts, via forward models, the sensory consequences that each of its motor commands will produce, so that it can anticipate the visual, tactile, and proprioceptive sensations that will soon be registered (e.g., Miall & Wolpert, 1996; Sperry, 1950). During overt action, such predictions may allow the actor to filter out predicted sensations (e.g., Reichenbach et al., 2014) or to correct for movement errors before they happen (e.g., Desmurget & Grafton, 2000; Shadmehr et al., 2010). ...
Article
Full-text available
Overt and imagined action seem inextricably linked. Both have similar timing, activate shared brain circuits, and motor imagery influences overt action and vice versa. Motor imagery is, therefore, often assumed to recruit the same motor processes that govern action execution, and which allow one to play through or simulate actions offline. Here, we advance a very different conceptualization. Accordingly, the links between imagery and overt action do not arise because action imagery is intrinsically motoric, but because action planning is intrinsically imaginistic and occurs in terms of the perceptual effects one wants to achieve. Seen like this, the term ‘motor imagery’ is a misnomer of what is more appropriately portrayed as ‘effect imagery’. In this article, we review the long-standing arguments for effect-based accounts of action, which are often ignored in motor imagery research. We show that such views provide a straightforward account of motor imagery. We review the evidence for imagery-execution overlaps through this new lens and argue that they indeed emerge because every action we execute is planned, initiated and controlled through an imagery-like process. We highlight findings that this new view can now explain and point out open questions.
... Motor cortices, such as M2 in rats or the premotor cortex in primates, are anatomically connected to the visual cortex (Leinweber et al., 2017). Some theories propose that motor-sensory cortex projections may act as efference copies used by an internal forward model to predict the sensory consequence of the motor act (Miall and Wolpert, 1996; Wolpert and Kawato, 1998). A proposed role for these projections is the sense of agency, allowing the nervous system to discriminate when a sensory consequence is externally or self-generated (Poletti et al., 2017). ...
Article
Full-text available
The ability of an organism to voluntarily control the stimuli onset modulates perceptual and attentional functions. Since stimulus encoding is an essential component of working memory (WM), we conjectured that controlling the initiation of the perceptual process would positively modulate WM. To corroborate this proposition, we tested twenty-five healthy subjects in a modified-Sternberg WM task under three stimuli presentation conditions: an automatic presentation of the stimuli, a self-initiated presentation of the stimuli (through a button press), and a self-initiated presentation with random-delay stimuli onset. Concurrently, we recorded the subjects' electroencephalographic signals during WM encoding. We found that the self-initiated condition was associated with better WM accuracy, and earlier latencies of N1, P2 and P3 evoked potential components representing visual, attentional and mental review of the stimuli processes, respectively. Our work demonstrates that self-initiated stimuli enhance WM performance and accelerate early visual and attentional processes deployed during WM encoding. We also found that self-initiated stimuli correlate with an increased attentional state compared to the other two conditions, suggesting a role for temporal stimuli predictability. Our study highlights the relevance of self-controlled stimulus onset for the sensory, attentional and memory-updating processes involved in WM.
... As a first step, we tested the extent to which the timing of motor adjustments during manual interception might reflect compensations for predicted performance errors rather than responses to actual performance errors (Fishbach et al. 2007). Forward models of sensorimotor control posit the existence of neural mechanisms that predict future target and hand trajectories, which would facilitate corrections to ongoing movements based on expected outcomes (Desmurget and Grafton 2000;Kawato 1999;Wolpert and Miall 1996). If such models are correct, error corrections in our experiment would be detected early in the hand trajectory, and participants would correct ongoing movements before the target would have been reached if the initial hand movement had been successful, i.e., well before sensory confirmation of a successful target capture would have been received. ...
Article
Full-text available
We examined a key aspect of sensorimotor skill: the capability to correct performance errors that arise mid-movement. Participants grasped the handle of a robot that imposed a nominal viscous resistance to hand movement. They watched a target move pseudo-randomly just above the horizontal plane of hand motion and initiated quick interception movements when cued. On some trials, the robot's viscosity or the target's speed changed without warning coincident with the GO cue. We fit a sum-of-Gaussians model to mechanical power measured at the handle to determine the number, magnitude, and relative timing of submovements occurring in each interception attempt. When a single submovement successfully intercepted the target, capture times averaged 410 ms. Sometimes, two or more submovements were required. Initial error corrections typically occurred before feedback could indicate the target had been captured or missed. Error corrections occurred sooner after movement onset in response to mechanical viscosity increases (at 154 ms) than to unprovoked errors on control trials (215 ms). Corrections occurred later (272 ms) in response to viscosity decreases. The latency of corrections for target speed changes did not differ from those in control trials. Remarkably, these early error corrections accommodated the altered testing conditions; speed/viscosity increases elicited more vigorous corrections than in control trials with unprovoked errors; speed/viscosity decreases elicited less vigorous corrections. These results suggest that the brain monitors and predicts the outcome of evolving movements, rapidly infers causes of mid-movement errors, and plans and executes corrections—all within 300 ms of movement onset.
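
A simplified sketch of the sum-of-Gaussians decomposition described above (synthetic data, a fixed two-submovement model, and arbitrary parameter values are assumptions; the study presumably also selected the number of components from the data): each Gaussian's amplitude, centre, and width stand in for a submovement's magnitude, timing, and duration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simplified submovement decomposition: fit a two-Gaussian model to a synthetic
# power profile; the fitted parameters give each submovement's magnitude,
# timing and width.

def two_gaussians(t, a1, t1, w1, a2, t2, w2):
    return (a1 * np.exp(-((t - t1) ** 2) / (2 * w1 ** 2)) +
            a2 * np.exp(-((t - t2) ** 2) / (2 * w2 ** 2)))

t = np.linspace(0.0, 0.8, 400)                          # an 800 ms interception attempt
power = two_gaussians(t, 1.0, 0.20, 0.05, 0.4, 0.45, 0.06)
power += 0.02 * np.random.default_rng(0).standard_normal(t.size)  # measurement noise

p0 = [1.0, 0.2, 0.05, 0.5, 0.4, 0.05]                   # initial guesses
params, _ = curve_fit(two_gaussians, t, power, p0=p0)
print(params)                                           # recovered submovement parameters
```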
... Thus, when one intends to imagine an action, the intended action outcome is fed into the inverse models, which specify the corresponding motor commands to execute the movement. During imagery, it is necessary to inhibit those motor commands to prevent actual movements (Guillot et al., 2012; Rieger et al., 2017), while a copy of the motor commands is sent to the forward model, which would then use this information to predict both the state of our body had we actually carried out the movement and the sensory consequences that the movement is likely to generate (sensory predictions) (Blakemore & Sirigu, 2003; Davidson & Wolpert, 2005; Desmurget & Grafton, 2000; Grush, 2004; Johansson & Flanagan, 2009; Wolpert & Flanagan, 2001; Wolpert & Miall, 1996). One crucial aspect of the viability of a forward model as a means to compensate for time delays in the online control of actions pertains to its ability to provide an internal trace of the sensory signals associated with the executed movements, a function that has been a prominent part of many theories of motor control and learning. ...
Article
Full-text available
Imagination can appeal to all our senses and may, therefore, manifest in very different qualities (e.g., visual, tactile, proprioceptive, or kinesthetic). One line of research addresses action imagery that refers to a process by which people imagine the execution of an action without actual body movements. In action imagery, visual and kinesthetic aspects of the imagined action are particularly important. However, other sensory modalities may also play a role. The purpose of the paper will be to address issues that include: (i) the creation of an action image, (ii) how the brain generates images of movements and actions, (iii) the richness and vividness of action images. We will further address possible causes that determine the sensory impression of an action image, like task specificity, instruction and experience. In the end, we will outline open questions and future directions.
... During voluntary movement, our sensorimotor loop suffers from ubiquitous delays due to sensory transduction, neural conduction and brain processing of the sensory feedback (Wolpert and Flanagan, 2001; Franklin and Wolpert, 2011). These delays have a non-negligible magnitude, sometimes exceeding ~100 ms (Scott, 2016), and their impact can be detrimental, destabilizing our motor output and leading to oscillatory movements when motor errors are corrected rapidly (Miall and Wolpert, 1996; Kawato, 1999). To compensate for the delayed feedback, the brain uses a forward model in combination with a copy of the motor command (efference copy) to predict the sensory consequences of the movement and thus, rely less on the delayed input (Shadmehr et al., 2010; McNamee and Wolpert, 2019). ...
Preprint
Full-text available
Intrinsic delays in sensory feedback can be detrimental for motor control. As a compensation strategy, the brain predicts the sensory consequences of movement via a forward model on the basis of a copy of the motor command. Using these predictions, the brain attenuates the somatosensory reafference to facilitate the processing of exafferent information. Theoretically, this predictive attenuation gets disrupted by (even minimal) temporal errors between the predicted and the actual reafference, but direct evidence for such disruption is lacking since previous neuroimaging studies contrasted conditions of nondelayed reafferent input with exafferent input. Here, we combined psychophysics with functional magnetic resonance imaging to test whether subtle perturbations in the timing of somatosensory reafference disrupt its predictive processing. Twenty-eight participants generated touches on their left index finger by tapping a sensor with their right index finger. The touches on the left index finger were delivered at the time of the two fingers’ contact or with a 100 ms delay. We found that such brief temporal perturbations disrupted the attenuation of the somatosensory reafference both at the perceptual and neural level, leading to greater somatosensory and cerebellar responses and weaker somatosensory connectivity with the cerebellum proportionally to perceptual changes. Moreover, we observed increased connectivity of the supplementary motor area with the cerebellum during the perturbations. We interpret these effects as the failure of the forward model to predictively attenuate the delayed somatosensory reafference and the return of the prediction error to the motor centers, respectively. Significance statement: Our brain receives the somatosensory feedback of our movements with delay. To counteract these delays, motor control theories postulate that the brain predicts the timing of the somatosensory consequences of our movements and attenuates sensations received at that timing. This makes a self-generated touch feel weaker than an identical external touch. However, how subtle temporal errors between the predicted and the actual somatosensory feedback perturb this predictive attenuation remains unknown. We show that such errors make the otherwise attenuated touch feel stronger, elicit stronger somatosensory responses, weaken the cerebellar connectivity with somatosensory areas, and increase it with motor areas. These findings show that motor and cerebellar areas are fundamental in forming temporal predictions about the sensory consequences of our movements.
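
A toy numerical illustration of the interpretation above (this is an assumption for illustration, not the authors' model): predictive attenuation is applied as a subtraction whose weight falls off with the temporal mismatch between the predicted and actual touch, so a 100 ms delay largely abolishes the attenuation.

```python
import numpy as np

# Toy model (an assumption for illustration): the prediction is subtracted with a
# weight that decays as the actual touch deviates in time from the predicted one.

def perceived_intensity(actual, predicted, dt_ms, tolerance_ms=30.0):
    """Attenuate the reafference only when its timing matches the prediction."""
    timing_gain = np.exp(-0.5 * (dt_ms / tolerance_ms) ** 2)
    return actual - timing_gain * predicted

print(perceived_intensity(1.0, 0.8, dt_ms=0.0))     # synchronous touch: strongly attenuated
print(perceived_intensity(1.0, 0.8, dt_ms=100.0))   # 100 ms delay: attenuation largely lost
```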
... In the framework of hierarchical predictive control, as described in Grandchamp et al. (2019) or Pickering and Garrod (2013), each stage of speech production generates efference copies of its desired outputs (phonemes at the output of phonological encoding, gestural scores at the output of motor planning, motor commands at the output of motor programming). These efference copies are used to generate predicted signals, which are evaluated by the perceptual processes of each level to adjust, or not, the production step in progress (Miall, Weir, Wolpert, & Stein, 1993; Miall & Wolpert, 1996; Wolpert & Kawato, 1998; Jeannerod, 2001). At each level, the comparison between the desired element and the predicted element thus allows an adjustment before the action is produced. ...
Thesis
This thesis work investigates Speech Sound Disorders (SSD). SSD are defined as a delay in speech sound development affecting the acceptability and intelligibility of speech. SSD are the most common communication disorder in the pediatric population. Children with SSD are at high risk for later academic and social difficulties. Despite the high prevalence and possible long-term consequences, there are currently few assessment tools available to diagnose French-speaking children with SSD and characterize the disorder. The theoretical part of this work highlights the lack of diagnostic tools. This work intends to create a speech sound assessment tool, based on a psycholinguistic model, to diagnose children with SSD. The assessment tool, named EULALIES, includes five tasks, each involving different levels of the psycholinguistic model: (1) a lexicality judgment task, (2) a picture-naming task, (3) a nonword repetition task, (4) a diadochokinetic task, and (5) a syllable repetition task. Data were collected from 119 typical children and 9 children with SSD. The first part of the results focuses specifically on the nonword repetition task and describes the validity of this task. Our results highlight the fact that nonword repetition performance depends on children's age, phonological short-term memory skills, inclusion of a real word in the nonword, syllable structure, and length of the pseudoword. Data collected show that the task is not sensitive to sociolinguistic factors such as socioeconomic status or linguistic status, which is what we were looking for from a clinical perspective. The second part of the results addresses the differential diagnosis between childhood apraxia of speech and phonological disorder, both of which are subtypes of SSD. The assessment tool reveals specific markers for childhood apraxia of speech, such as a low diadochokinetic rate or errors on vowels. Contrary to what is described for English, some markers seem to be less relevant for French-speaking children. This is the case for schwa epenthesis. At the end of this work, we argue that the assessment of children with SSD should be based on the analysis of error patterns produced by the child as well as on a psycholinguistic approach. This helps to better describe the child's speech profile and to offer an adapted intervention.
... The multi-dimensional framework of prosthetic embodiment emphasizes the necessity of multisensory and volition integration among spatiotemporally coincident stimuli (Y-axis in Fig. 2). When correctly integrated, both in terms of "multisensory integration" [69] and of motor volition [32,70], the multimodal input affords a certain degree of prosthetic embodiment. The multi-dimensional framework further distinguishes between different degrees of interaction with the environment: here the X-axis in Fig. 2 should be interpreted as the level of interaction that is demanded of the subject in the study. ...
Article
Full-text available
The concept of embodiment has gained widespread popularity within prosthetics research. Embodiment has been claimed to be an indicator of the efficacy of sensory feedback and control strategies. Moreover, it has even been claimed to be necessary for prosthesis acceptance, albeit unfoundedly. Despite the popularity of the term, an actual consensus on how prosthetic embodiment should be used in an experimental framework has yet to be reached. The lack of consensus is in part due to terminological ambiguity and the lack of an exact definition of prosthetic embodiment itself. In a review published parallel to this article, we summarized the definitions of embodiment used in prosthetics literature and concluded that treating prosthetic embodiment as a combination of ownership and agency allows for embodiment to be quantified, and thus useful in translational research. Here, we review the potential mechanisms that give rise to ownership and agency considering temporal, spatial, and anatomical constraints. We then use this to propose a multi-dimensional framework where prosthetic embodiment arises within a spectrum dependent on the integration of volition and multi-sensory information as demanded by the degree of interaction with the environment. This framework allows for the different experimental paradigms on sensory feedback and prosthetic control to be placed in a common perspective. By considering that embodiment lies along a spectrum tied to the interactions with the environment, one can conclude that the embodiment of prosthetic devices should be assessed while operating in environments as close to daily life as possible for it to become relevant.
... Yet, in the face of limited research specifically on anxiety-induced endogenous resource shifts, this certainly warrants further scrutiny. A contrasting explanation may involve the impulse control phases of online control (Elliott et al., 2017) via the efference copies which store representations of the motor commands used to initiate a movement (Miall & Wolpert, 1996;Wolpert, Ghahramani, & Jordan, 1995;Wolpert & Kawato, 1998). From these representations, feed-forward models can generate expected trajectories to inform online correction. ...
Article
Full-text available
Via three experiments, we investigated heightened anxiety's effect on the offline planning and online correction of upper-limb target-directed aiming movements. In Experiment 1, the majority of task trials allowed for the voluntary distribution of offline planning and online correction to achieve task success, while a subset of cursor jump trials necessitated the use of online correction to achieve task success. Experiments 2 and 3 replicated and elaborated Experiment 1 by assessing movement-specific reinvestment propensity and manipulating the self-control resources of participants. This allowed more detailed inference of cognitive resource utilisation to tease apart the effects of conscious processing and distraction-based anxiety mechanisms. For the first time, we demonstrate that: anxiety-induced online-to-offline motor control shifts can be overridden when online correction is necessitated (i.e., in jump trials); anxiety-induced online-to-offline shifts seem to be positively predicted by conscious processing propensity; and optimal spatial efficacy of limb information-based online correction seems to require cognitive resources. We conclude that long-standing definitions of limb information-based online correction require revision, and that both conscious processing and distraction theories appear to play a role in determining the control strategies of anxiety-induced upper-limb target-directed aiming movements.
... Here, we capitalized on the unique features of the monitoring of voluntary movement: unlike perception and memory, which are primarily externally and internally generated, respectively, voluntary movements are associated with both kinds of sources of information that participants might concurrently monitor to make metacognitive judgments. Both internal efferent motor commands and external afferent signals (including vision, audition, and proprioception) (Miall & Wolpert, 1996) may be informative for metacognitive representations. We used a visuomotor metacognitive task, where externally generated information (visual and proprioceptive) and internally generated information (motor commands) are available for monitoring, as well as a motor task, where visual information is not available and monitoring is therefore less reliant on externally generated information. ...
Article
Full-text available
It is still debated whether metacognition, or the ability to monitor our own mental states, relies on processes that are “domain-general” (a single set of processes can account for the monitoring of any mental process) or “domain-specific” (metacognition is accomplished by a collection of multiple monitoring modules, one for each cognitive domain). It has been speculated that two broad categories of metacognitive processes may exist: those that monitor primarily externally generated versus those that monitor primarily internally generated information. To test this proposed division, we measured metacognitive performance (using m-ratio, a signal detection theoretical measure) in four tasks that could be ranked along an internal-external axis of the source of information, namely memory, motor, visuomotor, and visual tasks. We found a positive correlation between m-ratios in the visuomotor and motor tasks, but no correlations between the visual and visuomotor tasks, between the visual and memory tasks, or between the motor and memory tasks. This pattern of correlations does not support the grouping of domains based on whether the source of information is primarily internal or external. We suggest that other groupings could be more reflective of the nature of metacognition and discuss the need to consider other non-domain task-features when using correlations as a way to test the underlying shared processes between domains.
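
For reference, a small sketch of the signal detection quantities behind the m-ratio mentioned above (the hit/false-alarm rates and the meta-d' value are made-up numbers; estimating meta-d' properly requires a dedicated fit to confidence data):

```python
from scipy.stats import norm

# d' from hit and false-alarm rates (made-up numbers); meta-d' is taken as given
# here because its estimation requires a dedicated model fit to confidence data.

def d_prime(hit_rate, false_alarm_rate):
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

d = d_prime(hit_rate=0.80, false_alarm_rate=0.30)   # first-order sensitivity
meta_d = 1.10                                       # assumed output of a meta-d' fit
m_ratio = meta_d / d                                # metacognitive efficiency (meta-d'/d')
print(d, m_ratio)
```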
... Negative feedback cannot perfectly correct for mismatches between the desired and the executed movement. Biophysical delays between a movement mismatch, its sensation, and its correction can destabilise the movement [11,43,44,58]. As an example, imagine a novice on a tightrope who corrects a leftward sway with a delayed rightward movement and, because the correction arrives too late, overbalances. ...
Preprint
Full-text available
Summary The cerebellum has a distinctive circuit architecture comprising the majority of neurons in the brain. Marr-Albus theory and more recent extensions 1–4 demonstrate the utility of this architecture for particular types of learning tasks related to the separation of input patterns. However, it is unclear how the circuit architecture facilitates known functional roles of the cerebellum. In particular, the cerebellum is critically involved in refining motor plans even during the ongoing execution of the associated movement. Why would a cerebellar-like circuit architecture be effective at this type of ‘online’ learning problem? We build a mathematical theory, reinforced with computer simulations, that captures some of the particular difficulties associated with online learning tasks. For instance, synaptic plasticity responsible for learning during a movement only has access to a narrow time window of recent movement errors, whereas it ideally depends upon the entire trajectory of errors, from the movement’s start to its finish. The theory then demonstrates how the distinctive input expansion in the cerebellum, where mossy fibre signals are recoded in a much larger number of granule cells, mitigates the impact of such difficulties. As such, the energy cost of this large, seemingly redundantly connected circuit might be an inevitable cost of precise, fast, motor learning.
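The core claim, that recoding mossy-fibre inputs into a much larger granule-cell population eases learning from a narrow window of recent errors, can be illustrated with a minimal sketch (not the authors' model): a sparse random expansion with a threshold nonlinearity, followed by an online delta-rule readout that only ever sees the current error.

```python
import numpy as np

rng = np.random.default_rng(1)
n_mossy, n_granule, T = 10, 500, 200

# Time-varying mossy fibre inputs during a simulated movement.
t = np.linspace(0, 1, T)
mossy = np.stack([np.sin(2 * np.pi * (k + 1) * t) for k in range(n_mossy)])

# Expansion recoding: sparse random projection followed by a threshold
# nonlinearity, giving a high-dimensional granule-cell representation.
J = rng.normal(0, 1, (n_granule, n_mossy)) * (rng.random((n_granule, n_mossy)) < 0.3)
granule = np.maximum(J @ mossy - 0.5, 0.0)       # threshold-linear recoding

# Target "corrective" signal the Purkinje-cell-like readout should learn.
target = np.sin(2 * np.pi * 3 * t) * np.exp(-3 * t)

# Online learning: at each time step the weight update only sees the
# current error, mimicking the narrow temporal credit window.
w = np.zeros(n_granule)
lr = 1e-4
for epoch in range(50):
    for i in range(T):
        y = w @ granule[:, i]
        w += lr * (target[i] - y) * granule[:, i]

mse = np.mean((w @ granule - target) ** 2)
print(f"final mean squared error: {mse:.4f}")
```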
... One explanation of the sensory attenuation effect is that self-caused sensory stimuli are predicted internally using a copy of the relevant neural outputs, and then subtracted out of the sensory inputs (Wolpert et al., 1995; Miall and Wolpert, 1996; Roussel et al., 2013; Klaffehn et al., 2019). This may well be the case, but even in a model where this predict-and-subtract mechanism would be a perfect solution, our GA instead found other viable alternatives. ...
Article
Full-text available
Living systems process sensory data to facilitate adaptive behavior. A given sensor can be stimulated as the result of internally driven activity, or by purely external (environmental) sources. It is clear that these inputs are processed differently—have you ever tried tickling yourself? Self-caused stimuli have been shown to be attenuated compared to externally caused stimuli. A classical explanation of this effect is that when the brain sends a signal that would result in motor activity, it uses a copy of that signal to predict the sensory consequences of the resulting motor activity. The predicted sensory input is then subtracted from the actual sensory input, resulting in attenuation of the stimuli. To critically evaluate the utility of this predictive approach for coping with self-caused stimuli, and investigate when non-predictive solutions may be viable, we implement a computational model of a simple embodied system with self-caused sensorimotor dynamics, and use a genetic algorithm to explore the solutions possible in this model. We find that in this simple system the solutions that emerge modify their behavior to shape or avoid self-caused sensory inputs, rather than predicting these self-caused inputs and filtering them out. In some cases, solutions take advantage of the presence of these self-caused inputs. The existence of these non-predictive solutions demonstrates that embodiment provides possibilities for coping with self-caused sensory interference without the need for an internal, predictive model.
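As a point of comparison for the non-predictive solutions the authors describe, here is a minimal sketch of the classical predict-and-subtract account: an efference copy is passed through an assumed forward model and the prediction is subtracted from the sensory input, attenuating self-caused stimulation while leaving externally caused events intact. The linear forward model and signal parameters are purely illustrative.

```python
import numpy as np

def forward_model(motor_command, gain=1.0):
    """Predicted sensory consequence of a motor command (efference copy).
    A perfect linear model is assumed purely for illustration."""
    return gain * motor_command

rng = np.random.default_rng(2)
T = 100
motor_command = np.sin(np.linspace(0, 2 * np.pi, T))      # self-generated action
external_touch = np.zeros(T)
external_touch[60:65] = 1.0                                # externally caused stimulus

# Actual sensory input = reafference (self-caused) + exafference (external) + noise.
sensory_input = motor_command + external_touch + rng.normal(0, 0.05, T)

# Predict-and-subtract: the efference-copy prediction is removed from the input,
# attenuating self-caused stimulation while leaving external events visible.
residual = sensory_input - forward_model(motor_command)

print("self-caused power before/after subtraction:",
      round(np.mean(motor_command**2), 3),
      round(np.mean((residual - external_touch)**2), 3))
```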
... In this way, deeply informed predictions drive our (motor and autonomic) behavior such that top-down predictions are fulfilled. For further reading, we refer the reader to the rich literature on the role of forward and inverse inference models in the motor system (e.g., see Wolpert, 1996; Bhushan & Shadmehr, 1998). ...
Preprint
Full-text available
The perception of body signals plays a crucial role in cognition and emotion and may lead to catastrophic outcomes when it becomes dysfunctional. To characterize these mechanisms and intervene on interoception for either diagnostic or treatment purposes, a mounting body of research is concerned with interventions on interoceptive channels such as respiration, cardioception, or thermoception. However, we are still lacking a mechanistic understanding of the underlying psychophysiology. For example, interoceptive signals are often both the cause and the consequence of distress in various mental disorders, and it is still unclear how interoceptive signals bind with exteroceptive cues. In this article, we present existing technologies for manipulating interoception and review their clinical potential in light of the predictive processing framework describing interoception as a process of minimization of prediction errors. We distinguish between three kinds of stimuli: artificial sensations that concern the direct manipulation of interoceptive signals, interoceptive illusions that manipulate contextual cues to induce a predictable drift in body perception, and emotional augmentation technologies that blend artificial sensations with contextual cues of personal significance to generate specific moods or emotions. We discuss how each technology can assess and intervene on the precision-weighting of prediction errors along the cognitive and emotional processing hierarchy and conclude by discussing the clinical relevance of interoceptive technologies in terms of diagnostic stress tests for evaluating interoceptive abilities across clinical conditions and as intervention protocols for conditions such as generalized anxiety disorders, post-traumatic stress disorders, and autism spectrum disorders.
... From this point of view, the corollary discharge (CD) would be used to anticipate the image of the post-saccadic visual scene by generating a prediction of that image, which is then compared with the actual visual scene to detect possible movements in the environment (Sommer & Wurtz, 2008a, 2008b). This mechanism can therefore be likened to a "forward model" (Miall & Wolpert, 1996), the concept of which is illustrated in Figure 3.5. In contrast to the cancellation theory, the major advance of this hypothesis is the explicit recognition that the CD (in motor coordinates) must be transformed into sensory coordinates so that it can be directly compared with the reafference (Webb, 2004). ...
Thesis
The vast majority of our daily activities (reading a book, driving a car, appreciating a work of art) are guided by active vision, that is, the constant, dynamic interaction between our visual system, which allows us to acquire a representation of the scene around us, and our oculomotor system, which allows us to shift our gaze briefly and rapidly ('saccadic eye movements') from one object to another within that scene. The interaction between these two systems is remarkable because, despite these ballistic displacements of the eye, we systematically manage to: (1) direct our gaze precisely onto the stimulus of interest despite physiological or pathological perturbations, and (2) maintain a stable visual representation of the environment despite the execution of the saccade, which could otherwise generate a blurred or unstable image because of its high velocity. This is made possible by two processes, respectively: (1) a mechanism of sensorimotor plasticity called saccadic adaptation, which ensures the continuous control of our eye movements, and (2) a predictive system that anticipates the post-saccadic visual image. The aim of this thesis was to better understand, through three studies, how predictions of our oculomotor actions shape, at least in part, our visual perception. The first study was conducted with a patient presenting a lesion of the posterior parietal cortex. It allowed us to validate two hypotheses: (1) a prediction signal of the eye movement is, under certain conditions, necessary to accurately localize a visual target after a saccade, and (2) the posterior parietal cortex plays a key role in its integration. Studies 2 and 3 were conducted with a group of healthy volunteers and with cerebellar patients, respectively. They aimed to understand how a phase of oculomotor plasticity (inducing a systematic mismatch between the predicted and actual image of the post-saccadic visual scene, which necessarily had to be corrected by the adaptation mechanism) altered our ability to localize an object precisely in space. The results showed that the oculomotor correction of this mismatch was effective in healthy subjects and, in return, produced a perceptual localization bias. By contrast, the cerebellar lesion impaired the patients' ability to correct this mismatch, which allowed them to maintain accurate localization judgments. Finally, two patients showed a dissociation between adaptation capacity and spatial localization performance. Taken together, these data suggest that the cerebellum plays a key role not only in motor functions but also in transmitting predictive signals to the cerebral cortex for visuospatial perception, and that these two functions are subserved by distinct cerebellar territories. Beyond the fundamental aspect, the experimental tasks used in these studies could prove useful as biomarkers to identify early impairment of this predictive coding, which has been commonly documented in psychiatric contexts, particularly in schizophrenia.
... Explicit mechanisms are thought to be important early in adaptation and reflect the cognitive strategies used to adapt to experienced perturbations. Implicit mechanisms, on the other hand, develop over the course of adaptation and reflect a recalibration of an internal model, thought to represent a mapping between the desired goal and the appropriate motor response to accomplish the goal [22,23]. Explicit processes are also found to be affected by a secondary cognitive task, while implicit processes are not affected by the interference [24]. ...
Article
Full-text available
Background Complex motor tasks in immersive virtual reality using a head-mounted display (HMD-VR) have been shown to increase cognitive load and decrease motor performance compared to conventional computer screens (CS). Separately, visuomotor adaptation in HMD-VR has been shown to recruit more explicit, cognitive strategies, resulting in decreased implicit mechanisms thought to contribute to motor memory formation. However, it is unclear whether visuomotor adaptation in HMD-VR increases cognitive load and whether cognitive load is related to explicit mechanisms and long-term motor memory formation. Methods We randomized 36 healthy participants into three equal groups. All groups completed an established visuomotor adaptation task measuring explicit and implicit mechanisms, combined with a dual-task probe measuring cognitive load. Then, all groups returned after 24-h to measure retention of the overall adaptation. One group completed both training and retention tasks in CS (measuring long-term retention in a CS environment), one group completed both training and retention tasks in HMD-VR (measuring long-term retention in an HMD-VR environment), and one group completed the training task in HMD-VR and the retention task in CS (measuring context transfer from an HMD-VR environment). A Generalized Linear Mixed-Effect Model (GLMM) was used to compare cognitive load between CS and HMD-VR during visuomotor adaptation, t-tests were used to compare overall adaptation and explicit and implicit mechanisms between CS and HMD-VR training environments, and ANOVAs were used to compare group differences in long-term retention and context transfer. Results Cognitive load was found to be greater in HMD-VR than in CS. This increased cognitive load was related to decreased use of explicit, cognitive mechanisms early in adaptation. Moreover, increased cognitive load was also related to decreased long-term motor memory formation. Finally, training in HMD-VR resulted in decreased long-term retention and context transfer. Conclusions Our findings show that cognitive load increases in HMD-VR and relates to explicit learning and long-term motor memory formation during motor learning. Future studies should examine what factors cause increased cognitive load in HMD-VR motor learning and whether this impacts HMD-VR training and long-term retention in clinical populations.
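The mixed-effects logic of the analysis, cognitive load probed at the trial level with participants as a grouping factor, can be sketched as follows. This uses a Gaussian linear mixed model from statsmodels rather than the GLMM reported in the paper, and all column names, effect sizes, and simulated data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data: probe reaction time as a proxy for cognitive
# load, with training environment as a fixed effect and participant as a
# random intercept. Column names and the simulated effect are illustrative only.
rng = np.random.default_rng(3)
n_subj, n_trials = 24, 40
rows = []
for s in range(n_subj):
    env = "HMD-VR" if s % 2 else "CS"
    subj_offset = rng.normal(0, 40)
    for _ in range(n_trials):
        rt = 450 + (60 if env == "HMD-VR" else 0) + subj_offset + rng.normal(0, 50)
        rows.append({"subject": s, "environment": env, "probe_rt": rt})
df = pd.DataFrame(rows)

# Linear mixed model (the study reports a GLMM; a Gaussian mixed model is
# used here only to illustrate the fixed/random-effects structure).
model = smf.mixedlm("probe_rt ~ environment", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```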
Chapter
Der vorliegende Beitrag gibt einen Überblick über die aktuellen Motoriktheorien. Grundlegende Verfahren der Bewegungskontrolle wie Steuerung und Regelung sowie präskriptive und emergente Ansätze werden vorgestellt und überführt in die aktuelle Theorie der internen Modelle. Im Anschluss wird erläutert, wie sich durch motorische Adaptation und durch motorisches Lernen die Kontrollmechanismen auf verschiedenen Zeitskalen verändern. Abschließend wird der Beitrag von expliziten und impliziten Prozessen auf die Motorik thematisiert.
Preprint
Full-text available
Video games present a unique opportunity to study motor skill. First-person shooter (FPS) games have particular utility because they require visually-guided hand movements that are similar to widely studied planar reaching tasks. However, there is a need to ensure the tasks are equivalent if FPS games are to yield their potential as a powerful scientific tool for investigating sensorimotor control. Specifically, research is needed to ensure that differences in visual feedback of a movement do not affect motor learning between the two contexts. In traditional tasks, a movement will translate a cursor across a static background, whereas FPS games use movements to pan and tilt the view of the environment. To this end, we designed an online experiment where participants used their mouse or trackpad to shoot targets in both contexts. Kinematic analysis showed player movements were nearly identical between conditions, with highly correlated spatial and temporal metrics. This similarity suggests a shared internal model based on comparing predicted and observed displacement vectors, rather than primary sensory feedback. A second experiment, modelled on FPS-style aim-trainer games, found that movements exhibited classic invariant features described within the sensorimotor literature. We found that two measures of mouse control, the mean and variability of the distance of the primary sub-movement, were key predictors of overall task success. More broadly, these results show that FPS games offer a novel, engaging, and compelling environment to study sensorimotor skill, providing the same precise kinematic metrics as traditional planar reaching tasks. Significance statement Sensorimotor control underpins human behaviour and is a predictor of education, health, and socioemotional wellbeing. First-person shooter (FPS) games hold promise for studying sensorimotor control at scale, but the visual feedback provided differs from traditional laboratory tasks. There is a need to ensure they provide measures that relate to traditional tasks. We designed an experiment where the visual contingency of movements could be varied whilst participants shot targets. Participants' movements were similar between contexts, suggesting the use of a common internal model despite the sensory differences. A second experiment observed canonical learning patterns with practice and found that two measures of mouse control strongly predicted overall performance. Our results highlight the opportunity offered by FPS games to study situated skilled behaviour.
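A rough sketch of one of the kinematic measures mentioned, the distance of the primary sub-movement, extracted from a one-dimensional position trace. The operational definition (displacement up to the first local minimum of speed after peak speed) and the toy trajectory are assumptions for illustration; published pipelines differ in detail.

```python
import numpy as np

def primary_submovement_distance(position, dt):
    """Distance covered by the primary sub-movement, defined here as the
    displacement up to the first local minimum of speed after peak speed.
    Definitions vary across studies; this one is illustrative."""
    velocity = np.gradient(position, dt)
    speed = np.abs(velocity)
    peak = int(np.argmax(speed))
    # first index after the peak where speed stops decreasing
    for i in range(peak + 1, len(speed) - 1):
        if speed[i] <= speed[i + 1]:
            return abs(position[i] - position[0])
    return abs(position[-1] - position[0])

# Toy trajectory: a large primary movement plus a small corrective sub-movement.
dt = 0.01
t = np.arange(0, 1, dt)
primary = 8 / (1 + np.exp(-20 * (t - 0.3)))          # main movement (arbitrary units)
correction = 1.5 / (1 + np.exp(-40 * (t - 0.75)))    # small corrective sub-movement
position = primary + correction
print(f"primary sub-movement distance: "
      f"{primary_submovement_distance(position, dt):.2f} of {position[-1] - position[0]:.2f} total")
```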
Chapter
The cerebellum is a unique structure that is densely connected to both motor and nonmotor regions of the brain and plays a critical role in coordinating and adapting movements. The most debilitating effect of damage to the cerebellum is resultant ataxia. Ataxia, derived from the Greek word meaning “lack of order,” is a nonspecific term that refers to uncoordinated movements. Ataxia may also be used as a medical diagnosis. In this chapter, we will focus on this hallmark feature of cerebellar damage, which is incoordination of movements without overt muscle weakness, and we will discuss the potential benefits of rehabilitation and the importance of optimizing sensorial and motor experiences to promote motor learning.
Article
Monitoring the motor performance of others, including the correctness of their actions, is crucial for human behavior. However, while performance (and error) monitoring of one's own actions has been studied extensively at the neurophysiological level, the corresponding studies on monitoring of others' errors are scarce, especially for ecological actions. Moreover, the role of the context of the observed action has not been sufficiently explored. To fill this gap, the present study investigated electroencephalographic (EEG) indices of error monitoring during observation of images of interrupted reach-to-grasp actions in social (an object held in another person's hand) and non-social (an object placed on a table) contexts. Analyses in the time and time-frequency domains showed that, at the level of conscious error awareness, there were no effects of the social context (observed error positivity was present for erroneous actions in both contexts). However, the effects of the context were present at the level of hand image processing: observing erroneous actions in the non-social context was related to larger occipito-temporal N1 and theta activity, while in the social context this pattern was reversed, i.e., larger N1 and theta activity were present for the correct actions. These results suggest that, in the case of easily predictable ecological actions, action correctness is processed as early as at the level of hand image perception, since the hand posture conveys information about the action (e.g., motor intention). The social context of actions might make the correct actions more salient, possibly through the saliency of the correctly achieved common goal.
Article
Background: Transcranial direct current stimulation (TDCS) is typically applied before or during a task, for periods ranging from 5 to 30 min. Hypothesis: We hypothesise that briefer stimulation epochs synchronous with individual task actions may be more effective. Methods: In two separate experiments, we applied brief bursts of event-related anodal stimulation (erTDCS) to the cerebellum during a visuomotor adaptation task. Results: The first study demonstrated that 1 s duration erTDCS time-locked to the participants' reaching actions enhanced adaptation significantly better than sham. A close replication in the second study demonstrated 0.5 s erTDCS synchronous with the reaching actions again resulted in better adaptation than standard TDCS, significantly better than sham. Stimulation either during the inter-trial intervals between movements or after movement, during assessment of visual feedback, had no significant effect. Because short duration stimulation with rapid onset and offset is more readily perceived by the participants, we additionally show that a non-electrical vibrotactile stimulation of the scalp, presented with the same timing as the erTDCS, had no significant effect. Conclusions: We conclude that short duration, event related, anodal TDCS targeting the cerebellum enhances motor adaptation compared to the standard model. We discuss possible mechanisms of action and speculate on neural learning processes that may be involved.
Preprint
Full-text available
Visual imagery, the ability to generate visual experience in the absence of direct external stimulation, allows for the construction of rich internal experience in our mental world. Most imagery studies to date have focused on cue-induced imagery, in which the to-be-imagined contents were triggered by external cues. It has remained unclear how internal experience arises volitionally in the absence of any external cues, and whether this kind of self-generated imagery relies on a cortical network analogous to that of cue-induced imagery. Here, leveraging a novel self-generated imagery paradigm, we systematically examined the spatiotemporal dynamics of self-generated imagery, by having participants volitionally imagine one of the orientations from a learned pool; and of cue-induced imagery, by having participants imagine line orientations based on associative cues acquired previously. Using electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), in combination with multivariate encoding and decoding approaches, our results revealed largely overlapping neural signatures of cue-induced and self-generated imagery in both EEG and fMRI; yet, these neural signatures displayed substantially differential sensitivities to the two types of imagery: self-generated imagery was supported by an enhanced involvement of anterior cortex in generating and maintaining imagined contents, as evidenced by enhanced neural representations of orientations in sustained potentials in central channels in EEG, and in posterior frontal cortex in fMRI. By contrast, cue-induced imagery was supported by enhanced neural representations of orientations in alpha-band activity in posterior channels in EEG, and in early visual cortex in fMRI. These results jointly support a reverse cortical hierarchy in generating and maintaining imagery contents in self-generated versus externally-cued imagery.
Article
Full-text available
This study investigates the effects of error-based and reinforcement training on the acquisition and long-term retention of free throw accuracy in basketball. Sixty participants were divided into four groups (n = 15 per group): (i) the error-based group (sensory feedback), (ii) the reinforcement group (binary feedback indicating success or failure), (iii) the mixed group (sensory feedback followed by binary feedback), and (iv) the control group (without training). Free throw success was recorded before training (PreT), immediately after (Postd0), one day later (Postd1), and seven days later (Postd7). The error-based group, but not the reinforcement group, showed a significant immediate improvement in free throw accuracy (PreT vs Postd0). Interestingly, over time (Postd0 vs Postd1 vs Postd7), the reinforcement group significantly improved its accuracy, while the accuracy of the error-based group decreased, returning to the PreT level (PreT vs Postd7). The mixed group showed the advantage of both training methods, i.e., fast acquisition and long-term retention. Error-based learning leads to better acquisition, while reinforcement learning leads to better retention. Therefore, the combination of both types of learning is more efficient for both acquisition and retention processes. These findings provide new insight into the acquisition and retention of a fundamental basketball skill, free throw shooting.
Article
Full-text available
Navigating through an environment requires knowledge about one’s direction of self-motion (heading) and traveled distance. Behavioral studies showed that human participants can actively reproduce a previously observed travel distance purely based on visual information. Here, we employed electroencephalography (EEG) to investigate the underlying neural processes. We measured, in human observers, event-related potentials (ERPs) during visually simulated straight-forward self-motion across a ground plane. The participants’ task was to reproduce (active condition) double the distance of a previously seen self-displacement (passive condition) using a gamepad. We recorded the trajectories of self-motion during the active condition and played them back to the participants in a third set of trials (replay condition). We analyzed EEG activity separately for four electrode clusters: frontal (F), central (C), parietal (P), and occipital (O). When aligned to self-motion onset or offset, response modulation of the ERPs was stronger, and several ERP components had different latencies in the passive as compared with the active condition. This result is in line with the concept of predictive coding, which implies modified neural activation for self-induced versus externally induced sensory stimulation. We aligned our data also to the times when subjects passed the (objective) single distance d_obj and the (subjective) single distance d_sub. Remarkably, wavelet-based time-frequency analyses revealed enhanced theta-band activation for the F, P, and O clusters shortly before passing d_sub. This enhanced activation could be indicative of a navigation-related representation of subjective distance. More generally, our study design allows subjective perception to be investigated without interference from the neural activation associated with the required response action.
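A simplified illustration of the final analysis step, estimating theta-band (4-7 Hz) power around the time the subjective distance d_sub is passed. The sketch uses a band-pass filter plus Hilbert transform rather than the wavelet decomposition reported in the study, and the sampling rate, epoch, and simulated burst are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500                                   # sampling rate in Hz (assumed)
t = np.arange(-1.0, 1.0, 1 / fs)           # time relative to passing d_sub
rng = np.random.default_rng(4)

# Toy single-trial EEG: background noise plus a theta burst just before t = 0.
eeg = rng.normal(0, 1, t.size)
burst = (t > -0.4) & (t < 0.0)
eeg[burst] += 2.0 * np.sin(2 * np.pi * 6 * t[burst])

# Band-pass filter in the theta band (4-7 Hz) and take the analytic amplitude.
b, a = butter(4, [4, 7], btype="bandpass", fs=fs)
theta = filtfilt(b, a, eeg)
theta_power = np.abs(hilbert(theta)) ** 2

pre = theta_power[(t > -0.4) & (t < 0.0)].mean()
baseline = theta_power[(t > -1.0) & (t < -0.6)].mean()
print(f"theta power before d_sub vs baseline: {pre:.2f} vs {baseline:.2f}")
```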
Article
Full-text available
The importance of action–perception loops necessitates efficient computations linking motor and sensory systems. Corollary discharge (CD), a concept in motor-to-sensory transformation, has been proposed to predict the sensory consequences of actions for efficient motor and cognitive control. The predictive computation has been assumed to be realized by inhibiting sensory reafference when actions are executed. Continuous control throughout the course of an action demands that this inhibitory function operate ubiquitously on all potential reafference when sensory consequences are not available before execution. However, the temporal and functional characteristics of CD are unclear. When does CD begin to operate? To what extent does CD inhibit sensory processes? How is the inhibitory function implemented in neural computation? Using a delayed articulation paradigm with three types of auditory probes (speech, nonspeech, and nonhuman sounds) in an electroencephalography experiment with 20 human participants (7 males), we found that preparing to speak without knowing what to say (general preparation) suppressed neural responses to each type of auditory probe, suggesting a generic inhibitory function of CD in motor intention. Moreover, power and phase coherence in low-frequency bands (1–8 Hz) were both suppressed, indicating that inhibition was mediated by dampening response amplitude and adding temporal variance to sensory processes. Furthermore, inhibition was stronger for sounds that humans can produce than for nonhuman sounds, hinting that the generic inhibitory function of CD is regulated by established motor–sensory associations. These results suggest a functional and temporal granularity of corollary discharge that mediates multifaceted computations in motor and cognitive control.
Article
Full-text available
The nervous system is sensitive to statistical regularities of the external world and forms internal models of these regularities to predict environmental dynamics. Given the inherently social nature of human behavior, being capable of building reliable predictive models of others' actions may be essential for successful interaction. While social prediction might seem to be a daunting task, the study of human motor control has accumulated ample evidence that our movements follow a series of kinematic invariants, which can be used by observers to reduce their uncertainty during social exchanges. Here, we provide an overview of the most salient regularities that shape biological motion, examine the role of these invariants in recognizing others' actions, and speculate that anchoring socially-relevant perceptual decisions to such kinematic invariants provides a key computational advantage for inferring conspecifics' goals and intentions.
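One classic example of such a kinematic invariant, not singled out in the abstract but widely studied, is the two-thirds power law relating movement speed to path curvature. The sketch below constructs an elliptical trajectory whose speed obeys the law and then recovers the expected exponent; all parameters are illustrative.

```python
import numpy as np

# Generate an elliptical hand path whose speed obeys the two-thirds power law:
# tangential velocity v = K * kappa^(-1/3) (equivalently, angular velocity
# A = K * C^(2/3), the form in which the law is usually stated).
K = 5.0
a_ax, b_ax = 0.15, 0.08                       # ellipse semi-axes in metres
phi = np.linspace(0, 2 * np.pi, 2000)
x, y = a_ax * np.cos(phi), b_ax * np.sin(phi)

# Curvature of the ellipse as a function of the parameter phi.
dx, dy = np.gradient(x, phi), np.gradient(y, phi)
ddx, ddy = np.gradient(dx, phi), np.gradient(dy, phi)
kappa = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

v = K * kappa ** (-1.0 / 3.0)                 # speed prescribed by the power law

# Check the invariant: log v vs log kappa should have slope -1/3.
slope = np.polyfit(np.log(kappa), np.log(v), 1)[0]
print(f"fitted exponent: {slope:.3f} (expected -1/3)")
```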
Article
The cerebellum is considered a 'learning machine' essential for time interval estimation underlying motor coordination and other behaviors. Theoretical work has proposed that the cerebellum's input recipient structure, the granule cell layer (GCL), performs pattern separation of inputs that facilitates learning in Purkinje cells (P-cells). However, the relationship between input reformatting and learning has remained debated, with roles emphasized for pattern separation features from sparsification to decorrelation. We took a novel approach by training a minimalist model of the cerebellar cortex to learn complex time-series data from time varying inputs, typical during movements. The model robustly produced temporal basis sets from these inputs, and the resultant GCL output supported better learning of temporally complex target functions than mossy fibers alone. Learning was optimized at intermediate threshold levels, supporting relatively dense granule cell activity, yet the key statistical features in GCL population activity that drove learning differed from those seen previously for classification tasks. These findings advance testable hypotheses for mechanisms of temporal basis set formation and predict that moderately dense population activity optimizes learning.
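A minimal sketch of the modelling approach described: time-varying mossy-fibre-like inputs are randomly mixed and thresholded to form a granule-cell temporal basis, a linear readout (standing in for the Purkinje cell) is fit to a temporally complex target, and the activity threshold is varied to probe how population density affects learning. The input signals, network sizes, and ridge penalty are assumptions, not the authors' parameters.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_granule = 300, 400
t = np.linspace(0, 1, T)

# Time-varying mossy fibre inputs (slow ramps and oscillations, as during movement).
mossy = np.stack([np.sin(2 * np.pi * f * t + rng.uniform(0, np.pi)) for f in (1, 1.5, 2, 3)]
                 + [t ** p for p in (1, 2)] + [np.ones(T), np.exp(-2 * t)])

def gcl_basis(mossy, threshold):
    """Granule-cell layer: random mixing of mossy fibres followed by a
    threshold-linear nonlinearity, producing a temporal basis set."""
    W = rng.normal(0, 1, (n_granule, mossy.shape[0]))
    return np.maximum(W @ mossy - threshold, 0.0)

# Temporally complex target function for the Purkinje-cell-like readout.
target = np.sin(2 * np.pi * 4 * t) * t * np.exp(-t)

for threshold in (0.0, 1.0, 2.5):
    G = gcl_basis(mossy, threshold)
    # least-squares readout with a small ridge penalty
    w = np.linalg.solve(G @ G.T + 1e-3 * np.eye(n_granule), G @ target)
    err = np.mean((w @ G - target) ** 2)
    print(f"threshold {threshold:.1f}: mse {err:.5f}, "
          f"fraction active {np.mean(G > 0):.2f}")
```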
Conference Paper
Full-text available
Almost all animal species collaborate with one another and, in many cases, can exhibit very sophisticated cooperative behaviours. Psychologists, ethologists, and neuroscientists are working to identify the generative (neural and cognitive) mechanisms underlying a broad range of cooperative behaviours. In addition, the study of animal cooperation also attracts the interest of artificial intelligence and artificial life scientists looking for ideas and inspiration to enhance the "intelligence" of their artificial organisms (software agents and physical robots). Cooperation, even complex cooperation, does not require large numbers of individuals. In nature, pairs of individuals can be observed helping each other to obtain a common benefit; this is referred to as dyadic cooperation [1]. However, it is still unclear which neurocognitive structures and functions underlie many dyadic cooperative behaviours. Simulating animal behaviour in artificial systems in general, and dyadic cooperation in particular, has a double advantage: on the one hand, it helps to better define the neurocognitive mechanisms underlying the behaviour of natural systems, and on the other, it could lead to the development of more efficient artificial agents (see, for example, [1]). In this work we present some preliminary results on the training of pairs of robots to solve a classic laboratory situation used by animal psychologists: the "Loose String Task" [1]. In this experimental setting, a pair of animals obtains food only if it acts cooperatively. There are several versions of the Loose String Task of increasing complexity, and not all animal species are able to solve the task at every level of complexity. Crows, for example, only solve the basic version, whereas chimpanzees are able to produce adequate responses at all levels of task complexity [1]. This inevitably leads to some questions: are the neurocognitive structures underlying each solution strategy in the various cooperative tasks different? If so, how do they work? What is their architecture?
Chapter
Centered on three themes, this book explores the latest research in plasticity in sensory systems, focusing on visual and auditory systems. It covers a breadth of recent scientific study within the field including research on healthy systems and diseased models of sensory processing. Topics include visual and visuomotor learning, models of how the brain codes visual information, sensory adaptations in vision and hearing as a result of partial or complete visual loss in childhood, plasticity in the adult visual system, and plasticity across the senses, as well as new techniques in vision recovery, rehabilitation, and sensory substitution of other senses when one sense is lost. This unique edited volume, the fruit of an International Conference on Plastic Vision held at York University, Toronto, will provide students and scientists with an overview of the ongoing research related to sensory plasticity and perspectives on the direction of future work in the field.
Preprint
Full-text available
Animals use afferent feedback to rapidly correct ongoing movements in the presence of a perturbation. Repeated exposure to a predictable perturbation leads to behavioural adaptation that counteracts its effects. Primary motor cortex (M1) is intimately involved in both processes, integrating inputs from various sensorimotor brain regions to update the motor output. Here, we investigate whether feedback-based motor control and motor adaptation may share a common implementation in M1 circuits. We trained a recurrent neural network to control its own output through an error feedback signal, which allowed it to recover rapidly from external perturbations. A biologically plausible plasticity rule based on this same feedback signal allowed the network to learn to counteract persistent perturbations through a trial-by-trial process, in a manner that reproduced several key aspects of human adaptation. Moreover, the resultant network activity changes were also present in neural population recordings from monkey primary motor cortex. Online movement correction and longer-term motor adaptation may thus share a common implementation in neural circuits.
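The idea that a single error feedback signal can drive both within-trial online correction and trial-by-trial adaptation can be illustrated with a deliberately simple scalar sketch (far simpler than the recurrent network in the paper): the same error partially corrects the ongoing movement and, via a delta-rule update, gradually builds a feedforward compensation for a persistent perturbation. Gains, learning rate, and the perturbation size are arbitrary.

```python
import numpy as np

n_trials = 60
perturbation = 30.0          # persistent perturbation (e.g. degrees of rotation)
feedback_gain = 0.3          # strength of within-trial online correction
learning_rate = 0.2

w = 0.0                      # adaptive feedforward component of the command
errors = []
for trial in range(n_trials):
    target = 0.0
    feedforward = w                                   # planned compensation
    outcome = feedforward + perturbation              # movement lands off-target
    # Online correction: the error feedback signal partially corrects
    # the ongoing movement within the trial.
    online_error = target - outcome
    outcome += feedback_gain * online_error
    # Trial-by-trial plasticity: the residual error updates the feedforward
    # command (a delta-rule stand-in for the paper's plasticity rule).
    final_error = target - outcome
    w += learning_rate * final_error
    errors.append(final_error)

print("error on trials 1, 10, 60:",
      [round(errors[i], 2) for i in (0, 9, -1)])
```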
Article
Full-text available
Motor control requires the adaptive updating of internal models to successfully target desired outcomes. This adaptive control can be proactive, such that imminent actions and corresponding sensorimotor programs are anticipated prior to movement, or reactive, such that online error correction is necessary to adjust to sudden changes. While substantial evidence implicates a distributed cortical network serving adaptive control when behavioral changes are required (e.g., response inhibition), the neural dynamics serving such control when the target motor commands are to remain intact are poorly understood. To address this, we developed a novel proactive-reactive cued finger tapping paradigm that was performed during magnetoencephalography by 25 healthy adults. Importantly, to ensure condition-wise differences in adaptive cueing were not attributable to changes in movement kinematics, motor selection, and planning processes were held constant despite changes in task demands. All data were imaged in the time-frequency domain using a beamformer to evaluate the effect of proactive and reactive cues on movement-related oscillations and subsequent performance. Our results indicated spectrally-specific increases in low (i.e., theta) and high (i.e., gamma) frequency oscillations during motor execution as a function of adaptive cueing. Additionally, we observed robust cross-frequency coupling of theta and gamma oscillatory power in the contralateral motor cortex and further, the strength of this theta-gamma coupling during motor execution was differentially predictive of behavioral improvements and decrements during reactive and proactive trials, respectively. These data indicate that functional oscillatory coupling may govern the adaptive control of movement in the healthy brain and importantly, may serve as effective proxies for characterizing declines in motor function in clinical populations in the future.
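The theta-gamma coupling measure referred to above is commonly quantified with a phase-amplitude modulation index. Below is a generic filter-Hilbert sketch of a mean-vector-length coupling estimate on a simulated signal; it is not the authors' beamformer-based pipeline, and the frequencies, sampling rate, and coupling strength are assumed.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

fs = 600
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(7)

# Toy motor-cortical signal: gamma amplitude modulated by theta phase, plus noise.
theta = np.sin(2 * np.pi * 5 * t)
gamma = (1 + 0.8 * theta) * np.sin(2 * np.pi * 70 * t)
signal = theta + 0.4 * gamma + rng.normal(0, 0.5, t.size)

# Theta phase and gamma amplitude via filter-Hilbert.
theta_phase = np.angle(hilbert(bandpass(signal, 4, 7, fs)))
gamma_amp = np.abs(hilbert(bandpass(signal, 60, 80, fs)))

# Mean-vector-length modulation index (a Canolty-style coupling measure).
mvl = np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase))) / np.mean(gamma_amp)
print(f"theta-gamma coupling (normalised mean vector length): {mvl:.3f}")
```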
Article
Full-text available
Since the mid-2000s, perturbation-based balance training has been gaining interest as an efficient and effective way to prevent falls in older adults. It has been suggested that this task-specific training approach may present a paradigm shift in fall prevention. In this review, we discuss key concepts and common issues and questions regarding perturbation-based balance training. In doing so, we aim to provide a comprehensive synthesis of the current evidence on the mechanisms, feasibility and efficacy of perturbation-based balance training for researchers and practitioners. We address this in two sections: “Principles and Mechanisms” and “Implementation in Practice.” In the first section, definitions, task-specificity, adaptation and retention mechanisms and the dose-response relationship are discussed. In the second section, issues related to safety, anxiety, evidence in clinical populations (e.g., Parkinson's disease, stroke), technology and training devices are discussed. Perturbation-based balance training is a promising approach to fall prevention. However, several fundamental and applied aspects of the approach need to be further investigated before it can be widely implemented in clinical practice.
Article
Full-text available
1. We recently showed that patients lacking proprioceptive input from their limbs have particular difficulty performing multijoint movements. In a pantomimed slicing gesture requiring sharp reversals in hand path direction, patients showed large hand path distortions at movement reversals because of failure to coordinate the timing of the separate reversals at the shoulder and elbow joints. We hypothesized that these reversal errors resulted from uncompensated effects of inertial interactions produced by changes in shoulder joint acceleration that were transferred to the elbow. We now test this hypothesis and examine the role of proprioceptive input by comparing the motor performance of five normal subjects with that of two patients with large-fiber sensory neuropathy. 2. Subjects were to trace each of six template lines presented randomly on a computer screen by straight overlapping out-and-back movements of the hand on a digitizing tablet. The lines originated from a common starting position but were in different directions and had different lengths. Directions and lengths were adjusted so that tracing movements would all require the same elbow excursion, whereas shoulder excursion would vary. The effects of varying interaction torques on elbow kinematics were then studied. The subject's dominant arm was supported in the horizontal plane by a low-inertia brace equipped with ball bearing joints and potentiometers under the elbow and shoulder. Hand position was monitored by a magnetic pen attached to the brace 1 cm above a digitizing tablet and could be displayed as a screen cursor. Vision of the subject's arm was blocked and the screen cursor was blanked at movement onset to prevent visual feedback during movement. Elbow joint torques were calculated from joint angle recordings and compared with electromyographic recordings of elbow joint musculature. 3. In control subjects, outward and inward paths were straight and overlapped the template lines regardless of their direction. As prescribed by the task, elbow kinematics remained the same across movement directions, whereas interaction torques varied substantially. The timing of the onsets of biceps activity and the offsets of triceps activity during elbow flexion varied systematically with direction-dependent changes in interaction torques. Controls exploited or dampened these interaction torques as needed to meet the kinematic demands of the task. 4. In contrast, the patients made characteristic errors at movement reversals that increased systematically across movement directions. These reversal errors resulted from improper timing of elbow and shoulder joint reversals.(ABSTRACT TRUNCATED AT 400 WORDS)
Article
Full-text available
We present a new supervised learning procedure for systems composed of many separate networks, each of which learns to handle a subset of the complete set of training cases. The new procedure can be viewed either as a modular version of a multilayer supervised network, or as an associative version of competitive learning. It therefore provides a new link between these two apparently different approaches. We demonstrate that the learning procedure divides up a vowel discrimination task into appropriate subtasks, each of which can be solved by a very simple expert network.
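A compact sketch of the adaptive mixture-of-experts idea on a toy piecewise-linear regression problem: a softmax gating network assigns responsibility to two linear experts, and both are updated by gradient ascent on a Gaussian mixture likelihood. Network sizes, learning rate, and the task are illustrative choices, not those of the original vowel-discrimination experiments.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy task with two regimes: y = 2x for x < 0, y = -x + 1 for x >= 0.
X = rng.uniform(-1, 1, (500, 1))
y = np.where(X[:, 0] < 0, 2 * X[:, 0], -X[:, 0] + 1)

n_experts, lr = 2, 0.05
W_exp = rng.normal(0, 0.1, (n_experts, 2))     # per-expert linear weights [slope, bias]
W_gate = rng.normal(0, 0.1, (n_experts, 2))    # gating network weights

X_aug = np.hstack([X, np.ones((X.shape[0], 1))])   # add bias term

for epoch in range(2000):
    experts = X_aug @ W_exp.T                      # (N, n_experts) expert outputs
    g = X_aug @ W_gate.T
    gate = np.exp(g - g.max(axis=1, keepdims=True))
    gate /= gate.sum(axis=1, keepdims=True)        # softmax gating
    y_hat = np.sum(gate * experts, axis=1)

    # Posterior responsibility of each expert for each case (Gaussian likelihood).
    resp = gate * np.exp(-0.5 * (y[:, None] - experts) ** 2)
    resp /= resp.sum(axis=1, keepdims=True)

    # Gradient updates: experts move toward the cases they are responsible for,
    # and the gate moves toward the posterior responsibilities.
    err = y[:, None] - experts
    W_exp += lr * (resp * err).T @ X_aug / X.shape[0]
    W_gate += lr * (resp - gate).T @ X_aug / X.shape[0]

print("mean squared error:", round(float(np.mean((y_hat - y) ** 2)), 4))
```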
Article
Full-text available
In this paper, we examine grip forces and load forces during point-to-point arm movements with objects grasped with a precision grip. We demonstrate that grip force is finely modulated with load force. Variations in load force arise from inertial forces related to movement; grip force rises as the load force increases and falls as load force decreases. The same finding is observed in vertical and horizontal movements performed at various rates. In vertical movements, maximum grip force coincides in time with maximum load force. The maxima occur early in upward and later in downward movements. In horizontal movements, where peaks in load force are observed during both the acceleratory and deceleratory phases, grip force rises at the beginning of the movement and remains high until the end. The results suggest that when moving an object with the hand the programming of grip force is an integral part of the planning process.
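The reported grip-load coupling can be summarised in a small kinematic sketch: the load force is the gravitational plus inertial force acting on the grasped object, and grip force is modelled as a baseline plus a gain times the load, so their peaks coincide during an upward movement. The object mass, movement profile, baseline, and gain are assumed values.

```python
import numpy as np

m, g = 0.3, 9.81                 # object mass (kg) and gravity (m/s^2), assumed
dt = 0.005
t = np.arange(0, 0.6, dt)

# Minimum-jerk-like vertical point-to-point movement of 25 cm.
amplitude, duration = 0.25, t[-1]
s = t / duration
z = amplitude * (10 * s**3 - 15 * s**4 + 6 * s**5)
acc = np.gradient(np.gradient(z, dt), dt)

# Load force = gravitational + inertial component; grip force is modulated
# in parallel with it (baseline plus a gain times load), as reported above.
load_force = m * (g + acc)
grip_force = 1.0 + 1.4 * load_force

i_max = int(np.argmax(load_force))
print(f"peak load {load_force[i_max]:.2f} N and grip {grip_force[i_max]:.2f} N "
      f"both occur at t = {t[i_max]:.2f} s (early in the upward movement)")
```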
Article
Full-text available
For the control of the movement of a multijoint manipulator a "mental model" which represents the geometrical properties of the arm may prove helpful. Using this model the direct and the inverse kinematic problem could be solved. Here we propose such a model which is based on a recurrent network. It is realized for the example of a three-joint manipulator working in a two-dimensional plane, i.e., for a manipulator with one extra degree of freedom. The system computes the complete set of variables, in our example the three joint angles and the two work-space coordinates of the endpoint of the manipulator. The system finds a stable state and a geometrically correct solution even if only a part of these state variables is given. Thus, the direct and the inverse kinematic problem as well as any mixed problem, including the underconstrained case, can be solved by the network.
Article
Full-text available
The aim of this study was to examine coordination control in eye and hand tracking of visual targets. We studied eye tracking of a self-moved target, and simultaneous eye and hand tracking of an external visual target moving horizontally on a screen. Predictive features of eye-hand coordination control were studied by introducing a delay (0 to 450 ms) between the Subject's (S's) hand motion and the motion of the hand-driven target on the screen. In self-moved target tracking with artificial delay, the eyes started to move in response to arm movement while the visual target was still motionless, that is before any retinal slip had been produced. The signal likely to trigger smooth pursuit in that condition must be derived from non-visual information. Candidates are efference copy and afferent signals from arm motion. When tracking an external target with the eyes and the hand, in a condition where a delay was introduced in the visual feedback loop of the hand, the Ss anticipated with the arm the movement of the target in order to compensate the delay. After a short tracking period, Ss were able to track with a low lag, or eventually to create a lead between the hand and the target. This was observed if the delay was less than 250-300 ms. For larger delays, the hand lagged the target by 250-300 ms. Ss did not completely compensate the delay and did not, on the average, correct for sudden changes in movement of the target (at the direction reversal of the trajectory). Conversely, in the whole range of studied delays (0-450 ms), the eyes were always in phase with the visual target (except during the first part of the first cycle of the movement, as seen previously). These findings are discussed in relation to a scheme in which both predictive (dynamic nature of the motion) and coordination (eye and hand movement system interactive signals) controls are included.
Article
Full-text available
The aim of this article is to describe the role of some neural mechanisms in the adaptive control of limb compliance during preplanned mechanical interaction with objects. We studied the EMG responses and the kinematic responses evoked by pseudorandom perturbations continuously applied by means of a torque motor before and during a catching task. The temporal changes of these responses were studied by means of an identification technique for time-varying systems. We found a transient reversal of EMG stretch reflex responses centered on the time of ball impact on the hand; this reversal results in a transient coactivation of antagonist muscles at both the elbow and the wrist. The kinematic responses describe the relation between torque input and position output. Thus, they provide a global measure of limb compliance. The changes in limb compliance during catching were quantified by computing error criteria either in the Cartesian coordinates of the hand or in the angular coordinates of the elbow and wrist joints. We found that only the hand compliance in Cartesian coordinates is consistently minimized around impact, in coincidence with the transient reversal of the stretch reflex responses. By contrast, the error criteria expressed in the angular coordinates of the joints have a variable time course and are not minimized around impact. It is known that hand compliance depends on both the pattern of muscle activities and the geometrical configuration of the limb. Therefore, the lack of consistent correlation between the changes in hand compliance and the changes in the geometrical configuration of the limb during catching indicates that the gating of the stretch reflex responses around impact time is based on an internal model of limb geometry.
Article
Full-text available
1. Two monkeys were trained to grasp, lift, and hold a device between the thumb and forefinger for 1 s. The device was equipped with a position transducer and strain gauges that measured the horizontal grip force and the vertical lifting or load force. On selected blocks of 20-30 trials, a force-pulse perturbation was applied to the object during static holding to simulate object slip. The animals were required to resist this displacement by stiffening the joints of their wrists and fingers to obtain a fruit juice reward. Single cells in the hand representation area of the paravermal anterior lobe of the cerebellar cortex were recorded during perturbed and unperturbed holding. If conditions permitted, the cell discharge was also recorded during lifting of objects of various weights (15, 65, or 115 g) or different surface textures (sandpaper or polished metal), and when possible the cutaneous or proprioceptive fields of the neurons were characterized with the use of natural stimulation. 2. On perturbed trials, the force pulse was always applied to the manipulandum after it had been held stationary within the position window for 750 ms. The perturbation invariably elicited a reflexlike increase of electromyographic (EMG) activity in wrist and finger muscles, resulting in a time-locked increase in grip force that peaked at a latency between 50 and 100 ms. 3. The object-slip perturbation had a powerful effect on cerebellar cortical neurons at a mean latency of 45 +/- 14 (SD) ms. Reflexlike increases or decreases in simple spike discharge occurred in 55% (53/97) of unidentified cells and 49% (21/43) of Purkinje cells recorded in the anterior paravermal and lateral cerebellar cortex. 4. The perturbation failed to evoke complex spike responses from any of the Purkinje cells examined. All the perturbation-evoked activity changes involved modulation of the simple spike discharge. The perturbations stimulated the simple-spike receptive field of most Purkinje cells recorded here, which suggests that the short-latency unit responses were triggered by afferent stimulation. Only one Purkinje cell was found with a distinct complex-spike receptive field on the thumb, but this neuron did not respond to the perturbation. It appears that simple- and complex-spike receptive fields are not always identical or even closely related. 5. The majority of Purkinje and unidentified neurons that responded to the perturbation had cutaneous receptive fields, although some had proprioceptive fields. Seventy-seven neurons were examined for peripheral receptive fields and were also tested with the perturbation.(ABSTRACT TRUNCATED AT 400 WORDS)
Article
Full-text available
1. This study addressed potential neural mechanisms of the strength increases that occur before muscle hypertrophy. In particular we examined whether such strength increases may result from training-induced changes in voluntary motor programs. We compared the maximal voluntary force production after a training program of repetitive maximal isometric muscle contractions with force output after a training program that did not involve repetitive activation of muscle; that is, after mental training. 2. Subjects trained their left hypothenar muscles for 4 wk, five sessions per week. One group produced repeated maximal isometric contractions of the abductor muscles of the fifth digit's metacarpophalangeal joint. A second group imagined producing these same, effortful isometric contractions. A third group did not train their fifth digit. Maximal abduction force, flexion/extension force and electrically evoked twitch force (abduction) of the fifth digit were measured along with maximal integrated electromyograms (EMG) of the hypothenar muscles from both hands before and after training. 3. Average abduction force of the left fifth digit increased 22% for the Imagining group and 30% for the Contraction group. The mean increase for the Control group was 3.7%. 4. The maximal abduction force of the right (untrained) fifth digit increased significantly in both the Imagining and Contraction groups after training (10 and 14%, respectively), but not in the Control group (2.3%). These results are consistent with previous studies of training effects on contralateral limbs. 5. The abduction twitch force evoked by supramaximal electrical stimulations of the ulnar nerve was unchanged in all three groups after training, consistent with an absence of muscle hypertrophy. The maximal force of the left great toe extensors for individual subjects remained unchanged after training, which argues against strength increases due to general increases in effort level. 6. Increases in abduction and flexion forces of the fifth digit were poorly correlated in subjects of both training groups. The fifth finger abduction force and the hypothenar integrated EMG increases were not well correlated in these subjects either. Together these results indicate that training-induced changes of synergist and antagonist muscle activation patterns may have contributed to force increases in some of the subjects. 7. Strength increases can be achieved without repeated muscle activation. These force gains appear to result from practice effects on central motor programming/planning. The results of these experiments add to existing evidence for the neural origin of strength increases that occur before muscle hypertrophy.
Article
Full-text available
Excerpt It has been known for more than 100 years that loss or impairment of sensation in our limbs may produce severe disorders of movement and that sensory input plays a critical role in controlling movement. Indeed, the skin, muscles, and joints of our limbs are richly innervated by a variety of sensory receptors that convey proprioceptive information to all levels of the nervous system. What role this input plays in movement control has been a question of recurring interest but remains incompletely understood. In 1895, Mott and Sherrington demonstrated that surgical deafferentation of a monkey's limb produces severe disorders of movement and an unwillingness to use the limb in purposeful action. They therefore concluded that movement initiation requires the support of afferent information and proposed that coordinated movement results from the concatenation of reflex responses. Subsequently, however, it was established that deafferentation does not abolish the capacity to make purposeful...
Article
Full-text available
We address the problem of whether and how adaptation to suppression of visual information occurs in catching behavior. To this end, subjects were provided with advance information about the height of fall and the mass of a ball and an auditory cue signaled the time of release. Adaptation did occur, as indicated by the unimpaired ability to catch the ball without vision; however, it involved a major reorganization of the muscle responses. The subjects were unable to produce anticipatory activity consistently, but preset the responses elicited by the impact. These responses were more complex and prolonged than those observed in the control experiments (with vision). In particular, medium- and long-latency responses were much larger, and the changes in elbow, wrist, and metacarpophalangeal angles following impact were more oscillatory than in the control. The general pattern of the EMG responses switched from that characteristic of catching with vision to that characteristic of catching without vision from the first trial of each experiment. However, the responses produced without vision were calibrated adaptively in the course of an experiment. In fact, the limb oscillations induced by the impact were significantly larger in the first trial than in the following trials. This seems to suggest that the parameters of the responses are adjusted based on an internal model of the dynamic interaction between the falling ball and the limb. This model is initially constructed from a priori knowledge on impact parameters and is subsequently updated on the basis of the kinesthetic and cutaneous information obtained during the first trial.
Article
When the hand of the observer is used as a visual target, oculomotor performance evaluated in terms of tracking accuracy, delay and maximal ocular velocity is higher than when the subject tracks a visual target presented on a screen. The coordination control exerted by the motor system of the arm on the oculomotor system has two sources: the transfer of kinaesthetic information originating in the arm, which increases the mutual coupling between the arm and the eyes, and information from the arm movement efferent copy, which synchronizes the motor activities of both subsystems (Gauthier et al. 1988; Gauthier and Mussa-Ivaldi 1988). We investigated the involvement of the cerebellum in coordination control during a visuo-oculo-manual tracking task. Experiments were conducted on baboons trained to track visual targets with the eyes and/or the hand. The role of the cerebellum was determined by comparing tracking performance defined in terms of delay, accuracy (position or velocity tracking errors) and maximal velocity, before and after lesioning the cerebellar dentate nucleus. Results showed that in the intact animal, ocular tracking was more saccadic when the monkey followed an external target than when it moved the target with its hand. After lesioning, eye-alone tracking of a visual target, as well as eye-and-hand tracking with the hand contralateral to the lesion, was little, if at all, affected. Conversely, ocular tracking of the hand ipsilateral to the lesion side became more saccadic and the correlation between eye and hand movement decreased considerably, while the delay between target and eyes increased. In normal animals, the delay between the eyes and the hand was close to zero, and maximal smooth pursuit velocity was around 100 degrees per second with close to unity gain; in eye-alone tracking the delay and maximal smooth pursuit velocity were 200 ms and 50 degrees per second, respectively. After lesioning, delay and maximum velocity were respectively around 210 ms and 40 degrees per second, that is, close to the values measured in eye-alone tracking. Thus, after dentate lesioning, the oculomotor system was unable to use information from the motor system of the arm to enhance its performance. We conclude that the cerebellum is involved in the "coordination control" between the oculomotor and manual motor systems in visuo-oculo-manual tracking tasks.
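The benefit of the efferent copy can be caricatured with a toy pursuit simulation: an eye driven only by retinal error delayed by 200 ms lags a sinusoidally moving hand, whereas adding a non-delayed velocity signal copied from the arm command removes most of the lag. All parameters below are arbitrary illustrative values, not estimates from the baboon data.

import math
from collections import deque

dt, delay, gain, freq = 0.001, 0.2, 4.0, 0.5     # 200 ms visual delay, 0.5 Hz hand motion

def simulate(use_efference_copy, duration=6.0):
    eye = 0.0
    err_buffer = deque([0.0] * int(delay / dt), maxlen=int(delay / dt))
    total_err, n = 0.0, 0
    for k in range(int(duration / dt)):
        t = k * dt
        hand = math.sin(2 * math.pi * freq * t)
        hand_vel = 2 * math.pi * freq * math.cos(2 * math.pi * freq * t)
        delayed_error = err_buffer[0]                # retinal error reaches the controller late
        err_buffer.append(hand - eye)
        drive = gain * delayed_error                 # visually guided pursuit only
        if use_efference_copy:
            drive += hand_vel                        # non-delayed copy of the arm command
        eye += drive * dt
        if t > 2.0:                                  # score after the initial transient
            total_err += abs(hand - eye)
            n += 1
    return total_err / n

print("mean eye-hand error, vision only       :", round(simulate(False), 3))
print("mean eye-hand error, with efferent copy:", round(simulate(True), 3))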
Article
In order to control voluntary movements, the central nervous system (CNS) must solve the following three computational problems at different levels: the determination of a desired trajectory in the visual coordinates, the transformation of its coordinates to the body coordinates and the generation of motor command. Based on physiological knowledge and previous models, we propose a hierarchical neural network model which accounts for the generation of motor command. In our model the association cortex provides the motor cortex with the desired trajectory in the body coordinates, where the motor command is then calculated by means of long-loop sensory feedback. Within the spinocerebellum--magnocellular red nucleus system, an internal neural model of the dynamics of the musculoskeletal system is acquired with practice, because of the heterosynaptic plasticity, while monitoring the motor command and the results of movement. Internal feedback control with this dynamical model updates the motor command by predicting a possible error of movement. Within the cerebrocerebellum--parvocellular red nucleus system, an internal neural model of the inverse-dynamics of the musculo-skeletal system is acquired while monitoring the desired trajectory and the motor command. The inverse-dynamics model substitutes for other brain regions in the complex computation of the motor command. The dynamics and the inverse-dynamics models are realized by a parallel distributed neural network, which comprises many sub-systems computing various nonlinear transformations of input signals and a neuron with heterosynaptic plasticity (that is, changes of synaptic weights are assumed proportional to a product of two kinds of synaptic inputs). Control and learning performance of the model was investigated by computer simulation, in which a robotic manipulator was used as a controlled system, with the following results: (1) Both the dynamics and the inverse-dynamics models were acquired during control of movements. (2) As motor learning proceeded, the inverse-dynamics model gradually took the place of external feedback as the main controller. Concomitantly, overall control performance became much better. (3) Once the neural network model learned to control some movement, it could control quite different and faster movements. (4) The neural network model worked well even when only very limited information about the fundamental dynamical structure of the controlled system was available.(ABSTRACT TRUNCATED AT 400 WORDS)
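The internal feedback idea, in which the forward dynamics model predicts a possible movement error and the motor command is updated before sensory feedback arrives, can be caricatured for a single joint. The sketch below is not the authors' network; the plant, gains and prediction horizon are invented, and sensory delays are omitted for brevity.

# Single-joint caricature: a forward model predicts the outcome of a candidate
# command and internal feedback revises the command before sensory feedback arrives.
mass, dt, horizon = 1.0, 0.01, 20      # predict 0.2 s ahead with the forward model
target, Kp, Kd = 1.0, 30.0, 10.0
x, v = 0.0, 0.0                        # true state of the controlled joint

def forward_model(x, v, u, n_steps):
    """Internal simulation: roll the current command forward to predict the outcome."""
    for _ in range(n_steps):
        x, v = x + v * dt, v + (u / mass) * dt
    return x, v

for step in range(300):
    u = Kp * (target - x) - Kd * v                 # candidate command from feedback policy
    for _ in range(5):                             # internal feedback loop, no sensory delay
        x_pred, v_pred = forward_model(x, v, u, horizon)
        u += 0.5 * (Kp * (target - x_pred) - Kd * v_pred)   # correct the predicted error
    x, v = x + v * dt, v + (u / mass) * dt         # the real plant executes the revised command

print(f"final position {x:.3f} (target {target})")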
Article
We have recorded from 306 neurons in the inferior olive of six alert cats. Most of the cats were trained to perform a simple task with the forelimb. We observed the neural responses to a wide variety of cutaneous and proprioceptive stimuli, as well as responses during spontaneous and learned active movements. Neurons responsive to somatosensory stimulation were found in all parts of the inferior olive, and they were roughly evenly divided between those responsive to cutaneous stimulation and those responsive to proprioceptive stimulation. In the dorsal accessory olive all neurons were responsive to somatosensory stimulation. In the medial accessory nucleus 88% and in the principal olive 74% of cells were responsive to somatosensory stimulation. Cells responsive to cutaneous stimulation usually had small receptive fields, commonly on the paw. These cells had low-threshold responses to one or more forms of cutaneous stimulation and typically fired one spike at the onset of the stimulus on 80% or more of stimulus applications. Cells responsive to proprioceptive stimulation most commonly responded to passive displacements of a limb. These cells were often very sensitive, responding to linear displacements of less than 1 cm in one specific direction. No cells in our sample responded reliably during active movement by the animal. Only 21% of cells responding to passive proprioceptive stimulation showed any modulation during active movement, and the modulation was weak. Likewise, cells responsive to cutaneous stimulation generally failed to respond when a similar stimulus was produced by an active movement by the animal. Exceptions to this were stimuli produced during exploratory movements or when the receptive field unexpectedly made contact with an object during active movement. Electrical stimulation applied in the inferior olive failed to evoke movements or to modify ongoing movement. Our results are consistent with the hypothesis that inferior olivary neurons function as somatic event detectors responding particularly reliably to unexpected stimuli.
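The authors' conclusion, that olivary neurons behave as detectors of unexpected somatic events, is often phrased in forward-model terms: predicted, self-generated sensory events are cancelled and only unpredicted events get through. A minimal sketch of that gating logic (my illustration, not the authors' analysis):

def olivary_response(somatic_event, predicted_by_forward_model):
    """Fire only for somatic events that were not predicted from the efference copy."""
    return somatic_event and not predicted_by_forward_model

print(olivary_response(True, False))   # unexpected passive touch  -> response
print(olivary_response(True, True))    # self-generated contact    -> silence
print(olivary_response(False, False))  # no event                  -> silence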
Article
Recent research has shown that a simple form of adaptation to prism-produced displacement of the visual field consists primarily of a proprioceptive change (a change in the felt position of the arm seen through prisms) rather than a visual, motor, or visuomotor change. More complex sorts of adaptation (to inversion, reversal, and other optical transformations) can also be understood as resulting from changes in the felt locations of parts of the body relative to other parts. Contrary to the usual empiricist assumption, vision seems to be very stable, whereas the position sense is remarkably flexible. When the two senses provide discrepant information, it is the position sense that changes.
Article
The authors determined the minimum amount of time necessary to process visual feedback from a movement. Eight students rapidly moved a stylus from a home position to a target. On half of the trials, all lights turned off at the start of the movement so that the movements were made in the dark. Visual feedback did not facilitate accuracy in hitting the target when the movement was as short as 190 msec. For durations of 260 msec or longer, having the lights on facilitated accuracy, suggesting that it takes 190-260 msec to process the feedback.
Article
We have studied manual motor function in a man deafferented by a severe peripheral sensory neuropathy. Motor power was almost unaffected. Our patient could produce a very wide range of preprogrammed finger movements with remarkable accuracy, involving complex muscle synergies of the hand and forearm muscles. He could perform individual finger movements and outline figures in the air with his eyes closed. He had normal pre- and postmovement EEG potentials, and showed the normal bi/triphasic pattern of muscle activation in agonist and antagonist muscles during fast limb movements. He could also move his thumb accurately through three different distances at three different speeds, and could produce three different levels of force at his thumb pad when required. Although he could not judge the weights of objects placed in his hands without vision, he was able to match forces applied by the experimenter to the pad of each thumb if he was given a minimal indication of thumb movement. Despite his success with these laboratory tasks, his hands were relatively useless to him in daily life. He was unable to grasp a pen and write, to fasten his shirt buttons or to hold a cup in one hand. Part of his difficulty lay in the absence of any automatic reflex correction in his voluntary movements, and also in an inability to sustain constant levels of muscle contraction without visual feedback over periods of more than one or two seconds. He was also unable to maintain long sequences of simple motor programmes without vision.
Article
Two experiments were carried out to assess the cognitive contribution of a movement plan and its relationship to efferent-command information in explaining the superior accuracy of preselected movement. Subjects performed under conditions which differed with regard to efferent information (active versus passive) and the availability of a movement plan (preselected, subject defined vs. constrained, experimenter defined). In both experiments active preselection had significantly smaller reproduction errors than any other combination, indicating that the planning process in itself was insufficient to facilitate movement coding. The findings were interpreted in terms of the sometimes-synonymously viewed concepts of central monitoring of efference, efference copy, and corollary discharge. The latter notion, that a central signal occurs prior to movement which acts to facilitate the processing of information, seemed to provide the most appropriate account of the present data.
Article
A nonlinear differential equation of the Riccati type is derived for the covariance matrix of the optimal filtering error. The solution of this "variance equation" completely specifies the optimal filter for either finite or infinite smoothing intervals and stationary or nonstationary statistics. The variance equation is closely related to the Hamiltonian (canonical) differential equations of the calculus of variations. Analytic solutions are available in some cases. The significance of the variance equation is illustrated by examples which duplicate, simplify, or extend earlier results in this field. The Duality Principle relating stochastic estimation and deterministic control problems plays an important role in the proof of theoretical results. In several examples, the estimation problem and its dual are discussed side-by-side. Properties of the variance equation are of great interest in the theory of adaptive systems. Some aspects of this are considered briefly.
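In scalar form the variance equation reads dP/dt = 2aP + q - (c^2/r)P^2, where a and c are the system and observation coefficients and q, r the process- and measurement-noise intensities (standard Kalman-Bucy notation, which may differ from the paper's symbols). A short numerical sketch, integrating it to its stationary value:

import math

# Scalar variance equation: dP/dt = 2*a*P + q - (c**2 / r) * P**2
a, c, q, r = -1.0, 1.0, 1.0, 0.5       # invented example coefficients
P, dt = 0.0, 0.001                     # start from zero error covariance

for _ in range(20000):                 # 20 s of forward-Euler integration
    P += (2 * a * P + q - (c * c / r) * P * P) * dt

# Positive root of the stationary equation 2*a*P + q - (c**2/r)*P**2 = 0
P_inf = (a + math.sqrt(a * a + q * c * c / r)) * r / (c * c)
print(f"integrated P = {P:.4f}, analytic steady state = {P_inf:.4f}")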
Article
This paper describes a self-organizing neural model for eye-hand coordination. Called the DIRECT model, it embodies a solution of the classical motor equivalence problem. Motor equivalence computations allow humans and other animals to flexibly employ an arm with more degrees of freedom than the space in which it moves to carry out spatially defined tasks under conditions that may require novel joint configurations. During a motor babbling phase, the model endogenously generates movement commands that activate the correlated visual, spatial, and motor information that is used to learn its internal coordinate transformations. After learning occurs, the model is capable of controlling reaching movements of the arm to prescribed spatial targets using many different combinations of joints. When allowed visual feedback, the model can automatically perform, without additional learning, reaches with tools of variable lengths, with clamped joints, with distortions of visual input by a prism, and with unexpected perturbations. These compensatory computations occur within a single accurate reaching movement. No corrective movements are needed. Blind reaches using internal feedback have also been simulated. The model achieves its competence by transforming visual information about target position and end effector position in 3-D space into a body-centered spatial representation of the direction in 3-D space that the end effector must move to contact the target. The spatial direction vector is adaptively transformed into a motor direction vector, which represents the joint rotations that move the end effector in the desired spatial direction from the present arm configuration. Properties of the model are compared with psychophysical data on human reaching movements, neurophysiological data on the tuning curves of neurons in the monkey motor cortex, and alternative models of movement control.
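The final transformation described above, from a spatial direction vector to joint rotations, can be illustrated with a redundant planar arm. In the sketch below the learned spatial-to-motor map is stood in for by a damped Jacobian pseudoinverse (an engineering surrogate, not the DIRECT model's self-organized transform); the point is simply that different starting postures reach the same target with different joint combinations, i.e., motor equivalence.

import numpy as np

L = np.array([0.3, 0.3, 0.25])                  # three link lengths (m), invented

def hand_position(q):
    """Planar forward kinematics: joint angles -> hand (end effector) position."""
    s = np.cumsum(q)
    return np.array([np.sum(L * np.cos(s)), np.sum(L * np.sin(s))])

def jacobian(q):
    s = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(s[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(s[i:]))
    return J

def reach(q, target, steps=200, gain=0.2):
    for _ in range(steps):
        direction = target - hand_position(q)   # spatial direction vector to the target
        J = jacobian(q)
        # damped least-squares stand-in for the learned spatial-to-motor transform
        dq = J.T @ np.linalg.solve(J @ J.T + 1e-3 * np.eye(2), gain * direction)
        q = q + dq                              # joint rotations that move the hand toward the target
    return q

target = np.array([0.5, 0.4])
for q0 in (np.array([0.2, 0.4, 0.6]), np.array([1.2, -0.5, 0.3])):
    q = reach(q0, target)
    print("hand:", np.round(hand_position(q), 3), " joint angles:", np.round(q, 2))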
Article
The constraints on learning new mappings between visual and proprioceptive spatial dimensions were assessed. Incomplete information was provided about a mapping by specifying only a few isolated visual–proprioceptive pairs of locations. The nature of the generalization occurring to untrained locations was then inspected to reveal the internal constraints. A new technique was developed to allow individual visual–proprioceptive pairs to be manipulated separately. In Experiment 1, training with only a single pair produced a rigid shift of one entire dimension with respect to the other. Training with two pairs caused linear interpolation to all untrained positions between the trained positions (Experiments 2 and 3). Finally, training with three new pairs also produced a linear change in behavior (Experiment 4), even though more adaptive solutions existed. The implications of these results for the learning process involved in acquiring new mappings are discussed.
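The generalization reported in Experiments 2-4 is what one would expect if the learner fits a linear map between the trained visual-proprioceptive pairs and applies it to untrained locations. A toy illustration with invented numbers:

# Two trained pairs: visual position -> remapped (felt) position, in cm (invented values)
(x1, y1), (x2, y2) = (10.0, 12.0), (30.0, 35.0)
slope = (y2 - y1) / (x2 - x1)
intercept = y1 - slope * x1

def generalize(visual_pos):
    """Predicted remapping at an untrained location, assuming linear interpolation
    between the two trained visual-proprioceptive pairs."""
    return slope * visual_pos + intercept

for v in (10.0, 15.0, 20.0, 25.0, 30.0):        # trained endpoints plus untrained positions
    print(f"visual {v:4.1f} cm -> felt {generalize(v):5.2f} cm")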
Article
We extend the cerebellar learning model proposed by Kawato and Gomi (1992) to the case where a specific region of the cerebellum executes adaptive feedback control as well as feedforward control. The model is still based on the feedback-error-learning scheme. The proposed adaptive feedback control model is developed in detail as a specific neural circuit model for three different regions of the cerebellum and the learning of the corresponding representative movements: (i) the flocculus and adaptive modification of the vestibulo-ocular reflex and optokinetic eye-movement responses, (ii) the vermis and adaptive posture control, and (iii) the intermediate zones of the hemisphere and adaptive control of locomotion. As a representative example, simultaneous adaptation of the vestibulo-ocular reflex and the optokinetic eye-movement response was successfully simulated while the Purkinje cells receive copies of motor commands through recurrent neural connections as well as vestibular and retinal-slip parallel-fiber inputs.
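For the flocculus example, the logic can be reduced to a single adaptive gain: the compensatory eye command is gain times head velocity, and retinal slip (the residual image motion, acting as the feedback error) drives changes in that gain. The sketch below is a deliberately stripped-down illustration of vestibulo-ocular reflex gain adaptation under that scheme, with made-up numbers; the paper's circuit model is far richer.

import math

vor_gain = 0.6                       # initial, imperfect gain (ideal compensation = 1.0)
lr, dt = 0.05, 0.01                  # learning rate and time step (invented)

for step in range(300):              # 3 s of 1 Hz head rotation
    head_vel = math.sin(2 * math.pi * 1.0 * step * dt)
    eye_vel = -vor_gain * head_vel               # compensatory eye command
    retinal_slip = head_vel + eye_vel            # residual image motion = feedback error
    vor_gain += lr * retinal_slip * head_vel     # error-driven change of the gain

print(f"adapted VOR gain = {vor_gain:.3f} (ideal = 1.0)")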
Article
Nine young infants were followed longitudinally from 4 to 15 months of age. We recorded early spontaneous movements and reaching movements to a stationary target. Time-position data of the hand (endpoint), shoulder, and elbow were collected using an optoelectronic measurement system (ELITE). We analyzed the endpoint kinematics and the intersegmental dynamics of the shoulder and elbow joint to investigate how changes in proximal torque control determined the development of hand trajectory formation. Two developmental phases of hand trajectory formation were identified: a first phase of rapid improvements between 16 and 24 weeks of age, the time of reaching onset for all infants. During that time period the number of movement units per reach and movement time decreased dramatically. In a second phase (28–64 weeks), a period of fine-tuning of the sensorimotor system, we saw slower, more gradual changes in the endpoint kinematics. The analysis of the underlying intersegmental joint torques revealed the following results: First, the range of muscular and motion-dependent torques (relative to body weight) did not change significantly with age. That is, early reaching was not confined by limitations in producing task-adequate levels of muscular torque. Second, improvements in the endpoint kinematics were not accomplished by minimizing amplitude of muscle and reactive torques. Third, the relative timing of muscular and motion-dependent torque peaks showed a systematic development toward an adult timing profile with increasing age. In conclusion, the development toward invariant characteristics of the hand trajectory is mirrored by concurrent changes in the control of joint forces. The acquisition of stable patterns of intersegmental coordination is not achieved by simply regulating force amplitude, but more so by modulating the correct timing of joint force production and by the system's use of reactive forces. Our findings support the view that development of reaching is a process of unsupervised learning with no external or innate teacher prescribing the desired kinematics or kinetics of the movement.
Article
We have investigated how the control of hand transport and of hand aperture are coordinated in prehensile movements by delivering mechanical perturbations to the hand transport component and looking for coordinated adjustments in hand aperture. An electric actuator attached to the subject's right arm randomly pulled the subject backwards, away from the target, or pushed them towards it, during a quarter of the experimental trials. A compensatory adjustment of hand aperture followed the immediate, mechanical effects of the perturbation of hand transport. The adjustment appeared to return the subject towards a stereotyped spatial relation between hand aperture and hand transport. These spatial patterns suggest how the two components may be coordinated during prehension. A simple model of this coordination, based on coupled position feedback systems, is presented.
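The coupled-feedback idea can be sketched with two first-order servos in which the aperture set point is slaved to the remaining transport distance, so that a pull away from the target automatically re-opens the hand. The coupling law and all parameters below are invented for illustration and are not the paper's model.

# Toy coupled feedback: hand transport x (m) and grip aperture a (m)
dt = 0.005
x, a = 0.0, 0.02                     # start of the reach, 2 cm aperture
x_target, a_final = 0.30, 0.06       # 30 cm reach ending on a 6 cm object
k_x, k_a = 8.0, 12.0                 # position-feedback gains (1/s)

history = []
for step in range(400):
    remaining = max(x_target - x, 0.0)
    a_set = a_final + 0.3 * remaining            # aperture set point slaved to transport
    if step == 150:                              # perturbation: pulled 5 cm away from the target
        x -= 0.05
    x += k_x * (x_target - x) * dt
    a += k_a * (a_set - a) * dt
    history.append((round(x, 3), round(a, 3)))

print("aperture just before the pull:", history[149][1])
print("peak aperture after the pull :", max(ap for _, ap in history[150:]))
print("final (transport, aperture)  :", history[-1])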
Article
We propose a computationally coherent model of cerebellar motor learning based on the feedback-error-learning scheme. We assume that climbing fiber responses represent motor-command errors generated by some of the premotor networks such as the feedback controllers at the spinal, brain stem, and cerebral levels. Thus, in our model, climbing fiber responses are considered to convey motor errors in the motor-command coordinates rather than in the sensory coordinates. Based on the long-term depression in Purkinje cells, each corticonuclear microcomplex in different regions of the cerebellum learns to execute predictive and coordinative control of different types of movements. Ultimately, it acquires an inverse model of a specific controlled object and complements crude control by the premotor networks. This general model is developed in detail as a specific neural circuit model for the lateral hemisphere. A new experiment is suggested to elucidate the coordinate frame in which climbing fiber responses are represented.
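The central computational claim, that the feedback controller's output serves as the motor-command error for training the inverse model, can be written in a few lines for a linear single-joint load. Below, the inverse model is just two adjustable coefficients standing in for the corticonuclear microcomplex; the load, gains and practice duration are invented, but with enough practice the weights drift toward the true inertia and viscosity.

import math

m, b = 2.0, 0.5                      # "unknown" load: m*acc + b*vel = torque
w_acc, w_vel = 0.0, 0.0              # inverse-model weights to be learned
Kp, Kd, lr, dt = 50.0, 10.0, 0.002, 0.001
x, v = 0.0, 0.0

for k in range(int(200.0 / dt)):     # 200 s of practice tracking a 1 Hz movement
    t = k * dt
    xd = math.sin(2 * math.pi * t)
    vd = 2 * math.pi * math.cos(2 * math.pi * t)
    ad = -(2 * math.pi) ** 2 * math.sin(2 * math.pi * t)
    u_ff = w_acc * ad + w_vel * vd                 # inverse model (feedforward command)
    u_fb = Kp * (xd - x) + Kd * (vd - v)           # crude premotor feedback controller
    u = u_ff + u_fb
    acc = (u - b * v) / m                          # the load responds
    x += v * dt
    v += acc * dt
    # feedback-error learning: the feedback command acts as the teaching signal
    w_acc += lr * u_fb * ad * dt
    w_vel += lr * u_fb * vd * dt

print(f"learned w_acc = {w_acc:.2f}, w_vel = {w_vel:.2f} (true values 2.0 and 0.5)")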
Article
A comprehensive theory of cerebellar function is presented, which ties together the known anatomy and physiology of the cerebellum into a pattern-recognition data processing system. The cerebellum is postulated to be functionally and structurally equivalent to a modification of the classical Perceptron pattern-classification device. It is suggested that the mossy fiber → granule cell → Golgi cell input network performs an expansion recoding that enhances the pattern-discrimination capacity and learning speed of the cerebellar Purkinje response cells. Parallel fiber synapses of the dendritic spines of Purkinje cells, basket cells, and stellate cells are all postulated to be specifically variable in response to climbing fiber activity. It is argued that this variability is the mechanism of pattern storage. It is demonstrated that, in order for the learning process to be stable, pattern storage must be accomplished principally by weakening synaptic weights rather than by strengthening them.
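The two key ingredients, expansion recoding by the granule layer and storage by weakening of active parallel-fiber synapses under climbing-fiber error activity, can be sketched as a sparse random expansion followed by a perceptron-like rule that only depresses weights. This is a toy rendering of the idea, not the paper's full model; the dimensions, codes and task below are invented.

import numpy as np

rng = np.random.default_rng(0)
n_mossy, n_granule = 20, 500

# Expansion recoding: each granule cell samples a few mossy fibers and fires sparsely
conn = rng.integers(0, n_mossy, size=(n_granule, 4))

def granule_code(mossy):
    drive = mossy[conn].sum(axis=1)
    theta = np.quantile(drive, 0.95)             # Golgi-like threshold keeps activity sparse
    return (drive >= theta).astype(float)

w = np.ones(n_granule)                           # Purkinje weights; learning only weakens them
patterns = rng.random((10, n_mossy)) > 0.5       # 10 random mossy-fiber input patterns
targets = rng.integers(0, 2, size=10)            # desired Purkinje output per pattern

def purkinje(g):
    return 1.0 if w @ g > 0.5 * g.sum() else 0.0

for _ in range(50):                              # repeated pairing with climbing-fiber errors
    for mossy, target in zip(patterns, targets):
        g = granule_code(mossy.astype(float))
        if purkinje(g) > target:                 # climbing fiber signals an error
            w[g > 0] *= 0.8                      # depress only the active parallel-fiber synapses

correct = sum(purkinje(granule_code(p.astype(float))) == t for p, t in zip(patterns, targets))
print(f"{correct}/10 patterns correct after depression-only learning")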
Article
We refer to Model Predictive Control (MPC) as that family of controllers in which there is a direct use of an explicit and separately identifiable model. Control design methods based on the MPC concept have found wide acceptance in industrial applications and have been studied by academia. The reason for such popularity is the ability of MPC designs to yield high performance control systems capable of operating without expert intervention for long periods of time. In this paper the issues of importance that any control system should address are stated. MPC techniques are then reviewed in the light of these issues in order to point out their advantages in design and implementation. A number of design techniques emanating from MPC, namely Dynamic Matrix Control, Model Algorithmic Control, Inferential Control and Internal Model Control, are put in perspective with respect to each other and the relation to more traditional methods like Linear Quadratic Control is examined. The flexible constraint handling capabilities of MPC are shown to be a significant advantage in the context of the overall operating objectives of the process industries and the 1-, 2-, and ∞-norm formulations of the performance objective are discussed. The application of MPC to non-linear systems is examined and it is shown that its main attractions carry over. Finally, it is explained that though MPC is not inherently more or less robust than classical feedback, it can be adjusted more easily for robustness.
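The receding-horizon idea shared by all of these MPC variants fits in a few lines for an unconstrained linear model with a quadratic objective: at each step, predict over a finite horizon with the explicit model, solve for the input sequence that minimizes the objective, apply only the first input, then repeat with new measurements. The sketch below is a generic illustration of that loop (not Dynamic Matrix Control or any other named algorithm), using a least-squares solution of the lifted prediction equations.

import numpy as np

# Explicit model: a discretized double integrator x_{k+1} = A x_k + B u_k
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
N = 15                                           # prediction horizon (steps)
x = np.array([2.0, 0.0])                         # start 2 m from the set point, at rest

# Lifted prediction over the horizon: X = F x + G U
F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
G = np.zeros((2 * N, N))
for i in range(N):
    for j in range(i + 1):
        G[2 * i:2 * i + 2, j:j + 1] = np.linalg.matrix_power(A, i - j) @ B

Q = np.kron(np.eye(N), np.diag([10.0, 1.0]))     # penalize position and velocity error
R = 0.1 * np.eye(N)                              # penalize control effort

for k in range(60):                              # receding-horizon loop
    # minimize (Fx + GU)' Q (Fx + GU) + U' R U  via the normal equations
    U = np.linalg.solve(G.T @ Q @ G + R, -G.T @ Q @ (F @ x))
    x = A @ x + B.flatten() * U[0]               # apply only the first input, then re-plan

print("state after 6 s of receding-horizon control:", np.round(x, 3))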
Article
The minimum torque-change model predicts and reproduces human multi-joint movement data quite well. However, there are three criticisms of the current neural network models for trajectory formation based on the minimum torque-change criterion: (1) their spatial representation of time, (2) back propagation is essential, and (3) they require too many iterations. Accordingly, we propose a new neural network model for trajectory formation based on the minimum torque-change criterion. Our neural network model basically uses a forward dynamics model, an inverse dynamics model, and a trajectory formation mechanism, which generates an approximate minimum torque-change trajectory. It does not require spatial representation of time or back propagation. Furthermore, fewer iterations are required to obtain an approximately optimal solution. Finally, our neural network model can be broadly applied to the engineering field because it is a new method for solving optimization problems with boundary conditions.
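For intuition about the criterion itself: for a point mass whose torque is proportional to acceleration, minimizing the integrated squared rate of change of torque is equivalent to minimizing squared jerk, and the optimal point-to-point trajectory has the familiar closed form below. For realistic multi-joint, nonlinear dynamics this equivalence breaks down, which is why the model solves the optimization numerically; the numbers here are only an example.

# Minimum torque-change trajectory for a point mass (equivalent to minimum jerk):
# x(t) = x0 + (xf - x0) * (10 s^3 - 15 s^4 + 6 s^5), with s = t / T
def min_torque_change_position(t, x0, xf, T):
    s = t / T
    return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

T, x0, xf = 1.0, 0.0, 0.3                        # a 30 cm movement lasting 1 s (example numbers)
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"t = {t:.2f} s -> x = {min_torque_change_position(t, x0, xf, T):.4f} m")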
Article
This review presents a theory and prototype for a neural controller called INFANT that learns sensory-motor coordination from its own experience. Three adaptive abilities are discussed: locating stationary targets with movable sensors; grasping arbitrarily positioned and oriented targets in 3D space with multijoint arms; and positioning an unforeseen payload with accurate and stable movements despite unknown sensor feedback delay. INFANT adapts to unforeseen changes in the geometry of the physical motor system, the internal dynamics of the control circuits, and to the location, orientation, shape, weight, and size of objects. It learns to accurately grasp an elongated object with almost no information about the geometry of the physical sensory-motor system. This neural controller relies on the self-consistency between sensory and motor signals to achieve unsupervised learning. It is designed to be generalized for coordinating any number of sensory inputs with limbs of any number of joints. The principal theme of the review is how various geometries of interacting topographic neural fields can satisfy the constraints of adaptive behavior in complete sensory-motor circuits.
Article
This paper examines the relationship between imagery and the acquisition of motor skills. Since most of the research in the motor domain has considered imagery under the topic of mental practice, a comparison between imagery and mental practice is first drawn. Then the basic mental practice paradigm is outlined and research on the effects of imagery is summarized. Factors influencing the use of imagery are considered, including the task, the imagery instructions, and individual imagery abilities. Implications for employing imagery in the teaching of motor skills are discussed and, finally, an approach to studying imagery and motor skills is put forward.
Article
Experiments with young infants provide evidence for early-developing capacities to represent physical objects and to reason about object motion. Early physical reasoning accords with 2 constraints at the center of mature physical conceptions: continuity and solidity. It fails to accord with 2 constraints that may be peripheral to mature conceptions: gravity and inertia. These experiments suggest that cognition develops concurrently with perception and action and that development leads to the enrichment of conceptions around an unchanging core. The experiments challenge claims that cognition develops on a foundation of perceptual or motor experience, that initial conceptions are inappropriate to the world, and that initial conceptions are abandoned or radically changed with the growth of knowledge.
Article
The conventional notion that peripheral muscle-related signals provide the basis for resistance to external perturbations is no longer sufficient. Proprioceptive information seems to be required for spatial steering of multi-joint movements, and also for temporal coordination among the joints in certain tasks. In rhythmic movements, peripheral and centrally generated signals appear to interact in a complementary manner. The complex effects of proprioceptive afferents on motor output continue to be delineated vigorously. Global effects of local perturbation in multi-joint contexts are emerging as being particularly significant.