Sensory motor remapping of space in human-machine interfaces

Department of Physiology, Northwestern University, Chicago, Illinois, USA.
Progress in Brain Research (Impact Factor: 2.83). 01/2011; 191:45-64. DOI: 10.1016/B978-0-444-53752-2.00014-X
Source: PubMed


Studies of adaptation to patterns of deterministic forces have revealed the ability of the motor control system to form and use predictive representations of the environment. These studies have also shown that adaptation to novel dynamics is aimed at preserving the trajectories of a controlled endpoint, either the hand of a subject or a transported object. We review some of these experiments and present more recent studies aimed at understanding how the motor system forms representations of the physical space in which actions take place. An extensive line of investigations in visual information processing has dealt with the issue of how the Euclidean properties of space are recovered from visual signals that do not appear to possess these properties. The same question is addressed here in the context of motor behavior and motor learning by observing how people remap hand gestures and body motions that control the state of an external device. We present some theoretical considerations and experimental evidence about the ability of the nervous system to create novel patterns of coordination that are consistent with the representation of extrapersonal space. We also discuss the prospect of endowing human-machine interfaces with learning algorithms that, combined with human learning, may facilitate the control of powered wheelchairs and other assistive devices.
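
To make the kind of remapping described above concrete, here is a minimal hypothetical sketch, not the chapter's actual protocol: a fixed linear map from N body-signal channels (glove or shoulder sensors, say) onto a 2D device state, derived by principal component analysis of a calibration recording. The channel count, gain, and calibration scheme are illustrative assumptions.

```python
# Hypothetical sketch (not the chapter's protocol): derive a fixed linear
# remapping from N body-signal channels to a 2D device state via principal
# component analysis of a calibration recording, then use it online.
import numpy as np

def fit_body_to_device_map(calibration, n_dims=2):
    """calibration: (T, N) array of body signals recorded while the user
    moves freely. Returns the signal mean and an (n_dims, N) projection
    onto the top principal components of the user's own movements."""
    mean = calibration.mean(axis=0)
    _, _, vt = np.linalg.svd(calibration - mean, full_matrices=False)
    return mean, vt[:n_dims]

def body_to_device(sample, mean, projection, gain=1.0):
    """Map one (N,) body-signal sample to device coordinates (e.g., a cursor)."""
    return gain * projection @ (sample - mean)

# Usage: fit the map on simulated 8-channel calibration data, then decode.
rng = np.random.default_rng(0)
calibration = rng.standard_normal((1000, 8))
mean, projection = fit_body_to_device_map(calibration)
xy = body_to_device(calibration[0], mean, projection)  # 2D device state
```

With a map of this form, the forward relation from body to device is fixed by the experimenter, and the user must learn the inverse relation, from desired device state back to body configuration, which is the remapping problem the chapter studies.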

Cited in:

  • "Alternatively, recent works have supported a shift in myoelectric control applications towards human-embedded controllers learned through interaction with a constant mapping function associating sEMG inputs with control outputs (Antuvan et al., 2014). Mussa-Ivaldi et al. (2011) propose that the human motor system is capable of learning novel inverse mappings relating the effect of motor commands on control outputs while interacting with myoelectric interfaces. This learning has been modeled and verified in the presence of closed-loop feedback (Radhakrishnan et al., 2008; Chase et al., 2009; Héliot et al., 2010), allowing users to perform tasks simply by learning controls in a given task space (Mosier et al., 2005; Liu and Scheidt, 2008; Liu et al., 2011; Pistohl et al., 2013)."
    ABSTRACT: One of the hottest topics in rehabilitation robotics is the proper control of prosthetic devices. Despite decades of research, the state of the art lags dramatically behind expectations. To shed light on this issue, in June 2013 the first international workshop on the Present and Future of Non-invasive Peripheral Nervous System (PNS)-Machine Interfaces (PMIs) was convened, hosted by the International Conference on Rehabilitation Robotics. The term PMI was selected to denote human-machine interfaces targeted at the limb-deficient, mainly upper-limb amputees, that deal with signals gathered from the PNS in a non-invasive way, that is, from the surface of the residuum. The workshop was intended to provide an overview of the state of the art and future perspectives of such interfaces; this paper is a collection of opinions expressed by each researcher/group involved in it.
    Frontiers in Neurorobotics 08/2014; 8:22. DOI:10.3389/fnbot.2014.00022
    • "Although intuitive decoders give better initial performance, other decoders with worse initial performance are capable of higher learning rates. Further, Mussa-Ivaldi et al. [20] propose that the human motor system attempts to uncover the novel inverse map relating the effect of motor commands on task-relevant variables through learning. This suggests that humans, while learning to perform a task with a novel control space, tend to explore the full space in order to form a complete inverse model. "
    ABSTRACT: Myoelectric controlled interfaces have become a research interest for use in advanced prosthetics, exoskeletons, and robot teleoperation. Current research focuses on improving a user's initial performance, either by training a decoding function for a specific user or by implementing "intuitive" mapping functions as decoders. However, both approaches are limiting, the former being subject-specific and the latter task-specific. This paper proposes a paradigm shift in myoelectric interfaces by embedding the human as the controller of the system to be operated. Using abstract mapping functions between myoelectric activity and control actions for a task, this study shows that human subjects are able to control an artificial system with increasing efficiency simply by learning how to control it. The method's efficacy is tested using two different control tasks and four different abstract mappings relating upper-limb muscle activity to control actions for those tasks. The results show that all subjects were able to learn the mappings and improve their performance over time. More interestingly, a chronological evaluation across trials reveals that the learning curves transfer across subsequent trials that share the same mapping, independent of the task being executed. This implies that new muscle synergies are developed and refined relative to the mapping used by the control task, suggesting that maximal performance may be achieved by learning a constant, arbitrary mapping function rather than dynamic subject- or task-specific functions. Moreover, the results indicate that the method may extend to the neural control of any device or robot, without limitations imposed by anthropomorphism or human-related counterparts.
    (A minimal sketch of such a constant, abstract EMG-to-control mapping appears after this list.)
    IEEE Transactions on Neural Systems and Rehabilitation Engineering 01/2014; 22(4). DOI:10.1109/TNSRE.2014.2302212 · 3.19 Impact Factor
  • ABSTRACT: Studies of speech sensorimotor learning often manipulate auditory feedback by modifying isolated acoustic parameters, such as formant frequency or fundamental frequency, using near-real-time resynthesis of a participant's speech. An alternative approach is to engage a participant in a total remapping of the sensorimotor working space using a virtual vocal tract. To support this approach to studying speech sensorimotor learning, we have developed a system that controls an articulatory synthesizer using electromagnetic articulography data. Articulator movement data from the NDI Wave System are streamed to a Maeda articulatory synthesizer, and the resulting synthesized speech provides auditory feedback to the participant. This approach allows the experimenter to generate novel articulatory-acoustic mappings; moreover, the acoustic output of the synthesizer can be perturbed using acoustic resynthesis methods. Since no robust speech-acoustic signal is required from the participant, the system allows for the study of sensorimotor learning in any individual, even those with severe speech disorders. In the current work, we present preliminary results demonstrating that typically functioning participants can use a virtual vocal tract to produce diphthongs within a novel articulatory-acoustic workspace. Once sufficient baseline performance is established, perturbations to auditory feedback (formant shifting) can elicit compensatory and adaptive articulatory responses.
    (A schematic sketch of the streaming feedback loop appears after this list.)
    The Journal of the Acoustical Society of America 05/2013; 133(5):3342. DOI:10.1121/1.4805649 · 1.50 Impact Factor
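
The first two citing entries above describe constant, abstract mapping functions relating sEMG activity to control actions. The sketch below is a hypothetical illustration, not the code of either paper; the channel count, gain, and the random map are assumptions. The point it captures is that the decoder is fixed, so any performance gain must come from the human learning to drive it.

```python
# Hypothetical sketch of a constant "abstract" myoelectric mapping: a fixed
# random matrix relates smoothed sEMG envelopes to a 2D control action and
# never changes, so all adaptation must happen on the human side.
import numpy as np

N_CHANNELS = 8          # sEMG electrodes (assumed count)
N_CONTROLS = 2          # e.g., cursor velocity in x and y

rng = np.random.default_rng(42)
W = rng.uniform(-1.0, 1.0, size=(N_CONTROLS, N_CHANNELS))  # fixed abstract map

def emg_envelope(raw_window):
    """Mean absolute value of a short window of raw sEMG, one value per channel."""
    return np.abs(raw_window).mean(axis=0)

def control_action(raw_window, gain=0.05):
    """Constant linear decoder: EMG envelopes -> 2D control command."""
    return gain * W @ emg_envelope(raw_window)

# Usage: a 100-sample window of 8-channel sEMG drives one control update.
window = rng.standard_normal((100, N_CHANNELS))
vx, vy = control_action(window)
```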
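
The third entry describes streaming articulograph data to an articulatory synthesizer in near real time. The loop below is a schematic sketch only: read_wave_sample, the synth object, and play_audio are placeholders standing in for the NDI Wave stream, the Maeda synthesizer, and an audio backend, none of which has a standard public Python API.

```python
# Schematic sketch of the streaming loop (placeholders, not the authors' code):
# articulograph sensor positions are mapped to synthesizer parameters, which
# generate the audio frame the participant hears as feedback.
import numpy as np

def sensors_to_parameters(positions, A, b):
    """Experimenter-defined articulatory-acoustic mapping: here a linear map
    from sensor coordinates (tongue, jaw, lips) to synthesizer parameters.
    Changing A or b creates the novel mappings such studies exploit."""
    return A @ positions + b

def feedback_loop(read_wave_sample, synth, play_audio, A, b):
    """One pass of the closed loop: movement in, synthesized speech out."""
    while True:
        positions = read_wave_sample()        # (N,) sensor coordinates
        if positions is None:                 # stream ended
            break
        params = sensors_to_parameters(positions, A, b)
        audio = synth.synthesize(params)      # short synthesized audio frame
        play_audio(audio)                     # near-real-time auditory feedback

# Usage with stand-in components (two samples, then the stream ends):
samples = iter([np.zeros(12), np.ones(12), None])
A, b = np.eye(10, 12), np.zeros(10)

class _StubSynth:
    def synthesize(self, params):
        return np.zeros(160)  # 10 ms of silence at 16 kHz

feedback_loop(lambda: next(samples), _StubSynth(), lambda audio: None, A, b)
```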