An experimental evaluation of a novel minimum-jerk Cartesian controller for humanoid robots

Conference Paper · Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems · October 2010
DOI: 10.1109/IROS.2010.5650851 · Source: DBLP
Conference: 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 18-22, 2010, Taipei, Taiwan
Abstract
In this paper we describe the design of a Cartesian controller for a generic robot manipulator. We address some of the challenges that are typically encountered in the field of humanoid robotics: the solution we propose deals with a large number of degrees of freedom, produces smooth, human-like motion, and computes the trajectory on-line. We also support the idea that, to produce significant advancements in the field of robotics, it is important to compare different approaches not only at the theoretical level but also at the implementation level. For this reason we test our software on the iCub platform and compare its performance against other available solutions.
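As background for the title's key term, the minimum-jerk point-to-point profile (Flash and Hogan, 1985) is the quintic polynomial that controllers of this kind reproduce or approximate. The sketch below evaluates that closed-form profile; it is a generic illustration, not the controller implemented in the paper.

#include <cstdio>

// Closed-form minimum-jerk interpolation from x0 to xf over duration T.
// The quintic blend 10*s^3 - 15*s^4 + 6*s^5 yields zero velocity and zero
// acceleration at both endpoints, the hallmark of human-like reaching.
double minJerk(double x0, double xf, double t, double T) {
    double s = t / T;                            // normalized time
    if (s < 0.0) s = 0.0;
    if (s > 1.0) s = 1.0;
    double blend = s*s*s*(10.0 - 15.0*s + 6.0*s*s);
    return x0 + (xf - x0)*blend;
}

int main() {
    const double T = 2.0;                        // movement duration [s]
    for (double t = 0.0; t <= T; t += 0.25)
        std::printf("t=%.2f  x=%.4f\n", t, minJerk(0.0, 0.3, t, T));
    return 0;
}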

Citations
    • "Differently from the work of Hersch and Billard, we designed a feedback trajectory generator instead of the VITE (Vector-Integration-To-Endpoint) method used in open loop. A complete discussion of the rationale of the modifications to the trajectory generation is outside the scope of this paper; the interested reader is referred to Pattacini et al. [16] . Reasons to prefer a feedback formulation include the possibility of smoothly connecting multiple pieces of trajectories and correcting on line for accumulation of errors due to the enforcement of the constraints of the multireferential method. "
    Abstract: This paper is about a layered controller for a complex humanoid robot: namely, the iCub. We exploited a combination of precomputed models and machine learning, owing to the principle of balancing the design effort with the complexity of data collection for learning. A first layer uses the iCub sensors to implement impedance control, on top of which we plan trajectories to reach for visually identified targets while avoiding the most obvious joint limits or self-collisions of the robot arm and body. Modeling errors or misestimation of parameters are compensated by machine learning in order to obtain accurate pointing and reaching movements. Motion segmentation is the main visual cue employed by the robot.
    Chapter · Jan 2017
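To make the excerpt's distinction concrete: a feedback trajectory generator recomputes its command from the current state, so the target can change on-line and successive trajectory pieces connect smoothly. The sketch below uses the time-varying feedback form of the minimum-jerk model due to Hoff and Arbib (1993) as a stand-in; it illustrates the general idea and is not the specific generator of Pattacini et al. [16].

#include <cstdio>

// Feedback form of the minimum-jerk model (Hoff and Arbib, 1993):
// jerk = (60*(xf - x) - 36*tau*v - 9*tau^2*a) / tau^3, with tau the time
// remaining. Because the command depends on the current state, the target
// xf can be moved at run time and the motion re-converges smoothly.
int main() {
    double x = 0.0, v = 0.0, a = 0.0;  // current position, velocity, acceleration
    double xf = 0.3;                   // target position [m]
    const double T = 2.0;              // total movement duration [s]
    const double dt = 0.001;           // integration step [s]
    const int steps = static_cast<int>(T/dt);
    for (int k = 0; k < steps - 1; ++k) {
        if (k == steps/2) xf = 0.4;    // target displaced mid-flight
        double tau = T - k*dt;         // time remaining
        double jerk = (60.0*(xf - x) - 36.0*tau*v - 9.0*tau*tau*a)/(tau*tau*tau);
        a += jerk*dt;                  // forward-Euler integration
        v += a*dt;
        x += v*dt;
    }
    std::printf("final x = %.4f (target %.4f)\n", x, xf);
    return 0;
}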
    • "Arm displacements and finger clicks are programmed to trigger display on the subject's tablet (show/hide items) and take notes (monitor correct responses). The arm gesture controller uses the iCub Cartesian Interface [19], which enables the control of the robot's arm directly on operational space by providing the desired position and orientation of one endeffector (here the index finger of the right hand). The arm controller also provides task-specific movements: preparing to click, clicking, and going back to rest position. "
    Abstract: Socially assistive robots with interactive behavioral capabilities have been improving the quality of life of a wide range of users: taking care of the elderly, training individuals with cognitive disabilities, assisting physical rehabilitation, etc. While the interactive behavioral policies of most systems are scripted, we discuss here the key features of a new methodology that enables professional caregivers to teach a socially assistive robot (SAR) how to perform assistive tasks while giving proper instructions, demonstrations and feedback. We describe how socio-communicative gesture controllers, which control the speech, facial displays and hand gestures of our iCub robot, are driven by multimodal events captured from a professional human demonstrator performing a neuropsychological interview. Furthermore, we propose an original online evaluation method for rating the multimodal interactive behaviors of the SAR and show how such a method can help designers identify faulty events.
    Conference Paper · Oct 2016
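For readers unfamiliar with the interface cited as [19], the sketch below follows the publicly documented YARP cartesiancontrollerclient usage: the client exposes ICartesianControl, and goToPose() takes the desired end-effector position plus an axis-angle orientation. The port names and the target pose here are assumptions for illustration and must match the running robot or simulator.

#include <yarp/os/all.h>
#include <yarp/dev/all.h>
#include <yarp/sig/all.h>

using namespace yarp::os;
using namespace yarp::dev;
using namespace yarp::sig;

int main() {
    Network yarp;                                    // initialize the YARP network
    Property option;
    option.put("device", "cartesiancontrollerclient");
    option.put("remote", "/icubSim/cartesianController/right_arm");  // assumed server port
    option.put("local", "/myApp/cartesianClient/right_arm");         // assumed client port
    PolyDriver driver(option);
    if (!driver.isValid())
        return 1;

    ICartesianControl *icart = nullptr;
    driver.view(icart);                              // get the Cartesian interface

    Vector xd(3), od(4);
    xd[0] = -0.3; xd[1] = 0.1; xd[2] = 0.1;          // position [m] in the root frame
    od[0] = 0.0;  od[1] = 0.0; od[2] = 1.0; od[3] = 3.14;  // orientation (axis-angle, ~pi)
    icart->goToPose(xd, od);                         // command the desired end-effector pose
    icart->waitMotionDone();                         // block until the motion ends

    driver.close();
    return 0;
}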
    • "These are weighted by the activations, a i (t), of the corresponding taxels' Therefore, the resultant position and the direction of motion of the avoidance/reaching 896 behavior are proportional to the activation of the taxels' representations and change 897 dynamically as the activation levels of different taxels varies. The velocity control loop 898 employs a Cartesian controller [57] whose reference speed was fixed to 10cm/s. "
    Abstract: This paper investigates a biologically motivated model of peripersonal space through its implementation on a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. The experiments on the iCub humanoid robot show that the peripersonal space representation (i) can be learned efficiently and in real time via simple interaction with the robot, (ii) can lead to the generation of behaviors like avoidance and reaching, and (iii) can contribute to understanding the biological principle of motor equivalence. More specifically, with respect to (i) the present model contributes a hypothesis about the learning mechanisms for peripersonal space. In relation to point (ii) we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance of or reaching for an incoming stimulus, and for (iii) we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement.
    Article · Oct 2016 · PLoS ONE
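The activation-weighted scheme in the excerpt above can be sketched in a few lines: each taxel contributes its position and preferred motion direction in proportion to its activation a_i(t). The Taxel structure and the numeric values below are hypothetical stand-ins; only the weighting scheme follows the excerpt.

#include <array>
#include <cstdio>
#include <vector>

// Hypothetical taxel representation: a position, a preferred direction of
// motion (e.g. inward for avoidance, outward for reaching), and an
// activation a_i(t) in [0, 1].
struct Taxel {
    std::array<double,3> pos;
    std::array<double,3> dir;
    double a;
};

int main() {
    std::vector<Taxel> taxels = {
        {{-0.30, 0.05, 0.10}, {0.0, 0.0, 1.0}, 0.8},
        {{-0.28, 0.07, 0.12}, {0.0, 1.0, 0.0}, 0.3},
    };

    // Resultant position and direction are activation-weighted sums, so the
    // commanded behavior shifts smoothly as taxel activations vary in time.
    std::array<double,3> pos{}, dir{};
    double sum = 0.0;
    for (const auto &t : taxels) {
        for (int k = 0; k < 3; ++k) {
            pos[k] += t.a*t.pos[k];
            dir[k] += t.a*t.dir[k];
        }
        sum += t.a;
    }
    if (sum > 0.0)
        for (int k = 0; k < 3; ++k)
            pos[k] /= sum;                          // weighted mean position
    std::printf("resultant position: %.3f %.3f %.3f\n", pos[0], pos[1], pos[2]);
    std::printf("motion direction:   %.3f %.3f %.3f\n", dir[0], dir[1], dir[2]);
    return 0;
}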