Conference Paper

An experimental evaluation of a novel minimum-jerk Cartesian controller for humanoid robots

Department of Robotics, Brain and Cognitive Sciences (RBCS), Italian Institute of Technology (IIT), Genova, Italy
DOI: 10.1109/IROS.2010.5650851 Conference: 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 18-22, 2010, Taipei, Taiwan
Source: IEEE Xplore


In this paper we describe the design of a Cartesian controller for a generic robot manipulator, addressing some of the challenges that are typically encountered in the field of humanoid robotics. The solution we propose handles a large number of degrees of freedom, produces smooth, human-like motion, and computes the trajectory on-line. We also support the idea that, to produce significant advances in robotics, it is important to compare different approaches not only at the theoretical level but also at the implementation level. For this reason we test our software on the iCub platform and compare its performance against other available solutions.
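The "minimum-jerk" property in the title refers to the classical model of human point-to-point reaching by Flash and Hogan (1985). For readers unfamiliar with it, the sketch below implements only that basic profile; it is an illustration, not the paper's controller, which additionally handles redundant kinematics and on-line trajectory recomputation. All names and values are illustrative.

```python
import numpy as np

def minimum_jerk(x0, xf, T, t):
    """Classical minimum-jerk point-to-point profile (Flash & Hogan, 1985).

    x0, xf : start and goal positions (scalars or numpy arrays)
    T      : total movement duration [s]
    t      : current time, clamped to [0, T]
    Returns the position along the trajectory at time t.
    """
    tau = np.clip(t / T, 0.0, 1.0)               # normalized time in [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5   # zero velocity/acceleration at both ends
    return x0 + (xf - x0) * s

# Example: a 2 s reach from the origin to a hypothetical Cartesian target, sampled at 100 Hz
x0 = np.zeros(3)
xf = np.array([0.3, 0.1, 0.2])  # target position [m]
trajectory = [minimum_jerk(x0, xf, 2.0, t) for t in np.arange(0.0, 2.0, 0.01)]
```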



  • Source
    • "Therefore, the resultant position and the direction of motion of the avoidance/catching behavior were proportional to the activation of the taxels' representations and changed dynamically as the activation levels of different taxels varied. The velocity control loop employed a cartesian controller [23] whose reference speed was fixed to 10cm/s. "
    ABSTRACT: With robots leaving factory environments and entering less controlled domains, possibly sharing living space with humans, safety needs to be guaranteed. To this end, some form of awareness of their body surface and the space surrounding it is desirable. In this work, we present a unique method that lets a robot learn a distributed representation of the space around its body (or peripersonal space) by exploiting a whole-body artificial skin and physical contact with the environment. Every taxel (tactile element) has a visual receptive field anchored to it. Starting from an initially blank state, the distance of every object entering this receptive field is visually perceived and recorded, together with information on whether the object eventually contacted the particular skin area or not. This gives rise to a set of probabilities that are updated incrementally and that carry information about the likelihood of particular events in the environment contacting a particular set of taxels. The learned representation naturally serves the purpose of predicting contacts with the whole body of the robot, which is of clear behavioral relevance. Furthermore, we devised a simple avoidance controller that is triggered by this representation, thus endowing the robot with a "margin of safety" around its body. Finally, simply reversing the sign in the controller gives rise to simple "reaching" for objects in the robot's vicinity, which automatically proceeds with the most activated (closest) body part.
    IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2015, Hamburg, Germany; 09/2015
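The peripersonal-space representation described above boils down to per-taxel contact statistics collected on-line. The following is a minimal sketch under the simplifying assumption that each receptive field is discretized into distance bins holding empirical contact frequencies; the class and parameters are hypothetical and far coarser than the paper's actual representation.

```python
import numpy as np

class TaxelRF:
    """Simplified incremental contact-probability model for one taxel.

    The visual receptive field is discretized into distance bins; for each
    bin we count how often an object was observed there and how often such
    an observation was eventually followed by contact with this taxel.
    (Illustrative sketch only; the paper's representation is richer.)
    """
    def __init__(self, max_dist=0.4, n_bins=20):
        self.edges = np.linspace(0.0, max_dist, n_bins + 1)
        self.seen = np.zeros(n_bins)
        self.contacted = np.zeros(n_bins)

    def _bin(self, dist):
        return int(np.clip(np.digitize(dist, self.edges) - 1, 0, len(self.seen) - 1))

    def update(self, dist, contact):
        """Record one observation at distance `dist` [m]; `contact` is True
        if the object went on to touch this taxel."""
        b = self._bin(dist)
        self.seen[b] += 1
        if contact:
            self.contacted[b] += 1

    def p_contact(self, dist):
        """Estimated probability that an object at `dist` will make contact."""
        b = self._bin(dist)
        return self.contacted[b] / self.seen[b] if self.seen[b] > 0 else 0.0
```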
  • Source
    • "A good measure of how a certain configuration is suitable for a robot's joints configuration is provided by the standard manipulability [30]. To compute this quantity, we make use of the method in [31] to solve the inverse kinematics (IK) of the arm and the torso. It provides the joint configuration that satisfies the desired position and orientation of the hand using 10 degrees of freedom (7 for the arm and 3 for the torso). "
    ABSTRACT: We present a new method for three-finger precision grasping and its implementation in a complete grasping toolchain. We start from binocular vision to recover the partial 3D structure of unknown objects. We then process the incomplete 3D point clouds, searching for good triplets of contact points according to a function that weighs both the feasibility and the stability of the solution. In particular, while stability is determined in the classical way (i.e., via force closure), feasibility is evaluated according to a new measure that includes information about the possible configuration shapes of the hand as well as the hand's inverse kinematics. Finally, we extensively assess the proposed method using the stereo vision and the kinematics of the iCub robot.
    Proceedings - IEEE International Conference on Robotics and Automation; 05/2014
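The grasping pipeline above scores candidate contact triplets by trading off stability against feasibility. A minimal sketch of that scoring structure follows; the weighted-sum form, the function names, and the exhaustive enumeration (which would need pruning on real point clouds) are assumptions, not the paper's actual measures.

```python
import numpy as np
from itertools import combinations

def score_triplets(points, normals, stability_fn, feasibility_fn, w=0.5):
    """Rank candidate three-finger contact triplets on a partial point cloud.

    points, normals : (N, 3) numpy arrays of surface points and their normals
    stability_fn(pts, nrm) -> score in [0, 1] (e.g., a force-closure quality measure)
    feasibility_fn(pts)    -> score in [0, 1] (hand shape / inverse-kinematics feasibility)
    w                      : stability-vs-feasibility trade-off weight
    (Hypothetical interface; the paper defines its own measures.)
    """
    scored = []
    for idx in combinations(range(len(points)), 3):
        pts, nrm = points[list(idx)], normals[list(idx)]
        s = w * stability_fn(pts, nrm) + (1 - w) * feasibility_fn(pts)
        scored.append((s, idx))
    scored.sort(reverse=True)
    return scored  # highest-scoring triplets first
```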
  • Source
    • "Our experiments were carried out with iCub, a 53 Degrees Of Freedom (DOF) humanoid robot shaped as a child [30], using the upper body of the robot (head, torso, arms and hands -totally 41 DOF). The proprioceptive sensors, the inertial and the proximal force/torque sensors embedded on the platform, combined with its kinematics and dynamics modeling, are fed to numerous modules of the architecture: modules for controlling gaze, posture and reaching movements [65], modules for controlling the whole-body dynamics [35], the robot compliance and its contact forces [36], modules for learning incrementally the visuo-motor models of the robot [23], the vision modules recognizing the robot's self-body in the visual space [33], the basic modules for speech and gaze tracking [24], just to cite a few. 5 The architecture is implemented as a set of concurrent modules exchanging information with the robot thanks to the YARP middleware [66]. Some modules, developed in ROS [67], communicate bidirectionally with YARP thanks to a simple bridge between the two middlewares. "
    ABSTRACT: This paper addresses the problem of active object learning by a humanoid child-like robot, using a developmental approach. We propose a cognitive architecture where the visual representation of the objects is built incrementally through active exploration. We present the design guidelines of the cognitive architecture, its main functionalities, and we outline the cognitive process of the robot by showing how it learns to recognize objects in a human-robot interaction scenario inspired by social parenting. The robot actively explores the objects through manipulation, driven by a combination of social guidance and intrinsic motivation. Besides the robotics and engineering achievements, our experiments replicate some observations about the coupling of vision and manipulation in infants, particularly how they focus on the most informative objects. We discuss the further benefits of our architecture, particularly how it can be improved and used to ground concepts.
    IEEE Transactions on Autonomous Mental Development 03/2014; 6(1):56-72. DOI: 10.1109/TAMD.2013.2280614
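The architecture in the last excerpt relies on YARP ports for communication between concurrent modules (and, via a bridge, with ROS nodes). As a minimal sketch of that mechanism, assuming the YARP 3.x Python bindings and a running yarpserver (port names and payload are made up):

```python
import yarp

# Publish data on a named port so that other modules (or a YARP-ROS bridge)
# can subscribe to it.
yarp.Network.init()

port = yarp.BufferedPortBottle()
port.open("/demo/status:o")          # hypothetical port name

bottle = port.prepare()
bottle.clear()
bottle.addString("object")
bottle.addFloat64(0.87)              # e.g., a recognition confidence
port.write()

# Another module would subscribe with:
#   yarp connect /demo/status:o /otherModule/status:i
port.close()
yarp.Network.fini()
```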