Humanoid robots possess unique locomotion and manipulation capabilities, which make them ideally suited as assistants in households or even in disaster scenarios. Their legs allow them to walk across rough terrain and clutter, climb elevations, or pass through narrow passages. At the same time, with their arms they can deliver objects, remove debris, or even use power tools to cut through walls. However, to enable this kind of behavior, novel navigation techniques are required that exploit the special capabilities of humanoids.
One of the great challenges in navigation is that a robot always acts under uncertainty. It possesses only imperfect knowledge about itself and its environment, yet this knowledge is fundamental. Motions and observations are affected by noise and need to be handled appropriately, and data has to be associated in the presence of ambiguities to obtain consistent representations of the environment. For humanoid robots, the problem is aggravated because their kinematic complexity is considerably higher than that of wheeled robots. The shaking motion of a walking humanoid introduces further errors into the sensor data, making them harder to interpret, and additional constraints such as balance and payload have to be considered.
In this thesis, we present novel methods that contribute to the development of autonomous humanoid robots, focusing on cameras as the primary sensor. First, we describe a method for self-calibrating the robot's kinematic model that automatically selects appropriate calibration postures. Further, we present a method to identify safe areas for the robot to step onto, based on self-supervised classification of camera images. Additionally, we describe an integrated navigation system for humanoids equipped with depth cameras. The approach estimates the robot's 6D pose within a map, constructs a volumetric representation of the unknown parts of the environment, and plans collision-free paths to a target location. We introduce extensions based on anytime footstep planning that allow navigation in challenging, cluttered scenarios, enabling the robot to step over or onto obstacles and to traverse narrow passages. Finally, we demonstrate a method that enables accurate manipulation by tracking the pose of objects in the camera images.
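To illustrate the anytime footstep-planning idea mentioned above, the following minimal sketch runs a weighted A* search over a small discrete set of footstep actions and repeats it with a shrinking heuristic inflation factor, so that a first (suboptimal) plan is available quickly and better plans follow as time allows. This is a simplified, hypothetical example rather than the planner used in the thesis: the action set, the collision callback `is_free`, the goal tolerance, and the Euclidean heuristic are placeholder assumptions, and unlike a full ARA* implementation the sketch simply restarts each search instead of reusing previous search effort.

```python
# Sketch: anytime footstep planning via repeated weighted A* with decreasing
# heuristic inflation. All numeric values and helpers are illustrative only.
import heapq
import math

# Hypothetical footstep actions: (dx, dy, dtheta) relative to the current stance pose.
ACTIONS = [(0.08, 0.0, 0.0), (0.04, 0.05, 0.0), (0.04, -0.05, 0.0),
           (0.0, 0.0, math.radians(20)), (0.0, 0.0, -math.radians(20))]

def successors(state):
    """Expand a stance pose (x, y, theta) by all footstep actions."""
    x, y, theta = state
    for dx, dy, dtheta in ACTIONS:
        nx = x + math.cos(theta) * dx - math.sin(theta) * dy
        ny = y + math.sin(theta) * dx + math.cos(theta) * dy
        nxt = (round(nx, 3), round(ny, 3), round((theta + dtheta) % (2 * math.pi), 3))
        yield nxt, math.hypot(dx, dy) + 0.01  # small constant keeps step costs positive

def heuristic(state, goal):
    """Straight-line distance to the goal position (orientation ignored)."""
    return math.hypot(goal[0] - state[0], goal[1] - state[1])

def weighted_a_star(start, goal, w, is_free, tol=0.1, max_expansions=20000):
    """One weighted A* search; w > 1 inflates the heuristic for fast, suboptimal plans."""
    open_list = [(w * heuristic(start, goal), 0.0, start, [start])]
    closed = set()
    expansions = 0
    while open_list and expansions < max_expansions:
        _, g, state, path = heapq.heappop(open_list)
        if heuristic(state, goal) < tol:
            return path, g
        if state in closed:
            continue
        closed.add(state)
        expansions += 1
        for nxt, cost in successors(state):
            if nxt not in closed and is_free(nxt):
                heapq.heappush(open_list, (g + cost + w * heuristic(nxt, goal),
                                           g + cost, nxt, path + [nxt]))
    return None, float("inf")

def anytime_plan(start, goal, is_free, weights=(3.0, 2.0, 1.5, 1.0)):
    """Yield footstep plans of decreasing suboptimality as planning time allows."""
    for w in weights:
        plan, cost = weighted_a_star(start, goal, w, is_free)
        if plan is not None:
            yield plan, cost, w
```

A caller would typically take the first plan returned, start executing it, and swap in later, cheaper plans as they arrive; the real planner additionally reasons about obstacle heights so that steps over or onto obstacles become valid actions.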
All of our techniques are implemented and thoroughly evaluated on a Nao humanoid. Our contributions advance the state of the art in humanoid robot navigation and enable autonomous navigation capabilities even for affordable humanoids.