The visual perception of a humanoid robot bridges the physical world with the robot's internal world representation through visual skills such as self-localization, object recognition, detection, classification and tracking. Unfortunately, these skills are affected by internal and external sources of uncertainty. These uncertainties arise at various levels, ranging from noisy signals and calibration deviations of the embodiment up to mathematical approximations and the limited granularity of the perception-planning-action cycle. This aggregated uncertainty deteriorates and limits the precision and efficiency of humanoid robot visual perception. In order to overcome these limitations, the depth perception uncertainty should be modeled within the skills of the humanoid robot. Due to the complexity of the aggregated uncertainty in humanoid systems, the visual depth uncertainty can hardly be modeled analytically. However, the uncertainty distribution can be conveniently attained by supervised learning. The role of the supervisor is to provide ground-truth spatial measurements corresponding to the humanoid robot's uncertain visual depth perception. In this article, a supervised learning method for inferring a novel model of the visual depth uncertainty is presented. The acquisition of the model is attained autonomously by the humanoid robot ARMAR-IIIB (see Fig. 1).
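To make the supervised-learning idea concrete, the following is a minimal sketch (not the article's implementation) of how a depth-uncertainty model could be fit from pairs of ground-truth and measured depths. The binning scheme, polynomial degree, and the synthetic noise model mimicking stereo triangulation error are all illustrative assumptions, not values taken from the article.

```python
# Hypothetical sketch: learn sigma(z), the standard deviation of the depth
# error as a function of measured depth, from supervised ground-truth data.
import numpy as np

def fit_depth_uncertainty(z_meas, z_true, n_bins=20, degree=2):
    """Fit a polynomial model of depth-error standard deviation vs. depth."""
    errors = z_meas - z_true
    # Bin measurements by depth and estimate the error spread in each bin.
    edges = np.linspace(z_meas.min(), z_meas.max(), n_bins + 1)
    centers, sigmas = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (z_meas >= lo) & (z_meas < hi)
        if mask.sum() > 2:
            centers.append(0.5 * (lo + hi))
            sigmas.append(errors[mask].std())
    # Fit sigma as a smooth function of depth; for stereo vision the
    # uncertainty typically grows with distance.
    coeffs = np.polyfit(centers, sigmas, degree)
    return np.poly1d(coeffs)

# Synthetic example standing in for supervisor-provided ground truth:
# noise grows quadratically with depth, as in stereo triangulation.
rng = np.random.default_rng(0)
z_true = rng.uniform(0.5, 3.0, 5000)                       # meters
z_meas = z_true + rng.normal(0.0, 0.002 + 0.01 * z_true**2)
sigma = fit_depth_uncertainty(z_meas, z_true)
print(f"predicted sigma at 2 m: {sigma(2.0):.4f} m")
```

In the article's setting, the synthetic ground truth above would be replaced by the supervisor's spatial measurements, and the learned distribution could then be queried by the robot's visual skills at runtime.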