
Attention modulation using short- and long-term knowledge.

ICVS 2008, Lecture Notes in Computer Science, vol. 5008, Springer, pp. 151-160.
  • ABSTRACT: Humanoid robots are intended to act and interact in dynamically changing environments in the presence of humans. Current robotic systems are usually able to move in dynamically changing environments because of built-in depth and obstacle sensing. However, for acting in their environment, the internal representation of such systems is usually constructed by hand and known in advance. In contrast, this paper presents a system that dynamically constructs its internal scene representation using a model-based vision approach. This enables our system to approach and grasp objects in a previously unknown scene. We combine standard stereo with model-based image fitting techniques for a real-time estimation of the position and orientation of objects. The model-based image processing allows for an easy transfer to the internal, dynamic scene representation. For movement generation we use a task-level whole-body control approach that is coupled with a movement optimization scheme. Furthermore, we present a novel method that constrains the robot to keep certain objects in the field of view (FOV) while moving. We demonstrate the successful interplay between model-based vision, dynamic scene representation, and movement generation by means of interactive reaching and grasping tasks. (A minimal sketch of such a field-of-view check appears after this publication list.)
    IEEE International Conference on Robotics and Automation (ICRA 2011), Shanghai, China, 9-13 May 2011.
  • ABSTRACT: A stable perception of the environment is a crucial prerequisite for researching the learning of semantics from human-robot interaction, and also for the generation of behavior relying on the robot's perception. In this paper, we propose several contributions to this research field. To organize visual perception, the concept of proto-objects is used for the representation of scene elements. These proto-objects are created by several different sources and can be combined to provide the means for interactive autonomous behavior generation. They are also processed by several classifiers that extract different visual properties. The robot learns to associate speech labels with these properties by using the outcome of the classifiers for online training of a speech recognition system. To ease the combination of visual and speech classifier outputs, which is a necessity for online training and a basis for future learning of semantics, a common representation for all classifier results is used. This uniform handling of multimodal information provides the necessary flexibility for further extension. We show the feasibility of the proposed approach in interactive experiments with the humanoid robot ASIMO. (A minimal sketch of such a common classifier-output representation appears after this publication list.)
    8th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2008).
  • ABSTRACT: A cognitive visual system is generally intended to work robustly under varying environmental conditions, adapt to a broad range of unforeseen changes, and even exhibit prospective behavior such as systematically anticipating possible visual events. These properties are unquestionably out of reach of currently available solutions. To analyze the reasons underlying this failure, in this paper we develop the idea of a vision system that flexibly controls the order and the accessibility of visual processes during operation. Vision is here understood as the dynamic process of selectively adapting visual parameters and modules as a function of underlying goals or intentions. This perspective requires a specific architectural organization, since vision then becomes a continuous balance between sensory stimulation and internally generated information. Furthermore, the consideration of intrinsic resource limitations, and their management by an appropriate control substrate, becomes a centerpiece for the creation of truly cognitive vision systems. We outline the main concepts required for the development of such systems, and discuss modern approaches to a few selected vision subproblems, such as image segmentation, item tracking, and visual object classification, from the perspective of their integration and recruitment into a cognitive vision system. (A minimal sketch of goal-driven scheduling of visual modules appears after this publication list.)
    Pages 215-247.
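
The field-of-view constraint mentioned in the first abstract above can be illustrated with a simple geometric check. The following is only a minimal sketch, not the authors' method: it assumes a conical field of view around the camera's forward axis, and the names (in_field_of_view, half_fov_deg) and values are invented for illustration.

# Minimal sketch (not the published implementation): test whether a target
# point lies inside an assumed conical field of view around the camera axis.
import numpy as np

def in_field_of_view(cam_pos, cam_forward, target_pos, half_fov_deg=30.0):
    """Return True if target_pos lies within half_fov_deg of the forward axis."""
    direction = np.asarray(target_pos, dtype=float) - np.asarray(cam_pos, dtype=float)
    dist = np.linalg.norm(direction)
    if dist < 1e-9:
        return True  # target coincides with the camera origin
    direction /= dist
    forward = np.asarray(cam_forward, dtype=float)
    forward /= np.linalg.norm(forward)
    angle = np.degrees(np.arccos(np.clip(np.dot(direction, forward), -1.0, 1.0)))
    return angle <= half_fov_deg

# Example: a target 2 m ahead and slightly to the side stays visible.
print(in_field_of_view(cam_pos=[0, 0, 1.5], cam_forward=[1, 0, 0],
                       target_pos=[2.0, 0.5, 1.2]))

A whole-body controller could evaluate such a check (or a smooth version of it) as an additional task constraint while generating reaching motions.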
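The common representation for classifier results described in the second abstract above can be pictured as a label-confidence distribution that both visual and speech classifiers emit. The sketch below rests on that assumption; the data structures (ClassifierResult, ProtoObject) and the normalized-product fusion are illustrative choices, not the system's actual design.

# Minimal sketch (assumed, not the described system): a uniform
# label-confidence representation attached to a proto-object, fused
# across modalities by a normalized product.
from dataclasses import dataclass, field

@dataclass
class ClassifierResult:
    property_name: str   # e.g. "color", "shape", "spoken_label"
    confidences: dict    # label -> confidence in [0, 1]

@dataclass
class ProtoObject:
    object_id: int
    results: list = field(default_factory=list)  # ClassifierResult instances

def fuse(results):
    """Combine several results over the same label set by a normalized product."""
    labels = set()
    for r in results:
        labels.update(r.confidences)
    fused = {label: 1.0 for label in labels}
    for r in results:
        for label in labels:
            fused[label] *= r.confidences.get(label, 1e-3)  # floor for missing labels
    total = sum(fused.values()) or 1.0
    return {label: value / total for label, value in fused.items()}

# Example: a color classifier and a speech-derived label agree on "red".
vision = ClassifierResult("color", {"red": 0.7, "green": 0.2, "blue": 0.1})
speech = ClassifierResult("spoken_label", {"red": 0.8, "green": 0.1, "blue": 0.1})
obj = ProtoObject(object_id=1, results=[vision, speech])
print(fuse(obj.results))  # "red" dominates the fused distribution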
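The goal-driven control of visual processes discussed in the last abstract above can be pictured as a scheduler that selects modules by goal relevance under a resource budget. The sketch below is purely illustrative: module names, costs, relevance scores, and the greedy selection policy are all assumptions made for this example.

# Minimal sketch (illustrative only): pick the most goal-relevant visual
# modules that fit into a fixed processing budget.
def schedule(modules, goal, budget):
    """Greedily select modules by relevance to the current goal, within budget."""
    ranked = sorted(modules, key=lambda m: m["relevance"].get(goal, 0.0), reverse=True)
    plan, used = [], 0.0
    for m in ranked:
        if m["relevance"].get(goal, 0.0) > 0.0 and used + m["cost"] <= budget:
            plan.append(m["name"])
            used += m["cost"]
    return plan

modules = [
    {"name": "segmentation",   "cost": 3.0, "relevance": {"grasp": 0.9, "search": 0.4}},
    {"name": "tracking",       "cost": 2.0, "relevance": {"grasp": 0.8, "search": 0.2}},
    {"name": "classification", "cost": 4.0, "relevance": {"grasp": 0.3, "search": 0.9}},
]
# With a budget of 5 units and the goal "grasp", segmentation and tracking are chosen.
print(schedule(modules, goal="grasp", budget=5.0))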

Sven Rebhan