Article

Attention modulation using short- and long-term knowledge.

ICVS 2008, Lecture Notes in Computer Science, vol. 5008, Springer, pp. 151–160.
  • ABSTRACT: A cognitive visual system is generally intended to work robustly under varying environmental conditions, adapt to a broad range of unforeseen changes, and even exhibit prospective behavior such as systematically anticipating possible visual events. These properties are unquestionably out of reach of currently available solutions. To analyze the reasons underlying this failure, in this paper we develop the idea of a vision system that flexibly controls the order and the accessibility of visual processes during operation. Vision is here understood as the dynamic process of selectively adapting visual parameters and modules as a function of underlying goals or intentions. This perspective requires a specific architectural organization, since vision then becomes a continuous balance between sensory stimulation and internally generated information. Furthermore, the consideration of intrinsic resource limitations, and their organization by means of an appropriate control substrate, becomes a centerpiece for the creation of truly cognitive vision systems. We outline the main concepts required for the development of such systems, and discuss modern approaches to a few selected vision subproblems such as image segmentation, item tracking, and visual object classification from the perspective of their integration and recruitment into a cognitive vision system.
    Chapter · Jan 1970
  • ABSTRACT: A stable perception of the environment is a crucial prerequisite for researching the learning of semantics from human-robot interaction, and also for generating behavior that relies on the robot's perception. In this paper, we propose several contributions to this research field. To organize visual perception, the concept of proto-objects is used for the representation of scene elements. These proto-objects are created by several different sources and can be combined to provide the means for interactive autonomous behavior generation. They are also processed by several classifiers that extract different visual properties. The robot learns to associate speech labels with these properties by using the classifiers' outputs for online training of a speech recognition system. To ease the combination of visual and speech classifier outputs — a necessity for online training and a basis for future learning of semantics — a common representation for all classifier results is used. This uniform handling of multimodal information provides the necessary flexibility for further extension. We show the feasibility of the proposed approach by interactive experiments with the humanoid robot ASIMO.
    Conference Paper · Jan 2009
  • ABSTRACT: Fast, reliable, and demand-driven acquisition of visual information is the key to representing visual scenes efficiently. To achieve this efficiency, a cognitive vision system must plan the utilization of its processing resources to acquire only information relevant to the task. Here, the incorporation of long-term knowledge plays a major role in deciding which information to gather. In this paper, we present a first approach to exploiting knowledge about the world and its structure to plan visual actions. We propose a method to schedule these visual actions so as to allow fast discrimination between objects that are relevant or irrelevant to the task. By doing so, we are able to reduce the system's computational demand. A first evaluation of our ideas is given using a proof-of-concept implementation.
    Conference Paper · Oct 2009