Kungshuset Lundagård's research while affiliated with Lund University and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (19)


LUCS Minor 2, 1994. ISSN 1104–1609.
  • Article

December 2003 · 12 Reads · Kungshuset Lundagård

BERRY III is a simulator for autonomous robots and artificial creatures. This report describes the various elements of the simulated environment and the implementation of the physical part of the simulator. The body, the sensor and motor systems of the simulated creatures are described in detail together with the algorithms used to model interaction between objects. These algorithms include calculation of smell diffusion, tactile and visual input, movement of the creature and collision detection.
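The report does not reproduce BERRY III's diffusion algorithm here, but smell diffusion of this kind is commonly modelled with a discrete Laplacian update on a grid: each cell exchanges a fraction of its concentration with its four neighbours. The following is an illustrative sketch under that assumption, not the simulator's actual code; the function name and parameters are hypothetical.

```python
import numpy as np

def diffuse_smell(grid, rate=0.2, steps=1):
    """Sketch of grid-based smell diffusion (assumed scheme, not BERRY III's).
    Each step moves a fraction `rate` of every cell's concentration to its
    four neighbours -- a discrete Laplacian with wrap-around boundaries."""
    g = grid.astype(float).copy()
    for _ in range(steps):
        neighbours = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
                      np.roll(g, 1, 1) + np.roll(g, -1, 1))
        g = g + rate * (neighbours - 4.0 * g)   # discrete Laplacian update
    return g

grid = np.zeros((5, 5))
grid[2, 2] = 1.0                 # a single odour source
smell = diffuse_smell(grid, steps=3)
```

With this update the total concentration is conserved while the odour spreads out from the source, which is the qualitative behaviour a simulated creature's smell sensors would sample.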


A Robot With Autonomous Spatial Learning:

December 2003 · 7 Reads

Introduction To act in an unknown and continuously changing environment, an autonomous robot must be able to react instantaneously to changes and unexpected events in order to avoid collisions and to update its maps. Successful navigation requires that the robot reacts primarily to its immediate sensory information and secondarily to its internal mapping of the spatial layout of the environment. We have developed and constructed an experimental mobile robot equipped with a number of complementary sensory systems (Balkenius and Kopp 1994a). A video camera is mounted on a movable head that also contains a pair of microphones. Ultrasonic sensors are located around the body of the robot, and a set of tactile sensors (whiskers) and a bumper are used to detect obstacles at short range. The project aims at developing the attention and navigation systems of the robot to include vision for spatial orientation. The choice of vision is natural since this modality contains the richest information …
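The priority scheme described above (immediate sensory reflexes first, map-based navigation second) can be sketched as a simple fixed-priority arbiter. The sensor names and thresholds below are illustrative assumptions, not taken from the paper.

```python
def select_action(sensors, map_suggestion):
    """Fixed-priority arbiter in the spirit of the text: reflexes driven by
    immediate sensory input override the map-based route. All names and
    thresholds are hypothetical."""
    if sensors.get("bumper"):                          # contact: back off at once
        return "reverse"
    if sensors.get("sonar_min", float("inf")) < 0.3:   # obstacle near: avoid it
        return "turn_away"
    return map_suggestion                              # clear path: follow the map

action = select_action({"sonar_min": 2.0}, "follow_map_route")
```

The design choice is the one the abstract argues for: the map never has to be perfectly up to date, because the reflex layers guarantee collision avoidance regardless of what the map suggests.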


Unknown

December 2003 · 9 Reads

Introduction A mobile robot navigating in an unstructured environment faces many difficult problems for which vision may potentially offer useful solutions. The XT-1 (eXpectation-based Template matching) architecture was developed in an attempt to address many of these problems with similar constructions. The current system handles such diverse problems as landmark and place recognition, the generation of orienting and anticipatory saccades, smooth pursuit, as well as visual servoing during locomotion. Although all these tasks are highly interwoven, they can roughly be divided into subsystems for navigation and target tracking. The tracking system has been successfully implemented in a robot (figure 1). We are currently moving the navigational system from an experimental set-up to a real mobile robot (figure 2). The emphasis of the architecture has been on the actual tasks that a mobile robot needs to perform rather than on the more theoretical aspects of computer vision. Although we …
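The core idea named in the architecture's title, expectation-based template matching, can be sketched as follows: rather than scanning the whole image, search only a small window around the position where the template is expected to appear, scoring candidates with a normalised correlation. This is an illustrative reconstruction of the general technique, not XT-1's actual implementation; the function name and parameters are assumptions.

```python
import numpy as np

def match_template(image, template, expected_xy, window=2):
    """Search a small window around the expected position (the 'expectation')
    and return the best-matching location plus its normalised correlation.
    A sketch of the general technique, not the XT-1 code."""
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -1.0, None
    ey, ex = expected_xy
    for y in range(max(0, ey - window), min(image.shape[0] - th, ey + window) + 1):
        for x in range(max(0, ex - window), min(image.shape[1] - tw, ex + window) + 1):
            patch = image[y:y+th, x:x+tw] - image[y:y+th, x:x+tw].mean()
            denom = np.linalg.norm(t) * np.linalg.norm(patch)
            score = float((t * patch).sum() / denom) if denom else 0.0
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

rng = np.random.default_rng(0)
image = rng.random((12, 12))
template = image[5:8, 5:8].copy()       # a landmark cut from the scene
pos, score = match_template(image, template, expected_xy=(4, 4))
```

Restricting the search to the expected window is what makes the approach cheap enough for the real-time saccade and pursuit behaviours listed above.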


Unknown

December 2003 · 15 Reads

A computational model of context processing is presented. It is shown in computer simulations how a stable context representation can be learned from a dynamic sequence of attentional shifts between various stimuli in the environment. The mechanism can automatically create the required context representations, store memories of stimuli and bind them to locations. The model also shows how an explicit matching between expected and actual stimuli can be used for novelty detection.


Toward a Robot Model of Attention-Deficit

December 2003 · 11 Reads

We describe a behavioural experiment with the hawkmoth Deilephila elpenor and show how its behaviour in the experimental situation can be reproduced by a computational model. The aim of the model is to investigate what learning strategies are necessary to produce the behaviour observed in the experiment. Since very little is known about the nervous system of the animal, the model is mainly based on behavioural data and the sensitivities of its photoreceptors. The model consists of a number of interacting behaviour systems that are triggered by specific stimuli and control specific behaviours. The ability of the moth to learn the colours of different flowers and the adaptive processes involved in the choice between stimulus-approach and place-approach strategies are also modelled. The behavioural choices of the simulated model closely parallel those of the real animal. The model has implications both for the ecology of the animal and for robotic systems.


Figure 2.1. An elastic template is coded by a number of features f₀, f₁, f₂ together with their spatial relations r₀₁, r₀₂, r₁₂.
Figure 4.1. Sensitivity to changes in the environment with a hit threshold of 0.7. TOP: the original template with 32 features. MIDDLE: matching after the door has been opened and a chair inserted; the hit rate is 16/32 with an average correlation of 0.85. BOTTOM: matching with people present in the image; the hit rate is 23/32 with an average correlation of 0.73.
Robust Self-Localization using Elastic Templates
  • Article
  • Full-text available

December 2003 · 28 Reads

Introduction A visually orienting mobile robot must cope with a number of changes to its environment. Most importantly, it must be able to identify its location even when objects in the environment have been moved or when the illumination conditions have changed. We report experiments with a template-based self-localization method that operates in real time on a Pentium 133 MHz PC (Balkenius and Kopp 1996a, b; 1997). The algorithm has been implemented in an autonomous mobile robot as an important part of its navigational system (Balkenius and Kopp 1996a, b; 1997). The main task for the algorithm is to recognize landmarks robustly around the robot. These landmarks are represented as elastic templates and are automatically selected by the robot during learning. Our method differs from other vision-based localization techniques in a number of respects. First, it does not require artificial landmark symbols (Adorni et al. 1996). Second, it can derive the exact angle toward a landmark …
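The hit-rate statistic quoted in figure 4.1 can be reconstructed from the caption: each template feature is matched individually, a feature counts as a hit when its correlation reaches the threshold (0.7 in the figure), and the hits' mean correlation summarises match quality. This sketch follows the caption's arithmetic; the scoring details are assumed, not taken from the paper's code.

```python
def template_match_stats(correlations, hit_threshold=0.7):
    """Hit rate and average hit correlation for one elastic template,
    reconstructed from the figure 4.1 caption (details assumed)."""
    hits = [c for c in correlations if c >= hit_threshold]
    avg = sum(hits) / len(hits) if hits else 0.0
    return len(hits), avg

# Four hypothetical per-feature correlations: three pass the threshold.
n_hits, avg_corr = template_match_stats([0.9, 0.8, 0.6, 0.75])
```

Reporting both numbers, as the figure does, separates how much of the template survived a scene change (hit rate) from how well the surviving features still match (average correlation).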


Figure 1. LEFT: the initial configuration of a network with a two-dimensional input. The weights for the two scale-spaces σ₀ and σ₁ (filled circles) lie at different distances from the origin, while the input vectors (open circles) are presented on the unit circle. RIGHT: two weight vectors on their way to the unit circle. The competition between the two nodes depends on both the angle ϕᵢ and the magnitude µᵢ of category i. The arrows show the direction of change in the weight vector if its corresponding category wins.
Figure 2. The result of the categorisation process with different values of ss. The figure shows a categorisation in a Euclidean coordinate system. ϕ and ω correspond to the angles in figure 1. The number of nodes in the network is the same in all three cases, but a different number of categories is constructed.
Some Properties Of Neural Representations

December 2003 · 52 Reads · 5 Citations

When a stimulus reaches our sensory system, it evokes a perceptual schema. This schema subsequently produces a central neural representation of the stimulus. I want to investigate what properties these neural representations must have to support complex cognitive processes. To do this, I present a description of neural networks that makes it possible to bridge the gap between the neuronal and the conceptual level of description. Using this high-level description, I state a hypothesis about neural representations that I have called the continuity hypothesis. It makes it possible to use knowledge of processes at the conceptual level at the neuronal level. Some properties of the representation of a stimulus are presented, and it is shown that the central representation can be the result of either a categorisation or an associative process. Two minimal conditions on categorisation are suggested. These conditions are used to construct a new type of competitive learning that supports representations that cohere with these conditions. Finally, I discuss the representation of relations between stimuli. It is suggested that the temporal domain is necessary to represent arbitrary relations among stimuli.
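A single step of competitive learning of the general kind described above can be sketched as follows: the node whose weight vector best matches the input wins (the dot product combines the angle and magnitude factors of the figure 1 caption), and only the winner moves toward the input. This is a minimal, generic sketch; the paper's own update rule and the multi-scale weights are not reproduced here.

```python
import numpy as np

def competitive_step(weights, x, lr=0.1):
    """One winner-take-all competitive-learning step (generic sketch,
    not the paper's rule). The dot product scores each node by both
    the angle to and the magnitude of its weight vector; only the
    winner's weights move toward the input."""
    winner = int(np.argmax(weights @ x))
    weights[winner] += lr * (x - weights[winner])
    return winner

# Two nodes with small initial weights; input on the unit circle.
w = np.array([[0.1, 0.0], [0.0, 0.1]])
who = competitive_step(w, np.array([1.0, 0.0]))
```

Repeated presentations pull each winning weight vector out toward the unit circle, which is the trajectory sketched on the right of figure 1.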


Unknown

May 2002 · 12 Reads

Our aim is to elucidate the similarities and differences between humans and apes as concerns cooperative behaviour and its relation to communication. In particular, we will point to the decisive role of symbolic communication for making more advanced forms of co-operation possible. We distinguish between competitive and collaborative co-operation and take this distinction as a starting point for our analysis. In competitive contexts, co-operation is triggered by what is present in the environment. The resource that is competed for is available and accessible, but not yet in possession. Humans, but not apes, can also engage in collaborative co-operation. In this type of co-operation the resource is not manifest, but mainly imagined. The reason why only humans can co-operate collaboratively is that they can imagine what is not there. We submit that language has evolved as a tool by which humans can make their imaginations known to each other, in order to enhance co-operation. Language gives human beings a great advantage as concerns co-operative behaviour, especially regarding communication about goals and the ways to reach them. Symbolic communication makes use of representations as stand-ins for actual entities. Use of representations thus replaces the use of environmental features in communication. A consequence of this is that language makes it possible to jointly attend to imagined goals. Joint attention is a more basic capacity than language use. It is necessary for all kinds of co-operation because it makes it possible for different subjects to attend to a common goal. Apes can engage in joint attention, but do not achieve the same complexity as humans. They can jointly attend only to things that are present in the context. This makes it difficult to co-operate ...


Balkenius, C., Kopp, L. (1996). Visual Tracking and Target Selection for Mobile Robots

October 2000 · 9 Reads

This paper describes how tracking and target selection are used in two behavior systems of the XT-1 vision architecture for mobile robots. The first system is concerned with active tracking of moving targets and the second is used for visually controlled spatial navigation. We overview the XT-1 architecture and describe the role of expectation-based template matching for both target tracking and navigation. The subsystems for low-level processing, attentional processing, single-feature processing, spatial relations, and place/object recognition are described, and we present a number of behaviors that can make use of the different visual processing stages. The architecture, which is inspired by biology, has been successfully implemented in a number of robots, which are also briefly described.


The Origin of Symbols in the Brain

April 2000 · 138 Reads · 1 Citation

Deacon's (1997) book is an interesting attempt to explain the critical aspects of the evolution of language as the learning of symbolic relationships. Deacon blurs the traditional distinction between syntax and semantics by arguing that the meaning of symbols is primarily determined via the combinatorial relations between symbols, and only secondarily via an indexical relation between a symbol and a referent (Deacon 1997, Ch. 3). However, this account of how the acquisition of symbols involves multiple hierarchies of associative learning has proved rather difficult to understand (Hurford 1997), and even more difficult to incorporate into an explicit representational model. In this article, we want to use Deacon's theory as a platform for a more elaborated and precise model of symbol learning. Our model will be presented in rough phylogenetic order, and will contain only those cognitive elements that are minimally required for the learning of symbols. These mechanisms …


Citations (4)


... Some of these modules have already been constructed while others are still under development. So far, we have designed modules for motivation and behavioural selection (Balkenius 1993), simple place recognition based on smell cues (see below), perceptual schema formation based on categorization and association (Balkenius 1994), reinforcement learning and self-organizing cognitive maps. Some recent progress concerning reactive problem solving that depends on the interaction between a reactive control system, motivation and reinforcement learning is reported below together with the overall architecture of the system. ...

Reference:

Natural Intelligence For Autonomous Agents
Some Properties Of Neural Representations

... The use of coding at several different resolutions has also been shown to speed up reinforcement learning by as much as an order of magnitude or more (Balkenius 1996; Sutton 1996). Features at low resolution are first associated with the rewarded responses, and the finer scales are subsequently used to fine-tune the behavior to specific input patterns. ...

Generalization in Instrumental Learning

... Another model is the nomadic π-calculus [140], similar in principle to actor theory, with the main difference that an agent is associated with a host site and can migrate between sites during its execution. (See [53] for the relationship between belief revision and non-monotonic logic.) It deals with the fundamental problem of keeping an agent's set of beliefs consistent as new observations are made. ...

Belief revision: A vade-mecum

... (5) Accepting that strategic simplifications may need to be accepted; e.g., methodologically, by starting out with the assumption of uniform populations to find robust overall patterns of effects, or theoretically, by allowing that it is not always necessary or optimal to go beyond a general, ecological concept of attention as a naturally unified process of selection for action (Balkenius & Hulth, 1999). ...

Attention as Selection-for-Action: A Scheme for Active Perception