Behavior-based neuro-fuzzy controller for mobile robot navigation

Sch. of Inf. Technol. & Eng., Univ. of Ottawa, Ont., Canada
IEEE Transactions on Instrumentation and Measurement (Impact Factor: 1.36). 09/2003; DOI: 10.1109/TIM.2003.816846
Source: IEEE Xplore

ABSTRACT This paper discusses a neuro-fuzzy controller for sensor-based mobile robot navigation in indoor environments. The control system consists of a hierarchy of robot behaviors.

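As an illustration of the abstract's idea of a behavior hierarchy, a minimal fuzzy-blending sketch is shown below. The behavior names, membership shapes, and gains are assumptions for illustration only, not the paper's actual rule base.

```python
# Minimal sketch of two prioritized behaviors blended by fuzzy activation.
# All shapes and gains are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def avoid_obstacle(front_dist):
    """Fuzzy 'obstacle near' -> strong steer away; returns (activation, steer)."""
    near = tri(front_dist, 0.0, 0.2, 0.6)   # distances in metres (assumed)
    return near, 0.8

def seek_goal(goal_angle):
    """Weakly active goal-seeking behavior steering toward the goal heading."""
    return 0.3, max(-1.0, min(1.0, goal_angle / 3.14))

def blend(front_dist, goal_angle):
    """Weighted (centroid-style) combination of the behavior outputs."""
    behaviors = [avoid_obstacle(front_dist), seek_goal(goal_angle)]
    num = sum(w * cmd for w, cmd in behaviors)
    den = sum(w for w, _ in behaviors)
    return num / den if den else 0.0
```

With no obstacle in range, only the goal-seeking behavior contributes; when an obstacle is close, its higher activation dominates the blended steering command.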
  • ABSTRACT: This paper presents the development of a vision-based neuro-fuzzy controller for a two-axis gimbal system mounted on a small Unmanned Aerial Vehicle (UAV). The controller uses vision-based object detection as input and generates pan and tilt position and velocity commands for the gimbal in order to keep the object of interest at the center of the image frame. A radial-basis-function-based neuro-fuzzy system and a learning algorithm are developed for the controller to address the dynamic and nonlinear characteristics of the gimbal movement. The controller uses two separate but identical radial basis function networks, one for pan and one for tilt motion of the gimbal. Each network is initialized with a fixed number of neurons that act as the rule base for the fuzzy inference system. The membership functions and rule strengths are then adjusted with feedback from the visual tracking system. The controller is trained off-line until a desired error level is achieved; training then continues on-line to allow the system to accommodate airspeed changes. The algorithm learns from the error computed from the detected position of the object in the image frame and generates position and velocity commands for the gimbal movement. Several tests, including lab tests and actual flight tests of the UAV, have been carried out to demonstrate the effectiveness of the controller. Test results show that the controller converges effectively and generates accurate position and velocity commands to keep the object at the center of the image frame.
    Journal of Intelligent and Robotic Systems 08/2013 (Impact Factor: 0.83)
  • ABSTRACT: Wireless information transmission has become common in buildings and houses. Using the information that wireless anchor nodes periodically broadcast, a mobile robot can estimate its own location in indoor environments. In this paper, we propose three block localization methods for improving mobile robot tracking and navigation performance. Unlike a conventional localization method, “the joint localization method” jointly estimates a block of successive robot locations, imposing the constraint that the distances among those locations are exactly known to the robot. In contrast, “the robot's location-assisted localization method” estimates the present location as if the previously estimated locations were those of anchor nodes, and “its iterative version” repeatedly updates all the locations, revising the previously estimated ones. We evaluate the performance of the three localization methods by computer simulation and an experiment.
    10th Workshop on Positioning, Navigation and Communication (WPNC 2013); 01/2013
  • ABSTRACT: This paper investigates the formation control of multi-robot systems, where the kinematic model of a differentially driven wheeled mobile robot is considered. Based on graph-theoretic concepts and locally distributed information, an adaptive neural-fuzzy formation controller is designed with on-line learning capability. The learning rules for the controller parameters are derived from a Lyapunov stability analysis. In addition to simulations, the proposed techniques are applied to an experimental multi-robot platform for performance validation. Simulation and experimental results show that the proposed adaptive neural-fuzzy protocol provides better formation responses than conventional consensus algorithms.
    International Journal of Fuzzy Systems 09/2013; 15(3):259-370 (Impact Factor: 1.51)
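The RBF-based neuro-fuzzy inference described in the first related publication above can be sketched for a single gimbal axis as follows. The centres, width, learning rate, and gradient rule below are assumptions for illustration; the cited controller's actual parameters and learning algorithm are not reproduced here.

```python
import math

# Illustrative single-axis RBF neuro-fuzzy controller: Gaussian rule
# activations over the pixel error, normalized and weighted by rule
# consequents. Parameters are assumed, not taken from the cited paper.

class RBFAxisController:
    def __init__(self, centres, width, weights):
        self.centres = centres      # rule centres over the pixel error
        self.width = width          # shared Gaussian width
        self.weights = weights      # rule consequents (velocity commands)

    def _activations(self, pixel_error):
        return [math.exp(-((pixel_error - c) / self.width) ** 2)
                for c in self.centres]

    def command(self, pixel_error):
        """Normalized weighted sum of Gaussian rule activations."""
        acts = self._activations(pixel_error)
        s = sum(acts)
        return sum(a * w for a, w in zip(acts, self.weights)) / s

    def train_step(self, pixel_error, target, lr=0.05):
        """Gradient step on the consequent weights from the tracking error."""
        acts = self._activations(pixel_error)
        s = sum(acts)
        err = target - self.command(pixel_error)
        for i, a in enumerate(acts):
            self.weights[i] += lr * err * a / s
```

With symmetric centres and antisymmetric weights, the commanded velocity is zero when the object is centred and grows with the pixel error, which matches the centring objective described in the abstract.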

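The joint localization idea in the second related publication above — estimating a block of successive positions while constraining their mutual distances — can be sketched as a penalized least-squares fit. The anchor layout, solver, and step sizes below are illustrative assumptions.

```python
import math

# Hedged sketch of joint block localization: estimate two successive robot
# positions from anchor-range measurements while penalizing deviation from
# the known travelled distance between them. Illustrative assumptions only.

def joint_localize(anchors, ranges1, ranges2, step, iters=5000, lr=0.05):
    p1, p2 = [0.0, 0.0], [step, 0.0]
    for _ in range(iters):
        g1, g2 = [0.0, 0.0], [0.0, 0.0]
        # range residuals to each anchor for both positions
        for (ax, ay), r1, r2 in zip(anchors, ranges1, ranges2):
            for p, r, g in ((p1, r1, g1), (p2, r2, g2)):
                d = math.hypot(p[0] - ax, p[1] - ay) or 1e-9
                e = d - r
                g[0] += e * (p[0] - ax) / d
                g[1] += e * (p[1] - ay) / d
        # constraint: |p2 - p1| should equal the known step length
        d12 = math.hypot(p2[0] - p1[0], p2[1] - p1[1]) or 1e-9
        e12 = d12 - step
        cx, cy = e12 * (p2[0] - p1[0]) / d12, e12 * (p2[1] - p1[1]) / d12
        g2[0] += cx; g2[1] += cy
        g1[0] -= cx; g1[1] -= cy
        p1 = [p1[0] - lr * g1[0], p1[1] - lr * g1[1]]
        p2 = [p2[0] - lr * g2[0], p2[1] - lr * g2[1]]
    return p1, p2
```

The inter-position constraint is what distinguishes the joint method from estimating each location independently: it lets consistent pairs of positions be preferred even when individual range measurements are ambiguous.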
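The conventional consensus baseline that the third related publication above compares against can be sketched in a few lines. The graph, gain, and one-dimensional state below are assumptions for illustration; the paper's adaptive neuro-fuzzy protocol itself is not reproduced.

```python
# Hedged sketch of a plain discrete-time consensus update on an assumed
# 3-robot line graph: each robot moves toward the average of its neighbours.

def consensus_step(positions, neighbors, gain=0.2):
    """One consensus iteration over scalar robot states."""
    new = []
    for i, p in enumerate(positions):
        dx = sum(positions[j] - p for j in neighbors[i])
        new.append(p + gain * dx)
    return new
```

Iterating this update drives all states to their common average; formation control layers desired inter-robot offsets on top of this agreement dynamic.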