# Integrated Control and Navigation for Omni-directional Mobile Robot Based on Trajectory Linearization

**ABSTRACT** In this paper, an integrated navigation and control scheme for an omni-directional mobile robot is developed. Both the control and navigation algorithms are based on trajectory linearization. The robot control is based on trajectory linearization control (TLC), in which an open-loop kinematic inversion and a closed-loop linear time-varying (LTV) stabilizer are combined to provide robust and accurate trajectory tracking performance. The LTV stabilizer is designed along the nominal trajectory provided by the kinematic inversion. The robot navigation is based on sensor fusion using a nonlinear Kalman filter, which is also designed along the nominal trajectory. The sensor fusion combines onboard sensor and vision system measurements to provide reliable and accurate location and orientation estimates. A gating technique is employed to remove inaccurate vision measurements. A real-time hardware-in-the-loop (HIL) simulation system was built to verify the proposed integrated control and navigation. Test results show that the proposed method improves the reliability and accuracy of robot location and orientation measurements, and thereby significantly improves the robot controller performance.


Yong Liu, Robert L. Williams II and J. Jim Zhu


I. INTRODUCTION AND PROBLEM STATEMENT

An omni-directional mobile robot is a holonomic robot [1][2]. The inherent agility of the omni-directional mobile robot makes it widely studied for dynamic-environment applications. The annual international Robocup competition, in which teams of autonomous robots compete in soccer-like games, is an example where the omni-directional mobile robot can be used. The Ohio University (OU) Robocup Team's entry, Robocat, is a cross-disciplinary research project intended for the Robocup small-size league competition. The current OU Robocup team members are Phase V omni-directional mobile robots, as shown in Fig. 1. The Phase V Robocat has three omni-directional wheels, arranged 120° apart. Each wheel is driven by a DC motor installed with an optical shaft encoder. An overhead camera above the field of play can sense the position and the orientation of the robots.

In Robocup games, precise trajectory tracking control for the robot is one of the key areas to improve a team's performance. A nonlinear controller based on trajectory linearization control (TLC) was developed for Robocat omni-directional robots [3][4]. TLC combines nonlinear dynamic inversion and linear time-varying eigenstructure assignment, and provides robust stability and performance along the trajectory without interpolation of controller gains [5]. TLC has been successfully applied to missile and reusable launch vehicle flight control systems [7-10]. Both hardware-in-the-loop (HIL) simulation and real-time competition performance demonstrate that the TLC controller significantly improves the robot maneuverability. With the same set of controller parameters, the robot is able to follow various three-degree-of-freedom (3DOF) trajectories accurately.

Manuscript received September 15th, 2007. Yong Liu (email: yongliu@bobcat.ent.ohiou.edu) and J. Jim Zhu (corresponding author, email: zhuj@ohio.edu) are with the School of Electrical Engineering and Computer Science, Ohio University, Athens, OH, 45701. Robert L. Williams II is with the Department of Mechanical Engineering, Ohio University, Athens, Ohio, 45701 (email: williar4@ohio.edu).

Figure 1 Phase V Robocat Robot

It was observed that accurate position and orientation measurement is the bottleneck to further improvement of the system performance. In the present Robocat system configuration, the robot location and orientation can be measured from either onboard sensors or a roof camera. The onboard sensors, including the motor shaft encoders, an accelerometer and a gyroscope, measure the robot body rate, i.e., the robot speed and rotation angular rate. The robot location and orientation are estimated by integrating the measured robot body rate. Such estimation has the advantage of a high sampling rate; however, it has unbounded cumulative errors introduced by body-rate measurement noise or wheel slippage [12]. Using the onboard sensors alone, the robot drifts away from the trajectory commands. The vision system using the roof camera can measure the robot location and orientation directly by image processing. However, the vision system is slow and unreliable due to the roof camera's slow capture rate, random image processing failures, and the delay of the wireless communication between the off-field vision system and the robot. Using the vision system alone, the delay and failures of image processing can destabilize the robot controller.

In this paper, an integrated control and navigation technique for robot position and orientation control is developed. The developed method augments the existing controller with a navigation component using sensor fusion. The sensor fusion combines vision system and onboard sensor estimation to provide an accurate and reliable location measurement. It is based on a nonlinear Kalman filter algorithm designed along the nominal trajectory provided by the controller's dynamic inversion. A gating technique is employed to remove incorrect vision data.

Proceedings of the 2007 American Control Conference, Marriott Marquis Hotel at Times Square, New York City, USA, July 11-13, 2007. ThA01.3. 1-4244-0989-6/07/$25.00 ©2007 IEEE.

Kalman filtering is a widely used method in sensor fusion [13]. The standard Kalman filter is developed for linear dynamic systems with Gaussian noise distributions. For nonlinear systems, techniques such as the linearized Kalman filter, the extended Kalman filter, the unscented Kalman filter and the particle filter have been developed and applied successfully in many practical applications [14][15]. The linearized Kalman filter and the extended Kalman filter (EKF) apply the standard Kalman filter by linearizing the original nonlinear system [14]. In a linearized Kalman filter, the linearization is performed along a nominal trajectory. The dynamic system state may diverge from the nominal trajectory over time, so the linearized Kalman filter is usually used in short-duration missions. In the EKF, the linearization is about the state estimate, so there is a danger that error propagation and filter divergence may occur. Both the linearized Kalman filter and the EKF are computationally efficient. However, terms neglected in the linearization may lead to suboptimal performance. To overcome this disadvantage, the unscented Kalman filter (UKF) [15] and particle filters (PFs) [16] were developed. The UKF and PFs require much larger computational power to implement, compared to the linearized Kalman filter and the EKF.

The proposed nonlinear filter is motivated by TLC observer design [11]. Its structure is similar to the linearized Kalman filter. In this structure, the nominal system trajectory generated by the TLC controller is used to linearize the nonlinear robot kinematics in the filter. The TLC controller drives the robot to follow the nominal trajectory, so the divergence problem of the linearized Kalman filter is alleviated. The proposed nonlinear Kalman filter is computationally efficient, and is suitable for real-time implementation on the robot onboard computer.

A real-time HIL simulation system was developed to verify the proposed method. Real-time test results show that the sensor fusion method provides reliable and accurate location and orientation measurements, and thus improves the robot control system performance.

In Section II, the controller structure is briefly reviewed, and the sensor fusion algorithm for navigation is described. In Section III, the real-time HIL system is described. In Section IV, the real-time test results are presented.

II. OMNI-DIRECTIONAL MOBILE ROBOT CONTROL AND NAVIGATION BASED ON TLC

In this section, the kinematics model and TLC controller design of the omni-directional robot are first reviewed; then the sensor fusion based on the nonlinear Kalman filter is described.

A. Robocat Kinematics and TLC Controller

In the Robocat mobile robot controller design, a two-loop controller architecture is employed, as shown in Figure 2. The outer-loop controller adjusts the robot position and orientation following the command trajectories. The inner loop is a body rate controller which follows the body rate commands from the outer-loop controller. Both the outer-loop and inner-loop controllers employ the TLC structure. Detailed design and test results of the omni-directional mobile robot TLC controller are summarized in [3][4]. In this section, only the kinematics and the outer-loop controller design are briefly reviewed.

[Figure 2 is a block diagram: the position/orientation command (x, y, ψ) passes through the Inverse Kinematics block and PI Controller I (outer loop) to produce the body rate command (u, v, r)_com; the Inverse Dynamics block and PI Controller II (inner loop) convert this to motor voltages E1, E2, E3 driving the Robot Dynamics and Robot Kinematics; the Sensor Fusion block combines Encoder and Vision System measurements to close both loops.]

Figure 2 Robot TLC Controller Structure

There are two coordinate frames used in the modeling: the body frame {B} and the world frame {W}. The body frame is fixed on the moving robot with its origin at the center of the chassis, as shown in Figure 3(a). The world frame is fixed on the field of play, as shown in Figure 3(b).

[Figure 3(a) shows the body frame with wheels 1-3, the wheel orientation angle δ, the body radius L and the motor speeds ω_m1, ω_m2, ω_m3; Figure 3(b) shows the world frame axes (x_w, y_w), the body axes (x_B, y_B) and the heading angle ψ between them.]

Figure 3 Coordinate Frames: (a) Body frame, (b) World frame

Symbols used in the robot dynamic model are listed in Table 1.

Table 1 Symbols

- Body frame: u, v — velocity components; r — rotation angular rate; ū, v̄, r̄ — nominal body rate; u_com, v_com, r_com — body rate command; ω_m1, ω_m2, ω_m3 — motor shaft speeds.
- World frame: x, y — robot location; ψ — robot orientation angle.
- Mechanical constants: R — wheel radius; L — radius of robot body; n — gear ratio; δ — wheel orientation angle.

The robot kinematics is given by

\[
\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\psi} \end{bmatrix}
=
\begin{bmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} u \\ v \\ r \end{bmatrix}
\tag{1}
\]

The relationship between the body rate \([u\;\,v\;\,r]^T\) and the motor shaft speeds \(\omega_{m1}, \omega_{m2}, \omega_{m3}\) is given by

\[
\begin{bmatrix} \omega_{m1} \\ \omega_{m2} \\ \omega_{m3} \end{bmatrix}
=
\frac{n}{R}
\begin{bmatrix} 0 & 1 & L \\ -\cos\delta & -\sin\delta & L \\ \cos\delta & -\sin\delta & L \end{bmatrix}
\begin{bmatrix} u \\ v \\ r \end{bmatrix}
\tag{2}
\]
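To make Eqs. (1) and (2) concrete, the two mappings can be sketched in Python. This is a minimal illustration: the wheel-matrix geometry (wheel 1 aligned with the body x-axis, δ = 30°) and the numeric values of R, L and n are placeholder assumptions, not the Robocat's actual constants.

```python
import numpy as np

def world_rates(u, v, r, psi):
    """Eq. (1): map body rates (u, v, r) to world-frame rates (xdot, ydot, psidot)."""
    xdot = np.cos(psi) * u - np.sin(psi) * v
    ydot = np.sin(psi) * u + np.cos(psi) * v
    return xdot, ydot, r

def wheel_speeds(u, v, r, R=0.02, L=0.1, n=1.0, delta=np.radians(30)):
    """Eq. (2): map body rates to the three motor shaft speeds.
    R, L, n and the wheel layout here are illustrative assumptions."""
    B = np.array([[0.0,             1.0,            L],
                  [-np.cos(delta), -np.sin(delta),  L],
                  [ np.cos(delta), -np.sin(delta),  L]])
    return (n / R) * B @ np.array([u, v, r])
```

As a sanity check, a pure rotation (u = v = 0) drives all three wheels at the same speed, and with ψ = 0 the body and world frames coincide.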


The outer-loop controller design is based on Eq. (1). First, from (1), the nominal body rate for a desired trajectory \([\bar{x}(t)\;\,\bar{y}(t)\;\,\bar{\psi}(t)]^T\) is

\[
\begin{bmatrix} \bar{u} \\ \bar{v} \\ \bar{r} \end{bmatrix}
=
\begin{bmatrix} \cos\bar{\psi} & \sin\bar{\psi} & 0 \\ -\sin\bar{\psi} & \cos\bar{\psi} & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} \dot{\bar{x}} \\ \dot{\bar{y}} \\ \dot{\bar{\psi}} \end{bmatrix}
\tag{3}
\]

where \([\dot{\bar{x}}\;\,\dot{\bar{y}}\;\,\dot{\bar{\psi}}]^T\) is calculated from the command \([x_{com}(t)\;\,y_{com}(t)\;\,\psi_{com}(t)]^T\) using a pseudo-differentiator. A second-order pseudo-differentiator is represented by the transfer function

\[
G_{diff}(s) = \frac{\omega_{n,diff}^2\, s}{s^2 + 2\zeta\omega_{n,diff}\, s + \omega_{n,diff}^2}
\tag{4}
\]

where \(\zeta\) is the damping ratio, and \(\omega_{n,diff}\) is the low-pass filter bandwidth that attenuates high-frequency gain, thereby making the pseudo-differentiator causal and realizable. In the controller realization, the nominal trajectory is replaced by the filtered position command taken from the pseudo-differentiator.

Define the robot position tracking error and the tracking error control by

\[
\begin{bmatrix} e_x \\ e_y \\ e_\psi \end{bmatrix}
=
\begin{bmatrix} x \\ y \\ \psi \end{bmatrix}
-
\begin{bmatrix} \bar{x} \\ \bar{y} \\ \bar{\psi} \end{bmatrix},
\qquad
\begin{bmatrix} \delta u \\ \delta v \\ \delta r \end{bmatrix}
=
\begin{bmatrix} u_{com} \\ v_{com} \\ r_{com} \end{bmatrix}
-
\begin{bmatrix} \bar{u} \\ \bar{v} \\ \bar{r} \end{bmatrix}
\]

Linearizing (1) along the nominal trajectories \([\bar{x}(t)\;\,\bar{y}(t)\;\,\bar{\psi}(t)]^T\) and \([\bar{u}(t)\;\,\bar{v}(t)\;\,\bar{r}(t)]^T\) yields the linearized error dynamics

\[
\begin{bmatrix} \dot{e}_x \\ \dot{e}_y \\ \dot{e}_\psi \end{bmatrix}
=
A(t)
\begin{bmatrix} e_x \\ e_y \\ e_\psi \end{bmatrix}
+
B(t)
\begin{bmatrix} \delta u \\ \delta v \\ \delta r \end{bmatrix}
\tag{5}
\]

where

\[
A(t) =
\begin{bmatrix}
0 & 0 & -\sin\bar{\psi}\,\bar{u} - \cos\bar{\psi}\,\bar{v} \\
0 & 0 & \cos\bar{\psi}\,\bar{u} - \sin\bar{\psi}\,\bar{v} \\
0 & 0 & 0
\end{bmatrix},
\qquad
B(t) =
\begin{bmatrix} \cos\bar{\psi} & -\sin\bar{\psi} & 0 \\ \sin\bar{\psi} & \cos\bar{\psi} & 0 \\ 0 & 0 & 1 \end{bmatrix}
\]

Secondly, a proportional-integral (PI) feedback control law is designed to stabilize the tracking error:

\[
\begin{bmatrix} \delta u \\ \delta v \\ \delta r \end{bmatrix}
=
-K_{P1}
\begin{bmatrix} e_x \\ e_y \\ e_\psi \end{bmatrix}
-
K_{I1}
\begin{bmatrix} \int e_x(t)\,dt \\ \int e_y(t)\,dt \\ \int e_\psi(t)\,dt \end{bmatrix}
\tag{6}
\]

where \(K_{P1}\) and \(K_{I1}\) are time-varying matrix gains. The body rate command to the inner loop is given by

\[
\begin{bmatrix} u_{com} \\ v_{com} \\ r_{com} \end{bmatrix}
=
\begin{bmatrix} \bar{u} \\ \bar{v} \\ \bar{r} \end{bmatrix}
+
\begin{bmatrix} \delta u \\ \delta v \\ \delta r \end{bmatrix}
\tag{7}
\]

B. Nonlinear Kalman Filter for Omni-directional Robot Sensor Fusion
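The outer-loop computation of Eqs. (3)-(7) can be sketched as follows. This is an illustrative sketch only: the forward-Euler discretization of the pseudo-differentiator and the bandwidth, damping ratio and gain values are assumptions, not the paper's tuned parameters.

```python
import numpy as np

def nominal_body_rate(xdot_bar, ydot_bar, psidot_bar, psi_bar):
    """Eq. (3): invert the kinematics to get the nominal body rate [u_bar, v_bar, r_bar]."""
    u_bar = np.cos(psi_bar) * xdot_bar + np.sin(psi_bar) * ydot_bar
    v_bar = -np.sin(psi_bar) * xdot_bar + np.cos(psi_bar) * ydot_bar
    return np.array([u_bar, v_bar, psidot_bar])

class PseudoDiff:
    """Eq. (4): G(s) = wn^2 s / (s^2 + 2 zeta wn s + wn^2), Euler-discretized.
    State x = [filtered position, filtered rate]; the rate is the derivative estimate."""
    def __init__(self, wn=20.0, zeta=0.7, dt=0.02):
        self.wn, self.zeta, self.dt = wn, zeta, dt
        self.x = np.zeros(2)

    def step(self, cmd):
        pos, rate = self.x
        acc = self.wn**2 * (cmd - pos) - 2.0 * self.zeta * self.wn * rate
        self.x = self.x + self.dt * np.array([rate, acc])
        return self.x[0], self.x[1]   # filtered command and its derivative

def error_control(e, e_int, Kp, Ki):
    """Eq. (6): PI tracking-error control; Eq. (7) adds the result to the nominal rate."""
    return -Kp @ e - Ki @ e_int
```

For a constant position command, the filtered command converges to the command and the derivative estimate converges to zero, which is the expected steady-state behavior of Eq. (4).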

(1) Location and Orientation Measurement Model

Approximating the robot kinematics (1) using the forward Euler method with time interval \(T\) gives

\[
\begin{bmatrix} x_k \\ y_k \\ \psi_k \end{bmatrix}
=
\begin{bmatrix} x_{k-1} \\ y_{k-1} \\ \psi_{k-1} \end{bmatrix}
+
\begin{bmatrix}
\cos(\psi_{k-1})T & -\sin(\psi_{k-1})T & 0 \\
\sin(\psi_{k-1})T & \cos(\psi_{k-1})T & 0 \\
0 & 0 & T
\end{bmatrix}
\begin{bmatrix} u_{k-1} \\ v_{k-1} \\ r_{k-1} \end{bmatrix}
\]

In the omni-directional mobile robot, the body rate can be measured from the onboard sensors, and the position can be observed from the vision system. The body rate measurement \([\hat{u}_k\;\,\hat{v}_k\;\,\hat{r}_k]^T\) at time step \(k\) is defined as

\[
\begin{bmatrix} \hat{u}_k \\ \hat{v}_k \\ \hat{r}_k \end{bmatrix}
=
\begin{bmatrix} u_k \\ v_k \\ r_k \end{bmatrix}
+
\begin{bmatrix} w_{1,k} \\ w_{2,k} \\ w_{3,k} \end{bmatrix}
\tag{8}
\]

where \(w_{1,k}, w_{2,k}, w_{3,k}\) are the body rate measurement noises. The vision system measurement \([z_{1,k}\;\,z_{2,k}\;\,z_{3,k}]^T\) at time step \(k\) can be defined as

\[
\begin{bmatrix} z_{1,k} \\ z_{2,k} \\ z_{3,k} \end{bmatrix}
=
\begin{bmatrix} x_k \\ y_k \\ \psi_k \end{bmatrix}
+
\begin{bmatrix} d_{1,k} \\ d_{2,k} \\ d_{3,k} \end{bmatrix}
\tag{9}
\]

where \(d_{1,k}, d_{2,k}, d_{3,k}\) is the vision system noise. Both \([w_{1,k}\;\,w_{2,k}\;\,w_{3,k}]^T\) and \([d_{1,k}\;\,d_{2,k}\;\,d_{3,k}]^T\) are assumed to be white with normal distributions, such that

\[
[w_{1,k}\;\,w_{2,k}\;\,w_{3,k}]^T \sim N(0, Q), \qquad [d_{1,k}\;\,d_{2,k}\;\,d_{3,k}]^T \sim N(0, R)
\]

where \(Q \in \mathbb{R}^{3\times 3}\) is the body rate measurement noise covariance, and \(R \in \mathbb{R}^{3\times 3}\) is the vision system observation noise covariance.
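The forward-Euler approximation of the kinematics, which reappears as the a-priori estimate in Eq. (10), is plain dead reckoning. A minimal sketch (T = 0.02 s is the sampling interval reported later in the tests):

```python
import numpy as np

def propagate(state, body_rate_meas, T=0.02):
    """Forward-Euler dead reckoning from body-rate measurements.
    state = [x, y, psi]; body_rate_meas = [u_hat, v_hat, r_hat]."""
    x, y, psi = state
    u, v, r = body_rate_meas
    return np.array([x + T * (np.cos(psi) * u - np.sin(psi) * v),
                     y + T * (np.sin(psi) * u + np.cos(psi) * v),
                     psi + T * r])
```

Iterating this with noisy body-rate measurements is exactly the encoder-only estimate whose error accumulates without bound, which is what the sensor fusion corrects.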

(2) Nonlinear Kalman Filter Using Trajectory Linearization

Define \([\hat{x}^-_k\;\,\hat{y}^-_k\;\,\hat{\psi}^-_k]^T\) as the a-priori location estimate from the body rate measurement at time step \(k\). It can be calculated as

\[
\begin{bmatrix} \hat{x}^-_k \\ \hat{y}^-_k \\ \hat{\psi}^-_k \end{bmatrix}
=
\begin{bmatrix} \hat{x}_{k-1} \\ \hat{y}_{k-1} \\ \hat{\psi}_{k-1} \end{bmatrix}
+
\begin{bmatrix}
\cos(\hat{\psi}_{k-1})T & -\sin(\hat{\psi}_{k-1})T & 0 \\
\sin(\hat{\psi}_{k-1})T & \cos(\hat{\psi}_{k-1})T & 0 \\
0 & 0 & T
\end{bmatrix}
\begin{bmatrix} \hat{u}_{k-1} \\ \hat{v}_{k-1} \\ \hat{r}_{k-1} \end{bmatrix}
\tag{10}
\]

Then the predicted vision system observation can be defined as

\[
[\bar{z}_{1,k}\;\,\bar{z}_{2,k}\;\,\bar{z}_{3,k}]^T = [\hat{x}^-_k\;\,\hat{y}^-_k\;\,\hat{\psi}^-_k]^T
\tag{11}
\]

Define the prediction error and the innovation error as

\[
\begin{bmatrix} e^-_{x,k} \\ e^-_{y,k} \\ e^-_{\psi,k} \end{bmatrix}
=
\begin{bmatrix} x_k \\ y_k \\ \psi_k \end{bmatrix}
-
\begin{bmatrix} \hat{x}^-_k \\ \hat{y}^-_k \\ \hat{\psi}^-_k \end{bmatrix},
\qquad
\begin{bmatrix} e_{z_1,k} \\ e_{z_2,k} \\ e_{z_3,k} \end{bmatrix}
=
\begin{bmatrix} z_{1,k} \\ z_{2,k} \\ z_{3,k} \end{bmatrix}
-
\begin{bmatrix} \bar{z}_{1,k} \\ \bar{z}_{2,k} \\ \bar{z}_{3,k} \end{bmatrix}
\tag{12}
\]

By linearizing equations (10) and (11) along the real position \([x_k\;\,y_k\;\,\psi_k]^T\), the prediction error dynamics can be approximated as

\[
\begin{bmatrix} e^-_{x,k} \\ e^-_{y,k} \\ e^-_{\psi,k} \end{bmatrix}
=
\begin{bmatrix}
1 & 0 & (-\sin(\psi_{k-1})u_{k-1} - \cos(\psi_{k-1})v_{k-1})T \\
0 & 1 & (\cos(\psi_{k-1})u_{k-1} - \sin(\psi_{k-1})v_{k-1})T \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} e_{x,k-1} \\ e_{y,k-1} \\ e_{\psi,k-1} \end{bmatrix}
-
\begin{bmatrix}
\cos(\psi_{k-1})T & -\sin(\psi_{k-1})T & 0 \\
\sin(\psi_{k-1})T & \cos(\psi_{k-1})T & 0 \\
0 & 0 & T
\end{bmatrix}
\begin{bmatrix} w_{1,k-1} \\ w_{2,k-1} \\ w_{3,k-1} \end{bmatrix}
\tag{13}
\]

and the innovation error as

\[
\begin{bmatrix} e_{z_1,k} \\ e_{z_2,k} \\ e_{z_3,k} \end{bmatrix}
=
\begin{bmatrix} e^-_{x,k} \\ e^-_{y,k} \\ e^-_{\psi,k} \end{bmatrix}
+
\begin{bmatrix} d_{1,k} \\ d_{2,k} \\ d_{3,k} \end{bmatrix}
\tag{14}
\]

In Equation (13), \([\hat{x}_{k-1}\;\,\hat{y}_{k-1}\;\,\hat{\psi}_{k-1}]^T\) is an estimate of \([x_{k-1}\;\,y_{k-1}\;\,\psi_{k-1}]^T\); the real position \([x_{k-1}\;\,y_{k-1}\;\,\psi_{k-1}]^T\) and the real body rate \([u_{k-1}\;\,v_{k-1}\;\,r_{k-1}]^T\) are unknown. In the TLC controller, the robot trajectory is driven close to the nominal trajectory. Thus the nominal position \([\bar{x}_{k-1}\;\,\bar{y}_{k-1}\;\,\bar{\psi}_{k-1}]^T\) and the nominal body rate \([\bar{u}_{k-1}\;\,\bar{v}_{k-1}\;\,\bar{r}_{k-1}]^T\) can be used to approximate the actual state in (13). Equations (13) and (14) can then be rewritten as

\[
\begin{bmatrix} e^-_{x,k} \\ e^-_{y,k} \\ e^-_{\psi,k} \end{bmatrix}
=
A_k
\begin{bmatrix} e_{x,k-1} \\ e_{y,k-1} \\ e_{\psi,k-1} \end{bmatrix}
-
W_k
\begin{bmatrix} w_{1,k-1} \\ w_{2,k-1} \\ w_{3,k-1} \end{bmatrix}
\tag{15}
\]

where

\[
A_k =
\begin{bmatrix}
1 & 0 & (-\sin(\bar{\psi}_{k-1})\bar{u}_{k-1} - \cos(\bar{\psi}_{k-1})\bar{v}_{k-1})T \\
0 & 1 & (\cos(\bar{\psi}_{k-1})\bar{u}_{k-1} - \sin(\bar{\psi}_{k-1})\bar{v}_{k-1})T \\
0 & 0 & 1
\end{bmatrix},
\qquad
W_k =
\begin{bmatrix}
\cos(\bar{\psi}_{k-1})T & -\sin(\bar{\psi}_{k-1})T & 0 \\
\sin(\bar{\psi}_{k-1})T & \cos(\bar{\psi}_{k-1})T & 0 \\
0 & 0 & T
\end{bmatrix}
\]
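The Jacobians A_k and W_k evaluated on the nominal trajectory can be computed directly from the expressions above. A sketch (T is the sampling interval; the default value is illustrative):

```python
import numpy as np

def error_jacobians(psi_bar, u_bar, v_bar, T=0.02):
    """A_k and W_k of Eq. (15), evaluated on the nominal trajectory at step k-1."""
    c, s = np.cos(psi_bar), np.sin(psi_bar)
    A = np.array([[1.0, 0.0, (-s * u_bar - c * v_bar) * T],
                  [0.0, 1.0, ( c * u_bar - s * v_bar) * T],
                  [0.0, 0.0, 1.0]])
    W = np.array([[c * T, -s * T, 0.0],
                  [s * T,  c * T, 0.0],
                  [0.0,    0.0,   T ]])
    return A, W
```

Because the matrices are evaluated on the nominal trajectory rather than on the latest state estimate, they can even be computed ahead of time, which is part of what keeps the filter cheap.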

Equations (15) and (14) are the linearized error dynamics along the nominal trajectory. Assuming \(A_k\) and \(W_k\) are slowly varying, a nonlinear Kalman filter can be constructed based on (15) and (14). It is similar to the linearized Kalman filter, except that the TLC controller ensures that the actual robot trajectory stays close to the nominal trajectory. It is different from the extended Kalman filter (EKF), in which the estimate at the preceding time step is used to linearize the error dynamics. The nonlinear Kalman filter design is described in Equations (17) to (22) below.

(3) Gating and Integration with Vision System

In practice, the vision system may lose frames or give wrong measurements due to image processing failures. Gating is a technique for eliminating the most unlikely measurements [14]. There are several commonly used gating algorithms; the rectangular gate is the simplest one. Rectangular gating is defined as follows:

\[
|e_{z_i,k}| \le 3\sqrt{R(i,i) + P^-_k(i,i)}, \qquad i = 1, 2, 3
\tag{16}
\]

where \(R(i,i)\) is the \(i\)-th diagonal element of the vision system noise covariance \(R\), and \(P^-_k(i,i)\) is the corresponding diagonal element of the prediction covariance \(P^-_k\). If all innovation residues satisfy the above gating condition, the vision system data is considered valid and will be used in the filter correction. Otherwise, the vision system data is determined to be invalid.

If there is no valid vision system data, such as when video frames are lost or vision data is rejected by gating, the correction step of the Kalman filter is not executed. If the vision system data is delayed, a faster-than-real-time Kalman filter can be executed using recorded Kalman filter history data once the vision system data is received. The overall nonlinear Kalman filter based on trajectory linearization for mobile robot location sensor fusion is summarized below.

Step 1: Read the onboard sensor body rate measurement and estimate the robot position and orientation:

\[
\begin{bmatrix} \hat{x}^-_k \\ \hat{y}^-_k \\ \hat{\psi}^-_k \end{bmatrix}
=
\begin{bmatrix} \hat{x}_{k-1} \\ \hat{y}_{k-1} \\ \hat{\psi}_{k-1} \end{bmatrix}
+
\begin{bmatrix}
\cos(\hat{\psi}_{k-1})T & -\sin(\hat{\psi}_{k-1})T & 0 \\
\sin(\hat{\psi}_{k-1})T & \cos(\hat{\psi}_{k-1})T & 0 \\
0 & 0 & T
\end{bmatrix}
\begin{bmatrix} \hat{u}_{k-1} \\ \hat{v}_{k-1} \\ \hat{r}_{k-1} \end{bmatrix}
\tag{17}
\]

\[
P^-_k = A_k P_{k-1} A_k^T + W_k Q W_k^T, \qquad
[\bar{z}_{1,k}\;\,\bar{z}_{2,k}\;\,\bar{z}_{3,k}]^T = [\hat{x}^-_k\;\,\hat{y}^-_k\;\,\hat{\psi}^-_k]^T
\tag{18}
\]

Step 2: Read the vision system measurement. If the vision system data is not available, go to Step 4. If the vision system data is available, calculate the innovation residue by (12). If all innovation residues satisfy the gating criterion (16), then the vision system data is valid: go to Step 3; otherwise, go to Step 4.

Step 3: Correction with valid vision data:

\[
K_k = P^-_k \left( P^-_k + R_k \right)^{-1}
\tag{19}
\]

where \(R_k\) is the output measurement noise covariance. The a-posteriori estimate is

\[
\begin{bmatrix} \hat{x}_k \\ \hat{y}_k \\ \hat{\psi}_k \end{bmatrix}
=
\begin{bmatrix} \hat{x}^-_k \\ \hat{y}^-_k \\ \hat{\psi}^-_k \end{bmatrix}
+
K_k
\left(
\begin{bmatrix} z_{1,k} \\ z_{2,k} \\ z_{3,k} \end{bmatrix}
-
\begin{bmatrix} \bar{z}_{1,k} \\ \bar{z}_{2,k} \\ \bar{z}_{3,k} \end{bmatrix}
\right)
\tag{20}
\]

\[
P_k = (I - K_k) P^-_k
\tag{21}
\]

Go to Step 1.

Step 4: Use the prediction without correction:

\[
[\hat{x}_k\;\,\hat{y}_k\;\,\hat{\psi}_k]^T = [\hat{x}^-_k\;\,\hat{y}^-_k\;\,\hat{\psi}^-_k]^T, \qquad P_k = P^-_k
\tag{22}
\]

Go to Step 1.
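Steps 1-4, including the 3-sigma rectangular gate, can be collected into a single filter cycle. This is a minimal sketch assuming an identity observation matrix, using the Jacobian expressions evaluated on the nominal trajectory; the faster-than-real-time replay for delayed vision data is omitted, and all covariance values are placeholders.

```python
import numpy as np

def kf_step(xhat, P, body_rate, z, Q, R, psi_bar, u_bar, v_bar, T=0.02):
    """One cycle of the trajectory-linearized Kalman filter (Steps 1-4).
    z is the vision measurement [x, y, psi], or None when no frame is available."""
    c, s = np.cos(xhat[2]), np.sin(xhat[2])
    # Step 1 (Eq. 17): a-priori estimate by body-rate dead reckoning.
    x_pred = xhat + T * np.array([c * body_rate[0] - s * body_rate[1],
                                  s * body_rate[0] + c * body_rate[1],
                                  body_rate[2]])
    # Eq. (18): propagate covariance with Jacobians on the *nominal* trajectory.
    cb, sb = np.cos(psi_bar), np.sin(psi_bar)
    A = np.array([[1.0, 0.0, (-sb * u_bar - cb * v_bar) * T],
                  [0.0, 1.0, ( cb * u_bar - sb * v_bar) * T],
                  [0.0, 0.0, 1.0]])
    W = np.array([[cb * T, -sb * T, 0.0], [sb * T, cb * T, 0.0], [0.0, 0.0, T]])
    P_pred = A @ P @ A.T + W @ Q @ W.T
    # Steps 2-3: rectangular gate (Eq. 16), then correction (Eqs. 19-21).
    if z is not None:
        innov = z - x_pred                   # Eq. (12); here z_bar = x_pred (Eq. 11)
        gate = 3.0 * np.sqrt(np.diag(R) + np.diag(P_pred))
        if np.all(np.abs(innov) <= gate):
            K = P_pred @ np.linalg.inv(P_pred + R)              # Eq. (19)
            return x_pred + K @ innov, (np.eye(3) - K) @ P_pred  # Eqs. (20)-(21)
    # Step 4 (Eq. 22): no valid (or no) vision data, keep the prediction.
    return x_pred, P_pred
```

A wildly wrong vision sample fails the gate and leaves the dead-reckoned prediction untouched, while a plausible sample pulls the estimate toward the camera measurement.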

III. HARDWARE-IN-THE-LOOP (HIL) SIMULATION

The integrated control and navigation for the omni-directional mobile robot was verified and tested in a real-time HIL simulation. The HIL simulation system is shown in Figure 4.

[Figure 4 shows the HIL setup: the robot, connected by a power cable to a computer running Wincon with a MultiQ PCI I/O board, and observed by an overhead video camera.]

Figure 4 HIL Simulation System

In the HIL simulation, a Phase V robot was used. Quanser's Wincon©, Mathworks' Simulink© and Real-Time Workshop© executed on a PC were used to develop a fast prototype of the real-time TLC controller and the sensor fusion system. Motors on the model mobile robot were driven by Quanser's MultiQ PCI I/O board and power amplifiers. Motor shaft encoder signals were fed to the MultiQ PCI I/O board to measure motor shaft speeds. A Cognachrome 2000 vision system with a YC-100 CCD camera was used to measure the robot location. The Cognachrome 2000© system identifies the robot position and orientation, and transfers these data to the PC via a serial port. The vision system data was then calibrated to the world frame using second-order polynomials.

The HIL simulation system reproduces the most important features of the real Robocat system that hinder the robot controller performance. The robot position and orientation estimate from the motor shaft speeds has cumulative errors when slippage occurs. The power cable connecting the robot generates large drag forces and causes robot wheel slip. It occasionally blocks the camera, which results in lost image frames. The asynchronous RS-232 communication has randomly mismatched data frames, which result in incorrect vision data.

IV. HIL SIMULATION RESULTS

In the HIL simulation, two groups of tests were conducted and compared: one used only the onboard encoder data, whereas the other was augmented with the sensor fusion method. In the tests, the sampling time interval was \(T = 0.02\) s. Over 20 real-time tests were conducted. In all these tests, sensor fusion improved the robot tracking performance. In this section, several test results are presented. To emphasize the significant improvement from sensor fusion, a pen attached to the robot was used to plot the actual trajectory.

A. Square Trajectory with Rotation

In this test, the robot was commanded to follow a square trajectory at a speed of 0.2 m/s on each side of the square, while rotating 45° at a fixed rotation rate during each side motion. The command trajectory is of three degrees of freedom (3DOF). There is significant wheel slippage at each corner due to robot acceleration. Real-time test results are illustrated in Figure 5. Figure 5(a) shows the controller tracking performance: the controller can accurately follow the command when it is given the correct measurement. Figure 5(b) shows the sensor fusion result. It can be seen that the encoder estimation diverged slowly, while the vision system had much corrupted data. The sensor fusion used the vision system to calibrate the encoder estimation and discarded the invalid vision system data. Figure 5(c) shows the gating decision; in Figure 5(c), 1 means the vision data was accepted and 0 means it was rejected. Figure 5(d) shows the robot trajectory in the x-y plane drawn from recorded data. It can be seen from Figure 5(d) that at each corner the encoder estimation has a large orientation error. The robot cable drag also introduced encoder estimation error, such as in the regions near point A and point B. The encoder estimation error accumulates over time. The sensor fusion method corrects the encoder error when the vision system data is available.

B. Square Trajectory with Fixed Orientation

In this test, the command is the same as in test A except that the robot orientation is fixed. The actual robot trajectory was plotted on the field of play by an attached pen. The test results are shown in Figure 6. Figures 6(b) and (c) are photos of actual robot trajectories drawn by the attached pen.

An experiment using the vision system alone was also conducted. The disturbance induced by vision system failures destabilized the robot very quickly.

C. Circular Trajectory and Rose Curves

In these tests, the first command is to accelerate from the initial position and draw a circle of 0.25 m radius at an angular rate of 1 rad/s. The second command is to draw a rose curve, which is a petalled flower curve generated by \(r = a\sin(n\theta)\), where \(r\) and \(\theta\) are the radius and the rotation angle in polar coordinates, and \(n\) is an integer determining the number of petals. The robot orientation was fixed. Test results are shown in Figs. 7 and 8. As a result of the drift, the trajectory plotted when using the encoder only is much lighter than the one using sensor fusion.
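The rose-curve command r = a sin(nθ) can be generated as world-frame waypoints. A sketch; the sample count is an arbitrary choice, and a = 0.25 m mirrors the circle radius used in the circular-trajectory test:

```python
import numpy as np

def rose_curve(a=0.25, n=3, num=200):
    """Rose curve r = a*sin(n*theta) sampled in polar coordinates,
    returned as (x, y) world-frame waypoints."""
    theta = np.linspace(0.0, 2.0 * np.pi, num)
    r = a * np.sin(n * theta)
    return r * np.cos(theta), r * np.sin(theta)
```

An odd n gives n petals and an even n gives 2n petals, so n = 3 and n = 4 produce the three- and eight-petal patterns of the kind shown in Figure 8.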

V. CONCLUSION AND FUTURE WORK

In this paper, an integrated control and navigation method for an omni-directional mobile robot is proposed, and real-time HIL test results are presented. The method augments the robot controller with a navigation component using sensor fusion, which combines onboard sensor and vision system measurements. It employs a nonlinear Kalman filter technique based on trajectory linearization. A gating technique is employed to remove incorrect vision data. Real-time HIL simulation test results show that the proposed method is able to improve the reliability and accuracy of robot location and orientation measurements, and thus it significantly improves the robot controller performance.

The proposed sensor fusion method is an experimental example of the nonlinear Kalman filter based on trajectory linearization. In the future, a theoretical study of the general case of the nonlinear Kalman filter based on trajectory linearization will be explored. The integrated control and navigation method will be implemented and tested in the real Robocat system.

REFERENCES

[1] F. G. Pin and S. M. Killough, "A new family of omnidirectional and holonomic wheeled platforms for mobile robots," IEEE Trans. Robotics Automat., 10(2): 480-489, 1994.
[2] M.-J. Jung, H.-S. Kim, S. Kim, and J.-H. Kim, "Omni-Directional Mobile Base OK-II," Proceedings of the IEEE International Conference on Robotics and Automation, 4: 3449-3454, 2000.
[3] Y. Liu, X. Wu, J. J. Zhu, and J. Lew, "Omni-Directional Mobile Robot Controller Design by Trajectory Linearization," Proceedings of the American Control Conference, 4: 3423-3428, 2003.
[4] Y. Liu, J. J. Zhu, R. L. Williams II, and J. Wu, "Omni-Directional Mobile Robot Controller Based on Trajectory Linearization," submitted to Robotics and Autonomous Systems.
[5] M. C. Mickle, R. Huang, and J. J. Zhu, "Unstable, Nonminimum Phase, Nonlinear Tracking by Trajectory Linearization Control," Proceedings of the 2004 IEEE Conference on Control Applications, Taipei, Taiwan, pp. 812-818, Sept. 2004.
[6] J. J. Zhu, B. D. Banker, and C. E. Hall, "X-33 Ascent Flight Controller Design by Trajectory Linearization - A Singular Perturbational Approach," 2000 AIAA Guidance, Navigation, and Control Conference, Denver, CO, Aug. 2000.
[7] M. C. Mickle and J. J. Zhu, "Bank-to-Turn Roll-Yaw-Pitch Autopilot Design Using Dynamic Nonlinear Inversion and PD-eigenvalue Assignment," Proceedings of the 2000 American Control Conference, Chicago, IL, June 2000.
[8] J. J. Zhu, A. S. Hodel, K. Funston, and C. E. Hall, "X-33 Entry Flight Controller Design by Trajectory Linearization - A Singular Perturbational Approach," AAS-01-012, American Astronautical Society Guidance and Control Conference, Breckenridge, Colorado, Jan. 2001.
[9] X. Wu, Y. Liu, and J. J. Zhu, "Design and Real-Time Testing of a Trajectory Linearization Flight Controller for the 'Quanser UFO'," Proceedings of the American Control Conference, 4: 3931-3938, 2003.
[10] J. J. Zhu and A. B. Huizenga, "A Type Two Trajectory Linearization Controller for a Reusable Launch Vehicle - A Singular Perturbation Approach," Collection of Technical Papers - AIAA Atmospheric Flight Mechanics Conference, 2: 1121-1137, 2004.
[11] R. Huang, M. C. Mickle, and J. J. Zhu, "Nonlinear Time-varying Observer Design Using Trajectory Linearization," Proceedings of the American Control Conference, 6: 4772-4778, 2003.
[12] R. L. Williams II, B. E. Carter, P. Gallina, and G. Rosati, "Dynamic Model with Slip for Wheeled Omni-Directional Robots," IEEE Transactions on Robotics and Automation, 18(3): 285-293, 2002.
[13] D. L. Hall and S. A. H. McMullen, Mathematical Techniques in Multisensor Data Fusion, 2nd ed., Artech House, Boston, 2004.
[14] R. G. Brown and P. Y. C. Hwang, Introduction to Random Signals and Applied Kalman Filtering: with MATLAB Exercises and Solutions, 3rd ed., Wiley, New York.
[15] J. Richard and N. D. Singpurwalla, "Understanding the Kalman Filter," The American Statistician, 37(2), May 1983.
[16] S. Arulampalam, S. Maskell, N. J. Gordon, and T. Clapp, "A Tutorial on Particle Filters for On-line Non-linear/Non-Gaussian Bayesian Tracking," IEEE Transactions on Signal Processing, 50(2): 174-188.

[Figures 5-8 are test-result plots: time histories of x (m), y (m) and ψ (rad) comparing command, Kalman, vision and encoder signals, the gating decision over time, and x-y plane trajectories.]

Figure 5 Square Trajectory with Rotation: (a) Controller Performance Using Sensor Fusion; (b) Kalman Filter Performance; (c) Gating Decision; (d) Robot Trajectory Data (command, Kalman and encoder traces, with points A and B marked)

Figure 6 Square Trajectory with Fixed Orientation: (a) Tracking Performance; (b) Trajectory Using Sensor Fusion; (c) Trajectory Using Encoder; (d) Kalman Filter Performance

Figure 7 Circular Trajectory: (a) Using Sensor Fusion; (b) Using Encoder Alone

Figure 8 Actual Robot Rose Curve Trajectories: (a) n = 3 (Sensor Fusion); (b) n = 3 (Encoder); (c) n = 4 (Sensor Fusion); (d) n = 4 (Encoder)