Artif Life Robotics (2006) 11:149-156
DOI 10.1007/s10015-005-0371-7 © ISAROB 2006

Ibrahim Zunaidi · Norihiko Kato · Yoshihiko Nomura · Hirokazu Matsui

Path planning based on geographical features information for an autonomous mobile robot

Received: June 27, 2005 / Accepted: October 11, 2005
Abstract We propose a path-planning algorithm for an autonomous mobile robot using geographical information, under the condition that the robot moves in an unknown environment. Images input by a camera at every sampling time are analyzed, geographical elements are recognized, and the geographical information is embedded in an environmental map. Then the path is updated by integrating the known information and the prediction on the unknown environment. We used a sensor fusion method to improve the mobile robot's dead-reckoning accuracy. The experimental results confirm the effectiveness of the proposed algorithm, as the robot reached the goal successfully using the geographical information.

Key words Path planning · Autonomous mobile robot · Geographical information · Dead reckoning
1 Introduction

In recent years, the working area of autonomous mobile robots has not been limited to indoors only, but has extended to the outdoors. Navigation systems are necessary when robots move autonomously, and these have been studied for many years. Studies began from basic research such as COM, Class, and Bug, and in recent years some promising techniques have been proposed, for example, autonomous action planning for a mobile robot that considers errors from internal and external sensors together with the uncertainty of a map,1-3 an autonomous guidance system that avoids wall collisions by measuring the distances to the wall based on the detection of edges,4 and a human-evading action planning system using a GA.5 In these studies, the geographical feature elements that greatly influence robot movements are not considered, and much of this research has only targeted obstacle avoidance. If the geographical environment consisted of a single flat element such as a floor or asphalt, and if a robot were large, it would not be crucial to consider the geographical feature elements. However, if the geographical environment varies, and if the robot is small, we should take the geographical feature elements into consideration.

For example, if a mobile robot faces different elements of a geographical environment that it must traverse, the robot should consider the easiest path in order to avoid other problems during traveling. Without our method of evaluating difficulties, a robot might choose to travel on a sandy area, thus risking getting itself stuck in the sand, and even being damaged by it. Therefore, by combining the proposed method with earlier obstacle avoidance methods, we can make the robot choose a path and at the same time avoid obstacles while finding the easiest, fastest, and safest way to reach the goal. We believe the proposed method will be more effective, and the mobile robot will be able to move autonomously in any outdoor conditions. Other advantages of our proposed method that considers feature elements are reducing the accumulated errors of position and orientation, thus decreasing the probability of damage to the mobile robot, and the saving of energy.

In this research, we propose a path-planning algorithm using geographical feature information for an autonomous mobile robot to move in unknown environments.

I. Zunaidi · N. Kato (✉) · Y. Nomura · H. Matsui
Mechanical Engineering Department, Mie University, Tsu, Mie 514-8507, Japan
e-mail: hori@robot.mach.mie-u.ac.jp

2 Image features to recognize outdoor geographical elements
2.1 Color theory
Our environment recognition research6 and character recognition research7 have already been reported. We use these methods in the present system to continue the previous research. Using image processing, where input images are recognized as various geographical elements on the road surface, in this research the geographical elements were analyzed and the information was integrated to distinguish and recognize the elements of the geographical environment.
Based on that research, we applied the results to our
autonomous mobile robot path-planning algorithm. We
employed a six-sided pyramidal color model to recognize
outdoor geographical elements. This was based on the typi-
cal color-expressing system presented by Munsell: hue (H),
intensity (I), and saturation (S) are converted from red (R),
green (G), and blue (B) color values obtained by a color
video camera.
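As an illustration, the hexcone (six-sided pyramid) conversion can be sketched with Python's standard `colorsys` module; taking the hexcone value V as the intensity I is an assumption of this sketch, not a detail given in the paper:

```python
import colorsys  # stdlib; implements the hexcone (six-sided pyramid) colour model

def rgb_to_his(r, g, b):
    """Convert 8-bit RGB to (H, I, S): H in degrees [0, 360), I and S in [0, 1].

    Intensity is taken as the hexcone value V -- an assumption for this sketch.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, v, s

# Example: a grass-like green pixel
h, i, s = rgb_to_his(60, 180, 40)
```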
2.2 Image features of outdoor geographic elements
2.2.1 Categorization of geographical feature elements
Although many geographical elements exist in the outside world, here we only considered four elements, i.e., asphalt or concrete (AC), grass (GR), gravel (GV), and sand or soil (SS), based on the difficulty that a robot encounters when moving.
2.2.2 Modeling geographical elements
When developing pattern recognition systems based on
data from visual information sources, we need numerical
measures representing certainty in each model. This section
describes the way of finding the feature quantity f_n(a(i)) of element a(i) from the visual information n. First, the average μ and standard deviation σ (i.e., I_μ, H_μ, S_μ, I_σ, H_σ, S_σ) for element a(i) are calculated from a population with the statistical quantity x, which was obtained from visual information n. That is,

μ = (1/|a(i)|) Σ_{x∈a(i)} x   (1)

σ² = (1/|a(i)|) Σ_{x∈a(i)} (x − μ)²
To recognize each element in advance, we need to model the elements, which are the averages and standard deviations of H, I, and S for each sample image: H_μ, I_μ, and S_μ, and H_σ, I_σ, and S_σ. Next, the averages and standard deviations of H_μ, I_μ, and S_μ, and of H_σ, I_σ, and S_σ are calculated for all the sample images classified into an identical category, and are expressed as μ_Hμ, μ_Iμ, μ_Sμ, μ_Hσ, μ_Iσ, μ_Sσ, and σ_Hμ, σ_Iμ, σ_Sμ, σ_Hσ, σ_Iσ, σ_Sσ. Fitting the normal distribution function in Eq. 2 to these averages and standard deviations, we define the fitted normal distribution function as the model equation for each element:

p(x) = (1/(√(2π)σ)) exp(−(x − μ)²/(2σ²))   (2)
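The modeling step can be sketched as follows; the sample values, and the use of the plain sample mean and standard deviation, are illustrative assumptions, since the paper does not state the exact estimator:

```python
import math

def fit_model(samples):
    """Fit the normal model N(mu, sigma) to a list of per-image statistics
    (e.g. the H_mu values of all sample images of one element)."""
    n = len(samples)
    mu = sum(samples) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in samples) / n)
    return mu, sigma

def model_pdf(x, mu, sigma):
    """Eq. 2: the fitted normal distribution used as the model equation."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# Hypothetical H_mu statistics from five grass sample images
mu, sigma = fit_model([95.0, 100.0, 105.0, 110.0, 90.0])
peak = model_pdf(mu, mu, sigma)  # model value at the mean
```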
2.2.3 Recognition of geographical elements
A method of recognizing the geographical elements is
described in this section. Image segmentation processing
has great implications for image recognition.6 As for image
segmentation, there exists a method using a histogram of
the whole image, but it is difficult to decide the threshold
value automatically in the outside world where the environ-
ment changes rapidly. Therefore, we utilized Bayes dis-
crimination for tessellated subimages. The finer the image
tessellation becomes, the smoother the outlines of objects
become. However, the information needed for recognition,
especially the values of the variance, will be lost. Consider-
ing these problems, we determined the number of tessella-
tions by experiment. The recognition process is explained
below.
First, the input images are tessellated into square grid subimages, and image features such as H_μ, I_μ, S_μ, H_σ, I_σ, and S_σ are extracted for each subimage. Next, each subimage is classified into an element by Bayes' discriminant law. Take the example that the feature H_μ is extracted in some subimage. Then, in the case of grass, we can obtain a conditional probability by applying the extracted image feature H_μ to the corresponding model function p_Hμ(x|GR), i.e., p_Hμ(H_μ|GR). Thus, for all the elements, the conditional probabilities are obtained as p_Hμ(H_μ|AC), p_Hμ(H_μ|GR), p_Hμ(H_μ|GV), and p_Hμ(H_μ|SS).
We then obtained an a posteriori probability for each element by applying Bayes' discriminant law to the conditional probabilities for all the elements: p(AC|H_μ), p(GR|H_μ), p(GV|H_μ), and p(SS|H_μ). Here, when the six probabilities for an element obtained from six kinds of information are consistent with each other, there is no difficulty in integrating the six pieces of information. However, when the six probabilities for an element contradict each other, it is much more difficult to integrate them. For example, as shown in Figs. 1-3, the distribution functions of some elements might be similar to one another. Therefore, we employed the Dempster-Shafer theory to integrate the six kinds of information, as previously reported.6,7
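A minimal sketch of this classification step, assuming uniform priors for Bayes' discriminant law and treating the per-feature posteriors as singleton mass functions for Dempster's rule of combination (all numbers are hypothetical):

```python
def bayes_posterior(cond):
    """A posteriori probabilities from conditionals p(x|E), assuming equal priors."""
    total = sum(cond.values())
    return {e: p / total for e, p in cond.items()}

def dempster_combine(m1, m2):
    """Dempster's rule of combination, with only the singleton hypotheses
    AC, GR, GV, SS as focal elements."""
    conflict = sum(m1[a] * m2[b] for a in m1 for b in m2 if a != b)
    k = 1.0 - conflict  # mass assigned to non-conflicting pairs (normalizer)
    return {e: m1[e] * m2[e] / k for e in m1}

# Hypothetical conditionals from two of the six features of one subimage
p_h = bayes_posterior({"AC": 0.01, "GR": 0.30, "GV": 0.05, "SS": 0.04})
p_i = bayes_posterior({"AC": 0.10, "GR": 0.25, "GV": 0.20, "SS": 0.05})
combined = dempster_combine(p_h, p_i)
best = max(combined, key=combined.get)
```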
Fig. 1. Standard deviation of S

Fig. 2. Standard deviation of I

Fig. 3. Standard deviation of H
3 Generation of an environmental map
3.1 Evaluation values of geographical elements
When a robot generates an environmental map with geographical information, it is necessary to convert the recognition result into information appropriate for robot movement, rather than simply using the recognition result itself. So we used a technique of embedding the geographical feature information into an environmental map, where the information was changed into an evaluation value representing the robot's difficulty in moving. We call this value the geographical evaluation value.
Now consider that asphalt or concrete (AC), sand or soil (SS), gravel (GV), and grass (GR) are regarded as outdoor elements. The geographical evaluation value J to be embedded in the environmental map is given by

J = W_AC P_AC + W_SS P_SS + W_GV P_GV + W_GR P_GR   (3)
where W_LAND is the weight coefficient that expresses the difficulty the robot has when moving on a geographical feature element LAND. The weight coefficients W_AC, W_SS, W_GV, and W_GR are determined by an ad hoc method, and will be used to express the difficulties, where

LAND ∈ {AC, SS, GV, GR}

and W_AC < W_SS < W_GV < W_GR are assumed, and P_LAND is the probability value of a traveling point being a geographical feature element LAND when the camera takes an image (0.0 ≤ P_LAND ≤ 1.0).

Fig. 4. The flow of coordinates and four trapezoidal area points on Σ_W
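Eq. 3 can be sketched directly; the weight values below are assumed, chosen only to respect the ordering of the weight coefficients:

```python
# Assumed weight coefficients, ordered W_AC < W_SS < W_GV < W_GR
W = {"AC": 1.0, "SS": 2.0, "GV": 3.0, "GR": 4.0}

def evaluation_value(p):
    """Eq. 3: J = W_AC*P_AC + W_SS*P_SS + W_GV*P_GV + W_GR*P_GR."""
    return sum(W[land] * p[land] for land in W)

# Mostly-asphalt cell vs. mostly-grass cell (hypothetical probabilities)
j_asphalt = evaluation_value({"AC": 0.9, "SS": 0.05, "GV": 0.03, "GR": 0.02})
j_grass = evaluation_value({"AC": 0.02, "SS": 0.03, "GV": 0.05, "GR": 0.9})
```

A lower J marks an easier cell, so the asphalt-dominated cell evaluates well below the grass-dominated one.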
3.2 Transformation from image coordinates to
environmental map coordinates
A camera captures an outdoor scene with Windows DIB still images of 240 × 340 etc. pixels. Subimages of the captured DIB images are classified into an outdoor element, and geographical evaluation values are allocated according to the elements. Next, the captured DIB still images are transformed from image coordinates (Σ_I) to world coordinates (Σ_W) via camera coordinates (Σ_C) and robot coordinates (Σ_R), and the allocated geographical evaluation values are embedded in an environmental map. The camera is set at 210 mm in height, with an angle of depression of 28°. By performing coordinate transformation, the four corner points on a DIB image are transformed into a trapezoidal area on Σ_W, as shown in Fig. 4. This trapezoidal area is divided into 16 × 16 pieces, and the environmental maps are updated by embedding the geographical evaluation values in them.
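A sketch of projecting one pixel onto the ground plane, using the stated camera height (210 mm) and depression angle (28°); the pinhole model and the focal length f are our assumptions, since the paper does not give the camera's intrinsic parameters:

```python
import math

def pixel_to_ground(u, v, width=340, height=240, f=300.0,
                    cam_h=210.0, depression_deg=28.0):
    """Project pixel (u, v) onto the ground plane, in mm relative to the camera.

    Pinhole model with assumed focal length f (pixels). Camera axes:
    x right, y down, z forward; the camera is pitched down by depression_deg.
    """
    x = (u - width / 2) / f
    y = (v - height / 2) / f
    d = math.radians(depression_deg)
    ry = y * math.cos(d) + math.sin(d)    # downward component of the ray
    rz = -y * math.sin(d) + math.cos(d)   # forward component of the ray
    if ry <= 0:
        raise ValueError("ray is above the horizon and never meets the ground")
    t = cam_h / ry                        # scale so the ray descends cam_h
    return x * t, rz * t                  # (lateral, forward) in mm

# The image centre should map straight ahead, at cam_h / tan(28 deg)
lat, fwd = pixel_to_ground(170, 120)
```

Applying this to the four image corners yields the trapezoidal ground area of Fig. 4.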
4 Path planning
We assume the following experimental conditions. The position and orientation of the robot can be determined accurately enough. The position of GOAL is given, but the environment is not known in advance. There are no obstacles that the robot cannot pass through, such as a wall or a precipice. A robot can pass over the geographical feature elements described above which exist in the environment, but when expressing the robot's difficulty in moving, the weight coefficients for these geographical features differ.
4.1 Region definition
When using a CCD camera as a vision sensor, we found that two kinds of area exist. One is the area in which geographical elements can be recognized, and the other is where they cannot be recognized because it is outside the visible area. We now define the area that can be captured by the CCD camera, as shown in Fig. 4, as the visible region (VR), and the area which cannot be captured by the camera as the unknown region (UR). The areas that divide the VR and the UR were defined with a border value (as shown in Fig. 7).

Fig. 5. An example of the target path at "approach to GOAL"

Fig. 6. Examples of failing to reach GOAL. a GOAL exists inside the minimum turning circle. b GOAL exists in the visible region
4.2 Generation of the target path
As shown in Fig. 5, the target path was where the robot
turned or ran straight in the VR. Then in the UR, the robot
took the shortest path with a turn with minimum radius, and
then ran straight at GOAL.
However, when the distance from the present representation point (the central point of the front wheel shaft) to GOAL was not far enough, the target path mentioned above was not generated successfully. Figure 6 shows two such cases: one where GOAL existed inside the minimum turning circle in the UR, and the other where GOAL existed in the VR. Therefore, we made a small change to the generation of a target path in this case.
When L_bor, shown in Fig. 7, satisfies

L_bor > 2R_min   (4)

we define this case as "approach to GOAL". Conversely, when L_bor satisfies

L_bor < 2R_min   (5)

we define this case as "neighborhood of GOAL," where R_min is the minimum turning radius.

Fig. 7. Border value between "approach to GOAL" and "neighborhood of GOAL"
4.3 Path planning at "approach to GOAL"

The robot goes toward GOAL searching for the optimal path out of the target paths generated. When the mobile robot starts moving, for each sampling step several imaginary paths (reference paths which have the possibility of connecting with the temporary target at that time) toward the temporary target (the target generated at the end of the visible region (VR), P_vd, at that time; see Fig. 8) are generated. The mobile robot then takes the smallest path evaluation value as a standard value for searching for the optimal path among the paths generated. This process is repeated at each sampling time during the mobile robot's movement toward GOAL. The path evaluation value expresses the grade of difficulty of movement for the robot.
4.3.1 Calculation of a path evaluation value in the VR
The target path is generated by changing the target angle α and the control angle θ, as shown in Fig. 8. The target angle α is defined as the angle between the Y_W axis and the line segment that connects the robot representation point and the target point set in the VR (P_vd). The control angle θ is the steering angle of the robot. The robot performs a turning movement (α ≠ 0) or a straight movement (α = 0) in the VR by changing α and θ. When the robot moves in the VR, it searches for the optimal path based on the geographical evaluation value. When the robot moves, the geographical features are examined only at the places where the robot's wheels touch the ground.
Therefore, the robot's shape should not be represented as a circle or a rectangle, but as two points, i.e., the left and right wheel points. It is considered that only the geographical features where the two points touch the ground should be taken into consideration. Once a set of α and θ is given, the robot generates a target path in the VR. Moving along the target path generated, the robot calculates the movement evaluation value at each sampling step. The movement evaluation value at a sampling step k, J(k), is defined by

J(k) = K (J_L(k) + J_R(k)) / 2   (6)

where J_L(k) is the geographical evaluation value where the robot's left wheel touches the ground at sampling step k, J_R(k) is the geographical evaluation value where the robot's right wheel touches the ground at sampling step k, and K is the weight coefficient used when the right and left wheels touch different geographical feature elements.

As a result, the path evaluation value in the VR, J_v, for a set of α and θ is given by

J_v = Σ_k L_vel J(k)   (7)

where L_vel is the length of the path that the robot moves along in one sampling step, as shown in Fig. 8.

Fig. 8. Target path in the visible region
4.3.2 Path evaluation value in the UR
In the UR, the robot performed the shortest movement, the
path of which was created by concatenating the minimum
rotation radius movement with the straight movement to
GOAL.
When the movement evaluation value at P_vd is given as J_vd, the path evaluation value in the UR, J_u, is given by

J_u = ((J_vd + J_G) / 2) L_ud   (8)

where J_G is the geographical evaluation value at GOAL (given), and L_ud is the estimated shortest length of the path along which the robot will run in the UR.
Fig. 9. One of the paths in the "neighborhood of GOAL"
4.3.3 Total path evaluation value
Finally, the total path evaluation value J_t is given by

J_t = J_v + J_u   (9)

The robot repeatedly chose, at every sampling time, the optimal path for which the path evaluation value was lowest, and moved toward GOAL.
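The evaluation of candidate paths by Eqs. 6-9 can be sketched as follows; the two candidate paths and all numeric values are hypothetical:

```python
def path_cost(vr_steps, j_vd, j_goal, l_vel, l_ud, k_coeff=1.0):
    """Total path evaluation value J_t = J_v + J_u (Eqs. 6-9).

    vr_steps: (J_L(k), J_R(k)) pairs along the candidate path in the VR.
    j_vd:    movement evaluation value at the temporary target P_vd.
    j_goal:  geographical evaluation value at GOAL (given).
    l_vel:   path length per sampling step; l_ud: estimated UR path length.
    """
    j_v = sum(l_vel * k_coeff * (jl + jr) / 2 for jl, jr in vr_steps)  # Eqs. 6, 7
    j_u = (j_vd + j_goal) / 2 * l_ud                                   # Eq. 8
    return j_v + j_u                                                   # Eq. 9

# Two hypothetical candidates: straight across grass vs. a longer asphalt detour
jt_grass = path_cost([(4.0, 4.0)] * 10, j_vd=4.0, j_goal=1.0, l_vel=5.0, l_ud=800.0)
jt_asphalt = path_cost([(1.0, 1.0)] * 14, j_vd=1.0, j_goal=1.0, l_vel=5.0, l_ud=800.0)
best = min([("grass", jt_grass), ("asphalt", jt_asphalt)], key=lambda c: c[1])
```

Repeating this minimization at every sampling time reproduces the search loop described above.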
4.4 Path planning in the "neighborhood of GOAL"
In the "neighborhood of GOAL," the target angle α was fixed in the GOAL direction from the robot's position, and the control angle θ was changed in small increments as the target path was generated. When turning (θ ≠ 0) or running straight (θ = 0), the target path reaches the goal, as shown in Fig. 9.
The path evaluation value in the neighborhood of GOAL is calculated by the same method that is used in the approach to GOAL.
5 Mobile robot dead-reckoning
Dead-reckoning should minimize unbounded growth in position and orientation errors.8-10 This can be accomplished by meticulously modeling sensor errors and by efficient filter design. We previously conducted detailed research on the mobile robot's own positioning system.11,12 That method, whose effectiveness has already been confirmed, was used in our proposed path-planning method.
5.1 Error model for the encoder
The mobile robot's position and orientation were calculated
from the output of the incremental encoder system. It is
well known that this system was subject to systematic errors
caused by factors such as unequal wheel diameters, impre-
cisely measured wheel diameters, or an imprecisely mea-
sured wheel separation distance. Subject to these errors, the
robot's position and orientation angle are computed as error models.11,12
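To make the encoder error sources concrete, a minimal differential-drive dead-reckoning step can be sketched as follows; the midpoint update is a common formulation, not necessarily the paper's exact error model:

```python
import math

def dead_reckon(x, y, theta, d_left, d_right, wheel_sep):
    """One dead-reckoning step of a differential-drive robot from encoder data.

    d_left, d_right: wheel travel (mm); wheel_sep: wheel separation (mm).
    Systematic errors (unequal or imprecisely measured wheel diameters,
    imprecise wheel_sep) bias d_left, d_right, and dtheta, which is why
    error modeling and filtering are needed.
    """
    d = (d_left + d_right) / 2.0                 # distance of the midpoint
    dtheta = (d_right - d_left) / wheel_sep      # heading change
    mid = theta + dtheta / 2.0                   # midpoint heading for the step
    return x + d * math.cos(mid), y + d * math.sin(mid), theta + dtheta

# Straight run: both wheels travel 100 mm, so the pose advances without turning
x, y, th = dead_reckon(0.0, 0.0, 0.0, 100.0, 100.0, 282.0)
```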
5.2 Error model for the gyro and accelerometer
Inertial navigation uses gyro sensors and acceleration sen-
sors to measure the rate of rotation and acceleration, respectively. However, inertial sensor data drift with time because of the need to integrate the rate data to yield the position. Considering the bias drift of those sensors, the robot's position and orientation are computed as error models.11,12
5.3 Fusion of error model data
We used the Kalman filter to fuse all error measurements provided by the sensors. The fusion method improves the dead-reckoning accuracy of a mobile robot based on encoder systems, gyros, and accelerometers. We used this mobile robot positioning method,11,12 and conducted the path-planning experiment using geographical information.
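The fusion idea can be illustrated with a scalar Kalman update on the heading; the variances and measurements below are hypothetical, and the paper's actual filter fuses full error models of the encoder, gyro, and accelerometer rather than this single state:

```python
def kalman_update(est, var, meas, meas_var):
    """One scalar Kalman update: fuse an estimate with a measurement,
    weighting each by its variance."""
    gain = var / (var + meas_var)                    # Kalman gain
    return est + gain * (meas - est), (1.0 - gain) * var

# Heading (rad) predicted by integrating the gyro rate -- it drifts with time
heading, var = 0.10, 0.04
# Correct it with the heading derived from the encoder system
heading, var = kalman_update(heading, var, meas=0.02, meas_var=0.01)
```

The updated variance is always smaller than either input variance, which is the sense in which fusion improves dead-reckoning accuracy.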
6 Experiment
The experimental conditions were as follows. We set four
different conditions in the experiment to confirm the valid-
ity of our proposed method. We used only two geographical
elements, asphalt and grass, in the set-up environment. The grass area causes the greatest difficulty in moving for a mobile robot, followed by the asphalt area. In the first condition, we set the grass area as wide. The mobile robot must escape from that grass area and move toward the target goal, because the grass area was considered to be the most difficult area to travel in. In the second condition, we set the grass area as narrow. Even though the grass area is very difficult to move in, the mobile robot will traverse it because that area is narrow. If the mobile robot detours around that area, it may affect the traveling time more than traversing that area would. In this case, we set the trapezoidal region as bigger than the grass area. In the third condition, we set the target goal between two grass areas, so the mobile robot must travel between the grass areas until it reaches the goal successfully. In all experiments we set the mobile robot speed at 0.3 m/s, the robot's wheel dimensions were 282 mm by 220 mm, and the length L_vel was 5.0 mm. In the initial stage, the environment was unknown and the GOAL was given.
The experimental results are shown in Figs. 10-12. In these figures, the brighter the gray level, the lower the geographical evaluation value becomes. Conversely, the darker the gray level, the higher the geographical evaluation value.
The geographical feature recognition also depended on the
experimental time and the weather conditions, in which a
minute difference in gray level contrast will affect the geo-
graphical evaluation value. However, this will not affect the
essence of the result. Our experiments were conducted in
clear weather conditions. If the geographical evaluation
value is the same, we set a priority for the robot to turn
right. The white area shows the regions that have not been
Fig. 10. Experimental results in the case of wide grass on the shortest
path
captured by the camera. All the area is unknown except the area captured by the camera. The variable t represents the sampling time, which is initialized from 0.
6.1 Wide grass area in the shortest path
In Fig. 10, the START position is (0, 0) mm, the GOAL position is (0, 5000) mm, and the top-left and bottom-right points of the rectangular grassy field are (-1500, 4000) mm and (1500, 2000) mm, respectively.
When the mobile robot detected the grassy area, it made a minimum radius turn to detect the extent of the grass area. The mobile robot moved in the direction that showed the lowest path evaluation value. In this result, the robot detected the grassy area and detoured to the right. Finally, the robot turned at the bottom-right corner and again detoured along the grass toward the target goal. After passing the grass area, the robot aimed toward the goal and successfully reached it. The total time taken from the start point until reaching the goal was 24 s.
6.2 Narrow grass area in the shortest path
In Fig. 11, the START position and the GOAL position are the same as in Fig. 10. The top-left and bottom-right points of the rectangular grassy field are (-1500, 2230) mm and (1500, 2000) mm, respectively.
A detour around a grass area by the mobile robot will occur not only because of the evaluation value of the difficulties of moving, but also by using the field of view captured by the camera. The mobile robot will make a judgment and use the smallest evaluation values to choose the path, but will also use the CCD camera to recognize the area visually and to make a decision whether to detour around or traverse the grass area. If the grass area is narrow, it is not necessary to detour around it, because the accumulated positional error of the mobile robot while detouring will be bigger than when traversing that area. When the robot started to detect the grass area, it made a minimum radius turn to detect the extent of the grass area. However, by this time the trapezoidal region had captured a new image beyond the grass area, which was new asphalt. The mobile robot judged that the grass area was either too small or too narrow, and decided to traverse that area. After the robot selected a path to traverse the grass area, it successfully reached GOAL. The total time taken from the start point until reaching the goal was 18 s.

Fig. 11. Experimental result in the case of a narrow area of grass on the shortest path

Fig. 12. Experimental results when the asphalt road has two bends
6.3 Asphalt road with two bends
In Fig. 12, the START position is (0, 0) mm, the GOAL position is (-9000, 9000) mm, and the asphalt road has two corners.
When the mobile robot started to detect the grass area, it made a minimum radius turn to detect the extent of the grass area. The mobile robot moved in the direction that showed the lowest path evaluation value. Under these conditions, the robot detected the grass area, and aimed for GOAL along the side of the grassy field. This process continued until there was no more grass. After passing the grass area, the robot went straight toward GOAL. However, the robot then detected another grass area, and aimed for GOAL along the side of the new grass area. Finally, the robot successfully reached GOAL. The total time taken from the start point until reaching the goal was 47 s.
6.4 Running in "no grass area" conditions
For comparison purposes, we conducted another experiment under the condition that no grass area existed (only asphalt). In this case, the robot goes straight to the goal and the CCD camera function is disabled. We performed 10 consecutive runs for this experiment, and took the average of the error between the set goal and the last position of the robot when it reached the goal area. For comparison, we also performed another 10 trial experiments of the proposed method, and likewise took the average error of the robot's position. The average error with no grass area and no CCD camera for the x-axis was ±10 mm, and for the y-axis it was ±13 mm. Using our proposed method, for the wide grass area the mobile robot detoured around the grass area until it reached the goal. The average error result for that condition for the x-axis was ±15 mm, and for the y-axis it was ±51 mm. For the narrow grass area, the mobile robot traversed the grass area and reached the goal. The average error result for this condition for the x-axis was ±12 mm, and for the y-axis it was ±13 mm.
7 Conclusion
In this research, we proposed the following algorithms.
Based on geographical information, paths were created.
The geographical information was transformed into a one-dimensional evaluation value that expressed the difficulty of movement for the robot. The target path was generated by changing the target angle and the control angle. The situations were classified into either approach to GOAL or neighborhood of GOAL, and the path-planning algorithm
was switched from one to the other. The robot was assumed
to be a two-point object.
From the experimental results, we can conclude that
after recognizing a geographical feature, the robot performs path planning based on the geographical evaluation value embedded in the environmental map generated, and successfully reaches GOAL. The robot passed through a grassy
area when that grass area was narrow. Conversely, when the
grassy area was wide, the robot avoided the grassy area. The
robot will choose the optimal path based on the evaluation
value, and will reduce the accumulated errors of position
and orientation while traveling on that path. From the com-
parison experiment results, we can conclude that the aver-
age error between the set goal and the actual mobile robot
goal position is acceptable, and shows that our method can
be used for mobile robots facing different environment ele-
ments. In our discussion, we also considered sand, soil, and gravel for the evaluation method. From that, we can use or
expand the proposed method by updating the geographical
evaluation values and their weight coefficients in the future.
References

1. Noborio H (1990) Several path-planning algorithms of a mobile robot for an uncertain workspace and their evaluation. Proceedings of the IEEE International Workshop on Intelligent Motion Control, vol 1, August
2. Lumelsky V (1984) Effect of uncertainty on continuous path planning for an autonomous vehicle. Proceedings of the 23rd IEEE Conference on Decision and Control, Las Vegas, Nevada, December
3. Kanbara T (1999) A method of planning movement and observation for a mobile robot considering uncertainties of movement, visual sensing, and a map. Trans Jpn Soc Mech Eng 65, No. 629
4. Yamada M (1998) Autonomous navigation of an intelligent vehicle using 1-D optical flow. Trans Jpn Soc Mech Eng, No. 617
5. Tadokoro S (1997) Generation of avoidance motion for robots which coexist and cooperate with humans. 2nd report: avoidance of moving obstacles for mobile robots. Trans Jpn Soc Mech Eng 63, No. 606
6. Hasegawa S (1998) Environment recognition by integrated visual information based on D-S theory. Proceedings of the IEEE International Workshop on Robot and Human Communication
7. Semii N (1996) Character recognition using Dempster-Shafer theory. J Inst Electron Inf Commun Eng (D-II) J79-D-II(5)
8. Maeyama S, Ohya A, Yuta S (1997) Robust dead-reckoning system by fusion of odometry and gyro for mobile robot outdoor navigation. Nihon Robot J 15
9. Maeyama S, Ishikawa N, Yuta S (1996) Rule-based filtering and fusion of odometry and gyroscope for a fail-safe dead-reckoning system of a mobile robot. IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, pp 541-548
10. Borenstein J, Feng L (1996) Gyrodometry: a new method for combining data from gyros and odometry in mobile robots. Proceedings of the International Conference on Robotics and Automation, vol 1, pp 423-428
11. Zunaidi I, Kato N, Nomura Y, et al. Positioning system for a 4-wheel mobile robot: encoder, gyro, and accelerometer data fusion with error model method. Chiang Mai Univ (CMU) Int J 5(1), pp 1-14
12. Zunaidi I, Kato N, Nomura Y, et al. Mobile robot positioning with sensor fusion method. Jpn Soc Mech Eng Conf No. 033-2, 709, p 174