Dead Reckoning for Mobile Robots Using Two Optical Mice.

Andrea Bonarini
Matteo Matteucci
Marcello Restelli
Department of Electronics and Information – Politecnico di Milano
Piazza Leonardo da Vinci, I-20133, Milan
Keywords: Optical sensors, dead reckoning, kinematics-independent odometry, . . .
Abstract: In this paper, we present a dead reckoning procedure to support reliable odometry on mobile robots. It is
based on a pair of optical mice rigidly connected to the robot body. The main advantages are that: 1) the
measurement given by the mice is not subject to slipping, since the mice are independent of the traction wheels,
nor to crawling, since they measure displacements in any direction; 2) this localization system is independent
of the kinematics of the robot; 3) it is a low-cost solution. We present the mathematical model of the sensor,
its implementation, and some empirical evaluations.
1 Introduction

Since the very beginning of mobile robotics, dead
reckoning was used to estimate the robot pose, i.e.,
its position and its orientation with respect to a global
reference system placed in the environment. Dead
reckoning is a navigation method based on measure-
ments of distance traveled from a known point used
to incrementally update the robot pose. This leads to
a relative positioning method, which is simple, cheap
and easy to accomplish in real-time. The main disad-
vantage of dead reckoning is its unbounded accumu-
lation of errors.
The majority of the mobile robots use dead reckon-
ing based on odometry in order to perform their navi-
gation tasks (alone or combined with other absolute
localization systems (Borenstein and Feng, 1996)).
Typically, odometry relies on measures of the space
covered by the wheels gathered by encoders which
can be placed directly on the wheels or on the engine-
axis, and then combined in order to compute robot
movement along the x and y-axes and its change of
orientation. It is well known that odometry is subject to:
- systematic errors, caused by factors such as unequal wheel diameters, imprecisely measured wheel diameters and wheel distance, or an imprecisely measured tread (Borenstein and Feng, 1996);
- non-systematic errors, caused by irregularities of the floor, bumps, cracks, or wheel slippage.
In this paper, we present a new dead reckoning
method which is very robust towards non-systematic
errors, since the odometric sensors are not coupled
with the driving wheels. It is based on the mea-
sures taken by two optical mice fixed on the bottom
of the robot. We need to estimate three parameters
(∆x, ∆y, ∆θ), so we cannot use a single mouse, since
it gives only two independent measures. By using two
mice we have four measures, even if, since the mice
have a fixed position, only three of these are indepen-
dent and the fourth can be computed from them. We
have chosen to use optical mice, instead of classical
ones, since they can be used without being in contact
with the floor, thus avoiding the problems of keeping
the mouse always pressed on the ground, the prob-
lems due to friction and those related to dust deposited
in the mechanisms of the mouse.
In the following section, we will present the moti-
vations for using this method for dead reckoning. In
Section 3, we show the geometrical derivation that allows
us to compute the robot movement on the basis of
the readings of the mice. Section 4 describes the main
characteristics of the mice and how they affect the ac-
curacy and the applicability of the system. In Sec-
tion 5, we report the data related to some experiments
in order to show the effectiveness of our approach. Fi-
nally, we discuss related works in Section 6 and draw
conclusions in Section 7.
2 Motivations

The classical dead reckoning methods, which use
the data measured by encoders on the wheels or on
the engine-axis, suffer from two main non-systematic
problems: slipping, which occurs when the encoders
measure a movement which is larger than the actually
performed one (e.g., when the wheels lose the grip
with the ground), and crawling, which is related to a
robot movement that is not measured by the encoders
(e.g., when the robot is pushed by an external force,
and the encoders cannot measure the displacement).
Our dead reckoning method, based on mice read-
ings, does not suffer from slipping problems, since
the sensors are not bound to any driving wheel. Also
the crawling problems are solved, since the mice go
on reading even when the robot movement is due to a
push, and not to the engines. The only problem that
this method can have is related to missed readings due
to a floor with a bad surface or when the distance be-
tween the mouse and the ground becomes too large
(see Section 4).
Another advantage of our approach is that it is in-
dependent from the kinematics of the robot, and so
we can use the same approach on several different
robots. For example, if we use classical systems, dead
reckoning with omni-directional robots equipped with
omnidirectional wheels may be very difficult both for
geometrical and for slipping reasons.
Furthermore, this is a very low-cost system which
can be easily interfaced with any platform. In fact, it
requires only two optical mice which can be placed
in any position under the robot, and can be connected
using the USB interface. This allows us to build an accurate
dead reckoning system, which can be employed
on and ported to all the mobile robots which operate
in an environment with a ground that allows the mice
to measure the movements (indoor environments typ-
ically meet this requirement).
3 Geometrical Derivation

In this section, we present the geometrical derivation
that allows us to compute the pose of a robot using the
readings of two mice placed below it in a fixed posi-
tion. For simplicity, we place the mice at a distance D
from each other, parallel to one another and orthogonal
to the line joining them (see Figure 1). We
consider their mid-point as the position of the robot
and their direction (i.e., their longitudinal axis point-
ing toward their keys) as its orientation.
Figure 1: The relative positioning of the two mice

Each mouse measures its movement along its horizontal
and vertical axes. We hypothesize that, during
the sampling period (discussed in Section 4), the robot
moves with constant tangential and rotational speeds.
This implies that the robot movement can be approx-
imated by an arc of circumference. So, we have to
estimate the 3 parameters that describe the arc of cir-
cumference (i.e., the (x,y)-coordinates of the center
of the circumference and the arc angle), given the 4
readings taken from the two mice. We call ∆x_r and ∆y_r
the measures taken by the mouse on the right, while
∆x_l and ∆y_l are those taken by the mouse on the left.
Actually, we have only 3 independent data; in fact,
we have the constraint that the respective position of
the two mice cannot change. This means that the mice
should read always the same displacement along the
line that joins the centers of the two sensors. So, if
we place the mice as in Figure 1, the ∆x-values
measured by the two mice should always be equal:
∆x_l = ∆x_r. In this way, we can compute how
much the robot pose has changed in terms of ∆x, ∆y,
and ∆θ.
If the robot makes an arc of circumference, it can
be shown that also each mouse will make an arc of cir-
cumference, which is characterized by the same cen-
ter and the same arc angle (but with a different radius).
During the sampling time, the angle α between the x-axis
of the mouse and the tangent to its trajectory does
not change. This implies that, when a mouse moves
along an arc of length l, it always measures the same
values, independently of the radius of the arc (see
Figure 2). So, considering an arc with infinite radius
(i.e., a segment), we can write the following relations:

∆x = l cos(α)   (1)
∆y = l sin(α)   (2)

Figure 2: Two different paths in which the mouse readings are the same

From Equations 1 and 2, we can compute both the
angle between the x-axis of the mouse and the tangent
to the arc:

α = arctan(∆y / ∆x)   (3)

and the length of the covered arc:

l = |∆x|          if α = 0 or α = π
l = ∆y / sin(α)   otherwise   (4)

Figure 3: The triangle made up of the joining line and the two radii
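Equations 3 and 4 translate directly into code. The following Python sketch (function name is ours, not from the paper) recovers α and l from one mouse's displacement readings, assumed already converted to millimetres; atan2 is used so the quadrant of α is resolved automatically:

```python
import math

def mouse_arc(dx, dy):
    """Recover the tangent angle alpha (Eq. 3) and the arc length l (Eq. 4)
    from one mouse's displacement readings (dx, dy)."""
    alpha = math.atan2(dy, dx)        # Eq. 3; atan2 resolves the quadrant
    if dy == 0:                       # alpha is 0 or pi: motion along x only
        l = abs(dx)                   # Eq. 4, first case
    else:
        l = dy / math.sin(alpha)      # Eq. 4, second case (always positive)
    return alpha, l
```

For instance, a reading of (4, 3) mm corresponds to an arc of length 5 mm, whatever the arc radius.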
In order to compute the orientation variation, we apply
the law of cosines (Carnot's theorem) to the triangle
made by the joining line between the two mice and the
two radii between the mice and the center of their arcs
(see Figure 3):

D² = r_r² + r_l² − 2 r_r r_l cos(γ),   (5)

where r_r and r_l are the radii of the arcs of circumference
described respectively by the mouse on the right
and the mouse on the left, while γ is the angle between
r_r and r_l. It is easy to show that γ can be computed
as the absolute value of the difference between α_l and
α_r (which can be obtained from the mouse measures
using Equation 3): γ = |α_l − α_r|.

Figure 4: The arc angle of each mouse is equal to the change in the orientation of the robot
The radius r of an arc of circumference can be computed
as the ratio between the arc length l and the arc
angle ∆θ. In our case, the two mice are associated with
arcs subtending the same angle, which corresponds to the
change in the orientation of the robot, i.e., ∆θ
(see Figure 4). It follows that:

r_r = l_r / ∆θ   (6)
r_l = l_l / ∆θ   (7)

Figure 5: The movement made by each mouse

If we substitute Equations 6 and 7 into Equation 5,
we obtain the following expression for the orientation
variation:

∆θ = √(l_r² + l_l² − 2 l_r l_l cos(γ)) / D   (8)
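Equation 8 is a one-liner in code. A minimal sketch (function name ours); note that Equation 8 only yields the magnitude of ∆θ, so the sign is left out here:

```python
import math

def delta_theta(l_l, l_r, alpha_l, alpha_r, D):
    """Magnitude of the orientation change (Eq. 8): substitute r = l / dtheta
    (Eqs. 6-7) into the law-of-cosines relation (Eq. 5) and solve."""
    gamma = abs(alpha_l - alpha_r)    # angle between the two radii
    return math.sqrt(l_l**2 + l_r**2 - 2 * l_l * l_r * math.cos(gamma)) / D
```

As a sanity check, for a rotation on the spot (γ = π, equal arc lengths s) the formula reduces to 2s/D, which is what two mice at distance D/2 from the rotation center must read.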
The movement along the x and y-axes can be derived
by considering the new positions reached by the
mice (w.r.t. the reference system centered in the old
robot position) and then computing the coordinates of
their mid-point (see Figure 5). The mouse on the left
starts from the point of coordinates (−D/2, 0), while the
mouse on the right starts from (D/2, 0). The formulas
for computing their coordinates at the end of the sampling
period are the following:

x_r = r_r (sin(α_r + ∆θ) − sin(α_r)) sign(∆θ) + D/2   (9)
y_r = r_r (cos(α_r) − cos(α_r + ∆θ)) sign(∆θ)   (10)
x_l = r_l (sin(α_l + ∆θ) − sin(α_l)) sign(∆θ) − D/2   (11)
y_l = r_l (cos(α_l) − cos(α_l + ∆θ)) sign(∆θ)   (12)
From the mice positions, we can compute the
movement executed by the robot during the sampling
time, with respect to the reference system centered in
the old pose, as the position of their mid-point:

∆x = (x_r + x_l) / 2   (13)
∆y = (y_r + y_l) / 2   (14)

The absolute coordinates of the robot pose at time
t+1, (X_{t+1}, Y_{t+1}, Θ_{t+1}), can be computed knowing
the absolute coordinates at time t and the relative
movement (∆x, ∆y, ∆θ) carried out during the period
(t, t+1], through these equations:

X_{t+1} = X_t + √(∆x² + ∆y²) cos(Θ_t + arctan(∆y/∆x))   (15)
Y_{t+1} = Y_t + √(∆x² + ∆y²) sin(Θ_t + arctan(∆y/∆x))   (16)
Θ_{t+1} = Θ_t + ∆θ   (17)
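Putting Equations 3-17 together, one sampling-period update can be sketched as follows (Python, with our own naming; readings are assumed already converted to millimetres, and the sign convention for ∆θ, positive when the right mouse travels farther, is our assumption, since Equation 8 only gives the magnitude):

```python
import math

def update_pose(X, Y, Theta, dxl, dyl, dxr, dyr, D):
    """One dead-reckoning step from the two mice readings (Equations 3-17).
    (dxl, dyl) and (dxr, dyr) are the left/right mouse displacements in mm."""
    al = math.atan2(dyl, dxl)                          # Eq. 3, per mouse
    ar = math.atan2(dyr, dxr)
    ll = abs(dxl) if dyl == 0 else dyl / math.sin(al)  # Eq. 4 (arc lengths)
    lr = abs(dxr) if dyr == 0 else dyr / math.sin(ar)
    gamma = abs(al - ar)
    dth = math.sqrt(ll * ll + lr * lr
                    - 2 * ll * lr * math.cos(gamma)) / D   # Eq. 8
    if dth == 0.0:                                     # straight motion: no arcs
        xl, yl = dxl - D / 2, dyl
        xr, yr = dxr + D / 2, dyr
    else:
        dth = math.copysign(dth, lr - ll)              # assumed sign convention
        rl, rr = ll / dth, lr / dth                    # Eqs. 6-7 (signed radii)
        s = math.copysign(1.0, dth)
        xr = rr * (math.sin(ar + dth) - math.sin(ar)) * s + D / 2  # Eq. 9
        yr = rr * (math.cos(ar) - math.cos(ar + dth)) * s          # Eq. 10
        xl = rl * (math.sin(al + dth) - math.sin(al)) * s - D / 2  # Eq. 11
        yl = rl * (math.cos(al) - math.cos(al + dth)) * s          # Eq. 12
    dx, dy = (xl + xr) / 2, (yl + yr) / 2              # Eqs. 13-14 (mid-point)
    rho, phi = math.hypot(dx, dy), math.atan2(dy, dx)  # atan2 handles dx <= 0
    return (X + rho * math.cos(Theta + phi),           # Eq. 15
            Y + rho * math.sin(Theta + phi),           # Eq. 16
            Theta + dth)                               # Eq. 17
```

For example, equal forward readings on both mice leave the orientation unchanged and move the pose straight ahead, while opposite readings rotate the pose in place by 2l/D.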
4 The Optical Mouse Sensor

Figure 6: The schematic of an optical mouse

The performance of the odometry system we have de-
scribed depends on the characteristics of the solid-
state optical mouse sensor used to detect the dis-
placement of the mice. Commercial optical mice use
the Agilent ADNS-2051 sensor (Agilent Technologies
Semiconductor Products Group, 2001), a low-cost
integrated device which measures changes in posi-
tion by optically acquiring sequential surface images
(frames hereafter) and determining the direction and
magnitude of movement.
The main advantage of this sensor with respect to
traditional mechanical systems for movement detec-
tion is the absence of moving parts that could be dam-
aged by use or dust. Moreover, no mechanical coupling
between the sensor and the floor is needed, thus
allowing the detection of movement in case of slipping
and on many surfaces, including soft, glassy, and
curved pavements.
4.1 Sensor Characteristics
The ADNS-2051 sensor is essentially a tiny, high-
speed video camera coupled with an image proces-
sor, and a quadrature output converter (Agilent Tech-
nologies Semiconductor Products Group, 2003). As
schematically shown in Figure 6¹, a light-emitting
diode (LED) illuminates the surface underneath the
sensor, reflecting off microscopic textural features in
the area. A plastic lens collects the reflected light and
forms an image on a sensor. If you were to look at the
image, it would be a black-and-white picture of a tiny
section of the surface, like the ones in Figure 7. The sen-
sor continuously takes pictures as the mouse moves at
¹ Images reported in Figures 6, 7 and 8 are taken
from (Agilent Technologies Semiconductor Products
Group, 2001) and (Agilent Technologies Semiconductor
Products Group, 2003) and are copyright of Agilent Tech-
nologies Inc.
Figure 7: Two simple images taken from the mice. Besides
the noise, note the displacement of the pixels in the
images from right to left and top to bottom.
1500 frames per second or more, fast enough so that
sequential pictures overlap. The images are then sent
to the optical navigation engine for processing.
The optical navigation engine in the ADNS-2051
identifies texture or other features in the pictures and
tracks their motion. Figure 7 illustrates how this is
done by showing two images sequentially captured as
the mouse was panned to the right and upwards; much
of the same “visual material” can be recognized in
both frames. Through an Agilent proprietary image-
processing algorithm, the sensor identifies common
features between these two frames and determines the
distance between them; this information is then translated
into ∆x and ∆y values that indicate the sensor displacement.
By looking at the sensor characteristics, available
through the data sheet, it is possible to estimate
the precision of the measurement and the maximum
working speed of the device. The Agilent ADNS-
2051 sensor is programmable to give mouse builders
(this is the primary use of the device) 400 or 800 cpi
resolution, a motion rate of 14 inches per second, and
frame rates up to 2,300 frames per second. At recom-
mended operating conditions, this allows a maximum
operating speed of 0.355 m/s with a maximum
acceleration of 1.47 m/s².
These values are mostly due to the mouse sensor
(i.e., optical vs. mechanical) and the protocol used to
transmit the data to the computer. According to the
original PS/2 protocol, still used in mechanical de-
vices featuring a PS/2 connector, the ∆x and ∆y displacements
are reported using 9 bits (i.e., 1 byte plus
a sign bit), expressing values in the range from −255 to
+255. In this kind of mouse, the resolution can be set to
1, 2, 4, or 8 counts per mm, and the sample rate can be
10, 20, 40, 60, 80, 100, or 200 samples per second.
The maximum speed that does not overflow the mouse's
internal counters is obtained by reducing the resolution
and increasing the sample rate. However, in modern
mice with an optical sensor and USB interface these
values are quite different: the USB standard for human
interface devices (USB Implementer's Forum, 2001)
restricts the range for the ∆x and ∆y displacements to
values from −128 to +127, and the sample rate measured
for a mouse featuring the ADNS-2051 is 125 samples per second.
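These protocol limits can be turned into an overflow-limited speed bound: the per-report displacement counter must not saturate between two consecutive reports. A quick check (our own arithmetic, not from the paper):

```python
# Overflow-limited speed: counter range per report times the report rate,
# divided by the sensor resolution expressed in counts per metre.
def max_speed_m_s(counts_per_report, reports_per_s, counts_per_mm):
    return counts_per_report * reports_per_s / (counts_per_mm * 1000.0)

# USB HID: 127 counts per report at 125 reports/s, 400 cpi (~15.75 counts/mm)
usb_limit = max_speed_m_s(127, 125, 400 / 25.4)   # about 1.0 m/s
# PS/2 at its coarsest setting: 255 counts per report, 200 reports/s, 1 count/mm
ps2_limit = max_speed_m_s(255, 200, 1)            # 51 m/s
```

In both cases the counters are not the bottleneck: the sensor's own tracking limit of 14 inches per second (about 0.355 m/s) dominates.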
4.2 Mice Calibration Process
The numbers we have reported in the previous subsec-
tion reflect only nominal values for the sensor char-
acteristics since its resolution can vary depending on
the surface material and the height of the sensor from
the floor, as shown in Figure 8. This variation in
sensor readings calls for accurate mounting on the
robot and a minimal calibration procedure before applying
the formulas described in the previous section,
in order to reduce systematic errors in odometry. In
fact, systematic errors in odometry are often due to
wrong assumptions on the model parameters, and they
can be significantly reduced by experimentally estimating
the right values. In our model, systematic errors are
mostly due to the distance D between the mice and
to the exact resolution of the sensors. To estimate the
latter, we moved the mice 10 times along a
20 cm straight track and estimated the number of
ticks per mm at 17.73 (i.e., about 450 cpi) from the
average counts of the mice ∆y displacements.
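The resolution estimate boils down to averaging counts over a known distance. A sketch with hypothetical per-run counts (the paper reports only the resulting 17.73 ticks/mm):

```python
# Hypothetical ∆y counts from ten runs along a 200 mm straight track;
# only their average matters for the estimate.
runs = [3540, 3551, 3544, 3549, 3546, 3543, 3550, 3545, 3548, 3544]
ticks_per_mm = sum(runs) / len(runs) / 200.0   # counts per millimetre
cpi = ticks_per_mm * 25.4                      # counts per inch
```

With these (made-up) counts the estimate comes out at 17.73 ticks/mm, i.e., about 450 cpi, matching the calibrated value used in the experiments.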
The distance between the two mice has been calibrated,
after the resolution calibration, using a different
procedure. We rotated the two mice around their
middle point by a fixed angle η (π/2 rad in our
experiments) and measured again their ∆y displacements.
These two measures have to be equal, to ensure that we
are rotating around the real middle point, and the ∆x
displacements should be zero, if the mice are perpendicular
to the radius. Provided these two constraints hold,
we can estimate the distance according to the simple formula

D = ∆y / η.   (18)
Figure 8: Typical resolution vs. Z (distance from lens reference plane to surface) for different surfaces

However, it is not necessary to rotate the mice
around their exact middle point: we can still estimate
their distance by rotating them around any fixed point
on the joining line and measuring ∆y_1 and ∆y_2.
Given these two measurements, we can compute D as

D = (∆y_1 + ∆y_2) / η.   (19)
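Both calibration formulas reduce to the same computation; a sketch (function name ours), using the magnitudes of the two ∆y readings so that Equation 18 is covered as the special case ∆y_1 = ∆y_2:

```python
import math

def mice_distance(dy1, dy2, eta):
    """Estimate the mouse spacing D from a rotation by a known angle eta
    about a fixed point on the joining line (Equation 19): the two |dy|
    readings sum to D * eta wherever the pivot lies between the mice."""
    return (abs(dy1) + abs(dy2)) / eta
```

For example, mice 100 mm and 170 mm from the pivot, rotated by π/2 rad, yield readings whose magnitudes sum to 270 · π/2, recovering D = 270 mm.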
5 Experimental Results

In order to validate our approach, we took two USB
optical mice featuring the Agilent ADNS-2051 sensor,
which can be commonly purchased in any commercial
store. We fixed them as described in Section 3,
taking care that they stay in contact with the
ground. We made our experiments on a carpet like
those used in the RoboCup Middle Size League, using
the recommended operating settings for the sensor
(i.e., nominally 400 cpi at 1500 frames per second)
and calibrating the resolution to 450 cpi and the distance
D between the mice to 270 mm with the procedure
previously described.
The preliminary test we made is the UMBmark
test, presented in (Borenstein and Feng, 1994). The
UMBmark procedure consists of measuring the absolute
actual position of the robot in order to initialize
the on-board dead reckoning starting position. Then,
we make the robot travel along a 4 x 4 m square in the
following way: the robot stops after each 4 m straight
leg and then makes a 90° turn on the spot. When
the robot reaches the starting area, we measure its
absolute position and orientation and compare them to
the position and orientation calculated by the dead
reckoning system. We repeated this procedure five
times in the clockwise direction and five times in the
counter-clockwise direction. The measure of dead
reckoning accuracy for systematic errors that we obtained
is E_max,syst = 114 mm, which is comparable
with those achieved by other dead reckoning systems
(Borenstein and Feng, 1996).
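For reference, the E_max,syst figure follows the UMBmark rule: take the centre of gravity of the return-position errors for each travel direction and report the larger of the two distances from the origin. A sketch with hypothetical error data in mm (the paper reports only the final figure):

```python
import math

def umbmark_emax(cw_errors, ccw_errors):
    """E_max,syst per (Borenstein and Feng, 1994): the larger distance from
    the origin of the two centroids of the return-position errors (x, y)."""
    def centroid_dist(errors):
        xs = [x for x, _ in errors]
        ys = [y for _, y in errors]
        return math.hypot(sum(xs) / len(xs), sum(ys) / len(ys))
    return max(centroid_dist(cw_errors), centroid_dist(ccw_errors))
```

Averaging before taking the distance is what separates systematic errors (which shift the whole cluster of return positions) from non-systematic ones (which only spread it).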
6 Related Works

As we said in Section 1, mobile robot positioning
has been one of the first problems in robotics,
and odometry is the most widely used navigation
method for mobile robot positioning. Classical odom-
etry methods based on dead reckoning are inexpen-
sive and allow very high sample rates providing good
short-term accuracy. Despite its limitations, many
researchers agree that odometry is an important part of
a robot navigation system and that navigation tasks
would be simplified if odometric accuracy could be improved.

The fundamental idea of dead reckoning is the inte-
gration of incremental motion information over time,
which leads inevitably to the unbounded accumula-
tion of errors. Specifically, orientation errors will
cause large lateral position errors, which increase pro-
portionally with the distance traveled by the robot.
There has been a lot of work in this field, especially
for differential drive kinematics and for systematic er-
ror measurement, comparison and correction (Boren-
stein and Feng, 1996).
First works in odometry error correction were
done using an external compliant linkage vehicle
pulled by the mobile robot. Being pulled, this vehicle
does not suffer from slipping, and the measurement
of its displacement can be used to correct the pulling
robot's odometry (Borenstein, 1995). In (Borenstein and
Feng, 1996) the authors propose a practical method
for reducing, in a typical differential drive mobile
robot, incremental odometry errors caused by kinematic
imperfections of the mobile robot, measured by encoders
mounted onto the two drive motors.
Little work has been done for different kinemat-
ics like the ones based on omnidirectional wheels. In
these cases, slipping is always present during the mo-
tion and classical shaft encoder measurement leads
to very large errors. In (Amini et al., 2003) the Per-
sia RoboCup team proposes a new odometric system
which was employed on their full omni-directional
robots. In order to reduce non-systematic errors,
like those due to slippage during acceleration, they
separate odometry sensors from the driving wheels.
In particular, they have used three omni-directional
wheels coupled with shaft encoders, placed 60° apart
from the main driving wheels. The odometric wheels are
connected to the robot body through a flexible struc-
ture in order to minimize the slippage and to obtain a
firm contact of the wheels with the ground. This
approach is also independent of the kinematics of the
robot, but its realization is quite difficult, and it is
still affected by (small) slippage problems.
An optical mouse was used in the localization sys-
tem presented in (Santos et al., 2002). In their ap-
proach, the robot is equipped with an analogue compass
and an optical odometer made from a commercially
available mouse. The position is obtained
by combining the linear distance covered by the robot,
read from the odometer, with the respective instanta-
neous orientation, read from the compass. The main
drawback of this system is due to the low accuracy of
the compass which results in systematic errors.
However, odometry is inevitably affected by the
unbounded accumulation of errors. In particular, ori-
entation errors will cause large position errors, which
increase proportionally with the distance travelled by
the robot. There are several works that propose meth-
ods for fusing odometric data with absolute position
measurements to obtain more reliable position esti-
mation (Cox, 1991; Chenavier and Crowley, 1992).
7 Conclusions

We have presented a dead reckoning sensor based on
a pair of optical mice. Its main advantage is good
performance w.r.t. the two main problems that affect
dead reckoning sensors: slipping and crawling. On
the other hand, this odometric system requires that the
robot operate on a ground whose surface lets the
mice always read with the same resolution.
Due to its characteristics, the proposed sensor can
be successfully applied with many different robot ar-
chitectures, being completely independent from the
specific kinematics. In particular, we have developed
it for our omnidirectional RoboCup robots, which will
be presented at RoboCup 2004.
References

Agilent Technologies Semiconductor Products Group
(2001). Optical mice and how they work: The optical
mouse is a complete imaging system in a tiny package.
Technical Report 5988-4554EN, Agilent Technologies Inc.
Agilent Technologies Semiconductor Products Group
(2003). Agilent ADNS-2051 optical mouse sensor data
sheet. Technical Report 5988-8577EN, Agilent Technologies Inc.
Amini, P., Panah, M. D., and Moballegh, H. (2003). A new
odometry system to reduce asymmetric errors for om-
nidirectional mobile robots. In RoboCup 2003: Robot
Soccer World Cup VII.
Borenstein, J. (1995). Internal correction of dead-reckoning
errors with the compliant linkage vehicle. Journal of
Robotic Systems, 12(4):257–273.
Borenstein, J. and Feng, L. (1994). UMBmark - a
method for measuring, comparing, and correcting
dead-reckoning errors in mobile robots. In Proceedings
of SPIE Conference on Mobile Robots, Philadelphia.
Borenstein, J. and Feng, L. (1996). Measurement and
correction of systematic odometry errors in mobile
robots. IEEE Transactions on Robotics and Automa-
tion, 12(6):869–880.
Chenavier, F. and Crowley, J. (1992). Position estima-
tion for a mobile robot using vision and odometry.
In Proceedings of IEEE International Conference on
Robotics and Automation, pages 2588–2593, Nice, France.
Cox, I. (1991). Blanche - an experiment in guidance and
navigation of an autonomous mobile robot. IEEE
Transactions on Robotics and Automation, 7(3):193–
Santos, F. M., Silva, V. F., and Almeida, L. M. (2002). A
robust self-localization system for a small mobile au-
tonomous robot. In Proceedings of ISRA.
USB Implementer’s Forum (2001). Universal serial bus:
Device class definition for human interface devices.
Technical Report 1.11, USB.
... Some researchers have been replaced encoders with optical flow sensors to overcome slippage problem where some of them based on one [14]- [16], two [15], [17] or multi optical flow sensors [18]. [10]. ...
... Position and orientation of a mobile robot using two optical flow sensors have been driven in reference [17]. Figure 2.5 illustrates a block diagram of geometrical derivation of dead-reckoning based on two optical mice. ...
... In general, Robot position and navigation can be categorized into three types of Global, determining the robot position in absolute or map-referenced terms, Local, based on robot position relative to objects and Personal takes into account the position of various parts, their relative positions for handling objects [2] . Amongst mentioned methods personal and local navigation are used more for micro robots [2] but selfpositioning is more in demand in all research [4][5][6] . ...
... The main problem of the above method is due to drift in response with time which affects the accuracy of estimation, especially for ultrasonic and infra-red sensor by the instability in the environment leading to false reflection and echoes [2] . Commonly the odometry and inertial navigation methods use active beacons based on artificial landmark recognition, natural landmark recognition, model matching for global position monitoring [5] . Orientation and position detection methods based on precise references like the lasers or encoders, in most of the cases are accurate but have the drawback of restriction in object movements [3,9] . ...
Full-text available
Robot position monitoring and navigation with ease of use and implementation is a challenge for researchers. The Path and Position Monitoring system (PPMS) is designed for the robot Platform Mokhtar. The path followed by the robot during experimentation, is acquired and displayed graphically using PPMS. The System provides a log of the location (x, y), movement velocity, number of steps for each movement, along with the date and time as a text file. The data can be used to obtain the velocity and movement trajectory of robot for further study. The PPMS can be used for navigation or any other application in robotic studies. The paper presents the design and development of the system and its use in path monitoring of an autonomous wind tracking robot. Various experiments carried out and the results obtained are discussed.
... In [3] the authors have presented a very low-cost system which can be easily interfaced with any platform. This sensor system requires only minim two optical mice which can be placed in any position under the robot. ...
... The 3D reconstruction problem is solved easily with the proposed CALOS sensor system in detail in [7]. The initial pose in SLAM is the origin of the coordinate system that represents all possible locations and orientations [3]. ...
Full-text available
In recent years more and more emphasis was placed on the idea of autonomous mobile robots, researches being constantly rising. Mobile robots have a large scale use in industry, military operations, exploration and other applications where human intervention is risky. The accurate estimation of the position is a key component for the successful operation for most of autonomous mobile robots. The localization of an autonomous robot system refers mainly to the precise determination of the coordinates where the system is present at a certain moment of time. In many applications, the orientation and an initial estimation of the robot position are known, being supplied directly or indirectly by the user or the supervisor. During the execution of the tasks, the robot must update this estimation using measurements from its sensors. This is known as local localization. Using only sensors that measure relative movements, the error in the pose estimation increases over time as errors are accumulated. Localization is a fundamental operation for navigating mobile robots
... An alternative is to consider newer dead reckoning methods used in mobile robot applications. Bonarini et al. [21] showed that the problem of cumulative errors when using encoders on the device wheels for dead reckoning can be overcome using two optical mice attached at the bottom of a robot to accurately calculate robot pose. An alternate sensor fusion approach was successfully used in [22], where data from an optical mouse sensor, the yaw angle calculated from IMU data, and the wheel encoder data were combined using an extended Kalman filter to accurately estimate the position of a two-wheeled robot. ...
Full-text available
Access to graphical information plays a very significant role in today’s world. Access to this information can be particularly limiting for individuals who are blind or visually impaired (BVIs). In this work, we present the design of a low-cost, mobile tactile display that also provides robotic assistance/guidance using haptic virtual fixtures in a shared control paradigm to aid in tactile diagram exploration. This work is part of a larger project intended to improve the ability of BVI users to explore tactile graphics on refreshable displays (particularly exploration time and cognitive load) through the use of robotic assistance/guidance. The particular focus of this paper is to share information related to the design and development of an affordable and compact device that may serve as a solution towards this overall goal. The proposed system uses a small omni-wheeled robot base to allow for smooth and unlimited movements in the 2D plane. Sufficient position and orientation accuracy is obtained by using a low-cost dead reckoning approach that combines data from an optical mouse sensor and inertial measurement unit. A low-cost force-sensing system and an admittance control model are used to allow shared control between the Cobot and the user, with the addition of guidance/virtual fixtures to aid in diagram exploration. Preliminary semi-structured interviews, with four blind or visually impaired participants who were allowed to use the Cobot, found that the system was easy to use and potentially useful for exploring virtual diagrams tactually.
... Researchers used many technologies to achieve that, including the usage of Visible-Light Communication (VLC) [10]- [12], or exploiting wireless signal properties in radio-frequency based technologies like using the WiFi's Received Signal Strength Indicator (RSSI) to estimate the robot position with respect to known WiFi hotspots [13]- [16], or using Zigbee [17]- [19] and Bluetooth [20], [21] in a similar way, or using Ultrawide Band (UWB) and measure the Difference Time of Arrival (DToA) of the signal, then use this information to estimate distances to known UWB anchors [22]- [26]. Other researchers used dead-reckoning techniques to estimate the robot pose by integrating displacements in small time intervals; some studies used Inertial Measurement Units (IMUs) [27]- [29], others used Optical Flow Sensors widely known for being used in computer mice [30]- [33]. However, if used alone, dead-reckoning is known to be error cumulative; therefore, researchers generally use this technique along with other type of sensors like cameras, LiDARs, or radio-frequency sensors. ...
Conference Paper
Full-text available
Warehouses and industrial sites are getting more and more interest in automating their workflow; in such an environment, a robust localization method is required to accomplish safe navigation indoors. One widely used scheme is the usage of custom AGVs and dedicated infrastructures to automate moving goods within the warehouse; however, such a solution needs to modify the infrastructure or to make custom robots that fit the existing infrastructure, which requires an important investment. In this paper, we present and validate the SmartTrolley, a generic, modular, and scalable experimental platform for usage in warehouses and industrial sites; able to localize itself in the environment using a scan matching and EKF based indoor Simultaneous Localization and Mapping (SLAM) algorithm.
... The use of multiple optical mice has been proposed as a solution for robotic odometry [13]. Preliminary research for the present work included an attempt to reproduce one proposed solution using two optical mice and calculating position and orientation from the difference in arc lengths traced by these mice [14]. Accumulating error is inherent to any dead reckoning procedure, but the attempts to reproduce this work found the error so excessive as to make this approach impractical for arm motion capture. ...
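The two-mouse odometry referred to in [14] derives rotation from the difference in the arc lengths traced by the two sensors and translation from their mean. A minimal sketch, under the assumption that the mice sit symmetrically on the robot's lateral axis a known baseline apart (function and variable names are hypothetical, not the cited paper's formulation):

```python
import math

def two_mouse_update(pose, d1, d2, baseline):
    """Update pose (x, y, theta) from two mouse readings.

    d1 and d2 are (dx, dy) displacements in each mouse's own frame
    (x forward, y lateral); the mice sit at +/- baseline/2 along the
    robot's lateral axis.  Rotation follows from the difference of
    the forward arc lengths, translation from their mean."""
    x, y, theta = pose
    # Rotation: difference of forward arc lengths over the baseline.
    dtheta = (d2[0] - d1[0]) / baseline
    # Translation: mean displacement of the two sensors (body frame).
    dx_r = (d1[0] + d2[0]) / 2.0
    dy_r = (d1[1] + d2[1]) / 2.0
    # Rotate the body-frame displacement into the world frame,
    # using the midpoint heading for the step.
    mid = theta + dtheta / 2.0
    x += dx_r * math.cos(mid) - dy_r * math.sin(mid)
    y += dx_r * math.sin(mid) + dy_r * math.cos(mid)
    return x, y, theta + dtheta
```

Because the baseline divides the arc-length difference, small per-reading noise is amplified in the heading estimate, which is consistent with the excessive accumulated error the authors report when reproducing the approach.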
This paper details the design, construction, and validation of a virtual air hockey table intended as a low-cost motion-capture solution with potential entertainment and research applications, relying on open-source software (e.g., Linux, Blender, GIMP, and Panda3D) and off-the-shelf hardware (e.g., rotary potentiometers, an Arduino microcontroller, and an LCD projector). In its current configuration, a subject attempts to hit a virtual puck onto a predetermined goal by means of a physical mallet composed of a three-bar mechanism with potentiometers at its joints. Mallet-puck collision detection is performed in near-real time, and the puck's initial velocity is calculated using a finite-difference method. Puck motion, including wall and obstacle collisions, is calculated using the Euler method. An example use case is provided wherein the device was used for a motor-control study of inter-trial variation in skilled task performance.
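The Euler-method puck integration described above can be sketched as follows, assuming an axis-aligned table and elastic wall reflections (the function name and the simple friction term are hypothetical, not the project's code):

```python
def simulate_puck(pos, vel, walls, dt, steps, friction=0.0):
    """Advance the puck state with the explicit Euler method,
    reflecting the velocity on axis-aligned wall collisions
    (elastic bounce).  walls = (xmin, xmax, ymin, ymax)."""
    x, y = pos
    vx, vy = vel
    xmin, xmax, ymin, ymax = walls
    for _ in range(steps):
        x += vx * dt
        y += vy * dt
        if x < xmin or x > xmax:   # bounce off left/right wall
            vx = -vx
            x = min(max(x, xmin), xmax)
        if y < ymin or y > ymax:   # bounce off top/bottom wall
            vy = -vy
            y = min(max(y, ymin), ymax)
        # Optional linear damping to mimic table friction.
        vx *= 1.0 - friction * dt
        vy *= 1.0 - friction * dt
    return (x, y), (vx, vy)
```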
Full-text available
C-arms are commonly used during orthopaedic surgeries to provide real-time static or dynamic fluoroscopic imaging. These devices are mostly utilized for qualitative assessment during operations; several advancements, such as C-arm tracking, must be accomplished to make them capable of providing quantitative measurements. This thesis presents two major contributions to C-arm quantification: (1) development and testing of a monocular visual odometry method to track the C-arm base and (2) development and testing of a particular application, estimating the pose of an intramedullary (IM) nail for fracture surgery. The proposed base-tracking system can either be integrated with a C-arm joint-tracking module or employed on its own, e.g., for steering. An IM-nail pose estimation method is proposed in this research that is capable of reporting the position and orientation of an inserted IM-nail. The proposed method can help reduce both radiation exposure and time during surgery.
Conference Paper
A new generation of smartphone accessories is emerging to support 2-D and 3-D printing and model creation. These mobile, handheld devices require continuous self-location with position accuracy approaching 0.001 inch, well beyond the capabilities of commercial, consumer-grade wireless positioning technologies. We consider the problem of accurately tracking the motion of a handheld device that provides free-hand, high-resolution image 'drawing' while being swiped across a fixed planar surface. We review the capabilities and limitations of existing short-range localization technologies, including optical navigation sensors, ultrasonic positioning devices, and Inertial Measurement Units (IMUs). We describe the testing apparatus we constructed to establish the ground-truth position of a device. We show how combining the complementary capabilities of these sensors can accurately locate a handheld device, potentially enabling a new class of smartphone imaging peripherals and applications.
Omni-directional mobile robots are very attractive because they have very good mobility, which makes them appropriate when they have to move in tight areas, avoid obstacles, and find the way to the next location. To move with precision in such environments, accurate estimation of the position is very important. The authors provide in this paper information about the design of an omni-directional robot and its control system. Preliminary ideas about the design of an odometer are also presented.
Mobile robots are widely used in industry, military operations, exploration, and other applications where human intervention is risky. When a mobile robot has to move in small and narrow spaces and to avoid obstacles, mobility is one of its main issues. An omni-directional drive mechanism is very attractive because it guarantees very good mobility in such cases. Accurate estimation of the position is also a key component for the successful operation of most autonomous mobile robots. In this work, some odometry aspects of an omni-directional robot are presented and a simple odometer solution is proposed.
Full-text available
This paper presents a low-cost self-localization system suitable for small mobile autonomous robots. This system is particularly adapted to situations in which the robots might suffer undesirable movements, during which the wheels slip over the ground, caused either by collisions with non-detected obstacles or collisions with other robots. These situations are normally fatal for encoder-based odometry. The system was integrated in the Bulldozer IV robot, developed to participate in a robotics contest, the Micro-Rato 2001, where situations as those referred to above are frequent. The system combines azimuth from an analog compass with odometry from an optical PC mouse and allows the robot, within an unknown maze, to return to the start point after having reached a given goal. This paper presents a brief discussion on the techniques commonly used to provide the notion of self-localization in mobile autonomous robots. Then, the developed self-localization system is described, as well as its integration in the navigational system of the Bulldozer IV robot. Finally, a set of experimental results is presented that allows characterizing the performance of the navigational/self-localization system, illustrating its robustness in terms of resilience to externally forced movements.
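The compass-plus-mouse combination described in the abstract can be sketched as a simple fusion step: the compass supplies an absolute heading, so only position is integrated and heading error does not accumulate (names are illustrative, not the Bulldozer IV code):

```python
import math

def fuse_step(pos, dx, dy, heading):
    """Combine an absolute compass heading with a body-frame mouse
    displacement (dx forward, dy lateral).  The compass fixes the
    orientation directly, so the dead-reckoned quantity is position
    only and heading drift does not build up."""
    x, y = pos
    x += dx * math.cos(heading) - dy * math.sin(heading)
    y += dx * math.sin(heading) + dy * math.cos(heading)
    return x, y
```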
Conference Paper
Full-text available
The authors describe a method for locating a mobile robot moving in a known environment. This technique combines position estimation from odometry with observations of the environment from a mobile camera. Fixed objects in the world provide landmarks which are listed in a database. The system calculates the angle to each landmark and then orients the camera. An extended Kalman filter is used to correct the error between the observed and estimated angle to each landmark. Results from experiments in a real environment are presented
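The correction step described above, blending the error between observed and estimated landmark angles into the state, can be illustrated with a one-dimensional Kalman-style update. This is a scalar sketch, not the authors' full extended Kalman filter; the names are hypothetical:

```python
import math

def heading_update(theta_est, var_est, bearing_meas, bearing_pred, var_meas):
    """Scalar Kalman-style correction: the innovation is the wrapped
    difference between the observed and predicted angle to a landmark,
    and the gain blends it into the heading estimate."""
    # Wrap the angular innovation into (-pi, pi].
    innovation = math.atan2(math.sin(bearing_meas - bearing_pred),
                            math.cos(bearing_meas - bearing_pred))
    gain = var_est / (var_est + var_meas)
    theta_new = theta_est + gain * innovation
    var_new = (1.0 - gain) * var_est
    return theta_new, var_new
```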
The principal components and capabilities of Blanche, an autonomous robot vehicle, are described. Blanche is designed for use in structured office or factory environments rather than unstructured natural environments, and it is assumed that an offline path planner provides the vehicle with a series of collision-free maneuvers, consisting of line and arc segments, to move the vehicle to a desired position. These segments are sent to a low-level trajectory generator and closed-loop motion control. The controller assumes accurate knowledge of the vehicle's position. Blanche's position estimation system consists of an a priori map of its environment and a robust matching algorithm. The matching algorithm also estimates the precision of the corresponding match/correction, which is then optimally (in a maximum-likelihood sense) combined with the current odometric position to provide an improved estimate of the vehicle's position. The system does not use passive or active beacons. Experimental results are reported.
Conference Paper
This work describes an investigation to reduce the positioning error of a three-wheeled middle-size robot by using a modified odometry system. In this technique, the positioning sensors (shaft encoders) are mounted on three free-running wheels, so the slippage of the driving wheels does not affect the sensor measurements. This decreases the cumulative error of the system. This mechanism, accompanied by an omnidirectional vision system, provides a reliable and accurate self-localization method for any three-wheel-drive robot. Experimental results have shown performance improvements of up to 86% in orientation error and 80% in position error.
Conference Paper
This paper describes a practical method for reducing odometry errors caused by kinematic imperfections of a mobile robot. These errors, here referred to as "systematic" errors, stay almost constant over a prolonged period of time. Performing an occasional calibration as described here will increase the robot's odometric accuracy and reduce operation cost, because an accurate mobile robot requires fewer absolute positioning updates. Many manufacturers or end-users calibrate their robots, usually in a time-consuming and non-systematic trial-and-error approach. By contrast, the authors' method is systematic, provides near-optimal results, and can be performed easily and without complicated equipment. Experimental results are presented that show a consistent improvement of at least one order of magnitude in odometric accuracy (with respect to systematic errors) for a mobile robot calibrated with the procedure described in this paper.
Odometry is the most widely used method for determining the momentary position of a mobile robot. This paper introduces practical methods for measuring and reducing odometry errors that are caused by the two dominant error sources in differential-drive mobile robots: 1) uncertainty about the effective wheelbase; and 2) unequal wheel diameters. These errors stay almost constant over prolonged periods of time. Performing an occasional calibration as proposed here will increase the odometric accuracy of the robot and reduce operation cost because an accurate mobile robot requires fewer absolute positioning updates. Many manufacturers or end-users calibrate their robots, usually in a time-consuming and nonsystematic trial and error approach. By contrast, the method described in this paper is systematic, provides near-optimal results, and it can be performed easily and without complicated equipment. Experimental results are presented that show a consistent improvement of at least one order of magnitude in odometric accuracy (with respect to systematic errors) for a mobile robot calibrated with our method
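The calibration idea above can be sketched by folding correction constants for unequal wheel diameters and for the effective wheelbase into the differential-drive odometry increment. The constants and names here are hypothetical; in practice they come from a benchmark run such as UMBmark:

```python
def corrected_odometry(dl, dr, wheelbase, cl=1.0, cr=1.0, cb=1.0):
    """Differential-drive odometry increment with calibration
    constants: cl and cr scale the left/right wheel distances
    (compensating unequal wheel diameters), cb scales the effective
    wheelbase.  Returns (ds, dtheta) for one sampling interval."""
    dl *= cl
    dr *= cr
    ds = (dl + dr) / 2.0                    # translation of midpoint
    dtheta = (dr - dl) / (cb * wheelbase)   # rotation increment
    return ds, dtheta
```

With well-chosen constants, a robot whose left wheel is slightly undersized no longer reports a spurious rotation when driving straight, which is exactly the class of systematic error the calibration targets.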
This paper presents Internal Position Error Correction (IPEC), a new method for accurate and reliable dead-reckoning with mobile robots. The IPEC method has been implemented on our recently developed Multi-Degree-of-Freedom (MDOF) mobile platform, a vehicle in which two differential-drive mobile robots (called "trucks") are physically connected through a compliant linkage. In addition to its four wheel encoders, the MDOF platform has one linear and two rotary internal encoders, which allow measurement of the relative distance and bearing between the two trucks. During operation, both trucks perform conventional dead-reckoning with their wheel encoders. But, in addition, the IPEC method uses information from the internal encoders to detect and correct dead-reckoning errors as soon as they occur. Our system, called Compliant Linkage Autonomous Platform with Position Error Recovery (CLAPPER), requires neither external references (such as navigation beacons, artificial landmarks, known floorplans, or satellite signals), nor inertial navigation aids (such as accelerometers or gyros). Nonetheless, the experimental results included in this paper show one to two orders of magnitude better positioning accuracy than systems based on conventional dead-reckoning. The CLAPPER corrects not only systematic errors, such as different wheel diameters, but also non-systematic errors, such as those caused by floor roughness, bumps, or cracks in the floor. These features are made possible by exploiting the new Growth-Rate Concept for dead-reckoning errors that is introduced in this paper for the first time. The Growth-Rate Concept distinguishes between certain dead-reckoning errors that develop slowly and other dead-reckoning errors that develop quickly. Based on this concept, truck A freque...
Cox, I. (1991). Blanche - an experiment in guidance and navigation of an autonomous mobile robot. IEEE Transactions on Robotics and Automation, 7(3):193-204.
Santos, F. M., Silva, V. F., and Almeida, L. M. (2002). A robust self-localization system for a small mobile autonomous robot. In Proceedings of ISRA.
USB Implementers Forum (2001). Universal serial bus: Device class definition for human interface devices. Technical Report 1.11, USB.