The Khepera IV Mobile Robot: Performance
Evaluation, Sensory Data and Software Toolbox
Jorge M. Soares¹,², Iñaki Navarro¹, and Alcherio Martinoli¹

¹ Distributed Intelligent Systems and Algorithms Laboratory,
École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland.
² Laboratory of Robotics and Systems in Engineering and Science,
Instituto Superior Técnico, University of Lisbon, Av. Rovisco Pais, 1049-001 Lisboa, Portugal.
Abstract. Taking distributed robotic system research from simulation to the real
world often requires the use of small robots that can be deployed and managed in
large numbers. This has led to the development of a multitude of these devices, de-
ployed in the thousands by researchers worldwide. This paper looks at the Khepera
IV mobile robot, the latest iteration of the Khepera series. This full-featured differ-
ential wheeled robot provides a broad set of sensors in a small, extensible body,
making it easy to test new algorithms in compact indoor arenas. We describe the
robot and conduct an independent performance evaluation, providing results for
all sensors. We also introduce the Khepera IV Toolbox, an open source framework
meant to ease application development. In doing so, we hope to help potential
users assess the suitability of the Khepera IV for their envisioned applications and
reduce the overhead in getting started using the robot.
1 Introduction
A wide range of mobile robotic platforms can be found nowadays in the market and across
robotic laboratories worldwide. Some of them have been used by many researchers,
reaching a critical mass that elevated them to de facto standards in their domain of
use [5, 10, 11]. Convergence to these shared platforms has been argued to improve
collaboration and repeatability, allowing for easy validation or refutation of algorithms
under study [1, 18].
The success of a robotic platform does not depend solely on its technical qualities,
but also on the set of accompanying tools, such as libraries, management scripts, and
suitable simulators. Among small indoor robots, one platform that achieved widespread
acceptance is the Khepera III [13, 15]. Released in 2006, the Khepera III has seen over
600 sales to 150 universities worldwide and has been used in hundreds of publications.
In our lab, it has been successfully employed across diverse research topics, including
odor sensing [17], navigation and localization [13], formation control [6], flocking [12],
and learning [3], as well as numerous student projects.
In this paper we present and test the new Khepera IV robot designed by K-Team,
the successor to the Khepera III. Released in January 2015, the Khepera IV is a differential
wheeled mobile robot with a diameter of 14 cm (see Fig. 1). It is equipped with 12
Fig. 1. The Khepera IV robot (image courtesy of K-Team).
infrared sensors, five ultrasound sensors, two microphones, and a camera. Proprioceptive
sensors include two wheel encoders and an inertial measurement unit (IMU). Wireless
communication can be accomplished using Bluetooth or 802.11b/g, and processing takes
place in a Gumstix embedded computer running GNU/Linux.
We perform an exhaustive test of the sensors and actuators in order to understand
their performance and create an accurate model of the robot. The data collected in the
process is made freely available to other researchers, who will be able to use it for
deriving and validating their own models. In addition, we present an open source toolbox
designed in our lab, composed of a collection of scripts, programs and code modules for
the Khepera IV robot, enabling fast application development and making it easier to run
multi-robot experiments. Both the datasets and the Khepera IV Toolbox are available for
download on our website.
The remainder of this article is organized as follows. In Section 2 we describe
in detail the Khepera IV. Section 3 focuses on an exhaustive performance test of the
different sensors and actuators of the robot. In Section 4 we introduce two software
packages that complement the Khepera IV. Finally, Section 5 draws the conclusions
about the Khepera IV robot.
2 Technical Description
The Khepera IV is a small differential wheeled robot designed for indoor use. It is
shaped like a cylinder, with a diameter of 14.08 cm and a ground-to-top height of
5.77 cm (wheels included). Its outer shell is composed of two hard plastic parts with
slots for the sensors and actuators. Inside, it follows a stacked PCB design. The complete
robot weighs 566 g. Figure 2 shows the technical drawings for the robot.
The two actuated wheels are 42 mm in diameter (including the O-rings that act as
tires) and are centered on each side of the robot, spaced 10.54 cm apart. Two caster
ball transfer units, at the front and at the back, provide the remaining contact points.
This solution results in 0.5-1 mm of ground clearance, making the robot very stable but
preventing its use on any surface that is not effectively flat and smooth.
Fig. 2. Bottom, top, front, and left views of the robot (image courtesy of K-Team).
2.1 Electronics
The brain of the robot is a Gumstix Overo FireSTORM COM, an off-the-shelf embedded
computer that carries a Texas Instruments DM3730 800MHz ARM Cortex-A8 Processor
with a TMS320C64x Fixed Point DSP core, 512 MB of DDR LPDRAM, and 512 MB of
NAND Flash memory. The robot ships with a pre-installed 4 GB microSD card for user
programs and data. A Wi2Wi W2CBW003C transceiver provides both 802.11b/g (WiFi)
and Bluetooth 2.0+EDR capabilities using internal antennas.
Low-level aspects of the robot are managed by a dsPIC33FJ64GS608 micro-
controller that builds a bridge between the embedded computer and the built-in hardware.
Additional devices can be connected via an extension bus on top of the robot, as well as
an external USB port.
Energy is provided by a 3400 mAh 7.4 V lithium-ion polymer battery. The battery
is not swappable and can be charged in approximately 5 hours using the charging jack.
Support is also provided for charging from the extension bus (allowing for the use of
external, stackable battery packs) and from a set of contacts under the body of the robot
(designed for automatic charging stations).
2.2 Sensors and Actuators
The Khepera IV robot is equipped with a rich set of sensing devices:
Twelve Vishay Telefunken TCRT5000 reflective optical sensors. Eight of these
sensors are equally spaced in a ring around the robot body, while four of them
are downward-facing. When in proximity mode, the sensors emit at a wavelength of
950 nm and their published range is 2-250 mm. They may also operate in passive
mode and measure ambient light. The sampling frequency for the infrared sensors is
200 Hz, regardless of the mode of operation.
Five Prowave 400PT12B 40 kHz ultrasonic transceivers. The sensors' published
range is 25-200 cm with a beam angle of 85° at -6 dB, and a sensor can be
sampled every 20 ms. The effective sampling rate depends on the number of sensors
enabled, ranging from 50 Hz for a single sensor to 10 Hz if the whole set is in use.
A center-mounted single-package ST LSM330DLC iNEMO inertial measurement
unit (IMU), featuring a 3D accelerometer and a 3D gyroscope. The accelerometer is
configured to a ±2 g range and a 100 Hz data rate, and the gyroscope is configured
to a ±2000 dps range and a 95 Hz data rate. Data are read by the micro-controller
in groups of 10; therefore, a new set of accelerometer readings is available every
100 ms and a new set of gyroscope readings is available every 105 ms.
Two Knowles SPU0414HR5H-SB amplified MEMS microphones, one on each side.
The omnidirectional microphones have a gain of 20 dB and a frequency range of
100-10000 Hz. The rated SNR is 59 dB and the sensitivity is -22 dBV at 1 kHz.
One front-mounted Aptina MT9V034C12ST color camera with a 1/3" WVGA
CMOS sensor, yielding a resolution of 752x480 px. The robot comes with a fixed-
focus 2.1 mm lens with an IR cut filter, mounted on an M12x0.5 thread. The specified
fields of view are 150° diagonal, 131° horizontal and 101° vertical.
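The round-robin ultrasound scheduling described above is easy to make concrete. The sketch below (plain Python for illustration, not robot code) reproduces the quoted 50 Hz and 10 Hz per-sensor figures:

```python
# Effective per-sensor sampling rate for the ultrasound ring, assuming the
# transceivers are fired round-robin with one 20 ms slot per measurement.
FIRING_INTERVAL_S = 0.020  # one measurement slot per enabled sensor

def effective_rate_hz(enabled_sensors: int) -> float:
    """Update rate seen by each enabled sensor, in Hz."""
    if not 1 <= enabled_sensors <= 5:
        raise ValueError("the Khepera IV has five ultrasound sensors")
    return 1.0 / (FIRING_INTERVAL_S * enabled_sensors)

print(effective_rate_hz(1))  # 50.0 Hz with a single sensor enabled
print(effective_rate_hz(5))  # 10.0 Hz with the whole ring in use
```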
Motion capabilities are provided by two Faulhaber 1717 DC motors, one driving
each wheel. Each motor has 1.96 W nominal power, transferred through two gearboxes
with a 38:1 overall gear ratio and 66.3 % overall efficiency, yielding 1.3 W usable power
per wheel. The motors are paired with Faulhaber IE2-128 high-resolution encoders, with
a full wheel revolution corresponding to 19456 pulses. This yields approximately 147.4
pulses per millimeter of wheel displacement. The motor speed is regulated through pulse
width modulation (PWM), and the motors can be set to different modes: closed-loop
speed control, speed-profile control and position control, as well as open loop.
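The encoder arithmetic can be checked directly: with 19456 pulses per wheel revolution and the nominal 42 mm wheel diameter, the pulses-per-millimeter figure follows (an illustrative sketch; the nominal diameter gives ~147.5, and the quoted 147.4 presumably reflects the effective rolling diameter):

```python
import math

PULSES_PER_REVOLUTION = 19456  # Faulhaber IE2-128 encoder, after the gearboxes
WHEEL_DIAMETER_MM = 42.0       # including the O-ring tires

# pulses per millimetre of wheel displacement
PULSES_PER_MM = PULSES_PER_REVOLUTION / (math.pi * WHEEL_DIAMETER_MM)

def pulses_to_mm(pulses: int) -> float:
    """Convert an encoder tick count to linear wheel displacement in mm."""
    return pulses / PULSES_PER_MM

print(round(PULSES_PER_MM, 2))        # 147.45 with the nominal diameter
print(round(pulses_to_mm(19456), 1))  # one full revolution: ~131.9 mm
```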
The robots are equipped with three RGB LEDs, mounted on the top of the robot in
an isosceles triangle, with light guides to the top shell. The LED color can be controlled
with 6-bit resolution on each channel, making them useful for tracking and identification.
Finally, a PUI Audio SMS-1308MS-2-R loudspeaker, with nominal power 0.7 W, SPL
88 dBA and frequency range 400-20000 Hz can be used for communication or interaction.
2.3 Extension Boards
The native functionality of the robot can be extended through the use of generic USB or
Bluetooth devices, or by designing custom boards plugging into the KB-250 bus. This
100-pin connection provides power, I²C, SPI, serial and USB buses, as well as more
specific lines for, e.g., LCD or dsPIC interfacing. K-Team commercializes several boards,
including a gripper, a laser range finder, and a Stargazer indoor localization module.
The interface is compatible with that of the Khepera III, and existing boards should
work with no alterations. Our lab has, in the past, developed several boards that we are
now using with the new robot, including an infrared relative localization board [14], an
odor sensing board [8] and a 2D hot-wire anemometer [8], as well as a power conversion board.
Mechanically, however, the different shape of the robot shell may require changes to
existing hardware. Boards with large components on the underside can be paired with an
additional spacer, while boards inducing significant stress on the connectors should be
attached either magnetically or using the screw-in mounting points. Depending on their
size and construction, boards may obstruct the view to the tracking LEDs.
Table 1. nbench results for the Khepera IV, Khepera III, and a workstation.

                Khepera IV   Khepera III   Workstation
Numeric sort        212.92        168.76       2218.40
Fourier             856.75        184.46      43428.00
Assignment            3.65          1.10         52.65
3 Performance Assessment
A core part of this paper is the evaluation conducted for the Khepera IV robot and
its sensors. This work serves two purposes: informing potential users of the expected
behavior and performance of each component, and allowing for the development of
robot models. While data is presented here in summarized form, the datasets for each
experiment are available on our website.
To this effect, we have undertaken a diligent effort to test all relevant sensors and
actuators. We benchmarked the on-board computer, providing an idea of how much
algorithmic complexity the robot can handle. We determined the response of the infrared
and ultrasound sensors, to determine operational ranges and make it possible to define
sensor models. We looked into the camera distortion and the microphone directionality.
We assessed the accuracy of the odometry, and provided a superficial analysis of the
IMU signals. We have also tested the motor response and the robot’s energy use.
3.1 Computation
The computational performance of the embedded Overo Firestorm COM computer was
assessed using nbench, a GNU/Linux implementation of the BYTEmark benchmark
suite. The same tests were run in the Khepera IV, the Khepera III and a typical mid-range
desktop computer, equipped with an Intel Core i7 870 CPU. For both Khepera robots,
we used the precompiled binaries available on the Gumstix ipkg repositories, while for
our reference computer the program was compiled from source using gcc 4.8.2. The
results are presented in Table 1.
The three tests are respectively representative of integer arithmetic, floating point
arithmetic and memory performance. The benchmark evaluates single-core performance,
and therefore does not benefit from the additional cores on the desktop machine. Further-
more, it is not optimized for the DSP extensions on the FireSTORM.
In comparison to the Khepera III, there is a very significant increase in floating
point and memory performance, enabling the implementation of more complex algo-
rithms on-board. However, performance is still limited when compared to a desktop
computer, which may justify offloading computations, e.g., if undertaking extensive
video processing.
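The per-test ratios implied by Table 1 can be computed directly (illustrative arithmetic only):

```python
# nbench index scores from Table 1; compute per-test ratios to quantify the
# generational improvement and the remaining gap to a desktop machine.
scores = {
    "Numeric sort": {"khepera_iv": 212.92, "khepera_iii": 168.76, "workstation": 2218.40},
    "Fourier":      {"khepera_iv": 856.75, "khepera_iii": 184.46, "workstation": 43428.00},
    "Assignment":   {"khepera_iv": 3.65,   "khepera_iii": 1.10,   "workstation": 52.65},
}

for test, s in scores.items():
    gen = s["khepera_iv"] / s["khepera_iii"]  # Khepera IV vs Khepera III
    gap = s["workstation"] / s["khepera_iv"]  # desktop vs Khepera IV
    print(f"{test}: {gen:.1f}x over Khepera III, {gap:.1f}x below the desktop")
```

The Fourier test shows the largest generational jump (roughly 4.6x), consistent with the improved floating point performance noted above.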
3.2 Infrared Sensors
To determine the response of the infrared sensors, the robot was placed in a dimly lit
room next to a smooth, light-colored wall, with the front sensor directly facing the wall.
The robot was moved along a perpendicular line to the wall, in steps of 1 cm from 0 cm
Fig. 3. (a) Box plot of the infrared sensor response. (b) Box plot of the ultrasound sensor response.
For distances greater than 260 cm, the sensor consistently returns the code for no-hit (1000), with
only outliers present in the plot.
(i.e. touching the wall) up to 10 cm, and in steps of 2 cm up to a maximum distance of
30 cm. For each distance, 5000 sensor readings were collected. The data is presented in
Fig. 3a in box plot form.
Due to the very low variation in readings, most boxes in the plot degenerate to line
segments. Below 4 cm, the sensor saturates at the maximum reading (1000). In the range
of 4-12 cm the response follows a smooth curve and, for longer distances, the measured
value is indistinguishable from background noise in the absence of obstacles.
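A simple piecewise model captures this qualitative behavior. In the sketch below, the decay constant and noise floor are hypothetical placeholders rather than values fitted to the published dataset:

```python
import math

# Illustrative piecewise model of the front infrared sensor response, following
# the qualitative shape of Fig. 3a: saturation below 4 cm, a smooth decay over
# 4-12 cm, and background noise beyond. K and NOISE_FLOOR are hypothetical.
SATURATION = 1000   # raw reading when closer than ~4 cm
NOISE_FLOOR = 60    # hypothetical ambient/background level
K = 3.0             # hypothetical decay constant

def ir_model(distance_cm: float) -> float:
    """Expected raw reading as a function of wall distance."""
    if distance_cm <= 4.0:
        return SATURATION
    if distance_cm >= 12.0:
        return NOISE_FLOOR
    # exponential decay from saturation down to the noise floor
    frac = math.exp(-K * (distance_cm - 4.0) / 8.0)
    return NOISE_FLOOR + (SATURATION - NOISE_FLOOR) * frac
```

Users deriving a real sensor model would fit the curve parameters to the raw dataset available on our website.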
3.3 Ultrasound Sensors
A similar protocol was followed for the ultrasound sensors, measured against the same
target. All sensors were disabled except for the front-facing one, in order to maximize
the sampling rate. The robot was placed at distances from 20 cm up to 300 cm, in steps
of 20 cm. For each distance, 5000 measurements were obtained. The data is presented in
Fig. 3b in box plot form.
The sensor is accurate and precise, with typical sub-cm standard error across the
entire published range. Above the 250 cm published range, sensor performance degrades
rapidly, with large numbers of ghost detections, generally the product of ground reflec-
tions. From 280 cm, the sensor mostly reports no hits, as expected. During the initial
experiments, we observed some problems with multiples of the actual distance being
returned when the obstacle was positioned at 60 cm distance. These appear to be due
to multi-path additive effects involving the floor, and disappeared when tested with a
different floor material.
We experimentally determined the ultrasound sensor beam angle, which is approximately
92° at 1 m, matching the specifications.
3.4 Camera
An example image captured using the robot camera in a well-lit room is presented in
Fig. 4a. 3DF Lapyx was used to process a set of 33 full-resolution (752x480 px) images of
a checkerboard pattern taken with the robot camera and extract its intrinsic calibration
parameters. The calibration results using Brown's distortion model are included in Table 2.

Fig. 4. (a) Example image captured by the robot camera. (b) Directivity pattern for the left and right
microphones using a 1 kHz source. The maximum recorded amplitude was taken as the reference.

Table 2. Camera calibration parameters, using Brown's distortion model.

Focal length      Fx 380.046
Principal point   Cx 393.38
Skew              0
Radial distortion

Over the entire set of images, this calibration results in a mean square reprojection
error of 0.251978 pixels.
3.5 Microphones
The microphones were tested using an omni-directional sound source emitting a 1 kHz
tone. The robot was placed one meter away, and slowly rotated in place while capturing
both microphones. The resulting wave files were bandpass filtered to remove motor noise
and extract the reference tone. Figure 4b shows the directionality diagrams for each
microphone, with a clear lateral main lobe for each microphone.
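The normalization behind such a directivity diagram is straightforward: per-angle amplitudes of the filtered tone are expressed in dB relative to the maximum. The sketch below uses made-up amplitude values for illustration:

```python
import math

# Normalization used for a directivity plot like Fig. 4b: express each
# per-angle amplitude in dB relative to the maximum recorded amplitude.
def directivity_db(amplitudes):
    """Map per-angle amplitudes to dB relative to the maximum."""
    peak = max(amplitudes)
    return [20.0 * math.log10(a / peak) for a in amplitudes]

# hypothetical amplitudes at three angles; halving amplitude costs ~6 dB
levels = directivity_db([0.5, 1.0, 0.25])
print([round(l, 1) for l in levels])  # [-6.0, 0.0, -12.0]
```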
3.6 Odometry
The odometry was tested by having the robot move a significant distance while cal-
culating its position by integrating the wheel encoder data. Two paths were tested: a
square with one-meter sides, and a circle with one-meter diameter. Multiple experiments
were run for each path, with robots moving at approximately 20 % of the top speed. The
surface, wheels and rollers were cleaned, and the odometry was calibrated before the
experiments. The calibration was performed as described in the odometry calibration
example from [7], which consists of a simplified version of the Borenstein method [2].
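For reference, a generic differential-drive dead-reckoning step using the robot's dimensions can be sketched as follows (a textbook formulation under a midpoint-heading approximation, not the toolbox code itself):

```python
import math

# Minimal dead-reckoning sketch for a differential-drive robot: integrate
# per-wheel encoder increments into a pose (x, y, theta). Constants follow
# the robot description (10.54 cm wheel base, ~147.4 pulses per mm).
WHEEL_BASE_M = 0.1054
PULSES_PER_M = 147.4 * 1000.0

def integrate(pose, d_left_pulses, d_right_pulses):
    """Advance (x, y, theta) by one pair of encoder increments."""
    x, y, theta = pose
    dl = d_left_pulses / PULSES_PER_M
    dr = d_right_pulses / PULSES_PER_M
    dc = (dl + dr) / 2.0               # displacement of the robot centre
    dtheta = (dr - dl) / WHEEL_BASE_M  # heading change
    # midpoint approximation for the heading during the step
    x += dc * math.cos(theta + dtheta / 2.0)
    y += dc * math.sin(theta + dtheta / 2.0)
    return (x, y, theta + dtheta)

# driving straight for 1474 pulses on each wheel moves ~1 cm forward
pose = integrate((0.0, 0.0, 0.0), 1474, 1474)
print(pose)  # approximately (0.01, 0.0, 0.0)
```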
An overhead camera was used with SwisTrack [9] to capture the ground truth at
20 Hz, while the robot odometry was polled at 100 Hz. The camera was calibrated using
Fig. 5. Odometry-derived and ground-truth tracks of the robot while describing four laps around a
one-meter square, and associated absolute error over time.
Fig. 6. Odometry-derived and ground-truth tracks of the robot while going in a one-meter diameter
circle for five minutes (approximately 14.5 laps), and associated absolute error over time.
Tsai’s method, reporting a mean distance error of 2 mm over the 16 calibration points. A
realistic estimate of the maximum error across the entire arena is in the order of 1 cm.
The origin of the trajectory was matched by subtracting the initially measured
position from the ground truth tracks, and the initial heading was matched by minimizing
the cumulative absolute error over the first 5% of position measurements. The error
metric is the Euclidean distance between the position estimated using the odometry and
the actual position of the robot.
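The heading-matching step can be sketched as a simple search over candidate initial headings; the sample fraction and grid resolution below are illustrative choices, and the toy tracks are made up:

```python
import math

# Sketch of the alignment procedure described above: the odometry track is
# assumed to already start at the ground-truth origin; pick the initial
# heading that minimizes the cumulative Euclidean error over the first
# samples (here the first 5%, with a minimum of two points).
def align_heading(odom, truth, n_angles=3600):
    head = max(2, len(odom) // 20)
    best = None
    for i in range(n_angles):
        a = 2.0 * math.pi * i / n_angles
        c, s = math.cos(a), math.sin(a)
        err = sum(
            math.hypot(c * x - s * y - tx, s * x + c * y - ty)
            for (x, y), (tx, ty) in zip(odom[:head], truth[:head])
        )
        if best is None or err < best[1]:
            best = (a, err)
    return best[0]

# toy check: a track recorded 90 degrees off should come back as ~pi/2
odom = [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0)]
truth = [(0.0, 0.0), (-1.0, 0.0), (-2.0, 0.0)]
print(align_heading(odom, truth))  # approximately pi/2
```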
The square experiment consisted of describing five laps around a square with side
length 1 m, totaling 20 m per experiment (discounting in-place corner turns). The trajec-
tory was programmed as a set of waypoints, with the position being estimated using the
odometry alone. As such, in this test, the odometry information is used in the high-level
control loop. An example run and resulting absolute error is presented in Fig. 5.
The circle experiment consisted of five minutes spent describing a circle of diameter
1 m, totaling approximately 14.5 laps and 46 m. The robot was set to a constant speed
at the beginning of the experiment, and the odometry was logged but not used for the
high-level control. The encoder information is, however, still used by the motor PID
controller to regulate the speed. An example run and resulting absolute error is presented
in Fig. 6.
Each experiment was repeated five times, in both the clockwise and counterclockwise
directions. The results are summarized in Table 3. For every set of experiments, we take
both the average absolute error over the trajectory and the maximum recorded error.
Table 3. Mean and maximum absolute error (m) for the odometry experiments. Each row is the
result of five runs.

                           Average error    Maximum error
Square  Clockwise          0.033 ± 0.006    0.105 ± 0.063
        Counterclockwise   0.033 ± 0.006    0.104 ± 0.038
Circle  Clockwise          0.056 ± 0.031    0.127 ± 0.067
        Counterclockwise   0.093 ± 0.027    0.206 ± 0.054
Fig. 7. Accelerometer and gyroscope signals for a single square path. The robot briefly pauses at
the end of each segment; the peaks in the angular rate correspond to the corners of the square.
Position estimation using odometry is, by nature, subject to error accumulation over
time. However, while the error does show an upward tendency, it is clear from
Figs. 5 and 6 that it is not monotonically increasing. As such, the average and maximum
error provide more useful information than the final error.
The error is larger for the circle experiments, as is the error variation. This is partially
expected, due to the longer experiment length and the larger fraction of circular motion.
The mechanical set-up of the robots appears to be very sensitive to minor imperfections
or dirt on the floor. Namely, the caster ball transfer units easily get clogged after some hours of
use, and even on seemingly flat surfaces the robot sometimes loses traction, significantly
reducing odometry performance. Nevertheless, the odometry is very accurate, typically
achieving maximum errors in the order of 0.5 % of the distance traveled.
3.7 IMU
Inertial sensors were tested by having the robot describe a square trajectory similar to
the one used in the previous section, with added pauses before and after each in-place
corner turn. The sensors were logged at their maximum frequency, and the captured
signals are plotted in Fig. 7. Separately, the scale and bias of the accelerometers were
calibrated using the procedure in [4], while for the gyroscopes the initial bias averaged
over 200 samples was subtracted from the measurements.
There is no visually apparent structure in the accelerometer data, while in the
gyroscope data the corner turns are easily observable. The fact that the robot is inherently
unstable in pitch, due to the 4-point support design, creates significant noise in the
accelerometer measurements due to the changing orientation of the gravity vector.
Fig. 8. (a) Self-reported overall power as a function of the motor speed command. Propulsion
power is the component above the 2.95 W baseline. (b) Motor speed over time, following a
maximum-speed request; the speeds were obtained from encoder value differences sampled at
100 Hz.
Superficial analysis of the data suggests that the IMU can be used for pitch and roll
estimation and, while the robot is not equipped with an absolute heading sensor, the
yaw delta can be estimated over short time frames. Position and velocity estimation, on
the other hand, was not accurate enough to be of use even when using Kalman-based
techniques with zero-velocity updates [16].
Given the high quality odometry, the IMU seems of little use for single-sensor dead-
reckoning but may complement the odometry in a sensor fusion approach. Perhaps more
realistically, it can be useful for attitude estimation, alternative interaction modes, and
vibration monitoring.
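As an illustration of the short-term yaw estimation mentioned above, the sketch below subtracts an initial bias estimate (averaged over 200 stationary samples, as in our calibration) and integrates the z-axis rate at the 95 Hz data rate. The trace is toy data, not a recorded signal:

```python
# Short-term yaw estimation from the gyroscope: subtract the initial bias,
# then integrate the z-axis angular rate at the 95 Hz gyroscope data rate.
GYRO_RATE_HZ = 95.0

def yaw_delta_deg(z_rates_dps, bias_samples=200):
    """Integrate bias-corrected yaw rate (deg/s) into a heading change (deg)."""
    bias = sum(z_rates_dps[:bias_samples]) / bias_samples
    dt = 1.0 / GYRO_RATE_HZ
    return sum((r - bias) * dt for r in z_rates_dps[bias_samples:])

# toy trace: 200 stationary samples at a 0.5 dps bias, then one second
# (95 samples) of turning at 90 dps on top of the same bias
trace = [0.5] * 200 + [90.5] * 95
print(yaw_delta_deg(trace))  # approximately 90 degrees
```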
3.8 Motors
Figure 8a shows the propulsion power curve, measured for a robot rotating in place at
different speed commands. Starting at a baseline consumption of 2.95 W with the robot
idle, power increases to a maximum of 5.63 W for speed 800, then decreasing to 4.64 W
at full speed.
The maximum speed achievable in speed control mode was determined by having the
robots move in a straight line while tracking their position with SwisTrack. For a speed
command of 1400, the robot achieved a speed of 0.93 m/s. At higher speed commands,
approaching the saturation limit of 1500, the robot is unable to move in a straight line
and the trajectory begins to curve, with no appreciable increase in linear speed.
The high torque-to-weight ratio allows the Khepera IV to quickly accelerate to
the desired speed. Figure 8b shows the results of requesting the maximum speed,
obtained by sampling the wheel encoders at 100 Hz. The robot
accelerates to 90 % of the top speed in 0.24 s, and achieves top speed in 0.339 s. This
corresponds to an average acceleration a = 2.74 m/s² to 100 % of the top speed.
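The acceleration figure is simple arithmetic on the two measured quantities:

```python
# Average acceleration to top speed: reaching the measured 0.93 m/s top speed
# in 0.339 s corresponds to roughly 2.74 m/s^2.
TOP_SPEED_MS = 0.93
TIME_TO_TOP_S = 0.339

accel = TOP_SPEED_MS / TIME_TO_TOP_S
print(round(accel, 2))  # 2.74
```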
3.9 Power
The number and type of devices activated and in use can significantly influence energy
use and limit autonomy. To estimate the potential impact, we used the built-in power
Table 4. Self-reported overall power for different activities.

Activity        Power (W)    Activity     Power (W)
Idle               2.95      Camera          3.12
CPU load           3.12      Ultrasound      3.00
Motors (50%)       5.58      Infrared        2.95
Motors (100%)      4.63      IMU             2.95
Fig. 9. Battery voltage over time, for idle and full-load situations.
management features to measure the energy use when performing different activities.
Note that the numbers in Table 4 do not necessarily reflect the power used by individual
components; for instance, when using the camera, the increase in power is mostly
justified by the heavy load placed on the CPU.
Some sensors, such as the infrared and IMU, use negligible energy when actively
queried. The largest chunk of power corresponds to idle consumption and is independent
of the devices in use. This is, in large part, due to the number of devices that are, by
default, initialized on boot, including the 802.11 and Bluetooth radios. Motor power is
strongly dependent on the load, and will vary depending on the robot weight, terrain,
changes in speed, and obstacles in contact with the robot.
We have also performed a long-term test intended to assess the maximum autonomy
of the robots, both in an idle situation (no devices active or programs running) and in a
full load situation (camera video streaming to computer, motors moving at full speed,
ultrasound sensors active). The voltage decay over time is shown in Fig. 9.
An idle robot starting from a full charge fails after approximately 8.7 h of continuous
use, while a robot using maximum power lasts approximately 5.2 h. This is in accordance
with previous results showing high idle consumption.
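These endurance figures are also consistent with the nominal battery capacity; a quick cross-check under a constant-power assumption:

```python
# Cross-check of the endurance figures: the 3400 mAh, 7.4 V pack stores about
# 25.2 Wh, which at the 2.95 W idle draw predicts roughly 8.5 h of autonomy,
# close to the 8.7 h measured above.
CAPACITY_WH = 3.4 * 7.4  # nominal capacity in Wh

def autonomy_h(power_w: float) -> float:
    """Predicted runtime at a constant power draw."""
    return CAPACITY_WH / power_w

print(round(autonomy_h(2.95), 1))   # ~8.5 h at idle
print(round(CAPACITY_WH / 5.2, 2))  # ~4.84 W implied average draw at full load
```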
4 Software
There are, at the moment, two open source libraries for Khepera IV application
development: K-Team has developed libkhepera, which ships with the robots, and our
lab has developed the Khepera IV Toolbox, which we make available as an open source
lab has developed the Khepera IV Toolbox, which we make available as an open source
package. Both provide similar base functionality, and are improvements on older libraries
for the Khepera III robot. The two libraries are independent, and programs using them
can coexist in the same robot, although it is not recommended that they run in parallel.
4.1 libkhepera
libkhepera is distributed with the Khepera IV. It allows complete control over the
robot hardware, generally at a fairly low level. It allows the user to configure devices,
read sensor data and send commands. These operations can be accomplished using
simple wrapper functions in the main library. Most functions return primitive data types,
although some still output data in unstructured buffers.
Conceptually, the library should provide an easy upgrade path for those using the
Khepera III equivalent. However, improvements and simplifications in the new library
will require changes to existing applications. Full documentation is provided with
the library. The outstanding limitation of libkhepera is the lack of higher-level
constructs, often forcing the user to write verbose code and re-implement frequently
used functionality.
4.2 Khepera IV Toolbox
The Khepera IV Toolbox is an evolution of the Khepera III Toolbox [7], with which it
shares a significant portion of the code. The initial motivation for its development was
providing a straightforward API that could be used with little concern for the underlying
details, while also fixing some usability and technical constraints of the robot-provided
library. At a basic level, it provides the same functionality as libkhepera, albeit in a
different shell. We have developed the API in a way that minimizes the number of lines
of code, trying to provide simpler functions that yield, with a single call, the desired
result. Most querying functions fill C structures that neatly package complex data.
In addition to the core functionality, the toolbox provides higher level modules that
implement support for frequent tasks. These include:
- NMEA, which allows easy processing of NMEA-formatted messages
- Measurement, which handles periodic measurement taking and supports arbitrary
  data sources
- OdometryTrack, which integrates wheel odometry information to provide a position
  estimate
- OdometryGoto, which supports waypoint navigation using this position estimate
There are also modules for facilitating I²C communication and for each of our
custom boards. It is easy to extend the toolbox with additional reusable modules, and the
build system makes it trivial to include them in applications.
The toolbox also provides an extensive set of scripts that expedite building, deploying
and running applications. These scripts take multiple robot IDs as arguments and perform
actions such as uploading programs, executing them, and getting the resulting logs,
allowing a user to coordinate experiments using relatively large swarms from a single
command line.
5 Conclusion
In this paper, we have presented the Khepera IV mobile robot and assessed the perfor-
mance of each of its parts. The robot clearly improves upon its predecessor, packing
powerful motors, more complete sensing capabilities, a capable computer, and a long
lasting battery, all inside a smaller and more stable shell.
The odometry is very accurate on non-slippery floors, making the less precise IMU
not very useful for navigation applications. The ultrasound sensors were found to be
precise along their entire operating range, while the infrared sensors have somewhat
limited range, creating a blind area between 12-20 cm in our experiments; these values
depend, of course, on the materials and environmental conditions, and longer ranges can
be obtained using specialized infrared-reflective material. The camera and microphones
provide good quality information, and are valuable additions to the robot. Among the
limitations, the restricted ground clearance has the greatest impact, making the robot
unfit for anything but flat surfaces.
We have also presented two software libraries for the robot, including our own, open
source, Khepera IV Toolbox. This library makes it easy to develop applications, and
enables the user to easily control multiple robots. It provides a clear upgrade path for
users working with the Khepera III robot and the corresponding Toolbox.
We have made the Khepera IV Toolbox code publicly available, together with the raw
datasets for all our experiments. In this way, we intend to help our colleagues develop
their own robot models and jump-start development on the Khepera IV, reducing the
platform overhead for future research and educational use.
Acknowledgments. We thank Claude-Alain Nessi and K-Team for their cooperation in the work leading up
to this paper and the material provided, and Thomas Lochmatter for his past work on the
Khepera III Toolbox and advice.