MARVIN: Mobile Autonomous Robot Vehicle for Investigation &
Navigation
Luis Andrade1, Marcelo Fajardo1, Christian Tutiven1, Edwin Valarezo2, Angel Recalde2, Ricardo Cajo2 and
Francisco Yumbla1
Abstract—Self-driving vehicles are a rising field of investigation in both the commercial and academic worlds, but most framework platforms currently are not designed for the Latin American infrastructure environment, especially regarding manufacturing costs and implementation. Therefore, this project involves designing, building, and programming a scaled self-driving vehicle based on the ROS2 infrastructure to offer a more economical alternative for this market. SLAM tools were implemented in the prototype so that it could map its environment in real time and navigate through it autonomously. In addition, a 3D model and a Gazebo-based simulation were developed to mirror the real test environment, so that the user can test the prototype in a simulated setting. All of this research, including the list of parts necessary to build the car and the open-source architecture, is uploaded to a GitHub repository. The final cost is considerably lower than that of its competitors; therefore, this product can be considered a viable alternative as a learning platform in the current market while maintaining the characteristics of autonomous vehicles.
Index Terms—Self-driving vehicles, ROS2, accessibility, movement control.
I. INTRODUCTION
Autonomous vehicles are considered the future of the automotive world. They not only enhance convenience for drivers and passengers but also seek to solve problems that normally occur as a consequence of human error, such as long-distance/long-duration traffic conditions and accidents caused by the driver's state of consciousness or by road hazards. Autonomous vehicles would also potentially optimize the use of land dedicated to vehicles: land dedicated to parking slots in densely populated cities would be greatly reduced, increasing the amount dedicated to human occupancy [1]. The current leaders in the autonomous vehicle market are Waymo, Tesla, and Ford [2].
1 Facultad de Ingeniería en Mecánica y Ciencias de la Producción, Escuela Superior Politécnica del Litoral, ESPOL, Campus Gustavo Galindo, Km. 30.5 Vía Perimetral, Guayaquil, 090902, Ecuador. fryumbla@espol.edu.ec
2 Facultad de Ingeniería en Electricidad y Computación, Escuela Superior Politécnica del Litoral, ESPOL, Campus Gustavo Galindo, Km. 30.5 Vía Perimetral, Guayaquil, 090902, Ecuador.

Fig. 1. Mobile Autonomous Robot Vehicle for Investigation and Navigation (MARVIN)

The Autopilot features that these companies implement and constantly improve are mostly based on the same principles and similar components. The driver sets the destination and the car's computer software calculates the most efficient route to get there. The car uses a mounted LIDAR (Light Detection and Ranging) sensor to create a 3D map of its current environment. Radar systems are placed at the front and back of the vehicle to help measure the distance to obstacles. Ultrasonic sensors are placed on each side so the car can detect nearby objects. A GPS (Global Positioning System) provides real-time location tracking and, in combination with an IMU (Inertial Measurement Unit, which uses a mix of accelerometers, magnetometers, and gyroscopes), accurately calculates the velocity and orientation of the vehicle, especially in places the GPS signal cannot reach, such as tunnels or covered roads. Video cameras are installed, preferably on the windshield, as they capture and build real-time 3D images of the road. They are mostly used to detect and interpret road signs such as stop signs, zebra crossings, and traffic lights, and to spot unexpected obstacles like pedestrians or animals. The car's computer gathers all the data obtained from the sensors and processes it at high speed in order to emulate a human's decision-making and perception process and control the driving of the vehicle [2].
The platforms for autonomous vehicles require a different infrastructure for Latin America compared to the US/Europe due to street infrastructure, traffic signs, the density of people crossing the street simultaneously (considering street vendors), and traffic laws. Across different regions of LATAM, the state of roads, highways, and streets may vary because of low maintenance. In addition, traffic signs differ from country to country, unlike in the US/Europe, where most countries share signage [3]. Countries such as Peru [4], Chile [5], and Brazil [6] have started to implement self-driving vehicles, leaving aside commercial purposes and focusing on trucks used in the mining industry.
For all these reasons, the MARVIN project was developed (Fig. 1). The Mobile Autonomous Robot Vehicle for Investigation and Navigation platform is a complete self-driving robot with an Ackermann steering model. MARVIN is affordable and accessible because it complies with the following requirements: its components are accessible to the Latin American user; the parts and pieces of the vehicle can be produced through additive manufacturing; the main structure of the autonomous vehicle allows scalability; and a user manual is provided with an easy-to-follow list of instructions on software installation and use.
The paper is organized as follows. Section II details current platforms that implement different types of electronics and frameworks. Section III describes the design and components of MARVIN. Section IV describes the implemented software framework. Section V shows the results in a real-life environment. Finally, Section VI presents the conclusions of the project and future scope.
II. RELATED WORK
Several platforms have been designed to replicate the user conditions of full-scale self-driving vehicles. Platforms like these must be flexible and versatile, considering that this is an ever-evolving field. For these platforms to become practical teaching products, they have to grant the user access to learning material in which the concepts of electronics, hardware, building instructions, and installation are illustrated in a user-friendly way, for example: the Ackermann-drive F1TENTH [7], Donkeycar [11], MuSHR [8], and Qcar [9] platforms.
F1TENTH is an open-source platform based on the ROS infrastructure, with a focus on the autonomous control of racing vehicles [7]. It implements laser rangefinders, one RGBD camera, and an NVIDIA Jetson TX2 as its main computer [10]. The required materials must be purchased by the customer, but the software and installation steps are provided by the team itself. The project is designed so that the vehicle can be driven both autonomously and manually by a client. It uses computer vision to perceive its environment and provides a virtual environment in which clients can test their own programs.
Donkeycar is an open-source platform based on Python libraries [11]. It uses a Jetson Nano/Raspberry Pi as its CPU, along with a camera, enabling it to navigate a race track autonomously. The platform offers a virtual simulator to learn about the vehicle in case it cannot be physically acquired [11]. The platform offers an instruction manual and a list of materials with their respective costs ready for purchase, or the complete kit can be bought directly from the suppliers.
Multi-agent System for non-Holonomic Racing (MuSHR) is an open-source platform designed for education and research. It comes with a built-in LiDAR, an electronic speed controller for the motor, a Jetson Nano, a step-down converter, and a wireless joystick. The supplier offers a list of materials with their respective costs; additionally, it includes files for 3D-printing parts and its own software for vehicle management [8].
Finally, the Quanser Qcar platform offers a scale vehicle model designed for academic research. It implements a 360º LiDAR, one RGBD camera, two side cameras, a microphone, an IMU sensor, and encoders, and it uses an NVIDIA Jetson TX2 as its processor. It comes with a custom-made motherboard [10] and software developed by the company Quanser, which allows the client to carry out research on mapping, navigation, machine learning, artificial intelligence, and more [9]. This platform stands out from the others in its manufacturing quality and cost.
Fig. 2. Overview of MARVIN: (a) labels for all the levels, including MARVIN's height and length; (b) lower level; (c) base level; (d) middle level; and (e) top level.
III. DESIGN PROTOTYPE
To develop the most appropriate model, both the chassis on which the parts are placed and the components to be used had to be measured precisely. The design must ensure that the customer can safely place the peripherals sequentially, along with each custom piece designed for additive manufacturing. The assembled structure must provide the greatest possible rigidity and a level surface for the components, guaranteeing their safety and optimizing their usefulness. Cable management is also kept as orderly as possible. The car is built on a Tbest 1:10 RC car chassis frame with four shock struts/dampers and 4WD. This frame is mostly used for building racing RC cars; thus, it has both the same driving system as a real-life vehicle and the size to carry a sturdy structure.
The following mechanical parameters were considered: the dimensions and positioning of each component; the user's comfort when assembling the structure with the peripherals included; the rigidity of the structure; cable management; ease of access in case a component needs to be replaced; and the possibility of producing all pieces with additive manufacturing. Furthermore, the design must be attractive and eye-catching for the client. The main structure consists of 6 pieces (the main base, two side pieces, two bases for components, and the front plate where the MARVIN logo is located) plus two extra pieces for fitting the RGBD camera onto the surface of the structure and the drive motor to the car's frame. The design of the custom pieces allows each component to fit exactly where it is best intended. The structure is designed to be built sequentially and encompasses four levels, showcased in Figure 2, while Figure 3 shows the electronic components:
Lower level (orange): The 20 kg digital servo that actuates the steering and the 4WD system driven by a brushless motor (a 520 DC gear motor with a Hall encoder) are located at this level. Because the vehicle's drive motor does not fit directly into the chassis, a piece was designed to mount it in such a way that the motor shaft and its toothed gear can mesh.
Base level (green): The ROSMASTER control board and the USB expansion board are placed at this level. The piece measures 227.00 x 122.00 x 5.00 mm and was designed to attach at the same mounting points as the car frame. It also has two gaps for cable management, placed there for the motor and battery cables. The control board is placed at the front of the base so that the power switch is easily reachable. The USB expansion board is placed at the back, enabling more manageable cable routing.
Middle level (purple): This level is designed to hold the main controller, in this case the Jetson Nano. To help with cable management, the Jetson Nano's USB ports face the same direction as the USB expansion board located at the base level.
Top level (blue): The vehicle's recognition peripherals (LiDAR, depth camera, and side IMX cameras) are placed at this level. These components sit at a height of 110 mm so that the vehicle has a 180º view that is not interrupted by other elements. The LiDAR, on the other hand, is placed at the highest point, at a height of 195 mm, so that no obstacles block it while mapping. Because the structure is designed to be built sequentially, it is recommended that all cable management be dealt with before moving to the next level, as previously stated and featured in Figure 2. Once every piece is created and assembled, the structure is 205 mm high, 387 mm long, and 185 mm wide. For MARVIN to be driven both manually and autonomously, it needs a proper software framework that makes all of these individual parts work as a whole.

Fig. 3. Overview of MARVIN hardware components.
IV. FRAMEWORK
For the vehicle to achieve self-driving, each of the components must work in tandem with the motor actuators in a closed-loop system, not only to let the car determine its position in its environment while it is moving, but also to ensure that it reacts appropriately and quickly when an unknown obstacle appears in its way. The MARVIN framework consists of real-time mapping, autonomous navigation, and computer vision. Figure 4 shows the conceptual diagram of the implemented software framework. Because MARVIN was developed on a ROS2 architecture, the communication between components is based on nodes and topics. The car can be manually controlled with either a wireless controller or a computer keyboard as a backup safety controller. The input topics for real-time mapping come from the IMU sensor, the odometer, and the LiDAR. First, the operator must manually map the environment in which the vehicle is located and save it to a file. Then, for the self-driving module, the input topic comes from the information in the saved map, which the vehicle uses to drive across the environment. The output of the navigation module consists of the speed commands sent to the actuator controller and the remaining distance between the robot and its final position, so that it can perform trajectory calculations. Furthermore, the topics published by the cameras (RGBD and IMXs) contain raw information. These are published as images to obtain a real-time view from the vehicle's perspective. The computer vision module also encompasses the LiDAR data; this way, the user always has a complete view of the vehicle and its environment. The software architecture implements ROS2 packages and is divided into three modules: real-time mapping, computer vision, and self-driving navigation. Each part is briefly described below.

Fig. 4. The conceptual framework design of MARVIN describes how the software architecture of the scaled and simulated vehicle works.
The Ackermann-steering controller: This is based on a differential-drive controller plugin, which subscribes to the /cmd_vel topic; this topic is published by either a joystick or a keyboard controller node (see Figure 4 for a depiction of manual and autonomous driving). At the same time, information from the IMU and the speed published by the expansion board are combined to generate odometry data as an output. This information is used by the positioning functions.
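As an illustration of this interface, the following is a minimal sketch (not MARVIN's actual code) of a ROS2 node that publishes velocity commands on /cmd_vel, the topic named above; the node name, publish rate, and command values are assumptions for demonstration.

```python
# Minimal sketch: publish velocity commands on /cmd_vel, the topic the
# Ackermann-steering controller subscribes to. Node name, rate, and
# values are illustrative; only the /cmd_vel topic comes from the text.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class ManualDriver(Node):
    def __init__(self):
        super().__init__('manual_driver')
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.send_command)  # 10 Hz

    def send_command(self):
        msg = Twist()
        msg.linear.x = 0.5   # forward speed (m/s)
        msg.angular.z = 0.2  # steering rate (rad/s)
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = ManualDriver()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```

In the real system, a joystick or keyboard node would fill in these values from user input rather than publishing constants.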
Real-time mapping and localization: All sensors come with an interface that translates raw input data into ROS messages. Slam Toolbox takes the data coming from the LiDAR sensor and the TF transforms (the 3D model used for visualization) from the robot's origin link to the base of the robot (odom → base_footprint) to generate a 2D map of the space. The Jetson Nano processes the data from the sensors in “online asynchronous” mode, which allows it to process a stream of information instead of saved data, always handling the most recent scan to avoid delays while generating the map of the environment. Slam Toolbox can also be used for localization.
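As a hedged example of this “online asynchronous” mode, the launch sketch below starts Slam Toolbox's asynchronous node; the frame and topic names follow the odom → base_footprint convention mentioned above, while the remaining parameter values are illustrative assumptions rather than MARVIN's actual configuration.

```python
# Minimal ROS2 launch sketch for Slam Toolbox in online asynchronous
# mode. Parameter values are assumptions for illustration.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='slam_toolbox',
            executable='async_slam_toolbox_node',
            name='slam_toolbox',
            output='screen',
            parameters=[{
                'odom_frame': 'odom',            # odometry frame
                'base_frame': 'base_footprint',  # robot base frame
                'scan_topic': '/scan',           # LiDAR scan topic
                'mode': 'mapping',               # or 'localization'
            }],
        ),
    ])
```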
Self-driving navigation: The Nav2 package is implemented. This navigation framework works with a behavior tree, the TF transforms, and an already generated map to create a “cost map,” and with the LiDAR data to calculate and provide the velocity commands to the motors. This allows the car to drive itself from position A to position B, avoiding any kind of obstacle that crosses its path. If the LiDAR senses a new obstacle that was not previously recorded, the transformation of the robot's position and orientation adapts to it and the robot navigates around it. A behavior tree is implemented to create smart navigation behavior [12]. The tools to enable self-driving navigation are exposed through the Rviz2 tool, which provides localization and navigation tools to the user.
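To make the A-to-B behavior concrete, the sketch below sends a single navigation goal to Nav2 using the nav2_simple_commander helper package; this package and the pose values are assumptions for illustration, since MARVIN drives goals through Rviz2 as noted above.

```python
# Minimal sketch: send one navigation goal to Nav2. Assumes the
# nav2_simple_commander helper is available; frame and pose values
# are illustrative.
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator

rclpy.init()
navigator = BasicNavigator()
navigator.waitUntilNav2Active()  # wait for Nav2 lifecycle nodes

goal = PoseStamped()
goal.header.frame_id = 'map'
goal.header.stamp = navigator.get_clock().now().to_msg()
goal.pose.position.x = 2.0   # target position B (meters, map frame)
goal.pose.position.y = 1.0
goal.pose.orientation.w = 1.0

navigator.goToPose(goal)
while not navigator.isTaskComplete():
    feedback = navigator.getFeedback()  # e.g. distance remaining
rclpy.shutdown()
```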
Computer vision: Both the RGB stereo cameras and the depth camera publish the images they capture through ROS topics, which can be visualized using the Rviz2 tool. A 1:1 3D model of the car is shown in the Rviz2 environment with its respective sensors, and it is in this environment that the grid map of the environment is generated. Even though this feature is not necessary for self-driving, it is implemented so that the user can visualize what the computer “sees” and to serve as a jumping-off point for research using RGBD cameras in this context.
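As a minimal sketch of how a user could consume these raw image topics outside Rviz2, the following node converts incoming images to OpenCV frames with cv_bridge; the topic name /camera/color/image_raw is a hypothetical placeholder, since the paper does not list MARVIN's exact topic names.

```python
# Minimal sketch: subscribe to a camera image topic and display it
# with OpenCV. The topic name is a hypothetical placeholder.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2

class CameraViewer(Node):
    def __init__(self):
        super().__init__('camera_viewer')
        self.bridge = CvBridge()
        self.sub = self.create_subscription(
            Image, '/camera/color/image_raw', self.on_image, 10)

    def on_image(self, msg):
        # Convert the ROS Image message to a BGR OpenCV frame.
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        cv2.imshow('MARVIN camera', frame)
        cv2.waitKey(1)

def main():
    rclpy.init()
    rclpy.spin(CameraViewer())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```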
The framework was designed for both the physical and the simulated prototype. A repository was created on GitHub (https://github.com/RAMEL-ESPOL/MARVIN) with the necessary packages to build the MARVIN framework for both the simulation and the scale vehicle (see Figure 5 for a detailed view of the repository architecture). The repository is organized for the client to download to their machine and/or directly to the Jetson Nano. The marvin_ws folder contains the packages to control the scale vehicle, while the marvin_sim_ws folder contains the packages for the simulation. In the case of the scale vehicle, the repository must be downloaded directly to the /root of the Jetson Nano; the Docker container is then built from the provided Dockerfile, which oversees creating the container, copying the repository folders into the container, and editing the container's .bashrc file so that the ROS2 packages work correctly. Inside the container, the client must compile only the marvin_ws package once to be able to use it. For the simulation, the user must only download the repository directly to an Ubuntu 20 machine with ROS2 already installed. The path /home/MARVIN/marvin_sim is dedicated to the simulation. The repository is designed to be easy to install and use, emulating “plug and play” technology. Once MARVIN's hardware is assembled and its software installed, it is ready for testing, as described in the following section.
Fig. 5. Overview of the MARVIN GitHub repository
V. SIMULATION AND REAL TEST
The MARVIN simulator and the real-life prototype were built with the same design principle, prioritizing scalability for the software framework, based on ROS2 (Foxy) and the Gazebo physics simulator. The simulator comprises the following parts: the 3D model, the Gazebo environment, navigation control, and robot visualization. A demonstration of MARVIN in a simulated environment is shown in Figure 6, while Figure 7 shows the scaled prototype in a real environment.
Fig. 6. Simulation environment

3D Model: The vehicle is described using a single XML macro (xacro) module, where the main structure link (base_link) is defined and to which all the rest of the car's components are connected. Each sensor has its own XML macro
module, to allow parameter customization depending on
the model implemented. Apart from the main structure, the basic MARVIN model is equipped with a LiDAR, two stereo RGB cameras, an RGBD camera, an IMU (Inertial Measurement Unit), and an Ackermann-steering controller. The 3D model is implemented in Rviz2 and is used for robot visualization.
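A minimal launch sketch of this pattern, assuming the standard ROS2 xacro and robot_state_publisher tooling, is shown below; the file path is a hypothetical placeholder rather than the repository's actual layout.

```python
# Minimal launch sketch: process the vehicle's xacro description and
# publish it for Rviz2 via robot_state_publisher. The file path is a
# hypothetical placeholder.
import xacro
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    # Expand the xacro macros into a plain URDF string.
    robot_description = xacro.process_file(
        '/path/to/marvin.urdf.xacro').toxml()
    return LaunchDescription([
        Node(
            package='robot_state_publisher',
            executable='robot_state_publisher',
            parameters=[{'robot_description': robot_description}],
        ),
    ])
```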
Gazebo environment: Gazebo was chosen for its integration with ROS2. The different racetracks made for the simulator were designed using CAD, with the purpose of testing the car in different types of environments: a basic obstacle course, a labyrinth, and a scaled city. Figure 6 showcases the different maps.
Navigation control: The Slam Toolbox package is used to create the grid map. For localization, an Adaptive Monte Carlo Localization (AMCL) probabilistic algorithm is implemented. It represents the position and orientation of the robot as a particle distribution, in which each particle represents a possible position/orientation of the robot. With each movement, the robot takes measurements and compares them with the map of the environment to determine which of the particles is the real position of the robot [13]. The Nav2 package uses the data coming from the IMU, odometer, and LiDAR sensors to develop a “cost map,” which can be displayed using Rviz2. After the user specifies the position and orientation of the vehicle, the user can use the navigation tools in Rviz2 to move the car.
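To illustrate the particle idea behind AMCL as described above, here is a toy sketch of one measurement update and resampling step; the measurement model and all numbers are invented for demonstration and are unrelated to the actual AMCL implementation.

```python
# Toy sketch of the AMCL idea: particles represent candidate poses;
# each weight reflects how well the predicted measurement from that
# pose matches the real sensor reading. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.uniform([0, 0, -np.pi], [10, 10, np.pi], size=(N, 3))  # x, y, theta
weights = np.full(N, 1.0 / N)

def expected_range(pose):
    # Placeholder measurement model: predicted LiDAR range from the map.
    return 10.0 - pose[0]

def amcl_update(z, sigma=0.3):
    global particles, weights
    predicted = np.array([expected_range(p) for p in particles])
    # Weight each particle by the likelihood of the real measurement z.
    weights *= np.exp(-0.5 * ((z - predicted) / sigma) ** 2)
    weights /= weights.sum()
    # Resample: keep particles in proportion to their weights.
    idx = rng.choice(N, size=N, p=weights)
    particles = particles[idx]
    weights[:] = 1.0 / N

amcl_update(z=7.2)            # one measurement step
print(particles.mean(axis=0))  # crude pose estimate
```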
Robot visualization: Every time the simulation is launched, the MARVIN simulator loads configuration files for Rviz2 to help monitor the vehicle while it is in motion. This offers visual insight into the data that the LiDAR, the RGB cameras, and the car itself display.

Fig. 7. Real test in a scale map
VI. CONCLUSIONS
Autonomous navigation in vehicles for commercial use is a gradually growing industry. Therefore, access to research and learning platforms using this technology is currently in development, making MARVIN a new contribution to this field. Using ROS2 to develop manual and autonomous control of a mechatronic system, both in a prototype and in a simulated environment, resulted in a relatively robust and scalable model.

The design offers a flexible structure that allows the customer to extract a piece without the need to disassemble the whole vehicle. The SLAM algorithms implemented in the scaled vehicle through the LiDAR sensor enabled autonomous navigation in both a real and a simulated environment. This means that the scaled vehicle can map its environment and locate itself regardless of whether it is at its point of origin. It can navigate across an environment while simultaneously comparing information with the stored map; by highlighting new obstacles the sensor picks up in real time, the vehicle can avoid them. A simulated environment was developed in which MARVIN's framework was successfully implemented.
In conclusion, the design of the software structure and
framework implemented on the scale vehicle based on ROS2
offers a robust, efficient, accessible, and affordable platform
for clients with an interest in the autonomous navigation field
in Latin America. MARVIN serves as a valuable platform for
demonstrating diverse autonomous robotics research endeav-
ors. Future developments involve designing decentralized
multi-robot navigation for deployment within a controlled
environment.
REFERENCES
[1] J. Anderson et al., “Autonomous vehicle technology: A guide for policymakers,” 2016. [Online]. Available: https://www.rand.org/pubs/research_reports/RR443-2.html. Accessed: 2023.
[2] University of Michigan, “Factsheet: Autonomous Vehicles,” Center for Sustainable Systems, Michigan, 2022.
[3] Lily, “Driving in Latin America: Is it safe? Best and worst countries to drive in,” 2021. [Online]. Available: https://www.howlanders.com/blog/en/south-america/driving-latin-america/. Accessed: 04/12/2023.
[4] C. Peters, “Ferreyros implementa proyecto de camiones autónomos,” 2020. [Online]. Available: https://www.construccionlatinoamericana.com/news/ferreyros-implementa-proyecto-de-camiones-autonomos/4146599.article
[5] K. Hinostroza, “Lomas Bayas se encamina a la minería 4.0 tras iniciar operación con camiones autónomos,” 2023. [Online]. Available: https://www.rumbominero.com/chile/lomas-bayas-mineria-4-0-iniciar-operacion-camiones-autonomos/
[6] D. Zorrero, “Camión sin camionero: el primer modelo de vehículo de cargas autónomo ya empieza a funcionar en Brasil,” 2023. [Online]. Available: https://www.infobae.com/autos/2023/06/20
[7] F1TENTH, “F1TENTH documentation,” https://github.com/f1tenth/f1tenth_doc, 2023.
[8] S. S. Srinivasa, P. Lancaster, J. Michalove, M. Schmittle, C. Summers, M. Rockett, J. R. Smith, S. Choudhury, C. Mavrogiannis, and F. Sadeghi, “MuSHR: A low-cost, open-source robotic racecar for education and research,” 2019.
[9] Quanser, “Qcar,” 2020. [Online]. Available: https://www.quanser.com/about/
[10] B. Vincke, S. Rodriguez Florez, and P. Aubert, “An open-source
scale model platform for teaching autonomous vehicle technologies,”
Sensors, vol. 21, no. 11, 2021.
[11] donkeycar, “Donkeycar: A Python self-driving library,” https://github.com/autorope/donkeycar, 2023.
[12] Open Navigation LLC, “Nav2,” 2020. [Online]. Available: https://navigation.ros.org/index.html
[13] P. Prakash, “Robot localization and AMCL,” 2023. [Online]. Available: https://www.linkedin.com/pulse/robot-localization-amcl-pratheesh-prakash/