HERA: Home Environment Robot Assistant
Plinio Thomaz Aquino Junior, Bruno de Freitas Vece Perez, Douglas de Rizzo Meneghetti,
Fagner de Assis Moura Pimentel, Guilherme N. Marostica, João Gabriel R. Amorim, Leonardo Contador Neves,
Lucas Iervolino Gazignato, Marina Yukari Gonbata, Rodrigo Mendes de Souza, Rodrigo de Carvalho Techi,
Thiago Spilborghs Bueno Meyer, Victor Salgado de Moura Schmiedl and William Yassuhiro Yaguiu
Abstract—This paper presents the 2019 version of the Home Environment Robot Assistant (HERA), developed by the RoboFEI@Home team at FEI University Center. The robot is a robotics design study covering mechanical, electronic and computational aspects, whose ultimate goal is to interact with humans in service activities in a real context of use; the scenario considered in this article is the domestic environment. For this reason, we discuss aspects of human-robot interaction and service robotics. Details of the HERA robot design are shared in the paper to motivate the creation of new service robots. Challenges and the integration of the team's previous projects are linked to future work, considering the challenges of autonomous and intelligent service robotics.
Index Terms—Service Robot, RoboCup@Home, Mobile Robotics, Assistive Robot, HERA
I. INTRODUCTION
Due to growing concern with helping services, such as the need to assist humans in domestic and personal environments, the use of assistive technology has increased. With the purpose of advancing the state of the art in assistive robotics, the robot competition known as RoboCup@Home was created in 2006. Participation requires an autonomous mobile robot with a good Human-Robot Interaction system, allowing the robot to execute tasks for operators inside a house. The RoboCup@Home competition was the initial motivation for the development of our robot and is still our main test platform for its capabilities.
The RoboFEI@Home team created HERA (Home Environment Robot Assistant), shown in Fig. 1, a robot designed to perform human-robot interaction and cooperation tasks. At its core, HERA is built on a Mecanum-wheel robotic platform, with a chest-level extension and a head to aid in the interaction with both humans and the environment. HERA also carries a series of sensors to aid in mapping and navigating the environment, as well as in recognizing the human silhouette, individuals by their faces, and objects. A gripper, designed and manufactured by the team, allows the robot to manipulate the environment.
Human-Robot Interaction has become essential with the arrival of domestic and social robots. Human-robot interaction first appears in science fiction literature, with Isaac Asimov's work in the early 1950s. The first practical work with robots took place in the industrial scenario: production-line robots mark the beginning of interaction with humans, in which interaction occurs through commands to execute a specific task. In more recent scenarios, technologies such as voice recognition and computer vision have improved, so robots are taking on a new role, occupying homes and social spaces with different aims. Some examples are the iRobot Roomba vacuum¹ and the personal assistant JIBO².
Artificial Intelligence Applied to Automation and Robotics (IAAAR), Robot Engineering and Computer Science - FEI University Center, São Bernardo do Campo, São Paulo, Brazil. robofei@fei.edu.br
Fig. 1. HERA: Home Environment Robot Assistant
The rest of this paper is organized as follows: Section II contextualizes the area of human-robot interaction and Section III presents the RoboFEI@Home research focus and interests. Section IV describes the software stack used in the robot to solve daily domestic tasks, and Section V presents the main projects under development by the RoboFEI@Home team, describing how they contribute to the domestic service robotics community and presenting their real-world applications and current results. Section VI lists the robot's technical specifications and, finally, Section VII presents the conclusions and future work.
II. HUMAN-ROBOT INTERACTION
Human-Robot Interaction (HRI) is the knowledge domain that seeks to understand robots so that robots and humans may work together or perform a certain task through interaction. For this reason, the interaction should be less invasive and more collaborative. The first HRI guidelines appeared in Isaac Asimov's science fiction work, which presented the laws of robotics referenced by many later works in the field. The first law says that a robot cannot hurt a human being and must also protect humans. The second law says that a robot must obey humans, except when the orders conflict with the first law. Finally, the third law says that a robot should protect itself as long as this does not conflict with the other laws. These laws still guide work in HRI to the present day [1].
¹ http://www.irobot.com/For-the-Home/Vacuuming/Roomba.aspx
² https://www.jibo.com/
Several types of robot exhibit some level of interaction, even completely autonomous ones. The interaction level describes the degree to which the robot acts on its own initiative. Interaction can occur in two specific ways: remote interaction (robots and humans in different space-time locations) and proximate interaction (robots and humans sharing the same place) [1]. Teleoperated robots are guided through controls such as video-game joysticks, whereas autonomous robots must take into account the environment and the facts that relate to their final goal, while continuously updating their data and respecting the relevant constraints.
Some works address interaction through a control or a command center operated by a human being, while the number of works on autonomous robots is growing in assistive and rescue robotics [2]. Interaction is the activity of working together for the same purpose, and five factors affect HRI: the level of autonomy and behavior; the naturalness of information exchange; team structure; adaptation, learning and training of people and robots; and task definition. A robot with a degree of autonomy can remain unattended for a period of time and afterwards resume the task at the same point where it stopped [1], [2]. In social contexts of interaction, individuals recognize others and themselves through emotions, an ability known as emotional intelligence. Using emotional intelligence in HRI tends to make the performed tasks more natural. Rani et al. [3] present a model that correlates psycho-physiological signals so that the robot can infer the user's anxiety level; with this model it was possible to encourage performance improvement in people, in a scenario of shooting baskets in a basketball game. Other research considers psychological variables categorized into personas and applied to the context of human-computer interaction, which is relevant for this research [4].
Kitagawa et al. [5] present work with SoftBank's Pepper robot in which a "sleep-time" control runs before gesture replication. This control helped the gestures to be executed more naturally, and the person reproducing the gestures often ended up imitating the robot. The goal is to use the robot to promote collaboration between two humans.
Personal safety and the cost of damage caused by robot failure have also been investigated. In one experiment, people watched videos of HRI situations and tasks and then assessed how critical each situation was. Interestingly, participants rated a robot dropping liquid on a laptop as highly critical, while giving a low score to the robot bumping into and hurting a person; however, more realistic studies still need to be carried out [6].
In the same sense, researchers have studied human behavior through usability tests with the AIBO pet robot, characterizing user profiles [7]. A robot called MyKeepon was used to help two children collaborate with each other on a mobile game; results show that it was possible to get them to cooperate during the game, and the next step is to create a high-level reasoning mechanism able to keep a group of people engaged based on what has already been observed [8].
Anomaly detection systems to serve elderly people living alone are another explored topic. One such system detects anomalies based on the person's pattern of interaction with the environment, triggers home-care entities and initiates the person's relief process; to reach an accuracy of 85%, it uses a mobile robot together with environmental sensors to detect anomalies [9].
Lampe et al. [10] summarize a conference panel that discussed how robots interact, cooperate and collaborate with their respective owners in social situations. In the panel, researchers discussed the growing number of works on human-robot teams, a scenario in which verbal communication is essential, calling for studies on the semantic interpretation of speech and on the appropriate voice style for each situation.
Overall, human-robot interaction is a multidisciplinary theme that considers several aspects of human-machine interaction and is influenced by the robot's characteristics regarding sensors, mechanics, electronics and the computational intelligence implemented.
III. RESEARCH FOCUS AND INTERESTS
RoboFEI@Home focuses its research on the interaction between humans and machines (computers, robots, autonomous cars or smart homes). Research in the field of human-machine interaction is crucial to advancing the state of the art towards machines that can act together with humans in their daily lives.
The project also makes heavy use of and contributes to the development of methodologies, techniques, models and algorithms in the following topics: adaptive interfaces; brain-computer interfaces; planning; intelligent residential and building automation; autonomous systems; and the Internet of Things (IoT).
IV. DESCRIPTION OF THE APPROACH USED TO SOLVE
DOMESTIC TASKS
HERA uses ROS Kinetic Kame, a framework that assists in writing software for robots. It offers a collection of tools, libraries and conventions that simplify the creation of programs for complex and robust robot tasks, independently of the platform [11].
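To make the middleware concrete, the following is a minimal sketch (not the team's actual code) of a Python ROS node compatible with ROS Kinetic that publishes velocity commands; the topic name and publishing rate are illustrative assumptions.

```python
# Minimal ROS node sketch: publishes Twist velocity commands at 10 Hz.
import rospy
from geometry_msgs.msg import Twist

def main():
    rospy.init_node("hera_example_node")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)  # assumed topic name
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        cmd = Twist()
        cmd.linear.x = 0.1   # move slowly forward
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    main()
```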
The next sections present the vision, speech and navigation systems developed for the robot.
A. Robot Vision
In the RoboFEI@Home project, a Microsoft Kinect and a high-definition camera are used for face recognition and object detection. Each framework has its own peculiarities, such as target platform, performance and objectives. In the RoboFEI@Home project, OpenNI [12], [13] is used to access the main functions of the Kinect across several platforms.
The task developed primarily for this paper was to follow a specific person without interference from other people in the surroundings. The robot performs face recognition using Haar cascade algorithms from OpenCV, registers a body through PCL ground-based RGB-D detection [14] and confirms the position using the ROS leg detector package. The person asks the robot to follow her/him; the robot then starts the process and only stops when commanded.
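As an illustration of the face-detection step, the following is a hedged sketch of Haar-cascade detection with OpenCV; the cascade file, camera index and window handling are assumptions rather than the team's configuration.

```python
# Sketch: detect frontal faces in a live video feed with OpenCV Haar cascades.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # any RGB camera; the Kinect feed is analogous
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```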
In order to perform object detection, two approaches were developed. The first consists of the use of key-point detection and description algorithms, such as the scale-invariant feature transform (SIFT) [15]. The algorithm is applied to a small database of object pictures, pre-loaded in the robot's memory (roughly 4 to 6 pictures per object type).
Then, the same key-point detection and description algorithm
is applied to frames of a live video feed and the key-points
from each frame are compared to the ones present in each
image of the database using a feature matching algorithm, such
as FLANN [16]. A minimum number of matches is necessary
for the algorithm to consider an object present in the scene.
This approach has the advantage of not requiring any model fitting or a large image dataset, making it suitable for the competition environment. Its main disadvantage is the high computational time needed to process each frame, which grows linearly with the number of images in the robot's database.
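A minimal sketch of this key-point pipeline is given below, assuming OpenCV with SIFT available (OpenCV >= 4.4, or the contrib module in older versions) and an illustrative minimum-match threshold; it is not the team's exact implementation.

```python
# Sketch: SIFT key-points matched with FLANN against one reference image.
import cv2

MIN_MATCHES = 25  # illustrative threshold

sift = cv2.SIFT_create()
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5),  # FLANN_INDEX_KDTREE
                              dict(checks=50))

def object_present(reference_img, frame):
    """Return True if the reference object appears in the camera frame."""
    _, des1 = sift.detectAndCompute(reference_img, None)
    _, des2 = sift.detectAndCompute(frame, None)
    if des1 is None or des2 is None:
        return False
    matches = flann.knnMatch(des1, des2, k=2)
    # Lowe's ratio test filters ambiguous matches
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    return len(good) >= MIN_MATCHES
```

In practice this check is repeated for every reference image in the database, which is what makes the processing time grow with the database size.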
The second object detection method is based on a Convolutional Neural Network and the Single-Shot Detector (SSD) technique [17]. This approach requires a dataset containing pictures of home objects. Object examples must be manually annotated with bounding boxes [18] to generate such a dataset, although another approach is being explored: using the Open Images Dataset V4 [19], whose images are already annotated, in which case it is only necessary to choose the classes that will be used in the model. The neural network is capable of greater generalization while reducing processing time. In this work, MobileNet v1 [20] is used as the neural network architecture because its design is focused on mobile hardware.
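One possible way to run such a MobileNet-SSD model is through OpenCV's DNN module, as in the sketch below; the model and configuration file names are hypothetical placeholders, not the team's trained network.

```python
# Sketch: run a MobileNet-SSD frozen graph with OpenCV's DNN module.
import cv2

net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb",   # placeholder paths
                                    "ssd_mobilenet_v1.pbtxt")

def detect(frame, conf_threshold=0.5):
    blob = cv2.dnn.blobFromImage(frame, size=(300, 300), swapRB=True)
    net.setInput(blob)
    detections = net.forward()          # shape: (1, 1, N, 7)
    h, w = frame.shape[:2]
    boxes = []
    for det in detections[0, 0]:
        class_id, score = int(det[1]), float(det[2])
        if score >= conf_threshold:
            x1, y1, x2, y2 = (det[3:7] * [w, h, w, h]).astype(int)
            boxes.append((class_id, score, (x1, y1, x2, y2)))
    return boxes
```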
Objects detected by either of the aforementioned methods in the video feed from the Kinect camera are later placed in the 3D environment, relative to the robot, by finding each object's corresponding location in the point cloud provided by the Kinect infrared sensor.
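A hedged sketch of this lookup, assuming an organized point cloud and using the standard sensor_msgs point_cloud2 helper, could be:

```python
# Sketch (assumed helper, not the team's code): look up the 3D position of a
# detected object by indexing the organized Kinect point cloud at the
# bounding-box centre pixel.
from sensor_msgs import point_cloud2

def object_position(cloud_msg, u, v):
    """Return (x, y, z) in the camera frame for pixel (u, v), or None."""
    points = list(point_cloud2.read_points(
        cloud_msg, field_names=("x", "y", "z"), uvs=[(u, v)]))
    if not points:
        return None
    x, y, z = points[0]
    return (x, y, z) if z == z else None   # NaN check for invalid depth
```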
B. Voice Recognition
The team decided to use the Google Speech Recognition API. For this, a ROS package was developed that operates over a set of APIs; these are online tools that work directly in Ubuntu. In addition, recognized phrases are compared with generic phrases through the Hamming distance, to handle phrase variations.
The team developed its usage of this API by researching methods to make the code easier to adapt to a given environment, creating a new scheme of word choices for speech and a vocabulary that is easy to use and adapt depending on the application.
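A minimal sketch of this flow is shown below; the speech_recognition package is used here only as one possible client for the Google API, and the command list, padding strategy and threshold are illustrative assumptions rather than the team's configuration.

```python
# Sketch: transcribe speech with the Google API, then match the transcript
# against known command phrases using a Hamming-style distance.
import speech_recognition as sr

COMMANDS = ["follow me", "stop following", "bring me the cup"]  # illustrative

def hamming(a, b):
    # pad the shorter string so both have the same length
    length = max(len(a), len(b))
    a, b = a.ljust(length), b.ljust(length)
    return sum(ca != cb for ca, cb in zip(a, b))

def listen_for_command(threshold=5):
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    text = recognizer.recognize_google(audio).lower()
    best = min(COMMANDS, key=lambda c: hamming(text, c))
    return best if hamming(text, best) <= threshold else None
```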
One of the works developed by the team uses the MATRIX Creator [21], a board with sensors, wireless communications and an FPGA. The main goal of using this board is to perform directional voice recognition, making it possible to recognize who is talking to the robot.
A Raspberry Pi connected to the MATRIX is used to communicate with the core of our robot. The Raspberry Pi is responsible for reading the information from the many sensors on the board and then sending it to the main system.
C. Robot Navigation
When the robot is in an unknown location, it must learn the environment where it is located, mapping it and at the same time defining its position in space. This technique is known as Simultaneous Localization and Mapping (SLAM). During navigation, the robot has the capacity to choose the best possible route and avoid obstacles; for this, it determines parameters that minimize the path error, which the robot constantly corrects.
HERA uses the Hokuyo UTM-30LX-EW laser sensor to create a 2D map of the environment where it is located and to find possible obstacles along the way. This device is used to scan areas; it has a 30-meter range and a 270-degree angular field. Emitting infrared light with a 0.36-degree step, it returns 683 scanned points per sweep. Through this, the robot can locate new obstacles that were not on the map.
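As a simple illustration of how the scan can be consumed on the ROS side, the sketch below flags nearby obstacles from LaserScan messages; the topic name and safety distance are assumptions.

```python
# Sketch: subscribe to the laser scan and warn when an obstacle is too close.
import rospy
from sensor_msgs.msg import LaserScan

SAFETY_DISTANCE = 0.5  # metres (illustrative)

def on_scan(scan):
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    if valid and min(valid) < SAFETY_DISTANCE:
        rospy.logwarn("Obstacle at %.2f m", min(valid))

rospy.init_node("obstacle_monitor")
rospy.Subscriber("/scan", LaserScan, on_scan)
rospy.spin()
```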
V. ROBOFEI@HOME PROJECTS AND RESEARCHES
A. Robot Localization with an Omnidirectional Base
In this robot project, an omnidirectional platform customized in our lab is used. The advantages of this base are its high payload and the possibility of hardware modifications to improve the robot's locomotion performance; for this reason, the team chose this base, with parts acquired from an external company and assembled by the team. Before the acquisition, the team analyzed the availability of the components for short-term demand, the possibility of coupling the robot's body to this platform, and whether it would conform to the dimensions and limitations of a standard apartment. Thus, the team achieved its improvement goals, namely a greater payload and interchangeability, which facilitates quick maintenance. The kinematics and control were fully implemented and deployed by the team. The new base can be seen in Fig. 1.
1) Hardware: The omnidirectional mobile base is the core of HERA's navigation. The base is composed of four Mecanum wheels, which provide a high degree of freedom so the robot can move in any desired direction with the least motion needed, four 128-rpm motors, two Sabertooth motor controllers, and one magnetic Hall-effect encoder per motor. From the motors and their Hall-effect encoders it is possible to obtain the angular speed and position information needed to send commands and control the movement of the base. The base is mainly run by an Arduino controller with two fundamental functions: controlling the motors and reading the encoder data, which is sent to a computer running ROS that later computes the encoder values to integrate the odometry data.
2) Kinematics and control: The most basic form of localization is odometry, which is simply the estimation of the vehicle pose by integrating estimates of its motion. The estimation of the robot pose is possible by constantly reading the wheel encoders and integrating this information.
When there is a path defined for the robot to navigate, a ROS package is responsible for sending velocity messages (that is, values in the x and y directions for the robot to move and the angular velocity for the robot to rotate) so that the path can be followed and, consequently, the destination can be reached. With the mecanum wheel kinematics it is possible to determine the motor speed needed for each wheel so that the commanded velocity message is achieved. The Arduino microcontroller is then responsible for making the wheels turn at the calculated angular speeds. While the microcontroller is sending the commands to the motor drivers, it is also sending encoder sensor information to the ROS computer. Considering the encoder resolution and knowing the number of ticks for a full wheel revolution, it is possible to do the reverse path: the encoder data for the wheels, based on the inverse mecanum wheel kinematics, is converted to ROS velocity messages (x, y and angular). With the velocity messages over a known period it is possible to determine the robot pose variation and integrate the odometry information.
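The sketch below illustrates these conversions under assumed geometry constants (wheel radius R and half-distances LX, LY between the wheels); the team's actual parameters and sign conventions may differ.

```python
# Sketch: standard mecanum-wheel kinematics and odometry integration.
import math

R, LX, LY = 0.05, 0.20, 0.25   # metres (illustrative values)

def inverse_kinematics(vx, vy, wz):
    """Body velocities -> wheel angular speeds (fl, fr, rl, rr) [rad/s]."""
    k = LX + LY
    return ((vx - vy - k * wz) / R,
            (vx + vy + k * wz) / R,
            (vx + vy - k * wz) / R,
            (vx - vy + k * wz) / R)

def forward_kinematics(w_fl, w_fr, w_rl, w_rr):
    """Wheel speeds (from encoders) -> body velocities (vx, vy, wz)."""
    vx = R * (w_fl + w_fr + w_rl + w_rr) / 4.0
    vy = R * (-w_fl + w_fr + w_rl - w_rr) / 4.0
    wz = R * (-w_fl + w_fr - w_rl + w_rr) / (4.0 * (LX + LY))
    return vx, vy, wz

def integrate_odometry(pose, vx, vy, wz, dt):
    """Integrate body velocities over dt to update the pose (x, y, theta)."""
    x, y, th = pose
    x += (vx * math.cos(th) - vy * math.sin(th)) * dt
    y += (vx * math.sin(th) + vy * math.cos(th)) * dt
    th += wz * dt
    return x, y, th
```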
B. Upper set
The main objective of developing a new upper set (Fig. 2) for the robot was the need to keep the MATRIX Creator board static in its position when the head starts rotating. Besides that, the team desired a completely new appearance for the robot, making human-robot interaction more friendly.
1) Mechanics: First, we prototyped a model and tested it using a CAD (Computer-Aided Design) program. The upper set is mainly made of aluminum and ABS plastic. The head structure is made of a 30 x 30 mm square-section aluminum profile coupled to a shaft; a servo motor is responsible for rotating the entire structure, including an Apple tablet, a Microsoft Kinect and two RODE microphones. The set can move 85 degrees to each side. For the angular movement of the Kinect sensor, a Dynamixel RX-24F servo motor was coupled to its stand, allowing the movement of the sensor required for people and object recognition. A 3D printer was used to create the stands for the servo motors and the sensors, because they are complex in shape and do not require high mechanical resistance. To keep the MATRIX static, an axle is inserted through the hollow aluminum shaft and fixed to the head platform, keeping the board on top of the robot and allowing better sound capture.
C. Manipulator
As seen in [22], the manipulator used in past works, and in the robot, was improved. The new manipulator (Fig. 3) still has the same number of Degrees of Freedom (DOF) as a human arm, with the purpose of obtaining great similarity to real movements using the anthropomorphic principle.
Fig. 2. New upper set developed for the robot.
Based on that, a study of anatomy and human kinesiology was initiated, focused on the skeleton of the free portion of the upper limbs: arm, forearm, carpus and metacarpus. It was noticed that the main movements are extension and flexion.
Fig. 3. Manipulator developed for the robot.
With the geometry analysis of each component, the most appropriate manufacturing process for the parts with more complex shapes was 3D printing and, for plain-shaped parts, machining in aluminum, which results in great resistance with less weight and smaller dimensions. For the manipulator movement, two Dynamixel XM430 and six XM540 actuators are used, distributed among the shoulder, elbow and wrist. All actuators have 360 degrees of rotation amplitude.
1) Kinematics and control: For the manipulator's mobility, distinct algorithms describing its freedom in the space where it is located are used. Methods of analyzing the manipulator's links and joints using direct and inverse kinematics were implemented, allowing its behavior and state in space to be described. Along with a control algorithm, we can accomplish simple and even complex tasks, as spatial information can be captured through the sensors and understood, so the robot can make decisions for new tasks and interactions using the manipulator.
Each Dynamixel actuator provides an internal micro-controller that reports information about the state of the motor. This information, such as speed and goal error, is used in the control algorithm to provide more natural movements, since more complex motions can be executed. A simple way to obtain more natural movements with kinematic control is to assign a different speed to each DOF of the arm when it moves from point A to point B, with the speeds proportional to the distance each joint must travel.
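A minimal sketch of this speed-assignment idea is shown below; the joint names and total travel time are illustrative assumptions.

```python
# Sketch: give each joint a speed proportional to the angular distance it
# must travel, so all joints finish the motion at the same time.
def joint_speeds(current, goal, travel_time=2.0):
    """Return per-joint speeds (rad/s) so the motion from `current` to `goal`
    takes `travel_time` seconds for every joint simultaneously."""
    return {name: abs(goal[name] - current[name]) / travel_time
            for name in goal}

# Example: the elbow travels twice as far as the wrist, so it moves twice as fast.
speeds = joint_speeds({"elbow": 0.0, "wrist": 0.0},
                      {"elbow": 1.0, "wrist": 0.5})
```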
D. Module for Human-Robot Adaptive Interaction Using Biological Signals
Environments involving robotics and intelligent machines have become increasingly present in our daily lives, both for industrial and domestic purposes. The environments where robots interact with humans are called use-case scenarios according to Human-Robot Interaction methods. Use-case scenarios present physical characteristics (furniture height, door openings, among others) as well as social and cultural ones (signage, language, color patterns, among others), which help define communication strategies between humans and the machines inserted in the environment. In the same sense, humans also assume several states during the interactive process, which can be treated as useful variables for the robot's decision making during the execution of tasks.
This scientific research focuses on analyzing environmental and biological variables of the user to create a more natural social interface between human and robot. It also encompasses the perception of human characteristics and their subsequent classification, enabling the robot to better understand the environment in which it is located and how it can assist users in their intentions and tasks at home.
These robots should consider important aspects such as: human-machine interaction, being socially relevant, being demand-driven, being scientifically challenging, being easily adaptable to the environment, interacting intuitively and being interesting to viewers.
The initial idea is that the user wears the equipment on the body so the robot can use the information coming from the device in its actions. The equipment used in this project is the EMOTIV EPOC, considered flexible, versatile and more affordable compared to others of its kind, and thus ideal as a wearable sensor.
1) Re-usability for other research groups: This module is available online³ and can be accessed by anyone. Researchers developing robots can use it in the robot's decision making, so that actions the robot already executes depend on the readings of the module: actions perceived as positive, generating a certain happiness or neutrality, allow the interaction to continue as before, while actions that leave the user with a level of sadness detectable by the module are changed, thus allowing a more understandable and natural type of interaction on the part of the robot.
2) Applicability in the real world: Research is conducted in an attempt to understand how humans react in the interactive process with robots, but it is difficult to capture a person's real feelings during the execution of tasks shared between the robot and the people in the scenario. If the computer can understand human reactions, feelings, emotions and social relationships, it can better decide what actions to take in the various everyday situations of a human-interaction environment, whether at work, at home or in a hospital, for example.
³ https://bitbucket.org/leocneves/eeg hmi
3) Results: For the validation of the module, two main parts were analyzed: the statistical data from the training and validation of the classifier used, and the application of the module in real time in human-robot interaction. The experiment was initially run with ten subjects, and Table I shows the accuracy of the best result obtained from the training and validation of the implemented classifier, where it is possible to see the percentage of correct classifications for each class. Table I shows the trained classes for the emotional states happiness, sadness and neutrality, represented by the numbers 1, 2 and 3, respectively.
TABLE I
CONFUSION MATRIX OF THE LDA CLASSIFIER
Class    1       2       3
1        92.9%   5.8%    1.3%
2        5.0%    93.8%   1.3%
3        17.5%   17.5%   65.0%
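For illustration, a hedged sketch of the offline training and validation step with an LDA classifier is shown below; the feature vectors and labels are randomly generated placeholders standing in for EEG features, not the team's actual pipeline.

```python
# Sketch: train an LDA classifier and compute a normalized confusion matrix
# (cf. Table I). The data below are placeholders, not real EEG recordings.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X = np.random.randn(300, 14)           # placeholder features (e.g. one per EPOC channel)
y = np.random.choice([1, 2, 3], 300)   # 1 happiness, 2 sadness, 3 neutrality

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

clf = LinearDiscriminantAnalysis()
clf.fit(X_train, y_train)

cm = confusion_matrix(y_test, clf.predict(X_test), normalize="true")
print(cm)  # rows: true class, columns: predicted class
```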
E. HERA Bot: Exploring the Frontiers of Communication
The project's purpose is to exploit the usability advantages of smartphones to raise human-robot interaction to a non-face-to-face level, so that a person can communicate with the robot through a smartphone and have actions performed without being physically present. For the development of the project, a Python API was used to create a bot in the Telegram instant messaging application that communicates directly with HERA's human-robot interaction actions. This bot establishes a robot-user communication path through the smartphone, as if the robot were a person on another smartphone, so that a remote conversation with the robot can be established.
The developed system allows the same command and conversation functions that face-to-face human-robot interaction would have; therefore, functions such as recognition of objects, people and voice, navigation actions, among others, are included in the non-presential interface. The applications are numerous: the robot can talk to the user while they are at work, traveling or in another room, and be assigned household chores such as checking items in a refrigerator, checking for closed doors or windows in case of rain, sending pictures of the environment to check on people, or even holding a conversation with the user.
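A minimal sketch of such a bot is shown below, using the python-telegram-bot package (v13-style API) as one possible client; the token, handlers and the hypothetical hera interface are assumptions, not the team's implementation.

```python
# Sketch: a Telegram bot that forwards chat messages to the robot's
# command pipeline and replies to the user.
from telegram.ext import Updater, CommandHandler, MessageHandler, Filters

def start(update, context):
    update.message.reply_text("Hi, I am HERA. Send me a task.")

def handle_text(update, context):
    command = update.message.text.lower()
    # forward the text to the robot's speech/command pipeline (hypothetical)
    # result = hera.execute(command)
    update.message.reply_text("Executing: " + command)

updater = Updater("YOUR_BOT_TOKEN", use_context=True)
updater.dispatcher.add_handler(CommandHandler("start", start))
updater.dispatcher.add_handler(
    MessageHandler(Filters.text & ~Filters.command, handle_text))
updater.start_polling()
updater.idle()
```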
VI. ROBOT TECHNICAL SPECIFICATIONS
1) Hardware Description:
Base: Mecanum wheel robot platform.
• Sensors:
  – Hokuyo UTM-30LX-EW.
• Actuators:
  – Omnidirectional wheels.
Chest: Custom-made extension.
• Sensors:
  – Emergency switch.
• Actuators:
  – 3D-printed gripper with 6 DOF.
Head: Apple iPad 2” and feedback display.
• Sensors:
  – Microsoft Kinect;
  – Logitech C920 webcam;
  – 2 RODE VideoMic GO directional microphones;
  – MATRIX Creator board.
Control: Intel NUC with i5-7500T CPU.
2) Software Description:
• OS: Ubuntu 16.04;
• Middleware: ROS Kinetic Kame;
• Localization/Navigation/Mapping: SLAM;
• Face detection: Haar cascades;
• Face recognition: LBP algorithm;
• People tracking: PCL and leg detector;
• Gesture/movement recognition: OpenPose;
• Object recognition: SIFT + FLANN or MobileNet + SSD;
• Object manipulation: inverse kinematics implemented with ROS's own packages;
• Speech recognition: Google Speech Recognition and other APIs;
• Speech synthesis: gTTS package.
VII. CONCLUSION
The robotic platform custom-made by the team, with an omnidirectional base, performed well in accomplishing the RoboCup service tasks. There are still improvements to be made so that the mobile robot can accomplish all the tasks proposed in the domestic-service scenario. The first challenges have been successfully completed and the robot's autonomy is acceptable for the time required to perform tasks in a home environment. Speech recognition has been improved and is now able to map sound sources in the environment, and deep learning concepts will be applied to object recognition, driving further research and implementation toward constant improvement of the robot's functionalities. With this type of robot, we expect to provide a better quality of life for people in their contexts of use.
REFERENCES
[1] Michael A. Goodrich and Alan C. Schultz. Human-robot interaction: A
survey. Found. Trends Hum.-Comput. Interact., 1(3):203–275, January
2007.
[2] A. Weiss. Validation of an Evaluation Framework for Human-robot In-
teraction: The Impact of Usability, Social Acceptance, User Experience,
and Societal Impact on Collaboration with Humanoid Robots. na, 2010.
[3] Pramila Rani, Changchun Liu, and Nilanjan Sarkar. Affective feedback
in closed loop human-robot interaction. In Proceedings of the 1st
ACM SIGCHI/SIGART Conference on Human-robot Interaction, HRI
’06, pages 335–336, New York, NY, USA, 2006. ACM.
[4] C. F. de Araujo and P. T. Aquino Junior. Psychological personas for universal user modeling in human-computer interaction. In Human-Computer Interaction. Theories, Methods, and Tools, HCI 2014. Springer, Cham, 2014.
[5] Masahiro Kitagawa, Benjamin Luke Evans, Nagisa Munekata, and
Tetsuo Ono. Mutual adaptation between a human and a robot based
on timing control of ”sleep-time”. In Proceedings of the Fourth
International Conference on Human Agent Interaction, HAI ’16, pages
353–354, New York, NY, USA, 2016. ACM.
[6] Obehioye Adubor, Rhomni St. John, and Aaron Steinfeld. Personal
safety is more important than cost of damage during robot failure. In
Proceedings of the Companion of the 2017 ACM/IEEE International
Conference on Human-Robot Interaction, HRI ’17, pages 403–403, New
York, NY, USA, 2017. ACM.
[7] Thiago Freitas dos Santos, Danilo Gouveia de Castro, Andrey Araujo
Masiero, and Plinio Thomaz Aquino Junior. Behavioral persona for
human-robot interaction: A study based on pet robot. In Masaaki
Kurosu, editor, Human-Computer Interaction. Advanced Interaction
Modalities and Techniques, pages 687–696, Cham, 2014. Springer
International Publishing.
[8] Sarah Strohkorb and Brian Scassellati. Promoting collaboration with
social robots. In The Eleventh ACM/IEEE International Conference on
Human Robot Interaction, HRI ’16, pages 639–640, Piscataway, NJ,
USA, 2016. IEEE Press.
[9] J. Lundström, W. O. De Morais, and M. Cooney. A holistic smart
home demonstrator for anomaly detection and response. In 2015 IEEE
International Conference on Pervasive Computing and Communication
Workshops (PerCom Workshops), pages 330–335, March 2015.
[10] Cliff Lampe, Bob Bauer, Henry Evans, Dave Robson, Tessa Lau, and
Leila Takayama. Robots as cooperative partners... we hope... In Proceed-
ings of the 19th ACM Conference on Computer Supported Cooperative
Work and Social Computing Companion, CSCW ’16 Companion, pages
188–192, New York, NY, USA, 2016. ACM.
[11] ROS.org — Powering the world’s robots. http://www.ros.org/. Accessed
on: May 10th, 2017.
[12] OpenNI - Wikipedia, the free encyclopedia. https://en.wikipedia.org/
wiki/OpenNI. Accessed on: August, 14th 2014.
[13] OpenNI 2 Downloads and Documentation — the structure sensor. http:
//structure.io/openni. Accessed on: August, 14th 2014.
[14] Matteo Munaro and Emanuele Menegatti. Fast rgb-d people tracking
for service robots. Autonomous Robots, 37(3):227–242, 2014.
[15] D.G. Lowe. Object recognition from local scale-invariant features. In
Proceedings of the Seventh IEEE International Conference on Computer
Vision, volume 2, pages 1150–1157, 1999.
[16] Marius Muja and David G Lowe. Fast approximate nearest neighbors
with automatic algorithm configuration. VISAPP (1), 2(331-340):2,
2009.
[17] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott
Reed, Cheng-Yang Fu, and Alexander C Berg. Ssd: Single shot multibox
detector. In European conference on computer vision, pages 21–37.
Springer, 2016.
[18] Tzutalin. Labelimg. git code. https://github.com/tzutalin/labelimg. free
software: Mit license, 2015.
[19] Ivan Krasin, Tom Duerig, Neil Alldrin, Vittorio Ferrari, Sami Abu-
El-Haija, Alina Kuznetsova, Hassan Rom, Jasper Uijlings, Stefan
Popov, Shahab Kamali, Matteo Malloci, Jordi Pont-Tuset, Andreas
Veit, Serge Belongie, Victor Gomes, Abhinav Gupta, Chen Sun,
Gal Chechik, David Cai, Zheyun Feng, Dhyanesh Narayanan, and
Kevin Murphy. Openimages: A public dataset for large-scale multi-
label and multi-class image classification. Dataset available from
https://storage.googleapis.com/openimages/web/index.html, 2017.
[20] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko,
Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam.
Mobilenets: Efficient convolutional neural networks for mobile vision
applications. arXiv preprint arXiv:1704.04861, 2017.
[21] Matrix creator board. https://www.matrix.one/products/creator. Ac-
cessed on: October, 26th 2018.
[22] Marina Y. Gonbata, Leonardo C. Neves, Thiago S. B. Meyer, Rodrigo C. Techi, Pedro H. S. Domingues, Bruno F. V. Perez, Victor S. M. Schmiedl, Amanda M. Lima, William Y. Yaguiu, Matheus V. Domingos, Antonio C. O. Busnello, Kimberlyn K. G. Cardoso, João Victor C. M. Santos, Fagner Pimentel, Douglas de R. Meneghetti, Flávio Tonidandel, and Plinio T. Aquino Junior. RoboFEI Team Description Paper for RoboCup@Home 2018. Technical report, RoboFEI, 2018.