Corresponding author: Phill-Seung Lee
Journal of Bionic Engineering 14 (2017) 327–335
Parasitic Robot System for Waypoint Navigation of Turtle
Dae-Gun Kim1, Serin Lee2, Cheol-Hu Kim1, Sungho Jo3, Phill-Seung Lee1
1. Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), 373-1 Guseong-dong,
Yuseong-gu, Daejeon 34141, Republic of Korea
2. Institute for Infocomm Research, 1 Fusionopolis Way, #21-01 Connexis, Singapore
3. Department of Computer Science, Korea Advanced Institute of Science and Technology, (KAIST) 373-1 Guseong-dong,
Yuseong-gu, Daejeon 34141, Republic of Korea
Abstract
In research on small mobile robots and biomimetic robots, locomotion ability remains a major issue despite many advances
in technology. However, evolution has produced many animals capable of excellent locomotion. This paper pre-
sents a “parasitic robot system” whereby locomotion abilities of an animal are applied to a robot task. We chose a turtle as our
first host animal and designed a parasitic robot that can perform “operant conditioning”. The parasitic robot, which is attached to
the turtle, can induce object-tracking behavior of the turtle toward a Light Emitting Diode (LED) and positively reinforce the
behavior through repeated stimulus-response interaction. After training sessions over five weeks, the robot could successfully
control the direction of movement of the trained turtles in the waypoint navigation task. This hybrid animal-robot interaction
system could provide an alternative solution to some of the limitations of conventional mobile robot systems in various fields,
and could also act as a useful interaction system for the behavioral sciences.
Keywords: parasitic robot, operant conditioning, waypoint navigation, red-eared slider, Trachemys scripta elegans
Copyright © 2017, Jilin University. Published by Elsevier Limited and Science Press. All rights reserved.
doi: 10.1016/S1672-6529(16)60401-8
1 Introduction
Remarkable progress has been made in the devel-
opment of robot technology. Many variations of robot
products and machine systems are now used in numer-
ous industries. Moreover, the need for robots has ex-
tended to almost every segment of society. Notably, in
the defense sector and certain industries, demands exist
for small mobile robots that can explore hazardous areas
and dangerous environments, such as the scenes of ac-
cidents or disasters. However, small robots that can
operate in inhospitable environments can only do so for
a limited time and within a given range owing to battery
limitations. The actuators and sensors of the robot can
also be easily damaged or destroyed in harsh and humid environments.
Therefore, researchers have strived to develop a
means of controlling animals to make use of their lo-
comotive abilities to perform particular tasks. Many
animals have extraordinary means of locomotion that
have evolved through natural selection over millions of
years. Thus, their bodies are ideal for designing small
mobile locomotion platforms. In this study, we exam-
ined the prospect of a hybrid animal-robot system.
Meanwhile, in recent years, there has been consid-
erable interest in virtual reality and augmented reality
through wearable computing. The commercialization of
products from several companies has become imminent
because of rapid technological advancements in the
fields of sensors, displays, and computing. Nevertheless,
with the technology available to date, it is difficult to
completely deceive human senses. Humans are very
intelligent and have highly attuned sense organs, making
it nearly impossible to deceive all human senses with
existing technology. On the other hand, lower-level animals are more susceptible to virtual stimuli than humans: they react to virtual reality as if it were real, even when only limited stimuli are presented.
Several studies have shown that certain animals can
effectively interact with virtual stimuli; accordingly, the
application of an animal control system has been pro-
posed. In some of these studies, the feasibility of a
biorobot was demonstrated. In 1997, Holzer and Shi-
moyama connected the antennae of a roach, which are
used to detect adjacent obstacles, to the output pin of an
8-bit μ-controller board. Thus, the roach movement was
induced[1]. In addition, rats have been guided through the
application of electric stimuli to the brain as cues and
rewards[2–5]. For agricultural applications, a virtual fence
system of sound stimuli used for restricting cattle
movement was proposed[6]. Moreover, Harvey et al.
recorded the behavior of mice on a virtual linear track.
They showed that the mice could interact with a virtual
reality device[7–9]. Furthermore, it was demonstrated that
trained dogs could be guided by remote audio com-
mands through wearable GPS unit control[10]. Lee et al.
controlled the movement of a turtle by leveraging the
characteristic obstacle avoidance capability of the creature[11,12]. Meanwhile, Cai et al. modulated motor behaviors of pigeons by electric stimulation of specific brain regions[13].
Studies have also been conducted for controlling rat
movement through automation control. Zhang et al.
developed an automatic control system for “rat-bot”
navigation[14]. Gao et al. developed a rat-bot automatic
navigation system based on a distance measurement in
unknown environments[15]. Furthermore, Sun et al. ap-
plied the General Regression Neural Network (GRNN)
algorithm for automated navigation of rat-bots by ena-
bling automatic decision control[16].
As mentioned above, a training system can be de-
veloped to automatically control animal behavior using a
particular virtual reality system. In this paper, we pro-
pose a “parasitic robot system” that mimics the behav-
iors of natural parasites. It is known that some parasites
that live in the bodies of host animals can influence the
host behavior to fulfill the specific objectives of the parasite.
Similarly, the proposed parasitic robot is attached to a
target animal and invokes specific behavior through
virtual stimulation.
We selected the turtle as the host animal because it
can effectively sense visible light[17]. In addition, it is
relatively intelligent and has long-term memory, which
enables it to be trained to develop certain behaviors[18].
Furthermore, the turtle moves sufficiently slowly to be
easily controlled and observed. Moreover, its hard shell
is an ideal surface for attaching the robot device.
We developed a parasitic robot for the turtle to
achieve a waypoint navigation task in a water tank. We
observed the parasitic robot and turtle interaction and
recorded the extent to which the waypoint navigation
performance of the parasitic robot is improved.
To this end, a heads-up display for the turtle was
adopted as the virtual stimulator for navigation. The
parasitic robot used a heads-up display (cue) and feeder
(reward) to train the turtles to move in a certain direction
of the heads-up display. The robot obtained the turtle
pose information and waypoint position from an indoor
monitoring system using wireless communication. All of
the experimental tests were conducted in a water tank.
The results validated the usefulness of the proposed
concept system.
The remainder of this paper is organized as follows.
Section 2 describes the concept of the parasitic robot and
the experimental setup. In addition, details of the para-
sitic robot and turtle are presented. In section 3, the ex-
perimental results are provided. In section 4, we discuss
the results and future work is described. In section 5,
conclusions of the study are presented and the contribu-
tions of this research are summarized.
2 Material and methods
We tested an example of the parasitic robot concept.
In this study, the turtle was selected as the host animal
and a parasitic robot was designed to induce the turtle to
navigate between waypoints.
2.1 Parasitic robot system concept
Parasitism is a life form relationship between two
organisms: one is a parasite, and the other is the host. A
parasite lives inside or on the body of a host, either
temporarily or permanently. It benefits from the host,
such as by removing nutrients from the host to sustain
itself and reproduce. Certain kinds of parasites can ma-
nipulate the behavior of the host to increase the prob-
ability of its own reproduction. For example, a three-
spined stickleback (Gasterosteus aculeatus) infected
with a bird tapeworm (Schistocephalus solidus) behaves
in a way that increases its exposure to piscivorous (car-
nivorous) birds. This behavior enables the tapeworm to
lay its eggs in the stomach of the bird. The eggs are then
widely spread through the feces of the bird[19]. Likewise,
some parasites can change the behaviors of their hosts
through special interactions.
Similarly, in the proposed concept of a “parasitic
robot,” a specific behavior is induced by the robot in its
host to benefit the robot. The robot attaches to its host in
a way similar to an actual parasite, and it interacts with
the host through particular devices and algorithms. This
concept and the relationship between the parasitic robot
and the host animal are shown in Fig. 1. The parasitic
robot can achieve a task assigned by the human operator
by using the locomotion abilities of the host. This is
because the parasitic robot can induce the behavior of
the host through stimulus-response training. We believe
that this proposed architecture based on parasitic rela-
tionships often observed in nature can be applied to
various animals, robots, and algorithms.
In this study, we employ the “operant conditioning”
method, first described by Skinner, to induce animal behavior via the parasitic robot[20]. Operant con-
ditioning is a type of training that reinforces a certain
behavior toward a particular stimulus through reward or
punishment. Skinner demonstrated this notion with a
cage (Skinner box) that provides specific stimuli and
corresponding rewards or punishments. Likewise, the
parasitic robot can be regarded as a portable Skinner box
that is used to control the behavior of an animal. The
target animal is trained to exhibit the desired behavior
through its interaction with the parasitic robot. The
parasitic robot continues to provide stimuli and feedback
to prevent a decrease in its maneuverability. Many ad-
vanced animals possess a degree of cognitive ability and
intelligence that make them receptive to operant condi-
tioning and thus, well suited to the proposed parasitic
robot system.
Fig. 1 Overview of interaction between parasitic robot and host.
The parasitic robot borrows the locomotion ability of the host by
various interactions.
2.2 Parasitic robot design for the turtle
We first developed a parasitic robot that can train
and reinforce the behavior that a turtle typically relies on
to seek and obtain food. The parasitic robot presents a
virtual food source to the turtle and thereby induces the
turtle behavior described below.
2.2.1 Turtle as host animal
We chose the turtle as our host animal because it
offers several advantages over other animals to test our
concept. Turtles can effectively sense visible light, and
they have sufficient intelligence and long-term memory
to be trained in a certain behavior through operant con-
ditioning. Furthermore, they are deemed suitable for our
experiment because they move slowly and can thus be
easily observed. In addition, they have a hard shell onto
which the robot could be easily mounted. The parasitic
robot was mounted on the upper shell of a turtle, as
shown in Fig. 2.
The turtles used in this study were red-eared sliders
(Trachemys scripta elegans). Five turtles were housed in
a water-filled glass tub (91 cm × 66 cm × 16 cm) during
the laboratory experiments. The glass tub had a water
temperature controller, filter, dry platform for basking,
and an ultraviolet (UV) sunlamp. Turtles tend to sun-
bathe for six or seven hours each day. The turtles were
typically fed three times a week during the experimental
period. After at least 24 h without feeding in the tub, the
turtles were moved to the main water tank for the experiments.
2.2.2 Robot as parasite
The parasitic robot consisted of three parts: a stimulation module, a reward module, and a control module.
Fig. 2 The parasitic robot is mounted on the carapace of the turtle. It induces the turtle to move to the waypoint by using a head-up Light Emitting Diode (LED) display as well as rewarding the turtle with food when it performs well.
Fig. 3 illustrates the interaction between the parasitic
robot and turtle. To lead the turtle to the waypoints, the
parasitic robot monitored the current position and head
angle through a comparison with the given waypoints. It
thus provided an appropriate stimulus and reward to the
turtle. For operant conditioning, it continued to train and
control the turtle while completing a navigation task.
The stimulation module guided the turtle to a de-
sired location by providing appropriate stimuli. The
turtle relies on its good vision for its movement deci-
sions. Thus, we used visual stimulation by means of red
Light Emitting Diodes (LEDs) with a wavelength of
635 nm, which is within the range of the visual dis-
crimination of the turtle. Our parasitic robot was designed to
accomplish the waypoint navigation task by providing
visual cues to the host. We devised a heads-up LED
display consisting of a round carbon-fiber frame with
five LEDs to provide the visual cues. The LEDs were
installed in the frame at 30˚ intervals to cover a 120˚
range of movement. It was mounted on the turtle, al-
lowing it to easily view the LEDs in front of its eyes.
The reward module reinforced the behavior of the
turtle in response to the visual stimulation. Specifically,
when the turtle effectively responded to the LED
stimulation, the module ejected a gel-type food from a
syringe using a linear servo motor (PLS-5030, PoteNit).
Thus, the behavior of following the LEDs was trained by
operant conditioning. As the training progressed, the
parasitic robot caused the host turtle to follow the LEDs
through the positive reinforcement of the operant conditioning.
The control module consisted of a microcontroller
board (ATMEGA8_Xbee Board, TESOL) and a ZigBee
radio modem (XBP24-AWI-001, DIGI) to transmit the
position, head angle, and waypoint information. The
Fig. 3 Schematic diagram of parasitic robot system and turtle.
The parasitic robot uses LEDs as a visual cue to train the turtle to
follow an object through operant conditioning with rewards.
control module operated the stimulation module and
reward module through the application of a navigation
training algorithm for operant conditioning. The entire
parasitic robot (19.36 cm × 10.84 cm, 133.5 g) was wa-
terproof to enable its operation in water, and it was
firmly attached to the turtle shell with an epoxy resin.
2.2.3 Training method
As mentioned above, the operant conditioning
training method was used through the parasitic robot to
induce the desired behavior in the turtle. In the initial
trials, none of the turtles recognized the information
conveyed by the virtual stimulation (the illuminated
LED was a signal for food). Therefore, all five turtles
were guided to connect the reward (food) with the
stimulus (illuminated LED) before interacting with the
parasitic robot. This “shaping process” is a teaching
method that is often used when the host animal cannot
recognize the stimulus before the start of operant condi-
tioning[21]. For 10 min at each meal time over two weeks,
a red LED was arbitrarily lit, and food was provided only
at the location of the lit LED. The turtles began to rec-
ognize the stimulus and eventually followed it. As a
result, after two weeks, each turtle showed the same
behavior; they all recognized the lit LED to obtain food.
After the shaping stage, we conducted training
experiments to test our parasitic robot system with the
five turtles. Before the training session began, we at-
tached the parasitic robot to the turtle and positioned the
turtle at a certain starting position in the water tank. In
each training session, three stages were involved in a
waypoint route:
Stage 1: Recognize: The turtles were trained to
recognize the lit LED controlled by the stimulation
module of the parasitic robot.
Stage 2: Follow: The turtles were trained to walk in
the direction of the illuminated LED. The parasitic robot
then guided the turtle to each waypoint by controlling
the stimulation module.
Stage 3: Reward: The turtles were rewarded for
walking toward the waypoints. Once the turtle reached
the acceptance area for each waypoint, the parasitic
robot provided food as a reward to further reinforce the
following behavior.
These three stages were repeated by the parasitic
robot from start to end points through five waypoints.
Kim et al.: Parasitic Robot System for Waypoint Navigation of Turtle 331
The navigation training was automatically per-
formed using the parasitic robot. If the turtle failed to
complete the task within 5 min, the session was deemed
complete. The experiment with each turtle was com-
prised of five sessions; each session was performed at a
one-week interval.
2.3 Robot algorithm
As a result of the operant conditioning, the turtle
could follow an LED representing a virtual target on the
way to a waypoint. Thus, if the LEDs of the stimulation
module were continuously controlled to guide the turtle
toward the waypoint, the turtle could reach the target waypoint.
Fig. 4 illustrates the algorithm used to control the
visual cue (LED) that induced the following behavior.
Here, we applied the Line-of-Sight (LOS) guidance
algorithm[22] to guide the turtle p(x, y) to the nth way-
point wn(xlos, ylos) by controlling the stimulation module.
As shown in the figure, the head angle of the turtle θ
indicates the angle between the horizontal line and the
forward direction of the turtle. The LOS angle φlos is defined by
φlos = tan−1((ylos − y) / (xlos − x)).
The direction of the LED is selected by the control angle δcontrol, which is given by
δcontrol = θ − φlos.  (1)
Then, the parasitic robot illuminates the appropriate
LED on the route to the waypoint using the control angle,
δcontrol. As shown in Fig. 4, five LEDs are installed in the
view frame of turtle at 30˚ intervals to cover a 120˚
range. Each LED has a specific angle (num1 = −60˚, num2 = −30˚, num3 = 0˚, num4 = 30˚, num5 = 60˚). Depending on the calculated δcontrol, the robot illuminates the LED with the angle closest to δcontrol.
Fig. 4 Guidance algorithm for the parasitic robot from the (n−1)th to nth waypoints. The parasitic robot gets data on the position of the turtle and waypoint, and controls the lit LED (virtual target point) with a line-of-sight guidance algorithm.
Accordingly, the robot selects a virtual target point
closest to the line between the position and waypoint n
of the turtle. As expected, the turtle moves toward the
illuminated LED. While approaching the waypoint, the
parasitic robot continues to switch on the illuminated
LED to guide the turtle to the target waypoint. If the
position of the turtle p(x, y) satisfies
L ≤ R,  (2)
with L = √((ylos − y)² + (xlos − x)²),
then the (n+1)th waypoint is selected. Hence, the para-
sitic robot ejects food to the turtle as part of the operant
conditioning. Here, R denotes the acceptance distance
for the waypoint. The position, head angle, and waypoint
data of the turtle are continually provided to the parasitic
robot. The parasitic robot was operated in the water tank
as shown in Fig. 5. The robot task flow is described in
Table 1.
Fig. 5 Overview of the experimental setup for navigation (1.5 m × 2.0 m water tank with an overhead CMOS camera). The parasitic robot induces the turtle to move to the waypoint in the water tank.
Table 1 Navigation algorithm for turtle
1. n ← 1
2. while navigating do
3.   Get the waypoint wn(xlos, ylos) and robot pose p(x, y), θ
4.   φlos ← tan−1((ylos − y) / (xlos − x))
5.   δcontrol ← θ − φlos
6.   Turn on the LED which is closest to δcontrol
7.   if L ≤ R then
8.     Activate the feeder and give the reward
9.     n ← n + 1
10. end if
11. end while
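In code, one cycle of this guidance loop might look as follows. This is a minimal Python sketch under our own naming, not the authors' firmware; atan2 is used as the quadrant-safe form of the tan−1 in step 4, and the pose source, LED driver, and feeder interface are left to the caller:

```python
import math

# LED angles on the heads-up display: 30 deg apart, covering 120 deg
LED_ANGLES_DEG = [-60, -30, 0, 30, 60]

def guidance_step(pose, waypoint, R):
    """One cycle of the LOS guidance loop of Table 1.

    pose     -- (x, y, theta): turtle position and head angle (radians)
    waypoint -- (x_los, y_los): current target waypoint w_n
    R        -- acceptance distance for the waypoint
    Returns (led_index, reached): index of the LED to light, and whether
    the waypoint was reached (i.e., the feeder should be activated and
    the next waypoint selected).
    """
    x, y, theta = pose
    x_los, y_los = waypoint

    # LOS angle to the waypoint; atan2 handles all quadrants safely
    phi_los = math.atan2(y_los - y, x_los - x)

    # Control angle, Eq. (1), wrapped into [-180, 180) degrees
    delta = math.degrees(theta - phi_los)
    delta = (delta + 180.0) % 360.0 - 180.0

    # Step 6: light the LED whose angle is closest to the control angle
    led_index = min(range(len(LED_ANGLES_DEG)),
                    key=lambda i: abs(LED_ANGLES_DEG[i] - delta))

    # Step 7: distance L to the waypoint; reward when L <= R
    L = math.hypot(x_los - x, y_los - y)
    return led_index, L <= R
```

For example, a turtle at the origin heading along the x-axis with a waypoint up and to the right is cued with an LED on that side of the display, and the reward branch fires only once the turtle is within the acceptance radius.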
3 Results
3.1 Evaluation metric
In this study, the parasitic robot trained the turtle in
following behavior through repeated operant condi-
tioning. We designed a “reaction speed” metric to
evaluate the training level of turtle during each experi-
mental session. By using this metric, we checked how
quickly and accurately the turtle moved toward the LED
to evaluate the performance of the parasitic robot. As
shown in Fig. 6, the parasitic robot provides an LED
stimulus at position pt and induces the reaction of the turtle during a time step Δt. We measured the displacement U of the turtle at each Δt and calculated the metric
VLED = U cos(θLED) / Δt  for −90˚ < θLED < 90˚,  (3)
where VLED is the speed of reaction toward the LED stimulation, that is, the velocity of movement toward the LED stimulation source, and θLED is the angle between the displacement of the turtle and the direction toward the lit LED. If the metric is negative, it is set to zero for data comparison. We evaluated the navigation performance of the parasitic robot using this metric.
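Eq. (3) amounts to projecting the turtle's displacement onto the direction of the lit LED. A sketch of the computation follows; the function and argument names are ours, and we assume the displacement and LED direction are available as 2D vectors:

```python
import math

def reaction_speed(displacement, led_direction, dt):
    """Reaction speed V_LED toward the LED stimulus, Eq. (3).

    displacement  -- (dx, dy): turtle displacement U over the time step
    led_direction -- (ux, uy): direction from the turtle to the lit LED
    dt            -- time step (s)
    Negative values are clipped to zero, matching the paper's convention.
    """
    dx, dy = displacement
    ux, uy = led_direction
    U = math.hypot(dx, dy)
    if U == 0.0:
        return 0.0
    # cos(theta_LED) via the normalized dot product of the two directions
    cos_theta = (dx * ux + dy * uy) / (U * math.hypot(ux, uy))
    v = U * cos_theta / dt
    return max(v, 0.0)  # zero when |theta_LED| >= 90 deg
```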
3.2 Training results
As shown in Fig. 7, the parasitic robot trained the
turtle to move through a 5 m optimal path, which was a
straight line between the five waypoints in the water tank
(1.5 m × 2.0 m). The parasitic robot guided the turtle to
approach each waypoint with the stimulation module
and reward module based on the training algorithm. The
turtle sequentially passed through five waypoints. All of the turtles were successfully trained to navigate the specific trajectories by passing through all the waypoints.
Fig. 6 Graphic User Interface (GUI) of monitoring system.
Fig. 7 Illustration of reaction speed of turtle toward the LED stimulation given by the parasitic robot.
To evaluate how well the turtles followed the
stimulus as the session proceeded, we determined the
average reaction speed toward the LED (VLED) on the
path of each turtle. This describes the turtle training level.
Fig. 8 shows the learning curve for average VLED derived
in each session. The repeated analysis of variance
(ANOVA) measure[23] showed that the average VLED of
each turtle between the first and fifth sessions was sig-
nificantly different (F0.05, 4, 16 = 20.183, p < 0.001). All five turtles followed the LED of the parasitic robot from the first day (average VLED = 2.36 mm·s−1). During the ex-
periments, the strength of the following behavior of the
turtles was gradually reinforced by the parasitic robot.
The average VLED increased dramatically across sessions: 2.36 mm·s−1, 3.03 mm·s−1, 5.63 mm·s−1, 9.52 mm·s−1, 10.24 mm·s−1, and 48.03 mm·s−1, respectively. The
average performance score of the following behavior
showed an increasing curve with a rate of 333.41%
between the first and fifth weeks. In particular, Turtle 3
showed the highest increasing curve with a rate of
504.07%, while the performance of Turtle 5 improved
most slowly at 67.46%. Although each turtle did not
initially exhibit the following behavior, the reaction
speed of the turtles gradually improved as the parasitic
robot continued to reward the turtles when they followed
the illuminated LED. Differences also existed in the
learning among individual turtles.
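The repeated-measures ANOVA quoted above (4 and 16 degrees of freedom, consistent with five turtles over five sessions) can be reproduced from a complete subjects × sessions table of average VLED values. A from-scratch sketch, assuming a balanced design with no missing sessions:

```python
def repeated_measures_f(data):
    """F statistic of a one-way repeated-measures ANOVA.

    data -- list of per-subject lists: data[i][j] is the measurement of
            subject i (turtle) in condition j (session); balanced design.
    Returns (F, df_between, df_error).
    """
    n = len(data)        # number of subjects
    k = len(data[0])     # number of conditions
    grand = sum(sum(row) for row in data) / (n * k)
    cond_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    subj_means = [sum(row) / k for row in data]
    # Partition the total sum of squares; removing the subject term from
    # the error is what distinguishes the repeated-measures design from
    # an ordinary one-way ANOVA.
    ss_between = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subject = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((v - grand) ** 2 for row in data for v in row)
    ss_error = ss_total - ss_between - ss_subject
    df_between = k - 1
    df_error = (n - 1) * (k - 1)
    f_stat = (ss_between / df_between) / (ss_error / df_error)
    return f_stat, df_between, df_error
```

With n = 5 turtles and k = 5 sessions this yields the degrees of freedom (4, 16) reported above; the F value itself would require the per-turtle session data.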
Fig. 9 shows the travelled trajectories of the five
turtles that were recorded in the fifth session. The dotted
lines indicate the traveled pathways. The circles repre-
sent the waypoints, and the desired pathway is denoted
by the solid line. In the fifth session, each turtle suc-
ceeded in navigating all of the waypoints as a result of
the stimulation provided by the parasitic robot after
operant conditioning.
Table 2 summarizes the characteristic values of the
travelled trajectories shown in Fig. 9. The average travel
distance and elapsed time were 7.18 m and 75.07 s,
respectively. The average cross-track error for the five
turtles was 18.83 cm. The cross-track error is defined as
the shortest distance between the desired path line and
current position of the turtle. The average cross-track
error indicates how accurately the turtle moves toward
the desired path during the waypoint navigation mission.
The data represent the mean ± Standard Error of the Mean (SEM). As shown in the table, the values repre-
senting the length of the travelled trajectory, elapsed
time, and average cross-track error of Turtle 5 are much
greater than those of the other turtles. In other words, the
well-trained turtles with a higher VLED were more accu-
rately and rapidly guided toward the waypoints.
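The cross-track error of Table 2, defined above as the shortest distance between the desired path and the current position, can be computed per path leg as a point-to-segment distance. A sketch with names of our choosing:

```python
import math

def cross_track_error(p, a, b):
    """Shortest distance from point p to the path segment a-b.

    p    -- (x, y): current turtle position
    a, b -- segment endpoints (consecutive waypoints of the desired path)
    """
    px, py = p
    ax, ay = a
    bx, by = b
    vx, vy = bx - ax, by - ay
    seg_len2 = vx * vx + vy * vy
    if seg_len2 == 0.0:
        return math.hypot(px - ax, py - ay)  # degenerate segment
    # Project p onto the segment, clamping to its endpoints
    t = max(0.0, min(1.0, ((px - ax) * vx + (py - ay) * vy) / seg_len2))
    cx, cy = ax + t * vx, ay + t * vy
    return math.hypot(px - cx, py - cy)
```

Averaging this quantity over all logged positions of a run, against the nearest leg of the desired path, gives the per-turtle figures of Table 2.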
Fig. 8 Learning curves for average reaction speed toward the LED stimulation (mm·s−1).
Table 2 Navigation path characteristics in the 5th session
         | Travel distance (m) | Elapsed time (s) | Cross-track error (mm) | Average VLED (mm·s−1)
Turtle 1 | 6.16 | 57.02  | 170.98 ± 2.57 | 51 ± 0.99
Turtle 2 | 8.51 | 88.62  | 134.21 ± 1.32 | 46 ± 0.60
Turtle 3 | 5.87 | 45.02  | 169.32 ± 3.32 | 60 ± 1.16
Turtle 4 | 5.89 | 52.93  | 162.91 ± 2.79 | 55 ± 1.14
Turtle 5 | 9.45 | 131.78 | 244.11 ± 1.57 | 27 ± 0.67
Fig. 9 Travelled trajectories of turtles in the 5th session. A movie
of this figure is available at:
4 Discussion
We performed the training experiments with the
turtles to demonstrate the validity of our concept system.
The training test using the parasitic robot was success-
fully implemented, and the virtual stimulus (heads-up
display) of the parasitic robot guided the turtles to move
through predetermined routes. The performance of the
following behavior of the turtles, which were trained by
the parasitic robot, improved at a rate of 333.41% (Fig. 8). On the last day of the
experiment, the average cross-track error of the way-
point navigation paths was only 3.76% in the 5 m pre-
determined routes (Table 2). The results of these tests
showed that our parasitic robot can be successfully op-
erated with an animal-robot interaction. In particular, the
test validated the possible use of our concept system in
which a robot can assume the role of a parasite on a host animal.
Our proposed system presents the idea of a hybrid
animal-robot interaction. Through a combination of
simple robotic technology and traditional learning the-
ory, our system mimics parasitic relationships in nature.
The parasitic robot induces a specific behavior in its host
to benefit the robot. In this study, we selected the turtle
as the host animal and demonstrated the validity of our
system through the simple animal-robot interaction. As
an interaction example, we chose the “operant condi-
tioning” training method[20]. Operant conditioning is a
type of training method that reinforces a certain behavior
using a particular stimulus and reward. Likewise, the
parasitic robot trains and reinforces the following be-
havior of the turtle with the stimulus module (heads-up
display) and reward module (feeder). As mentioned,
lower-level animals effectively react to virtual reality or
artificial environments. Therefore, we developed the
parasitic robot to provide a virtual visual stimulus for the turtle.
This study was our first attempt to test this idea. In
this research, we focused on the concept design with a simple training method and robot algorithm. However, further studies are required to apply various algorithms and increase the intelligence of the robot for real application tasks. The results showed that the parasitic robot seems to guide the turtles well. However, to date, this concept
system remains unsuitable for real applications. In the
real environment, turtles are affected by external stimuli,
such as obstacles, light, and vibration. This presents a
problem in terms of the stability and accuracy of the system.
To apply this idea in real application tasks, future
work should increase the reliability of the system. To
this end, our system can be a fully portable virtual reality
environment with the development of virtual reality
technology and an enhanced sensor system. This system
would eliminate the external stimulus, and the robot
could obtain the information of obstacles and pathways
through sensors and a planning algorithm. The host
animal would experience only virtual visual information
from the parasitic robot without the external stimuli of
the real environment.
Additionally, we can combine energy-harvesting
technologies with the parasitic robot system. Thus, the
parasitic robot could charge itself through the movement
of its host animal. This idea can increase the operational
time of the system. Moreover, the robot intelligence can
be developed by applying various robotics technologies,
such as infrared sensors and path planning algorithms.
After these technological enhancements, it is expected
that fully automatic animal control through the ani-
mal-robot interaction would be enabled in various task
applications. Unlike robots, animals can obtain their
own food and recover their stamina from the natural
environment. Thus, they are capable of long-range and
long-term missions, even in harsh environments such as
dense forest and desert.
5 Conclusion
In this paper, we proposed a concept for a hybrid
animal-robot interaction, which we call a “parasitic ro-
bot”. The robot can take the role of a parasite on the
turtles (host) and induce the following behavior through
“operant conditioning.” The robot reinforces the be-
havior of the turtle using a virtual cue (LED) as a stimulus and gel-type food as a reward. We demonstrated
its validity through a training test, whereby the turtles
performed a waypoint navigation task. The experiment
results showed that the proposed system is effective for behavior reinforcement and can thus automatically control movement toward
the waypoints. We expect that our research can be used
as an innovative framework for robot-animal interaction
systems. In the future, our system could be used in real
applications of long-range and long-term missions with
the development of a fully virtual stimulus frame and
various robot algorithms.
Ethical approval
All procedures performed in studies involving
human participants were in accordance with the ethical
standards of the institutional research committee and
with the 1964 Helsinki declaration and its later
amendments or comparable ethical standards. Also, all
applicable international and institutional guidelines for
the care and use of animals were followed. All proce-
dures performed in studies involving animals were in
accordance with the ethical standards of the institution
or practice at which the studies were conducted.
Acknowledgment
This research was supported by the KAIST High
Risk High Return Project (Grant No. N10120009) and a
grant [MPSS-CG-2015-01] through the Disaster and
Safety Management Institute funded by Ministry of
Public Safety and Security of Korean government. The
funders had no role in the study design, data collection
and analysis, decision to publish, or preparation of the