From Components to Caring:
The Development Trajectory of a Socially Therapeutic
Assistive Robot (STAR) Named Therabot™
Cindy L. Bethel, Zachary Henkel, Kenna Henkel, Jade Thompson, and Kyler Smith
Department of Computer Science and Engineering
Mississippi State University
Mississippi, USA
{clb821, zmh68, kbb269, jet475, kss445}@msstate.edu
Abstract—Designing a socially therapeutic assistive robot
(STAR) for use in mental healthcare is an emerging interdis-
ciplinary process, for which there is a lack of comprehensive
guidance and standards. This paper provides insight into the
design and development process of the socially therapeutic
assistive robotic dog, Therabot™. By outlining the project’s
eleven-year history, exploring a snapshot of its current state,
and discussing its trajectories we aim to share a useful glimpse
into the processes that guide the creation of a STAR.
1. Introduction
The use of social robots in the mental healthcare domain
for therapeutic purposes holds great promise and considerable
potential for positive societal impact. However,
the process of shepherding a robotic platform from design
sketches, through hardware/software prototype development,
cycles of iterative refinement, and ultimately into the role of
a useful therapeutic entity is not trivial. While social robots
have undoubtedly had measurable positive impacts upon the
world, the scope of opportunities remains far-reaching.
This paper reflects on the design path of the socially ther-
apeutic assistive robot (STAR) named Therabot™. Through
this examination we seek to present and preserve a cross
section of the research and development processes that are
less frequently examined in literature. Although Therabot™
has made substantial progress as a robotic platform over the
past eleven years, a path of tremendous refinement remains.
We begin by providing background on social robots in
mental healthcare and approaches to their development in
Section 2. In Section 3, we describe the most notable por-
tions of Therabot’s creation with increasing detail on more
recent developments. Subsequently, Therabot’s design goals
and development methodology are presented in Section 4,
followed by an overview of ongoing development activities
described in Section 5. Finally, in Section 6 we conclude
with a look toward the future and discuss the areas we
believe will be most relevant over the coming years.
2. Related Work
Therabot™’s intended purpose is to provide a therapeu-
tic benefit in clinical mental healthcare. This section reviews
the domain of social robots within mental healthcare and
provides background on the approach to system creation
and refinement known as participatory design.
2.1. Socially Assistive Robots in Mental Healthcare
Socially assistive robotics (SAR), a specialized area of
social robotics focused on robots that help users via
social interaction [1], is a rapidly developing and vibrant
area of study within human-robot interaction (HRI) research.
A subset of research efforts in SAR focuses specifically
on the use of socially therapeutic assistive robots (STARs)
in clinical mental health interventions. These efforts are
distinct from other social robotics endeavors in that they
are typically developed in collaboration with clinical experts
and are intended to ultimately produce a therapeutic effect
validated at the same level as other treatment components.
In a 2015 review, Rabbitt et al. examined the use of
SARs within mental healthcare and presented priority
directions for related research [2]. This review highlighted
the limited scope of SAR applications in mental
healthcare, the lack of a strong evidence
base for their effectiveness, and a need for expanded and
closer collaboration between mental health professionals and
HRI researchers. Specifically, SAR applications with the
most rigorous evidence bases were those focused on autism
spectrum disorder and older adults experiencing cognitive
decline. A systematic review of randomized controlled trials
(RCT) of robots for mental health interventions conducted
in 2019 by Robinson et al. reveals a relatively similar
landscape with work in this area remaining sparse and
still in early stages [3]. This sparsity is found across three
dimensions: (1) the social robots utilized, (2) the problem
areas treated, and (3) the patient groups examined. The most
investigated areas remain children with autism spectrum
disorder and older adults. Other recent systematic reviews
have also described the limited nature of evaluations of
SARs and highlighted the opportunities for expansion in
scope, populations, and evaluation measures [4], [5], [6].
While prior research has been limited in scope, it is
important to note that broader contributions are emerging
as more research focuses on SARs. For example, Robinson
et al. have recently conducted RCTs focused on behav-
ioral interventions related to calorie intake [7] and mindful
breathing techniques [8]. Focusing on the domain of medical
diagnostic procedures, Trost et al. conducted trials that
demonstrated SARs were useful in decreasing pain and fear
during pediatric IV placement [9].
Apart from the resource-intensive efforts required, a
formidable challenge in the development and evaluation of
SARs for mental healthcare is the vast array of funda-
mental questions related to ethics and privacy that must
be addressed for each use case. For a comprehensive ex-
amination of these issues we recommend review articles
by Lutz et al. [10], Koutsouleris et al. [11], Fiske et al.
[12], and Khanna et al. [13]. As emphasized by Rabbitt et
al., establishing evidence-based treatments involving SARs
within mental healthcare is a lengthy process, which requires
substantial investment across disciplines and poses a wide
range of design questions and integration challenges [2].
As described throughout this paper, our experiences striving
to develop Therabot™ into a useful clinical tool within
mental healthcare reflect the many challenges documented
by researchers working toward similar goals.
2.2. Participatory Design
Participatory Design (PD) is a design approach that em-
phasizes empowering all stakeholders to actively collaborate
with designers and developers in shaping the way a system
is constructed and deployed [14]. This approach involves
the stakeholders of a system in the generation of ideas and
design rather than exclusively in a requirements issuing or
evaluative role. The PD process may occur across phases
of a system’s development, involving stakeholders in an
iterative process until a suitable solution emerges.
Most relevant to Therabot™’s design process is the PD
methodology described by Lee et al. [15] for the purpose
of designing social robots to support mental healthcare.
This methodology encourages participants (e.g., clinicians,
clients, staff, etc.) and researchers to engage in a process
of mutual learning, aiding in achieving a common ground
in which each can present their concerns and work toward
solutions. In this paradigm users are considered contextual
experts and researchers strive to gain a comprehensive un-
derstanding of the contexts experienced by each user of the
system. This is achieved through a series of interactions
(i.e., workshops) in which researchers and participants work
together to share knowledge and iteratively design a system.
Initial sessions work to familiarize participants with existing
robots and gain feedback about these systems and how they
might fit into the participant’s daily activities. Participants
and researchers then work together iteratively designing
new robotic systems that could support the participant’s
activities. Throughout these cycles, researchers provide par-
ticipants with insights into the technical aspects of social
robots and present opportunities for interactive understand-
ing of components like actuators and sensors. Ultimately the
process yields a robotic system shaped for the nuances of
the specific context it will inhabit.
3. Design Trajectory
Since Therabot™’s inception in 2012 the platform has
accumulated a rich set of design and development ap-
proaches, successes, failures, and iterations. In this section
we provide a brief summary of the first seven years of
platform development (for a thorough description of these
years see [16]), followed by a more detailed accounting of
significant developments over the past four years.
3.1. Foundational Work (2012-2018)
Therabot™, designed to resemble a stuffed-animal dog,
is the culmination of extensive research and collaboration
[16]. The robot’s outer appearance was informed by feed-
back from an online survey of 1045 participants1. Utilizing
results from this design survey, we then began evaluating
which qualities were most important for a support com-
panion by conducting an open-ended survey of 36 past
trauma survivors regarding their experiences using comfort
objects and potential desired features for a robotic support
companion. Before transitioning to the prototyping phase,
we also conducted an open-ended survey with 13 clinicians
to explore their anecdotal experiences with patients and
how they envisioned utilizing a support companion in their
therapeutic practices.
1. Details available at: https://stars.msstate.edu/designsurvey
Figure 1: Therabot™’s internal structure at major milestones throughout the project’s history.
Following an analysis of the data collected in these
studies, a set of requirements was synthesized and an ex-
ternal consultant was commissioned to fabricate the robot’s
hardware while the research group developed software [17].
Ultimately, after extended delays, the external consultant
was unable to deliver a hardware platform that met the
design requirements or functioned reliably enough to be
useful in research.
In late 2014, the research team began development of
both the hardware and software systems. The form-factor
of a beagle-shaped stuffed animal was used as a starting
point for the overall form. This concept was then refined to
accommodate the integration of electronic and mechanical
components, necessitating space within the outer layer along
with ample padding to create a soft feel. For the outer layer,
two different “mini minky shaggy” fabrics were selected.
These fabrics were characterized by their soft, fur-like
texture, with a longer pile used for the body fur and a
shorter pile for the muzzle and the bottom of the paws. The robot’s mechanical
structure was surrounded by a padding layer encased in
4-way stretch material. The padding was then covered by
separate exterior coverings for each leg, the head, and body.
The exterior covering of the head and body connect at
the base of the robot’s neck and along the underside of
the robot using hook-and-loop fasteners. Additional padding
was sometimes added under the covering and on top of
the encased padding to fill in low spots in the covering
to shape the robot. The initial prototype, demonstrated at
the 2015 HRI conference [18], was fabricated by hand with
the primary internal structure composed of two flat wooden
platforms held together by a thick rubber strip. Components
attached to the internal structure included actuators for
each leg, a three degree-of-freedom neck, two degree-of-
freedom tail, piezoelectric and capacitive touch sensors, an
inertial measurement unit, and a microphone. The design
progression of the internal structure of the robot over time
is shown in Figure 1.
The initial prototype was improved by using CNC ma-
chining and 3D printing to build the internal structure. The
internal padding was updated to facilitate easier assembly
and operation. Following these updates, in early 2016 we
conducted a collaborative design study where sixteen par-
ticipants interacted with two versions of Therabot™ (one
with the robotic structure, stuffed and covered, but powered
down and the other a stuffed animal only, with no robotic
structure) and completed a semi-structured interview (see
[16] for further detail).
Following this design study, the robot’s hardware and
software were further refined with an emphasis on creating
interactive behaviors that allowed the robot to be responsive
to its surroundings. The robot’s touch sensing system was
optimized using capacitive fabric located throughout the
body paired with basic machine learning approaches to
classify touch interactions. The robot’s internal mechanical
structures transitioned to be mostly 3D-printed and the collar
was modified to include RGB LEDs to indicate the robot’s
status and power connectors for charging were located in a
leash attached to the collar. The software architecture was
refined to provide higher level abstractions for designing the
robot’s movements and stimulus-based behaviors.
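To illustrate the kind of higher level abstraction described here, the following minimal sketch (in JavaScript, the platform's primary software language) shows one way a stimulus-based behavior layer could be structured. The names and API are illustrative assumptions, not Therabot's actual code.

```javascript
// Minimal sketch of a stimulus-to-behavior abstraction (hypothetical API;
// names like Behavior and onStimulus are illustrative, not Therabot's code).
class Behavior {
  constructor(name, actions) {
    this.name = name;       // e.g., "tailWag"
    this.actions = actions; // ordered list of servo/audio commands
  }
}

const behaviors = new Map();

// Register a behavior to run when a named stimulus occurs.
function onStimulus(stimulus, behavior) {
  behaviors.set(stimulus, behavior);
}

// Dispatch incoming sensor events to the matching behavior, if any.
function handleStimulus(stimulus, executeAction) {
  const behavior = behaviors.get(stimulus);
  if (!behavior) return;
  for (const action of behavior.actions) executeAction(action);
}

// Example: a stroking touch on the back triggers a tail wag and a sound.
onStimulus('touch:back:stroke', new Behavior('tailWag', [
  { servo: 'tail-pan', angle: 30 },
  { servo: 'tail-pan', angle: -30 },
  { sound: 'contented' },
]));

handleStimulus('touch:back:stroke', (a) => console.log('execute', a));
```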
Therabot™’s internal structure was composed of 3D-
printed ASA plastic sections, a pivoting joint, and rubber
legs actuated by Dynamixel AX-12A smart servos. Rubber
legs were used to allow flexibility during interactions. The
neck and head assembly used three Dynamixel servos for
pan, tilt, and roll, and two SG90 servos attached to the head
actuated the robot’s ears. The plastic sections providing the
structure for the robot’s body housed components including
a Raspberry Pi 3A+, a USB Dynamixel motor interface,
a Teensy 3.2 microcontroller, a lithium-ion battery, and an
audio surface transducer. Sensors, an inertial measurement
unit and boards to support capacitive touch sensing, were
Figure 2: Therabot™’s current exterior appearance.
also supported by this frame. The software was primarily
written in JavaScript and executed in a Node.js runtime,
excluding the microcontroller code. A web interface allowed
configuration of the robot and provided status information
to represent its sensing and actuation. This version was
demonstrated at the 2018 HRI conference [19].
3.2. Recent Work (2019-2023)
In early 2019 we conducted a study in which 32 par-
ticipants engaged in a structured interview about anxiety
(derived from the American Psychiatric Association’s Struc-
tured Clinical Interview, SCID-5-CV) while holding either
an interactive Therabot™ robot, a stuffed animal version of
Therabot, or nothing. Following the interview, all partici-
pants interacted with and provided design feedback related
to the interactive version of Therabot (results are reported in
[20]). Section 5 in this paper discusses how we are currently
implementing features based on participant feedback.
In order to move Therabot™ toward clinical use, the
project expanded to include multiple HRI research groups
collaborating with each other and with the intended users
of the robot (e.g., clinicians, clients, etc.). Beginning in late
2019 researchers at the R-House Laboratory for Human-
Robot Interaction (led by Dr. Selma Šabanović) at Indiana
University and at Hanyang University (led by Dr. Casey
Bennett) joined with the existing team at the Social, Ther-
apeutic, and Robotics Systems (STaRS) Lab at Mississippi
State University to work toward integrating the robot into
therapeutic interventions for individuals with chronic de-
pression. The R-House research group has valuable expe-
rience and expertise in developing and utilizing participa-
tory design methods for STARs within mental healthcare
contexts, an essential element for guiding the robot toward
producing a useful therapeutic effect. Dr. Bennett has ex-
pertise in developing AI and ML techniques within clinical
contexts, and his group’s focus is on developing models of
sensor data that modulate the robot’s behavior, a critical
element of an adaptive therapeutic system.
Through this collaboration we are working to understand
how Therabot™ can contribute to effective interventions for
individuals diagnosed with chronic depression. As described
further in Section 4.2, our research approach uses an ongo-
ing participatory design process to provide a strong signal
for guiding the design and iterative refinement of the hard-
ware and software that ultimately facilitate the interactions
between humans and the robot.
In the following sub-sections we first summarize the
participatory design work that has been conducted from
2019 through early 2023. Next, we discuss the development
and use of a supplementary hardware device that allowed for
the evaluation of sensors and the collection of contextual
data across a variety of social robots deployed in home
settings. Finally, we describe the technical changes to the
robot that have occurred during this timeframe as a result
of knowledge gained from the participatory design and
supplemental hardware home deployments.
3.2.1. Participatory Design Activities. Over the past three
years, our collaborators at the R-House Laboratory for
Human-Robot Interaction have employed and further devel-
oped participatory design (PD) techniques to work directly
with users to gain critical design insights for Therabot™’s
development. The PD processes drive the technical devel-
opment of Therabot while cultivating social and clinical
practices that are essential to the usability and effectiveness
of the robot in a clinical setting. PD work began with the de-
velopment of seven workshops for individuals with moderate
to severe depression (client workshops) and five workshops
for therapists and case workers (clinician workshops).
The client workshops begin by introducing participants
to key concepts like STARs and sensing technologies. Par-
ticipants then design a customized robot while keeping
contextual factors in mind. This design is refined throughout
several sessions. Using techniques like “design fiction,”
participants explore how their design would function in their
daily lives. Finally, participants spend time reviewing their
custom designs and conveying their feedback.
Like the client workshops, clinician workshops begin
with an introduction to key concepts and progress into
designing a robot based on their experiences with their
clients. Through design fiction prompts, clinicians provide
insights into how they envision robots being implemented
in specific scenarios. Finally, clinicians review their robot
designs and those of their clients.
In both workshop series, participants spend time exam-
ining a variety of existing robots including Therabot™ in
its current form. The design process gathers insights for the
design of Therabot through direct feedback on its present
design and through the general concerns and design ideas
that participants raise about their context.
The client and clinician series of workshops have been
completed by ten and five participants, respectively, and the
cumulative findings are undergoing analysis. Additionally,
in a condensed version of the client workshop series ten
individuals living with depression designed unique exterior
appearances for Therabot, discussed robot behaviors, and
provided insights concerning privacy and data collection.
While a variety of physical robot designs were produced,
participants converged on a preference for a soft exterior, the
desire for realistic sounds and movements, and a willingness
to share data with their therapists [21].
3.2.2. Modular Sensor Collar. As our team began conduct-
ing initial participatory design activities in early 2020, the
emergence of the global pandemic necessitated rethinking
the best way to conduct our research. As mental healthcare
professionals were overwhelmed and the ability to interact
in-person was constrained, we shifted our focus to devel-
oping a sensor suite that provides the robot with important
environmental context.
Expanding upon prior work [22], [23], [24], initiated
within the R-House research group, the team developed a
modular sensor suite fashioned as a pet collar for attachment
to zoomorphic robotic pets used in home environments2.
Employed as an adjunctive device for robotic companions,
the sensor collar aids in understanding human-robot inter-
actions in naturalistic contexts while taking into account the
competing needs of flexibility, cost, privacy, and durability.
Figure 3: Fully assembled V1 (top) and V2 (bottom) sensor
collar systems with CAD illustrations of their respective
compute modules shown on the right.
The sensor collar enables researchers to experiment with
different sensors based on PD feedback without the sub-
stantial technical investment of integrating them into a robot
platform. This is a cost-effective method to test various sen-
sor technologies before finalizing a robot design. Our design
(shown in Figure 3) employs a modular paradigm where
the sensor collar comprises a central compute module and
one or more sensor modules. The compute module handles
tasks like computation, power, data storage, and the main
user interface; each sensor module offers unique sensing
capabilities and is managed by the collar’s compute mod-
ule. The system resembles a pet collar with attached elec-
tronic modules. These modules connect electrically along
the collar’s underside. The collar was crafted using a nylon
webbing strap, colored ribbon, polyester thread, elastic cord,
side release buckles, and 3D printed tri-glide slides. Wires
between each module are channeled through elastic cord
stitches on the collar’s inside. We designed and deployed
the compute module, a primary multi-sensor module, and
the underlying collar infrastructure.
2. Details available at: https://stars.msstate.edu/sensorcollar
Figure 4: The modular sensor collar worn by Roarin’ Tyler
the Playful Tiger™ & Therabot™.
We deployed our sensor suite (as a collar on a dog or
cat robot, see Figure 4 for an example of the collar being
worn on two different zoomorphic robots) in different
home settings in South Korea and the United States and
established ground truth data by using an ecological mo-
mentary assessment (EMA) smartphone application, which
requested information about participants’ interactions with
or around the STAR three times per day, throughout a 2-
3 week period. Correlating the ground truth data with the
sensor suite data collected within the relevant time frames
and training deep learning models on these labeled examples
produced models with 75%-80% accuracy at identifying
many common household activities [25], [26].
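As an illustration of this correlation step, the sketch below pairs each EMA response with a window of preceding sensor readings to form labeled training examples. The field names, window length, and summary features are assumptions for illustration, not the study's actual pipeline.

```javascript
// Hedged sketch of pairing EMA responses with sensor-collar data windows
// to build training examples (field names and window length are assumed).
const WINDOW_MS = 5 * 60 * 1000; // assume a 5-minute window before each prompt

function buildExamples(sensorReadings, emaResponses) {
  // sensorReadings: [{ t, accel, sound, light }], sorted by timestamp t (ms)
  // emaResponses:   [{ t, activityLabel }] from the smartphone EMA app
  return emaResponses.map(({ t, activityLabel }) => {
    const window = sensorReadings.filter(
      (r) => r.t >= t - WINDOW_MS && r.t <= t
    );
    return { features: summarize(window), label: activityLabel };
  });
}

// Simple per-channel means; a real pipeline would feed raw or richer
// features into the deep learning models described above.
function summarize(window) {
  const mean = (key) =>
    window.reduce((s, r) => s + r[key], 0) / Math.max(window.length, 1);
  return { accel: mean('accel'), sound: mean('sound'), light: mean('light') };
}
```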
3.2.3. Technical Improvements. Across this time period
our technical efforts focused on moving the robotic platform
toward a modular yet robust physical design with an empha-
sis on reliability and consistent fabrication processes. The
robot’s design is continuously evolving, and the hardware
and software components need to perform in a stable man-
ner without imposing rigid restrictions that prevent design
explorations.
Mechanical Design: We transitioned the robot’s me-
chanical design from a set of disparate processes into a
single CAD software system with version control and a
primary fabrication method of 3D printing with ASA ma-
terial on widely available consumer printers. This allows
for consistent tolerances and defined physical feature limits.
Hardware fastener sizes were standardized, with an effort
to use M2 screws paired with heat-set threaded inserts
throughout the system. Documentation for the robot’s full
mechanical design was established at a level that facilitated
fabrication by anyone with ordinary skill in the art.
Actuator Modifications: The robot’s zoomorphic form
factor requires an active approach to managing the heat
generated by actuators and electronics. The robot’s primary
joints are actuated by the widely used and relatively low cost
AX-12A Dynamixel smart servos. In their off-the-shelf form
the actuators, composed of a geared DC motor and driving
electronics encased in ABS plastic, are unable to dissipate
heat quickly enough to remain continuously operational
when inside of the robot. We developed a method for adding
air ventilation channels to the actuator’s standard casing,
which allows heat to be efficiently removed using small fans.
The modification process uses custom designed 3D printed
parts (see Figure 5) that fit around the actuator to facilitate
using a drill in a patterned fashion to create air vents3.
Figure 5: The 3D-printed parts used to modify off-the-shelf
AX-12A actuators (left) along with a modified and
unmodified actuator (right).
Modular Electronics: In order to accommodate elec-
tronic components with differing physical dimensions (e.g.,
as a result of supply chain constraints) and to create a
more robust platform, the robot’s primary structures were
updated to accept modules containing electronic components
rather than mounting those components directly. Additionally, the custom electronics boards
that facilitate the routing of power and signals between
components were encased in printed plastic parts to improve
their durability. In order to support our collaborators assem-
bling and/or fabricating additional systems, the system’s full
design was cataloged and a guided assembly manual and
videos were produced.
Transducer-Based Heartbeat: Prior to our 2019 design
study, the robot’s hardware was updated to include a surface
transducer audio output attached to the front internal struc-
ture. While initially intended to provide fuller sounds that
felt rooted in the structure of the robot, it also proved to be
an effective means of simulating a heartbeat to increase the
sense of aliveness perceived by users.
Hangtag Interface and Collar: To provide an interface
for quick, device-free robot control, updates were made to
Therabot™’s collar to include a dog-bone hangtag interface
with a screen and three buttons. The hangtag interface offers
an easy way to toggle settings such as movement and sound.
The collar serves as an intuitive interface for config-
uration of and communication with the robot, reducing
the need for users to involve a secondary device. It has
an OLED screen (SSD1306, 0.96”, 128x64 pixels), three
buttons, and eight Neopixel LEDs (SK6812RGBW). The
hangtag interface provides toggles for basic functionalities
3. Details available at: https://stars.msstate.edu/ax12acooling
such as movement, sound, and aliveness. It also displays
essential information about Therabot™, including IP address
and battery level. Currently, the collar Neopixel LEDs emit a
light blue glow. However, future designs might utilize these
LEDs to convey more detailed robot statuses. For instance,
a pulsing orange light could indicate a low battery, signaling
the user to charge the robot soon.
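The sketch below models how a three-button, paged toggle interface of this kind might behave; it is an illustrative assumption written in JavaScript, not the collar's actual firmware.

```javascript
// Illustrative model of the hangtag's paged toggle/status behavior
// (a sketch; state fields and button names are assumptions).
const state = {
  movement: true,
  sound: true,
  aliveness: true,
  ip: '192.168.0.42',   // placeholder address
  batteryPct: 87,
};

const pages = ['movement', 'sound', 'aliveness', 'status'];
let page = 0;

// Three buttons: next page, previous page, and toggle the current setting.
function pressButton(button) {
  if (button === 'next') page = (page + 1) % pages.length;
  else if (button === 'prev') page = (page + pages.length - 1) % pages.length;
  else if (button === 'toggle' && pages[page] !== 'status')
    state[pages[page]] = !state[pages[page]];
  render();
}

// Render the current page to the OLED (stdout stands in for the screen).
function render() {
  const current = pages[page];
  if (current === 'status')
    console.log(`IP ${state.ip}  battery ${state.batteryPct}%`);
  else console.log(`${current}: ${state[current] ? 'on' : 'off'}`);
}

pressButton('toggle'); // -> "movement: off"
```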
Over the years, the collar’s reliability and aesthetics have
evolved significantly. The initial design was adapted from a
commercially available dog collar with a paw print design.
This approach had its challenges. For instance, the paw print
design dictated the placement of the Neopixels, requiring
each LED to be individually cut from its original spool,
soldered, then affixed into place on the nylon base of the
collar to correspond with the paw print design. This method
resulted in bumps beneath the ribbon where each LED was
placed. To achieve a cleaner and more adaptable design, we
now custom-make the collars using grosgrain ribbon, vinyl,
nylon, side release buckles, and a crocheted connector for
routing wires to the hangtag interface. This approach allows
for a more discreet embedding of electronics compared to
the previous designs.
Updated Collar and Hangtag Design: In the latest
collar fabrication workflow, nylon is precisely cut to length
using a hot knife. A wooden jig, crafted with a drill press,
aids in creating perfectly spaced holes corresponding to
the Neopixels placement on a standard spool. The nylon
is secured beneath this jig, and by pressing in a hot air
gun tip, perfect circles are formed for the LEDs. Once the
nylon has the necessary holes, it’s aligned with the ribbon,
then a fabric pen is used to mark the location of each hole.
Using a Cricut vinyl cutter, a paw print design, surrounded
by a circle, is plotted; the LEDs will illuminate through the
design’s negative space. The paw print vinyl is heat-pressed
onto the ribbon over the LED’s exact position.
To diffuse the LEDs, given their close proximity to the
ribbon, we’ve experimented with various materials. PolyGel
(an acrylic powder suspended in UV curable gel, typically
used for fingernails [27]) and Model Magic clay (used by
Shah et al. [28] to prototype morphing robots) have shown
the most promise. PolyGel offers a smoother, more diffused
appearance, which, due to its suspended powder formula-
tion, does not noticeably sacrifice luminosity, but once cured
it can expose a hard edge if the nylon band bends too
sharply. On the other hand, Model Magic remains somewhat
soft even after drying, meaning it can flex and deform when
the collar’s placement or angle is changed; however, the
material’s high density can diminish the vibrancy of the
Neopixels. Current deployments of the collar use a thin layer
of Model Magic clay, which provides a soft, evenly diffused glow4.
4. Design Approach
As a social robot focused on supporting therapeutic
mental health interventions, Therabot™’s design is primar-
ily guided by the needs of the therapeutic process. While
4. Details available at: https://stars.msstate.edu/collarinterface
the standards of clinical practice provide important design
guideposts, direct engagement with a diverse group of care
providers (clinicians) and care receivers (clients) through
participatory design provides the nuanced design insights
necessary to build a useful system. Additionally, the inter-
disciplinary nature of social robot development provides a
continuous stream of new ideas and visions for exploration.
In this section we distill Therabot’s core design goals,
examine the sources that spur development, and describe the
timing of events that form the overall development cycle.
4.1. Design Goals
Although the robot’s development is driven primarily
through participatory design and iterative cycles, four high-
level design goals persist across minor and major platform
changes. These goals guide the robot’s primary function as
a therapeutic device and shape its behavior as a social robot:
Do no harm: The robot’s behaviors must recognize
and abide by the ethical principles that govern the
environment it inhabits. As a therapeutic device,
the robot must be cognizant of medical ethics and
perform in a manner that upholds these values.
In the realm of therapy a social robot occupies a
space between an automated tool (e.g., a biofeedback
device) and a human clinical professional. As robots
in this role continue to incorporate modern develop-
ments in AI, the associated uncertainties must be
carefully navigated to ensure continued satisfaction
of this goal.
A sampling of critical areas related to this goal
includes: maintaining privacy [10], producing a ther-
apeutic effect in an empirically testable and accepted
manner [29], understanding context and bias [11],
and preventing long-term attachment [30].
Contribute to the therapeutic process: Ultimately,
the robot should supply a predictable and empirically
verifiable therapeutic effect that is consistent with
evidence-based practice [4], [31]. At a minimum the
robot should provide value to the individual client
as assessed by the clinician.
The robot should be adaptable to a variety of thera-
peutic orientations, cultural contexts, and treatment
plans. Achieving success in this area requires exten-
sive collaboration between developers and clinicians
[12], [13]. This collaboration should leverage the
expertise of each field to identify effects that may be
unique to HRI in specific clinical conditions. For ex-
ample, an initial investigation by Zhang et al. found
that while some rewarding properties of human-
human interaction were diminished for depressed
individuals, the rewarding properties of human-robot
interaction were less likely to be affected [32].
Demonstrate a social understanding of the world:
As a physically embodied social agent, the robot
should understand the world it is situated in and
attend to events appropriately. As humans are often
predisposed to relate socially to robots [33], the
robot should leverage these expectations to facili-
tate an enjoyable interaction. Furthermore, the robot
should be aware of its own form and related implica-
tions. For example, the robot should produce sounds
that are congruent with its physical appearance.
Adapt to diverse circumstances, needs and pref-
erences: As a STAR, the system should adapt to
the needs of the user and the context [34]. This
may include adapting to environmental conditions,
learning from user behavior, or using knowledge
of user traits combined with prior HRI findings to
optimize behavior choices. For example, de Graaf et al.
found that gender and prior expectations about life-
likeness affected an individual’s intention to treat a
zoomorphic robot as a companion [35].
4.2. Design and Development Cycles
While Therabot™’s development is primarily driven by
the needs of its users, these needs often traverse multiple
project teams, disciplines, and design iteration cycles prior
to being incorporated into the platform. In addition to user
needs, development may also be initiated by system perfor-
mance results, new external research findings, or the avail-
ability of new technology. Sources of development activity
can be segmented into three distinct areas:
Participatory Design: The primary input signal for
guiding the design of the platform is generated by
PD activities that directly involve all stakeholders.
These take the form of workshops and collaborative
interactions between those who will use the system
(e.g., clinicians, clients, etc.) and those who will
implement and support the system (e.g., technical
developers, social science researchers, etc.).
Studies, Trials and Performance Data: After suffi-
cient refinement, the robotic platform is deployed in
a variety of contexts from in-lab demonstrations to
interactive use in design workshops to deployments
with clinicians and clients. These deployments gen-
erate a wealth of data regarding how the robot per-
forms and how that performance can be improved.
The ultimate form of performance data for therapeutic
use is generated by randomized controlled trials that
use the robot as a therapeutic device.
Research and Technical Developments: Indepen-
dent of design activities that include all users, tech-
nical developments that allow the robot to perform
better, more efficiently, and at a lower overall cost
are continuously integrated into the platform. This
includes hardware developments like more accurate
sensors and software developments like novel AI and
ML techniques.
Currently, the project has contributors spread across
four universities with one university focused on creating
hardware and software (located in the Southeastern U.S.),
one focused on participatory design with clinicians and
clients (located in the Midwestern U.S.), and two focused
on the collection of sensor data and the development of
machine learning pipelines (located in Seoul, South Korea
and another university in the Southeastern U.S.).
In order to prevent blocking dependencies (i.e., one team
waiting on the output of another), we use a scheme of
scheduling minor and major releases of the platform. This
allows development of hardware and software to continue
while the current iteration is used in PD activities.
Minor releases focus on development stemming from
previously analyzed PD insights or as a result of perfor-
mance data. These releases are designed to provide smaller
improvements to the platform and are fashioned as upgrade
kits that are applied to the current generation platforms as
they complete their PD activities.
Major releases involve design changes that require fab-
rication of new hardware and include major PD insights and
features that require more substantial engineering time and
effort. After a major release, existing systems are returned
for analysis and reuse of components in the following
design cycles. This approach allows more efficient use of
resources, as it assumes design feedback will be generated
and processed asynchronously.
5. Ongoing Development
This section presents a snapshot of ongoing development
work that will remain in progress over approximately the
next year. Divided into three major categories, these activi-
ties are representative of the development goals and scope of
Therabot™. Presented in the following subsections are the
three development categories: (1) Participatory Design, (2)
Technical Platform Development, and (3) Machine Learning
and Context Understanding.
5.1. Participatory Design
Our collaborators from the R-House research team are
currently analyzing data collected in a workshop, which
builds upon insights from deployments of the modular sen-
sor collar by having participants identify the behaviors that
would be useful for the robot to exhibit in specific situations.
Next, workshops focused on the robot’s external appearance
and sensors will begin in the coming months. In collabora-
tion with researchers in the field of costume technology,
prototypes of different dog and cat appearances have been
developed for the robot and will soon be incorporated into
studies and workshops.
In addition to workshops examining specific areas of
robot use and behavior, the R-House research team con-
tinues to perform ongoing PD work with clinicians and
their clients. The next important milestone in this work will
involve deploying the robot into the homes of participants
for a multi-week time interval and engaging in a PD process
to guide further development of the robot.
5.2. Technical Platform Development
As we aim to foster beneficial therapeutic relationships
between users and the robot, the invaluable insights col-
lected from clinicians regarding use cases and therapeutic
applications for Therabot™ coupled with the principles of
participatory design will play a pivotal role in steering the
trajectory of our development. Recent updates to the robot
introduce features such as a capacitive-fabric grid providing
high spatial resolution touch sensing, an adaptive heartbeat,
a haptic subsystem for a sense of aliveness, adaptive
audio generation, and customizable user interfaces.
5.2.1. Basic Performance Improvements. Improving the
robot’s performance and capabilities requires an ongoing
effort to integrate updated components and refine existing
designs for easier fabrication. Currently efforts are under-
way to: upgrade the robot’s smart servos to newer models,
transition all custom electrical boards to PCB designs, re-
duce actuation sounds related to tail wagging, integrate a
more powerful single board computer, transition the front
microcontroller to a newer model, and test alternative joint
configurations for the robot’s neck.
5.2.2. Assembly and Update Kits. In addition to incorpo-
rating platform updates into the documentation and guided
materials for our collaborators to fabricate additional robots,
we are working to understand the best approaches to dis-
seminating this type of information and supporting others
through their fabrication processes. Insights from our first
use of these materials will also help guide our future minor
hardware update kits that are sent to collaborators to provide
small performance updates. As more robots are actively in
use it is essential that the platform is able to be maintained
without the need for extensive external support.
5.2.3. Updates to Hangtag and Collar. Building upon
our previous work with the sensor collar, we are in the
process of designing a new iteration of Therabot’s hangtag
interface. The new design will incorporate sensors from
the collar while offering a more user-friendly interface for
the robot. Our current focus is on designing printed circuit
boards (PCBs) that will integrate with an RP2040-based
round 1.28 inch touchscreen LED panel with a resolution
of 240x240 pixels, to serve as the hangtag interface. This
significant upgrade to the user interface will allow users
to interact with and control the robot in a more intuitive
manner, compared to the previous three-button configuration
that requires cycling through multiple pages of features. The
custom PCB, as a peripheral for the RP2040, will feature a
real-time clock and sensors to measure luminosity, volatile
organic compounds, sound intensity, and movement. The
expanded screen size and enhanced interaction capabilities
provide numerous possibilities for data communication, sta-
tus updates, and more refined user controls.
Of great interest to us is implementing and experiment-
ing with a privacy mode. This mode not only allows users
to determine when environmental data is collected but also
offers the flexibility to enable or disable specific sensors as
per their preference. To foster trust, it is essential to provide
users with clear indicators about the collar’s data collection
activities. For example, we plan to display the number of
records collected, providing a trust cue observable by the
user [36]. When a user pauses data collection, this number
would be greyed out, signaling inactivity. Upon resuming,
the number would become active again, reassuring the user
that no additional data was collected during the pause. Such
transparent communication is pivotal in reinforcing user
trust [37]; furthermore, with the enhanced screen resolution,
we can explore visualizations that offer meaningful insights
to the user. This includes showcasing data categorization
and quantities, detailing environmental attributes, presenting
interaction statistics, and updating the robot’s status.
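A minimal sketch of the record-count trust cue described above, assuming simple in-memory state; the field names and rendering are illustrative assumptions rather than the interface's actual implementation.

```javascript
// Sketch of the proposed privacy-mode trust cue: a visible record count
// that greys out while collection is paused (names are assumptions).
const privacy = {
  paused: false,
  records: 0,
  sensors: { sound: true, voc: true }, // per-sensor enable/disable
};

function onSensorSample(sensor) {
  // Drop samples while paused or when the user has disabled this sensor.
  if (privacy.paused || !privacy.sensors[sensor]) return;
  privacy.records += 1;
}

function renderCounter() {
  // A greyed-out count signals that no new data is being recorded.
  const style = privacy.paused ? 'grey' : 'active';
  console.log(`[${style}] records collected: ${privacy.records}`);
}

privacy.paused = true;
onSensorSample('sound'); // ignored while paused
renderCounter();         // -> "[grey] records collected: 0"
```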
To enhance collaboration and contribute to the Human-
Robot Interaction (HRI) community, the updated, sensor-
integrated hangtag interface will (with minimal hardware
reconfiguration) support two distinct software modes:
1) Therabot Control Interface: Here, the collar acts
as an extended control interface for Therabot. It
gathers and conveys environmental data to the robot
via an inter-integrated circuit (I2C). Additionally, it
displays aggregated environmental and interaction
statistics along with other relevant information to
the user.
2) Standalone Mode for Zoomorphic Platforms: In
this mode, the collar operates independently when
equipped with a battery module for power and an
SD card for data storage. It records timestamped
environmental data, which can later be correlated
with ground-truth data or analyzed on its own. The
collar can be attached to any zoomorphic platforms
to create a smart sensing system.
5.2.4. Touch Sensing Improvements. For touch sensing
improvements in Therabot™, we are focusing on two pri-
mary areas. The first is a capacitive fabric touch sensing grid
and the second a photo-reflective touch sensing approach.
Capacitive Fabric Touch Sensing Grid: Over the past
year we developed a soft, capacitive fabric-based touch-
sensing grid (seen in Figure 6), which enables Therabot to
detect thirty-three distinct “touch zones” along its back. We
are also exploring other locations on the robot for the place-
ment of capacitive touch sensing. In our implementation, the
self-capacitance of each electrode is measured and analyzed.
We have considered another approach, mutual capacitance.
This requires a ground plane and consumes more power,
but it provides greater precision. Research indicates that
processing the raw capacitance values of self-capacitive
electrodes arranged in a grid results in an accuracy similar
to that of a mutual capacitance configuration, as reported
in [38]. As a result, we are implementing a self-capacitance
system, a better fit for our use case. A real-time visualization
shows activation of the sensors, arranged in columns and
rows, and precisely where the robot was touched (based
on concurrent sensor activations), demonstrated in Figure 7.
This visualization also approximates proximity and move-
ment direction by altering the opacity of an active sensor to
correspond with its capacitance.
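To make the localization step concrete, the following sketch reports touches at the intersections of concurrently active row and column electrodes, with a combined magnitude that could drive the visualization's opacity cue. The threshold value and data shapes are assumptions for illustration.

```javascript
// Sketch of locating touches from concurrent row/column activations in a
// self-capacitance grid (threshold and normalization are assumptions).
const THRESHOLD = 0.15; // normalized capacitance change considered "active"

function locateTouches(rowValues, colValues) {
  // rowValues/colValues: normalized deltas per electrode,
  // e.g. derived from MPR121 channel reads
  const activeRows = rowValues
    .map((v, i) => ({ v, i }))
    .filter(({ v }) => v > THRESHOLD);
  const activeCols = colValues
    .map((v, i) => ({ v, i }))
    .filter(({ v }) => v > THRESHOLD);
  // A touch is reported where an active row and column intersect; the
  // averaged magnitude can serve as a proximity/opacity cue.
  const touches = [];
  for (const r of activeRows)
    for (const c of activeCols)
      touches.push({ row: r.i, col: c.i, strength: (r.v + c.v) / 2 });
  return touches;
}

console.log(locateTouches([0, 0.4, 0.1], [0.05, 0.02, 0.5]));
// -> [ { row: 1, col: 2, strength: 0.45 } ]
```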
To create the capacitive touch grid, we adapted the
electrode design presented by Wu et al. [39] by scaling their
electrode design to fit optimally on Therabot™. To create
our electrodes, we first used iron-on adhesive to secure the
conductive fabric to a thick, insulating nylon fabric; then
for cutting the final electrode shape, we used a Cricut vinyl
cutter. For stability during the cutting process, the nylon side
was placed against the Cricut adhesive sheet, with transfer
tape applied to the conductive fabric. Once cut, we arranged
the electrodes in a grid pattern and attached them to Lycra
using iron-on adhesive; each electrode was aligned in rows,
secured using the iron-on adhesive method. Following this,
the column electrodes were attached, using the rows as
alignment references. Each electrode was sewn in place on
the base using conductive thread to route the trace to specific
locations along the edge of the Lycra fabric where the thread
connected to a conductive knit ribbon cable. This cable
interfaced with a flexible protoboard adapter, transitioning
into a Japan Solderless Terminal (JST) connector compatible
with a plug on the MPR121 capacitive sensing breakout
board. Presently, Therabot™ is equipped with a hardware
mounting rack to support up to four MPR121s, enabling 48
touch sensing channels in total.
To improve reliability and durability of the touch grid,
we are exploring a promising new type of conductive and
insulated thread from SewIY [40]. This will enable the
routing of more safely-insulated touch zones throughout
the robot while maintaining the plush, padded outer feel-
ing. Combined with connectors such as those discussed by
Stanley et al. [41], our preliminary testing indicates that
this thread will greatly improve the robustness of our soft,
capacitive touch sensor system. Considering the insulated
property of this thread, the touch-sensing grid’s granularity
can be increased by implementing more complex patterns.
This also opens up the possibility of embroidery or appliqué
designs that allow thread to overlap itself while maintaining
the integrity of the signal.
Figure 6: Testing setup for observing raw capacitance values
Figure 7: Capacitive grid (foreground) and real-time touch
visualization (background).
Photo-Reflective Touch Sensing: Though Therabot’s
current capacitive touch sensing grid is made up of
different types of soft fabric, it can occasionally be felt when
interacting with the robot, even through the covering, due to
wire routing and additional fabric making up each electrode.
To mitigate this side-effect, we are researching a system
that uses photo-reflectivity: by affixing infrared (IR) LEDs
and photo sensors to the robot’s base and a photo-reflective
fabric to the inside of the covering, changes to the density of
the padding can be approximated by measuring reflectivity
across multiple photo sensors. This would allow the location
and magnitude of touches to be distinguished in a manner
similar to the interfaces examined in SofTouch [42] and
FuwaFuwa [43]. This would reduce the need to route
wiring or sew additional fabric sections near the outer
surface of the robot.
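A rough sketch of how such a photo-reflective system could estimate touch location follows, assuming each IR sensor has a known position and a calibrated baseline reading; the weighted-centroid approach is our illustrative simplification, not a method taken from the cited systems.

```javascript
// Sketch of estimating touch location/magnitude from photo-reflective
// readings: compressing the padding raises reflectivity near the touch
// point (sensor layout and scaling are assumptions).
function estimateTouch(sensors) {
  // sensors: [{ x, y, baseline, reading }] for each IR sensor on the base
  const deltas = sensors.map((s) => ({
    x: s.x,
    y: s.y,
    d: Math.max(s.reading - s.baseline, 0),
  }));
  const total = deltas.reduce((sum, s) => sum + s.d, 0);
  if (total === 0) return null; // no compression detected
  // Weighted centroid approximates the touch location; the total delta
  // approximates its magnitude.
  return {
    x: deltas.reduce((sum, s) => sum + s.x * s.d, 0) / total,
    y: deltas.reduce((sum, s) => sum + s.y * s.d, 0) / total,
    magnitude: total,
  };
}
```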
5.2.5. Ultra-Wideband (UWB) Radar Sensing. Extending
beyond the capacitive touch grid, we are also exploring an
additional sensing approach to better understand a user’s
physical interactions with the robot and to gain an
understanding of the spatial location of people and objects
within its immediate environment.
Most living creatures use multiple senses (e.g., vision,
hearing, etc.) to maintain awareness of their surroundings.
For example, a pet dog may orient toward people that it
sees entering its environment or may respond to hearing
its name. While the robot is able to sense touch events and
basic sound levels within the environment, it currently lacks
the abilities that are traditionally served by a vision system.
While sound localization can provide some awareness of the
surrounding scene, the robot’s current sensors do not
allow it to perceive its environment at the level of fidelity
that would be expected of an actual animal.
Although the addition of traditional cameras and a vision
processing pipeline would enable the robot to attend to its
environment in a more responsive manner, we have not
incorporated this into the robot as it affects the actual and
perceived privacy of those using the system. Our design
process is ongoing, and through discussions with participants
we have noted their desire for strong privacy assurances.
Many users have expressed appreciation for the system’s
lack of cameras.
To help balance user privacy with the robot’s need to
understand its surroundings, we plan to investigate the use
of emerging techniques that use UWB radar to facilitate
reliable person and activity detection. Early efforts using
this approach to monitor patients in assisted living facilities
while maintaining privacy have shown promising results
[44], [45]. However, incorporating the necessary hardware
into the robot’s form factor is not trivial. Overall, the
development of more privacy-sensitive sensing techniques is
a pursuit that could greatly widen the acceptance of robots
in many domains, especially companion robots.
5.2.6. Adaptive Heartbeat and Haptics. To improve the
perceived haptic feedback of its simulated heartbeat, we
are testing the use of small eccentric rotating mass (ERM)
vibration motors that are able to support a wide range of
haptic outputs [46], [47]. This approach may also allow better
simulation of general aliveness as well as creature specific
behaviors, like purring in a cat form factor.
In an effort to explore the therapeutic potential of re-
sponsiveness through haptic feedback, Therabot™’s heart-
beat functionality has been expanded to include adaptive
behaviors. When connected to a compatible physiological
sensor, such as EmotiBit [48], the robot is able to wirelessly
receive and respond to a user’s heart rate by matching its
simulated heart rate to the heart rate of the user. Using the
user’s heart rate data, Therabot™ is also able to recognize
and respond to physiological states that indicate distress.
The robot’s response to potential distress is to slow its sim-
ulated heart rate, with the intention of portraying a calmed
state and inducing a positive physiological response from the
user. The current thresholds for normal and distressed user
heart rates are based on the ranges of a typical adult heart
rate [49], but we aim to create a truly adaptive heart rate
that can accurately identify distress in users with elevated
heart rate ranges. Although currently the adaptive nature
of the heartbeat only responds to distress found in live
physiological data, alternative methods for sensing distress,
such as detecting affect through touch and gestures, are
feasible. In the future, we plan to implement additional
adaptive behaviors, beyond the slowed heart rate behavior,
that will contribute to the responsiveness of Therabot™ as
a whole. These could include movements, sounds, or other
calming behaviors exhibited by the robot.
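A minimal sketch of the adaptive heartbeat logic described in this subsection follows; the numeric thresholds are illustrative assumptions, not clinical settings or the robot's tuned values.

```javascript
// Sketch of the adaptive heartbeat: mirror the user's heart rate, and ease
// the simulated beat toward a calm rate when readings suggest distress
// (all numeric values are illustrative assumptions).
const RESTING_MAX_BPM = 100; // assumed typical adult resting upper bound
const CALMING_BPM = 60;      // slowed rate portraying a calmed state
const CALMING_STEP = 2;      // bpm decrease per update while calming

let simulatedBpm = 70;

function updateHeartbeat(userBpm) {
  if (userBpm > RESTING_MAX_BPM) {
    // Possible distress: slow the simulated heartbeat gradually.
    simulatedBpm = Math.max(simulatedBpm - CALMING_STEP, CALMING_BPM);
  } else {
    // Otherwise match the user's heart rate.
    simulatedBpm = userBpm;
  }
  return simulatedBpm;
}

// e.g., streaming rates from a physiological sensor such as EmotiBit:
[72, 75, 110, 112, 115].forEach((bpm) => console.log(updateHeartbeat(bpm)));
// -> 72, 75, 73, 71, 69 (easing toward the calming rate under distress)
```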
5.2.7. Audio Generation. In an effort to provide further
customization options for Therabot™, its audio production
system has been expanded to facilitate the generation of
more natural and user-aligned sounds. While the system
previously selected audio from a library of audio clips,
novel sounds are now generated by the robot based on a
number of specified sampling parameters. We are currently
conducting studies examining the ability to convey emotions
through augmentation of sound generation parameters. Our
initial implementation, focused on the robot’s dog form-
factor, makes use of pitch and spacing adjustment parame-
ters adapted from dog bark analyses performed by Pongrácz
and Molnár in 2005 [50].
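As a sketch of how emotion-dependent pitch and spacing parameters might drive bark generation, the following assumes a hypothetical parameter table; the specific values are illustrative and are not taken from the cited analyses or from Therabot's implementation.

```javascript
// Sketch of mapping an intended emotion to bark synthesis parameters
// (pitch, inter-bark spacing, count); values are illustrative assumptions.
const emotionParams = {
  playful:   { pitchHz: 800, spacingMs: 150, barks: 3 },
  alert:     { pitchHz: 600, spacingMs: 80,  barks: 5 },
  contented: { pitchHz: 450, spacingMs: 400, barks: 2 },
};

// Yields a schedule of bark events for the audio engine to render.
function* barkSchedule(emotion) {
  const p = emotionParams[emotion];
  for (let i = 0; i < p.barks; i++) {
    // Small random jitter keeps repeated sounds from feeling canned.
    const jitter = () => 1 + (Math.random() - 0.5) * 0.1;
    yield { atMs: i * p.spacingMs * jitter(), pitchHz: p.pitchHz * jitter() };
  }
}

for (const bark of barkSchedule('playful')) console.log(bark);
```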
Given Therabot™’s primary function as an in-home
therapeutic companion, the generation of novel and distinct
audio is not merely an enhancement but a necessity. To shift
away from procedural audio, a generative flow model like
the one presented by Andreu and Aylagas, shown to
successfully generate explosion sound effects, would serve as
an ideal paradigm for unique audio creation [51]. A custom
dataset of approximately one hundred labeled audio clips
would be sufficient to train a model capable of generating
complex and variable audio samples. While our initial efforts
have been concentrated on Therabot™’s canine form-factor,
a model trained on a variety of video game and fantasy
sounds would let the user describe their creature’s sound
profile; the model could then generate audio with the
appropriate cadence to communicate a desired emotion [52].
5.2.8. Software: Interfaces and Architecture. In conjunc-
tion with ongoing participatory design work, customizable
software interfaces and mobile apps are being developed
for both clinicians and their clients. In contrast to the ex-
isting system development and research-oriented interfaces,
these interfaces are tailored to the individual contexts of
their specific users. Alongside these visible developments,
the internal software architecture that regulates the robot’s
behavior continues to be refined and further developed. As
part of this refinement, geofencing is being considered for
inclusion in the interfaces and apps to awaken Therabot™
to greet the user when they arrive home.
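A minimal sketch of the geofencing idea follows, assuming the robot receives location updates from a companion app; the coordinates, radius, and greet() call are placeholders, not part of the current system.

```javascript
// Sketch of a geofence that wakes the robot when the user's phone reports
// a position near home (haversine distance; values are placeholders).
const HOME = { lat: 33.4552, lon: -88.7944 }; // placeholder coordinates
const RADIUS_M = 100;                          // placeholder geofence radius

function distanceMeters(a, b) {
  const R = 6371000;                     // mean Earth radius in meters
  const rad = (d) => (d * Math.PI) / 180;
  const dLat = rad(b.lat - a.lat);
  const dLon = rad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

function onLocationUpdate(position, robot) {
  if (distanceMeters(position, HOME) <= RADIUS_M && !robot.awake) {
    robot.awake = true;
    robot.greet(); // hypothetical: e.g., tail wag and greeting sound
  }
}
```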
5.3. Machine Learning and Context Understanding
Our collaborators in Dr. Bennett’s lab at Hanyang Uni-
versity are currently fabricating a Therabot™ platform that
will be used to test the deployment of deep learning models
onboard the robot. Sensor models developed from data col-
lected with the modular sensor collar will be used to allow
the robot to understand its context. Contexts will be linked
with behaviors developed in participatory design workshops
and integrated into the robot’s control architecture to assist
in shaping its autonomous behaviors.
6. Future Development
As we envision Therabot™’s development trajectory
over the next five years, four crucial areas of development
emerge: clinically focused design, privacy-oriented sensing,
AI within participatory design, and broadening interfaces.
In this section we describe the importance of each area as
they relate to Therabot’s overall design goals.
6.1. Clinically Focused Design and Validation
The integration of social robots into mental healthcare
presents a vast landscape of questions regarding effective-
ness, ethics concerns, and societal implications [11], [12],
[13], [53], [54], [55]. While our expertise is in human-robot
interaction, clinicians and clinical practice guide our efforts.
As a useful therapeutic robot cannot exist without strong
collaborations between these fields, we believe continued
partnerships between all stakeholders are essential [13]. We
plan to continue close collaborations with clinical profes-
sionals and their clients indefinitely.
While work on researching, building, and evaluating social
robots for therapeutic uses continues to advance with
promising results, most efforts to date stop short of the rigorous
randomized controlled trials (RCTs) that are used to eval-
uate clinical treatments (several recent exceptions exist [3],
[7]). Over the next five years of development we aspire to
evaluate Therabot via RCTs.
Separate from the direct social interactions that occur
as part of therapies using social robots, the ability of these
devices and other popular consumer electronics (e.g., smart
watches, fitness bands, smart homes) to collect extensive
data related to health is a promising area for further inquiry.
We are early in our investigations into the uses of sensor data
collected with Therabot [25], but plan to continue working
with clinicians and clients to understand how this data can
be leveraged in an ethical and beneficial manner.
6.2. Optimizing Privacy-Oriented Sensing
As privacy and user perceptions of privacy are among
the most critical design elements of a social robot used in
mental healthcare, we foresee substantial value in continued
efforts to further develop this area.
Moving forward, we plan both to develop strong technical
assurances of user privacy and to investigate interfaces
and design conventions that align user perceptions with the
true state of the system. Within the realm of mental health
care the therapeutic alliance between clinicians and clients
is pivotal to enabling meaningful change. We will continue
to work with clinicians directly to design a system that
provides information that enables clinicians and clients to
decide how the robot should be used in their therapy.
Our engineering efforts will continue to emphasize the requirement of local, on-device data processing and strong physical and digital security. We will continue to refine our sensor systems to collect only the data necessary by default. For example, we will pursue technologies like ultra-wideband (UWB) radar for person detection rather than RGB cameras as a means of protecting privacy [56]. Contextual understanding will be provided by sensor fusion techniques applied to several low-fidelity data streams.
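For illustration only, a rule-based fusion of such low-fidelity streams might resemble the sketch below; the stream names and thresholds are placeholders rather than the deployed design:

# Illustrative fusion of camera-free, low-fidelity streams into a
# coarse context estimate. Thresholds are placeholders.
def fuse_context(uwb_presence: bool, touch_active: bool,
                 sound_rms: float) -> str:
    if touch_active:
        return "user_in_contact"
    if uwb_presence and sound_rms > 0.2:
        return "user_nearby_active"
    if uwb_presence:
        return "user_nearby_quiet"
    return "room_empty"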
Furthermore, we will work to develop mechanisms that
allow the user to be assured each sensor is physically
disconnected when desired. Alongside these features, we
will endeavor to provide interfaces that allow a user to easily
review and delete any data the system has stored.
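A minimal sketch of such a user-auditable, on-device store, assuming local SQLite storage and an illustrative schema, could look like this:

import sqlite3

class LocalSensorStore:
    # All records stay on-device; users can enumerate and delete them.
    def __init__(self, path="therabot_local.db"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS records "
                        "(id INTEGER PRIMARY KEY, ts REAL, "
                        "kind TEXT, payload BLOB)")

    def list_records(self):
        # Let the user review what has been stored.
        return self.db.execute("SELECT id, ts, kind FROM records").fetchall()

    def delete_record(self, record_id: int):
        # Honor a deletion request immediately.
        self.db.execute("DELETE FROM records WHERE id = ?", (record_id,))
        self.db.commit()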
6.3. Incorporating AI into the Participatory Design
Process
Figure 8: "{type of image} of a cute {aspect of subject} stuffed beagle puppy {activity}" — variations generated as photo, fluffy photo, and very fluffy photo "In a living room" (realistic, depicted in a home environment), and as photo, polymer clay, digital art, and cartoon "On an adventure" (stylized, showcasing personality).

Artificial intelligence presents a revolutionary opportunity for participatory design. With the capabilities of generative AI in the PD context, researchers can delve deeper with design study participants by quickly creating a high-fidelity image of the participant's described support
companion. After generating an initial design, the researcher
and participants can use the generated artifacts to refer to
specific features and elicit details that might have previously
required multiple PD cycles. In this paradigm, the generative artificial intelligence process may also contribute important novelty, as it may produce design elements that neither the participant nor the researcher had considered.
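As a concrete illustration of this rapid-iteration loop, the prompt template from Figure 8 can be expanded into a batch of candidate prompts during a session; the generate() call stands in for whichever image-generation API a study adopts:

from itertools import product

TEMPLATE = "a {kind} of a cute {aspect} stuffed beagle puppy {activity}"
KINDS = ["photo", "polymer clay", "digital art", "cartoon"]
ASPECTS = ["fluffy", "very fluffy"]
ACTIVITIES = ["in a living room", "on an adventure"]

def pd_prompts():
    # Enumerate every combination for quick side-by-side comparison.
    for kind, aspect, activity in product(KINDS, ASPECTS, ACTIVITIES):
        yield TEMPLATE.format(kind=kind, aspect=aspect, activity=activity)

# for prompt in pd_prompts():
#     image = generate(prompt)  # placeholder for the chosen image API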
In the realm of participatory design for social robots,
there are vast design possibilities with a platform like
Therabot™, which uses an external covering that can eas-
ily be changed or redesigned independent of the robot’s
functionality. Another consideration for the PD process is
whether participants would prefer to create their robot’s
character in a 3D format, resulting in a more realistic
representation true to the robot design, or opt for two-
dimensional, stylized versions. Furthermore, designs may
begin as a more realistic image with adjustments made to
specific features, or they may begin in a stylized cartoon-like
image, capturing the personality and key features demon-
strated in the digital art. As illustrated in Figure 8, generated by DALL-E 2 (https://openai.com/dall-e-2), realistic images offer a glimpse of the potential robot inhabiting a more true-to-life form factor and placement. A wide variety of customizations can be made to facilitate discussion and discoverability; the images "In a living room" demonstrate the addition of the word "very" in the prompt: "a photo of a cute [very] fluffy stuffed beagle puppy in a living room." The images depicting the robot "On an adventure" explore stylized and
character-focused versions of the prompt. While the two-
dimensional representation may not translate perfectly to the
stuffed animal dimensions, the real-life Therabot covering
would still capture the main design features requested by
the participant. As a result of advances in generative AI,
creating a participant’s ideal socially therapeutic assistive
robot starting from a design concept and then creating
a custom covering has become a more viable possibility.
Beyond visual design, Therabot could adapt to any animal
type by generating audio in real-time using a text-to-audio
latent diffusion model such as AudioLDM [57] to mimic
the participant's chosen animal. Combining a customized appearance and sound profile with Therabot's contextual awareness to adapt behaviors would enable participants to experience their design choices more rapidly. This pipeline may also be extended by using the generated images to fine-tune a character-focused model with a method like DreamBooth [58], enabling easy asset generation for a virtual version of the robot. This
would allow participants to engage with a virtual version
of their customized companion through an app, embarking
on adventures, achieving self- or clinician-guided goals, or
playing games.
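As one example of the audio step, the Hugging Face diffusers library exposes an AudioLDM pipeline roughly as sketched below; the checkpoint name, arguments, and output sample rate may differ across library versions:

import torch
from diffusers import AudioLDMPipeline

# Load an AudioLDM checkpoint (availability varies by version).
pipe = AudioLDMPipeline.from_pretrained(
    "cvssp/audioldm-s-full-v2", torch_dtype=torch.float16).to("cuda")

# Generate a short clip matching the participant's chosen animal;
# .audios[0] is a waveform array (nominally 16 kHz).
audio = pipe("a small cat meowing softly",
             num_inference_steps=10,
             audio_length_in_s=2.5).audios[0]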
6.4. Broadening Interfaces
Extended Reality (XR) modalities have the potential to
be powerful tools for improving mental health conditions. A
review by Ma et al. found that more immersive environments
tended to be more effective for mindfulness training [59].
Engaging more senses, such as touch, has the potential to
augment the immersive environment in a beneficial way
[60]. A study conducted by Lin et al. explored immersing
participants in environments with virtual pets, followed by a
discussion of the experience [61]. Their reporting on virtual
pets provides important insights for the Therabot™ project.
Participants hoped to build relationships with the virtual pet
and highlighted the importance of the virtual pet initiating
interactions. They felt that for a meaningful connection, the
virtual pet should exhibit an awareness of its environment
and appropriately respond to it. For instance, just as a real
dog responds to its owner’s emotional prosody, a virtual
pet should do the same. They also suggested that virtual pets learning new tricks and showcasing long-term understanding would facilitate the relationship. A significant point of
interest was the desire of participants to physically touch
the virtual pet.
Therabot, being an embodied agent, meets many of
these needs, and its integration with an XR paradigm has
the potential to create a meaningful immersive experience.
Using its integrated sensors, the robot could supply virtual elements with useful signals, such as detecting when a user is nearby to cue an excited reaction that invites engagement. With data from UWB radar and capacitive sensors, the robot can sense a user approaching and touching it, perhaps scaling down the broad movements it had used to attract attention. Diffusion models can be
leveraged to generate custom virtual assets, allowing users to
experience an interactive extended reality with their chosen
Therabot. By utilizing local network connections, Bluetooth,
or alternative technologies such as audio cues, actions in the
virtual space can prompt responses from the physical robot.
This offers users a tangible Therabot that responds to virtual
interactions.
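A minimal sketch of this virtual-to-physical bridge could use plain UDP datagrams on the local network; the addresses, port, event names, and run_behavior() call are illustrative assumptions:

import json
import socket

ROBOT_ADDR = ("192.168.1.42", 9999)  # example LAN address of the robot

def send_virtual_event(event: str):
    # Called by the XR app, e.g., when the virtual pet is petted.
    msg = json.dumps({"event": event}).encode()
    socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ROBOT_ADDR)

def robot_listen(robot):
    # Runs on the robot: map virtual events to physical responses.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9999))
    while True:
        data, _ = sock.recvfrom(1024)
        if json.loads(data).get("event") == "virtual_pet_petted":
            robot.run_behavior("wag_tail")  # hypothetical API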
Therabot™ has been shaped by an evolutionary design and development process ongoing since its inception in 2012. With so many possible future directions, we expect this evolutionary process to continue for the foreseeable future. Our goal is to make Therabot more widely available to developers and, eventually, for clinician and public use.
Acknowledgment
This research was sponsored by the National Science Foundation under Grants IIS-1249488 and IIS-1900883. In addition to the authors of this article, the full research team includes: Selma Šabanović, Casey C. Bennett, Jennifer A. Piatt, Megan Stubbs-Richardson, Čedomir Stanojević, and Sawyer Collins.
References
[1] D. Feil-Seifer and M. J. Mataric, “Defining socially assistive
robotics,” in 9th International Conference on Rehabilitation Robotics,
2005. ICORR 2005. IEEE, 2005, pp. 465–468.
[2] S. M. Rabbitt, A. E. Kazdin, and B. Scassellati, “Integrating so-
cially assistive robotics into mental healthcare interventions: Appli-
cations and recommendations for expanded use,” Clinical Psychology
Review, vol. 35, pp. 35–46, 2015.
[3] N. L. Robinson, T. V. Cottier, and D. J. Kavanagh, “Psychosocial Health Interventions by Social Robots: Systematic Review of Randomized Controlled Trials,” Journal of Medical Internet Research, vol. 21, no. 5, p. e13203, 2019.
[4] A. A. Scoglio, E. D. Reilly, J. A. Gorman, and C. E. Drebing, “Use of Social Robots in Mental Health and Well-Being Research: Systematic Review,” Journal of Medical Internet Research, vol. 21, no. 7, p. e13322, 2019.
[5] I. Guemghar, P. P. d. O. Padilha, A. Abdel-Baki, D. Jutras-Aswad, J. Paquette, and M.-P. Pomey, “Social Robot Interventions in Mental Health Care and Their Outcomes, Barriers, and Facilitators: Scoping Review,” JMIR Mental Health, vol. 9, no. 4, p. e36094, 2022.
[6] S. Riches, L. Azevedo, A. Vora, I. Kaleva, L. Taylor, P. Guan,
P. Jeyarajaguru, H. McIntosh, C. Petrou, S. Pisani, and N. Hammond,
“Therapeutic engagement in robot-assisted psychological interven-
tions: A systematic review,” Clinical Psychology & Psychotherapy,
vol. 29, no. 3, pp. 857–873, 2022.
[7] N. L. Robinson and D. J. Kavanagh, “A social robot to deliver
a psychotherapeutic treatment: Qualitative responses by participants
in a randomized controlled trial and future design recommenda-
tions,” International Journal of Human-Computer Studies, vol. 155,
p. 102700, 2021.
[8] N. Robinson, J. Connolly, G. Suddrey, and D. J. Kavanagh, “A Brief Wellbeing Training Session Delivered by a Humanoid Social Robot: A Pilot Randomized Controlled Trial,” arXiv, 2023.
[9] M. J. Trost, G. Chrysilla, J. I. Gold, and M. Matarić, “Socially-Assistive Robots Using Empathy to Reduce Pain and Distress during Peripheral IV Placement in Children,” Pain Research and Management, vol. 2020, p. 7935215, 2020.
[10] C. Lutz, M. Schöttler, and C. P. Hoffmann, “The privacy implications of social robots: Scoping review and expert interviews,” Mobile Media & Communication, vol. 7, no. 3, pp. 412–434, 2019.
[11] N. Koutsouleris, T. U. Hauser, V. Skvortsova, and M. D. Choudhury,
“From promise to practice: towards the realisation of AI-informed
mental health care,” The Lancet Digital Health, vol. 4, no. 11, pp.
e829–e840, 2022.
[12] A. Fiske, P. Henningsen, and A. Buyx, “Your Robot Therapist Will
See You Now: Ethical Implications of Embodied Artificial Intel-
ligence in Psychiatry, Psychology, and Psychotherapy,” Journal of
Medical Internet Research, vol. 21, no. 5, p. e13216, 2019.
[13] R. Khanna, N. Robinson, M. O’Donnell, H. Eyre, and E. Smith,
“Affective Computing in Psychotherapy,” Advances in Psychiatry and
Behavioral Health, vol. 2, no. 1, pp. 95–105, 2022.
[14] J. Simonsen and T. Robertson, Routledge international handbook of
participatory design. Routledge, 2012.
[15] H. R. Lee, S. Šabanović, W.-L. Chang, D. Hakken, S. Nagata, J. Piatt, and C. Bennett, “Steps toward participatory design of social robots: Mutual learning with older adults with depression,” in 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2017, pp. 244–253.
[16] C. L. Bethel, Z. Henkel, S. Darrow, and K. Baugus, “Therabot - an adaptive therapeutic support robot,” in 2018 World Symposium on Digital Intelligence for Systems and Machines (DISA), 2018, pp. 23–30.
[17] D. Scherer, “Movie magic makes better social robots,” Journal of Human-Robot Interaction, vol. 3, no. 1, pp. 123–141, 2014.
[18] C. Collins, D. Duckworth, Z. Henkel, S. Wuisan, and C. L.
Bethel, “Therabot™: A robot therapy support system in action,”
in Proceedings of the Tenth Annual ACM/IEEE International
Conference on Human-Robot Interaction Extended Abstracts, ser.
HRI’15 Extended Abstracts. New York, NY, USA: Association
for Computing Machinery, 2015, p. 307. [Online]. Available:
https://doi.org/10.1145/2701973.2702695
[19] S. Darrow, A. Kimbrell, N. Lokhande, N. Dinep-Schneider, T. Ciufo,
B. Odom, Z. Henkel, and C. L. Bethel, “Therabot™: A robotic
support companion,” in Companion of the 2018 ACM/IEEE
International Conference on Human-Robot Interaction, ser. HRI ’18.
New York, NY, USA: Association for Computing Machinery, 2018,
p. 37. [Online]. Available: https://doi.org/10.1145/3173386.3177842
[20] K. Henkel, Z. Henkel, A. Aldridge, and C. L. Bethel, “The evolving design of a socially therapeutic robotic dog,” in 2023 World Symposium on Digital Intelligence for Systems and Machines (DISA), 2023, to appear.
[21] S. Collins, D. Hicks, Z. Henkel, K. Baugus Henkel, J. A. Piatt, C. L. Bethel, and S. Šabanović, “What skin is your robot in?” in Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, ser. HRI ’23. New York, NY, USA: Association for Computing Machinery, 2023, pp. 511–515. [Online]. Available: https://doi.org/10.1145/3568294.3580137
[22] C. C. Bennett, S. Šabanović, J. A. Piatt, S. Nagata, L. Eldridge, and N. Randall, “A robot a day keeps the blues away,” in 2017 IEEE International Conference on Healthcare Informatics (ICHI), 2017, pp. 536–540.
[23] C. C. Bennett, Č. Stanojević, S. Šabanović, J. A. Piatt, and S. Kim, “When no one is watching: Ecological momentary assessment to understand situated social robot use in healthcare,” in Proceedings of the 9th International Conference on Human-Agent Interaction, ser. HAI ’21. New York, NY, USA: Association for Computing Machinery, 2021, pp. 245–251. [Online]. Available: https://doi.org/10.1145/3472307.3484670
[24] N. Randall, C. C. Bennett, S. Šabanović, S. Nagata, L. Eldridge, S. Collins, and J. A. Piatt, “More than just friends: In-home use and design recommendations for sensing socially assistive robots (SARs) by older adults with depression,” Paladyn, Journal of Behavioral Robotics, vol. 10, no. 1, pp. 237–255, 2019.
[25] C. C. Bennett, Č. Stanojević, S. Kim, J. Lee, J. Yu, J. Oh, S. Šabanović, and J. A. Piatt, “Comparison of in-home robotic companion pet use in South Korea and the United States: A case study,” in Proceedings of the 2022 9th IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics. IEEE Press, 2022, pp. 1–7. [Online]. Available: https://doi.org/10.1109/BioRob52689.2022.9925468
[26] J.-Y. Oh and C. C. Bennett, “The answer lies in user experience: Qualitative comparison of US and South Korean perceptions of in-home robotic pet interactions,” in Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, ser. HRI ’23. New York, NY, USA: Association for Computing Machinery, 2023, pp. 306–310. [Online]. Available: https://doi.org/10.1145/3568294.3580094
[27] S. Murphy, “Methacrylates use in nail enhancements,” Methacrylate Producers Association, Inc., Tech. Rep., 2020.
[28] D. S. Shah, M. C. Yuen, L. G. Tilton, E. J. Yang, and R. Kramer-Bottiglio, “Morphing robots using robotic skins that sculpt clay,” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 2204–2211, 2019.
[29] M. Ghassemi, L. Oakden-Rayner, and A. L. Beam, “The false hope
of current approaches to explainable artificial intelligence in health
care,” The Lancet Digital Health, vol. 3, no. 11, pp. e745–e750, 2021.
[30] K. Cresswell, S. Cunningham-Burley, and A. Sheikh, “Health Care
Robotics: Qualitative Exploration of Key Challenges and Future
Directions,” Journal of Medical Internet Research, vol. 20, no. 7,
p. e10410, 2018.
[31] B. Spring, “Evidence-based practice in clinical psychology: What it is, why it matters; what you need to know,” Journal of Clinical Psychology, vol. 63, no. 7, pp. 611–631, 2007.
[32] D. Zhang, J. Shen, S. Li, K. Gao, and R. Gu, “I, robot: depression
plays different roles in human–human and human–robot interactions,”
Translational Psychiatry, vol. 11, no. 1, p. 438, 2021.
[33] S. Turkle, Alone Together: Why We Expect More from Technology
and Less from Each Other. SpringerLink, Jan 2011.
[34] A. Tapus, C. Tapus, and M. J. Mataric, “Hands-off therapist robot behavior adaptation to user personality for post-stroke rehabilitation therapy,” in Proceedings 2007 IEEE International Conference on Robotics and Automation, 2007, pp. 1547–1553.
[35] M. M. A. d. Graaf and S. B. Allouch, “The Influence of Prior
Expectations of a Robot’s Lifelikeness on Users’ Intentions to Treat a
Zoomorphic Robot as a Companion,” International Journal of Social
Robotics, vol. 9, no. 1, pp. 17–32, 2017.
[36] I. Thielmann and B. E. Hilbig, “Trust: An integrative review from a person–situation perspective,” Review of General Psychology, vol. 19, no. 3, pp. 249–277, 2015. [Online]. Available: https://doi.org/10.1037/gpr0000046
[37] J. Kraus, F. Babel, P. Hock, K. Hauber, and M. Baumann, “The trustworthy and acceptable HRI checklist (TA-HRI): questions and design recommendations to support a trustworthy and acceptable design of human-robot interaction,” Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO), vol. 53, no. 3, pp. 307–328, Sep 2022. [Online]. Available: https://doi.org/10.1007/s11612-022-00643-8
[38] A. Lüttgen, S. K. Sharma, D. Zhou, D. Leigh, S. Sanders, and C. D. Sarris, “A fast simulation methodology for touch sensor panels: Formulation and experimental validation,” IEEE Sensors Journal, vol. 19, no. 3, pp. 996–1007, 2019.
[39] T.-Y. Wu, L. Tan, Y. Zhang, T. Seyed, and X.-D. Yang, “Capacitivo: Contact-based object recognition on interactive fabrics using capacitive sensing,” in Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, ser. UIST ’20. New York, NY, USA: Association for Computing Machinery, 2020, pp. 649–661. [Online]. Available: https://doi.org/10.1145/3379337.3415829
[40] “Sewiy.” [Online]. Available: https://sewiy.com/
[41] J. Stanley, J. A. Hunt, P. Kunovski, and Y. Wei, “Novel interposer for modular electronic textiles: Enabling detachable connections between flexible electronics and conductive textiles,” IEEE Sensors Letters, vol. 6, no. 6, pp. 1–4, 2022.
[42] N. Furui, K. Suzuki, Y. Sugiura, and M. Sugimoto, “SofTouch: Turning soft objects into touch interfaces using detachable photo sensor modules,” in Entertainment Computing – ICEC 2017: 16th IFIP TC 14 International Conference, Tsukuba City, Japan, September 18-21, 2017, Proceedings. Berlin, Heidelberg: Springer-Verlag, 2017, pp. 47–58. [Online]. Available: https://doi.org/10.1007/978-3-319-66715-7_6
[43] Y. Sugiura, T. Igarashi, and M. Inami, “Cuddly user interfaces,” Computer, vol. 49, no. 7, pp. 14–19, July 2016.
[44] G. Diraco, A. Leone, and P. Siciliano, “A radar-based smart sensor
for unobtrusive elderly monitoring in ambient assisted living appli-
cations,” Biosensors, vol. 7, no. 4, p. 55, 2017.
[45] F. M. Noori, M. Z. Uddin, and J. Torresen, “Ultra-Wideband Radar-Based Activity Recognition Using Deep Learning,” IEEE Access, vol. 9, pp. 138132–138143, 2021.
[46] C. DiSalvo, F. Gemperle, J. Forlizzi, and E. Montgomery, “The hug:
an exploration of robotic form for intimate communication,” in The
12th IEEE International Workshop on Robot and Human Interactive
Communication, 2003. Proceedings. ROMAN 2003. IEEE, 2003,
pp. 403–408.
[47] K. Isbister, P. Cottrell, A. Cecchet, E. Dagan, N. Theofanopoulou, F. A. Bertran, A. J. Horowitz, N. Mead, J. B. Schwartz, and P. Slovak, “Design (not) lost in translation: A case study of an intimate-space socially assistive “robot” for emotion regulation,” ACM Transactions on Computer-Human Interaction (TOCHI), vol. 29, no. 4, pp. 1–36, 2022.
[48] S. M. Montgomery, N. Nair, P. Chen, and S. Dikker, “Introducing
emotibit, an open-source multi-modal sensor for measuring research-
grade physiological signals,” Science Talks, vol. 6, p. 100181,
2023. [Online]. Available: https://www.sciencedirect.com/science/
article/pii/S2772569323000567
[49] A. Guyton and J. Hall, Textbook of Medical Physiology. Elsevier
Saunders, 2006.
[50] P. Pongrácz, C. Molnár, Á. Miklósi, and V. Csányi, “Human listeners are able to classify dog (Canis familiaris) barks recorded in different situations,” Journal of Comparative Psychology, vol. 119, no. 2, p. 136, 2005.
[51] S. Andreu and M. Villanueva Aylagas, “Neural synthesis of sound effects using flow-based deep generative models,” Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, vol. 18, no. 1, pp. 2–9, Oct. 2022. [Online]. Available: https://ojs.aaai.org/index.php/AIIDE/article/view/21941
[52] E. A. Smit, F. A. Dobrowohl, N. K. Schaal, A. J. Milne, and S. A. Herff, “Perceived emotions of harmonic cadences,” Music & Science, vol. 3, p. 2059204320938635, 2020. [Online]. Available: https://doi.org/10.1177/2059204320938635
[53] L. D. Riek, “Robotics Technology in Mental Health Care,” arXiv, 2015.
[54] D. D. Luxton, Artificial Intelligence in Behavioral and Mental Health Care. Academic Press, 2016, ch. 11: Ethical Issues and Artificial Intelligence Technologies in Behavioral and Mental Health Care, pp. 1–26.
[55] S. Al-Saif, “Animal Healthcare Robots: The Case for Privacy Regu-
lation,” Washington Journal of Law, Technology & Arts, 2019.
[56] S. P. Rana, M. Dey, M. Ghavami, and S. Dudley, “Signature Inspired Home Environments Monitoring System Using IR-UWB Technology,” Sensors, vol. 19, no. 2, p. 385, 2019.
[57] H. Liu, Z. Chen, Y. Yuan, X. Mei, X. Liu, D. Mandic, W. Wang, and M. D. Plumbley, “AudioLDM: Text-to-audio generation with latent diffusion models,” 2023.
[58] N. Ruiz, Y. Li, V. Jampani, Y. Pritch, M. Rubinstein, and K. Aberman, “DreamBooth: Fine tuning text-to-image diffusion models for subject-driven generation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2023, pp. 22500–22510.
[59] J. Ma, D. Zhao, N. Xu, and J. Yang, “The effectiveness of immersive
virtual reality (vr) based mindfulness training on improvement
mental-health in adults: A narrative systematic review,” EXPLORE,
vol. 19, no. 3, pp. 310–318, 2023. [Online]. Available: https:
//www.sciencedirect.com/science/article/pii/S1550830722001227
[60] C. Jewitt, D. Chubinidze, S. Price, N. Yiannoutsou, and N. Barker,
“Making sense of digitally remediated touch in virtual reality
experiences,” Discourse, Context & Media, vol. 41, p. 100483,
2021. [Online]. Available: https://www.sciencedirect.com/science/
article/pii/S2211695821000209
[61] C. Lin, T. Faas, and E. Brady, “Exploring affection-oriented virtual pet game design strategies in VR: attachment, motivations and expectations of users of pet games,” in 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII), 2017, pp. 362–369.