Title: Measures of incentives and confidence in using a social robot
Authors: N. L. Robinson,1* J. Connolly,2 G. M. Johnson,3 Y. Kim,3† L. Hides,4 D. J. Kavanagh2
*Correspondence to: Nicole Robinson, n7.robinson@qut.edu.au
†These authors contributed equally to this work.
Affiliations
1Australian Centre for Robotic Vision, Centre for Children’s Health Research, Institute of
Health & Biomedical Innovation and School of Psychology & Counselling, Queensland
University of Technology.
2Centre for Children’s Health Research, Institute of Health & Biomedical Innovation and
School of Psychology & Counselling, Queensland University of Technology.
3Somerville House.
4School of Psychology, The University of Queensland.
Abstract
New measures of incentives and confidence in using a social robot had a stable subscale
structure, predicted intentions to use the robot and were sensitive to changes in robot
behaviors.
Introduction
Rapid recent advances in the applications of social robots run the risk of exceeding
potential users’ willingness to engage with them. This highlights a need to assess factors
that influence their acceptance and use (1). Current measures of users’ perceptions
typically assess a single psychological dimension (2), confound multiple constructs (3), or
assess global attitudes towards robots (4), and do not directly measure the level of social
and emotional connection with social robots. To advance research in this area, an
assessment measure is required that can quantify responses to changes in social robot
characteristics or behaviors and predict willingness to use them.
These new measures should be based on well-established psychological theory about the
prediction of behavior. Two predictors have proved particularly powerful: self-efficacy
(confidence in meeting current performance demands), and incentives (expectations about
outcomes from an action) (5, 6). Currently, there is no published measure of self-efficacy
or incentives about interpersonal interactions with a social robot. We developed and tested
measures to fill this gap, examining their sensitivity to contrasting robot behaviors, and
their prediction of intentions to use a social robot.
Initial development of the measures involved administering them to groups of female high
school students who had observed an interaction with a NAO robot. Exploratory factor
analyses indicated the Robot Incentives Scale (RIS) had 3 subscales: ‘Emotional’ (liking
or enjoyment of social robots), ‘Utility’ (whether they were useful or solved problems), and
‘Social/Relational’ (their potential for social connection). The Robot Self-Efficacy Scale
(RSES) had two subscales: ‘Operation’ (confidence in operating a social robot) and
‘Application’ (confidence in completing a task or goal using the robot). Intentions to use a
social robot formed a single scale. The internal structures of the scales were confirmed in
an online adult sample (Table 1), although the RSES required one item to be removed and
errors on two pairs of similar items to be correlated before acceptable fit was obtained.
The internal consistencies of the subscales for all three scales were very high.
Using a student sample, we tested whether the new scales were sensitive to more mechanical
versus more humanoid behaviors by the NAO robot. The more humanoid robot received
higher scores on all three measures, but only if the students saw both types of behavior.
The RIS and RSES subscales jointly predicted 78% of the variance in intentions to use the
social robot in the student group. On their own, RIS subscales predicted 77% of the
variance, while RSES subscales predicted only 40%. The adult group gave similar results,
with RIS and RSES predicting 83% of the variance in intentions when used together (82%
and 54% separately). In both groups, all RIS and RSES subscales except RSES
‘Operation’ contributed unique predictive variance.
In summary, the three scales provided a coherent and stable factor structure across the
studies despite differences in the nature of the samples and observed interaction. However,
the RSES required omission of one item to achieve acceptable fit, suggesting a need for
further replication. The high internal consistencies of the RSES and RIS subscales suggest
potential for shortening the measures without threatening reliability.
The measures were sensitive to comparisons of different social robot behaviors when
individuals were able to contrast the behaviors. Since most participants had little prior
exposure to robots, observation of the interaction was presumably dominated by its
novelty. Future groups that have extensive experience of robotics or human-robot
interaction may not need contrasting interactions to obtain differential ratings, since
comparisons with previous experience would be available.
All RIS and RSES subscales made unique contributions to a concurrent prediction of
intentions to use a social robot, except for RSES Operation. However, almost all of the
predictive power was from the RIS subscales, suggesting that assessments of incentives
may be sufficient to predict intentions to use a robot for a social interaction. The limited
prediction from self-efficacy was surprising, since it is typically a stronger predictor of
performance attainments than incentives (6). However, the focus of the intentions measure
was on a social interaction rather than on controlling or programming the robot, making
self-efficacy less relevant than incentives. If participants were required to undertake a
more demanding role, a different pattern of results may be obtained. As yet, we have not
tested the ability of the measures to predict actual use of the robot. Generalization of
results to other social robots, characteristics and contexts also awaits determination. These
studies provide strong initial support for the new measures, and the RIS may have wide
potential application in assessing the acceptability of social robots.
Table 1: The Robot Incentives Scale, Robot Self-Efficacy Scale, and Robot Usage Intention Items

Robot Incentives Scale (RIS)
Emotion: I like this robot; I would enjoy interacting with this robot; I would be happy to talk to this robot; I would like to have this robot around me; This robot is entertaining.
Utility: This robot would be able to help solve problems; This robot would be useful for me to have in my life; This robot would be able to provide me with the things that I want from a robot; This robot would provide reliable assistance to me.
Social/Relational: I would open up easily to this robot; I would talk to this robot about anything; I would talk to this robot about things I could not talk about to my family or friends.

Robot Self-Efficacy Scale (RSES)
Stem: How confident are you, that you can do the following with this robot:
Operation: Use this robot; Control this robot; Understand what this robot is saying; Learn what to do with this robot; Work out what to do if this robot isn’t doing what I want it to do; Communicate clearly with this robot.
Application: Work with this robot to solve a problem; Work out what to do by talking to this robot; Get this robot to do something for me; Get this robot to help me with something; Make sure this robot does the task I set it.

Robot Usage Intention (RUI)
Stem: If this robot were readily available…
Items: I would interact with this robot often; I would ask this robot for assistance; I would spend time with this robot; I would ask this robot to help me with a task on a regular basis; I would interact with this robot for a long time.

All items are rated on a 0-10 scale, with only the endpoints labelled. RSES items are rated from ‘Sure I can’t’ to ‘Sure I can’; RIS and RUI items from ‘Not at all’ to ‘Definitely’.
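As a practical illustration of the scoring implied by Table 1, the sketch below sums item ratings into subscale totals, giving the score ranges used in the supplementary tables (RSES 0-110, RIS 0-120, RUI 0-50). The data file and column names are hypothetical placeholders, not taken from the published materials.

```python
import pandas as pd

# Hypothetical wide-format responses: one row per participant, items rated 0-10.
# Column names (ris_e1..ris_e5, etc.) are illustrative only.
df = pd.read_csv("responses.csv")

subscales = {
    "RIS_Emotion":      [f"ris_e{i}" for i in range(1, 6)],   # 5 items, range 0-50
    "RIS_Utility":      [f"ris_u{i}" for i in range(1, 5)],   # 4 items, range 0-40
    "RIS_Social":       [f"ris_s{i}" for i in range(1, 4)],   # 3 items, range 0-30
    "RSES_Operation":   [f"rses_o{i}" for i in range(1, 7)],  # 6 items, range 0-60
    "RSES_Application": [f"rses_a{i}" for i in range(1, 6)],  # 5 items, range 0-50
    "RUI":              [f"rui{i}" for i in range(1, 6)],     # 5 items, range 0-50
}

for name, items in subscales.items():
    df[name] = df[items].sum(axis=1)  # subscale total = sum of its item ratings
```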
References and Notes
1. K. Dautenhahn, Socially intelligent robots: dimensions of human–robot interaction.
Philos. Trans. Royal Soc. B 362, 679-704 (2007).
2. R. E. Yagoda, D. J. Gillan, You want me to trust a ROBOT? The development of a
human–robot interaction trust scale. Int J Soc Robot 4, 235-248 (2012).
3. T. Nomura, T. Kanda, Rapport–expectation with a robot scale. Int J Soc Robot 8, 21-30
(2016).
4. T. Nomura, T. Kanda, T. Suzuki, Experimental investigation into influence of negative
attitudes toward robots on human-robot interaction. AI Soc 20, 138-150 (2006).
5. V. J. Strecher, B. M. DeVellis, M. H. Becker, I. M. Rosenstock, The role of self-efficacy
in achieving health behavior change. Health Educ Q 13, 73-92 (1986).
6. A. Bandura, Social foundations of thought and action: A social cognitive theory (Prentice-Hall,
Englewood Cliffs, NJ, 1986).
7. A. Bandura, On the functional properties of perceived self-efficacy revisited. J. Manag 38,
9-44 (2011).
Acknowledgments: We thank the Australian Centre for Robotic Vision, Centre for Children’s
Health Research, Institute of Health & Biomedical Innovation and Queensland University of
Technology for their support in this project. Funding: Leanne Hides is funded by an NHMRC
Senior Research Fellowship and by Lives Lived Well, a not-for-profit charity. Nicole Robinson
was supported by the Research Training Program (RTP) scheme on behalf of the Department of
Education and Training; Author contributions: Nicole Robinson designed and conducted the
studies, collected the data, analysed the data and completed the first draft of this paper. Jennifer
Connolly contributed to the design, supervision of the conduct of the studies and to the analyses,
and gave comments on the paper. Genevieve Johnson and Yejee Kim assisted with collection of
the student data and commented on the draft of the manuscript. Leanne Hides contributed to the
analyses and commented on the paper. David Kavanagh was the senior researcher on this project
and supervised all stages of it; Competing interests: Authors declare no competing interests;
Data and materials availability: All data needed to evaluate the conclusions in the paper are
present in the paper or the Supplementary Materials. The data for this study have been deposited
on https://github.com/nrbsn/scirobo. Copies of the code and surveys for the two studies are
available from the corresponding author.
Supplementary Materials for
New measures of incentives and confidence in using a social robot
N. L. Robinson,1* J. Connolly, 2 G. M. Johnson,3 Y. Kim,3† L. Hides, 4 D. J. Kavanagh, 2
*Correspondence to: n7.robinson@qut.edu.au
†These authors contributed equally to this work
This PDF file includes:
Text S1. Study Design, Materials and Methods
Text S2. Exploratory Factor Analyses: Further Notes
S2 Table 1. Results of Exploratory Factor Analyses: Robot Incentives Scale (RIS)
S2 Table 2. Results of Exploratory Factor Analyses: Robot Self-Efficacy Scale (RSES)
S2 Table 3. Results of Exploratory Factor Analyses: Robot Usage Intention (RUI)
Text S3. Comparisons of Mechanical and Humanoid Robot Behaviors
S3 Table 1. Between-Group Differences on Each Subscale
S3 Table 2. Between-Group Differences on Each Subscale after the First Robot Observation
S3 Table 3. Time by Condition Results for Each Subscale
S3 Figure 1. Ratings of More Mechanical and Humanoid Robot Interactions (Sequentially
Presented In Random Order)
Text S4. Confirmatory Factor Analysis
S4 Table 1. Results of Confirmatory Factor Analyses
Text S5. Prediction of Intentions from Self-Efficacy and Incentives
S5 Table 1. Multiple Regressions Predicting Intentions from Self-Efficacy and Incentives in the
Student Sample
S5 Table 2. Multiple Regressions Predicting Intentions from Self-Efficacy and Incentives in
the Adult Sample
Text S1. Study Design, Materials and Methods
Initial Development of the Robot Incentives Scale (RIS) and Robot Self-Efficacy Scale
(RSES)
Initial development of the assessment measures involved factor analyses to reduce the item
pools to theoretical dimensions. Item pools for the scales were
derived from Social Cognitive Theory (6, 7), comments of participants in previous studies,
and existing measures. After obtaining ethical approval, 202 students aged 13-18
(Median=16 years) were recruited from a private female-only school. Participants viewed
a 3-minute live interaction between a NAO robot and research intern about interests and a
recent experience, and completed the draft RIS, RSES and 5 items about intentions to use
a social robot. Almost all (n=190) reported minimal or no prior experience with robots.
Exploratory Factor Analyses applied Principal Axis Factoring and Oblimin rotation with
Kaiser Normalization on each measure.
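The following is a minimal sketch of this analysis pipeline in Python, assuming the factor_analyzer package and a hypothetical item-level data file; it illustrates the procedure (KMO and Bartlett checks, principal axis factoring with oblimin rotation) rather than reproducing the published results.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

# Hypothetical file of RIS item responses (one column per item, 0-10 ratings).
items = pd.read_csv("ris_items.csv")

chi2, p = calculate_bartlett_sphericity(items)   # Bartlett's test of sphericity
kmo_per_item, kmo_total = calculate_kmo(items)   # Kaiser-Meyer-Olkin sampling adequacy

# Principal axis factoring with oblimin rotation, three factors for the RIS.
fa = FactorAnalyzer(n_factors=3, method="principal", rotation="oblimin")
fa.fit(items)

pattern = pd.DataFrame(fa.loadings_, index=items.columns)  # rotated pattern matrix
eigenvalues, _ = fa.get_eigenvalues()                      # for factor retention decisions
```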
Sensitivity to Different Robot Behaviors
Sensitivity of the scales to different robot behaviors was tested after the items had been
reduced to theoretical dimensions. Two versions of the student-robot
interaction were scripted and programmed: a more mechanical version with no movement,
a monotone voice and limited responses, and a humanoid one that had animated gestures,
more verbal content and a varied tone. Class groups from the student sample were
randomly allocated to view the mechanical (4 groups, n=57) or humanoid interaction (3
groups, n=58). Other groups from the student sample rated both behaviors in a random
order (27 Mechanical first, 24 Humanoid first).
Confirmation of Internal Structures
Confirmation of the internal scale structures was conducted to validate the identified
dimensions. An online sample of 404 adults (52% female), recruited via university emails,
social media and media interviews, completed the scales in return for entry into a prize
draw. They were aged 18-78 (Median 35 years), 39% were university staff or students,
and 77% held a degree. After viewing a 2-minute video interaction between a NAO robot
and a 30-year-old female who discussed reducing her coffee intake, they completed the
RIS, RSES and intention items. Confirmatory factor analyses used Maximum Likelihood
estimation with the Yuan-Bentler χ2 adjustment.
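A minimal sketch of a comparable confirmatory model in Python is given below, using the semopy package and hypothetical item names; semopy's standard maximum-likelihood fit statistics do not include the Yuan-Bentler robust adjustment reported here, so this illustrates the model specification only.

```python
import pandas as pd
import semopy

# Hypothetical item-level data with columns a1..a5 (Application) and o1..o6 (Operation).
data = pd.read_csv("rses_items.csv")

# Two-factor RSES measurement model in lavaan-style syntax.
desc = """
Application =~ a1 + a2 + a3 + a4 + a5
Operation   =~ o1 + o2 + o3 + o4 + o5 + o6
Application ~~ Operation
"""

model = semopy.Model(desc)
model.fit(data)                       # maximum likelihood estimation
fit_stats = semopy.calc_stats(model)  # chi-square, CFI, TLI, RMSEA, AIC, among others
print(fit_stats.T)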
Associations with Intentions to Use a Social Robot
Independent contributions of RIS and RSES subscales to prediction of intentions to use
the social robot were examined using multiple regressions with simultaneous entry of
predictors.
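As an illustration, a simultaneous-entry regression of this kind could be run as below, using statsmodels and the hypothetical subscale-total column names from the earlier scoring sketch.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file containing the subscale totals and the RUI total per participant.
df = pd.read_csv("scored_responses.csv")

model = smf.ols(
    "RUI ~ RIS_Emotion + RIS_Utility + RIS_Social + RSES_Application + RSES_Operation",
    data=df,
).fit()

print(model.summary())  # unstandardised B, SE, t and p per predictor, plus R-squared
```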
Text S2. Exploratory Factor Analyses: Further Notes
Robot Incentives Scale (RIS)
There were no outliers. One item showed some kurtosis (‘This robot will not judge me on
my actions or thoughts’; 1.038); this item also showed an inadequate communality in a
preliminary analysis (0.294) and was omitted from further analyses. The Pattern Matrix
showed cross-loadings for three items (‘This robot would help with things that are
important to me’, 0.470, 0.064, -0.393; ‘I would feel a bond with this robot’, 0.308, 0.193,
-0.465; ‘I would trust this robot to help me’, 0.465, 0.173, -0.266). These items were
sequentially removed. For the remaining items, there was no multi-collinearity
(Determinant = 2.13e-5), and inter-item correlations fell in an acceptable range of 0.40 to
0.84. The Anti-Image Matrix demonstrated that all item partial correlation coefficients
were > 0.50 (Range 0.852 - 0.931). The Kaiser-Meyer-Olkin Measure of Sampling
Adequacy revealed excellent sampling adequacy to produce reliable, distinct factors
(0.907). Bartlett’s Test of Sphericity was significant (χ2 (66) = 2110.278, p < 0.001),
indicating that item correlations were sufficient for factor analysis. Factor analysis of the
Robot Incentives Scale (RIS) suggested
it had 3 subscales after removing 3 cross-loading items: ‘Emotion’ (Eigenvalue=7.332,
61% variance; α=.92), ‘Utility’ (Eigenvalue=1.258; 10% variance; α=.91), and
‘Social/Relational’ (Eigenvalue=1.088; 9% variance; α=.92).
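A brief sketch of how such internal-consistency estimates can be computed is given below, using pingouin's Cronbach's alpha on one subscale; the item column names are hypothetical placeholders.

```python
import pandas as pd
import pingouin as pg

# Hypothetical RIS item responses; select the five Emotion-subscale columns.
items = pd.read_csv("ris_items.csv")
emotion = items[["ris_e1", "ris_e2", "ris_e3", "ris_e4", "ris_e5"]]

alpha, ci = pg.cronbach_alpha(data=emotion)  # point estimate and 95% confidence interval
print(round(alpha, 2))                       # the paper reports alpha = .92 for Emotion
```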
S2 Table 1. Results of Exploratory Factor Analyses: Robot Incentives Scale (RIS)
Pattern matrix loadings (Emotional, Utility, Social/Relational, where reported):
I like this robot: Emotional 0.874, Social/Relational 0.080
I would enjoy interacting with this robot: Emotional 0.917, Social/Relational -0.082
I would be happy to talk to this robot: Emotional 0.750, Social/Relational -0.113
I would like to have this robot around me: Emotional 0.709, Social/Relational -0.199
This robot is entertaining: Emotional 0.799, Social/Relational 0.093
I would open up easily to this robot: Emotional 0.231, Social/Relational -0.694
I would talk to this robot about anything: Emotional -0.024, Social/Relational -0.977
I would talk to this robot about things I could not talk about to my family or friends: Emotional 0.013, Utility 0.131, Social/Relational -0.765
This robot would be able to help solve problems: Emotional -0.067, Social/Relational -0.013
This robot would be useful for me to have in my life: Emotional 0.037, Social/Relational -0.157
This robot would be able to provide me with the things that I want from a robot: Emotional 0.117, Utility 0.770, Social/Relational 0.093
This robot would provide reliable assistance to me: Emotional -0.012, Social/Relational -0.050
Items are rated from 0, Not at all, to 10, Definitely. Removed items (Emotional, Utility, Social/Relational loadings): ‘This robot would help with things that are important to me’ (0.470, 0.064, -0.393), ‘I would feel a bond with this robot’ (0.308, 0.193, -0.465), ‘I would trust this robot to help me’ (0.465, 0.173, -0.266).
Robot Self-Efficacy Scale (RSES)
There were no outliers. Some skewness was present on ‘Understand what this robot is
saying’ (-1.100), but the item was retained because of the importance of its content. All
items had at least two correlations with other items >0.30 and none >0.90. Item
communalities were >0.40 and contributed more than 30% of the shared variance. The
Anti-Image Matrix demonstrated that all item partial correlation coefficients were >0.50
(Range 0.867 - 0.952). Multi-collinearity was not present (Determinant = 7.743e-5), and
the Kaiser-Meyer-Olkin Measure of Sampling Adequacy showed sampling adequacy to
produce reliable and distinct factors (0.908). Bartlett’s Test of Sphericity was significant
(χ2 (66) = 1856.95, p < 0.001), indicating that item correlations were sufficient for factor
analysis. The Robot Self-Efficacy
Scale (RSES) had two factors: ‘Application’ (Eigenvalue=6.988, 58% variance; α=.93),
and ‘Operation’ (Eigenvalue=1.460, 12% variance; α=.89) after removing 3 cross-loading
items.
S2 Table 2. Results of Exploratory Factor Analyses: Robot Self-Efficacy Scale (RSES)
Pattern matrix loadings (Application, Operation):
1. Use this robot: Application 0.003, Operation 0.781
2. Control this robot: Application 0.131, Operation 0.703
3. Understand what this robot is saying: Application -0.169, Operation 0.819
4. Work with this robot to solve a problem: Application 0.893, Operation -0.034
5. Work out what to do by talking to this robot: Application 0.767, Operation 0.002
6. Get this robot to do something for me: Application 0.844, Operation 0.056
7. Get this robot to help me with something: Application 0.986, Operation -0.096
8. Make sure this robot does the task I set it: Application 0.655, Operation 0.185
9. Learn what to do with this robot: Application 0.118, Operation 0.673
10. Work out what to do if this robot isn’t doing what I want it to do: Application 0.258, Operation 0.562
11. Communicate clearly with this robot: Application 0.161, Operation 0.661
Items are rated from 0, Sure I can’t, to 10, Sure I can. Removed items: ‘Get this robot to understand what I am saying’ (0.478, 0.341), ‘Know what to do next with this robot’ (0.546, 0.330) and ‘Get this robot to respond right away’ (0.369, 0.454).
Robot Usage Intention Scale (RUI)
There were no outliers or departures from normality. All items had at least two
correlations with other items >0.30 and none >0.90 (Range: 0.709 - 0.878). Item
communalities were >0.40 and contributed more than 30% of the shared variance. The
Anti-Image Matrix demonstrated that all item partial correlation coefficients were >0.50
(Range 0.855 - 0.895). Multi-collinearity was not present (Determinant = 0.004), and the
Kaiser-Meyer-Olkin Measure of Sampling Adequacy showed there was sampling
adequacy to produce reliable and distinct factors (0.867). Bartlett’s Test of Sphericity was
significant (χ2 (10) = 1079.990, p < 0.001), indicating that item correlations were sufficient
for factor analysis. Intentions items formed a single scale (Eigenvalue=4.188; 84% variance; α=.95).
S2 Table 3. Results of Exploratory Factor Analyses: Robot Usage Intention (RUI)
Factor loadings:
1. I would interact with this robot often: 0.903
2. I would ask this robot for assistance: 0.890
3. I would spend time with this robot: 0.928
4. I would ask this robot to help me with a task on a regular basis: 0.832
5. I would interact with this robot for a long time: 0.911
Items are rated from 0, Not at all, to 10, Definitely.
Text S3. Comparisons of Mechanical and Humanoid Robot Behaviors
Between-Groups Design
From a total possible sample size of 121 students, 115 completed all three subscales. Most
were 16 (59, 51%), 17 (41, 36%) or 15 (12, 10%) years old, but one was 13 and two were
18. There was an even split of Year 11 (52%) and Year 12 (48%) participants. Any
missing item data were substituted by the subscale average for that participant. As shown
in Table 1, between-group ANOVAs showed no significant differences on any subscale
except RSES Operation.
S3 Table 1. Between-Group Differences on Each Subscale
Subscale: M (SD) Humanoid; M (SD) Mechanical; F (1, 113); p; Partial η2
RSES Application: 38.37 (8.54); 43.00 (9.71); 0.045; 0.832; <0.001
RSES Operation: 26.81 (9.78); 27.21 (10.36); 7.344; 0.008; 0.061
RIS Emotional: 30.60 (10.47); 27.82 (9.56); 2.206; 0.140; 0.019
RIS Utility: 20.15 (8.12); 17.24 (9.09); 3.277; 0.073; 0.028
RIS Social: 12.15 (7.84); 11.84 (8.58); 0.042; 0.839; <0.001
RUI: 22.48 (11.18); 19.96 (11.09); 1.468; 0.228; 0.013
RSES: Robot Self-Efficacy Scale; RIS: Robot Incentives Scale; RUI: Robot Usage
Intentions. Score ranges: RSES 0 – 110, RIS 0 – 120, RUI 0 – 50.
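For illustration, the between-groups comparison and the person-mean imputation of missing items described above could be sketched as follows; the file and column names are hypothetical, and only the RIS Emotion subscale is shown.

```python
import pandas as pd
from scipy.stats import f_oneway

# Hypothetical wide-format data: item columns plus a 'condition' column
# coded "humanoid" or "mechanical".
df = pd.read_csv("between_groups.csv")
emotion_cols = ["ris_e1", "ris_e2", "ris_e3", "ris_e4", "ris_e5"]

# Replace any missing item with that participant's own subscale mean, then sum.
df[emotion_cols] = df[emotion_cols].apply(lambda row: row.fillna(row.mean()), axis=1)
df["RIS_Emotion"] = df[emotion_cols].sum(axis=1)

humanoid = df.loc[df["condition"] == "humanoid", "RIS_Emotion"]
mechanical = df.loc[df["condition"] == "mechanical", "RIS_Emotion"]

F, p = f_oneway(humanoid, mechanical)  # one-way between-groups ANOVA on the subscale total
```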
Within-Groups Design
From a total possible sample size of 60 students, 51 students took part and completed the
scales. Most were 15 (44, 86%), while 6 were 16 (12%) and one (2%) was 14 years of age.
Almost all reported either no experience (48%) or little experience in using or
programming a robot (50%); just one reported a lot of experience. As shown in Table 2,
there were no significant differences on any subscale after the first robot observation,
broadly replicating the pattern seen in the between-groups design.
S3 Table 2. Between-Group Differences on Each Subscale after the First Robot Observation
Subscale: M (SD) Humanoid; M (SD) Mechanical; F (1, 49); p; Partial η2
RSES Application: 21.25 (10.66); 22.96 (10.59); 0.330; 0.568; 0.007
RSES Operation: 33.70 (14.71); 33.29 (10.49); 0.013; 0.908; <0.001
RIS Emotional: 29.58 (10.70); 26.66 (9.96); 1.014; 0.319; 0.020
RIS Utility: 18.16 (7.88); 15.25 (8.93); 1.501; 0.226; 0.030
RIS Social/Relational: 9.79 (6.29); 10.11 (8.56); 0.023; 0.881; <0.001
RUI: 19.62 (10.52); 16.92 (11.40); 0.765; 0.386; 0.015
RSES: Robot Self-Efficacy Scale; RIS: Robot Incentives Scale; RUI: Robot Usage
Intentions. Score ranges: RSES 0 – 110, RIS 0 – 120, RUI 0 – 50.
Repeated measures ANOVAs revealed significant Time by Condition effects on all
subscales (Table 3 and Fig 1).
S3 Table 3. Time by Condition Results for Each Subscale
Subscale: F (1, 49); p; Partial η2
RSES Application: 18.772; <0.001; 0.277
RSES Operation: 31.068; <0.001; 0.388
RIS Emotional: 76.204; <0.001; 0.609
RIS Social/Relational: 31.258; <0.001; 0.389
RIS Utility: 52.162; <0.001; 0.516
RUI: 53.285; <0.001; 0.521
RSES: Robot Self-Efficacy Scale; RIS: Robot Incentives Scale; RUI: Robot Usage
Intentions.
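A minimal sketch of this Time by Condition analysis using pingouin's mixed ANOVA is shown below; the long-format column names are hypothetical, with observation order as the within-subject factor and the behavior viewed first as the between-subject factor.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant per observation,
# with columns participant, time (first/second), condition (order group), score.
long_df = pd.read_csv("within_groups_long.csv")

aov = pg.mixed_anova(
    data=long_df,
    dv="score",
    within="time",
    subject="participant",
    between="condition",
)
print(aov[["Source", "DF1", "DF2", "F", "p-unc", "np2"]])  # 'Interaction' row = Time by Condition
```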
Fig 1. Ratings of More Mechanical and Humanoid Robot Interactions (Sequentially
Presented In Random Order)
Text S4. Confirmatory Factor Analysis
S4 Table 1. Results of Confirmatory Factor Analyses
Robot Scales: Robust χ2 (df); CFI; TLI; SRMR; RMSEA; AIC
Robot Self-Efficacy Scale, 12 items, 2 factors (Application, Operation): 329.455 (53); 0.873; 0.841; 0.060; 0.115; 19335.355
Dropping 1 Operation item (“Get this robot to do what I want”): 219.464 (43); 0.908; 0.882; 0.047; 0.102; 17752.967
Also correlating errors from 2 pairs of Application items*: 170.086 (42); 0.933; 0.912; 0.042; 0.088; 17672.910
Robot Incentives Scale, 12 items, 3 factors (Emotion, Utility, Social/Relational): 205.269 (51); 0.951; 0.936; 0.032; 0.087; 18940.080
Robot Usage Intentions, single factor: 24.916 (5); 0.978; 0.957; 0.013; 0.100; 7734.077
*Correlates errors between “Work with this robot to solve a problem” and “Work out what to do by talking to this robot”, and between “Get this robot to do something for me” and “Make sure this robot does the task I set it.”
Text S5. Prediction of Intentions from Self-Efficacy and Incentives
S5 Table 1. Multiple Regressions Predicting Intentions from Self-Efficacy and Incentives in the Student Sample
Step: R; R2; Adjusted R2; SE; R2 Change; F Change; df; p
RIS first, Step 1: 0.879; 0.772; 0.769; 5.57513; 0.772; 223.656; 3, 198; <0.001
RIS first, Step 2: 0.884; 0.781; 0.775; 5.49530; 0.009; 3.897; 2, 196; 0.022
RSES first, Step 1: 0.636; 0.404; 0.398; 8.99299; 0.404; 67.484; 2, 199; <0.001
RSES first, Step 2: 0.884; 0.781; 0.775; 5.49530; 0.377; 112.314; 3, 196; <0.001
Pearson correlations and equation at final step
(Constant): B = -3.689; SE = 1.467; t = -2.514; p = 0.013
RIS Emotional: r = 0.754, p < 0.001; B = 0.323; SE = 0.052; t = 6.222; p < 0.001
RIS Utility: r = 0.814, p < 0.001; B = 0.554; SE = 0.067; t = 8.252; p < 0.001
RIS Social/Relational: r = 0.702, p < 0.001; B = 0.254; SE = 0.069; t = 3.698; p < 0.001
RSES Application: r = 0.633, p < 0.001; B = 0.153; SE = 0.056; t = 2.720; p = 0.007
RSES Operation: r = 0.370, p < 0.001; B = -0.049; SE = 0.045; t = -1.090; p = 0.277
RIS: Robot Incentives Scale; RSES: Robot Self-Efficacy Scale
S5 Table 2. Multiple Regressions Predicting Intentions from Self-Efficacy and Incentives in the Adult Sample
Step: R; R2; Adjusted R2; SE; R2 Change; F Change; df; p
RIS first, Step 1: 0.905; 0.819; 0.818; 6.10424; 0.819; 589.679; 3, 390; <0.001
RIS first, Step 2: 0.908; 0.825; 0.823; 6.02298; 0.006; 6.297; 2, 388; 0.002
RSES first, Step 1: 0.735; 0.540; 0.538; 9.72761; 0.540; 229.590; 2, 391; <0.001
RSES first, Step 2: 0.908; 0.825; 0.823; 6.02298; 0.285; 210.640; 3, 388; <0.001
Pearson correlations and equation at final step
(Constant): B = -4.539; SE = 1.182; t = -3.839; p < 0.001
RIS Emotional: r = 0.821, p < 0.001; B = 0.270; SE = 0.042; t = 6.395; p < 0.001
RIS Utility: r = 0.866, p < 0.001; B = 0.593; SE = 0.057; t = 10.352; p < 0.001
RIS Social/Relational: r = 0.769, p < 0.001; B = 0.326; SE = 0.054; t = 6.085; p < 0.001
RSES Application: r = 0.735, p < 0.001; B = 0.148; SE = 0.048; t = 3.070; p = 0.002
RSES Operation: r = 0.528, p < 0.001; B = -0.014; SE = 0.035; t = -0.382; p = 0.702
RIS: Robot Incentives Scale; RSES: Robot Self-Efficacy Scale