Subjective User Experience and Performance
with Active Tangibles on a Tabletop Interface
Jan B.F. van Erp(1,2), Alexander Toet(1), Koos Meijer(1), Joris Janssen(1), and Arnoud de Jong(1)
1 TNO Human Factors, Kampweg 5, Soesterberg, The Netherlands
{jan.vanerp,lex.toet,joris.janssen,arnoud.dejong}@tno.nl
2 Human Media Interaction, University of Twente, Drienerlolaan 5, Enschede, The Netherlands
Abstract. We developed active tangibles (Sensators) that can be used in
combination with multitouch tabletops and that can provide multisensory
(visual, auditory, and vibrotactile) feedback. For spatial alignment and rotation
tasks we measured subjective user experience and objective performance with
these Sensators. We found that active feedback increased accuracy in both tasks,
for all feedback modalities. Active visual feedback yielded the highest overall
subjective user experience and preference scores. Our contribution is that active
feedback improves subjectively perceived performance and reduces perceived
mental workload. Additionally, our findings indicate that users prefer to be
guided by visual signs over auditory and vibrotactile signs.
Keywords: Tangible interfaces · User experience · Tabletop · Multimodal · Active feedback
1 Introduction
Tangible user interfaces (TUIs) provide an intuitive way to interact with digital
information through manipulating physical objects. They combine the dynamic qual-
ities typical of digital information with physical affordances. In combination with
multi-touch tables TUIs provide passive haptic feedback and visual feedback. Users
can handle TUIs in the same way as everyday objects, which simplifies system interaction and
reduces cognitive load. The power of this concept is for instance effectively illustrated
by Urp [1], an application for working with architectural elements in the context of
urban planning and design, which allows users to move physical models of houses
around on a tabletop surface to observe changes in sunlight and shadows.
Frequently quoted benefits of TUIs include eyes-free interaction, spatial multiplexing
and bimanual interaction [2], and the natural affordances of tangible objects [3,4]. In
addition, they can be acquired faster and manipulated more accurately than for instance
multi-touch widgets or a mouse in simple control tasks [5,6].
Active feedback refers to a system's ability to actively influence the interaction, e.g. by
changing an object's position or orientation, or by providing feedback via the haptic,
auditory or visual modality. Most TUIs currently used with multitouch tabletops are
passive objects that offer no active feedback. If active feedback is provided at all, it is
usually only in the visual modality and on the tabletop surface (e.g. in the form of halos
or virtual knobs around the tangibles), forcing users to look at the table to see the effect
of a manipulation. Actuated TUIs can convey task relevant information through
multiple or alternative sensory channels, thus facilitating information processing by
simplifying information and guiding user attention. In addition, multimodal feedback
has the ability to enhance both objective and subjective performance measures in tasks
with high perceptual and cognitive load [7]. Vibrotactile cues can effectively replace
visual progress information [8,9] and visual alerts [10], although they are not effective
when replacing visual direction or spatial orientation cues [10]. Audio cues have been
found to speed up task performance [11], and can attract attention to tangibles that are
outside the field of view. Hence, distributing feedback over different modalities may
reduce workload and enable multitasking by reducing within-channel interference
[12–14]. In addition, actuated TUIs that are linked to and dynamically represent digital
information afford bidirectional interaction while maintaining consistency between the
physical interfaces and their representations [15], between remotely coupled TUIs
[16,17], or between TUIs and their underlying digital model [18].
We developed active tangibles (Sensators, Fig. 1) that can wirelessly communicate
with for instance a tabletop surface and provide direct visual, auditory, vibrotactile, or
multimodal feedback while taking haptic input, thereby allowing an intuitive interac-
tion that stretches even beyond the boundaries of the tabletop surface. In addition,
wireless connectivity makes it possible to store information in these objects and reuse them on
another tabletop or to superimpose different objects, thus enabling distributed inter-
action between different users on different tables [19,20].
In contrast to passive tangibles, Sensators can actively guide the user and confirm
when they have reached a given location and orientation through multisensory feed-
back. Hence, Sensators have the potential to enhance and intensify the interaction and
collaboration experience between users (e.g. on multitouch surfaces) by supporting
new interaction styles and techniques. In this study we investigated user performance
with Sensators in two spatial tasks: a Movement and a Rotation task. Our first
hypothesis (H1) is that both tasks will be performed faster and with higher accuracy
(i.e. a lower error rate) when receiving active feedback compared to receiving only
passive visual feedback.
Fig. 1. The Sensators. The grey areas are coated with conductive paint and are connected to
touch sensors. Left: an activated Sensator emitting red light. Right: the numbers on top identify
the Sensators; the arrows indicate their orientation.
Our second hypothesis (H2) is that visual feedforward cues
signaling which of the Sensators have to be moved will lower task completion times by
reducing search time. Thirdly, we hypothesize (H3) that any form of active feedback
will improve subjective user experience by reducing the amount of cognitive effort
required to determine the state of the system and the effects of one’s actions. Finally,
(H4) we expect that active visual feedback will enhance user experience to a larger
extent than auditory and tactile feedback, because people are less experienced at being
guided by vibrotactile [21] and auditory signs [10,22,23], and because vision dom-
inates the auditory and haptic sense in spatial tasks [22,24].
2 Related Work
Active tangibles have been introduced before in the context of tabletop systems. The
Haptic Wheel [25] is a mobile rotary controller providing haptic feedback for eyes-free
interaction to reduce cognitive load and visual attention in visually demanding tasks.
The SmartPuck [26] is a multimodal tangible interface providing visual, auditory and
tactile feedback which has been used to navigate geographical information in Google
Earth. Touchbugs [27] are autonomously moving TUIs that can provide haptic feed-
back and accept haptic input. Tangibles that move autonomously [18] or provide
vibrotactile collision feedback [28] have been demonstrated in furniture arrangement
scenarios. Active tangibles have also been used for auditory and haptic rendering of
scatter plots [29].
However, there are only a few studies on user experience and performance with
active tangibles [30,31]. It appears that active tangibles can effectively support users in
solving complex spatial layout problems [30] and in fine-grained manipulation tasks
[31] by providing haptic feedback. In this study we further investigate the experience
and performance with active TUIs providing multisensory feedback in two spatial
tasks.
3 Methods
3.1 Participants
21 adults (10 females, 11 males, mean age = 36.7 years, age range: 20–48 years) were
recruited for this experiment. 17 participants were right-handed. None of the partici-
pants had any physical disabilities. Participants received €30 for their
participation. All participants used a computer on a weekly basis (M = 30 h per week).
12 participants played computer games on a weekly basis (M = 6 h per week).
20 participants had previous experience with multi-touch technology on mobile
phones. One participant had previous experience using a multi-touch tabletop interface.
3.2 Material
A Samsung Surface 40 (SUR 40: www.samsung.com) was used as a multi-touch
tabletop computing device. The SUR 40 allows users to directly manipulate digital
content using both multi-touch hand gestures and through the manipulation of tangi-
bles. The SUR 40 features a 40 inch Full HD (1080p) display and multiple Bluetooth
connections. The active tangibles (referred to as 'Sensators' in the rest of this study)
used in this experiment were specially developed for use in combination with an
interactive multi-touch tabletop. Each Sensator (6.5 × 6.5 × 5.0 cm) includes an
Arduino mini micro-controller and a Bluetooth communication module, which enables
communication with the SUR 40. The Sensators can convey vibrotactile, visual and
auditory feedback through various functional parts. Embedded in their translucent 3D
printed housing are two small electronic motorized actuators or tactors. Each tactor can
independently produce nine levels of vibrotactile signals. An RGB LED, mounted centrally
underneath the top of the housing, enables a Sensator to display different colors. An embedded
mp3 audio processing shield enables a Sensator to play mp3 audio output signals.
A Sensator also contains two independent touch sensors connected to the top of its four
sides. See [32] for more details.
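The paper does not specify the Sensators' Bluetooth command protocol, so the following is only a minimal, hypothetical host-side sketch (Python, using pyserial over an assumed RFCOMM serial link); the 'LED', 'VIB' and 'MP3' commands, the port name and the baud rate are illustrative placeholders, not the actual firmware interface.

import serial  # pyserial; assumes each Sensator is paired as an RFCOMM serial port

class SensatorLink:
    """Hypothetical host-side driver for one Sensator's feedback channels."""

    def __init__(self, port="/dev/rfcomm0", baudrate=9600):
        self.conn = serial.Serial(port, baudrate, timeout=1)

    def _send(self, command: str) -> None:
        # Line-based ASCII commands are an assumption made for this sketch.
        self.conn.write((command + "\n").encode("ascii"))

    def set_led(self, r: int, g: int, b: int) -> None:
        # Drive the RGB LED that shines through the translucent housing.
        self._send(f"LED {r} {g} {b}")

    def set_vibration(self, level: int) -> None:
        # The tactors support nine intensity levels; 0 is used here for 'off'.
        self._send(f"VIB {max(0, min(9, level))}")

    def play_sound(self, track: int) -> None:
        # Trigger a pre-stored sample on the mp3 audio shield.
        self._send(f"MP3 {track}")

# Example: signal a correct placement with green light and no vibration.
# link = SensatorLink()
# link.set_led(0, 255, 0)
# link.set_vibration(0)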
In this study each Sensator was marked both with its number (1, 2 or 3) and an
arrow symbol (2D orientation), while a fiducial marker was attached to its bottom. This
enabled the SUR40 to track both the location and orientation of each Sensator inde-
pendently. The experiment was performed in a brightly lit room with white walls. The
participant stood directly behind the SUR 40. A side table was placed next to the SUR
40 on its right side. A blue A4 paper sheet, attached to the surface of this side table
within reaching distance of the participant’s right arm, functioned as a target location
during some phases of the experiment. The experimenter stood about 3 m from the
participant, and operated a tripod mounted video camera to record the experiments.
3.3 User Tasks
Participants performed two tasks: a Movement Task which involved the displacement of
the Sensators to different designated target positions, and a Rotation Task which
involved adjustment of the orientation of the Sensators to a single designated target
site. During the experiments the display of the SUR40 showed an abstract map with a
realistic appearance but without any cues that might interfere with the task. A map was
used as background since it is likely that Sensators will ultimately be applied in the
context of geographical information displays.
The three Sensators were placed on their corresponding icons on the SUR40. Prior
to the start of each trial the screen showed two buttons: one on the left side labeled 'Start 1'
and one on the right side labeled ‘Start 2’. To ensure that the participant’s hands were
in the same starting position for each trial these two buttons had to be pressed
simultaneously for 200 ms. After starting a trial a grey button labelled ‘Finish’
appeared in the bottom center of the screen. By pressing this button the participant
could finish the trial. White circles labeled H1, H2 and H3 served as passive visual
cues during the experiment, indicating target locations for the Sensators with corre-
sponding labels.
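As an illustration of the start gating just described (both Start buttons held simultaneously for 200 ms), the sketch below shows the timing logic in Python; the is_pressed callback and the polling loop are assumptions of this sketch, since the actual SUR40 application code is not given.

import time

def wait_for_trial_start(is_pressed, hold_ms=200, poll_s=0.01):
    """Return the trial start time once 'Start 1' and 'Start 2' have been
    pressed simultaneously for hold_ms milliseconds.

    is_pressed(button_id) -> bool is assumed to come from the tabletop's
    touch framework; only the 200 ms gating logic is illustrated here."""
    held_since = None
    while True:
        now = time.monotonic()
        if is_pressed("start1") and is_pressed("start2"):
            if held_since is None:
                held_since = now
            elif (now - held_since) * 1000.0 >= hold_ms:
                return now  # task completion time is measured from this moment
        else:
            held_since = None  # releasing either button resets the hold timer
        time.sleep(poll_s)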
Movement Task. At the start of each experiment the Sensators were placed on icons
with corresponding numbers (H1, H2, and H3) shown in the lower part of the SUR40
map display. On each trial up to three target icon positions were updated on the screen,
and the participant's task was to move the Sensators with updated target locations to
their new positions as accurately and quickly as possible. Participants were asked not to
lift the Sensators from the table surface, since the SUR40 needs direct contact with a
Sensator's fiducial marker to track its location and pointing angle. They were allowed
to use both hands. Each trial that ended with one or more Sensators not in the correct
position was labeled as incorrect.
Performance in the Movement Task was tested for six different feedback tech-
niques, which served to quickly guide the users to the correct target location in a
stepwise fashion. Four feedback modes were actively provided by the Sensators: Vi-
brotactile, Auditory, Visual and Multimodal (= Visual + Auditory + Vibrotactile). In
addition, the Tabletop provided visual feedback, and we included no feedback or
Baseline condition. In the Vibrotactile feedback mode, the Sensators started vibrating
when they came within 300 p (22 p = 1.0 cm) of the target location. At a distance of
200, 100 and 50 p the vibration intensity increased stepwise. Vibration stopped when
the Sensator came within 8 pixels (the error margin) of the target location, indicating
the correct location had been reached. In the Auditory, Vibrotactile, and Multimodal
modes the intensity of the feedback signal increased stepwise while approaching the
target and stopped when the Sensator had reached its target position. The Auditory
feedback technique was similar to the Tactile technique, but instead of a vibration the
Sensator produced a tone that increased in pitch when it approached the target location.
In the Visual feedback mode Sensators with an updated target location turned blue.
When a Sensator came within the error margin of its target location it turned
green, indicating that the correct position had been reached (see Fig. 2).
Fig. 2. Tabletop (upper) and Visual (lower) feedback modes used in the Movement Task.
The two Sensators on the left are in their correct positions.
The Multimodal
feedback technique was a combination of the individual Vibrotactile, Auditory and
Visual feedback techniques. In the Tabletop visual feedback mode Sensators with an
updated target position were surrounded by a blue disc (R = 140 p) with an opacity of
50 %. This disc (which served as a feedforward cue) remained underneath a Sensator
while it was being moved to its new location, and it turned green when the target
position was reached (Fig. 2). In the Baseline mode, no feedback was provided and the
Sensators acted as passive tangibles. In this case, the users only had the target icons on
the tabletop surface as passive visual cues. Participants performed 15 trials per feed-
back technique.
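The stepwise distance-to-intensity mapping described above can be summarized in a few lines. The sketch below (Python) uses the thresholds stated in the text (onset at 300 p, step-ups at 200, 100 and 50 p, stop within the 8-pixel error margin); the 0-4 level coding itself is an illustrative assumption, since the exact intensity values are not reported.

import math

ERROR_MARGIN_PX = 8                       # target counts as reached within 8 pixels
DISTANCE_STEPS_PX = (300, 200, 100, 50)   # feedback onset and step-up distances (22 p = 1.0 cm)

def movement_feedback_level(sensator_xy, target_xy):
    """Map the Sensator-to-target distance to a stepwise feedback intensity.
    Returns 0 when feedback is off (target reached, or still beyond 300 p)."""
    distance = math.hypot(target_xy[0] - sensator_xy[0],
                          target_xy[1] - sensator_xy[1])
    if distance <= ERROR_MARGIN_PX:
        return 0                          # correct location reached: feedback stops
    level = 0
    for step, threshold in enumerate(DISTANCE_STEPS_PX, start=1):
        if distance <= threshold:
            level = step                  # intensity steps up at 200, 100 and 50 p
    return level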
Rotation Task. Throughout this experiment the Sensators were at fixed locations,
indicated on the SUR40 display by icons labeled with their corresponding numbers
(white discs with R = 3.5 cm, labeled H1, H2, and H3). On each trial of the Rotation
Task a single red disc (R = 1.5 cm) appeared at a different location on the SUR40
display, and the participants had to rotate all three Sensators to orient their pointing
angles to this target as accurately and quickly as possible. They were allowed to use
both hands. Each trial that ended with one or more Sensators not correctly oriented was
labeled as incorrect.
Performance in the Rotation Task was tested for six different feedback techniques,
which served to quickly point the users to the correct target orientation in a stepwise
fashion. In the Vibrotactile feedback mode a Sensator started vibrating when the
pointing angle of the Sensator came within 50 degrees of the angular direction of
the target. At angles of 40, 30 and 20 degrees the vibration intensity increased stepwise.
Vibration stopped when the pointing angle came within 10 degrees (the error margin) of the
target direction, indicating that the Sensator pointed in the correct direction. Auditory feedback was similar to Vibrotactile
feedback, but instead of vibrating the Sensator produced a tone which increased in
pitch until it pointed in the correct direction. In the Visual feedback mode all Sensators
turned blue at the start of each trial and turned green when they were
correctly oriented. A Sensator that was displaced by more than 5 cm from its start
position would turn red. The Multimodal feedback technique was a combination of the
Vibrotactile, Auditory and Visual feedback techniques. In the Tabletop visual feedback
mode a Sensator that needed to be reoriented was surrounded by a blue disc (R = 140 p)
with an opacity of 50 %, which turned green when the Sensator was turned in the
correct direction. In the Baseline mode, no feedback was provided and the Sensators
acted as passive tangibles. In this case, the participants could only use the target icons
on the tabletop surface as passive visual cues. Participants performed 15 trials per
feedback technique.
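For the Rotation Task the same stepwise scheme operates on angular error. The sketch below computes a Sensator's pointing error towards the target (wrapped to 0-180 degrees) and maps it onto feedback steps using the stated 50/40/30/20-degree thresholds and 10-degree error margin; as above, the numeric level coding is an illustrative assumption.

import math

ANGLE_MARGIN_DEG = 10
ANGLE_STEPS_DEG = (50, 40, 30, 20)        # feedback onset and step-up angles

def rotation_feedback_level(sensator_xy, pointing_deg, target_xy):
    """Map a Sensator's pointing error towards the target to a feedback step.
    Returns 0 when feedback is off (correctly oriented, or error beyond 50 degrees)."""
    bearing = math.degrees(math.atan2(target_xy[1] - sensator_xy[1],
                                      target_xy[0] - sensator_xy[0]))
    error = abs((pointing_deg - bearing + 180.0) % 360.0 - 180.0)  # wrap to [0, 180]
    if error <= ANGLE_MARGIN_DEG:
        return 0                          # correct direction reached: feedback stops
    level = 0
    for step, threshold in enumerate(ANGLE_STEPS_DEG, start=1):
        if error <= threshold:
            level = step                  # intensity steps up at 40, 30 and 20 degrees
    return level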
3.4 Experimental Design
The experiment was performed according to a 2 × 6 within-subject design with Task
type (Movement Task, Rotation Task) and Feedback modality (Vibrotactile, Auditory,
Visual, Multimodal, Tabletop, Baseline) as independent variables. Each of the six
feedback levels was tested in a separate block of trials. Each participant performed six
blocks of 25 trials (150 trials in total). Each block started with 15 trials of the
Movement Task (of which the first three were practice trials), followed by 10 trials of
the Rotation Task (of which the first two were practice trials).
For each trial in the experiment we logged accuracy and task completion time.
Accuracy was defined as the fraction of trials that was correctly performed. Task
completion time was the time that elapsed between the start of a trial and the moment
the finish button was pressed. To measure perceived workload participants scored two
items from the NASA Task Load Index (NASA TLX: “How mentally demanding was
the task?" and "How successful were you in accomplishing what you were asked to
do?”) on a 20 point scale [33]. Participants also rated their overall experience of the
different feedback techniques on two nine point bipolar semantic rating scales from the
Questionnaire for User Interaction Satisfaction (or QUIS: respectively item 3.1: ranging
from 'terrible' to 'wonderful', and item 3.4: ranging from 'difficult' to 'easy' [34–36]).
Finally, at the end of the experiment the participants were asked to rank order the six
feedback modalities for both tasks from 'most preferred' to 'least preferred'. Analysis
of variance (ANOVA) was used to test the relationships between the main variables;
Bonferroni correction was applied where appropriate. The statistical analyses were
performed with IBM SPSS 20.0 for Windows (www.ibm.com).
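The analyses were run in IBM SPSS 20.0; for readers without SPSS, the following is a minimal sketch of analogous tests in Python (statsmodels and scipy), assuming a long-format table with one row per participant, task, feedback condition and measure. The column names and the aggregation step are assumptions of this sketch, not part of the original analysis pipeline.

import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

def analyze(df: pd.DataFrame) -> None:
    # Repeated-measures ANOVA on accuracy with the 2 x 6 within-subject design
    # (Task type x Feedback modality); trials are averaged per cell.
    anova = AnovaRM(df, depvar="accuracy", subject="participant",
                    within=["task", "feedback"], aggregate_func="mean").fit()
    print(anova)

    # Pairwise Wilcoxon signed-rank tests on the subjective ratings, with a
    # Bonferroni-corrected alpha for the number of pairwise comparisons.
    conditions = list(df["feedback"].unique())
    pairs = [(a, b) for i, a in enumerate(conditions) for b in conditions[i + 1:]]
    alpha = 0.05 / len(pairs)
    for a, b in pairs:
        x = df[df["feedback"] == a].groupby("participant")["rating"].mean().sort_index()
        y = df[df["feedback"] == b].groupby("participant")["rating"].mean().sort_index()
        w, p = stats.wilcoxon(x, y)
        print(f"{a} vs {b}: W = {w:.1f}, p = {p:.4f}, significant: {p < alpha}")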
3.5 Procedures
At the start of the experiment the participants read and signed an informed consent.
The experimenter then explained the multi-touch system and the Sensators, and
demonstrated the Movement Task and the Rotation Task. Next, the six experimental blocks
were presented in a randomized order. After each block the participants rated the
applied feedback technique for each task using the NASA TLX and QUIS items. At the end of all six
experimental blocks the participants rank ordered the six feedback techniques
according to their subjective preference. The experimental protocol was reviewed and
approved by the TNO internal review board on experiments with human participants, and
was in accordance with the Helsinki Declaration of 1975, as revised in 2000 [37].
4 Results
One participant was excluded because of an incomplete dataset. Nine participants
reported a lag in the multimodal feedback condition for both tasks, which probably
resulted from a software error. Analysis of the videos showed that the lag reached up to
300 ms. Since a lag of this magnitude will significantly affect the results [38] the
Multimodal condition was not further analyzed in this study.
The ANOVA showed that for both tasks the mean accuracy scores were signifi-
cantly lower for Baseline feedback (p < .001), while they did not differ significantly
between the other feedback techniques. Feedback resulted in an average of 92.5 %
accuracy, in contrast to a Baseline accuracy of 70 % (Table 1).
Incorrect trials were excluded from the calculation of task completion time, and
completion times were cut off at 15,000 ms. For the Movement Task, a one-way repeated
measures ANOVA revealed no significant difference in mean task completion
time between the five feedback conditions. For the Rotation Task, the ANOVA showed that
Tabletop feedback resulted in significantly faster task performance than both Vibro-
tactile and Baseline feedback (p < .001), while Visual feedback was also faster than
Vibrotactile feedback (p < .05). The means for both tasks are given in Table 2.
The NASA TLX measured perceived performance and mental workload on two 20
point scales. A Wilcoxon signed-rank test showed that for both tasks participants rated
their performance significantly higher in both the Visual and Tabletop feedback modes
than in the other modes (Z = −3.078, p < .005), while performance in the Baseline
mode was perceived as worse compared to all other feedback modes. A similar analysis
showed that both Visual and Tabletop feedback yielded significantly less perceived
mental workload than Vibrotactile and Baseline feedback, while Auditory feedback did
not differ significantly from all other feedback techniques. Since there were no inter-
action effects both scores were combined in Table 3.
Table 1. Mean (SD) accuracy per feedback condition (N = 20)
Feedback modality Movement task Rotation task
Auditory 0.96 (0.07) 0.96 (0.07)
Tabletop 0.93 (0.06) 0.96 (0.08)
Vibrotactile 0.89 (0.07) 0.86 (0.11)
Visual 0.92 (0.08) 0.95 (0.06)
Baseline 0.70 (0.21) 0.66 (0.21)
Table 2. Mean (SD) task completion time (ms) per feedback condition
(N = 20)
Feedback modality Movement task Rotation task
Auditory 8153 (1021) 8388 (1358)
Tabletop 8202 (1241) 7869 (1204)
Vibrotactile 7935 (1139) 9232 (1461)
Visual 8005 (1343) 8183 (1104)
Baseline 7462 (1781) 9324 (1740)
Table 3. Mean (SD) NASA TLX scores per feedback condition
(N = 20)
Feedback modality Movement task Rotation task
Auditory 5.1 (3.2) 6.3 (4.4)
Tabletop 3.9 (2.9) 4.6 (2.9)
Vibrotactile 5.7 (2.7) 7.6 (4.0)
Visual 3.4 (2.3) 4.3 (2.3)
Baseline 9.2 (4.9) 9.0 (5.1)
The QUIS was used to measure user experience on two nine point scales labeled
terrible-wonderful and difficult-easy. A Wilcoxon signed-rank test showed that for both
tasks participants rated both the Visual and Tabletop feedback as significantly
more wonderful than the Auditory, Vibrotactile and Baseline feedback techniques
(Z = −3.467, p = .001). A similar analysis showed that participants found both Visual
and Tabletop feedback significantly easier to use than Vibrotactile and Baseline
feedback, while there was no significant difference between Visual and Tabletop
feedback. Since there were no interaction effects both scores were combined in Table 4.
Since there were no interaction effects between both tasks, we combined their
ranking scores for the different feedback techniques (Table 5). Wilcoxon signed-rank
tests showed that Visual feedback was rated significantly higher than Auditory, Vi-
brotactile and Baseline feedback (p < .001), while Tabletop feedback was rated sig-
nificantly higher than both Auditory and Baseline feedback (p < .005). There was no
significant difference (at the Bonferroni corrected alpha level of .005) between Audi-
tory, Baseline and Vibrotactile feedback (p = .04).
5 Conclusions and Discussion
Hypothesis H1 (both tasks, Movement and Rotation, will be performed faster and with
higher accuracy with active feedback) was only partly confirmed. Active feedback had
no effect on task completion time for the Movement Task. For the Rotation Task,
however, both Visual and Tabletop feedback yielded significantly faster task perfor-
mance than Vibrotactile feedback, while Tabletop feedback also resulted in shorter task
completion times than Baseline feedback. Also, all active feedback modes signifi-
cantly increased accuracy for both tasks, while there was no significant difference
between the accuracy in the different active feedback modes.
Table 4. Mean (SD) QUIS scores per feedback condition (N = 20)
Feedback modality Movement task Rotation task
Auditory 4.9 (2.0) 6.3 (1.7)
Tabletop 7.2 (1.2) 7.3 (1.6)
Vibrotactile 5.4 (1.7) 5.2 (1.6)
Visual 7.5 (0.9) 7.6 (1.0)
Baseline 4.3 (2.1) 3.8 (2.0)
Table 5. Mean (SD) rank scores per feedback condition (N = 20)
Feedback modality Rank
Auditory 4.0 (1.2)
Tabletop 2.1 (1.4)
Vibrotactile 3.9 (1.4)
Visual 1.9 (0.9)
Baseline 5.0 (1.1)
Hypothesis H2 (visual feedforward cues signaling which Sensators have to be
moved reduce search time and thereby task completion time) could not be tested due to
software errors.
Hypothesis H3 (active feedback improves subjective user experience) only holds
for the Visual and Tabletop feedback modes. Participants rated their performance
significantly higher in these feedback modes than in the other feedback modes while
performance in the baseline mode was perceived as worse compared to all other
feedback modes. Visual and Tabletop feedback also significantly reduced perceived
mental workload compared to Vibrotactile and Baseline feedback, while Auditory
feedback did not differ significantly from all other feedback modes in this respect.
Finally, hypothesis H4 (active visual feedback enhances user experience more than
auditory and tactile feedback) was also partly confirmed. Visual feedback was rated
significantly higher than Auditory, Vibrotactile and Baseline feedback, while Tabletop
feedback was rated significantly higher than both Auditory and Baseline feedback.
There was no significant difference between Auditory, Baseline and Vibrotactile
feedback.
Summarizing, we found that all active feedback techniques increased accuracy in
both tasks. Active visual (Visual and Tabletop) feedback yielded the highest accuracy
in both tasks, fastest performance in the Rotation task, and overall highest subjective
user experience and preference scores. Without active feedback (Baseline condition)
subjectively perceived performance was lowest and perceived mental workload was
highest. Although Visual and Tabletop feedback performed equally well in most cases,
Visual may be preferable, since visual feedback from the tangible itself reduces clutter
and occlusion on the display surface, and the signal remains visible when the tangible
is used beyond the boundaries of the tabletop. Future work should investigate the
potential added value of auditory or visual feedback in attracting attention to Sensators
that are outside the SUR40 surface, and further investigate optimal combinations of
multimodal feedback (in bi- and tri-modal combinations) and the effects of feedforward
cues on task completion time.
References
1. Underkoffler, J., Ishii, H.: Urp: a luminous-tangible workbench for urban planning and
design. In: Proceedings of the CHI 1999, pp. 386–393. ACM Press (1999)
2. Fitzmaurice, G.W., Buxton, W.A.S.: An empirical evaluation of graspable user interfaces:
towards specialized, space-multiplexed input. In: Proceedings of the CHI 1997, pp. 43–50.
ACM Press (1997)
3. Fitzmaurice, G.W., Ishii, H., Buxton, W.A.S.: Bricks: laying the foundations for graspable
user interfaces. In: Proceedings of the CHI 1995, pp. 442–449. ACM Press (1995)
4. Hurtienne, J., Stößel, C., Weber, K.: Sad is heavy and happy is light: population stereotypes
of tangible object attributes. In: Proceedings of the TEI 2009, pp. 61–68. ACM Press (2009)
5. Tuddenham, P., Kirk, D., Izadi, S.: Graspables revisited: multitouch vs. tangible input for
tabletop displays in acquisition and manipulation tasks. In: Proceedings of the CHI 2010,
pp. 2223–2232. ACM Press (2010)
6. Weiss, M., Hollan, J.D., Borchers, J.: Augmenting interactive tabletops with translucent
tangible controls. In: Müller-Tomfelde, C. (ed.) Tabletops –Horizontal Interactive Displays,
pp. 149–170. Springer, London (2010)
7. Lee, J.-H., Spence, C.: Assessing the benefits of multimodal feedback on dual-task
performance under demanding conditions. In: Proceedings of the BCS-HCI 2008, vol. 1,
pp. 185–192. British Computer Society (2008)
8. Brewster, S., King, A.: An investigation into the use of tactons to present progress
information. In: Costabile, M.F., Paternó, F. (eds.) INTERACT 2005. LNCS, vol. 3585,
pp. 6–17. Springer, Heidelberg (2005)
9. van Veen, H.A.H.C., van Erp, J.B.: Tactile information presentation in the cockpit. In:
Brewster, S., Murray-Smith, R. (eds.) Haptic HCI 2000. LNCS, vol. 2058, pp. 174–181.
Springer, Heidelberg (2001)
10. Prewett, M.S., Elliott, L.R., Walvoord, A.G., Coovert, M.D.: A meta-analysis of vibrotactile
and visual information displays for improving task performance. IEEE Trans. SMC-C 42(1),
123–132 (2012)
11. Huang, Y.Y., Moll, J., Sallnäs, E.L., Sundblad, Y.: Auditory feedback in haptic
collaborative interfaces. Int. J. Hum. -Comp. Stud. 70(4), 257–270 (2012)
12. Wickens, C.D.: Multiple resources and performance prediction. Theor. Issues Ergon. Sci. 3,
159–177 (2002)
13. van Erp, J.B.F., Werkhoven, P.: Validation of principles for tactile navigation displays. In:
Proceedings of the Human Factors and Ergonomics Society Annual Meeting, pp. 1687–
1691. SAGE Publications (2006)
14. Elliott, L.R., Van Erp, J.B.F., Redden, E.S., Duistermaat, M.: Field based validation of a
tactile navigation device. IEEE Trans. Haptics 3(2), 78–87 (2010)
15. Ressler, S., Antonishek, B., Wang, Q., Godil, A.: Integrating active tangible devices with a
synthetic environment for collaborative engineering. In: Proceedings of the Web3D 2001,
pp. 93–100. ACM Press (2001)
16. Richter, J., Thomas, B.H., Sugimoto, M., Inami, M.: Remote active tangible interactions. In:
Proceedings of the TEI 2007, pp. 39–42. ACM Press (2007)
17. Brave, S., Ishii, H., Dahley, A.: Tangible interfaces for remote collaboration and
communication. In: Proceedings of the CSCW 1998, pp. 169–178. ACM Press (1998)
18. Rosenfeld, D., Zawadzki, M., Sudol, J., Perlin, K.: Physical objects as bidirectional user
interface elements. IEEE CGA 24(1), 44–49 (2004)
19. Kubicki, S., Lebrun, Y., Lepreux, S., Adam, E., Kolski, C., Mandiau, R.: Simulation in
contexts involving an interactive table and tangible objects. Sim. Mod. Pract. Theory 31,
116–131 (2013)
20. Lepreux, S., Kubicki, S., Kolski, C., Caelen, J.: From centralized interactive tabletops to
distributed surfaces: The Tangiget concept. Int. J. Hum. -Comp. Interact. 28(11), 709–721
(2012)
21. Van Erp, J.B.F.: Guidelines for the use of vibro-tactile displays in human computer
interaction. In: Proceedings of Eurohaptics, pp. 18–22 (2002)
22. Nesbitt, K.: Designing multi-sensory displays for abstract data. Ph.D. Thesis, School of
Information Technologies, University of Sydney, Sydney, Australia (2003)
23. Sigrist, R., Rauter, G., Riener, R., Wolf, P.: Augmented visual, auditory, haptic, and
multimodal feedback in motor learning: A review. Psychon. Bull. Rev. 20(1), 21–53 (2013)
24. van Erp, J.B.F., Kooi, F.L., Bronkhorst, A.W., van Leeuwen, D.L., van Esch, M.P., van
Wijngaarden, S.J.: Multimodal interfaces: a framework based on modality appropriateness.
In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, pp. 1542–
1546. SAGE Publications (2006)
25. Bianchi, A., Oakley, I., Lee, J.K., Kwon, D.S., Kostakos, V.: Haptics for tangible
interaction: a vibro-tactile prototype. In: Proceedings of the TEI 2011, pp. 283–284. ACM
Press (2011)
26. Kim, L., Cho, H., Park, S., Han, M.: A tangible user interface with multimodal feedback. In:
Jacko, J.A. (ed.) HCI 2007. LNCS, vol. 4552, pp. 94–103. Springer, Heidelberg (2007)
27. Nowacka, D., Ladha, K., Hammerla, N.Y., Jackson, D., Ladha, C., Rukzio, E., Olivier, P.:
Touchbugs: actuated tangibles on multitouch tables. In: Proceedings of the CHI 2013,
pp. 759–762. ACM Press (2013)
28. Riedenklau, E., Hermann, T., Ritter, H.: An integrated multi-modal actuated tangible user
interface for distributed collaborative planning. In: Proceedings of the TEI 2012, pp. 169–
174. ACM Press (2012)
29. Riedenklau, E., Hermann, T., Ritter, H.: Tangible active objects and interactive sonification
as a scatter plot alternative for the visually impaired. In: Proceedings of the ICAD-2010,
pp. 1–7. ACM Press (2010)
30. Patten, J., Ishii, H.: Mechanical constraints as computational constraints in tabletop tangible
interfaces. In: Proceedings of the CHI 2007, pp. 809–818. ACM Press (2007)
31. Pedersen, E.W., Hornbæk, K.: Tangible bots: interaction with active tangibles in tabletop
interfaces. In: Proceedings of the CHI 2011, pp. 2975–2984. ACM Press (2011)
32. van Erp, J.B.F., Toet, A., Janssen, J.: Uni-, bi- and tri-modal warning signals: effects of
temporal parameters and sensory modality on perceived urgency. Saf. Sci. 72, 1–8 (2015).
doi:10.1016/j.ssci.2014.07.022
33. Hart, S.G.: NASA-Task Load Index (NASA-TLX); 20 years later. In: Proceedings of the
Human Factors and Ergonomics Society Annual Meeting, pp. 904–908. HFES (2006)
34. Chin, J.P., Diehl, V.A., Norman, K.L.: Development of an instrument measuring user
satisfaction of the human-computer interface. In: Proceedings of the CHI 1988, pp. 213–218.
ACM Press (1988)
35. Harper, B.D., Norman, K.L.: Improving user satisfaction: The questionnaire for user
interaction satisfaction version 5.5. In: Proceedings of the First Annual Mid-Atlantic Human
Factors Conference, pp. 224–228 (1993)
36. Slaughter, L.A., Harper, B.D., Norman, K.L.: Assessing the equivalence of paper and
on-line versions of the QUIS 5.5. In: Proceedings of the 2nd Annual Mid-Atlantic Human
Factors Conference, pp. 87–91 (1994)
37. World Medical Association. World Medical Association Declaration of Helsinki: Ethical
principles for medical research involving human subjects. J. Am. Med. Assoc. 284(23),
3043–3045 (2000)
38. Wickens, C.D., Hollands, J.G.: Engineering psychology and human performance, 3rd edn.
Prentice-Hall, Upper Saddle River (2000)