The Effectiveness of Multimodal Sensory Feedback on VR Users' Behavior in an L-Collision Problem
Sumin Kim¹, Krzysztof Izdebski³, and Peter König¹,²

¹ Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany
sumkim@uni-osnabrueck.de
² Institut für Neurophysiologie und Pathophysiologie, Universitätsklinikum Hamburg-Eppendorf, Hamburg, Germany
³ SALT AND PEPPER Software GmbH & Co. KG, Osnabrück, Germany
Abstract. Virtual Reality (VR) is highly dependent on visual information, although it offers multimodal channels for sensory feedback. In this study, we compared the effectiveness of different sensory modalities in the context of collision avoidance in an industrial manufacturing process. Participants performed a pick-and-place task with L-shaped objects on a virtual workstation. In a between-subject design, each person performed one of four conditions: Baseline, Auditory, Haptic, and Visual. We measured the timing and accuracy of the performed actions. Statistical testing by an ANOVA showed a significant main effect, i.e. a difference between the conditions. We observed the lowest number of collisions in the auditory condition, followed by the haptic, baseline, and visual conditions. Post hoc tests revealed a significant difference between the auditory condition, the most accurate, and the visual condition, the least accurate. This implies that providing additional feedback via the visual modality is not optimal and that utilizing a fully multimodal interface increases effectiveness.

Keywords: VR · Multisensory feedback · Collision · Simulation
1 Introduction
Virtual Reality (VR) has evolved quickly in the last decade. The technical basis and critical performance criteria have improved substantially and now allow for the development of virtual environments with high immersion. Simultaneously, due to the rising number of applications and enthusiasts, the price tag has dropped considerably. This enables applications in new domains such as entertainment, science, and industry. The industrial use of VR has its emphasis on simulating and prototyping production processes in virtual environments. Notable features are the realistic rendering of the environment including multimodal features, naturalistic behavior in the VR by participants, and dynamic feedback contingent on task performance. Thus, VR tries to combine the best of both worlds and triggers a quantitative and qualitative change in prototyping production processes. Boxplan, an example of industrial VR software, is a virtual space where users plan their assembly stations at scale, create 3D mock-ups and experience the assembly workflow. Thus, they can faithfully test the
layout concept in industrial and economic contexts. Importantly, feedback from the later users is quickly incorporated, leading to short turnaround times and a reduction in costs. For such applications, users require more than just a realistic visualization; they also need direct feedback that guides their realistic physical behavior and their interaction with virtual objects [1]. Therefore, the goal of such a VR application can be achieved only through a realistic experience, as it is essential to recognize that the realism of a training simulation influences training effectiveness.
Training in a virtual environment can take different forms. For complex movements, such as those in many sports disciplines, naturalistic feedback on desired movement trajectories might be given. Other applications, where no single optimal behavior is defined, might limit themselves to alarms when an error occurs. Specifically, a collision, one of the most common physical interactions, can trigger such an alarm using multiple modalities. In a natural environment, it might be seen, be audible, or be felt. However, basic virtual environments confine themselves to the visual modality, i.e. when a collision occurs the visible movement is stopped. This establishes a baseline condition. Additionally, signals in other modalities, either in naturalistic form or as standardized alarms, may be supplied [2]. That is, the realistic sound of the collision could give feedback on the erroneous movement. This, however, would require simulation of the material properties, which is well beyond the scope of familiar virtual environments. Therefore, standardized acoustic alarms are often used. Similar concerns apply to feedback by the tactile modality [3]. Implementing natural force feedback for free movements is much more demanding than a simple vibration alarm. Still, modern technology provides many choices for multimodal feedback. In this respect, multimodal feedback has been investigated in different scenarios [4]. The effectiveness of tactile, visual, and auditory warnings for rear-end collision prevention in simulated driving has already been demonstrated [5]. That study found that collision warning systems based on different sensory feedback have a reliable effect on users' behavior and influence the number of collisions made [5]. A few studies have already revealed that multimodal feedback design can enhance motor learning and reduce workload by taking advantage of each modality, which is especially beneficial for complex tasks and production processes in industry [2].
The importance of haptic feedback in VR is quickly growing (e.g., HaptX) and has received considerable attention since the earliest studies of VR [6–8]. However, while the relative merits of different kinds of sensory feedback are actively debated, also within the framework of VR, and their effectiveness in different contexts has been explored, no general understanding has been achieved [4]. Hence, the goal of our study is to compare different sensory modalities regarding their effectiveness in collision avoidance in VR.
2 Method
To analyze the effectiveness of different sensory-feedback modalities, we set up a study that compares users' behavior in four different sensory-feedback conditions: a baseline condition with naturalistic visual feedback, an auditory condition with an additional auditory alarm, a haptic condition with an additional tactile alarm, and a visual condition with an additional color-changing visual alarm. We compare these conditions in the L-collision problem. This problem describes a collision made by an L-shaped object with other obstacles. When the head of the L-shaped object is behind the obstacle, pulling the object out easily causes collisions. The task therefore provides a suitable environment to address the question of interest.
2.1 Participants
In our experiment, 65 volunteers (21 female, 44 male) aged between 19 and 35 participated. All participants had normal or corrected-to-normal vision and did not have any known neurological conditions. Due to a misunderstanding of the task instructions, the data from two participants were excluded. In total, we measured 15 participants in the baseline condition, 15 participants in the auditory condition, 17 participants in the haptic condition, and 16 participants in the visual condition. Each participant experienced only one of the four conditions, and the condition was chosen randomly before the participant was known to the experimenter.
2.2 Apparatus
For our study, we used a VR-ready PC with an Nvidia 1070 GPU, Unity3D 5.6.3p2, NewtonVR, and an HTC Vive HMD (110-degree field of view, 90 Hz, resolution 1080 × 1200 px per eye). As we used the NewtonVR environment in our study, we decided to use a pure NewtonVR condition with only its natural visual cue as our control condition (baseline condition).
Fig. 1. (a) Experimental setup showing the shelf and the box. A participant needed to pick up an L-shaped object, pull it out of the shelf, and place it into the box to complete a trial (top). (b) The first shelf with the L-shaped objects (5 per story) and obstacles (bottom). (c) Participants were given enough time to adapt to the VR environment (right).
2.3 Task
The task employed in this study was the L-collision problem (Fig. 1). A two-story shelf with different-sized L-shaped objects was positioned in front of the user. Obstacles were mainly of two types: first, an obstacle with only a minimal gap between itself and the ceiling of the shelf, so that the user had to rotate the L-shaped object or pull it to the side; second, an obstacle with a gap large enough that the user could apply any movement, for example merely lifting the L-shaped object and pulling it out directly. The participants were instructed to pull ten L-shaped objects out from behind the obstacles on the two-story shelf under one of the four different collision-feedback conditions. A trial was considered completed when the selected L-shaped object was put into the box behind the user. When users missed or dropped the L-shaped object before placing it into the box, the trial was recorded as a failure. Avoiding collisions with the given obstacles was not mentioned explicitly, so the participants had to recognize this requirement by themselves via the received feedback. However, because the interacting object was L-shaped, it was technically difficult to pull it out and complete the task successfully when it got stuck or collided with obstacles. Therefore, the instruction to pull the L-shaped objects out past the obstacles forced the participants to try to avoid collisions. The removal task was explained individually to each participant. Each participant was given one of the four different collision-feedback conditions: baseline, auditory, haptic, or visual. In the baseline condition, which served as the control condition, participants received no feedback other than the natural visual cue. This visual cue was given by the default physics setup of NewtonVR: the interacting object did not pass through the obstacles, and participants could not complete the task without finding a way to avoid the obstacles. All other conditions also included this natural NewtonVR visual cue. On top of that, the auditory, haptic, and visual conditions each employed an additional modality for feedback. Thus, these three conditions are multimodal feedback conditions that also include the natural visual cue provided by the NewtonVR setup. In the auditory condition, an alarm sound played whenever the object touched any obstacle. In the haptic condition, the controller performing the grabbing motion vibrated to indicate a collision. In the visual condition, participants received additional visual feedback: the L-shaped object's material color changed to black every time it touched an obstacle and reverted to its original color once it no longer touched the obstacle.
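To make the condition logic concrete, the following sketch mirrors the feedback dispatch described above. The study itself was implemented in Unity3D with NewtonVR (i.e., in C#); this Python version only mirrors the logic, and all class, method, and interface names (e.g., play_alarm, vibrate, set_color) are hypothetical stand-ins rather than the actual implementation.

```python
"""Illustrative sketch of the per-condition feedback dispatch described above.
Every name here is a hypothetical stand-in for the engine interfaces."""

from enum import Enum


class Condition(Enum):
    BASELINE = "baseline"  # natural NewtonVR physics cue only
    AUDITORY = "auditory"  # + alarm sound while touching an obstacle
    HAPTIC = "haptic"      # + vibration of the grabbing controller
    VISUAL = "visual"      # + object color changes to black


class CollisionFeedback:
    """Dispatches the extra alarm of the assigned condition on collision events."""

    def __init__(self, condition, audio, controller, renderer):
        # audio, controller and renderer stand in for the engine interfaces.
        self.condition = condition
        self.audio = audio
        self.controller = controller
        self.renderer = renderer

    def on_collision_enter(self, l_object, obstacle):
        """Called when the L-shaped object starts touching an obstacle.
        In every condition the physics engine already blocks the movement
        (the natural visual cue); only the extra alarm differs."""
        if self.condition is Condition.AUDITORY:
            self.audio.play_alarm()
        elif self.condition is Condition.HAPTIC:
            self.controller.vibrate()
        elif self.condition is Condition.VISUAL:
            self.renderer.set_color(l_object, "black")

    def on_collision_exit(self, l_object, obstacle):
        """Called when the object no longer touches the obstacle."""
        if self.condition is Condition.VISUAL:
            self.renderer.restore_color(l_object)
```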
2.4 Procedure and Analysis
We recorded the number and timing of collisions. Each collision was labeled with the trial number and the index of the specific L-shaped object involved in the collision. Completion of a trial was defined as the L-shaped object interacting with the collider of the box, which was placed behind the user. Successful completions were noted, as well as the number of failures to complete the task. In order to make the collisions comparable across conditions, consistent feedback types were provided throughout each multimodal condition: the same feedback color in the visual condition, the same beeping sound in the auditory condition, and the same vibration frequency in the haptic condition. The analysis was performed using a one-way ANOVA with Tukey tests for post-hoc analysis. For outlier treatment, a capping procedure was applied before the parametric testing: observations below the lower limit were replaced with the value of the 5th percentile, and those above the upper limit with the value of the 95th percentile.
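As a rough illustration of this analysis pipeline, the sketch below applies the described percentile capping, a one-way ANOVA, and a Tukey HSD post-hoc test using standard Python tooling (NumPy, SciPy, statsmodels). The condition names follow the paper, but the function names and the synthetic demo data are assumptions for illustration only; this is not the authors' analysis code.

```python
"""Sketch of the collision-count analysis: percentile capping, one-way ANOVA,
and Tukey HSD post-hoc test. The demo data are synthetic placeholders."""

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd


def cap_outliers(values, lower_pct=5, upper_pct=95):
    """Replace observations below/above the percentile limits with those limits."""
    lo, hi = np.percentile(values, [lower_pct, upper_pct])
    return np.clip(values, lo, hi)


def analyse(collisions_by_condition):
    """collisions_by_condition: dict mapping condition name -> per-participant counts."""
    capped = {name: cap_outliers(np.asarray(vals, dtype=float))
              for name, vals in collisions_by_condition.items()}

    # One-way between-subjects ANOVA on the capped collision counts.
    f_stat, p_value = stats.f_oneway(*capped.values())
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

    # Tukey HSD post-hoc test on all pairwise condition differences.
    scores = np.concatenate(list(capped.values()))
    groups = np.concatenate([[name] * len(vals) for name, vals in capped.items()])
    print(pairwise_tukeyhsd(scores, groups, alpha=0.05))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic example data only; not the values reported in the paper.
    demo = {
        "baseline": rng.normal(130, 60, 15),
        "auditory": rng.normal(95, 60, 15),
        "haptic": rng.normal(110, 60, 17),
        "visual": rng.normal(150, 60, 16),
    }
    analyse(demo)
```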
3 Results
As a first step, we performed two control analyses that could potentially influence the interpretation of our results. Specifically, we examined learning effects and compared the level of difficulty of the different objects. For this purpose, two sequences of collision counts were visualized before analyzing the number of collisions across the different sensory-feedback types.
Fig. 2. (a) The number of collisions for the different trials; (b) the number of collisions for the different L-shaped objects; (c) boxplots of the number of collisions in the four different conditions for comparison of their means.
As each participant performed the task in a pseudo-random sequence, we checked whether a learning effect occurred over the trials. Figure 2a shows that most users made more collisions on their first trial than on any other trial, and the fewest collisions on their last trial. However, a higher trial index did not always lead to fewer collisions; across the remaining trials (trial 2 to trial 9), the number of collisions varied without a clear trend. Figure 2b shows that the fifth object (L5) was the most challenging for the users, causing more than 20 collisions on average. In comparison, the fourth object (L4) was the easiest, with fewer than five collisions on average. However, besides these two extreme cases, the variation in the number of collisions across objects was moderate and thus added only a limited amount of variance to each task. Hence, we concluded that no strong learning effect occurred after the first trial in this task. Also, as all subjects handled all objects and performed the same number of trials, the data could reasonably be averaged over these two variables.
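A minimal sketch of these two control aggregations is given below, assuming a hypothetical per-collision log with participant, trial, and object_id columns; the file name and data format are assumptions, as the paper does not specify them, and trials without any collision would additionally need to be reindexed to count as zero.

```python
"""Sketch of the two control checks: mean collisions per trial index (learning
effect) and per L-shaped object (difficulty), from a hypothetical collision log."""

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical collision log with one row per collision event.
log = pd.read_csv("collisions.csv")  # assumed columns: participant, trial, object_id

# Mean number of collisions per trial index, averaged over participants.
per_trial = (log.groupby(["participant", "trial"]).size()
                .groupby(level="trial").mean())

# Mean number of collisions per L-shaped object, averaged over participants.
per_object = (log.groupby(["participant", "object_id"]).size()
                 .groupby(level="object_id").mean())

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
per_trial.plot.bar(ax=axes[0], title="Mean collisions per trial")
per_object.plot.bar(ax=axes[1], title="Mean collisions per L-shaped object")
plt.tight_layout()
plt.show()
```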
Next, we focused on the differences between the conditions to explore the effectiveness of multimodal feedback. The auditory condition resulted in the fewest collisions, followed by the haptic condition, the baseline condition, and finally the visual condition. For statistical analysis, a one-way between-subjects ANOVA was conducted to compare the number of collisions between the virtual L-shaped objects and obstacles across the different conditions. There was a significant difference in the number of collisions between the four conditions [F(3) = 2.9, p = 0.0424]. As we found a statistically significant main effect of condition, we computed a Tukey HSD post-hoc test. It indicated that the mean score for the auditory condition (M = 94.6, SD = 58.98) differed significantly from that of the visual condition (M = 151.03, SD = 88.32) (p < 0.05). No other significant pairwise differences were found. Thus, we observe significantly different numbers of collisions as a function of condition, and specifically that the number of collisions in the auditory feedback condition is reduced in the pairwise comparison with the visual condition.
4 Discussion
With this experiment, we could demonstrate that the choice of modality influences the effectiveness of multimodal feedback. Specifically, supplying the additional feedback through another modality appears more effective than using the visual modality for both the natural feedback and the alarm signal at the same time.
A prior study comparing the effectiveness of visual-auditory and visual-tactile multimodal feedback on users in a real-world task setup suggested that multimodal feedback is advantageous compared to single modalities [4]. Specifically, it showed that visual-auditory feedback is most effective when a single task is being performed [4]. Another prior study made a convincing case for the inclusion of multimodal feedback for common direct manipulations such as drag-and-drop and showed that the inclusion of auditory feedback was common to the conditions that improved performance [9]. These results obtained in real-world setups match the observations of the present study in a VR setup: here, the auditory feedback condition, which includes the natural visual feedback, showed the best performance.
Furthermore, the visual feedback condition, which contained a visual alarm on top of the natural visual feedback, performed significantly worse in our study. We speculate that the central focus induced by the task in VR reduced attention to the peripheral vision and made the visual alarm less effective [10]. Detection of the visual alarm, therefore, might not function as effectively as it would in unconstrained real-world conditions. Another finding was that the baseline control condition with only the natural visual cue did not significantly differ from the visual feedback condition; if anything, it was slightly better. In other words, a large number of collisions was detected in the visual feedback condition, particularly compared to the other conditions including the baseline condition. This provides evidence that the natural visual cue and the additional visual information convey a similar type of information, and we conclude that combining them does not improve the effectiveness of feedback.
These results are compatible with studies investigating the attentional bottleneck of multiple modalities [11–13]. They report that in a dual-task setup the interference is reduced when multiple modalities are involved. Further studies demonstrated that multimodal feedback is advantageous compared to single modalities in a variety of task setups [4, 9, 14]. In this respect, the natural visual cue and the additionally designed visual feedback of a color change are both mediated by the visual modality. This explains why the visual condition, which is effectively a single-modality feedback condition, performed worse than the other, multimodal feedback conditions.
A further reason for the increased effectiveness of auditory feedback might be that participants are adapted only to realistic feedback. In the real world, when collisions occur between two objects, auditory and haptic feedback are naturally generated, as are force feedback and the natural visual cue. The change of color of the colliding object, however, is rather artificial feedback. As our motor control is influenced by internal representations of the actual and predicted states of our body and the external environment [15], such an artificial cue would not be predicted in the case of an error, i.e., it is harder to interpret. We speculate that the lack of natural internal representations for unrealistic feedback, such as the changing of the color of the object, can lead to ineffective task performance in VR.
In line with our study, one investigation examined the utilization of haptic feedback in a telepresence assembly task, a setup comparable to virtual environments. Although it emphasized the utilization of haptic feedback, it highlighted that a more realistic sense of presence under haptic feedback was achieved through other modalities, such as a visual bar graph or an auditory stimulus [8], supporting the effectiveness of auditory feedback.
Another finding in accordance with our result is the general efficacy of multimodal feedback [4, 9, 14], a conclusion reached by several studies comparing the effect of different modalities on users' performance. One study specifically found that the multimodal combination of visual-auditory feedback yields more favorable performance than visual feedback alone in single-task scenarios under normal workload conditions [4]. Also, our study extends the result of a joint-task study conducted in a non-VR setting [13] to a VR setting, showing that auditory displays are a viable option for receiving task-related information in virtual reality as well.
To that end, our study demonstrates that different types of feedback should be considered depending on the context of a VR application in order to optimize its effectiveness. Notably, our results cast new light on the function of sensory modalities other than vision in VR. However, as pointed out in similar studies, with varying workloads different modalities could offer additional advantages. Hence, our research suggests further studies investigating these findings in more specific contexts or with different tasks in order to apply them to particular practical cases.
Acknowledgments. We gratefully acknowledge the support by the project ErgoVR (BMBF, KMU Innovativ V5KMU17/221) and SALT AND PEPPER Software GmbH & Co. KG.
References
1. Ragan, E.D., Bowman, D.A., Kopper, R., Stinson, C., Scerbo, S., McMahan, R.P.: Effects of field of view and visual complexity on virtual reality training effectiveness for a visual scanning task. IEEE Trans. Vis. Comput. Graph. 21(7), 794–807 (2015)
2. Sigrist, R., Rauter, G., Riener, R., Wolf, P.: Augmented visual, auditory, haptic, and multimodal feedback in motor learning: a review. Psychon. Bull. Rev. 20(1), 21–53 (2013)
3. Hayward, V., Astley, O.R., Cruz-Hernandez, M., Grant, D., Robles-De-La-Torre, G.: Haptic interfaces and devices. Sens. Rev. 24(1), 16–29 (2014)
4. Burke, J.L., Prewett, M.S., Gray, A.A., Yang, L., Stilson, F.R., Coovert, M.D., Elliot, L.R., Redden, E.: Comparing the effects of visual-auditory and visual-tactile feedback on user performance: a meta-analysis. In: Proceedings of the 8th International Conference on Multimodal Interfaces, pp. 108–117. ACM (2006)
5. Scott, J.J., Gray, R.: A comparison of tactile, visual, and auditory warnings for rear-end collision prevention in simulated driving. Hum. Factors 50(2), 264–275 (2008)
6. Burdea, G.C.: Keynote address: haptics feedback for virtual reality. In: Proceedings of International Workshop on Virtual Prototyping, Laval, France, pp. 87–96 (1999)
7. Srinivasan, M.A., Basdogan, C.: Haptics in virtual environments: taxonomy, research status, and challenges. Comput. Graph. 21(4), 393–404 (1997)
8. Petzold, B., Zaeh, M.F., Faerber, B., Deml, B., Egermeier, H., Schilp, J., Clarke, S.: A study on visual, auditory, and haptic feedback for assembly tasks. Presence: Teleoper. Virtual Environ. 13(1), 16–21 (2004)
9. Jacko, J.A., Scott, I.U., Sainfort, F., Barnard, L., Edwards, P.J., Emery, V.K., Kongnakorn, T., Moloney, K.P., Zorich, B.S.: Older adults and visual impairment: what do exposure times and accuracy tell us about performance gains associated with multimodal feedback? In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 33–40. ACM (2003)
10. Khan, A.Z., Blohm, G., McPeek, R.M., Lefevre, P.: Differential influence of attention on gaze and head movements. J. Neurophysiol. 101(1), 198–206 (2009)
11. Wahn, B., König, P.: Can limitations of visuospatial attention be circumvented? A review. Front. Psychol. 8, 1896 (2017)
12. Wahn, B., König, P.: Is attentional resource allocation across sensory modalities task-dependent? Adv. Cogn. Psychol. 13(1), 83 (2017)
13. Wahn, B., Schwandt, J., Krüger, M., Crafa, D., Nunnendorf, V., König, P.: Multisensory teamwork: using a tactile or an auditory display to exchange gaze information improves performance in joint visual search. Ergonomics 59(6), 781–795 (2016)
14. Lee, J.H., Spence, C.: Assessing the benefits of multimodal feedback on dual-task performance under demanding conditions. In: Proceedings of the 22nd British HCI Group Annual Conference on People and Computers: Culture, Creativity, Interaction - Volume 1, pp. 185–192. British Computer Society (2008)
15. Frith, C.D., Blakemore, S.J., Wolpert, D.M.: Abnormalities in the awareness and control of action. Philos. Trans. R. Soc. Lond. Ser. B 355(1404), 1771–1788 (2000)