FIGURE 6
Mean error rates for different delays in Kangas et al. (2014d). © 2014 Association for Computing Machinery, Inc. Reprinted by permission. 

Source publication
Article
Full-text available
Vibrotactile feedback is widely used in mobile devices because it provides a discreet and private feedback channel. Gaze-based interaction, on the other hand, is useful in various applications due to its unique capability to convey the focus of interest. Gaze input is naturally available as people typically look at things they operate, but feedback...

Context in source publication

Context 1
... varied the delay between the gaze event and the vibrotactile feedback to find out how much time we have for giving the feedback before a (too long) delay starts to affect the user's behavior. The results showed a significant increase in error rates (i.e. the gaze + button press did not match the target) with delays around 250 to 350 ms (see Figure 6). This falls within the typical range of average fixation durations reported in the literature (see e.g. ...
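The timing constraint in this excerpt can be sketched as a small helper. This is a hypothetical sketch: the function and constant names are illustrative, and only the ~250 ms figure comes from the excerpt above.

```python
# Hypothetical sketch of scheduling vibrotactile feedback after a gaze
# event. The threshold reflects the excerpt above: delays of roughly
# 250-350 ms were associated with increased error rates.
CRITICAL_DELAY_MS = 250

def schedule_feedback(gaze_event_ms: int, delay_ms: int) -> tuple:
    """Return (time at which the vibrotactile pulse should fire,
    whether the delay falls in the range that risked affecting
    user behavior in the study)."""
    fire_at_ms = gaze_event_ms + delay_ms
    risky = delay_ms >= CRITICAL_DELAY_MS
    return fire_at_ms, risky
```

With a 100 ms delay the feedback fires well before the critical range; a 300 ms delay would be flagged as falling inside it.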

Citations

... The sensation of vibrotactile stimuli has been studied for a long time [1], and extensive research has clarified the types of sensations that can be designed using different vibration actuators [2], [3]. Recently, vibrotactile perception under multimodal conditions has garnered research interest [4], [5]. ...
... Smartphones have now become a necessity in our lives and are one of the most familiar vibration actuators. Vibrations emitted by smartphones can impart various meanings to the information that users obtain from the screen [5]. With the widespread use of smartphones, vibratory stimuli have become a part of our daily lives [6]; the usage environment of smartphones is entirely different from those assumed in previous studies. ...
Preprint
Full-text available
Vibrations emitted by smartphones have become a part of our daily lives. The vibrations can add various meanings to the information people obtain from the screen. Hence, it is worth understanding the perceptual transformation of vibration with ordinary devices to evaluate the possibility of enriched vibrotactile communication via smartphones. This study assessed the reproducibility of vibrotactile sensations via smartphones in an in-the-wild environment. To realize improved haptic design for communicating smoothly with smartphone users, we also focused on the moderation effects of in-the-wild environments on vibrotactile sensations: the physical specifications of mobile devices, the manner of device operation by users, and the personal traits of the users regarding the desire for touch. We conducted a Web-based in-the-wild experiment instead of a laboratory experiment to reproduce an environment as close to the daily lives of users as possible. Through a series of analyses, we revealed that users perceive the weight of vibration stimuli to be higher in sensation magnitude than intensity under identical conditions of vibration stimuli. We also showed that it is desirable to consider the moderation effects of in-the-wild environments for realizing better tactile system design to maximize the impact of vibrotactile stimuli.
... Since the interaction via gaze gestures is usually facilitated without a graphical user interface, the related studies focus more on vibrotactile feedback (Rantala et al., 2020). It was found that the implementation of vibrotactile feedback can reduce response time as well as improve the user's subjective evaluation (Kangas et al., 2014). ...
Article
We present an eye typing interface with one-point calibration, which is a two-stage design. The characters are clustered in groups of four characters. Users select a cluster by gazing at it in the first stage and then select the desired character by following its movement in the second stage. A user study was conducted to explore the impact of auditory and visual feedback on typing performance and user experience of this novel interface. Results show that participants can quickly learn how to use the system, and an average typing speed of 4.7 WPM can be reached without lengthy training. The subjective data of participants revealed that users preferred visual feedback over auditory feedback while using the interface. The user study indicates that this eye typing interface can be used for walk-up-and-use interactions, as it is easily understood and robust to eye-tracking inaccuracies. Potential areas of application, as well as possibilities for further improvements, are discussed.
... Activities where vibrotactile feedback is supported by experimental observations are posture correction (Bark et al., 2011; Ying and Morrell, 2010), rehabilitation training of the upper limb (Kapur et al., 2009), spatial guidance (Meier et al., 2015), aid for vestibular balance disorders (Sienko et al., 2013), gait (Crea et al., 2016), human-robot collaboration (Casalino et al., 2018), VR (Louison et al., 2015), AR (Zhu, Cao and Cai, 2020), prosthetics feedback (Chen, Feng and Wang, 2016), and sports training (Alahakone and Senanayake, 2009). Additionally, it provides a discreet and private feedback channel that avoids stigmatizing gear setups (Rantala et al., 2017). ...
Conference Paper
Full-text available
Container lashers are at significant risk of developing musculoskeletal diseases (MSDs) when working at port facilities. Repetitive strain injuries (RSIs) to the back, shoulders, wrists, and hands, in particular, are widespread. This work investigates the ability of a closed-loop vibrotactile motion guidance (VMG) system to teach an ergonomics-focused technique developed for tensioning and loosening turnbuckles, an important step in container lashing. Two groups of three participants each were observed over five sessions. Participants' initial ability was tested in a baseline session, during which they received only auditory feedback. A VMG device was used to instruct the experimental group during the next three sessions, while traditional auditory feedback was used to teach the control group. Finally, neither group wore the VMG device during the follow-up session. The findings of this study suggest that both VMG and auditory feedback training are effective training strategies for reducing postural error state (Wilcoxon signed-rank, p < 0.05). However, the results suggest that VMG does not provide a significant error-state reduction compared to auditory feedback training (Mann-Whitney, p > 0.05).
... In general, eye tracking can provide an unobtrusive way to observe, analyze and utilize a person's visual attention (Duchowski, 2017). Rantala et al. (2020) studied the temporal and spatial mechanisms of combining the gaze and tactile modalities, and found that tactile feedback performed as well as both visual and auditory feedback in providing users with signals in an unobtrusive way. Moreover, Gkonos et al. (2017) combined a tactile belt with an eye tracker to provide pedestrians with navigation instructions. ...
Article
Full-text available
Contemporary aircraft cockpits rely mostly on audiovisual information propagation which can overwhelm particularly novice pilots. The introduction of tactile feedback, as a less taxed modality, can improve the usability in this case. As part of a within-subject simulator study, 22 participants are asked to fly a visual-flight-rule scenario along a predefined route and identify objects in the outside world that serve as waypoints. Participants fly two similar scenarios with and without a tactile belt that indicates the route. Results show that with the belt, participants perform better in identifying objects, have higher usability and user experience ratings, and a lower perceived cognitive workload, while showing no improvement in spatial awareness. Moreover, 86% of the participants state that they prefer flying with the tactile belt. These results suggest that a tactile belt provides pilots with an unobtrusive mode of assistance for tasks that require orientation using cues from the outside world.
... Recent development of low-cost eye trackers has meant a significant expansion in research on the practical utilization of gaze in various kinds of situations. Some practical use cases for eye tracking are, for example, 1) usability studies, where the user's gaze behavior gives valuable information about which features the user is and is not paying attention to (Jacob and Karn, 2003; Poole and Ball, 2006), 2) market research, where gaze behavior is studied to learn which features in a product are noticed (Wedel and Pieters, 2008), and 3) input methods for human-computer interfaces (Kangas et al., 2016; Morimoto and Mimica, 2005; Rantala et al., 2020). A special application area for gaze tracking has been human-computer input methods for disabled people who are unable to use other input technologies (Bates et al., 2007). ...
Article
Full-text available
Head-mounted displays provide a good platform for viewing immersive 360° or hemispheric images. A person can observe an image all around, just by turning his/her head and looking in different directions. The device also provides a highly useful tool for studying the observer's gaze directions and head turns. We aimed to explore the interplay between participants' head and gaze directions and collected head and gaze orientation data while participants were asked to view and study hemispheric images. In this exploration paper we show combined visualizations of both the head and gaze orientations and present two preliminary models of the relation between the gaze and the head orientations. We also show results of an analysis of gaze and head behavior in relation to the given task/question.
... reduce Root Mean Square (RMS) of tilt angle [6] and Center of Pressure (CoP) [6], as well as the percentage of time spent above threshold [7]. In contrast to the other mentioned modalities, vibrotactile biofeedback is unobtrusive, does not distract from other tasks [8] and does not limit other sensory organs (e.g. auditory or visual) [9]. ...
Conference Paper
Vibrotactile biofeedback can improve balance and consequently be helpful in fall prevention. However, it remains unclear how different types of stimulus presentations affect not only trunk tilt but also Center of Pressure (CoP) displacements, and whether an instruction on how to move contributes to a better understanding of vibrotactile feedback. Based on lower back tilt angles (L5), we applied individualized multi-directional vibrotactile feedback to the upper torso via a haptic vest in 30 healthy young adults. Subjects were equally distributed across three instruction groups (attractive: move in the direction of feedback; repulsive: move in the opposite direction of feedback; no instruction: with attractive stimuli). We conducted four conditions with eyes closed (feedback on/off, narrow stance with head extended, semi-tandem stance), with seven trials of 45 s each. For CoP and L5, we computed the Root Mean Square (RMS) of position/angle and the standard deviation (SD) of velocity, and for L5 additionally the percentage of time above threshold. The analysis consisted of mixed-model ANOVAs and t-tests (α-level: 0.05). In the attractive and repulsive groups, feedback significantly decreased the percentage above threshold (p < 0.05). Feedback decreased the RMS of L5, whereas the RMS of CoP and the SD of velocity in L5 and CoP increased (p < 0.05). Finally, an instruction on how to move contributed to a better understanding of the vibrotactile biofeedback.
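The summary measures named in this abstract (RMS of position/angle, SD of velocity, percentage of time above a threshold) can be computed from a sampled signal roughly as follows. This is a generic sketch, not the study's actual analysis pipeline; function names and the population-SD convention are assumptions.

```python
import math

def rms(samples):
    """Root Mean Square of a position/angle signal."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def sd_of_velocity(samples, dt):
    """Population standard deviation of the first derivative
    (velocity), estimated by finite differences at sampling step dt."""
    vel = [(b - a) / dt for a, b in zip(samples, samples[1:])]
    mean = sum(vel) / len(vel)
    return math.sqrt(sum((v - mean) ** 2 for v in vel) / len(vel))

def pct_above_threshold(samples, threshold):
    """Percentage of samples whose magnitude exceeds the threshold."""
    return 100.0 * sum(abs(x) > threshold for x in samples) / len(samples)
```

Each function takes a plain list of tilt-angle or CoP samples; in practice these would be applied per trial and then averaged per condition.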
... Furthermore, a recent paper underlines the capacity of vibrotactile feedback to be as effective as visual and auditory feedback [24]. Finally, future evaluations may include the analysis of different keyboard layouts, such as a QWERTY keyboard in which the location of the keys could be based on the relative frequency of letters in a lexicon corpus. ...
... suggested that "virtualization capabilities" influence information systems performance in an organization. Hence the need for an effective user interface that can be integrated into information systems design methodologies (Rantala et al., 2020). ...
... As mobile devices nowadays come with diverse functionalities and features (He et al., 2016), different applications require different "user interface and interaction design" (Rantala et al., 2020). Research has shown that there are over a billion smartphone users across the globe, comprising elderly people, children, and people with disabilities (Punchoojit & Hongwarittorrn, 2017). ...
Article
Full-text available
Modern information systems (emerging technologies) are increasingly becoming an integral part of our daily lives and have begun to pose a serious challenge for human-computer interaction (HCI) professionals, as emerging technologies in the areas of mobile and cloud computing and the Internet of Things (IoT) call for more attention from HCI experts in terms of systems interface design. Mobile platform users nowadays comprise children, elderly people, and people with disabilities or disorders, all demanding an effective user interface that can meet their diverse needs, even on the move, at any time and anywhere. This paper reviews 43 current articles on HCI interface design approaches for modern information systems with the aim of identifying these methods and determining their effectiveness. The study found that current HCI design approaches are based on the desktop paradigm, which falls short of providing location-based services to mobile platform users. The study also discovered that almost all of the current interface design standards used by HCI experts for the design of user interfaces are neither effective nor supportive of emerging technologies, due to the flexible nature of these technologies. Based on the review findings, the study suggests combining human-centred design with agile methodologies for interface design, and calls on future work to use qualitative or quantitative approaches to further investigate HCI methods of interface design, with emphasis on cloud-based technologies and other organizational information systems.
... The experiments presented in this work exclusively studied visual action effects since the most basic effect of each saccade lies in the visual perception of the post-saccadic object. Whether other reafferences of goal-oriented eye movements, for instance eye movements with vibrotactile (see Rantala et al., 2020, for a review and design guidelines of gaze interaction with vibrotactile feedback in human-computer interaction) or auditory feedback, might be used in a similar way to retrieve an action still needs to be tested. However, I propose that non-visual reafferences of action effects should principally also be capable of retrieving eye movements associated with generating the intended action effect. ...
Thesis
Full-text available
Humans use their eyes not only as visual input devices to perceive the environment, but also as an action tool in order to generate intended effects in their environment. For instance, glances are used to direct someone else's attention to a place of interest, indicating that gaze control is an important part of social communication. Previous research on gaze control in a social context mainly focused on the gaze recipient by asking how humans respond to perceived gaze (gaze cueing). So far, this perspective has hardly considered the actor’s point of view by neglecting to investigate what mental processes are involved when actors decide to perform an eye movement to trigger a gaze response in another person. Furthermore, eye movements are also used to affect the non-social environment, for instance when unlocking the smartphone with the help of the eyes. This and other observations demonstrate the necessity to consider gaze control in contexts other than social communication whilst at the same time focusing on commonalities and differences inherent to the nature of a social (vs. non-social) action context. Thus, the present work explores the cognitive mechanisms that control such goal-oriented eye movements in both social and non-social contexts. The experiments presented throughout this work are built on pre-established paradigms from both the oculomotor research domain and from basic cognitive psychology. These paradigms are based on the principle of ideomotor action control, which provides an explanatory framework for understanding how goal-oriented, intentional actions come into being. The ideomotor idea suggests that humans acquire associations between their actions and the resulting effects, which can be accessed in a bi-directional manner: Actions can trigger anticipations of their effects, but the anticipated resulting effects can also trigger the associated actions. 
According to ideomotor theory, action generation involves the mental anticipation of the intended effect (i.e., the action goal) to activate the associated motor pattern. The present experiments involve situations where participants control the gaze of a virtual face via their eye movements. The triggered gaze responses of the virtual face are consistent with the participant’s eye movements, representing visual action effects. Experimental situations are varied with respect to determinants of action-effect learning (e.g., contingency, contiguity, action mode during acquisition) in order to unravel the underlying dynamics of oculomotor control in these situations. In addition to faces, conditions involving changes in non-social objects were included to address the question of whether mechanisms underlying gaze control differ for social versus non-social context situations. The results of the present work can be summarized into three major findings. 1. My data suggest that humans indeed acquire bi-directional associations between their eye movements and the subsequently perceived gaze response of another person, which in turn affect oculomotor action control via the anticipation of the intended effects. The observed results show for the first time that eye movements in a gaze-interaction scenario are represented in terms of their gaze response in others. This observation is in line with the ideomotor theory of action control. 2. The present series of experiments confirms and extends pioneering results of Huestegge and Kreutzfeldt (2012) with respect to the significant influence of action effects in gaze control. I have shown that the results of Huestegge and Kreutzfeldt (2012) can be replicated across different contexts with different stimulus material given that the perceived action effects were sufficiently salient. 3. Furthermore, I could show that mechanisms of gaze control in a social gaze-interaction context do not appear to be qualitatively different from those in a non-social context. All in all, the results support recent theoretical claims emphasizing the role of anticipation-based action control in social interaction. Moreover, my results suggest that anticipation-based gaze control in a social context is based on the same general psychological mechanisms as ideomotor gaze control, and thus should be considered as an integral part rather than as a special form of ideomotor gaze control.
... Similarly, He et al. (2018) develop a methodology to predict human intention in real-time such that it can be used in HCI systems. Rantala et al. (2020) and Khamis et al. (2018) survey recent approaches for gaze-based HCI and present future directions in this area. ...
Article
Wearable devices have the potential to transform multiple facets of human life, including healthcare, activity monitoring, and interaction with computers. However, a number of technical and adaptation challenges hinder the widespread and daily usage of wearable devices. Recent research efforts have focused on identifying these challenges and solving them such that the potential of wearable devices can be realized. This monograph starts with a survey of the recent literature on the challenges faced by wearable devices. Then, it discusses potential solutions to each of the challenges. We start with the primary application areas that provide value to the users of wearable devices. We then present recent work on the design of physically flexible and bendable devices that aim to improve user comfort. We also discuss state-of-the-art energy harvesting and security solutions that can improve user compliance of wearable devices. Overall, this monograph aims to serve as a comprehensive resource for challenges and solutions towards self-powered wearable devices for health and activity monitoring.