Conference Paper

Interact with your car: a user-elicited gesture set to inform future in-car user interfaces


Abstract

In recent years, stereoscopic 3D (S3D) displays have shown promising results for user experience, navigation, and critical warnings when applied in cars. However, previous studies have only investigated these displays in non-interactive use cases. So far, interacting with stereoscopic 3D content in cars has not been studied. Hence, we investigated how people interact with large S3D dashboards in automated vehicles (SAE Level 4). In a user-elicitation study (N=23), we asked participants to propose interaction techniques for 24 referents while sitting in a driving simulator. Based on video recordings and motion-tracking data of 1104 proposed interactions containing gestures and other input modalities, we grouped the gestures per task. Overall, we report a chance-corrected agreement rate of κ = 0.232, indicating medium agreement among participants. Based on the agreement rates, we defined two gesture sets: a basic and a holistic version. Our results show that participants interact intuitively with S3D dashboards and that they prefer mid-air gestures that either directly manipulate the virtual object or operate on a proxy object. We further compare our results with similar results from different settings and provide insights on factors that have shaped our gesture set.
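The chance-corrected agreement rate reported above (κ = 0.232) reflects how consistently participants proposed the same gesture category for a given referent. As a minimal sketch only — assuming a Fleiss-style κ over per-referent category counts; the paper's exact formulation may differ — such a value can be computed along these lines:

```python
# Hedged sketch (not the authors' code): chance-corrected agreement over
# elicited proposals, computed as Fleiss' kappa from per-referent counts.
from collections import Counter

def fleiss_kappa(proposals_per_referent):
    """proposals_per_referent: one inner list per referent, each containing the
    gesture-category label proposed by every participant for that referent."""
    n = len(proposals_per_referent[0])                 # participants per referent
    categories = {c for ref in proposals_per_referent for c in ref}
    counts = [Counter(ref) for ref in proposals_per_referent]   # n_ij per referent

    # Observed agreement: mean pairwise agreement among participants per referent
    P_i = [(sum(c * c for c in cnt.values()) - n) / (n * (n - 1)) for cnt in counts]
    P_bar = sum(P_i) / len(P_i)

    # Expected (chance) agreement from the overall category distribution
    total = n * len(proposals_per_referent)
    P_e = sum((sum(cnt[cat] for cnt in counts) / total) ** 2 for cat in categories)

    return (P_bar - P_e) / (1 - P_e)

# Toy example: 3 referents, 4 participants each
print(fleiss_kappa([["swipe", "swipe", "point", "grab"],
                    ["grab", "grab", "grab", "swipe"],
                    ["point", "swipe", "point", "point"]]))
```

Values in the 0.2–0.4 band are conventionally read as fair-to-moderate agreement, consistent with the "medium agreement" interpretation above.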


... Therefore, new application scenarios tailored to specific purposes in the driving context become possible if the driver can conveniently pass the surroundings to the IVIS as input [10,39,43,46]. Research has used direct spatial selection in the driving context to consider external surrounding objects as inputs [20,24,47]. Gomaa et al. demonstrated an external object referencing method with improved accuracy, using finger pointing and gaze [16]. ...
... These technologies can facilitate the transition between primary and secondary tasks and enable the driver to perform secondary tasks more conveniently [26]. Accordingly, exploratory studies have been conducted on the application of natural modalities to interaction techniques in autonomous driving [9,37,47]. However, further advances in autonomous driving technology are required before a fully autonomous driving scenario is realized [1]. ...
... In Point & Select, finger pointing is first used to make rapid, approximate selections, and button manipulation is then used for fine tuning. However, considering its use cases in a real driving context, there are a variety of modalities that can be combined with Point & Select in a natural manner, such as voice, gaze, and gesture [16,43,46,47]. In particular, voice input is a promising interaction modality in the driving context that can be combined with Point & Select to clearly communicate a POI to the IVIS, throughout the entire process from entering the POI to its subsequent applications. ...
Preprint
We propose an interaction technique called "Point & Select." It enables a driver to directly enter a point of interest (POI) into the in-vehicle infotainment system while driving in a city. Point & Select enables the driver to directly indicate with a finger, identify, adjust (if required), and finally confirm the POI on the screen by using buttons on the steering wheel. Based on a comparative evaluation of two conditions (driving-only and driving with input-task) on a simulator, we demonstrated the feasibility of the interaction in the driving context from the perspective of driver performance and interaction usability at speeds of 30, 50, and 70 km/h. Although the interaction usage and speed partially affected the driver's mental load, all the participants drove at an acceptable level in each condition. They carried out the task successfully with a success rate of 96.9% and task completion time of 1.82 seconds on average.
... At the time of writing (February 2020), much progress has been made on the reliability of HG recognition and UI design, through SDK updates (e.g., LMC SDK v4, released in 2018), the use of machine learning, UI design and best-practice blogs, source code published on developer websites [37], and many academic papers that have helped unveil and mitigate some of the major flaws found in HG-controlled IVISs (and in other mid-air spatial interactions), such as fatigue, reliability, and accuracy, thereby accelerating technology adoption [38]. For instance, recent elicitation studies specific to the automotive setting [39] have demonstrated that users prefer mid-air HGs that directly manipulate a virtual object or proxy object, thereby defining a holistic gesture set that is intuitive and functional for performing canonical tasks on car dashboards while driving. ...
... Figure 5 shows the gestures and corresponding mid-air haptics used in the first prototype we developed. The five HGs were chosen based on recent relevant literature [9][18][19][20] to allow for direct comparison, and because they have already undergone user testing and emerged through user elicitation [39] or online survey studies evaluated via the agreement metrics proposed in [42]. A swiping HG was associated with moving to the next menu, as described in [33]. ...
... Finally, we have also tried to accompany HGs with mid-air haptics that convey contextual or subconscious information through the tactile (invisible) information channel. While several groups are already conducting rigorous testing to identify and quantitatively measure the reliability of HG-controlled IVISs [39][9], we think that these efforts do not yet adequately utilize the full potential of mid-air haptics towards improving the reliability of HG-controlled interactive IVISs. To that end, we strongly believe that investigations coupling interaction design, hand tracking, and mid-air haptic capabilities are needed if we are to unveil deep synergies between these technologies and thus achieve a step change in the reliability of such integrated systems. ...
Article
Full-text available
We present advancements in the design and development of in-vehicle infotainment systems that utilize gesture input and ultrasonic mid-air haptic feedback. Such systems employ state-of-the-art hand tracking technology and novel haptic feedback technology and promise to reduce driver distraction while performing a secondary task therefore cutting the risk of road accidents. In this paper, we document design process considerations during the development of a mid-air haptic gesture-enabled user interface for human-vehicle-interactions. This includes an online survey, business development insights, background research, and an agile framework component with three prototype iterations and user-testing on a simplified driving simulator. We report on the reasoning that led to the convergence of the chosen gesture-input and haptic-feedback sets used in the final prototype, discuss the lessons learned, and give hints and tips that act as design guidelines for future research and development of this technology in cars.
... User-defined gestures have been investigated by researchers in various contexts [7,8,15,19,26,27,31,34,36]. Wobbrock et al. [36] studied user-defined gestures for multi-touch surface computing, such as tabletops. ...
... Ruiz et al. [27] used the user-defined method to develop a motion gesture set for mobile interaction. Weidner and Broll [34] proposed user-defined hand gestures for interacting with in-car user interfaces, and Troiano et al. [31] presented user-defined gestures for interacting with elastic, deformable displays. These user-defined methods motivated our research. ...
Preprint
Full-text available
Recent research proposed eyelid gestures for people with upper-body motor impairments (UMI) to interact with smartphones without finger touch. However, such eyelid gestures were designed by researchers. It remains unknown what eyelid gestures people with UMI would want and be able to perform. Moreover, other above-the-neck body parts (e.g., mouth, head) could be used to form more gestures. We conducted a user study in which 17 people with UMI designed above-the-neck gestures for 26 common commands on smartphones. We collected a total of 442 user-defined gestures involving the eyes, the mouth, and the head. Participants were more likely to make gestures with their eyes and preferred gestures that were simple, easy-to-remember, and less likely to draw attention from others. We further conducted a survey (N=24) to validate the usability and acceptance of these user-defined gestures. Results show that user-defined gestures were acceptable to both people with and without motor impairments.
... For example, the steering wheel was found to be a suitable support for gesture input [4,5], while mid-air gestures could be performed in the gearshift area [16] to control multimedia and navigation [12], audio, and climate controls [2]. Prior work has examined the ergonomics of in-car gesture input [27], the intuitiveness of gesture commands [18], gesture input usability [28], and gesture vocabulary design [6,11,13,14,24], including the use of gesture elicitation and analysis methods [22,26]. ...
... Several studies have implemented the user-defined gesture elicitation method [22,26] to discover user preferences for gestures inside the car [6,11,14,24]. For example, May et al. [14] were ...
... Safety & Driver Assistance [4, 5, 21, 22, 25, 26, 29, 31, 42, 43, 50-52, 70-72, 77, 82, 83, 86, 88-90, 96, 105, 107-109, 112, 115-119, 122, 131, 138, 144, 154, 156, 160, 161, 163, 183-185, 194, 199, 202, 208, 215, 216, 221, 231, 232, 239, 240, 245]; Navigation & Routing [13, 17, 27, 45, 46, 81, 85, 92, 95, 102, 129, 132, 140, 150, 158, 191, 195, 196, 209, 213, 214, 229, 241, 243]; User Interface Design [16, 24, 47, 53, 54, 56, 60, 63, 74, 76, 78, 80, 91, 106, 110, 113, 114, 120, 133-135, 168, 173, 181, 200, 201, 203-205, 212, 226, 227, 242]; Interaction Modalities [19, 23, 101, 103, 104, 111, 165, 166, 170, 190, 207, 210, 217, 220, 224, 225]; [14, 84, 97, 141, 148]. We clustered the reviewed papers into a total of eight application areas, which shows that there is a wide field for automotive HCI researchers investigating AR technology. Figure 2 shows the advancement of these application areas over time, and Table 1 displays the reviewed literature with respect to the individual application areas. ...
Article
Full-text available
There is a growing body of research in the field of interaction between drivers/passengers and automated vehicles using augmented reality (AR) technology. With the advancement and growing availability of AR, the number of use cases in and around vehicles rises. Our literature review reveals that past AR research focussed on increasing road safety and displaying navigational aids, whereas more recent research explores the support of immersive (non-)driving-related activities, enhances driving and passenger experiences, and assists other road users through external human-machine interfaces (HMIs). AR may also be the enabling technology to increase trust and acceptance in automated vehicles through explainable artificial intelligence (AI), and therefore help in the shift from manual to automated driving. We organized a workshop addressing AR in automotive human-computer interaction (HCI) design and identified a number of challenges, including human factors issues that need to be tackled, as well as opportunities and practical usages of AR in future mobility. We believe that our status-quo literature analysis and future-oriented workshop results can serve as a research agenda for user interface designers and researchers when developing automotive AR interfaces.
... Progress in automotive display technology offers opportunities for innovative designs. Researchers and manufacturers have explored different in-car display technologies such as stereoscopic 3D displays [26], heads-up displays [13], interactive dashboards [2], and windshield displays [8]. Vehicles' exterior surfaces are also considered as potential design spaces [3,4]. ...
Conference Paper
Full-text available
Our work extends contemporary research into visualizations and related applications for automobiles. Focusing on external car bodies as a design space we introduce the External Automotive Displays (EADs), to provide visualizations that can share context and user-specific information as well as offer opportunities for direct and mediated interaction between users and automobiles. We conducted a design study with interaction designers to explore design opportunities on EADs to provide services to different road users; pedestrians, passengers, and drivers of other vehicles. Based on the design study, we prototyped four EADs in virtual reality (VR) to demonstrate the potential of our approach. This paper contributes our vision for EADs, the design and VR implementation of a few EAD prototypes, a preliminary design critique of the prototypes, and a discussion of the possible impact and future usage of external automotive displays.
Chapter
Holographic 3D (H3D) displays have the potential to enhance future car interiors and provide users with a new dimension of visual and interactive experience, offering a larger depth range than other state of the art 3D display technologies. In this work, a user-elicited gesture set for 3D interaction with non-driving related tasks was built and evaluated. As the H3D technology itself is still in development, mixed reality headsets (Hololens 1 and 2) were used to emulate a virtual H3D display. In a gesture-elicitation study, N = 20 participants proposed mid-air gestures for a set of 33 tasks (referents) displayed either within or outside of participants’ reach. The resulting set of most mentioned proposals was refined with a reverse-matching task, in which N = 21 participants matched referents to videos of elicited gestures. In a third evaluation step, usability and memorability characteristics of the user-elicited gesture set were compared to those of an expert-elicited alternative using a between-subjects design with N = 16 participants in each group. Results showed that while both sets can be learned and recalled comparably well, the user-elicited gesture set was associated with a higher gesture suitability and ease, a higher perceived intuitiveness and a lower perceived mental effort. Implications for future H3D in-car interfaces are discussed.
Article
Autonomous vehicles provide new input modalities to improve interaction with in-vehicle information systems. However, due to the road and driving conditions, the user input can be perturbed, resulting in reduced interaction quality. One challenge is assessing the vehicle motion effects on the interaction without an expensive high-fidelity simulator or a real vehicle. This work presents SwiVR-Car-Seat, a low-cost swivel seat to simulate vehicle motion using rotation. In an exploratory user study (N=18), participants sat in a virtual autonomous vehicle and performed interaction tasks using the input modalities touch, gesture, gaze, or speech. Results show that the simulation increased the perceived realism of vehicle motion in virtual reality and the feeling of presence. Task performance was not influenced uniformly across modalities; gesture and gaze were negatively affected while there was little impact on touch and speech. The findings can advise automotive user interface design to mitigate the adverse effects of vehicle motion on the interaction.
Thesis
Full-text available
In the near future, mixed traffic consisting of manual and autonomous vehicles (AVs) will be common. Autonomous vehicles with advanced technology offer opportunities for innovative designs and introduce communication challenges for vulnerable road users such as pedestrians and cyclists. Our goal is to explore the emerging new domain of interaction between different road users and autonomous vehicles in a future AV transportation ecosystem. This led us to conduct the thesis following these two themes: 1) understanding design opportunities for external automotive displays (EADs) of AVs; 2) exploring the design of interactions between vulnerable road users (VRUs) and AVs. In theme 1, our work extends contemporary research into visualizations and related applications for autonomous vehicles. Focusing on external car bodies as a design space we introduce a set of EADs. EADs show visualizations to share context and user-specific information and offer opportunities for interaction between users and AVs. We conducted a design study to explore design concepts for EADs to provide services to different road users: pedestrians, passengers, and drivers of other vehicles. Based on the design study, we prototyped four EADs in virtual reality (VR) to demonstrate the potential of our approach. This exploration contributes to our vision for EADs, a design critique of the prototypes, and a discussion of the possible impact and future usage of external automotive displays. In theme 2, we are interested in the ways pedestrians will interact with autonomous vehicles in the absence of non-verbal cues from the driver (such as eye movements, hand gestures, etc.). Crossing streets in these new situations could be more dangerous for VRUs without a proper communication medium. We examined a subset of this challenge with two groups of pedestrians: interaction between AVs and pedestrians with hearing aids (PHAs), and pedestrians in wheelchairs (PWs). First, we worked with hearing aid users as a preliminary exploration of this research. We conduct a co-design study with a co-designer with hearing impairment who has lived experience of wearing hearing aid enhancements. This study contributes several insights and design recommendations on how potential audio cues can be designed to enhance direct communications between PHAs and AVs. For the second part of our research, we designed interactions between pedestrians in wheelchairs and AVs. From an early exploration of potential interface designs through a design study with interaction designers, we prototyped different interfaces in VR. Then, we evaluated the implemented simulations during a co-design study with a powered wheelchair user following inclusive design practices. We identify and reflect on interface design ideas that can help PWs make safe crossing decisions at intersections and discuss design insights for implementing different inclusive interfaces.
Article
Full-text available
While augmented reality (AR) interfaces have been researched extensively over the last decades, studies on their application in vehicles have only recently advanced. In this paper, we systematically review 12 years of AR research in the context of automated driving (AD), from 2009 to 2020. Due to the multitude of possibilities for studies with regard to AR technology, at present, the pool of findings is heterogeneous and non-transparent. From a review of the literature we identified N = 156 papers with the goal to analyze the status quo of existing AR studies in AD, and to classify the related literature into application areas. We provide insights into the utilization of AR technology used at different levels of vehicle automation, and for different users (drivers, passengers, pedestrians) and tasks. Results show that most studies focused on safety aspects, driving assistance, and designing non-driving related tasks. AR navigation, trust in automated vehicles (AVs), and interaction experiences also marked a significant portion of the published papers; however, a wide range of different parameters was investigated by researchers. Among other things, we find that there is a growing trend toward simulating AR content within virtual driving simulators. We conclude with a discussion of open challenges and give recommendations for future research in automated driving at the AR side of the reality-virtuality continuum.
Article
Full-text available
While virtual reality (VR) interfaces have been researched extensively over the last decades, studies on their application in vehicles have only recently advanced. In this paper, we systematically review 12 years of VR research in the context of automated driving (AD), from 2009 to 2020. Due to the multitude of possibilities for studies with regard to VR technology, at present, the pool of findings is heterogeneous and non-transparent. We investigated N = 176 scientific papers of relevant journals and conferences with the goal to analyze the status quo of existing VR studies in AD, and to classify the related literature into application areas. We provide insights into the utilization of VR technology applicable at specific levels of vehicle automation and for different users (drivers, passengers, pedestrians) and tasks. Results show that most studies focused on designing automotive experiences in VR, safety aspects, and vulnerable road users. Trust, simulator and motion sickness, and external human-machine interfaces (eHMIs) also marked a significant portion of the published papers; however, a wide range of different parameters was investigated by researchers. Finally, we discuss a set of open challenges and give recommendations for future research in automated driving at the VR side of the reality-virtuality continuum.
Article
We conduct an examination of the preferences of drivers and passengers alike for in-vehicle interactions with a multistudy approach consisting of (1) a targeted literature survey of applications and user interfaces designed to support interactions with in-vehicle controls and systems based on gesture and voice input; (2) a large-scale survey (N = 160 participants) to understand drivers and passengers’ preferences for driving and traveling by car; and (3) an end-user elicitation study (N = 40 drivers and passengers) to collect preferences for gesture and voice input commands for in-vehicle interaction. We analyze and discuss the gesture and voice commands proposed by our participants and describe their characteristics, such as production times for gesture input and the vocabulary size of voice commands.
Conference Paper
Full-text available
Spatially distributed warning signals are able to increase the effectiveness of Advanced Driver Assistance Systems. They provide better performance regarding attention shifts towards critical objects and thus lower a driver's reaction time and increase traffic safety. The question of which modality is best used, however, remains open. We present three driving simulator studies (30 participants each) with spatially distributed warnings, of which two focused on spatial-visual and auditory warnings respectively. The third study, which combined the most promising approaches from the previous studies, presents a multimodal spatial warning system. All studies included a baseline without secondary tasks and warnings. Afterwards, subjects were confronted with multiple (30+) critical objects while performing a secondary task. The chronological order of warnings was randomly mixed between spatial, non-spatial, and no warning during the first two studies. Data from reaction times, eye tracking, and questionnaires were collected. Results show that spatial-visual directed warnings are more effective than non-spatial warnings at large distances, but subjects have difficulties detecting objects in peripheral regions when they are distracted. While auditory spatial warnings are not as efficient as the literature implies, they still performed best in this particular situation. Results of the multimodal warning study, a discussion, and implications for Advanced Driver Assistance Systems (ADAS) conclude the paper.
Conference Paper
Full-text available
Interactive digital signage is increasingly deployed in urban environments, from airports and train stations to cinemas and shopping malls, and its integration into public spaces introduces new possibilities in multimedia presentation. In this paper, we explore the impact of mid-air haptic feedback on user engagement during gesture-based interactions with digital posters. To that end, a user study with seventeen participants was undertaken with two independent variables: interactivity (high/low) and mid-air haptic cues (on/off), while user engagement and emotional affect were measured with respect to various metrics. In this first attempt to understand the significance of mid-air haptic interactions for digital signage, we found increased user engagement levels, comparable to, if not greater than, those achieved by content gamification. In particular, our analysis suggests that mid-air haptic feedback significantly improved usability and aesthetic appeal in comparison to digital signage without haptic feedback. Similarly, a higher level of gamification was also found to boost user engagement and helped to offer more compelling experiences with digital signage.
Conference Paper
Full-text available
When operating a conditionally automated vehicle, humans occasionally have to take over control. If the driver is out of the loop, a certain amount of time is necessary to gain situation awareness. This work evaluates the potential of smart stereoscopic 3D (S3D) dashboard visualizations to support situation assessment and, by that, take-over performance. In a driving simulator study with a 4×2 between-within design, we presented smart take-over requests (TORs) showing the current traffic situation at various locations in 2D and S3D to 52 participants performing the n-back task. Driving and glance behavior indicate that participants considered the smart TORs and could thus perform safer take-overs. Warnings in S3D and warnings appearing at the participant's focus of attention, as well as in the instrument cluster, performed best.
Conference Paper
Full-text available
Technology acceptance is a critical factor influencing the adoption of automated vehicles. Consequently, manufacturers feel obliged to design automated driving systems in a way that accounts for the negative effects of automation on user experience. Recent publications confirm that full automation will potentially fall short of satisfying important user needs. To counteract this, the adoption of Intelligent User Interfaces (IUIs) could play an important role. In this work, we focus on evaluating the impact of scenario type (represented by variations of road type and traffic volume) on the fulfillment of psychological needs. Results of a qualitative study (N=30) show that the scenario has a high impact on how users perceive the automation. Based on this, we discuss the potential of adaptive IUIs in the context of automated driving. In detail, we look at trust, acceptance, and user experience, and their impact on IUIs in different driving situations.
Conference Paper
Full-text available
With increasing automation, occupants of fully autonomous vehicles are likely to be completely disengaged from the driving task. However, even with no driving involved, there are still activities that will require interfaces between the vehicle and passengers. This study evaluated different configurations of screens providing operational-related information to occupants for tracking the progress of journeys. Surveys and interviews were used to measure trust, usability, workload and experience after users were driven by an autonomous low speed pod. Results showed that participants want to monitor the state of the vehicle and see details about the ride, including a map of the route and related information. There was a preference for this information to be displayed via an onboard touchscreen device combined with an overhead letterbox display versus a smartphone-based interface. This paper provides recommendations for the design of devices with the potential to improve the user interaction with future autonomous vehicles.
Conference Paper
Full-text available
Predictive touch is an HMI technology that relies on inferring, early in the pointing gesture, the interface item a driver or passenger intends to select on an in-vehicle display [1, 2]. It simplifies and expedites the selection task, thereby reducing the associated interaction effort. This paper presents two studies on drivers using predictive touch and focuses on evaluating the best means to facilitate selecting the intended on-display item. These include immediate mid-air selection, with the system autonomously selecting the predicted interface component; hover/dwell; and drivers pressing a button on the steering wheel to execute the selection action. These options were arrived at in an expert workshop study with twelve participants. The results of the subsequent evaluation study with twenty-four participants demonstrate, using quantitative and qualitative measures, that immediate mid-air selection is a promising assistive scheme, where drivers need not touch a physical surface to select interface components, thus enabling touch-free control.
Conference Paper
Full-text available
We foresee conversational driver assistants playing a crucial role in automated driving interactions. In this video we present a study of user interactions with an in-vehicle agent, "Theo", under SAE Level 4 automated driving. We use a remote Wizard-of-Oz setup where participants, sitting in a driving simulator, experience real-life video footage transmitted from a vehicle in the neighborhood and interact with Theo to instruct the vehicle where to go. We configured Theo to present 3 levels of conversational abilities (terse, verbose and helpful). We show the results of 9 participants tasked to negotiate destinations and route changes. Voice interaction was reported as preferred means of communication with Theo. There was a clear preference for talkative assistants which were perceived more responsive and intelligent. We highlight challenging interactions for users such as vehicle maneuvers in parking areas and specifying drop off points and interesting associations between the agent performance and the automated vehicle abilities.
Conference Paper
Full-text available
The use of ultrasound haptic feedback for mid-air gestures in cars has been proposed to provide a sense of control over the user's intended actions and to add touch to a touchless interaction. However, the impact of ultrasound feedback to the gesturing hand regarding lane deviation, eyes-off-the-road time (EORT) and perceived mental demand has not yet been measured. This paper investigates the impact of uni- and multimodal presentation of ultrasound feedback on the primary driving task and the secondary gesturing task in a simulated driving environment. The multimodal combinations of ultrasound included visual, auditory, and peripheral lights. We found that ultrasound feedback presented unimodally and bimodally resulted in significantly less EORT compared to visual feedback. Our results suggest that multimodal ultrasound feedback for mid-air interaction decreases EORT whilst not compromising driving performance nor mental demand and thus can increase safety while driving.
Conference Paper
Full-text available
Automated driving eliminates the permanent need for vehicle control and allows to engage in non-driving related tasks. As literature identifies office work as one potential activity, we estimate that advanced input devices will shortly appear in automated vehicles. To address this matter, we mounted a keyboard on the steering wheel, aiming to provide an exemplary safe and productive working environment. In a driving simulator study (n=20), we evaluated two feedback mechanisms (heads-up augmentation on a windshield, conventional heads-down display) and assessed both typing effort and driving performance in handover situations. Results indicate that the windshield alternative positively influences handovers, while heads-down feedback results in better typing performance. Text difficulty (two levels) showed no significant impact on handover time. We conclude that for a widespread acceptance of specialized interfaces for automated vehicles, a balance between safety aspects and productivity must be found in order to attract customers while retaining driving safety.
Conference Paper
Full-text available
Employing a 2x2 within-subjects design, forty-eight experienced drivers (28 male, 20 female) undertook repeated button selection and ‘slider-bar’ manipulation tasks, to compare a traditional touchscreen with a virtual mid-air gesture interface in a driving simulator. Both interfaces were tested with and without haptic feedback generated using ultrasound. Results show that combining gestures with mid-air haptic feedback was particularly promising, reducing the number of long glances and mean off-road glance time associated with the in-vehicle tasks. For slider-bar tasks in particular, gestures-with-haptics was also associated with the shortest interaction times, highest number of correct responses and least ‘overshoots’, and was favoured by participants. In contrast, for button-selection tasks, the touchscreen was most popular, enabling the highest accuracy and quickest responses, particularly when combined with haptic feedback to guide interactions, although this also increased visual demand. The study shows clear potential for gestures with mid-air ultrasonic haptic feedback in the automotive domain.
Conference Paper
Full-text available
Six experienced drivers each undertook five 30-minute journeys (portrayed as ‘daily commutes’ i.e. one on each of five consecutive weekdays) in a medium-fidelity driving-simulator engineered to mimic a highly-automated vehicle. Participants were encouraged to act as they might in such a vehicle by bringing with them their own objects/devices to use. During periods of automation, participants were quickly engrossed by their chosen activities, many of which had strong visual, manual and cognitive elements, and required postural adaptation (e.g. moving/reclining the driver’s seat); the steering wheel was typically used to support objects/devices. Consistently high subjective ratings of trust suggest that drivers were unperturbed by the novelty of highly-automated driving and generally willing to allow the vehicle to assume control; ratings of situational awareness varied considerably indicating mixed opinions. Qualitative results are discussed in the context of the re-design of vehicles to enable safe and comfortable engagement with secondary activities during high-automation.
Conference Paper
Full-text available
This paper reports on the use of in-car 3D displays in a real-world driving scenario. Today, stereoscopic displays are becoming ubiquitous in many domains such as mobile phones or TVs. Instead of using 3D for entertainment, we explore the 3D effect as a means to spatially structure user interface (UI) elements. To evaluate potentials and drawbacks of in-car 3D displays, we mounted an autostereoscopic display as instrument cluster in a vehicle and conducted a real-world driving study with 15 experts in automotive UI design. The results show that the 3D effect increases the perceived quality of the UI and enhances the presentation of spatial information (e.g., navigation cues) compared to 2D. However, the effect should be used well-considered to avoid spatial clutter, which can increase the system's complexity.
Conference Paper
Full-text available
Today, the vast majority of research on novel automotive user interface technologies is conducted in the lab, often using driving simulation. While such studies are important in early stages of the design process, we argue that ultimately studies need to be conducted in the real-world in order to investigate all aspects crucial for adoption of novel user interface technologies in commercial vehicles. In this paper, we present a case study that investigates introducing autostereoscopic 3D dashboards into cars. We report on studying this novel technology in the real world, validating and extending findings of prior simulator studies. Furthermore, we provide guidelines for practitioners and researchers to design and conduct real-world studies that minimize the risk for participants while at the same time yielding ecologically valid findings.
Conference Paper
Full-text available
In this paper, we investigate user performance for stereoscopic automotive user interfaces (UI). Our work is motivated by the fact that stereoscopic displays are about to find their way into cars. Such a safety-critical application area creates an inherent need to understand how the use of stereoscopic 3D visualizations impacts user performance. We conducted a comprehensive study with 56 participants to investigate the impact of a 3D instrument cluster (IC) on primary and secondary task performance. We investigated different visualizations (2D and 3D) and complexities (low vs. high amount of details) of the IC as well as two 3D display technologies (shutter vs. autostereoscopy). As secondary tasks the participants judged spatial relations between UI elements (expected events) and reacted on pop-up instructions (unexpected events) in the IC. The results show that stereoscopy increases accuracy for expected events, decreases task completion times for unexpected tasks, and increases the attractiveness of the interface. Furthermore, we found a significant influence of the used technology, indicating that secondary task performance improves for shutter displays.
Conference Paper
Full-text available
Using an interactive display, such as a touchscreen, entails undertaking a pointing gesture and dedicating a considerable amount of attention to execute a selection task. In this paper, we give an overview of the concept of intent-aware interactive displays that can determine, early in the free hand pointing gesture, the icon/item the user intends to select on the touchscreen. This can notably reduce the pointing time, aid implementing effective selection facilitation routines and enhance the overall system accuracy as well as the user experience. Intent-aware displays employ a gesture tracking sensor in conjunction with novel probabilistic intent inference algorithms to predict the endpoint of a free hand pointing gesture. Real 3D pointing data is used to illustrate the usefulness and effectiveness of the proposed approach.
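To make the idea of endpoint prediction concrete, the sketch below is a deliberately simplified stand-in for the probabilistic intent-inference algorithms referenced above (not the authors' implementation): it extrapolates a partial fingertip trajectory and turns distances to on-screen items into a posterior over the intended target. The parameters steps_ahead and sigma are illustrative assumptions.

```python
# Hedged sketch: constant-velocity endpoint extrapolation plus a Gaussian
# likelihood over candidate on-screen items (uniform prior).
import numpy as np

def predict_intent(trajectory, items, steps_ahead=10, sigma=60.0):
    """trajectory: (T, 2) fingertip positions projected onto the display plane
    (pixels); items: (K, 2) centres of the selectable on-screen items."""
    traj = np.asarray(trajectory, dtype=float)
    items = np.asarray(items, dtype=float)

    # Constant-velocity extrapolation of the partial gesture
    endpoint = traj[-1] + (traj[-1] - traj[-2]) * steps_ahead

    # Gaussian likelihood of each item around the predicted endpoint,
    # normalised into a posterior over items
    d2 = np.sum((items - endpoint) ** 2, axis=1)
    likelihood = np.exp(-d2 / (2 * sigma ** 2))
    return likelihood / likelihood.sum()

# Toy usage: three icons in a row; the gesture heads towards the middle one
icons = [(100, 300), (400, 300), (700, 300)]
path = [(380, 80), (384, 102), (388, 124), (392, 146)]
print(predict_intent(path, icons).round(3))
```

In practice, such a posterior would be updated as the gesture unfolds, allowing the interface to commit to a selection before the finger reaches the display.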
Conference Paper
Full-text available
With the proliferation of touchscreen technology, interactive displays are becoming an integrated part of the modern vehicle environment. However, due to road and driving conditions, the user input on such displays can be perturbed, resulting in erroneous selections. This paper describes an evaluative study of the usability and input performance of in-vehicle touchscreens. The analysis is based on data collected in instrumented cars driven under various road/driving conditions. We assess the frequency of failed selection attempts, the distances by which users miss the intended on-screen target, and the durations of the free-hand pointing gestures undertaken to accomplish the selection tasks. It is shown that road/driving conditions can notably undermine the usability of an interactive display when the user input is perturbed, e.g. due to the experienced vibrations and lateral accelerations in the vehicle. The distance between the location of an erroneous on-screen selection and the intended endpoint on the display is closely related to the level of in-vehicle noise present. The conducted study can inform graphical user interface design for the vehicle environment, where the user's free-hand pointing gestures can be subject to varying levels of perturbation.
Article
Full-text available
A group of researchers has presented ways to reduce legacy bias in elicitation studies. These researchers also note that it can be helpful for users and developers to take advantage of this bias wherever possible, since legacy bias makes the transition toward new forms of interaction smoother. The approach to adapting elicitation studies to reduce this bias includes three techniques: production, priming, and partners. Production requires users to produce multiple interaction proposals for each referent, which may force them to move beyond simple, legacy-inspired techniques to ones that require more reflection. Priming aims to give users an idea of the possibilities they have when generating gestures. Partners, the third technique, suggests letting users work together with others to leverage their ideas.
Conference Paper
Full-text available
Recently there has been an increase in research towards using hand gestures for interaction in the field of Augmented Reality (AR). These works have primarily focused on researcher designed gestures, while little is known about user preference and behavior for gestures in AR. In this paper, we present our guessability study for hand gestures in AR in which 800 gestures were elicited for 40 selected tasks from 20 participants. Using the agreement found among gestures, a user-defined gesture set was created to guide designers to achieve consistent user-centered gestures in AR. Wobbrock’s surface taxonomy has been extended to cover dimensionalities in AR and with it, characteristics of collected gestures have been derived. Common motifs which arose from the empirical findings were applied to obtain a better understanding of users’ thought and behavior. This work aims to lead to consistent user-centered designed gestures in AR.
Article
Full-text available
This study explores, in the context of semi-autonomous driving, how the content of the verbalized message accompanying the car’s autonomous action affects the driver’s attitude and safety performance. Using a driving simulator with an auto-braking function, we tested different messages that provided advance explanation of the car’s imminent autonomous action. Messages providing only “how” information describing actions (e.g., “The car is braking”) led to poor driving performance, whereas “why” information describing reasoning for actions (e.g., “Obstacle ahead”) was preferred by drivers and led to better driving performance. Providing both “how and why” resulted in the safest driving performance but increased negative feelings in drivers. These results suggest that, to increase overall safety, car makers need to attend not only to the design of autonomous actions but also to the right way to explain these actions to the drivers.
Article
Full-text available
This paper compares the user experience of three novel concept designs for 3D-based car dashboards. Our work is motivated by the fact that analogue dashboards are currently being replaced by their digital counterparts. At the same time, auto-stereoscopic displays enter the market, allowing the quality of novel dashboards to be increased, both with regard to the perceived quality and in supporting the driving task. Since no guidelines or principles exist for the design of digital 3D dashboards, we take an initial step in designing and evaluating such interfaces. In a study with 12 participants we were able to show that stereoscopic 3D increases the perceived quality of the display while motion parallax leads to a rather disturbing experience.
Conference Paper
Full-text available
Recent developments in touch and display technologies have laid the groundwork to combine touch-sensitive display systems with stereoscopic three-dimensional (3D) display. Although this combination provides a compelling user experience, interaction with objects stereoscopically displayed in front of the screen poses some fundamental challenges: traditionally, touch-sensitive surfaces capture only direct contacts, such that the user has to penetrate the visually perceived object to touch the 2D surface behind the object. Conversely, recent technologies support capturing finger positions in front of the display, enabling users to interact with intangible objects in mid-air 3D space. In this paper we perform a comparison between such 2D touch and 3D mid-air interactions in a Fitts' Law experiment for objects with varying stereoscopic parallax. The results show that the 2D touch technique is more efficient close to the screen, whereas for targets further away from the screen, 3D selection outperforms 2D touch. Based on the results, we present implications for the design and development of future touch-sensitive interfaces for stereoscopic displays.
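For readers unfamiliar with the underlying metric, the snippet below illustrates the standard Shannon-formulation Fitts' law quantities such a comparison rests on (index of difficulty and throughput); the effective-width corrections actually used in the experiment may differ.

```python
# Illustrative Fitts'-law quantities (Shannon formulation), not the paper's code.
import math

def index_of_difficulty(distance, width):
    """ID = log2(D / W + 1), in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Bits per second for one distance/width condition."""
    return index_of_difficulty(distance, width) / movement_time

# Toy comparison: same target geometry, different mean selection times (seconds)
D, W = 300.0, 40.0            # target distance and width in millimetres
print(f"ID         = {index_of_difficulty(D, W):.2f} bits")
print(f"touch   TP = {throughput(D, W, 0.9):.2f} bit/s")
print(f"mid-air TP = {throughput(D, W, 1.3):.2f} bit/s")
```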
Conference Paper
Full-text available
Recently there has been an increase in research of hand gestures for interaction in the area of Augmented Reality (AR). However this research has focused on developer designed gestures, and little is known about user preference and behavior for gestures in AR. In this paper, we present the results of a guessability study focused on hand gestures in AR. A total of 800 gestures have been elicited for 40 selected tasks from 20 participants. Using the agreement found among gestures, a user-defined gesture set was created to guide designers to achieve consistent user-centered gestures in AR.
Article
Full-text available
To answer the question: “what is 3D good for?” we reviewed the body of literature concerning the performance implications of stereoscopic 3D (S3D) displays versus non-stereo (2D or monoscopic) displays. We summarized results of over 160 publications describing over 180 experiments spanning 51 years of research in various fields including human factors psychology/engineering, human–computer interaction, vision science, visualization, and medicine. Publications were included if they described at least one task with a performance-based experimental evaluation of an S3D display versus a non-stereo display under comparable viewing conditions. We classified each study according to the experimental task(s) of primary interest: (a) judgments of positions and/or distances; (b) finding, identifying, or classifying objects; (c) spatial manipulations of real or virtual objects; (d) navigation; (e) spatial understanding, memory, or recall and (f) learning, training, or planning. We found that S3D display viewing improved performance over traditional non-stereo (2D) displays in 60% of the reported experiments. In 15% of the experiments, S3D either showed a marginal benefit or the results were mixed or unclear. In 25% of experiments, S3D displays offered no benefit over non-stereo 2D viewing (and in some rare cases, harmed performance). From this review, stereoscopic 3D displays were found to be most useful for tasks involving the manipulation of objects and for finding/identifying/classifying objects or imagery. We examine instances where S3D did not support superior task performance. We discuss the implications of our findings with regard to various fields of research concerning stereoscopic displays within the context of the investigated tasks.
Conference Paper
Full-text available
Driven by technological advancements, gesture interfaces have recently found their way into vehicular prototypes of various kinds. Unfortunately, their application is less than perfect, and detailed information about preferred gesture execution regions, spatial extent, and time behavior is not available yet. Providing car (interior) manufacturers with gesture characteristics would allow them to design future in-vehicle concepts in a way that does not interfere with gestural interaction. To tackle the problem, this research serves as preliminary work toward a later standardization of the diverse properties of gestures and gesture classes, similar to what is already standardized in norms such as ISO 3958/4040 for the placement and reachability of traditional controls and indicators. We have set up a real driving experiment recording trajectories and time behavior of gestures related to car and media control tasks. Data evaluation reveals that most of the subjects perform gestures in the same region (bounded by a "triangle" of steering wheel, rear mirror, and gearshift) and with similar spatial extent (on average below 2 sec.). The generated density plots can be further used for an initial discussion about gesture execution in the passenger compartment. The final aim is to propose a new standard on permitted gesture properties (time, space) in the car.
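As a rough illustration of how such density plots can be derived from recorded trajectories (an assumption about the analysis approach, not the study's code), hand positions can be pooled into a 2D occupancy histogram over a cabin plane, whose peak cell marks the preferred gesture execution region:

```python
# Hedged sketch: occupancy histogram of hand-trajectory samples on a cabin plane.
import numpy as np

def gesture_density(trajectories, bins=20, extent=((-0.5, 0.5), (0.0, 1.0))):
    """trajectories: list of (T, 2) arrays of hand positions (metres) projected
    onto a cabin plane; returns a normalised occupancy grid plus bin edges."""
    points = np.vstack(trajectories)
    hist, xedges, yedges = np.histogram2d(points[:, 0], points[:, 1],
                                          bins=bins, range=extent)
    return hist / hist.sum(), xedges, yedges

# Toy usage: two short trajectories recorded between wheel and gearshift
t1 = np.array([[0.05, 0.40], [0.08, 0.42], [0.10, 0.45]])
t2 = np.array([[0.07, 0.41], [0.09, 0.44], [0.12, 0.47]])
density, xe, ye = gesture_density([t1, t2])
print(np.unravel_index(density.argmax(), density.shape), density.max())
```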
Article
Full-text available
Visual discomfort has been the subject of considerable research in relation to stereoscopic and autostereoscopic displays. In this paper, the importance of various causes and aspects of visual discomfort is clarified. When disparity values do not surpass a limit of 1 °, which still provides sufficient range to allow satisfactory depth perception in stereoscopic television, classical determinants such as excessive binocular parallax and accommodation-vergence conflict appear to be of minor importance. Visual discomfort, however, may still occur within this limit and we believe the following factors to be the most pertinent in contributing to this: (1) temporally changing demand of accommodation-vergence linkage, e.g., by fast motion in depth; (2) three-dimensional artifacts resulting from insufficient depth information in the incoming data signal yielding spatial and temporal inconsistencies; and (3) unnatural blur. In order to adequately characterize and understand visual discomfort, multiple types of measurements, both objective and subjective, are required.
Article
Full-text available
In modern cars users need to interact with safety and comfort functions, driver assistance systems, and infotainment devices. Basic requirements include the perception of the current status and of information items as well as the control of functions. Handling that myriad of information while driving requires an appropriate interaction design, structure, and visualization of the data. This paper investigates potentials and limitations of stereoscopic 3D for visualizing an in-vehicle information system. We developed a spatial in-car visualization concept that exploits three dimensions for the system's output. Based on a prototype that implements the central functionality of our concept, we evaluate the 3D representation. A laboratory study with 32 users indicates that stereoscopic 3D is the better choice as it improves the user experience, increases the attractiveness, and helps the user in recognizing the current state of the system. The study shows no significant differences between non-stereoscopic and stereoscopic representations in the users' workload. This indicates that stereoscopic visualizations have no negative impact on the primary driving task.
Conference Paper
Full-text available
Gesture-based mobile interfaces require users to change the way they use technology in public settings. Since mobile phones are part of our public appearance, designers must integrate gestures that users perceive as acceptable for public use. This topic has received little attention in the literature so far. The studies described in this paper begin to look at the social acceptability of a set of gestures with respect to location and audience in order to investigate possible ways of measuring social acceptability. The results of the initial survey showed that location and audience had a significant impact on a user's willingness to perform gestures. These results were further examined through a user study where participants were asked to perform gestures in different settings (including a busy street) over repeated trials. The results of this work provide gesture design recommendations as well as social acceptability evaluation guidelines.
Conference Paper
Full-text available
Guessability is essential for symbolic input, in which users enter gestures or keywords to indicate characters or commands, or rely on labels or icons to access features. We present a unified approach to both maximizing and evaluating the guessability of symbolic input. This approach can be used by anyone wishing to design a symbol set with high guessability, or to evaluate the guessability of an existing symbol set. We also present formulae for quantifying guessability and agreement among guesses. An example is offered in which the guessability of the EdgeWrite unistroke alphabet was improved by users from 51.0% to 80.1% without designer intervention. The original and improved alphabets were then tested for their immediate usability with the procedure used by MacKenzie and Zhang (1997). Users entered the original alphabet with 78.8% and 90.2% accuracy after 1 and 5 minutes of learning, respectively. The improved alphabet bettered this to 81.6% and 94.2%. These improved results were competitive with prior results for Graffiti, which were 81.8% and 95.8% for the same measures.
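The agreement measure associated with this methodology scores each referent by the squared relative sizes of the groups of identical proposals and averages over referents. A minimal sketch of that computation follows (notation in the paper may differ slightly):

```python
# Hedged sketch of the agreement score: for each referent, sum (|group|/|proposals|)^2
# over groups of identical proposals, then average over referents.
from collections import Counter

def agreement(proposals_per_referent):
    """proposals_per_referent: dict mapping each referent to the list of
    symbols/gestures proposed by the participants for that referent."""
    scores = []
    for proposals in proposals_per_referent.values():
        groups = Counter(proposals)              # groups of identical proposals
        scores.append(sum((size / len(proposals)) ** 2 for size in groups.values()))
    return sum(scores) / len(scores)

# Toy usage with two referents and four participants each
print(agreement({
    "volume up":  ["swipe-up", "swipe-up", "rotate-cw", "swipe-up"],
    "next track": ["swipe-right", "swipe-right", "swipe-left", "tap"],
}))   # -> 0.5
```

Perfect consensus on every referent yields 1.0, while fully idiosyncratic proposals push the score toward 1/n for n participants.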
Conference Paper
Full-text available
In this paper, we explore the challenges in applying and investigate methodologies to improve direct-touch interaction on intangible displays. Direct-touch interaction simplifies object manipulation, because it combines the input and display into a single integrated interface. While traditional tangible display-based direct-touch technology is commonplace, similar direct-touch interaction within an intangible display paradigm presents many challenges. Given the lack of tactile feedback, direct-touch interaction on an intangible display may show poor performance even on the simplest of target acquisition tasks. In order to study this problem, we have created a prototype of an intangible display. In the initial study, we collected user discrepancy data corresponding to the interpretation of 3D location of targets shown on our intangible display. The result showed that participants performed poorly in determining the z-coordinate of the targets and were imprecise in their execution of screen touches within the system. Thirty percent of positioning operations showed errors larger than 30mm from the actual surface. This finding triggered our interest to design a second study, in which we quantified task time in the presence of visual and audio feedback. The pseudo-shadow visual feedback was shown to be helpful both in improving user performance and satisfaction.
Article
Multimodal interaction techniques using gesture and speech offer architects and engineers a natural way to create 3D CAD (computer-aided design) models for conceptual design. Gestures and speech for such interfaces must be based on empirically grounded knowledge instead of arbitrary vocabularies as employed in extant literature. We conducted an experiment with 41 participants from architecture and engineering backgrounds to elicit preferences of gestures and speech for 3D CAD modeling referents such as primitives, manipulations, and navigation. In this paper, we present results from the thematic analysis of the elicited gestures. We present a compilation of gestures, which were evaluated by experts to be suitable for the articulation of 3D CAD modeling referents. We also present a set of speech command terms elicited from participants. Finally, we provide recommendations for the design of a gesture and speech based CAD modeling system for conceptual design.
Chapter
The Problem. The consequences and implications of simulator sickness for the validity of simulation can be severe if not controlled and taken into account (Casali, 1986). Many of today's driving simulators are used to perform research, training, or proof of design activities. A prerequisite to generalizing the results found in research conducted in a simulator is an understanding of the validity of the resulting experience. Without question, simulator sickness is a factor that can affect the validity of research simulators. Given the potential consequences of simulator sickness, it is difficult to assess the value of the results obtained from a simulator study known to have significant sickness problems.
Role of Driving Simulators. There are alternatives to driving simulators for studying most, if not all, issues. However, these alternatives are often unsafe, do not provide a well-controlled environment, and require large sums of money to implement. Thus, driving simulators are necessary and the associated issues of simulator sickness need to be addressed.
Key Results of Driving Simulator Studies. Simulator sickness can affect a driver's performance in a variety of negative ways due to inappropriate behaviors, loss of motivation, avoidance of tasks that are found disturbing, distraction from normal attention allocation processes, and a preoccupation with the fact that something is not quite right. On the positive side, simulator selection, participant screening, scenario design, and control of the environment can all reduce the incidence of simulator sickness.
Scenarios and Dependent Variables. Examples of the sorts of scenarios that lead to extremes of simulator sickness are discussed. Additionally, the various measures that have been used against simulator sickness are highlighted, including some with predictive validity.
Platform Specificity and Equipment Limitations. Simulator sickness appears to be most extreme in fully immersive environments and when head-mounted displays are used. A motion base does not necessarily reduce simulator sickness symptoms.
Conference Paper
Mid-air pointing gestures enable drivers to interact with a wide range of vehicle functions without requiring them to learn a specific set of gestures. Sufficient pointing accuracy is needed so that targeted elements can be correctly identified. However, people make relatively large pointing errors, especially in demanding situations such as driving a car. Eye gaze provides additional information about the driver's focus of attention that can be used to compensate for imprecise pointing. We present a practical implementation of an algorithm that integrates gaze data in order to increase the accuracy of pointing gestures. A user experiment with 91 participants showed that our approach led to an overall increase in pointing accuracy. However, the benefits depended on the participants' initial gesture performance and on the position of the target elements. The results indicate a great potential to support gesture accuracy, but also the need for a more sophisticated fusion algorithm.
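The abstract does not describe the fusion algorithm itself, so the following is only a hypothetical sketch of one simple strategy: a distance-gated weighted blend of the pointing estimate and the gaze point on the dashboard plane, followed by nearest-target selection. All names and parameter values are illustrative assumptions, not the authors' implementation.

import math

def fuse_point_and_gaze(point_xy, gaze_xy, gaze_weight=0.5, gate_m=0.20):
    # Blend the pointing estimate towards the gaze point, but only when the gaze
    # falls within gate_m metres of the pointing estimate (both given as (x, y)
    # coordinates on the dashboard plane, in metres).
    dist = math.hypot(point_xy[0] - gaze_xy[0], point_xy[1] - gaze_xy[1])
    if dist > gate_m:
        return point_xy  # gaze is elsewhere, so trust the pointing gesture alone
    w = gaze_weight
    return ((1 - w) * point_xy[0] + w * gaze_xy[0],
            (1 - w) * point_xy[1] + w * gaze_xy[1])

def select_target(fused_xy, targets):
    # targets: mapping of element id -> (x, y) centre on the dashboard plane
    return min(targets, key=lambda t: math.hypot(targets[t][0] - fused_xy[0],
                                                 targets[t][1] - fused_xy[1]))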
Conference Paper
Because gesture design for augmented reality (AR) remains idiosyncratic, people cannot necessarily use gestures learned in one AR application in another. To design discoverable gestures, we need to understand what gestures people expect to use. We explore how the scale of AR affects the gestures people expect to use to interact with 3D holograms. Using an elicitation study, we asked participants to generate gestures in response to holographic task referents, where we varied the scale of holograms from desktop-scale to room-scale objects. We found that the scale of objects and scenes in the AR experience moderates the generated gestures. Most gestures were informed by physical interaction, and when people interacted from a distance, they sought a good perspective on the target object before and during the interaction. These results suggest that gesture designers need to account for scale, and should not simply reuse gestures across different hologram sizes.
Article
Discovering gestures that gain consensus is a key goal of gesture elicitation. To this end, HCI research has developed statistical methods to reason about agreement. We review these methods and identify three major problems. First, we show that raw agreement rates disregard agreement that occurs by chance and do not reliably capture how participants distinguish among referents. Second, we explain why current recommendations on how to interpret agreement scores rely on problematic assumptions. Third, we demonstrate that significance tests for comparing agreement rates, either within or between participants, yield large Type I error rates (>40% for α =.05). As alternatives, we present agreement indices that are routinely used in inter-rater reliability studies. We discuss how to apply them to gesture elicitation studies. We also demonstrate how to use common resampling techniques to support statistical inference with interval estimates. We apply these methods to reanalyze and reinterpret the findings of four gesture elicitation studies.
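As a concrete illustration of a chance-corrected index of this kind, the sketch below computes Fleiss' kappa over a referents-by-categories count matrix, plus a percentile bootstrap interval. It is a minimal sketch under the assumption that every proposal has already been assigned to a gesture category; it resamples referents for brevity, glossing over the choice of resampling unit that the article treats with more care.

import numpy as np

def fleiss_kappa(counts):
    # counts[i, j]: number of participants whose proposal for referent i fell into
    # gesture category j; every row must sum to the same number of proposals n.
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                    # proposals per referent
    p_j = counts.sum(axis=0) / counts.sum()      # overall category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

def kappa_ci(counts, n_boot=2000, alpha=0.05, seed=1):
    # Percentile bootstrap interval, resampling referents (rows) with replacement.
    counts = np.asarray(counts, dtype=float)
    rng = np.random.default_rng(seed)
    stats = [fleiss_kappa(counts[rng.integers(0, len(counts), len(counts))])
             for _ in range(n_boot)]
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])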
Conference Paper
A mid-air gesture-based interface could provide a less cumbersome in-vehicle interface for a safer driving experience. Despite recent developments in gesture-driven technologies facilitating multi-touch and mid-air gestures, interface safety requirements as well as an evaluation of gesture characteristics and functions still need to be explored. This paper describes an optimization study on the previously developed GestDrive gesture vocabulary for in-vehicle secondary tasks. We investigate mid-air gestures and secondary tasks, their correlation, confusions, unintentional inputs, and consequential safety risks. Building upon a statistical analysis, the results provide an optimized taxonomy breakdown for a user-centered gestural interface design that considers user preferences, requirements, performance, and safety issues.
Chapter
Automated driving will be one of the most important trends of the next decade and will have a sustained impact on the automotive industry itself and the way vehicles are used in the future. Automated driving is particularly important for the commercial vehicle sector in view of the high mileages trucks accumulate as compared to passenger cars.
Conference Paper
The amount of available information and functionality in cars is increasing rapidly. Academia and industry are working tirelessly on technologies that integrate both into the automotive user experience while providing high usability and safety. A large stereoscopic 3D dashboard could offer binocular disparity as a depth cue and thereby support these efforts by allowing novel techniques for adaptive and situation-aware in-car user interfaces. However, the human ability to perceive stereoscopic 3D depends on several factors, among them the zone of comfort. In this paper, we present preliminary results of our design space exploration in the form of the zone of comfort. In a user study, we established a depth budget ranging from 23.8 ± 2.9 cm in front of the dashboard to 22.7 ± 5.5 cm behind it.
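Purely as a hypothetical illustration of how a rendering layer might enforce such a budget, the sketch below clamps an element's depth to the mean values reported above; the sign convention and the idea of hard clamping are assumptions, not taken from the paper.

# Mean comfort budget reported in the paper (standard deviations omitted here).
COMFORT_FRONT_CM = 23.8    # in front of the dashboard plane, towards the viewer
COMFORT_BEHIND_CM = 22.7   # behind the dashboard plane, away from the viewer

def clamp_to_comfort_zone(depth_cm):
    # depth_cm: signed distance of a virtual element from the dashboard plane,
    # positive towards the viewer, negative behind the dashboard.
    return max(-COMFORT_BEHIND_CM, min(COMFORT_FRONT_CM, depth_cm))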
Conference Paper
In-air gestures have become more prevalent in the vehicle cockpit in recent years. However, air gesture interfaces are still quite young and users have very little experience with such interactions. In the vehicle, ease of use relates directly to driver safety. Previous work has suggested that gesture sets created through participatory methods tend to be easier for people to grasp and use than designer-designed sets. In the present study, two novel participatory design activities -- an elicitation activity in which participants produced gestures, and an online survey in which they assessed the workload associated with those gestures -- were conducted to assess possible air gestures for control of in-vehicle menus. A recommended gesture set is presented alongside broader recommendations for vehicle gesture design.
Chapter
The long-awaited arrival of automated driving technology has the automotive industry perched on the precipice of radical change when it comes to the design of vehicle interiors and user experience. Recently, much thinking and many vehicle concepts have been devoted to demonstrating how vehicle interiors might change when vehicles reach full automation, where a human driver is neither required nor, in some cases, even allowed to control the vehicle. However, looking more near term across all global market segments, we will likely see an increasing number of vehicles with widely varying automation capabilities emerging simultaneously. Any system short of full automation will still require driver control in some set of situations, and some fully automated vehicles will still allow driver control when desired. While it is unlikely that the basic seating arrangement, steering wheel, and pedals will be radically altered in this emerging segment of partially to highly automated vehicles, it is quite clear that the overall user experience during automated driving will need to evolve. Drivers will not be content to hold the steering wheel and stare at the road waiting for what may be a very infrequent request to take over driving. The chapter presents the research conducted to develop the Valeo Mobius® Intuitive Driving solution for providing an embedded digital experience, even at lower levels of automation, while still promoting both shorter transition response times and better transition quality when emergency situations call for a transition from automated to manual control.
Conference Paper
In this paper, we investigate which non-driving-related activities drivers want to perform while driving highly or fully automated. Beyond the available advanced driving assistance functions, we expect that highly automated driving will soon be available in production vehicles. While many technological aspects have been investigated, it is not yet clear (a) which activities drivers want to perform once they no longer have to steer or monitor their car and (b) which of those will be feasible. In contrast to prior (survey-based) research, we investigate drivers' needs for such activities by employing a combination of a web survey, in-situ observations, and an in-situ survey. We also look at the specific requirements of the European/German market in contrast to prior research conducted mostly for English-speaking countries. The findings indicate that besides traditional activities (talking to passengers, listening to music), daydreaming, writing text messages, eating and drinking, browsing the Internet, and calling are most wanted for highly automated driving. This shows the potential for mobile and ubiquitous multimedia applications in the car.
Conference Paper
Hand gestures are a suitable interface medium for in-vehicle interfaces. They are intuitive and natural to perform, and less visually demanding while driving. This paper aims at analysing human gestures to define a preliminary gesture vocabulary for in-vehicle climate control using a driving simulator. We conducted a user-elicitation experiment with 22 participants performing two driving scenarios with different levels of cognitive load. The participants were filmed while performing natural gestures for manipulating the air conditioning inside the vehicle. Comparisons are drawn between the proposed approach of defining a vocabulary using 9 new gestures (GestDrive) and previously suggested methods. The outcomes demonstrate that GestDrive is successful in describing the employed gestures in detail.
Article
Gesture elicitation is emerging as a potential approach for designing user-centered interactions and has been applied to a wide variety of emerging interaction and sensing technologies, including touchscreens, depth cameras, styli, foot-operated UIs, multidisplay environments, mobile phones, multimodal gesture-and-speech interfaces, stroke alphabets, and above-surface interfaces. One advantage of gesture elicitation is that the technique is not limited to current sensing technologies: it enables interaction designers to focus on end users' desires as opposed to settling for what is technically convenient at the moment. At the same time, researchers are making efforts to reduce legacy bias, a known limitation of current elicitation methods, and many open challenges remain in updating elicitation methods to incorporate production, priming, and partner techniques.
Conference Paper
In this work, we address the process of agreement rate analysis for characterizing the level of consensus between participants' proposals elicited during guessability studies. Two new measures, the disagreement rate for referents and the coagreement rate between referents, are proposed to accompany the widely used agreement rate formula of Wobbrock et al. [37] when reporting participants' consensus for symbolic input. A statistical significance test for comparing the agreement rates of k >= 2 referents is presented in analogy with Cochran's success/failure Q test [5], for which we express the test statistic in terms of agreement and coagreement rates. We deliver a toolkit to assist practitioners in computing agreement, disagreement, and coagreement rates, and in running statistical tests for agreement rates at the p = .05, .01, and .001 levels of significance. We validate our theoretical development of agreement rate analysis against several previously published elicitation studies. For example, when we present the probability distribution function of the agreement rate measure, we also use it (1) to explain the magnitude of agreement rates previously reported in the literature, and (2) to propose qualitative interpretations for agreement rates, in analogy with Cohen's guidelines for effect sizes [6]. We also re-examine previously published elicitation data from the perspective of the agreement rate test statistic and highlight new findings on the effect of referents on agreement rates, unattainable prior to this work. We hope that our contributions will advance the current knowledge in agreement rate analysis, providing researchers and practitioners with new techniques and tools to help them understand user-elicited data at deeper levels of detail and sophistication.
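Both measures are pair-counting statistics over participants' proposals. A minimal sketch follows, assuming one group label per participant and referent, with labels aligned by participant index across referents; the coagreement computation follows the pairs-that-agree-on-both-referents reading and should be checked against the paper and the authors' toolkit.

from itertools import combinations

def agreement_rate(labels):
    # AR(r): fraction of participant pairs proposing the same gesture for referent r
    n = len(labels)
    agreeing = sum(1 for a, b in combinations(labels, 2) if a == b)
    return agreeing / (n * (n - 1) / 2)

def disagreement_rate(labels):
    return 1.0 - agreement_rate(labels)

def coagreement_rate(labels_r1, labels_r2):
    # Fraction of participant pairs that agree on both referents; the two label
    # lists must be aligned by participant index.
    n = len(labels_r1)
    both = sum(1 for i, j in combinations(range(n), 2)
               if labels_r1[i] == labels_r1[j] and labels_r2[i] == labels_r2[j])
    return both / (n * (n - 1) / 2)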
Article
Full-body gestures provide an alternative input for video games that is more natural and intuitive. However, full-body game gestures designed by developers may not always be the most suitable gestures available. A key challenge in full-body game gestural interfaces lies in how to design gestures such that they accommodate the intensive, dynamic nature of video games; for example, several gestures may need to be executed simultaneously using different body parts. This paper investigates suitable simultaneous full-body game gestures, with the aim of accommodating high interactivity during intense gameplay. Three user studies were conducted: first, to determine user preferences, a user-elicitation study was conducted in which participants were asked to define gestures for common game actions/commands; second, to identify suitable and alternative body parts, participants were asked to rate the suitability of each body part (one and two hands, one and two legs, head, eyes, and torso) for common game actions/commands; third, to explore the consensus on suitable simultaneous gestures, we proposed a novel choice-based elicitation approach in which participants were asked to mix and match gestures from a predefined list to produce their preferred simultaneous gestures. Our key findings include (i) user preferences of game gestures, (ii) a set of suitable and alternative body parts for common game actions/commands, (iii) a consensus set of simultaneous full-body game gestures that assist interaction in different interactive game situations, and (iv) generalized design guidelines for future full-body game interfaces. These results can assist designers and practitioners in developing more effective full-body game gestural interfaces or other highly interactive full-body gestural interfaces.
Article
Fueled by falling display hardware costs and rising demand, digital signage and pervasive displays are becoming ever more ubiquitous. Such systems have traditionally been used for advertising and information dissemination, with digital signage commonplace in shopping malls, airports and public spaces. While advertising and broadcasting announcements remain important applications, developments in sensing and interaction technologies are enabling entirely new classes of display applications that tailor content to the situation and audience of the display. As a result, signage systems are beginning to transition from simple broadcast systems to rich platforms for communication and interaction. In this lecture, we provide an introduction to this emerging field for researchers and practitioners interested in creating state-of-the-art pervasive display systems. We begin by describing the history of pervasive display research, providing illustrations of key systems, from pioneering work on supporting collaboration to contemporary systems designed for personalized information delivery. We then consider what the near future might hold for display networks -- describing a series of compelling applications that are being postulated for future display networks. Creating such systems raises a wide range of challenges and requires designers to make a series of important trade-offs. We dedicate four chapters to key aspects of pervasive display design: audience engagement, display interaction, system software, and system evaluation. These chapters provide an overview of current thinking in each area. Finally, we present a series of case studies of display systems and our concluding remarks.
Article
Simulator sickness (SS) in high-fidelity visual simulators is a byproduct of modern simulation technology. Although it involves symptoms similar to those of motion-induced sickness (MS), SS tends to be less severe, to be of lower incidence, and to originate from elements of the visual display and visuo-vestibular interaction atypical of conditions that induce MS. Most studies of SS to date index severity with some variant of the Pensacola Motion Sickness Questionnaire (MSQ). The MSQ has several deficiencies as an instrument for measuring SS. Some symptoms included in the scoring of MS are irrelevant for SS, and several are misleading. Also, the configural approach of the MSQ is not readily adaptable to computer administration and scoring. This article describes the development of a Simulator Sickness Questionnaire (SSQ), derived from the MSQ using a series of factor analyses, and illustrates its use in monitoring simulator performance with data from a computerized SSQ survey of 3,691 simulator hops. The database used for development included more than 1,100 MSQs, representing data from 10 Navy simulators. The SSQ provides straightforward computer or manual scoring, increased power to identify "problem" simulators, and improved diagnostic capability.
Conference Paper
Many surface computing prototypes have employed gestures created by system designers. Although such gestures are appropriate for early investigations, they are not necessarily reflective of user behavior. We present an approach to designing tabletop gestures that relies on eliciting gestures from non-technical users by first portraying the effect of a gesture, and then asking users to perform its cause. In all, 1080 gestures from 20 participants were logged, analyzed, and paired with think-aloud data for 27 commands performed with 1 and 2 hands. Our findings indicate that users rarely care about the number of fingers they employ, that one hand is preferred to two, that desktop idioms strongly influence users' mental models, and that some commands elicit little gestural agreement, suggesting the need for on-screen widgets. We also present a complete user-defined gesture set, quantitative agreement scores, implications for surface technology, and a taxonomy of surface gestures. Our results will help designers create better gesture sets informed by user behavior.
Conference Paper
We propose a new system that efficiently combines direct multitouch interaction with co-located 3D stereoscopic visualization. In our approach, users benefit from well-known 2D metaphors and widgets displayed on a monoscopic touchscreen, while visualizing occlusion-free 3D objects floating above the surface at an optically correct distance. Technically, a horizontal semi-transparent mirror is used to reflect 3D images produced by a stereoscopic screen, while the user's hand as well as a multitouch screen located below this mirror remain visible. By registering the 3D virtual space and the physical space, we produce a rich and unified workspace where users benefit simultaneously from the advantages of both direct and indirect interaction, and from 2D and 3D visualizations. A pilot usability study shows that this combination of technology provides a good user experience.
Article
The Reason and Brand Motion Sickness Susceptibility Questionnaire (MSSQ) has remained unchanged for a quarter of a century. The primary aims of this investigation were to improve the design of the MSSQ, simplify scoring, produce new adult reference norms, and analyse motion validity data. We also considered the relationship of sickness from other nonmotion causes to the MSSQ. Norms and percentiles for a sample of 148 subjects were almost identical to the original version of this instrument. Reliability of the whole scale gave a Cronbach's standardised item alpha of 0.86, the correlation between Part A (child) and Part B (adult) was r = 0.65 (p < 0.001), and test-retest reliability may be assumed to be better than 0.8. Predictive validity of the MSSQ for motion sickness tolerance using laboratory motion devices averaged r = 0.45. Correlation between MSSQ and other sources of nausea and vomiting in the last 12 months, excluding motion sickness itself, was r = 0.3 (p < 0.001), migraine was the most important contributor to this relationship. In patients (n = 101) undergoing chemotherapy, there were significant correlations between MSSQ and chemotherapy-induced nausea and vomiting. Migraine also appeared as a predictor of chemotherapy-induced sickness. It was concluded that the revised MSSQ can be used as a direct replacement of the original version. The relationship between motion sickness susceptibility and other causes of sickness, including migraine and chemotherapy, points to the involvement of the vestibular system in the response to nonmotion emetogenic stimuli. Alternatively, this relationship may reflect individual differences in excitability of the postulated final common emetic pathway.