Conference Paper

Designing for Multiple Hand Grips and Body Postures within the UX of a moving Smartphone


Abstract

In this paper we explore how screen-based smartphone interaction can be enriched when designers focus on the physical interaction issues surrounding the device. These consist of the hand grips used (Symmetric bimanual, Asymmetric bimanual with thumb, Single handed, Asymmetric bimanual with finger), the body postures (Sitting at a table, Standing, Lying down) and the tilting of the smartphone itself. These physical interactions are well described in the literature, and several research papers provide empirical metrics describing them. In this paper, we go one step further by using this data to generate new screen-based interactions. We achieved this by conducting two workshops to investigate how smartphone interaction design can be informed by the physicality of smartphone interaction. By analysing the outcomes, we provide 14 new screen interaction examples with additional insights comparing outcomes for various body postures and grips.

... Depending on the context, a grip can convey meaningful information about a user's intention, based on the way people hold objects [29]. Researchers have explored using hand-grip information to support interactions with mobile devices [9,32]. Graspables [26] explored the idea of using grips as input gestures. ...
... Other studies have investigated using grips to support active reading [32], predict touch point [19], and adapt UIs [9] on mobile devices such as tablets and smartphones. ...
... Negulescu and McGrenere [11] identified that, in different postures, users adjust their grip at runtime in such a way that they can point at the location accurately. Eardley et al., in their recent works [2][3][4], focused on understanding the influence of body posture and device size on mobile phone interactions. However, they did not specifically examine the parameters responsible for one-handed thumb-based interaction. ...
... Based on the device size, user hand size and interaction style, the zone boundaries need to be redefined; (2) users grip the phone by extending their index fingers up to the other side of the phone and exert pressure while interacting [1]. We observed from video analysis that most of our participants held the phone without exerting pressure on the back side of the phone, in both sitting and walking postures, even with the large-sized mobile (similar to the one-handed grip pattern shown in [3]). Also, they did not often change their grip. ...
... C2 - Design for mobile device ergonomics: Our design should support two-handed mobile interaction, since that is how users typically watch videos [47]. Here, we consider asymmetric bimanual smartphone input with the thumb, which has been shown to be supported by standing postures [28]. This means action items (in this case the annotation interface) should be within the functional area of the thumb. ...
... However, our interface can be easily extended to portrait (vertical) mode, whereby the joystick controller module will be placed on the bottom third of a mobile screen display. Since this view is shown to be the most common device orientation mode [72] for watching live videos (e.g., Instagram Stories [71]), it can be easily adapted while still abiding by ergonomic constraints of standing postures [28]. ...
Conference Paper
Full-text available
Collecting accurate and precise emotion ground-truth labels for mobile video watching is essential for ensuring meaningful predictions. However, video-based emotion annotation techniques either rely on post-stimulus discrete self-reports, or allow real-time continuous emotion annotation (RCEA) only in desktop settings. Following a user-centric approach, we designed an RCEA technique for mobile video watching, and validated its usability and reliability in a controlled indoor (N=12) and later outdoor (N=20) study. Drawing on physiological measures, interaction logs, and subjective workload reports, we show that (1) RCEA is perceived to be usable for annotating emotions while watching mobile videos, without increasing users' mental workload, and (2) the resulting time-variant annotations are comparable with the intended emotion attributes of the video stimuli (classification error for valence: 8.3%; arousal: 25%). We contribute a validated annotation technique and an associated annotation fusion method that is suitable for collecting fine-grained emotion annotations while users watch mobile videos.
... The group who entered simple Chinese phrases using a split keyboard obtained a lower error rate. For a symmetrical two-handed grip, there is a suitable area for typing at the sides and bottom of the screen, which is about a quarter of the arc of the thumb joint (Eardley, Roudaut, Gill, & Thompson, 2018). The middle part of a standard keyboard is outside this suitable area. ...
... With desktop computers, the usage is consistent: one is facing the monitor and typically using mouse and keyboard. In contrast, mobile devices can be used in a variety of ways: either hand held, placed on a table, or even body-worn as with smartwatches; while sitting, standing, or lying down; and with one or two hands [8,30,31]. In consequence, the viewing angle, orientation, and distance to the display can differ, which likely affects readability of the visualized content. ...
... In the literature, several consistent conclusions have been reported, such as the most common style being one-handed thumb operation [27,35]. As a result, a recent work showed that, in the graphical user interfaces (GUIs) of apps or webpages for smartphones, participants preferred the bottom end of the screen for GUI item placement [19]. ...
Conference Paper
Full-text available
To achieve better touch GUI designs, researchers have studied optimal target margins, but state-of-the-art results on this topic have been observed for only middle-size touchscreens. In this study, to better determine more practical target arrangements for smartphone GUIs, we conducted four experiments to test the effects of gaps among targets on touch pointing performance. Two lab-based and two crowd-based experiments showed that wider gaps tend to help users reduce the task completion time and the rate of accidental taps on unintended items. For 1D and 2D tasks, the results of the lab-based study with a one-handed thumb operation style showed that 4-mm gaps were the smallest sizes to remove negative effects of surrounding items. The results of the crowd-based study, however, indicated no specific gaps to completely remove the negative effects.
... Löchtefeld et al. [31] detect which hand unlocked the device to shift the UI towards that hand. Eardley et al. [13,14] present several adaptive UI concepts, e.g., a keyboard that shifts to the user's thumb when the device is tilted sideways. ...
Conference Paper
Smartphones are used predominantly one-handed, using the thumb for input. Many smartphones, however, have grown beyond 5". Users cannot tap everywhere on these screens without destabilizing their grip. ForceRay (FR) lets users aim at an out-of-reach target by applying a force touch at a comfortable thumb location, casting a virtual ray towards the target. Varying pressure moves a cursor along the ray. When reaching the target, quickly lifting the thumb selects it. In a first study, FR was 195 ms slower and had a 3% higher selection error than the best existing technique, BezelCursor (BC), but FR caused significantly less device movement than all other techniques, letting users maintain a steady grip and removing their concerns about device drops. A second study showed that an hour of training speeds up both BC and FR, and that both are equally fast for targets at the screen border.
... Although we focused on full-body poses, future agency of virtual humans could enable zoom functions and simulate elaborate hand poses in relation to product elements [33]. For example, based on the work of Rachel Eardley [10,11], it would be feasible to generate hand grips and body postures for mobile phone use and make mannequins react to UI design choices such as swiping, typing, or UI layouts. ...
Conference Paper
Full-text available
When designing comfort and usability in products, designers need to evaluate aspects ranging from anthropometrics to use scenarios. Therefore, virtual and poseable mannequins are employed as a reference in early-stage tools and for evaluation in the later stages. However, tools to intuitively interact with virtual humans are lacking. In this paper, we introduce SmartManikin, a mannequin with agency that responds to high-level commands and to real-time design changes. We first captured human poses with respect to desk configurations, identified key features of the pose, and trained regression functions to estimate the optimal features for a given desk setup. The SmartManikin's pose is generated from the predicted features as well as by using forward and inverse kinematics. We present our design, implementation, and an evaluation with expert designers. The results revealed that SmartManikin enhances the design experience by providing feedback concerning comfort and health in real time.
... Problematic Interface and Content Design: Creating solutions for improving mobile content design is perhaps the most practical first step towards addressing SVIs because designers have control over the look and functionality of their content. Research has demonstrated that data highlighting interaction issues can be used to inform the exploration of novel interface designs that would overcome previous limitations [17]. ...
Conference Paper
Full-text available
Mobile technologies are used in increasingly diverse and challenging environments. With the predominantly visual nature of mobile devices, Situational Visual Impairments (SVIs) are a growing concern. However, fundamental knowledge is lacking about the causes of SVIs, how people deal with SVIs, and whether their solutions are effective. To address this, we first conducted a convenience-sampled online questionnaire with 174 participants, and identified many causes and (ineffective) solutions. To firmly ground our initial results, we then conducted a two-week ecological momentary assessment with 24 participants, balanced by age and gender across Australia and Scotland. We confirmed that SVIs are experienced often and during typical mobile tasks, and can be very frustrating. We identify a range of factors causing SVIs, discuss mobile design implications, and introduce an SVI Context Model rooted in empirical evidence. The contributions in this paper will support the development of new effective SVI solutions.
Article
This paper investigates the relationship between menu design and hand positions in relation to end-user assessment, with a main focus on usability, user preference, and potential adaptations to different hand positions. Sixteen participants (N=16) first took part in a co-design workshop, in which they proposed menu designs for different hand grips. Based on the design proposals, a selection of menu designs was derived and implemented in a mobile app prototype, on which a menu selection study was conducted to investigate performance and perceived usability of the menus in one-handed and two-handed interaction. The results include user ratings and performance, which highlight the need for mobile menus to be adapted to different hand positions. Based on this, we derive design recommendations for more adaptive, user-centric and ergonomic mobile menu designs that match the natural interactions of users.
Article
In this paper, we present a handgrip-free technique, called "AcHand", to continuously detect respiratory rates using acoustic signal sensing via a smartphone. Although the particularity of AcHand is to follow the subtle movements of the heart during its displacement, we acknowledge specific challenges that can make the signal vulnerable in handgrip-free scenarios, such as (1) stochastic hand activities: fluctuations in movement can alter the trajectory of the acoustic signal to the microphones and cause random changes in the breathing curve, resulting in complex and unpredictable dynamics, and (2) clicking on the phone screen can disrupt the breathing signal by generating large signal variations that create undesirable, noisy peaks. However, by analyzing the recorded signal reflection evoked by a chirp sound stimulus, we propose a linear transformation of the Cardioreflector-Manoreflector to analyze respiratory pattern variation and distinguish weak breathing signals in the handgrip-free setting. We compare the resulting signal with the ground truth recorded by ECG to confirm the efficiency of the proposed method. Our experiments show that AcHand achieves a MAE of 0.341 bpm, a significant improvement over state-of-the-art devices, and enhances detection capability across handgrip-free scenarios.
Chapter
Complete and unambiguous descriptions of user tasks are a cornerstone of user-centered design approaches, as they provide a unique way of describing precisely and entirely the users' actions that have to be performed in order for them to reach their goals and perform their work. Three key challenges in describing these user tasks lie in the quantity and type of information to embed, the level of detail of the types of actions, and the level of refinement in the hierarchy of tasks. For "standard" interactions, such as performing work using desktop-like interactive applications, several notations have been proposed that make explicit how to address these challenges. Interactions on the move raise additional issues, as the physical movements of the operator have to be accounted for and described. Currently no task notation is able to adequately describe users' movements, making it impossible to reason about the quantity of movement needed to perform a task and about the issues arising from performing incorrect movements. This paper extends current knowledge and practice in task modelling by taking into account, in a systematic manner, users' movements and more precisely body parts, body size, range of motion, articulations and the position of body parts relative to objects in the environment. Through two different case studies we show the importance of adding those elements to describe and analyze users' work and tasks when users perform physical interactions. We also demonstrate that such models are of prime importance to reason about quantity of movements, muscular fatigue and safety. Keywords: Task modelling; Safety; Task analysis; Work; Human body
Chapter
In this study, we investigated how users work with a tablet in a non-desk setting. While portable computing devices have enabled 'work anywhere', little furniture and few user interfaces have been designed for working in non-desk settings. In a co-exploration study, participants (n = 20) performed three tablet activities (video watching, typing, and drawing) on a configurable sofa. Using multiple cushions, the participants iteratively explored comfortable sitting as well as lying postures in diverse body orientations. The results showed a variety of postures and support configurations for comfortably working in a non-desk setting. Unlike working at a conventional desk, the body parts supported each other as well as the tablet, and the preferred postures were unique to each activity. By systematically analyzing and categorizing the postures, we uncovered new design opportunities for furniture and interactive systems that do not assume a traditional desk setting. Keywords: Couch potato; Posture; Co-exploration study; Non-desk working; Mobile devices
Article
We introduce a novel one-handed input technique for mobile devices that is not based on pointing but on motion matching, where users select a target by mimicking its unique animation. Our work is motivated by the findings of a survey (N=201) on current mobile use, from which we identify lingering opportunities for one-handed input techniques. We follow by expanding on current motion matching implementations, previously developed in the context of gaze or mid-air input, so that they take advantage of the affordances of touch-input devices. We validate the technique by characterizing user performance via a standard selection task (N=24) where we report success rates (>95%), selection times (~1.6 s), input footprint, grip stability, usability, and subjective workload, in both phone and tablet conditions. Finally, we present a design space that illustrates six ways in which motion matching can be embedded into mobile interfaces via a camera prototype application.
Conference Paper
In this paper, we investigated how "lying down'' body postures affected the use of the smartphone user interface (UI) design. Extending previous research that studied body postures, handgrips, and the movement of the smartphone. We have done this in three steps; (1) An online survey that examined what type of lying down postures, participants, utilized when operating a smartphone; (2) We broke down these lying down postures in terms of body angle (i.e., users facing down, facing up, and on their side) and body support; (3) We conducted an experiment questioning the effects that these body angles and body supports had on the participants' handgrips. What we found was that the smartphone moves the most (is the most unstable) in the "facing up (with support)'' condition. Additionally, we discovered that the participants preferred body posture was those that produced the least amount of motion (more stability) with their smartphones.
Thesis
Full-text available
Billions of mobile devices are used worldwide for a significant number of important tasks in our personal and professional lives. Unfortunately, mobile devices are prone to interaction challenges as a result of the changing contexts of use, resulting in the user experiencing a situational impairment. For example, when typing in a vehicle being driven over an uneven road, it is difficult to avoid incorrect key presses. Situational visual impairments (SVIs) are one type of usability and accessibility challenge that mobile device users face (e.g., not being able to read and reply to an important email when outside under bright sunlight), which suggests that current mobile industry practices are insufficient for supporting designers when addressing SVIs. However, there is little HCI research that provides a comprehensive understanding of SVIs through qualitative research. Considering that we primarily interact with mobile devices through the screen, it is arguably important to further research this area. Understanding the true context of SVIs will help to identify adequate solutions. To address this, I recruited 174 participants for an online survey and 24 participants across Australia and Scotland for a two-week ecological momentary assessment to establish what factors contribute to SVIs experienced when using a mobile device. My findings revealed that SVIs are a complex phenomenon with several interacting factors. I introduce a mobile device SVI Context Model to conceptualise the problem. I identified that mobile content design was the most practical first step towards addressing SVIs. Following this, I surveyed 43 mobile content designers and ran four follow-on interviews to identify how often SVIs were considered and how I could provide effective support. I found key similarities and differences between accessibility and designing to reduce SVIs. The participants requested guidelines, education, and digital design tools for improved SVI design support. I focused on identifying the necessary features and implementation for an SVI design tool that would support designers, because this would have an immediate and positive influence on addressing SVIs. Next, I surveyed 50 mobile app designers using an online survey to understand how mobile app interfaces are designed. I identified a wide variety of tools and practices used, and the participants also raised challenges for designing mobile app interfaces that had implications for users experiencing SVIs. Using my new understanding of SVIs and the challenges mobile designers face, I ran two design workshops. The purpose of the first workshop was to generate ideas for SVI design tools that would fit within a typical designer's workflow. I then created high-fidelity prototypes to elicit more informed feedback in the second workshop. To address the problem of insufficient support for designers, I present a set of recommendations for developing SVI design tools to support designers in creating mobile content that reduces SVIs in different contexts. The recommendations provide guidance on how to incorporate SVI design support into existing design software (e.g., Sketch) and future design software. If design software companies follow my recommendations, designers will gain an improved set of tools for expanding mobile content designs to different contexts.
The development and inclusion of these designs within mobile apps (e.g., allowing alternative modes such as for day or night) will provide users with more control in addressing SVIs through enhanced content design.
Article
Full-text available
In the first part of this paper, we propose PINlogger.js which is a JavaScript-based side channel attack revealing user PINs on an Android mobile phone. In this attack, once the user visits a website controlled by an attacker, the JavaScript code embedded in the web page starts listening to the motion and orientation sensor streams without needing any permission from the user. By analysing these streams, it infers the user's PIN using an artificial neural network. Based on a test set of fifty 4-digit PINs, PINlogger.js is able to correctly identify PINs in the first attempt with a success rate of 82.96%, which increases to 96.23% and 99.48% in the second and third attempts respectively. The high success rates of stealing user PINs on mobile devices via JavaScript indicate a serious threat to user security. In the second part of the paper, we study users' perception of the risks associated with mobile phone sensors. We design user studies to measure the general familiarity with different sensors and their functionality, and to investigate how concerned users are about their PIN being stolen by an app that has access to each sensor. Our results show that there is significant disparity between the actual and perceived levels of threat with regard to the compromise of the user PIN. We discuss how this observation, along with other factors, renders many academic and industry solutions ineffective in preventing such side channel attacks.
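As a rough illustration of the inference step described above (motion and orientation sensor streams fed to a neural network that guesses the tapped digits), the following sketch trains an off-the-shelf classifier on synthetic data. The feature layout, window size and model are assumptions for illustration only; PINlogger.js itself runs as JavaScript in the browser.

```python
# Minimal sketch of a sensor-window-to-digit classifier, as an illustration of
# the attack idea. Data and feature layout are synthetic assumptions, not the
# authors' implementation.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
N_SAMPLES, WINDOW, N_AXES = 2000, 50, 6   # 50 readings x (3 accel + 3 gyro)

# Stand-in for labelled keystroke windows: each window of motion/orientation
# readings is flattened into one feature vector, labelled with the digit tapped.
X = rng.normal(size=(N_SAMPLES, WINDOW * N_AXES))
y = rng.integers(0, 10, size=N_SAMPLES)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)
clf.fit(X[:1500], y[:1500])

# A 4-digit PIN guess is simply the predicted digit for four consecutive windows.
print(clf.predict(X[1500:1504]))
```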
Conference Paper
Full-text available
Although different types of touch surfaces have gained extensive attention in HCI, this is the first work to directly compare them for two critical factors: performance and ergonomics. Our data come from a pointing task (N=40) carried out on five common touch surface types: public display (large, vertical, standing), tabletop (large, horizontal, seated), laptop (medium, adjustably tilted, seated), tablet (seated, in hand), and smartphone (single- and two-handed input). Ergonomics indices were calculated from biomechanical simulations of motion capture data combined with recordings of external forces. We provide an extensive dataset for researchers and report the first analyses of similarities and differences that are attributable to the different postures and movement ranges.
Article
Full-text available
In this paper, we investigate the effects of encumbrance (carrying typical objects such as shopping bags during interaction) and walking on target acquisition on a touchscreen mobile phone. Users often hold objects and use mobile devices at the same time and we examined the impact encumbrance has on one- and two-handed interactions. Three common input postures were evaluated: two-handed index finger, one-handed preferred thumb and two-handed both thumbs, to assess the effects on performance of carrying a bag in each hand while walking. The results showed a significant decrease in targeting performance when users were encumbered. For example, input accuracy dropped to 48.1% for targeting with the index finger when encumbered, while targeting error using the preferred thumb to input was 4.2mm, an increase of 40% compared to unencumbered input. We also introduce a new method to evaluate the user's preferred walking speed when interacting - PWS&I, and suggest future studies should use this to get a more accurate measure of the user's input performance.
Conference Paper
Full-text available
Text entry on smartphones is far slower and more error-prone than on traditional desktop keyboards, despite sophisticated detection and auto-correct algorithms. To strengthen the empirical and modeling foundation of smartphone text input improvements, we explore touch behavior on soft QWERTY keyboards when used with two thumbs, an index finger, and one thumb. We collected text entry data from 32 participants in a lab study and describe touch accuracy and precision for different keys. We found that distinct patterns exist for input among the three hand postures, suggesting that keyboards should adapt to different postures. We also discovered that participants' touch precision was relatively high given typical key dimensions, but there were pronounced and consistent touch offsets that can be leveraged by keyboard algorithms to correct errors. We identify patterns in our empirical findings and discuss implications for design and improvements of soft keyboards.
Article
Full-text available
The lack of tactile feedback on touch screens makes typing difficult, a challenge exacerbated when situational impairments like walking vibration and divided attention arise in mobile settings. We introduce WalkType, an adaptive text entry system that leverages the mobile device's built-in tri-axis accelerometer to compensate for extraneous movement while walking. WalkType's classification model uses the displacement and acceleration of the device, and inference about the user's footsteps. Additionally, WalkType models finger-touch location and finger distance traveled on the screen, features that increase overall accuracy regardless of movement. The final model was built on typing data collected from 16 participants. In a study comparing WalkType to a control condition, WalkType reduced uncorrected errors by 45.2% and increased typing speed by 12.9% for walking participants.
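The idea of classifying the intended key from the touch location together with device-motion features, rather than from the touch location alone, can be sketched as follows. The features, classifier choice and synthetic data are assumptions for illustration, not WalkType's actual model.

```python
# Illustrative sketch of motion-assisted key classification: the feature vector
# combines the tap position with acceleration around tap time. Synthetic data
# only; not WalkType's trained model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
KEY_ROW = list("qwertyuiop")

n = 3000
touch_x = rng.uniform(0, 1, size=n)                 # normalised tap position along the row
accel = rng.normal(scale=0.2, size=(n, 3))          # device acceleration around tap time
intended = np.minimum((touch_x * len(KEY_ROW)).astype(int), len(KEY_ROW) - 1)

X = np.column_stack([touch_x, accel])               # touch + motion feature vector
clf = RandomForestClassifier(n_estimators=50).fit(X[:2500], intended[:2500])

acc = (clf.predict(X[2500:]) == intended[2500:]).mean()
print(f"held-out key accuracy: {acc:.2f}")
```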
Conference Paper
Full-text available
This paper presents a novel user interface for handheld mobile devices by recognizing hand grip patterns. Particularly, we consider the scenario where the device is provided with an array of capacitive touch sensors underneath the exterior cover. In order to provide the users with an intuitive and natural manipulation experience, we use pattern recognition techniques for identifying the users' hand grips from the touch sensors. Preliminary user studies suggest that filtering out unintended user hand grip is one of the most important issues to be resolved. We discuss the details of the prototype implementation, as well as engineering challenges for practical deployment.
Conference Paper
Full-text available
As mobile and tangible devices are getting smaller and smaller it is desirable to extend the interaction area to their whole surface area. The HandSense prototype employs capacitive sensors for detecting when it is touched or held against a body part. HandSense is also able to detect in which hand the device is held, and how. The general properties of our approach were confirmed by a user study. HandSense was able to correctly classify over 80 percent of all touches, discriminating six different ways of touching the device (hold left/right, pick up left/right, pick up at top/bottom). This information can be used to implement or enhance implicit and explicit interaction with mobile phones and other tangible user interfaces. For example, graphical user interfaces can be adjusted to the user's handedness.
Conference Paper
Full-text available
It is generally assumed that touch input cannot be accurate because of the fat finger problem, i.e., the softness of the fingertip combined with the occlusion of the target by the finger. In this paper, we show that this is not the case. We base our argument on a new model of touch inaccuracy. Our model is not based on the fat finger problem, but on the perceived input point model. In its published form, this model states that touch screens report touch location at an offset from the intended target. We generalize this model so that it represents offsets for individual finger postures and users. We thereby switch from the traditional 2D model of touch to a model that considers touch a phenomenon in 3-space. We report a user study, in which the generalized model explained 67% of the touch inaccuracy that was previously attributed to the fat finger problem. In the second half of this paper, we present two devices that exploit the new model in order to improve touch accuracy. Both model touch on per-posture and per-user basis in order to increase accuracy by applying respective offsets. Our RidgePad prototype extracts posture and user ID from the user’s fingerprint during each touch interaction. In a user study, it achieved 1.8 times higher accuracy than a simulated capacitive baseline condition. A prototype based on optical tracking achieved even 3.3 times higher accuracy. The increase in accuracy can be used to make touch interfaces more reliable, to pack up to 3.3^2 > 10 times more controls into the same surface, or to bring touch input to very small mobile devices.
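The generalized perceived-input-point model described above amounts to undoing a systematic offset that depends on the user and the finger posture. A minimal sketch of that correction step is shown below; the offset values and posture names are invented for illustration, not measurements from the paper.

```python
# Minimal sketch of per-user, per-posture touch offset correction. Offsets are
# made-up example values in millimetres.
OFFSETS_MM = {
    ("alice", "thumb_vertical"): (1.2, -2.4),
    ("alice", "index_flat"):     (0.3, -0.8),
    ("bob",   "thumb_vertical"): (2.0, -1.5),
}

def corrected_touch(x_mm, y_mm, user, posture):
    """Shift the reported touch point by the stored systematic offset."""
    dx, dy = OFFSETS_MM.get((user, posture), (0.0, 0.0))
    return x_mm - dx, y_mm - dy

print(corrected_touch(20.0, 35.0, "alice", "thumb_vertical"))  # -> (18.8, 37.4)
```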
Conference Paper
We present an investigation into how hand usage is affected by different body postures (Sitting at a table, Lying down and Standing) when interacting with smartphones. We theorize a list of factors (smartphone support, body support and muscle usage) and explore their influence on the tilt and rotation of the smartphone. From this we draw a list of hypotheses that we investigate in a quantitative study. We varied the body postures and grips (Symmetric bimanual, Asymmetric bimanual finger, Asymmetric bimanual thumb and Single-handed), studying the effects through a dual pointing task. Our results showed that the body posture Lying down had the most movement, followed by Sitting at a table and finally Standing. We additionally generate reports of motions performed using different grips. Our work extends previous research conducted with multiple grips in a sitting position by including other body postures; it is anticipated that UI designers will use our results to inform the development of mobile user interfaces.
Conference Paper
When people hold their smartphone in landscape orientation, they use their thumbs for input on the frontal touchscreen, while their remaining fingers rest on the back of the device (BoD) to stabilize the grip. We present BackXPress, a new interaction technique that lets users create BoD pressure input with these remaining fingers to augment their interaction with the touchscreen on the front: Users can apply various pressure levels with each of these fingers to enter different temporary "quasi-modes" that are only active as long as that pressure is applied. Both thumbs can then interact with the frontal screen in that mode. We illustrate the practicality of BackXPress with several sample applications, and report our results from three user studies: Study 1 investigated which fingers can be used to exert BoD pressure and found index, middle, and ring finger from both hands to be practical. Study 2 revealed how pressure touches from these six fingers are distributed across the BoD. Study 3 examined user performance for applying BoD pressure (a) during single touches at the front and (b) for 20 seconds while touching multiple consecutive frontal targets. Participants achieved up to 92% pressure accuracy for three separate pressure levels above normal resting pressure, with the middle fingers providing the highest accuracy. BoD pressure did not affect frontal touch accuracy. We conclude with design guidelines for BoD pressure input.
Conference Paper
In this paper we present an investigation into how hand usage is affected by different mobile phone form factors. Our initial (qualitative) study explored how users interact with various mobile phone types (touchscreen, physical keyboard and stylus). The analysis of the videos revealed that each type of mobile phone affords specific handgrips and that the user shifts these grips, and consequently the tilt and rotation of the phone, depending on the context of interaction. In order to further investigate the tilt and rotation effects we conducted a controlled quantitative study in which we varied the size of the phone and the type of grips (Symmetric bimanual, Asymmetric bimanual with finger, Asymmetric bimanual with thumb and Single handed) to better understand how they affect the tilt and rotation during a dual pointing task. The results showed that the size of the phone does have an effect and that the distance needed to reach action items affects the phone's tilt and rotation. Additionally, we found that the amount of tilt, rotation and reach required corresponded with the participant's grip preference. We finish the paper by discussing the design lessons for mobile UI and proposing design guidelines and applications for these insights.
Conference Paper
In this paper we investigate the physical interaction between the hand and three types of mobile device interaction: touchscreen, physical keyboard and stylus. Through a controlled study using video observational analysis, we observed, firstly, how the participants gripped the three devices and how these grips were device dependent. Secondly, we looked closely at these grips to uncover how participants performed what we call micro-movements to facilitate a greater range of interaction, e.g. reaching across the keyboard. The results extend current knowledge by comparing three handheld device input methods and observing the movements that the hand makes in five grips. The paper concludes by describing the development of a conceptual design, proposed as a provocation for the opening of dialogue on how we conceive hand usage and how it might be optimized when designed for mobile devices.
Conference Paper
Touchscreens continue to advance including progress towards sensing fingers proximal to the display. We explore this emerging pre-touch modality via a self-capacitance touchscreen that can sense multiple fingers above a mobile device, as well as grip around the screen's edges. This capability opens up many possibilities for mobile interaction. For example, using pre-touch in an anticipatory role affords an "ad-lib interface" that fades in a different UI--appropriate to the context--as the user approaches one-handed with a thumb, two-handed with an index finger, or even with a pinch or two thumbs. Or we can interpret pre-touch in a retroactive manner that leverages the approach trajectory to discern whether the user made contact with a ballistic vs. a finely-targeted motion. Pre-touch also enables hybrid touch + hover gestures, such as selecting an icon with the thumb while bringing a second finger into range to invoke a context menu at a convenient location. Collectively these techniques illustrate how pre-touch sensing offers an intriguing new back-channel for mobile interaction.
Conference Paper
In order to reach targets with one hand on common large mobile touch displays, users tilt and shift the device in their hand. In this work, we use this grip change as a continuous information stream for detecting where the user will touch while their finger is still en-route. We refer to this as in the air prediction. We show that grip change detected using standard mobile motion sensors produces similar in the air touch point predictions to techniques that use auxiliary sensor arrays, even in varying physical scenarios such as interacting in a moving vehicle. Finally, our model that combines grip change and the resulting touch point predicted where users intended to land, lowering error rates by 41%.
Conference Paper
As large-screen smartphones are trending, they bring a new set of challenges such as acquiring unreachable screen targets using one hand. To understand users' touch behavior on large mobile touchscreens, we conducted an empirical experiment to discover their usage patterns of tilting devices toward their thumbs to touch screen regions. Exploiting this natural tilting behavior, we designed three novel mobile interaction techniques: TiltSlide, TiltReduction, and TiltCursor. We conducted a controlled experiment to compare our methods with other existing methods, and then evaluated them in real mobile phone scenarios such as sending an e-mail and web surfing. We constructed a design space for one-hand targeting interactions and proposed design considerations for one-hand targeting in real mobile phone circumstances.
Article
The purpose of this study was to determine how smartphone use posture affects biomechanical variables and muscle activities. Eleven university students (age: 22.2±2.6 yrs, height: 176.6±4.7 cm, weight: 69.5±7.5 kg) with no musculoskeletal disorders and at least one year of smartphone use experience were recruited as subjects. Angular velocity, muscle activity, and thumb finger pressure were determined for each trial. For each dependent variable, a one-way analysis of variance (ANOVA) with repeated measures was performed to test whether significant differences existed among the three conditions.
Article
We demonstrate that front-of-screen targeting on mobile phones can be predicted from back-of-device grip manipulations. Using simple, low-resolution capacitive touch sensors placed around a standard phone, we outline a machine learning approach to modelling the grip modulation and inferring front-of-screen touch targets. We experimentally demonstrate that grip is a remarkably good predictor of touch, and we can predict touch position 200ms before contact with an accuracy of 18mm.
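The pipeline described above, mapping a vector of low-resolution grip-sensor readings to an expected front-of-screen touch position, can be sketched as a multi-output regression. The sensor count, model choice and synthetic data below are assumptions for illustration, not the authors' exact setup.

```python
# Illustrative sketch of grip-to-touch regression on synthetic data: capacitive
# readings taken shortly before contact are mapped to an (x, y) touch point.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
N_SENSORS = 24                                 # e.g. a coarse ring of pads around the case

n = 4000
grip = rng.uniform(0, 1, size=(n, N_SENSORS))                        # readings before contact
touch = grip[:, :2] * [60, 120] + rng.normal(scale=3, size=(n, 2))   # synthetic (x, y) in mm

model = RandomForestRegressor(n_estimators=50).fit(grip[:3000], touch[:3000])
pred = model.predict(grip[3000:])

err_mm = np.linalg.norm(pred - touch[3000:], axis=1).mean()
print(f"mean prediction error: {err_mm:.1f} mm")
```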
Article
We present a predictive model for the functional area of the thumb on a touchscreen surface: the area of the interface reachable by the thumb of the hand that is holding the device. We derive a quadratic formula by analyzing the kinematics of the gripping hand. Model fit is high for the thumb-motion trajectories of 20 participants. The model predicts the functional area for a given 1) surface size, 2) hand size, and 3) position of the index finger on the back of the device. Designers can use this model to ensure that a user interface is suitable for interaction with the thumb. The model can also be used inversely - that is, to infer the grips assumed by a given user interface layout.
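A hypothetical sketch of how such a quadratic reachability model could be used at design time is shown below: given a boundary curve for the thumb's functional area, a UI target is checked against it. The coefficients and parameterisation are invented for illustration; the actual formula and its fitted parameters are given in the cited article.

```python
# Hypothetical quadratic boundary for the thumb's functional area. The
# coefficients below are illustrative assumptions, not the published model.
def thumb_boundary_y(x_mm, hand_length_mm=180, grip_height_mm=60):
    # Illustrative quadratic in the horizontal screen coordinate x, shifted by
    # hand size and by where the index finger rests on the back of the device.
    a = grip_height_mm + 0.35 * hand_length_mm
    return a - 0.4 * x_mm - 0.01 * x_mm ** 2

def reachable(x_mm, y_mm, **grip):
    """A target counts as reachable if it lies below the boundary curve."""
    return y_mm <= thumb_boundary_y(x_mm, **grip)

print(reachable(10, 80))    # a lower target        -> True with these coefficients
print(reachable(30, 140))   # a target far up-screen -> False with these coefficients
```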
Article
This article presents a tentative theoretical framework for the study of asymmetry in the context of human bimanual action. It is emphasized that in man most skilled manual activities involve two hands playing different roles, a fact that has been often overlooked in the experimental study of human manual lateralization. As an alternative to the current concepts of manual preference and manual superiority—whose relevance is limited to the particular case of unimanual actions—the more general concept of lateral preference is proposed, to denote preference for one of the two possible ways of assigning two roles to two hands. A simple model describing man's favored intermanual division of labor in the variety of his skilled manual activities is outlined. The two main assumptions of the model are the following. 1) The two hands represent two motors, that is, devices serving to create motion, whose internal (biomechanical and physiological) complexity is ignored in the suggested approach. 2) In man, the two manual motors cooperate with one another as if they were assembled in series, thereby forming a kinematic chain: In a right-hander allowed to follow his or her lateral preferences, motion produced by the right hand tends to articulate with motion produced by the left. It is suggested that the kinematic chain model may help in understanding the adaptive advantage of human manual specialization.
Article
We present iRotate, an approach to automatically rotate screens on mobile devices to match users' face orientation. Current approaches to automatic screen rotation are based on gravity and device orientation. Our survey of 513 users shows that 42% currently experience auto-rotation that leads to an incorrect viewing orientation several times a week or more, and 24% find the problem to be very serious to extremely serious. iRotate augments the gravity-based approach and uses the front cameras on mobile devices to detect users' faces and rotate screens accordingly. It requires no explicit user input and supports different user postures and device orientations. We have implemented iRotate to work in real time on iPhone and iPad, and we assess the accuracy and limitations of iRotate through a 20-participant feasibility study.
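The decision rule implied above, prefer the detected face orientation and fall back to the gravity estimate otherwise, can be sketched in a few lines. The function name and angle conventions are illustrative assumptions, not iRotate's implementation.

```python
# Minimal sketch of a face-first, gravity-fallback rotation decision.
def choose_screen_rotation(face_angle_deg, gravity_angle_deg):
    """Return the screen rotation snapped to 0, 90, 180 or 270 degrees."""
    angle = face_angle_deg if face_angle_deg is not None else gravity_angle_deg
    return (round(angle / 90.0) * 90) % 360

print(choose_screen_rotation(None, 85))   # no face detected -> gravity estimate: 90
print(choose_screen_rotation(172, 85))    # face detected    -> face orientation: 180
```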
Article
Choices of strategies for analyzing records or transcripts of human behavior in everyday, naturalistic settings are affected by a variety of external and internal design constraints. Among the most common techniques identified are analytic induction, the constant comparative method, typological analysis, enumerative systems, and standardized observational protocols. Each of these is described and analyzed along multiple dimensions of design considerations. Keywords: School ethnography; Observational research; Qualitative analysis; Research methodology.
Conference Paper
In spite of the increasing popularity of handheld touchscreen devices, little research has been conducted on how to evaluate and design one-handed thumb-tapping interactions. In this paper, we present a study that investigated three issues related to these interactions: 1) whether it is necessary to evaluate these interactions with the preferred and the non-preferred hand; 2) whether participants evaluating these interactions should be asked to stand and walk during evaluations; 3) whether targets on the edge of the screen enable participants to be more accurate in selection than targets not on the edge. Half of the forty participants in the study used their non-preferred hand and half used their preferred hand. Each participant conducted half of the tasks while walking and half while standing. We used 25 different target positions (16 on the edge of the screen) and five different target sizes. The participants who used their preferred hand completed tasks more quickly and accurately than the participants who used their non-preferred hand, with the differences being large enough to suggest that it is necessary to evaluate this type of interaction with both hands. We did not find differences in the performance of participants when they walked versus when they stood, suggesting it is not necessary to include this as a variable in evaluations. In terms of target location, participants rated targets near the center of the screen as easier and more comfortable to tap, but the highest accuracy rates were for targets on the edge of the screen.
Article
Mobile device text messaging and other typing is rapidly increasing worldwide. A checklist was utilized to characterize joint postures and typing styles in individuals appearing to be of college age (n = 859) while typing on their mobile devices in public. Gender differences were also ascertained. Almost universally, observed subjects had a flexed neck (91.0%, n = 782) and a non-neutral typing-side wrist (90.3%, n = 776). A greater proportion of males had protracted shoulders (p < 0.01, χ² test), while a greater proportion of females had a typing-side inner elbow angle of <90°, particularly while standing (p = 0.03, χ² test). 46.1% of subjects typed with both thumbs (two hands holding the mobile device). Just over one-third typed with their right thumb (right hand holding the mobile device). No difference in typing styles between genders was found. Future research should determine whether the non-neutral postures identified may be associated with musculoskeletal disorders.
Article
To work successfully, designers must understand the various body shapes and physical abilities of the population for which they design. The Measure of Man and Woman is an updated and expanded version of the landmark human factors book first published in 1959. It brings together a wealth of crucial information to help designers create products and environments that better accommodate human needs.
Book
Hands
  • J Napier
Published by Princeton University Press