Thesis

Improving Content Design on Mobile Devices to Reduce Situational Visual Impairments

Abstract and Figures

Billions of mobile devices are used worldwide for a significant number of important tasks in our personal and professional lives. Unfortunately, mobile devices are prone to interaction challenges as their contexts of use change, resulting in the user experiencing a situational impairment. For example, when typing in a vehicle being driven over an uneven road, it is difficult to avoid incorrect key presses. Situational visual impairments (SVIs) are one type of usability and accessibility challenge mobile device users face (e.g., not being able to read and reply to an important email when outside under bright sunlight), and their continued prevalence suggests that current mobile industry practices are insufficient for supporting designers in addressing SVIs. However, there is little HCI research that provides a comprehensive, qualitative understanding of SVIs. Considering that we primarily interact with mobile devices through the screen, it is arguably important to research this area further. Understanding the true context of SVIs will help to identify adequate solutions. To address this, I recruited 174 participants for an online survey and 24 participants across Australia and Scotland for a two-week ecological momentary assessment to establish which factors contribute to the SVIs experienced when using a mobile device. My findings revealed that SVIs are a complex phenomenon with several interacting factors. I introduce a mobile device SVI Context Model to conceptualise the problem. I identified mobile content design as the most practical first step towards addressing SVIs. Following this, I surveyed 43 mobile content designers and ran four follow-on interviews to identify how often SVIs were considered and how I could provide effective support. I found key similarities and differences between designing for accessibility and designing to reduce SVIs. The participants requested guidelines, education, and digital design tools for improved SVI design support.
I focused on identifying the necessary features and implementation for an SVI design tool to support designers, because this would have an immediate and positive influence on addressing SVIs. Next, I ran an online survey of 50 mobile app designers to understand how mobile app interfaces are designed. I identified a wide variety of tools and practices in use, and the participants raised challenges in designing mobile app interfaces that had implications for users experiencing SVIs. Using my new understanding of SVIs and the challenges mobile designers face, I ran two design workshops. The purpose of the first workshop was to generate ideas for SVI design tools that would fit within a typical designer's workflow. I then created high-fidelity prototypes to elicit more informed feedback in the second workshop. To address the problem of insufficient support for designers, I present a set of recommendations for developing SVI design tools that support designers in creating mobile content that reduces SVIs in different contexts. The recommendations provide guidance on how to incorporate SVI design support into existing design software (e.g., Sketch) and future design software. If design software companies follow these recommendations, designers will gain an improved set of tools for expanding mobile content designs to different contexts. The development and inclusion of these designs within mobile apps (e.g., allowing alternative modes such as for day or night) will give users more control in addressing SVIs through enhanced content design.
Conference Paper
Although the exploration of variations is a key part of interface design, current processes for creating variations are mostly manual. We present Scout, a system that helps designers explore many variations rapidly through mixed-initiative interaction with high-level constraints and design feedback. Past constraint-based layout systems use low-level spatial constraints and mostly produce only a single design. Scout advances upon these systems by introducing high-level constraints based on design concepts (e.g. emphasis). With Scout, we have formalized several high-level constraints into their corresponding low-level spatial constraints to enable rapidly generating many designs through constraint solving and program synthesis.
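A toy sketch of the lowering idea: a high-level constraint such as "emphasize the header" can be translated into low-level size and position constraints, and candidate designs enumerated against them. The element names, font sizes, and lowering rules below are invented for illustration; Scout itself uses constraint solving and program synthesis rather than brute-force enumeration.

```python
from itertools import permutations, product

ELEMENTS = ("header", "body", "footer")
FONT_SIZES = (12, 16, 24)

def emphasis(target):
    """High-level 'emphasis' lowered to low-level constraints: the target
    gets the strictly largest font and the topmost vertical slot."""
    def satisfied(design):
        size = {e: s for e, (s, _) in design.items()}
        slot = {e: y for e, (_, y) in design.items()}
        return all(size[target] > size[e] and slot[target] < slot[e]
                   for e in design if e != target)
    return satisfied

def generate(constraints):
    """Enumerate every design that satisfies all lowered constraints."""
    designs = []
    for sizes in product(FONT_SIZES, repeat=len(ELEMENTS)):
        for slots in permutations(range(len(ELEMENTS))):  # slot 0 = top
            design = dict(zip(ELEMENTS, zip(sizes, slots)))
            if all(c(design) for c in constraints):
                designs.append(design)
    return designs

# Ten distinct designs satisfy the emphasis constraint in this tiny space,
# illustrating how one high-level constraint still admits many variations.
variants = generate([emphasis("header")])
```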
Article
The adverse effect of ambient noise on humans has been extensively studied in fields like cognitive science, indicating a significant impact on cognitive performance, behaviour, and emotional state. Surprisingly, the effect of ambient noise has not been studied in the context of mobile interaction. As smartphones are ubiquitous by design, smartphone users are exposed to a wide variety of ambient noises while interacting with their devices. In this paper, we present a structured analysis of the effect of six distinct ambient noise types on typical smartphone usage tasks. The evaluated ambient noise types include variants of music, urban noise and speech. We analyse task completion time and errors, and find that different ambient noises affect users differently. For example, while speech and urban noise slow down text entry, being exposed to music reduces completion time in target acquisition tasks. Our study contributes to the growing research area on situational impairments, and we compare our results to previous work on the effect of cold-induced situational impairments. Our results can be used to support smartphone users through adaptive interfaces which respond to the ongoing context of the user.
Conference Paper
Interface designers often use screenshot images of example designs as building blocks for new designs. Since images are unstructured and hard to edit, designers typically reconstruct screenshots with vector graphics tools in order to reuse or edit parts of the design. Unfortunately, this reconstruction process is tedious and slow. We present Rewire, an interactive system that helps designers leverage example screenshots. Rewire automatically infers a vector representation of screenshots where each UI component is a separate object with editable shape and style properties. Based on this representation, the system provides three design assistance modes that help designers reuse or redraw components of the example design. The results from our quantitative and user evaluations demonstrate that Rewire can generate accurate vector representations of interface screenshots found in the wild and that design assistance enables users to reconstruct and edit example designs more efficiently compared to a baseline design tool.
Article
We propose adaptive tone mapping for display enhancement under ambient light using constrained optimization. To deal with the visibility reduction caused by ambient light in displays, we perform different operations for display enhancement according to the intensity of ambient light. Since weak ambient light has little effect on displayed images, we only perform contrast enhancement for them. However, strong ambient light makes displayed images dark for human eyes, causing severe visibility reduction in luminance and contrast. To enhance the visibility of displayed images under strong ambient light, we formulate a constrained optimization problem which consists of luminance enhancement, contrast enhancement, and distortion minimization terms, and find an optimal trade-off among them by solving it. Finally, we conduct color scaling to reproduce vivid colors in displayed images. Experimental results demonstrate that the proposed method significantly enhances the brightness, contrast, details, and colors of displayed images and outperforms other state-of-the-art methods under ambient light.
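The luminance/contrast/distortion trade-off described above can be sketched as a small objective minimized over a family of gamma curves. The curve family, weights, and grid search below are illustrative assumptions, not the paper's actual formulation.

```python
from statistics import pstdev

def mean(values):
    values = list(values)
    return sum(values) / len(values)

def objective(luma, g, w_lum, w_con, w_dis):
    """Lower is better: reward brightness and contrast, penalise distortion."""
    mapped = [v ** g for v in luma]
    lum = mean(mapped)                                      # luminance term
    con = pstdev(mapped)                                    # contrast term
    dis = mean((m - v) ** 2 for m, v in zip(mapped, luma))  # distortion term
    return -w_lum * lum - w_con * con + w_dis * dis

def best_gamma(luma, w_lum, w_con, w_dis):
    """Grid search over tone curves y -> y**g for the best trade-off."""
    candidates = [0.4 + 0.1 * i for i in range(13)]  # gammas 0.4 .. 1.6
    return min(candidates,
               key=lambda g: objective(luma, g, w_lum, w_con, w_dis))

luma = [0.05 + 0.1 * i for i in range(10)]  # toy image luminance in [0, 1]
# Strong ambient light: luminance weighted heavily, so a brightening
# curve (gamma < 1) wins despite its distortion cost.
gamma_strong = best_gamma(luma, w_lum=1.0, w_con=0.5, w_dis=2.0)
# Weak ambient light: brightness barely matters, so the chosen curve
# stays near the identity and mostly preserves contrast.
gamma_weak = best_gamma(luma, w_lum=0.1, w_con=0.5, w_dis=2.0)
```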
Conference Paper
We conduct the first large-scale analysis of the accessibility of mobile apps, examining what unique insights this can provide into the state of mobile app accessibility. We analyzed 5,753 free Android apps for label-based accessibility barriers in three classes of image-based buttons: Clickable Images, Image Buttons, and Floating Action Buttons. An epidemiology-inspired framework was used to structure the investigation. The population of free Android apps was assessed for label-based inaccessible button diseases. Three determinants of the disease were considered: missing labels, duplicate labels, and uninformative labels. The prevalence, or frequency of occurrences of barriers, was examined in apps and in classes of image-based buttons. In the app analysis, 35.9% of analyzed apps had 90% or more of their assessed image-based buttons labeled, 45.9% had less than 10% of assessed image-based buttons labeled, and the remaining apps were relatively uniformly distributed along the proportion of elements that were labeled. In the class analysis, 92.0% of Floating Action Buttons were found to have missing labels, compared to 54.7% of Image Buttons and 86.3% of Clickable Images. We discuss how these accessibility barriers are addressed in existing treatments, including accessibility development guidelines.
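The two prevalence measures reported above (the per-app share of labelled image-based buttons, and the per-class rate of the missing-label barrier) can be sketched over hypothetical records; the field names and sample data below are invented.

```python
from collections import defaultdict

# Made-up sample records: (app_id, widget_class, has_label)
buttons = [
    ("app1", "FloatingActionButton", False),
    ("app1", "ImageButton", True),
    ("app1", "ClickableImage", False),
    ("app2", "ImageButton", True),
    ("app2", "ImageButton", True),
    ("app2", "FloatingActionButton", False),
]

def labelled_share_per_app(records):
    """Fraction of assessed image-based buttons that are labelled, per app."""
    totals, labelled = defaultdict(int), defaultdict(int)
    for app, _cls, has_label in records:
        totals[app] += 1
        labelled[app] += has_label
    return {app: labelled[app] / totals[app] for app in totals}

def missing_label_rate_per_class(records):
    """Prevalence of the missing-label barrier within each widget class."""
    totals, missing = defaultdict(int), defaultdict(int)
    for _app, cls, has_label in records:
        totals[cls] += 1
        missing[cls] += not has_label
    return {cls: missing[cls] / totals[cls] for cls in totals}
```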
Conference Paper
Modern smartphones are built with capacitive-sensing touchscreens, which can detect anything that is conductive or has a dielectric differential with air. The human finger is an example of such a dielectric, and works wonderfully with such touchscreens. However, touch interactions are disrupted by raindrops, water smear, and wet fingers because capacitive touchscreens cannot distinguish finger touches from other conductive materials. When users' screens get wet, the screen's usability is significantly reduced. RainCheck addresses this hazard by filtering out potential touch points caused by water to differentiate fingertips from raindrops and water smear, adapting in real-time to restore successful interaction to the user. Specifically, RainCheck uses the low-level raw sensor data from touchscreen drivers and employs precise selection techniques to resolve water-fingertip ambiguity. Our study shows that RainCheck improves gesture accuracy by 75.7%, touch accuracy by 47.9%, and target selection time by 80.0%, making it a successful remedy to interference caused by rain and other water.
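A purely illustrative filter in the spirit of the approach above: reject candidate touch points whose blob signature looks more like water than a fingertip. Real capacitive sensor data is far richer, and the blob features and cut-off values here are invented, not RainCheck's actual parameters.

```python
def is_finger(blob):
    """Hypothetical heuristic: fingertips give a strong capacitance delta
    over a compact contact area; raindrops are weak, smears are too large."""
    return blob["peak"] >= 30 and 4 <= blob["area"] <= 40

# Invented candidate touch blobs from one sensor frame.
touches = [
    {"peak": 55, "area": 12},   # fingertip-like: compact, strong signal
    {"peak": 18, "area": 9},    # raindrop-like: signal too weak
    {"peak": 40, "area": 120},  # water smear: contact area too large
]

accepted = [b for b in touches if is_finger(b)]  # only the fingertip survives
```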
Chapter
Many websites do not satisfy minimum contrast requirements. One reason could be that designers must select colors through trial and error using contrast calculators. This paper presents a visual framework for working with color contrasts. The foreground and background colors are detected automatically, and views are presented to simulate how a design is viewed with different levels of reduced vision. Moreover, saturation-brightness plots are introduced to help make valid color choices. Color corrections are proposed and visualized.
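The contrast calculators mentioned above typically implement the WCAG 2.x contrast-ratio computation, sketched below for colours given as 8-bit sRGB triples.

```python
def relative_luminance(rgb):
    """WCAG relative luminance of an sRGB colour (r, g, b) in 0-255."""
    def channel(c):
        c = c / 255
        # Linearise the sRGB transfer function per WCAG 2.x.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colours, ranging from 1:1 to 21:1."""
    hi, lo = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (hi + 0.05) / (lo + 0.05)
```

Black on white yields the maximum ratio of 21:1; WCAG 2.x level AA requires at least 4.5:1 for normal-size text, which is the "minimum contrast requirement" many sites fail.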
Conference Paper
In this paper we explore how screen-based smartphone interaction can be enriched when designers focus on the physical interaction issues surrounding the device. These consist of the hand grips used (Symmetric bimanual, Asymmetric bimanual with thumb, Single handed, Asymmetric bimanual with finger), body postures (Sitting at a table, Standing, Lying down) and the tilting of the smartphone itself. These physical interactions are well described in the literature and several research papers provide empirical metrics describing them. In this paper, we go one step further by using this data to generate new screen-based interactions. We achieved this by conducting two workshops to investigate how smartphone interaction design can be informed by the physicality of smartphone interaction. By analysing the outcomes, we provide 14 new screen interaction examples with additional insights comparing outcomes for various body postures and grips.
Conference Paper
Gaze gesture-based interactions on a computer are promising, but the existing systems are limited by the number of supported gestures, recognition accuracy, need to remember the stroke order, lack of extensibility, and so on. We present a gaze gesture-based interaction framework where a user can design gestures and associate them with appropriate commands like minimize, maximize, scroll, and so on. This allows the user to interact with a wide range of applications using a common set of gestures. Furthermore, our gesture recognition algorithm is independent of the screen size and resolution, and the user can draw the gesture anywhere on the target application. Results from a user study involving seven participants showed that the system recognizes a set of nine gestures with an accuracy of 93% and an F-measure of 0.96. We envision that this framework can be leveraged in developing solutions for situational impairments and accessibility, and also for implementing a rich interaction paradigm.
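One common way to make gesture matching independent of screen size and position is to translate each gesture to its centroid and scale it to a unit bounding box before comparing, sketched below. This is a generic normalisation step in the spirit of template-based recognizers, not necessarily the paper's exact algorithm.

```python
def normalise(points):
    """Translate a gesture to its centroid and scale to a unit bounding box,
    removing dependence on where and how large it was drawn."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    span = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / span, (y - cy) / span) for x, y in points]

def distance(a, b):
    """Mean point-to-point distance between two equal-length gestures."""
    return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for (ax, ay), (bx, by) in zip(a, b)) / len(a)

# The same L-shaped stroke drawn small in one corner and large elsewhere
# normalises to (numerically) identical point sequences.
small = [(0, 0), (10, 0), (10, 10)]
large = [(100, 100), (300, 100), (300, 300)]
```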
Conference Paper
We present an investigation into how hand usage is affected by different body postures (Sitting at a table, Lying down and Standing) when interacting with smartphones. We theorize a list of factors (smartphone support, body support and muscle usage) and explore their influence on the tilt and rotation of the smartphone. From this we draw a list of hypotheses that we investigate in a quantitative study. We varied the body postures and grips (Symmetric bimanual, Asymmetric bimanual finger, Asymmetric bimanual thumb and Single-handed), studying the effects through a dual pointing task. Our results showed that the body posture Lying down had the most movement, followed by Sitting at a table and finally Standing. We additionally generate reports of motions performed using different grips. Our work extends previous research conducted with multiple grips in a sitting position by including other body postures; we anticipate that UI designers will use our results to inform the development of mobile user interfaces.