
Jacob Wobbrock
Ph.D., Human-Computer Interaction
University of Washington, Seattle | UW · Information School
About
177 Publications
72,226 Reads
15,028 Citations
Introduction
I am a Professor in The Information School and, by courtesy, in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. I direct the ACE Lab, comprising Ph.D. students from information science and computer science. My field is human-computer interaction (HCI), where my research involves understanding people’s interactions with computers and information, and improving those interactions through design and engineering, especially for people with disabilities. I am the past chair of the Master of Human-Computer Interaction & Design program and an active member of the DUB Group.
Additional affiliations
September 2011 - present
September 2006 - September 2011
September 2006 - September 2017
Publications (177)
Computer iconography in desktop operating systems and applications has evolved in style but, in many cases, not in substance for decades. For example, in many applications, a 3.5" floppy diskette icon still represents the “Save” function. But many of today's young adult computer users grew up without direct physical experience of floppy diskettes a...
Method-independent text entry evaluation tools are often used to conduct text entry experiments and compute performance metrics, like words per minute and error rates. The input stream paradigm of Soukoreff & MacKenzie (2001, 2003) remains prevalent; it presents a string for transcription and uses a strictly serial character representation...
Current text correction processes on mobile touch devices are laborious: users either extensively use backspace, or navigate the cursor to the error position, make a correction, and navigate back, usually by employing multiple taps or drags over small targets. In this paper, we present three novel text correction techniques to improve the correctio...
Online news sources have transformed civic discourse, and much has been made of their credibility. Although web page credibility has been investigated generally, most work has focused on the credibility of web page content. In this work, we study the isolated appearance of news-like web pages. Specifically, we report on a laboratory experiment invo...
Situationally induced impairments and disabilities (SIIDs) can compromise people's use of mobile devices. Factors like walking, divided attention, cold temperatures, low light levels, glare, inebriation, fear, loud noises, or rainwater can make using a device in off-desktop environments challenging and even unsafe. Unfortunately, today's mobile dev...
This chapter presents an overview of situationally-induced impairments and disabilities, or SIIDs, which are caused by situations, contexts, or environments that negatively affect the abilities of people interacting with technology, especially when they are on-the-go. Although the lived experience of SIIDs is, of course, unlike that of health-induc...
Situationally-induced impairments and disabilities (SIIDs) make it difficult for users of interactive computing systems to perform tasks due to context (e.g., listening to a phone call when in a noisy crowd) rather than a result of a congenital or acquired impairment (e.g., hearing damage). SIIDs are a great concern when considering the ubiquitousn...
Human-computer input performance inherently involves speed-accuracy tradeoffs: the faster users act, the more inaccurate those actions are. Therefore, comparing speeds and accuracies separately can result in ambiguous outcomes: Does a fast but inaccurate technique perform better or worse overall than a slow but accurate one? For pointing, speed an...
We present Cluster Touch, a combined user-independent and user-specific touch offset model that improves the accuracy of touch input on smartphones for people with motor impairments, and for people experiencing situational impairments while walking. Cluster Touch combines touch examples from multiple users to create a shared user-independent touch...
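To illustrate the general idea of a touch offset model, the following is a hypothetical, simplified Python sketch, not the published Cluster Touch algorithm: cluster observed touch points, store a mean correction vector per cluster, and shift each new touch by the offset of its nearest cluster. The function names, the (touch_x, touch_y, target_x, target_y) data layout, and the cluster count are assumptions made for illustration only.

    # Hypothetical, simplified cluster-based touch offset model (illustrative sketch;
    # not the published Cluster Touch algorithm). Assumes every cluster receives at
    # least one training example.
    import numpy as np
    from sklearn.cluster import KMeans

    def fit_offset_model(examples, n_clusters=16):
        touches = examples[:, :2]                   # observed touch points
        offsets = examples[:, 2:] - touches         # intended target minus touch
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(touches)
        # mean correction vector for each cluster of touch locations
        cluster_offsets = np.array([
            offsets[km.labels_ == k].mean(axis=0) for k in range(n_clusters)
        ])
        return km, cluster_offsets

    def correct_touch(km, cluster_offsets, touch_xy):
        k = km.predict(np.array([touch_xy]))[0]     # nearest cluster
        return np.asarray(touch_xy) + cluster_offsets[k]

In this kind of model, a user-independent component could be fit from pooled examples and then blended with a small number of user-specific examples; the sketch above shows only the shared, pooled case.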
End-user elicitation studies are a popular design method. Currently, such studies are usually confined to a lab, limiting the number and diversity of participants, and therefore the representativeness of their results. Furthermore, the quality of the results from such studies generally lacks any formal means of evaluation. In this paper, we address...
The workshop paper can be downloaded from https://dl.acm.org/citation.cfm?id=3299029, and the proceedings for the workshop can be found at https://arxiv.org/html/1904.05382.
Pointing to targets in graphical user interfaces remains a frequent and fundamental necessity in modern computing systems. Yet for millions of people with motor impairments, children, and older users, pointing—whether with a mouse cursor, a stylus, or a finger on a touch screen—remains a major access barrier because of the fine-motor skills require...
End-user elicitation studies are a popular design method, but their data require substantial time and effort to analyze. In this paper, we present Crowdsensus, a crowd-powered tool that enables researchers to efficiently analyze the results of elicitation studies using subjective human judgment and automatic clustering algorithms. In addition to ou...
We conduct the first large-scale analysis of the accessibility of mobile apps, examining what unique insights this can provide into the state of mobile app accessibility. We analyzed 5,753 free Android apps for label-based accessibility barriers in three classes of image-based buttons: Clickable Images, Image Buttons, and Floating Action Buttons. A...
Personal technologies are rarely designed to be accessible to disabled people, partly due to the perceived challenge of including disability in design. Through design workshops, we addressed this challenge by infusing user-centered design activities with Design for Social Accessibility-a perspective emphasizing social aspects of accessibility-to in...
Modern smartphones are built with capacitive-sensing touchscreens, which can detect anything that is conductive or has a dielectric differential with air. The human finger is an example of such a dielectric, and works wonderfully with such touchscreens. However, touch interactions are disrupted by raindrops, water smear, and wet fingers because cap...
We introduce $Q, a super-quick, articulation-invariant point-cloud stroke-gesture recognizer for mobile, wearable, and embedded devices with low computing resources. $Q ran up to 142X faster than its predecessor $P in our benchmark evaluations on several mobile CPUs, and executed in less than 3% of $P's computations without any accuracy loss. In ou...
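For readers unfamiliar with point-cloud recognizers, the sketch below is a hypothetical, heavily simplified Python illustration of cloud matching in the spirit of the $-family; it is not the optimized $Q algorithm, which additionally uses greedy one-to-one point matching, early rejection, and lookup tables. The function names and the assumption that gestures arrive as resampled, normalized (N, 2) arrays are illustrative only.

    # Hypothetical, simplified point-cloud gesture matcher (illustrative sketch;
    # not the published $Q algorithm).
    import numpy as np

    def cloud_distance(a, b):
        # Sum of nearest-neighbor distances from each point in cloud a to cloud b.
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
        return d.min(axis=1).sum()

    def recognize(candidate, templates):
        # templates: dict mapping gesture name -> (N, 2) point cloud
        return min(templates, key=lambda name: cloud_distance(candidate, templates[name]))

The speedups reported for $Q come precisely from avoiding the brute-force pairwise distance computation shown here, which is what makes such recognizers practical on low-power wearable and embedded CPUs.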
By focusing on users' abilities rather than disabilities, designers can create interactive systems better matched to those abilities.
Breathalyzers, the standard quantitative method for assessing inebriation, are primarily owned by law enforcement and used only after a potentially inebriated individual is caught driving. However, not everyone has access to such specialized hardware. We present drunk user interfaces: smartphone user interfaces that measure how alcohol affects a pe...
Talk therapy is a common, effective, and desirable form of mental health treatment. Yet, it is inaccessible to many people. Enabling peers to chat online using effective principles of talk therapy could help scale this form of mental health care. To understand how such chats could be designed, we conducted a two-week field experiment with 40 people...
Despite years of addressing disability in technology design and advocating user-centered design practices, popular mainstream technologies remain largely inaccessible for people with disabilities. We conducted a design course study investigating how student designers regard disability and explored how designing for multiple disabled and nondisabled...
With the ubiquity of mobile touchscreen devices like smartphones, two widely used text entry methods have emerged: small touch-based keyboards and speech recognition. Although speech recognition has been available on desktop computers for years, it has continued to improve at a rapid pace, and it is currently unknown how today's modern speech recog...
Mobile accessibility is often a property considered at the level of a single mobile application (app), but rarely on a larger scale of the entire app "ecosystem," such as all apps in an app store, their companies, developers, and user influences. We present a novel conceptual framework for the accessibility of mobile apps inspired by epidemiology....
The term "disability" connotes an absence of ability, but is like saying "dis-money" or "dis-height." All living people have some abilities [2]. Unfortunately, history is filled with examples of a focus on dis-ability, on what is missing, and on ensuing attempts to replace lost function to make people match a rigid world. Although often well intend...
We present Group Touch, a method for distinguishing among multiple users simultaneously interacting with a tabletop computer using only the touch information supplied by the device. Rather than tracking individual users for the duration of an activity, Group Touch distinguishes users from each other by modeling whether an interaction with the table...
We present cascading dwell gaze typing, a novel approach to dwell-based eye typing that dynamically adjusts the dwell time of keys in an on-screen keyboard based on the likelihood that a key will be selected next, and the location of the key on the keyboard. Our approach makes unlikely keys more difficult to select and likely keys easier to select...
We introduce interaction proxies as a strategy for runtime repair and enhancement of the accessibility of mobile applications. Conceptually, interaction proxies are inserted between an application's original interface and the manifest interface that a person uses to perceive and manipulate the application. This strategy allows third-party developer...
Barriers to accessing mental health care leave the majority of people with mental illnesses without professional care. Peer support has been shown to address gaps in care, and could scale to wider audiences through technology. But technology design for mental health peer support lags far behind tools for individuals and clinicians. To identify oppo...
Despite practices addressing disability in design and advocating user-centered design (UCD) approaches, popular mainstream technologies remain largely inaccessible for people with disabilities. We conducted a design course study investigating how student designers regard disability and explored how designing for both disabled and non-disabled users...
With laptops and desktops, the dominant method of text entry is the full-size keyboard; now with the ubiquity of mobile devices like smartphones, two new widely used methods have emerged: miniature touch screen keyboards and speech-based dictation. It is currently unknown how these two modern methods compare. We therefore evaluated the text entry p...
Research on children's interactions with touchscreen devices has examined small and large screens and compared interaction to adults or among children of different ages. Little work has explicitly compared interaction on different platforms, however. Large touchscreen displays can be deployed flat, as in a table, or vertically, as on a wall. While...
Elicitation studies, where users supply proposals meant to effect system commands, have become a popular method for system designers. But the method to date has assumed a within-subjects procedure and statistics. Despite the benefits of examining the relative agreement of independent groups (e.g., men versus women, children versus adults, novices v...
We present two contributions toward improving the accessibility of touch screens for people with motor impairments. First, we provide an exploration of the touch behaviors of 10 people with motor impairments, e.g., we describe how touching with the back or sides of the hand, with multiple fingers, or with knuckles creates varied multi-point touches...
Data not suitable for classic parametric statistical analyses arise frequently in human–computer interaction studies. Various nonparametric statistical procedures are appropriate and advantageous when used properly. This chapter organizes and illustrates multiple nonparametric procedures, contrasting them with their parametric counterparts. Guidanc...
Interaction logs generated by educational software can provide valuable insights into the collaborative learning process and identify opportunities for technology to provide adaptive assistance. Modeling collaborative learning processes at tabletop computers is challenging, as the computer is only able to log a portion of the collaboration, namely...
With the recent influx of smartphones, tablets, and wearables such as watches and glasses, personal interactive device use is increasingly visible and commonplace in public and social spaces. Assistive Technologies (ATs) used by people with disabilities are observable to others and, as a result, can affect how AT users are perceived. This raises th...
ARTool is an R package implementing the Aligned Rank Transform (ART) for conducting nonparametric analyses of variance on factorial models. This implementation follows the ART procedure used in the original ARTool by Wobbrock et al.
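As a rough illustration of what the package automates, the following Python sketch shows the align-and-rank step for a single main effect, following the published ART procedure in spirit: subtract all effects other than the one of interest, rank the aligned responses, and then run an ordinary factorial ANOVA on the ranks. The column names ("y", "A", "B") and the two-factor design are assumptions for illustration; the R package handles all effects, including interactions, automatically.

    # Hypothetical sketch of the ART align-and-rank step for one main effect
    # (illustrative only; use the ARTool R package for real analyses).
    import pandas as pd
    from scipy.stats import rankdata

    def art_ranks_for_main_effect(df, response="y", factor="A", all_factors=("A", "B")):
        grand_mean = df[response].mean()
        cell_mean = df.groupby(list(all_factors))[response].transform("mean")
        effect = df.groupby(factor)[response].transform("mean") - grand_mean
        aligned = df[response] - cell_mean + effect   # strip all effects except the one of interest
        return rankdata(aligned)                      # midranks for ties

    # The resulting ranks are submitted to a standard factorial ANOVA, and only
    # the effect aligned for (here, factor A) is interpreted from that model.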
Work in human-computer interaction has generally assumed either a single user or a group of users working together in a shared virtual space. Recent crowd-powered systems use a different model in which a dynamic group of individuals (the crowd) collectively form a single actor that responds to real-time performance tasks, e.g., controlling an on-sc...
Mobile sign language video conversations can become unintelligible if high video transmission rates cause network congestion and delayed video. In an effort to understand the perceived lower limits of intelligible sign language video intended for mobile communication, we evaluated sign language video transmitted at four low frame rates (1, 5, 10, a...
We address in this work the process of agreement rate analysis for characterizing the level of consensus between participants' proposals elicited during guessability studies. Two new measures, i.e., disagreement rate for referents and coagreement rate between referents, are proposed to accompany the widely-used agreement rate formula of Wobbrock et...
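For context, the agreement rate formula referenced here is commonly stated as follows; this is a paraphrase of the formula as it appears in the elicitation literature, not text quoted from this abstract:

    A = \frac{1}{|R|} \sum_{r \in R} \; \sum_{P_i \subseteq P_r} \left( \frac{|P_i|}{|P_r|} \right)^{2}

where R is the set of referents, P_r is the set of proposals elicited for referent r, and each P_i is a group of identical proposals within P_r. The disagreement and coagreement measures proposed in this work are intended to complement this per-referent consensus score.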
Smartphones and tablets are often used in dynamic environments that force users to break focus and attend to their surroundings, creating a form of "situational impairment." Current mobile devices have no ability to sense when users divert or restore their attention, let alone provide support for resuming tasks. We therefore introduce SwitchBack, a...
As we increasingly strive for scientific rigor and generalizability in HCI research, should we entertain any hope that by doing good science, our discoveries will eventually be more transferrable to industry? We present an in-depth case study of how an HCI research innovation goes through the process of transitioning from a university project to a...
We introduce gesture heatmaps, a novel gesture analysis technique that employs color maps to visualize the variation of local features along the gesture path. Beyond current gesture analysis practices that characterize gesture articulations with single-value descriptors, e.g., size, path length, or speed, gesture heatmaps are able to show with colo...
Mobile sign language video communication has the potential to be more accessible and affordable if the current recommended video transmission standard of 25 frames per second at 100 kilobits per second (kbps), as prescribed by the International Telecommunication Union's Telecommunication Standardization Sector (ITU-T) in Q.26/16, were relaxed. To investigate sign language video...
Researchers are making efforts to reduce legacy bias, which is a limitation of current elicitation methods. There are many open challenges in updating elicitation methods to incorporate production, priming, and partner techniques. Gesture elicitation is emerging as a potential approach to address this challenge. Gesture elicitation has been applied...
We present a new method of predicting the endpoints of mouse movements. While prior approaches to endpoint prediction have relied upon normative kinematic laws, regression, or control theory, our approach is straightforward but kinematically rich. Our key insight is to regard the unfolding velocity profile of a pointing movement as a 2-D stroke ges...
We present the Bubble Lens, a new target acquisition technique that remedies the limitations of the Bubble Cursor to increase the speed and accuracy of acquiring small, dense targets--precisely those targets for which the Bubble Cursor degenerates to a point cursor. When targets are large and sparse, the Bubble Lens behaves like the Bubble Cursor....
A study of small groups collaborating at an interactive tabletop was conducted. Group discussions were coded according to the type and quality of social regulation processes used. Episodes of high and low quality social regulation were then matched with the software logs to identify patterns of interaction associated with quality of social regulati...
Current measures of stroke gesture articulation lack descriptive power because they only capture absolute characteristics about the gesture as a whole, not fine-grained features that reveal subtleties about the gesture articulation path. We present a set of twelve new relative accuracy measures for stroke gesture articulation that characterize the...
Adding tactile feedback to touch screens can improve their accessibility to blind users, but prior approaches to integrating tactile feedback with touch screens have either offered limited functionality or required extensive (and typically expensive) customization of the hardware. We introduce touchplates, carefully designed tactile guides that pro...
Mobile sign language video conversations can become unintelligible due to high video transmission rates causing network congestion and delayed video. In an effort to understand how much sign language video quality can be sacrificed, we evaluated the perceived lower limits of intelligible sign language video transmitted at four low frame rates (1, 5...
This paper describes Snippets, a novel method for improving computerized data entry from paper forms. Using computer vision techniques, Snippets segments an image of the form into small snippets that each contain the content for a single form field. Data entry is performed by looking at the snippets on the screen and typing values directly on the s...
Gesture-based touch screen user interfaces, when designed to be accessible to blind users, can be an effective mode of interaction for those users. However, current accessible touch screen interaction techniques suffer from one serious limitation: they are only usable on devices that have been explicitly designed to support them. Access Lens is a n...
Despite the apparent popularity of touchscreens for older adults, little is known about the psychomotor performance of these devices. We compared performance between older adults and younger adults on four desktop and touchscreen tasks: pointing, dragging, crossing and steering. On the touchscreen, we also examined pinch-to-zoom. Our results show t...
We present a multi-site field study to evaluate LemonAid, a crowdsourced contextual help approach that allows users to retrieve relevant questions and answers by making selections within the interface. We deployed LemonAid on 4 different web sites used by thousands of users and collected data over several weeks, gathering over 1,200 usage logs, 168...
A "discount" version of Q-methodology for HCI, called "HCI-Q", can be used in iterative design cycles to explore, from the point of view of users and other stakeholders, what makes technologies personally significant. Initially, designers critically reflect on their own assumptions about how a design may affect social and individual behavior. Then,...
The challenge of mobile text entry is exacerbated as mobile devices are used in a number of situations and with a number of hand postures. We introduce ContextType, an adaptive text entry system that leverages information about a user's hand posture (using two thumbs, the left thumb, the right thumb, or the index finger) to improve mobile touch scr...
Little work has been done on understanding the articulation patterns of users' touch and surface gestures, despite the importance of such knowledge to inform the design of gesture recognizers and gesture sets for different applications. We report a methodology to analyze user consistency in gesture production, both between-users and within-user, by...
The Psychomotor Vigilance Task (PVT) is a validated reaction time (RT) test used to assess aspects of sleep loss including alertness and sleepiness. PVT typically requires a physical button to assess RT, which minimizes the effect of execution time (the time taken to perform a gesture) on RT. When translating this application to mobile devices, a t...
Pointing to targets in graphical user interfaces remains a frequent and fundamental necessity in modern computing systems. Yet for millions of people with motor impairments, children, and older users, pointing-whether with a mouse cursor, a stylus, or a finger on a touch screen-remains a major access barrier because of the fine-motor skills require...
Enabling end-users of Augmentative and Alternative Communication (AAC) systems to add personalized video content at runtime holds promise for improving communication, but the requirements for such systems are as yet unclear. To explore this issue, we present Vid2Speech, a prototype AAC system for children with complex communication needs (CCN) that...
Blind mobile device users face security risks such as inaccessible authentication methods, and aural and visual eavesdropping. We interviewed 13 blind smartphone users and found that most participants were unaware of or not concerned about potential security threats. Not a single participant used optional authentication methods such as a password-p...
We introduce GripSense, a system that leverages mobile device touchscreens and their built-in inertial sensors and vibration motor to infer hand postures including one- or two-handed interaction, use of thumb or index finger, or use on a table. GripSense also senses the amount of pressure a user exerts on the touchscreen despite a lack of direct p...
Rapid prototyping of gesture interaction for emerging touch platforms requires that developers have access to fast, simple, and accurate gesture recognition approaches. The $-family of recognizers ($1, $N) addresses this need, but the current most advanced of these, $N-Protractor, has significant memory and execution costs due to its combinatoric g...
We describe an experiment to determine the effects of meditation training on the multitasking behavior of knowledge workers. Three groups each of 12-15 human resources personnel were tested: (1) those who underwent an 8-week training course on mindfulness-based meditation, (2) those who endured a wait period, were tested, and then underwent the sam...
We present Input Finger Detection (IFD), a novel technique for nonvisual touch screen input, and its application, the Perkinput text entry method. With IFD, signals are input into a device with multi-point touches, where each finger represents one bit, either touching the screen or not. Maximum likelihood and tracking algorithms are used to detect...
We present a general-purpose implementation of a target-aware pointing technique, functional across an entire desktop and independent of application implementations. Specifically, we implement Grossman and Balakrishnan's Bubble Cursor, the fastest general pointing facilitation technique in the literature. Our implementation obtains the necessary kn...
Although typing on touchscreens is slower than typing on physical keyboards, touchscreens offer a critical potential advantage: they are software-based, and, as such, the keyboard layout and classification models used to interpret key presses can dynamically adapt to suit each user's typing pattern. To explore this potential, we introduce and evalu...
Web-based technical support such as discussion forums and social networking sites have been successful at ensuring that most technical support questions eventually receive helpful answers. Unfortunately, finding these answers is still quite difficult, since users' textual queries are often incomplete, imprecise, or use different vocabularies to des...
We present the Input Observer, a tool that can run quietly in the background of users' computers and measure their text entry and mouse pointing performance from everyday use. In lab studies, participants are presented with prescribed tasks, enabling easy identification of speeds and errors. In everyday use, no such prescriptions exist. We devised...
The lack of tactile feedback on touch screens makes typing difficult, a challenge exacerbated when situational impairments like walking vibration and divided attention arise in mobile settings. We introduce WalkType, an adaptive text entry system that leverages the mobile device's built-in tri-axis accelerometer to compensate for extraneous movemen...
The HCI research community grows bigger each year, refining and expanding its boundaries in new ways. The ability to effectively review submissions is critical to the growth of CHI and related conferences. The review process is designed to produce a consistent supply of fair, high-quality reviews without overloading individual reviewers; yet, after...
Although many techniques have been proposed to improve text input on touch screens, the vast majority of this research ignores non-alphanumeric input (i.e., punctuation, symbols, and modifiers). To support this input, widely adopted commercial touch-screen interfaces require mode switches to alternate keyboard layouts for most punctuation and symbo...
Our workshop has three primary goals. The first goal is community building: we want to get text entry researchers that are active in different communities into one place. Our second goal is to promote CHI as a natural and compelling focal point for all kinds of text entry research. The third goal is to discuss some difficult issues that are hard or...
Touchscreen devices have exploded onto the commercial stage in the past decade, most prolifically in smartphones, but in other forms as well, including tablets and interactive tabletops. A touchscreen's flat, glassy surface means that even expert typists have to look down at their fingers instead of feeling for the home row keys to situate their ha...
We explore using vibration on a smartphone to provide turn-by-turn walking instructions to people with visual impairments. We present two novel feedback methods called Wand and ScreenEdge and compare them to a third method called Pattern. We built a prototype and conducted a user study where 8 participants walked along a pre-programmed route using...
Video and image quality are often objectively measured using peak signal-to-noise ratio (PSNR), but for sign language video, human comprehension is most important. Yet the relationship of human comprehension to PSNR has not been studied. In this survey, we determine how well PSNR matches human comprehension of sign language video. We use very low b...
We present Portico, a portable system for enabling tangible interaction on and around tablet computers. Two cameras on small foldable arms are positioned above the display to recognize a variety of physical objects placed on or around the tablet. These cameras have a larger field-of-view than the screen, allowing Portico to extend interaction signi...
Many touch screens remain inaccessible to blind users, and those approaches to providing access that do exist offer minimal support for interacting with large touch screens or spatial data. In this paper, we introduce a set of three software-based access overlays intended to improve the accessibility of large touch screen interfaces, specifically i...
Despite the growing research on usability in the pre-development phase, we know little about post-deployment usability activities. To characterize these activities, we surveyed 333 full-time usability professionals and consultants working in large and small corporations from a wide range of industries. Our results show that, as a whole, usability p...
Nonparametric data from multi-factor experiments arise often in human-computer interaction (HCI). Examples may include error counts, Likert responses, and preference tallies. But because multiple factors are involved, common nonparametric tests (e.g., Friedman) are inadequate, as they are unable to examine interaction effects. While some statistica...
Blind and deaf-blind people often rely on public transit for everyday mobility, but using transit can be challenging for them. We conducted semi-structured interviews with 13 blind and deaf-blind people to understand how they use public transit and what human values were important to them in this domain. Two key values were identified: independence...
Touch screen surfaces large enough for ten-finger input have become increasingly popular, yet typing on touch screens pales in comparison to physical keyboards. We examine typing patterns that emerge when expert users of physical keyboards touch-type on a flat surface. Our aim is to inform future designs of touch screen keyboards, with the ultimate...
Despite growing awareness of the accessibility issues surrounding touch screen use by blind people, designers still face challenges when creating accessible touch screen interfaces. One major stumbling block is a lack of understanding about how blind people actually use touch screens. We conducted two user studies that compared how blind people and...
Fitts' law (1954) characterizes pointing speed-accuracy performance as throughput, whose invariance to target distances (A) and sizes (W) is known. However, it is unknown whether throughput and Fitts' law models in general are invariant to task dimensionality (1-D vs. 2-D), whether univariate (SDx) or bivariate (SDx,y) endpoint deviation is used, w...
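For reference, the quantities at issue are commonly defined as follows; these are standard formulations from the Fitts' law literature, not text quoted from this abstract:

    MT = a + b \log_2\!\left(\frac{A}{W} + 1\right), \qquad
    TP = \frac{ID_e}{MT}, \qquad
    ID_e = \log_2\!\left(\frac{A_e}{W_e} + 1\right), \qquad
    W_e = 4.133 \times SD

where MT is movement time, A and W are nominal target distance and size, A_e is the effective movement amplitude, and SD is the standard deviation of selection endpoints, computed either as univariate SDx or bivariate SDx,y, which is the distinction examined here.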
Recently, Wobbrock et al. (2008) derived a predictive model of pointing accuracy to complement Fitts' law's predictive model of pointing speed. However, their model was based on one-dimensional (1-D) horizontal movement, while applications of such a model require two dimensions (2-D). In this paper, the pointing error model is investigated for 2-D...
Few research studies focus on how the use of assistive technologies is affected by social interaction among people. We present an interview study of 20 individuals to determine how assistive technology use is affected by social and professional contexts and interactions. We found that specific assistive devices sometimes marked their users as havin...