
Jonathan Dobres, PhD
Researcher at Massachusetts Institute of Technology
About
68 Publications
26,835 Reads
1,348 Citations
Publications (68)
Psychophysical research on text legibility has historically investigated factors such as size, colour and contrast, but there has been relatively little direct empirical evaluation of typographic design itself, particularly in the emerging context of glance reading. In the present study, participants performed a lexical decision task controlled by...
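For readers unfamiliar with the paradigm, a lexical decision task asks observers to classify letter strings as real words or nonwords as quickly and accurately as possible, with response time as the key measure. Below is a minimal, hypothetical console sketch of such a trial loop; the stimuli, key mapping, and timing method are illustrative assumptions and are not taken from the study's own protocol.

```python
import random
import time

# Small illustrative stimulus set; the study's actual materials and
# presentation software are not described here.
WORDS = ["glance", "signal", "letter", "driver"]
NONWORDS = ["glunce", "sarnal", "lotter", "druver"]

def run_lexical_decision(n_trials=8):
    """Minimal console lexical decision task: press 'w' for word,
    'n' for nonword. Records accuracy and response time per trial."""
    stimuli = [(w, True) for w in WORDS] + [(n, False) for n in NONWORDS]
    random.shuffle(stimuli)
    results = []
    for string, is_word in stimuli[:n_trials]:
        t0 = time.perf_counter()
        answer = input(f"{string}  [w/n]: ").strip().lower()
        rt = time.perf_counter() - t0
        correct = (answer == "w") == is_word
        results.append({"stimulus": string, "correct": correct, "rt_s": rt})
    return results

if __name__ == "__main__":
    for trial in run_lexical_decision():
        print(trial)
```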
Classification image analysis is a psychophysical technique in which noise components of stimuli are analyzed to produce an image that reveals critical features of a task. Here we use classification images to gain greater understanding of perceptual learning. To achieve reasonable classification images within a single session, we developed an effic...
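As a rough illustration of the technique described above, a classification image is conventionally computed by averaging the trial-by-trial noise fields separately for each stimulus-response combination and combining them with signs set by the observer's response (Ahumada-style). The sketch below assumes a yes/no detection task and illustrative variable names; it is not the efficient single-session procedure developed in the paper.

```python
import numpy as np

def classification_image(noise_fields, signal_present, said_present):
    """Basic classification image for a yes/no detection task.

    noise_fields   : (n_trials, height, width) array of the noise added on each trial
    signal_present : (n_trials,) bool, True when the signal was shown
    said_present   : (n_trials,) bool, True when the observer responded "present"
    """
    noise_fields = np.asarray(noise_fields, dtype=float)
    signal_present = np.asarray(signal_present, dtype=bool)
    said_present = np.asarray(said_present, dtype=bool)

    def mean_noise(stim, resp):
        mask = (signal_present == stim) & (said_present == resp)
        # Guard against empty cells (e.g., no false alarms in a short session)
        if not mask.any():
            return np.zeros(noise_fields.shape[1:])
        return noise_fields[mask].mean(axis=0)

    # Noise on "present" responses contributes positively and on "absent"
    # responses negatively, so pixels that push the observer toward
    # "present" show up as bright regions in the resulting image.
    return (mean_noise(True, True) + mean_noise(False, True)
            - mean_noise(True, False) - mean_noise(False, False))
```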
Generative AI (GenAI), specifically the Large Language Model (LLM), focuses on creating content like text, images, audio, or other data types similar to that created by humans. Advancements in various LLMs have accelerated AI development, potentially impacting various fields, including education. Can LLMs produce educational content with similar re...
Readability is on the cusp of a revolution. Fixed text is becoming fluid as a proliferation of digital reading devices rewrite what a document can do. As past constraints make way for more flexible opportunities, there is great need to understand how reading formats can be tuned to the situation and the individual. We aim to provide a firm foundati...
Voice interfaces reduce visual demand compared with visual-manual interfaces, but the extent depends on design. This study compared visual demand during baseline driving with driving while using voice or manual inputs to place calls with Chevrolet MyLink, Volvo Sensus, or a smartphone. Mean glance duration and total eyes-off-road-time increased whe...
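Mean glance duration and total eyes-off-road time are conventionally derived from the durations of individual off-road glances recorded during a task. The sketch below shows that arithmetic under an assumed, simplified glance-record format; it is not the study's own analysis code, and the 2.0 s long-glance threshold is a value commonly used in distraction guidelines rather than one taken from this abstract.

```python
from statistics import mean

def glance_metrics(glances, long_glance_threshold_s=2.0):
    """Summarize off-road glance behavior for one task.

    glances : list of (location, duration_s) tuples, one entry per glance,
              where location is e.g. "road", "display", or "phone".
              This record format is an assumption for illustration.
    """
    off_road = [d for loc, d in glances if loc != "road"]
    return {
        "mean_glance_duration_s": mean(off_road) if off_road else 0.0,
        "total_eyes_off_road_time_s": sum(off_road),
        "n_off_road_glances": len(off_road),
        "n_long_glances": sum(1 for d in off_road if d > long_glance_threshold_s),
    }

# Example: a short task with three glances to the center display
task = [("road", 1.4), ("display", 0.9), ("road", 2.0),
        ("display", 2.3), ("road", 1.1), ("display", 0.7)]
print(glance_metrics(task))
```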
In our age of ubiquitous digital displays, adults often read in short, opportunistic interludes. In this context of Interlude Reading, we consider if manipulating font choice can improve adult readers' reading outcomes. Our studies normalize font size by human perception and use hundreds of crowdsourced participants to provide a foundation for unde...
Modern digital interfaces display typeface in ways new to the 500 year old art of typography, driving a shift in reading from primarily long-form to increasingly short-form. In safety-critical settings, such at-a-glance reading competes with the need to understand the environment. To keep both type and the environment legible, a variety of ‘middle...
Typography plays an increasingly important role in today’s dynamic digital interfaces. Graphic designers and interface engineers have more typographic options than ever before. Sorting through this maze of design choices can be a daunting task. Here we present the results of an experiment comparing differences in glance-based legibility between eig...
This paper describes a set of data made available that contains detailed subtask coding of interactions with several production vehicle human machine interfaces (HMIs) on open roadways, along with accompanying eyeglance data.
The impact of using a smartwatch to initiate phone calls on driver workload, attention, and performance was compared to smartphone visual-manual (VM) and auditory-vocal (AV) interfaces. In a driving simulator, 36 participants placed calls using each method. While task time and number of glances were greater for AV calling on the smartwatch vs. smar...
In-vehicle information systems that allow drivers to use a single voice command to complete a task rather than multiple commands better keep drivers’ attention toward the road, especially compared with when drivers complete the task manually. However, single voice commands are longer and more complex and may be difficult for older drivers to use. T...
Interest in leveraging smartphone technology for scientific data collection has increased significantly in recent years. Mobile platforms have now been employed to investigate a variety of physiological and behavioral phenomena. Here we add to this rapidly growing body of work, using a specially designed mobile application to collect data on text l...
Drivers adapt their glance behavior when using automation, which may detract attention from their surroundings. Glance behavior during parallel parking maneuvers performed with and without automated steering was compared. Drivers directed a smaller proportion of their glances toward the parking space and spent less time looking at it when using aut...
Allergen information on food labels is not standardized, making allergen avoidance difficult for consumers. This study investigated the speed and accuracy of allergen identification on commercial packaging across different types of warning labels. The results identified packaging label characteristics significantly correlated with faster and more a...
Reading at a glance, once a relatively infrequent mode of reading, is becoming common. Mobile interaction paradigms increasingly dominate the way in which users obtain information about the world, which often requires reading at a glance, whether from a smartphone, wearable device, or in-vehicle interface. Recent research in these areas has shown t...
Applied research on driving and basic vision research have held similar views on central, fovea-based vision as the core of visual perception. In applied work, the concept of the Useful Field, as determined by the Useful Field of View (UFOV) test, divides vision between a “useful” region towards the center of the visual field, and the rest of the v...
When designers typographically tweak fonts to make an interface look ‘cool,’ they do so amid a rich design tradition, albeit one that is little-studied in regards to the rapid ‘at a glance’ reading afforded by many modern electronic displays. Such glanceable reading is routinely performed during human-machine interactions where accessing text compe...
The Alliance of Automobile Manufacturers and the National Highway Traffic Safety Administration have each developed a set of guidelines intended to help developers of embedded in-vehicle systems minimize the visual demand placed on a driver interacting with the visual-manual interface of the system. Though based on similar precepts, the guidelines...
Recent research on the legibility of digital displays has demonstrated a “positive polarity advantage”, in which black-on-white text configurations are more legible than their negative polarity, white-on-black counterparts. Existing research in this area suggests that the positive polarity advantage stems from the brighter illumination emitted by p...
Older drivers represent the fastest-growing segment of the driving population. Aging is associated with well-known declines in reaction time and visual processing, and, as such, future roadway infrastructure and related design considerations will need to accommodate this population. One potential area of concern is the legibility of highway signage...
Aging-related changes in the visual system diminish the capacity to perceive the world with the ease and fidelity younger adults are accustomed to. Among many consequences of this, older adults find that text that they could once read easily proves difficult to read, even with sufficient acuity correction. Building on previous work examining visual...
In-vehicle user interfaces increasingly rely on digital text to display information to the driver. Led by Apple's iOS, thin, lightweight typography has become increasingly popular in cutting-edge HMI designs. The legibility trade-offs of lightweight typography are sparsely studied, particularly in the glance-like reading scenarios necessitated by d...
Previous work examining the impact of a set of intrinsic and extrinsic features on relative legibility of typefaces has shown that legibility losses are more pronounced in older subjects (Dobres et al., VSS 2014). To better understand the effects of visual degradation on legibility for older and younger subjects, we performed an experiment in which...
Older drivers comprise an undue percentage of roadway crashes and fatalities, and existing data implicates decrements to situational awareness as one factor. Although forward attention in older drivers is well studied, rearward attention for this population is little explored. What evidence exists has suggested reduced mirror checks, especially und...
This paper presents the results of a study of how people interacted with a production voice-command based interface while driving on public roadways. Tasks included phone contact calling, full address destination entry, and point-of-interest (POI) selection. Baseline driving and driving while engaging in multiple-levels of an auditory-vocal cogniti...
Experiment 4 was undertaken as an exploratory study of driver behavior with and without ACC active during single-task baseline driving and when interacting with voice-involved and primary visual-manual infotainment secondary tasks. An analysis sample of 24 participants, equally balanced by gender and two age groups (20-29 and 60-69), was given trai...
Experiment 1 is the first in a series of three studies designed to develop data to support exploring the generalizability of, and extend upon, the findings on the demands of production level voice-command systems from the MIT AgeLab's Phase I CSRC work that was undertaken in a 2010 Lincoln MKS. Self-report, eye glance, physiology (heart rate and sk...
Experiment 2 is the second in a series of three studies designed to develop data to support exploring the generalizability of, and extend upon, the findings on the demands of production level voice-command systems from the MIT AgeLab's Phase I CSRC work that was undertaken in a 2010 Lincoln MKS. Self-report, eye glance, physiology (heart rate and s...
Experiment 3 is the third in a series of three studies designed to develop data to support exploring the generalizability of, and extend upon, the findings on the demands of production level voice-command systems from the MIT AgeLab's Phase I CSRC work that was undertaken in a 2010 Lincoln MKS. Self-report, eye glance, physiology (heart rate and sk...
A simulator study evaluated the extent to which the use of a smartwatch to initiate phone calls while driving impacts driver workload, attention, and performance, relative to visual-manual (VM) and auditory-vocal (AV) calling methods on a smartphone. Participants completed four calling tasks using each method while driving in a simulator and comple...
This research examined, as an exploratory secondary analysis, the frequency of lane departure warnings in two commercially available vehicles and users' behavioral and physiological responses to the alarms. The two lane departure systems used different alerting mechanisms. One provided an auditory alert, while the other activated haptic stimulation...
A Samsung Galaxy S4 and Apple iPhone 5s were compared in a driving simulator where participants performed visual-manual and auditory-vocal address entry tasks. Auditory-vocal tasks were associated with shorter task times, fewer off-road glances, lower workload ratings, and reduced impact on vehicle performance. Primarily nominal differences were fo...
Voice interface use has become increasingly popular in vehicles. It is important that these systems divert drivers' attention from the primary driving task as little as possible, and numerous efforts have been devoted to categorizing demands associated with these systems. Nonetheless, there is still much to be learned about how various implementati...
There is limited research on trade-offs in demand between manual and voice interfaces of embedded and portable technologies. Mehler et al. identified differences in driving performance, visual engagement and workload between two contrasting embedded vehicle system designs (Chevrolet MyLink and Volvo Sensus). The current study extends this work by c...
One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers’ visual and manual distractions with ‘infotainment’ technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices...
The need to differentiate between automotive brands often collides with the need for a legible interface. Color, typography, and layout are the basic visual components of brand communication. Ease of use, navigation, and ergonomics are part of the interaction cycle between human and machine and are, for the brand...
A driving simulation study assessed the impact of vocally entering an alphanumeric destination into Google Glass relative to voice and touch-entry methods using a handheld Samsung Galaxy S4 smartphone. Driving performance (standard deviation of lateral lane position and longitudinal velocity) and reaction to a light detection response task (DRT) w...
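The driving-performance and DRT measures named in this abstract have straightforward definitions: SDLP is the standard deviation of sampled lane position over the task, and DRT performance is typically summarized by hit rate and mean reaction time to periodic stimuli. A minimal sketch follows; the 0.1-2.5 s response window follows common DRT practice (e.g., ISO 17488) but should be treated as an assumption here, not the study's exact settings.

```python
import numpy as np

def driving_performance(lateral_position_m, speed_mps):
    """Standard deviation of lateral lane position (SDLP) and of longitudinal
    velocity, computed over evenly sampled measurements taken during a task."""
    return {
        "sdlp_m": float(np.std(lateral_position_m, ddof=1)),
        "sd_speed_mps": float(np.std(speed_mps, ddof=1)),
    }

def drt_summary(stimulus_onsets_s, response_times_s, min_rt_s=0.1, max_rt_s=2.5):
    """Hit rate and mean reaction time for a detection response task (DRT).

    A response is credited to a stimulus if it occurs within
    [min_rt_s, max_rt_s] of that stimulus onset; otherwise the stimulus is a miss.
    """
    rts = []
    for onset in stimulus_onsets_s:
        candidates = [r - onset for r in response_times_s
                      if min_rt_s <= r - onset <= max_rt_s]
        if candidates:
            rts.append(min(candidates))
    hit_rate = len(rts) / len(stimulus_onsets_s) if stimulus_onsets_s else 0.0
    return {"hit_rate": hit_rate,
            "mean_rt_s": float(np.mean(rts)) if rts else float("nan")}
```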
In-vehicle user interfaces increasingly rely on screens filled with digital text to display information to the driver. As these interfaces have the potential to increase the demands placed upon the driver, it is important to design them in a way that minimizes attention time to the device and thus keeps the driver focused on the road. Previous rese...
Multi-function in-vehicle interfaces are an increasingly common feature in automobiles. Over the past several years, these interfaces have taken on an ever-greater number of functions and the ways in which drivers interact with information have become more complex. Parallel with these technical developments, interest in ensuring that these systems...
A driving simulation study was performed to compare visual-manual (touch screen based) destination entry using a Samsung Galaxy S4 smartphone with the standard voice command based interface and a voice based “Hands-Free mode” that appears to be intended for use while driving (i.e. has a steering wheel icon adjacent to the mode selection menu and th...
Typeface design has long been considered an art, one guided by a long accumulation of best practices. Differences between typefaces can be obvious, such as the flourishes of a serif typeface versus the clean outlines of sans-serifs, or they may be minor, such as the variability of stroke width within a letter. Here we employ psychophysical techniqu...
The AAA Foundation for Traffic Safety tasked the MIT AgeLab with developing a data-driven system for rating new in-vehicle technologies, analogous to NCAP crashworthiness, but extended to scalar evaluations of the objective safety benefits of emerging safety technologies (e.g., adaptive headlights, back-up cameras, lane-departure warning). Such a s...
Text-rich driver–vehicle interfaces are increasingly common in new vehicles, yet the effects of different typeface characteristics on task performance in this brief off-road based glance context remains sparsely examined. Subjects completed menu selection tasks while in a driving simulator. Menu text was set either in a ‘humanist’ or ‘square grotes...
This report assesses the extent to which key findings from our initial on-road study (Reimer, Mehler, Dobres & Coughlin, 2013) on driver interaction with a production-version in-vehicle voice command system replicate, and considers whether two differing approaches to introducing drivers to the driver vehicle interface (DVI) impact their p...
The AAA Foundation for Traffic Safety tasked the MIT AgeLab with developing a data-driven system for rating new in-vehicle technologies, analogous to NCAP crashworthiness, but extended to scalar evaluations of the objective safety benefits of emerging safety technologies (e.g., adaptive headlights, back-up cameras, lane-departure warning). Such a s...
This report details the rationale, methods, and results of an on-road study assessing perceived workload, physiological arousal, visual attention, and basic driving performance metrics while drivers engaged in a number of tasks with a production-version in-vehicle voice-command system. The same metrics were also evaluated while participants carried out an...
This paper presents data on drivers’ behaviors associated with the use of portable, multi-modal human machine interfaces. It may contribute to ongoing discussions concerning guidelines for assessing visual-manipulative demands of in-vehicle technologies. This paper reports the eye tracking and driving performance data from 24 younger adults (20-29...
A simulation study compared 23 young adult drivers’ task completion time, mean glance time, number of glances, and percentage of long glances while performing a navigation entry task with a Garmin portable GPS system and a mobile navigation application (iOS 5 Google Maps) on an iPod Touch. We compared participants’ performance on these tasks using...
As the population has become both older and more technologically literate, a new class of “brain training” computer programs have gained in popularity. Though these programs have attracted substantial attention from scientists and consumers, the extent of their benefits, if any, remain unclear. Here the authors employ neuropsychological tests and b...
In an on-road experiment, driving performance, visual attention, heart rate and subjective ratings of workload were evaluated in response to a working memory (n-back) and a visual-spatial (clock) task. Subjective workload ratings for the two types of tasks did not statistically differ, suggesting a similar level of overall workload. Gaze concentrat...
Proposed visual-manual distraction guidelines for in-vehicle electronic devices (NHTSA, 2012) specify 3 criteria by which unacceptable levels of visual distraction are to be quantified using driving simulation testing. This paper reports on data obtained on a sample of 24 younger adults (20-29 years) dialing a flip-style phone with tactile buttons...
Visual perceptual learning (VPL) is defined as a long-term performance enhancement on a visual task, and is typically thought of as a manifestation of plasticity in visual processing. It is thought that neural representations relevant to a recently learned task are consolidated over the course of hours or days and made robust against the effects of...
Background / Purpose:
A number of studies have examined the role of response feedback (informing an observer of performance accuracy) during training in perceptual learning and found that feedback increases the magnitude and speed of learning (Herzog and Fahle, 1998 and 1999). Here we present evidence for a new role of feedback: feedback stabilize...
Feedback regarding the correctness of subjects' responses has been shown to have beneficial effects on perceptual learning. It has been shown that feedback can increase the rate of learning (Herzog & Fahle, 1999) or make it possible for an observer to learn with stimuli that would be too difficult to learn in the absence of feedback (Seitz et al 20...