Figure 2
Field of View of the Human Eye. (from [28], CC BY-SA 3.0). The central vision is the very center of our gaze with an eccentricity of 2.5°, and para-central vision has an eccentricity of 4°. The rest is peripheral vision, which is weaker at distinguishing details and colors.


Context in source publication

Context 1
... design a better approach to display information, we need to have a better understanding of the human visual system [26]. Human vision can be categorized into central, para-central, and peripheral vision (see Fig. 2). The fovea provides central vision, which is the very center of gaze with an eccentricity of 2.5° (5° of the field of view) and has the highest visual acuity. Para-central vision has an eccentricity of 4°, and the rest is peripheral vision, which can extend to more than 90°. Peripheral vision is weaker at distinguishing detail, color, and ...
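As a rough, illustrative aid (not part of the source publication): these eccentricity values can be converted into lateral offsets from the gaze point at a known viewing distance with a simple tangent relation. The 40 cm viewing distance below is a placeholder assumption.

```python
# Illustrative sketch: converting visual eccentricity (degrees) into a lateral
# offset from the gaze point at a given viewing distance. The 40 cm viewing
# distance is a placeholder assumption, not a value from the paper.
import math

def eccentricity_to_offset_cm(eccentricity_deg: float, viewing_distance_cm: float = 40.0) -> float:
    """Lateral offset (cm) of a point seen at the given eccentricity."""
    return viewing_distance_cm * math.tan(math.radians(eccentricity_deg))

# Central (2.5°), para-central (4°), and the 5°/8° eccentricities discussed in the citations below.
for deg in (2.5, 4.0, 5.0, 8.0):
    print(f"{deg:>4}° -> {eccentricity_to_offset_cm(deg):.2f} cm from the gaze point")
```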

Citations

... To calculate this, MediaPipe's face recognition functionality is utilized [28], specifically leveraging the face landmarks to determine the proportion of the eyes in the image, which is then converted into the actual reading distance [25]. To ensure accurate monitoring of reading progress, participants were instructed to read the passages aloud, as quickly and accurately as possible while maintaining comprehension, following the methodology employed by Ku [24]. A pre-test questionnaire collected demographics, reading habits, vision details, and experience with reading technologies. ...
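A minimal sketch of this kind of distance estimation, assuming MediaPipe's Face Mesh solution: the eye-corner landmark indices, reference eye fraction, and reference distance below are illustrative assumptions rather than values from the cited work.

```python
# Illustrative sketch: estimating reading distance from the apparent eye width in a
# camera frame using MediaPipe Face Mesh landmarks. Landmark indices and calibration
# constants are placeholder assumptions, not values from the cited work.
import cv2
import mediapipe as mp

EYE_OUTER, EYE_INNER = 33, 133  # approximate corner indices of one eye in the 468-point mesh

# One-time calibration: apparent eye width (fraction of image width) at a known distance.
REF_EYE_FRACTION, REF_DISTANCE_CM = 0.06, 40.0

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False, max_num_faces=1)

def estimate_reading_distance_cm(frame_bgr):
    """Return an estimated camera-to-face distance in cm, or None if no face is found."""
    results = face_mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return None
    lm = results.multi_face_landmarks[0].landmark
    # Landmark x/y are normalized to [0, 1], so this is the eye width as a fraction of image width.
    eye_fraction = abs(lm[EYE_OUTER].x - lm[EYE_INNER].x)
    # Pinhole-camera approximation: apparent size is inversely proportional to distance.
    return REF_DISTANCE_CM * (REF_EYE_FRACTION / eye_fraction)
```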
... Reading performance was assessed through two key metrics: goodput and comprehension. Reading goodput, measured in characters per minute (CPM), was calculated by dividing the total number of characters correctly read aloud by the total reading time, following the method described in Ku et al. [24,66]. The number of characters correctly read aloud and the total reading time were assessed from the audio recordings of the reading sessions. ...
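For concreteness, the goodput calculation reduces to a simple ratio; the values in this small sketch are made-up placeholders, not data from the study.

```python
# Illustrative sketch: reading goodput in characters per minute (CPM), i.e. the number
# of characters correctly read aloud divided by the total reading time in minutes.
def reading_goodput_cpm(correct_chars: int, reading_time_sec: float) -> float:
    return correct_chars / (reading_time_sec / 60.0)

print(reading_goodput_cpm(correct_chars=540, reading_time_sec=180.0))  # 180.0 CPM
```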
Preprint
Full-text available
Situational visual impairments (SVIs) significantly impact mobile readability, causing user discomfort and hindering information access. This paper introduces SituFont, a novel just-in-time adaptive intervention (JITAI) system designed to enhance mobile text readability by semi-automatically adjusting font parameters in response to real-time contextual changes. Leveraging smartphone sensors and a human-in-the-loop approach, SituFont personalizes the reading experience by adapting to individual user preferences, including personal factors such as fatigue and distraction level, and environmental factors like lighting, motion, and location. To inform the design of SituFont, we conducted formative interviews (N=15) to identify key SVI factors affecting readability and controlled experiments (N=18) to quantify the relationship between these factors and optimal text parameters. We then evaluated SituFont's effectiveness through a comparative user study under eight simulated SVI scenarios (N=12), demonstrating its ability to overcome SVIs. Our findings highlight the potential of JITAI systems like SituFont to mitigate the impact of SVIs and enhance mobile accessibility.
... This paper introduces PeriText, a method that utilizes peripheral vision for reading text on augmented reality glasses. It employs Rapid Serial Visual Presentation (RSVP), and the authors found that users preferred a 5° eccentricity over 8° in walking scenarios, resulting in better reading efficiency [5]. ...
... In the past few years, researchers have paid much attention to several aspects of smart glasses, such as display mode (Ku et al., 2019; Rzayev et al., 2018, 2020) and text style (Debernardis et al., 2014; Gabbard et al., 2006; Jankowski et al., 2010; Ku et al., 2019), etc. A few studies have reported using smart glasses when walking (Ku et al., 2019; Lakehal et al., 2020; Rzayev et al., 2018; Thi Minh Tran & Parker, 2020). ...
... In the past few years, researchers have paid much attention to several aspects of smart glasses, such as display mode (Ku et al., 2019; Rzayev et al., 2018, 2020) and text style (Debernardis et al., 2014; Gabbard et al., 2006; Jankowski et al., 2010; Ku et al., 2019), etc. A few studies have reported using smart glasses when walking (Ku et al., 2019; Lakehal et al., 2020; Rzayev et al., 2018; Thi Minh Tran & Parker, 2020). Lakehal et al. (2020) use smart glasses for pedestrian navigation to evaluate the effect on spatial knowledge acquisition. ...
... We found that the size of the stimuli still contributed to the effectiveness of reading messages with peripheral vision, which is consistent with prior findings on peripheral vision reading (Anstis, 1974; Chung et al., 1998; Ku et al., 2019; Sanders & McCormick, 1998; Summala et al., 1996). We also found that sufficiently large letters can improve the effectiveness of peripheral vision for reading messages, although subjects' responses indicate that this holds only within a certain size range. ...
... Examples of systems include Cow-Clock (Bakker et al. 2012), FireFlies (Bakker et al. 2013), Lantern (Alavi and Dillenbourg 2012), or Audience Silhouettes (Vatavu 2015), to name a few, which have employed physical devices of various kinds to deliver information at the periphery of user attention. Also, a few systems have proposed peripheral interactions for AR environments (Renner and Pfeiffer 2017; Ku et al. 2019), but such contributions have been scarce compared to the large body of work using physical devices in physical environments. We believe that this state of affairs is a direct consequence of lacking concepts and frameworks for combined attention and hybrid reality concepts, which have developed in isolation in distinct scientific communities. ...
Article
Full-text available
We examine peripheral interactions in XR environments, for which we propose a conceptual space with two specialized dimensions, interaction-attention and reality-virtuality. We also formalize the notion of an “XR display” to expand the application range of ambient displays from physical environments to XR. To operationalize these conceptual contributions for researchers and practitioners, we capitalize on Sapiens, an open-source event-based software architecture for peripheral interactions in smart environments, to propose Sapiens-in-XR, an extended architecture that also covers XR displays. In a simulation study based on a Poisson probabilistic model of notification delivery, we demonstrate the efficiency of the event processing pipeline of Sapiens-in-XR with an average processing time of just 18 ms from event creation to delivery. We present simulations of peripheral interaction scenarios enabled by our conceptual space and Sapiens-in-XR, and report empirical results from a controlled experiment implementing one scenario, where users were asked to maintain their focus of attention in the central field of view while notifications were displayed at the attention periphery. Our results show similar user perception and the same level of user performance for understanding and recalling content of notifications in either the virtual or the physical environment. Our conceptual space, software architecture, and simulator constitute tools meant to assist researchers and practitioners to explore, design, and implement peripheral interactions in XR.
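The Poisson notification model mentioned above can be illustrated with a small arrival-time simulation; the rate and session length are placeholder assumptions, and this sketch is not based on the Sapiens-in-XR code.

```python
# Illustrative sketch: simulating notification arrivals as a Poisson process by drawing
# exponentially distributed inter-arrival times. Rate and session length are placeholders.
import random

def simulate_notification_arrivals(rate_per_min: float = 2.0, session_min: float = 10.0, seed: int = 42):
    """Return arrival times (minutes) of a Poisson process over one session."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_per_min)  # exponential inter-arrival time
        if t > session_min:
            break
        arrivals.append(round(t, 2))
    return arrivals

print(simulate_notification_arrivals())
```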
... However, considering that text displayed with RSVP requires only the space of a single word, a VR user can engage in VR tasks and simultaneously read the displayed text. Moreover, several works have also used RSVP reading in multi-task scenarios [10,23,33]. However, it is not clear how the combination of text locations and presentation types affects the VR experience. ...
... With the development of mobile information terminals and improvements in the speed and coverage of mobile Internet access, there are more opportunities to view novel information [15] and messages [1], edit text [10], and browse other information [11,13] while on the move. OST-HMDs are devices that are expected to allow people to keep their heads in a neutral position and look forward while viewing information, without having to hold a screen in their hands. ...
... The center of the screen scored the highest in terms of response time, whereas it was preferable to place icons at the screen periphery in terms of usability [8]. However, in an experiment evaluating the readability and usability of text displayed at the screen periphery, both scores tended to decrease as the text position approached the periphery [15]. ...
... Again using RSVP, Ku et al. explored how text case and font affect the ability to read text at two eccentricity values. The authors found that even though reading was possible at both positions, mental load at 8° was significantly higher than when the text was shown closer to the central part of the retina (5°) [19]. ...
Article
The study examines the impacts of different menu types on touchscreen operations under varying visuospatial working memory (VSWM) loads through an in-vehicle information/infotainment system (IVIS). Using eye-tracking, EEG data, and the NASA-TLX questionnaire, it assesses the effects of menu types and VSWM loads on task performance, visual search efficiency, and mental workload. The 36 participants were divided into hierarchical and grouping menu groups, demonstrating that grouping menus exhibit better task performance and visual search efficiency. In contrast, hierarchical menus cause a higher subjective mental workload under greater VSWM loads. Theta waves in the occipital brain region indicate reduced mental workload for grouping menus, and alpha waves in the central region correlate with VSWM load. For goal-oriented search tasks, the number of fixations and VSWM interference should be considered in IVIS testing. Future studies should simulate real menu usage scenarios and multitasking to offer practical design guidance for in-vehicle and aviation systems.