Jangkyu Ju’s research while affiliated with Korea University and other places


Publications (3)


Visual Complexity of Head-Up Display in Automobiles Modulates Attentional Tunneling
  • Article

June 2023 · 37 Reads · 6 Citations

Human Factors: The Journal of the Human Factors and Ergonomics Society

Jieun Lee · Nahyun Lee · Jangkyu Ju · [...]
Objective: To investigate how the visual complexity of head-up displays (HUDs) influences the allocation of drivers' attention across two separate visual domains (near and far). Background: The types and amount of information displayed on automobile HUDs have increased. Because human attentional capacity is limited, increased visual complexity in the near domain may interfere with effective processing of information in the far domain. Method: Near-domain and far-domain vision were tested separately using a dual-task paradigm. In a simulated road environment, 62 participants simultaneously controlled the speed of the vehicle (SMT; near domain) and manually responded to probes (PDT; far domain). Five HUD complexity levels, including a HUD-absent condition, were presented block-wise. Results: Near-domain performance was not modulated by HUD complexity. However, far-domain detection accuracy declined as HUD complexity increased, with larger accuracy differences between central and peripheral probes. Conclusion: Increased HUD visual complexity biases the deployment of driver attention toward the central visual field. The formulation of HUD designs should therefore be preceded by an in-depth investigation of the dynamics of human cognition. Application: To ensure driving safety, HUDs should be designed with minimal visual complexity, incorporating only information essential to driving and omitting driving-irrelevant or additional visual details.


Value-driven attention and associative learning models: a computational simulation analysis

May 2023 · 25 Reads · 2 Citations

Psychonomic Bulletin & Review

Value-driven attentional capture (VDAC) refers to a phenomenon in which stimulus features associated with greater reward value attract more attention than those associated with smaller reward value. To date, the majority of VDAC research has shown that the relationship between reward history and attentional allocation follows associative learning rules. Accordingly, a mathematical implementation of associative learning models, and comparisons among them, can elucidate the underlying process and properties of VDAC. In this study, we implemented the Rescorla-Wagner, Mackintosh (Mac), Schmajuk-Pearce-Hall (SPH), and Esber-Haselgrove (EH) models to determine whether different models predict different outcomes when critical VDAC parameters are adjusted. Simulation results were compared with experimental data from a series of VDAC studies by fitting two key model parameters, associative strength (V) and associability (α), using the Bayesian information criterion as a loss function. The results showed that SPH-V and EH-α outperformed the other implementations across phenomena related to VDAC, such as expected value, training session, switching (or inertia), and uncertainty. Although the models' V was sufficient to simulate VDAC when expected value was the main experimental manipulation, their α could predict additional aspects of VDAC, including uncertainty and resistance to extinction. In summary, associative learning models capture the crucial aspects of behavioral data from VDAC experiments and elucidate the underlying dynamics, yielding novel predictions that remain to be verified.
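The Rescorla-Wagner update at the core of such simulations can be sketched in a few lines of Python. The learning rate, reward sequence, and the Gaussian-residual form of the BIC below are illustrative assumptions, not the paper's fitted parameters or exact loss function.

```python
import math

def rescorla_wagner(rewards, alpha=0.1, v0=0.0):
    """Trial-by-trial associative strength V under the Rescorla-Wagner rule:
    delta_V = alpha * (lambda_t - V), where lambda_t is the reward on trial t."""
    v = v0
    trace = []
    for lam in rewards:
        v += alpha * (lam - v)  # prediction-error update
        trace.append(v)
    return trace

def bic_gaussian(k, n, sse):
    """BIC assuming Gaussian residuals: k*ln(n) + n*ln(SSE/n); lower is better."""
    return k * math.log(n) + n * math.log(sse / n)

# A cue consistently paired with reward 1.0: V rises toward the asymptote 1.0
trace = rescorla_wagner([1.0] * 50, alpha=0.2)
```

With equal fit quality, `bic_gaussian` penalizes the model with more free parameters, which is how a criterion like this can arbitrate between model variants such as SPH-V and EH-α.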


The Modulation of Value-Driven Attentional Capture by Exploration for Reward Information
  • Article
  • Publisher preview available

October 2022 · 27 Reads · 5 Citations

Previous studies on value-driven attentional capture (VDAC) have demonstrated that the uncertainty of reward value modulates attentional allocation via associative learning. However, it is unclear whether such attentional exploration is driven by the amount of potential reward information available for refining value prediction or by the absolute size of the reward prediction error. The present study investigated the effects of reward information (information entropy) and prediction error (variance) on attentional bias while controlling for the strength of the reward association. Participants searched for a red or green target circle and responded to the orientation of the line inside it. Each target color was associated with a reward contingency at a different level of uncertainty. In Experiment 1, one color was paired with a single reward value (zero entropy and variance) and the other with multiple reward values (high entropy and variance). In Experiment 2, one color had a high-entropy, low-variance reward contingency and the other the inverse. Attentional interference from high-entropy distractors was consistently greater than from low- or zero-entropy distractors. In addition, in Experiment 3, when distractors had an identical level of variance, information entropy still modulated attentional bias toward them. Lastly, Experiment 4 revealed that distractors associated with contrasting levels of variance but identical information entropy failed to modulate VDAC. These results indicate that value-based attention is preferentially allocated to cues that provide maximal information about reward outcomes, and that information entropy is one of the key predictors mediating attentional exploration and associative learning.
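The entropy/variance dissociation exploited in Experiments 2-4 can be illustrated numerically. The reward values and probabilities below are hypothetical examples, not the study's actual contingencies: four tightly clustered outcomes yield high entropy but low variance, while two extreme outcomes yield the reverse.

```python
import math

def entropy_bits(probs):
    """Shannon entropy (in bits) of a discrete reward contingency."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def variance(values, probs):
    """Variance of reward outcomes under the contingency."""
    mean = sum(v * p for v, p in zip(values, probs))
    return sum(p * (v - mean) ** 2 for v, p in zip(values, probs))

# Hypothetical contingencies (not the study's actual reward schedules)
high_entropy_low_var = ([490, 495, 505, 510], [0.25] * 4)
low_entropy_high_var = ([0, 1000], [0.5, 0.5])

print(entropy_bits(high_entropy_low_var[1]), variance(*high_entropy_low_var))  # 2.0 62.5
print(entropy_bits(low_entropy_high_var[1]), variance(*low_entropy_high_var))  # 1.0 250000.0
```

Pairing distractor colors with contingencies like these lets the two uncertainty measures be manipulated independently, which is what allows the study to attribute the attentional bias to entropy rather than variance.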


Citations (3)


... Human attention capacity is inherently limited, and it is not feasible to attend to every simultaneous target (Broadbent, 1957; Sanocki et al., 2015). Multiple targets may compete for attention, increase the cognitive load (Merenda et al., 2018; Zhang & Lee, 2021), and hinder the allocation of attention to critical targets (Currano et al., 2021; Lee et al., 2024). Recognizing critical objects surrounded by numerous objects can be challenging (Albonico et al., 2018; Chaney et al., 2014; Levi, 2008). ...

Reference:

Priority Design in Multi-Target AR-HUD Warning: Evidence from Eye Movement and Behavior of the Novice Driver
Visual Complexity of Head-Up Display in Automobiles Modulates Attentional Tunneling
  • Citing Article
  • June 2023

Human Factors: The Journal of the Human Factors and Ergonomics Society

... Although currently, there is no formal model on how VMAC is learned and influences attentional priority, Le Pelley et al. (2016) proposed a formulation of the Mackintosh model of Pavlovian conditioning in which the attention received by a cue is a direct function of its absolute associative strength. This model can, in principle, predict the acquisition of the VMAC effect (see also Jeong et al., 2023). However, recent research has found that when a distractor is associated with higher reward variability, it also receives increased attentional priority (Cho & Cho, 2021; Le Pelley et al., 2019a, b; Pearson et al., 2024). ...

Value-driven attention and associative learning models: a computational simulation analysis
  • Citing Article
  • May 2023

Psychonomic Bulletin & Review

... Specifically, Le Pelley et al.'s findings suggest that the noninformative distractor received greater priority than the informative distractor (see also Cho & Cho, 2021). Subsequent studies have demonstrated that this pattern of uncertainty-modulated attentional capture (UMAC) is a product of both outcome entropy (i.e., the number of distinct outcomes paired with a stimulus: Ju & Cho, 2023) and outcome variance (the spread of values of those outcomes: Pearson et al., 2024). Contrary to the idea of selective attention being fueled by a drive to reduce immediate uncertainty, which would motivate prioritization of stimuli providing the greatest immediate information gain (i.e., stimuli that most accurately predict the outcome of the current trial), UMAC constitutes a case in which attention acts to prioritize a stimulus that provides more uncertainty about upcoming reward over a stimulus that provides fully diagnostic information (for related evidence from a somewhat different perspective, see Beesley et al., 2015; Chao et al., 2021; Easdale et al., 2019; Vigo et al., 2013; we consider this research further in the General Discussion section). ...

The Modulation of Value-Driven Attentional Capture by Exploration for Reward Information