Figure 4 - available via license: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International
Response times of the test phase in Experiment 1. Error bars represent standard error of the mean (*p < 0.05; **p < 0.001).
Source publication
Kim and Beck (2020b) demonstrated that value-driven attention is based on relative value rather than absolute value, suggesting that prospect theory is relevant to our understanding of value-driven attention. To further this understanding, the present study investigated the impacts of diminishing sensitivity on value-driven attention. According to...
Contexts in source publication
Context 1
... accuracy, the main effect of distractor condition was not significant, F(2, 142) = 2.55, p = 0.08. For RT, there was a main effect of distractor type, F(2, 142) = 193.91, p < 0.001, ηp² = 0.73 (see Figure 4). Planned comparisons revealed that mean RT was slower when the 100-point color distractors (M = 865 ms, SE = 10 ms) were presented than when no color singleton distractors (M = 783 ms, SE = 9 ms) were presented, t(71) = 18.21, p < 0.001. ...
Context 2
... importantly, mean RT was slower when the 100-point color distractors were presented than when the 1000-point color distractors were presented, t(71) = 2.27, p = 0.026, d = 0.27 (see Figure 4). The findings imply that the 100-point color distractor attracted attention more than the 1000-point color distractor. ...
Citations
... This outcome is in line with general ideas in behavioral economics (the diminishing sensitivity principle; Kahneman & Tversky, 1979). Similarly, the importance of relative value differences in VDAC was directly demonstrated by Kim et al. (2022): for example, 100-point stimuli captured more attention relative to 1-point stimuli than 1,000-point stimuli did relative to 901-point stimuli. ...
Value-driven attentional capture (VDAC) refers to a phenomenon by which stimulus features associated with greater reward value attract more attention than those associated with smaller reward value. To date, the majority of VDAC research has revealed that the relationship between reward history and attentional allocation follows associative learning rules. Accordingly, a mathematical implementation of associative learning models and multiple comparisons among them can elucidate the underlying process and properties of VDAC. In this study, we implemented the Rescorla-Wagner, Mackintosh (Mac), Schmajuk-Pearce-Hall (SPH), and Esber-Haselgrove (EH) models to determine whether different models predict different outcomes when critical parameters in VDAC were adjusted. Simulation results were compared with experimental data from a series of VDAC studies by fitting two key model parameters, associative strength (V) and associability (α), using the Bayesian information criterion as a loss function. The results showed that SPH-V and EH-α outperformed the other implementations on phenomena related to VDAC, such as expected value, training session, switching (or inertia), and uncertainty. Although the V terms of the models were sufficient to simulate VDAC when expected value was the main manipulation of the experiment, the α terms could predict additional aspects of VDAC, including uncertainty and resistance to extinction. In summary, associative learning models concur with the crucial aspects of behavioral data from VDAC experiments and elucidate underlying dynamics, including novel predictions that need to be verified.
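The Rescorla-Wagner rule named above can be sketched in a few lines. This is a minimal illustration with assumed parameter values, not the authors' fitted implementation; the learning rate and reward magnitudes below are arbitrary choices for demonstration.

```python
# Minimal Rescorla-Wagner sketch (illustrative parameters, not the
# study's fitted model). V is associative strength; alpha * beta is the
# learning rate; lam (lambda) is the reward magnitude, which sets the
# asymptote of learning.
def rescorla_wagner(rewards, alpha=0.3, beta=1.0, v0=0.0):
    """Return the trial-by-trial trajectory of associative strength V."""
    v = v0
    trajectory = []
    for lam in rewards:
        v += alpha * beta * (lam - v)  # delta-V = alpha * beta * (lambda - V)
        trajectory.append(v)
    return trajectory

# A cue paired with a large reward acquires more associative strength than
# one paired with a small reward, mirroring the expected-value effect in VDAC.
v_high = rescorla_wagner([1.0] * 20)  # high-reward cue
v_low = rescorla_wagner([0.1] * 20)   # low-reward cue
```

Because the update is error-driven, V approaches the reward magnitude asymptotically, which is why training session length is one of the phenomena such models can capture.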
... A reward of 10 cents reduced switching costs by increasing cognitive control more when the other available reward was 1 cent than when it was 19 cents. Kim et al. (2022) demonstrated diminishing sensitivity in attention. The absolute value difference between 1 point and 100 points had more impact on attention than the absolute value difference between 901 points and 1,000 points. ...
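The diminishing-sensitivity pattern described above can be reproduced with prospect theory's concave value function, v(x) = x^α. The exponent below is Tversky and Kahneman's commonly cited estimate (≈0.88), used here purely for illustration, not a parameter fitted in these studies.

```python
# Sketch of prospect theory's concave value function for gains,
# v(x) = x ** alpha. The exponent 0.88 is Tversky & Kahneman's classic
# estimate, used illustratively.
def subjective_value(x, alpha=0.88):
    return x ** alpha

# Diminishing sensitivity: the same 99-point objective difference is
# subjectively larger near zero (1 -> 100) than far from it (901 -> 1,000).
low_range = subjective_value(100) - subjective_value(1)
high_range = subjective_value(1000) - subjective_value(901)
```

Under any concave value function the low-range difference exceeds the high-range one, which matches the attentional result: the 1-vs-100 contrast influenced attention more than the 901-vs-1,000 contrast.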
... In line with previous research (Anderson, 2016; Kim & Beck, 2020; Kim et al., 2022; Otto & Vassena, 2021), the present study showed that value-driven attentional control may not operate to maximize benefits; instead, it is based on subjective value. These findings are consistent with research (Irons & Leber, 2016) showing the relationship between attentional control and cognitive effort. ...
... Furthermore, a common neural apparatus, the frontal lobe network, is essential in both selection processes (Buschman & Miller, 2007; Duncan, 1995; Padoa-Schioppa & Conen, 2017). In line with this, the present and previous studies demonstrated that attention operates according to the reference dependence principle (Kim & Beck, 2020), the diminishing sensitivity principle (Kim et al., 2022), and the loss aversion principle of prospect theory. Thus, prospect theory applies both to selection at the decision-making stage and to attentional selection at the perceptual stage. ...
Loss aversion is a psychological bias whereby an increase in loss is perceived as larger than an equivalent increase in gain. In the present study, two experiments were conducted to explore whether attentional control reflects loss aversion. Participants performed a visual search task. On each trial, a red target and a green target were presented simultaneously, and participants were free to search for either one. Participants always gained points when they searched for a gain-color target (e.g., red), but they either gained or lost points when they searched for a gain-loss-color target (e.g., green). In Experiment 1, the expected values of the gain color and the gain-loss color were equal; therefore, to maximize reward, participants did not need to preferentially search for a particular color. However, results showed that participants searched for the gain-color target more than the gain-loss-color target, suggesting stronger attentional control for the gain color than for the gain-loss color. In Experiment 2, even though the expected value of the gain-loss color was greater than that of the gain color, attention was still allocated to the gain color more than to the gain-loss color. The results imply that attentional control can operate in accordance with the loss aversion principle when the boundary conditions for loss aversion in a repeated binary decision-making task are met.
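The equal-expected-value design of Experiment 1 can be illustrated with a small numeric sketch. The point values below are hypothetical stand-ins chosen for the example; the abstract does not state the actual payoffs used.

```python
# Hypothetical payoff scheme illustrating an equal-expected-value design
# like Experiment 1's (the study's actual point values are not given here).
def expected_value(outcomes):
    """Expected value of a list of (points, probability) outcomes."""
    return sum(points * prob for points, prob in outcomes)

gain_color = [(10, 1.0)]                   # always gain 10 points
gain_loss_color = [(30, 0.5), (-10, 0.5)]  # gain 30 or lose 10, equally likely

# With equal expected values, reward maximization alone gives no reason to
# prefer either color -- yet participants favored the pure-gain color,
# consistent with loss aversion.
```

Under loss aversion, the −10 outcome is weighted more heavily than an equivalent gain, so the gain-loss color's subjective value falls below the gain color's despite their matched expected values.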