Moshe Glickman’s research while affiliated with University College London and other places


Publications (26)


Human–AI interaction creates a feedback loop that makes humans more biased (experiment 1)
a, Human–AI interaction. Human classifications in an emotion aggregation task are collected (level 1) and fed to an AI algorithm (CNN; level 2). A new pool of human participants (level 3) then interact with the AI. During level 1 (emotion aggregation), participants are presented with an array of 12 faces and asked to classify the mean emotion expressed by the faces as more sad or more happy. During level 2 (CNN), the CNN is trained on human data from level 1. During level 3 (human–AI interaction), a new group of participants provide their emotion aggregation response and are then presented with the response of an AI before being asked whether they would like to change their initial response.
b, Human–human interaction. This is conceptually similar to the human–AI interaction, except the AI (level 2) is replaced with human participants. The participants in level 2 are presented with the arrays and responses of the participants in level 1 (training phase) and then judge new arrays on their own as either more sad or more happy (test phase). The participants in level 3 are then presented with the responses of the human participants from level 2 and asked whether they would like to change their initial response.
c, Human–AI-perceived-as-human interaction. This condition is also conceptually similar to the human–AI interaction condition, except participants in level 3 are told they are interacting with another human when in fact they are interacting with an AI system (input: AI; label: human).
d, Human–human-perceived-as-AI interaction. This condition is similar to the human–human interaction condition, except that participants in level 3 are told they are interacting with AI when in fact they are interacting with other humans (input: human; label: AI).
e, Level 1 and 2 results. Participants in level 1 (green circle; n = 50) showed a slight bias towards the response more sad. This bias was amplified by the AI in level 2 (blue circle), but not by human participants in level 2 (orange circle; n = 50). The P values were derived using permutation tests. All significant P values remained significant after applying Benjamini–Hochberg false discovery rate correction at α = 0.05.
f, Level 3 results. When interacting with the biased AI, participants became more biased over time (human–AI interaction; blue line). In contrast, no bias amplification was observed when interacting with humans (human–human interaction; orange line). When interacting with an AI labelled as human (human–AI-perceived-as-human interaction; grey line) or humans labelled as AI (human–human-perceived-as-AI interaction; pink line), participants' bias increased, but less than in the human–AI interaction (n = 200 participants). The shaded areas and error bars represent s.e.m.
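The amplification dynamic in panels e–f can be illustrated with a minimal simulation, offered only as a hedged sketch of the loop the caption describes, not the authors' actual pipeline: a model trained on slightly biased labels exaggerates that bias (level 2), and judges who repeatedly shift towards the model's responses internalize it (level 3). The bias level, amplification factor and learning rate below are all invented for illustration.

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Level 1: human judgements of ambiguous face arrays carry a slight
# "more sad" bias (53% is an illustrative stand-in for "slight").
p_level1 = 0.53

# Level 2: a toy stand-in for the trained CNN. Training on biased labels
# is modelled as scaling the bias in logit space by k > 1, chosen only
# to mimic the amplification reported for the network.
k = 5.0
p_ai = sigmoid(k * logit(p_level1))

# Level 3: a fresh pool of judges starts unbiased and shifts its response
# policy towards the AI's responses block by block (learning rate alpha).
alpha, n_blocks = 0.15, 20
p_human, trajectory = 0.50, []
for _ in range(n_blocks):
    p_human = (1 - alpha) * p_human + alpha * p_ai
    trajectory.append(p_human)

print(f"level-1 bias: {p_level1:.2f}, AI bias: {p_ai:.2f}")
print("level-3 bias over blocks:", np.round(trajectory[::5], 3))
```

Under these toy settings, the level-3 bias climbs monotonically towards the AI's amplified bias, mirroring the blue line in panel f.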
A biased algorithm produces human bias, whereas an accurate algorithm improves human judgement
a, Baseline block. Participants performed the RDK task, in which an array of moving dots was presented for 1 s. They estimated the percentage of dots that moved from left to right and reported their confidence.
b, Algorithms. Participants interacted with three algorithms: accurate (blue distribution), biased (orange distribution) and noisy (red distribution).
c, Interaction blocks. Participants provided their independent judgement and confidence (self-paced) and then observed their own response and a question mark where the AI algorithm's response would later appear. Participants were asked to assign weights to their response and the response of the algorithm (self-paced). Thereafter, the response of the algorithm was revealed (2 s). Note that the AI algorithm's response was revealed only after the participants indicated their weighting. As a result, they had to rely on their global evaluation of the AI based on previous trials.
d, AI-induced bias. Interacting with a biased AI resulted in significant human bias relative to baseline (P values shown in red) and relative to interactions with the other algorithms (P values shown in black; n = 120).
e, When interacting with a biased algorithm, AI-induced bias increases over time (n = 50).
f, AI-induced accuracy change. Interacting with an accurate AI resulted in a significant increase in human accuracy (that is, reduced error) relative to baseline (P values shown in red) and relative to interactions with the other algorithms (P values shown in black; n = 120).
g, When interacting with an accurate algorithm, AI-induced accuracy increases over time (n = 50).
h,i, Participants perceived the influence of the accurate algorithm on their judgements to be greatest (h; n = 120), even though the actual influence of the accurate and biased algorithms was the same (i; n = 120).
The thin grey lines and circles correspond to individual participants. In d and f, the circles correspond to group means, the central lines represent median values and the bottom and top edges are the 25th and 75th percentiles, respectively. In e and g, the error bars represent s.e.m. The P values were derived using permutation tests. All significant P values remained significant after applying Benjamini–Hochberg false discovery rate correction at α = 0.05.
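Both this caption and the previous one report P values from permutation tests corrected with the Benjamini–Hochberg procedure at α = 0.05. The sketch below shows one standard implementation of that pipeline; the difference-in-means statistic is an assumption, since the exact statistic varies across the paper's comparisons.

```python
import numpy as np

def permutation_test(x, y, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in means between two
    independent samples (generic sketch, not the paper's exact statistic)."""
    rng = np.random.default_rng(seed)
    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:len(x)].mean() - pooled[len(x):].mean()
        hits += abs(diff) >= abs(observed)
    return hits / n_perm

def benjamini_hochberg(p_values, alpha=0.05):
    """Boolean mask of hypotheses rejected at false discovery rate alpha."""
    p = np.asarray(p_values)
    order = np.argsort(p)
    m = len(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.nonzero(below)[0].max() + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

rng = np.random.default_rng(1)
p_vals = [permutation_test(rng.normal(0.4, 1, 60), rng.normal(0, 1, 60), seed=s)
          for s in range(4)]
print(p_vals, benjamini_hochberg(p_vals))
```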
Interaction with a real-world AI system amplifies human bias (n = 100)
a, Experimental design. The experiment consisted of three stages. In stage 1, participants were presented with images featuring six individuals from different race and gender groups: a White man, a White woman, an Asian man, an Asian woman, a Black man and a Black woman. On each trial, participants selected the person who they thought was most likely to be a financial manager. In stage 2, on each trial, three images of financial managers generated by Stable Diffusion were randomly chosen and presented to the participants. In the control condition, participants were presented with three images of fractals instead. In stage 3, participants repeated the task from stage 1, allowing measurement of the change in participants' choices before versus after exposure to the AI-generated images.
b, The results revealed a significant increase in participants' inclination to choose White men as financial managers after being exposed to the AI-generated images, but not after being exposed to the neutral fractal images (control). The error bars represent s.e.m.
Face stimuli in a reproduced from ref. ⁵⁹ under a Creative Commons licence CC BY 4.0.
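The stage-1 versus stage-3 comparison in panel b reduces to a within-participant change in choice shares. A hedged sketch of that computation on placeholder data (the arrays, trial counts and rates below are invented):

```python
import numpy as np

def choice_shift(pre, post):
    """Mean within-participant change in P(choice of interest), plus s.e.m.
    `pre` and `post` are (participants x trials) boolean arrays marking
    trials on which the target person (e.g. a White man) was chosen."""
    delta = post.mean(axis=1) - pre.mean(axis=1)
    sem = delta.std(ddof=1) / np.sqrt(len(delta))
    return delta.mean(), sem

rng = np.random.default_rng(0)
pre = rng.random((100, 30)) < 1 / 6       # chance level across six faces
post = rng.random((100, 30)) < 0.22       # illustrative post-exposure rate
print(choice_shift(pre, post))
```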
How human–AI feedback loops alter human perceptual, emotional and social judgements
  • Article
  • Full-text available

December 2024 · 209 Reads · 22 Citations

Nature Human Behaviour

Moshe Glickman

Artificial intelligence (AI) technologies are rapidly advancing, enhancing human capabilities across various fields spanning from finance to medicine. Despite their numerous advantages, AI systems can exhibit biased judgements in domains ranging from perception to emotion. Here, in a series of experiments (n = 1,401 participants), we reveal a feedback loop where human–AI interactions alter processes underlying human perceptual, emotional and social judgements, subsequently amplifying biases in humans. This amplification is significantly greater than that observed in interactions between humans, due to both the tendency of AI systems to amplify biases and the way humans perceive AI systems. Participants are often unaware of the extent of the AI’s influence, rendering them more susceptible to it. These findings uncover a mechanism wherein AI systems amplify biases, which are further internalized by humans, triggering a snowball effect where small errors in judgement escalate into much larger ones.


Fig. 4 | Preferences are stable across information-seeking and information-sharing. The robust correlations between the beta coefficients obtained when predicting information-seeking (x-axis) and information-sharing (y-axis) from a, d uncertainty, b, e valence, and c, f instrumentality in Exp. 1 (N = 43, top row) and the Replication (N = 90, bottom row). a, d Participants who preferred to seek information under high uncertainty also preferred to share information when the receiver was under high uncertainty. b, e Participants who preferred to seek positive
Average Beta coefficient for predicting information seeking and sharing
The Calinski-Harabasz index for each experiment (Exp. 1-Info Seeking, Exp. 1-Info Sharing, and Replication - Info Seeking and Sharing) as a function of K
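The Calinski–Harabasz index plotted here is a standard criterion for selecting the number of clusters K. A minimal sketch of that selection step using scikit-learn on placeholder data (the matrix of per-participant beta weights is invented):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

# Rows: participants; columns: per-participant beta weights for the three
# factors (uncertainty, valence, instrumentality). Placeholder data.
rng = np.random.default_rng(0)
betas = rng.normal(size=(90, 3))

# Score each candidate K; the index favours tight, well-separated clusters.
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(betas)
    print(k, round(calinski_harabasz_score(betas, labels), 1))
```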
Three diverse motives for information sharing

November 2024 · 60 Reads · 1 Citation

Communications Psychology

Knowledge is distributed over many individuals. Thus, humans are tasked with informing one another for the betterment of all. But as information can alter people's actions, affect and cognition in both positive and negative ways, deciding whether to share information can be a particularly difficult problem. Here, we examine how people integrate potentially conflicting consequences of knowledge to decide whether to inform others. We show that participants (Exp. 1: N = 114, pre-registered replication: N = 102) use their own information-seeking preferences to solve complex information-sharing decisions. In particular, when deciding whether to inform others, participants consider the usefulness of information in directing action, its valence and the receiver's uncertainty level, and integrate these assessments into a calculation of the value of information that explains information-sharing decisions. A cluster analysis revealed that participants were clustered into groups based on the different weights they assign to these three factors. Within individuals, the relative influence of each of these factors was stable across information-seeking and information-sharing decisions. These results suggest that people put themselves in the receiver's position to determine whether to inform others, and can help predict when people will share information.
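A hedged sketch of the kind of per-participant model this abstract describes: regress share/withhold decisions on the three factors and read off the beta weights, which a cluster analysis (as in the figures above) can then group. The simulated design, weights and decision rule are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Trial-wise predictors: instrumentality, valence, receiver uncertainty.
X = rng.normal(size=(200, 3))
true_w = np.array([1.0, 0.6, 0.8])                 # illustrative weights
y = (X @ true_w + rng.logistic(size=200)) > 0      # simulated share/withhold

betas = LogisticRegression().fit(X, y).coef_[0]
print("per-factor betas:", np.round(betas, 2))
```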




Overt visual attention in the formation of preference between lottery options

November 2023 · 101 Reads

Aurelien Wyngaard · [...]

Models of multi-attribute decision making vary on whether all or only part of the available information is processed, as well as on whether preference formation is based on within-option or within-attribute processing. Here we carry out a combined empirical and computational study in which we rely on lottery options with varying task complexities. We monitor eye gaze during decision formation to determine which decision-relevant information participants attend to, and when. We then compare a large set of models of different levels of complexity in their ability to account for the choices made by individual participants, and we find that two models outperform all others. The first is the two-layer leaky-competing accumulator based on Prospect Theory (LCA-PT), which predicts human choices on simple tasks better than any other model. For complex tasks, a new model based on older work in operations research performs best, with both its performance and that of the second-ranked LCA-PT model significantly exceeding that of all other models. Both of these models use the sequence of observed eye movements for each participant to capture the allocation of attention to specific options and attributes during the decision process, but make different assumptions about the effect of attention on decision making. Our results suggest that, when faced with complex choice problems, people form preferences primarily based on attention-guided pairwise, within-attribute value comparisons.
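A minimal sketch of the leaky-competing-accumulator core underlying the LCA-PT model named above. This is a generic Usher–McClelland-style race between option accumulators; the Prospect Theory value transform and the gaze-contingent attention weighting of the actual model are omitted, and all parameters are illustrative.

```python
import numpy as np

def lca_choice(inputs, leak=0.2, inhibition=0.2, noise=0.3,
               threshold=1.0, dt=0.01, max_steps=10_000, seed=0):
    """One accumulator per option races to threshold; activations leak,
    inhibit each other and are clipped at zero. Returns (choice, RT in s)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(inputs))
    for step in range(1, max_steps + 1):
        dx = (inputs - leak * x - inhibition * (x.sum() - x)) * dt
        x = np.maximum(x + dx + noise * np.sqrt(dt) * rng.normal(size=x.size), 0)
        if (x >= threshold).any():
            return int(np.argmax(x)), step * dt
    return int(np.argmax(x)), max_steps * dt

# The higher-input option should win on most simulated trials.
choices = [lca_choice(np.array([1.2, 1.0]), seed=s)[0] for s in range(100)]
print("P(choose option 0):", np.mean(np.array(choices) == 0))
```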


Fig 1. Procedure. (A) Trial procedure. Every trial began with a black fixation cross followed by a simultaneous presentation of auditory (pure tone) and visual (blue fixation cross) cues. The blue fixation cross continued to be presented during the interval between cue onset and target onset (foreperiod, FP). The duration of the foreperiod varied according to the phase and group conditions. After the foreperiod, the target was presented and was followed by a mask. Following target presentation, participants were asked to determine whether the target was tilted left or right and respond by pressing one of two buttons, and then received feedback (green or red fixation cross for correct or incorrect responses, respectively). After the feedback, there was a variable interval of 0.2-0.7 s before the next trial was initiated. (B) Experimental blocks. The experiment included a total of nine blocks: in three preliminary staircase blocks, the tilt threshold was chosen individually per participant to achieve approximately 80% accuracy rates. This was followed by the main experiment, which included four acquisition blocks and two transfer blocks. Each experimental block included 96 trials.
Fig 2. Accuracy rate results. (A) The X-axis represents the experimental blocks in chronological order (Acquisition= A 1-4, Transfer= T 1-2). Dashed line represents the onset of the transfer phase. Error bars depict ±1 standard error from the group mean. Simple effects are marked as: p<0.05*, p<0.01**, ns=not significant. (B) Regression line of trial-wise temporal dynamic for accuracy rates in each group during the transfer phase, based on the individual participant data across trials. Shaded area represents 95% confidence interval around the slope. Dashed line represents the onset of second block (T2).
Fig 3. Reaction time results. Regression line of trial-wise temporal dynamics of reaction times in the two groups (Fixed and Random) during the transfer phase, based on individual participant data across trials. Shaded area represents the 95% confidence interval around the slope. Dashed line represents the onset of the second block (T2).
Fig 5. Saccade rate in the transfer phase. Mean saccade rate for the first (orange) and second (green) transfer blocks, separately for the fixed (A) and the random (B) groups. Shaded area represents the analyzed duration (-100 to 0 ms, pre-target SR), chosen based on previous studies. The dashed lines represent the target onset at time zero. (C) Average pre-target SR of the 1st (early) and 2nd (late) blocks, in each group during the transfer phase. p<0.05* and p<0.01**. (D) Average pre-target SR in both phases and groups. Error bars represent ±1 standard error from the group mean. p<0.05*, ns= not significant.
Fig 6. Drift Diffusion Model results. (A-B) Accuracy and RTs of the fixed (blue) and random (red) groups in the transfer and acquisition phases for both the real data (solid lines) and model predictions (dashed lines). Error bars represent within-subjects SE. (C) Mean drift fit for the fixed (blue) and random (red) groups for the acquisition (A) and transfer (T) phases. Error bars represent ±1 standard deviation from the condition's mean. (D) Single-subject drift fits for the fixed (blue) and random (red) groups for each phase, with the diagonal line representing the identity line (drift fit in acquisition phase = drift fit in transfer phase). (E-F) Mean and single-subject boundary-separation fits. (G-H) Mean and single-subject non-decision time fits. Significant effects between phases within each group are marked: p<0.01**, ns= not significant.
Exposure to temporal randomness promotes subsequent adaptation to new temporal regularities

June 2023 · 58 Reads

Noise is intuitively thought to interfere with perceptual learning; however, human and machine learning studies suggest that, in certain contexts, variability may reduce overfitting and improve generalizability. Whereas previous studies have examined the effects of variability in learned stimuli or tasks, the effects of variability in the temporal environment have remained unknown. Here, we examined this question in two groups of adult participants (N=40) presented with visual targets at either random or fixed temporal routines and then tested on the same type of targets at a new, nearly fixed temporal routine. Findings reveal that participants in the random group performed better and adapted more quickly following a change in the timing routine, relative to participants in the fixed group. Corroborated by eye tracking and computational modeling, these findings suggest that prior exposure to temporal randomness promotes the formation of new temporal expectations and enhances generalizability in a dynamic environment. We conclude that noise plays an important role in promoting perceptual learning in the temporal domain: rather than interfering with the formation of temporal expectations, noise enhances them. This counterintuitive effect is hypothesized to be achieved by eliminating overfitting and promoting generalizability.
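The computational modeling referred to here is a drift-diffusion model (Fig 6 reports drift, boundary-separation and non-decision time fits). A hedged simulation sketch of a single DDM trial follows, with all parameter values invented; a higher fitted drift rate, for instance, translates into faster and more often correct simulated responses.

```python
import numpy as np

def simulate_ddm(drift, boundary=1.0, ndt=0.3, noise=1.0,
                 dt=0.001, seed=0):
    """One drift-diffusion trial: evidence starts midway between the
    boundaries 0 and `boundary`, drifts at `drift`, and the non-decision
    time `ndt` is added to the crossing time. Returns (hit upper?, RT)."""
    rng = np.random.default_rng(seed)
    x, t = boundary / 2, 0.0
    while 0 < x < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return x >= boundary, round(t + ndt, 3)

# Compare a low and a high drift rate across a few simulated trials.
for drift in (0.5, 2.0):
    print(drift, [simulate_ddm(drift, seed=s) for s in range(3)])
```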


Fig. 1 Experimental design. A Stimuli: An illustration of the compound figure used for both tasks. Features of the Numerical cognition task. Each stimulus was characterized by Congruency and by Numerical Distance. A stimulus was considered congruent if the global number (big in physical size) also had a larger numerical value, and incongruent if the global number had a lower numerical value. Numerical Distance was defined as the numerical difference between the global and local levels ('7' in the example stimulus). B Features of the Preference task. Condition: The task included two conditions: (1) the number in the global level represented the poten-
Fig. 5 Correlations of global/local processing style between the perceptual and preferential tasks. A Model-driven approach: the perceptual saliency parameter (θ_perceptual) is associated with the prefer-
The effect of perceptual organization on numerical and preference-based decisions shows inter-subject correlation

January 2023 · 48 Reads

Psychonomic Bulletin & Review

Individual differences in cognitive processing have been the subject of intensive research. One important type of such individual differences is the tendency for global versus local processing, which has been shown to correlate with a wide range of processing differences in fields such as decision making, social judgement and creativity. Yet whether these global/local processing tendencies are correlated within a subject across different domains is still an open question. To address this question, we develop and test a novel method to quantify global/local processing tendencies, in which we directly set the local and global information in opposition instead of instructing subjects to attend to one specific processing level. We apply our novel method to two different domains: (1) a numerical cognition task and (2) a preference task. Using computational modeling, we accounted for classical effects in choice and numerical cognition. Global/local tendencies in both tasks were quantified using a salience parameter. Critically, the salience parameters extracted from the numerical cognition and preference tasks were highly correlated, providing support for robust perceptual organization tendencies within an individual.
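The key result is a correlation between the two per-subject salience parameters. A hedged sketch of that final step on placeholder data (the parameter vectors and their relationship are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder per-subject salience parameters from the two tasks.
theta_numerical = rng.normal(size=40)
theta_preference = 0.8 * theta_numerical + 0.3 * rng.normal(size=40)

r, p = stats.pearsonr(theta_numerical, theta_preference)
print(f"r = {r:.2f}, p = {p:.3g}")
```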




Biased AI systems produce biased humans

November 2022 · 25 Reads · 2 Citations

Human decision-making biases are pervasive, influencing areas such as finance and medicine. Reliance on Artificial Intelligence (AI) systems has been offered as a solution to reduce such systematic errors. However, in judgements ranging from perception to emotion, AI systems can also exhibit biases. Here, across multiple experiments (N = 1,022), we reveal a feedback loop in which human-AI interactions alter processes underlying human perceptual, emotional and social judgements and subsequently amplify biases in humans. This amplification is significantly greater than that observed in human-human feedback loops. Participants are unaware of the extent of the AI's influence, which can leave them more susceptible to it. The findings uncover a mechanism by which AI-human interaction creates a snowball effect: small human errors in judgement escalate into much larger ones.


Citations (16)


... " A recent challenge that underscores the importance of such joint consideration is the interaction between humans and artificial intelligence (AI). In human-AI interaction, interactions seem to be increasingly reciprocal, where AI systems not only respond to human input but also shape user behavior in return [12]. Just as nature and nurture, wave and particle, structure and agency have come to be seen as inseparable. ...

Reference:

Beyond Isolation: Towards an Interactionist Perspective on Human Cognitive Bias and AI Bias
How human–AI feedback loops alter human perceptual, emotional and social judgements

Nature Human Behaviour

... However, the market setting is noisy, and there is not much control over the strategic incentives of the Senders or their beliefs about the decision-makers' preferences, making it hard to disentangle various explanations for information transmission and suppression. Another closely related study is Vellani et al. (2024). In an online experiment, they examine the motives for sharing potentially unpleasant information about monetary losses for the receiver. ...

Three diverse motives for information sharing

Communications Psychology

... Our results indicate that participants learned the AI system's bias readily, primarily due to the characteristics of the AI's judgements, but also because of participants' perception of the AI (see Fig. 1f; for extensive discussion, see ref. 62). Specifically, we observed that when participants were told they were interacting with a human when in fact they were interacting with an AI, they learned the AI's bias to a lesser extent than when they believed they were interacting with an AI (although they did still significantly learn the bias). ...

AI-induced Hyper-Learning in Humans
  • Citing Article
  • September 2024

Current Opinion in Psychology

... For instance, knowing approximately at which lag (i.e., the time interval between targets) T2 will occur, either through statistical learning of lag probabilities, symbolic cueing of T2 lags, or even instantaneously processed lag information in a preceding trial, can alleviate the AB deficit (Choi et al., 2012; Hilkenmeier & Scharlau, 2010; Martens & Johnson, 2005; Visser et al., 2014; Yao et al., 2022). However, both the learning and exploitation of temporal regularities are likely to depend on the configuration of temporal context and the associated uncertainty level (Grabenhorst et al., 2021; Nobre & van Ede, 2018; Jazayeri & Shadlen, 2010; Shdeour et al., 2024). A similar dependence on the temporal configuration and its corresponding level of uncertainty can also be inferred in the AB task (Lasaponara et al., 2015). ...

Exposure to temporal variability promotes subsequent adaptation to new temporal regularities
  • Citing Article
  • January 2024

Cognition

... Algorithmic injustice arises when patterns of marginalisation, imprinted in the historical data that shape the training and the testing of the system, produce individual predictive anomalies that, if left unchecked, inform a pernicious feedback loop of further exacerbating future downstream systemic and structural injustice within larger groupings (Kearns and Roth, 2020; Glickman and Sharot, 2022). Algorithmic injustice is aggravated where data are under-representative or exclude certain categories of persons, resulting in the exacerbation of long-standing societal biases that exist in relation to protected features like race and gender and are magnified by virtue of their reach and scale. ...

Biased AI systems produce biased humans
  • Citing Preprint
  • November 2022

... Psychology, neuroscience, and neuroeconomics have debated how the process of deliberation is implemented at the neural level. It has been found that many brain regions track the evidence accumulated for a decision [1][2][3][4][5][6][7][8][9][10][11][12][13], which has been captured by a dominant drift-diffusion model of decision-making [14][15][16]. ...

Evidence integration and decision confidence are modulated by stimulus consistency

Nature Human Behaviour

... Although such direct access accounts were pushed aside in favor of inferential accounts, which could better explain metacognitive errors and illusions (Dunlosky & Metcalfe, 2009; Rhodes, 2016; Schwartz et al., 1997), they have returned in a new form as the result of neuroimaging studies (Kelley et al., 2020). For example, many neuroimaging studies of metacognition, including both metacognition about visual discriminations and about memory, assume that neurological activity in brain regions relevant to the focal task "feeds forward" into the brain regions in the frontal lobe thought to be responsible for metacognitive thinking (e.g., Fleming & Lau, 2014; Kelley et al., 2020, 2021; O'Bryan et al., 2018; Rahnev & Fleming, 2019; Rahnev et al., 2022; Rosenbaum et al., 2022). This direct physical mechanism does not rule out a potential role for biases and errors, but current neurocognitive accounts of metacognition do not usually examine or address metacognitive biases and errors (Kelley et al., 2020). ...

The Cognition/Metacognition Trade-Off

Psychological Science

... This is due to conversational search providing integrated and summarized information tailored to specific questions [40]. It aligns better with the high-level abstract thinking used by users in these situations [33,75]. Conversely, web search provides a list of results, requiring users to filter and summarize information themselves. ...

Abstract Thinking Facilitates Aggregation of Information

... To test this prediction, we used a variant of the value-psychophysics paradigm in which participants extract the average value from series of consecutively and rapidly presented numbers (refs. 9, 13, 20-23). ...

Extracting Summary Statistics of Rapid Numerical Sequences

... Future work will be needed to examine the neural mechanism that extracts the drift rate from fluctuating values (sampled from memory or prospective imagination; Bakkour et al., 2019; Poldrack et al., 2001; Schacter et al., 2007) and that reduces the drift rate of more strongly fluctuating items. One interesting possibility is that the effective drift rate might be modulated by the temporal congruency of the evidence in successive samples (Glickman et al., 2020); the higher the value certainty, the lower the variance of the value signals, thus leading to a higher probability that successive samples will provide consistent choice evidence. Perhaps certainty could be tuned by attentional gating in the brain valuation pathways (Schonberg & Katz, 2020), such that more attention to the valuation process would enable higher value certainty. ...

Evidence integration and decision-confidence are modulated by stimulus consistency
  • Citing Preprint
  • October 2020