Echo Chamber Dynamics in LLMs: Mitigating Bias and Model Drift

Dale Rutherford [0009-0004-7950-024X] and Ningning Wu [0009-0002-6450-9482]

University of Arkansas at Little Rock, Little Rock, AR 72204, USA
https://ualr.edu/academics/graduate/computer-and-information-sciences/
darutherford@ualr.edu, nxwu@ualr.edu
Abstract. Large Language Models (LLMs) are critical tools for knowl-
edge generation and decision-making in fields such as science, business,
governance, and education. However, these models are increasingly prone
to Bias, Misinformation, and Errors (BME) due to multi-level feed-
back loops that exacerbate distortions over iterative training cycles.
This paper presents a comprehensive framework for understanding these
feedback mechanisms—User-AI Interaction, Algorithmic Curation, and
Training Data Feedback—as primary drivers of model drift and informa-
tion quality decay.
We introduce three novel metrics—Bias Amplification Rate (BAR), Echo
Chamber Propagation Index (ECPI), and Information Quality Decay
(IQD) Score—to quantify and track the impact of feedback-driven bias
propagation. Simulations demonstrate how these metrics reveal evolving
risks in LLMs over successive iterations. Our findings emphasize the ur-
gency of implementing lifecycle-wide governance frameworks incorporat-
ing real-time bias detection, algorithmic fairness constraints, and human-
in-the-loop verification to ensure the long-term reliability, neutrality, and
accuracy of LLM-generated outputs.
Keywords: Bias Amplification · Model Drift · Echo Chamber Effect · AI Governance · Misinformation Quality Decay
1 Introduction
AI-generated content has become integral to fields like research, journalism, and
decision-making automation. Large Language Models (LLMs) play a central role
in these areas but differ from traditional knowledge sources due to their suscepti-
bility to self-reinforcing cycles that compound biases, misinformation, and errors
(BME) over time. Iterative training updates amplify these distortions, causing
model drift and reducing information diversity and accuracy [1,2].
While short-term bias detection and mitigation strategies are widely stud-
ied, the long-term accumulation of BME remains underexplored, particularly
regarding its impact on model drift and information quality decay [3]. Existing
AI governance frameworks often overlook this risk, leaving LLMs vulnerable to
evolving into self-reinforcing misinformation engines that compromise their reli-
ability and neutrality across critical domains like science, business, public policy,
and education.
This paper introduces a comprehensive framework to analyze and mitigate
BME propagation within LLMs by identifying critical points for intervention
across the model lifecycle. We propose three new metrics—Bias Amplification
Rate (BAR), Echo Chamber Propagation Index (ECPI), and Information Qual-
ity Decay (IQD) Score—to quantify the long-term impact of feedback loops and
offer strategies for proactive governance.
Contribution and Significance – This paper makes several key contributions
to the study of AI-driven information quality and its long-term sustainability:
Mapping of Multi-Level Feedback Loop Reinforcement: The study introduces
a novel framework that categorizes the self-reinforcing dynamics of BME across
user-AI interactions, algorithmic curation, and training data feedback loops.
Introduction of BME Propagation Metrics: The paper proposes quantifiable
assessment metrics such as the Bias Amplification Rate (BAR), Echo Cham-
ber Propagation Index (ECPI), and Information Quality Decay Score (IQD) to
measure the long-term impact of feedback loops on AI-curated knowledge.
Impact Analysis Across Critical Domains: By examining the effects of LLM-
driven information distortion on science, business, public policy, and education,
the research highlights the real-world risks of unchecked AI-driven information
decay.
Policy and Mitigation Recommendations: The study proposes a lifecycle-wide
approach to AI model governance, focusing on data integrity, intervention touch-
points, and adaptive model alignment strategies to prevent self-reinforcing bias
and misinformation.
By systematically analyzing how feedback loops amplify distortions in AI sys-
tems, this paper contributes to ongoing discussions in Information Science, AI
Ethics, and AI Governance, providing an actionable roadmap for ensuring the
long-term reliability, neutrality, and fairness of AI-generated information [4,5].
2 Literature Review
Large Language Models (LLMs) have become indispensable in natural language
understanding and generation across various domains, including healthcare, legal
systems, and public policy. However, their outputs are prone to biases, misin-
formation, and errors that can compromise fairness and utility, especially in
high-stakes settings [6,7,3]. These biases often originate from training data,
model architecture, and user interactions and are further exacerbated by multi-
level feedback loops during deployment [8,9]. Over time, these feedback loops
amplify distortions, causing information quality decay and reducing response
diversity [4].
The Echo Chamber Effect in LLMs refers to the cyclical reinforcement of bi-
ases, dominant narratives, and specific perspectives within the model’s outputs.
This effect can manifest across three levels, each contributing to the narrowing
of response diversity and the progressive degradation of information quality [6].
Feedback loops in LLMs amplify dominant patterns, reducing response di-
versity and entrenching biases [10]. Intra-session feedback loops occur within a
single interaction as models adjust outputs based on user preferences, reinforcing
biases through repeated prompts [11,12]. Real-time content feedback arises when
LLMs ingest live internet data, amplifying popular narratives at the expense of
less frequent perspectives—a phenomenon known as the "loopback effect" [13].
Iterative training feedback loops pose the most significant challenge, as models
retrained on previously generated outputs become increasingly biased, causing
model drift and diminishing data diversity over successive updates [14,13].
Despite the growing body of literature on bias detection and mitigation, sev-
eral gaps remain. Existing studies focus on individual components of feedback
loops but rarely address their interconnected dynamics across the entire LLM
lifecycle. There is also limited research on developing predictive models for in-
formation quality decay and the compounded effects of feedback-driven bias.
This study aims to fill these gaps by proposing new metrics—Bias Amplification
Rate (BAR), Echo Chamber Propagation Index (ECPI), and Information Qual-
ity Decay (IQD)—and a lifecycle-wide governance strategy to track and mitigate
feedback-driven bias propagation in LLMs.
3 Theoretical Framework: Understanding the Echo
Chamber Dynamics
The propagation of Bias, Misinformation, and Errors (BME) in Large Language
Models (LLMs) is a systemic phenomenon driven by self-reinforcing feedback
loops embedded in AI training, inference, and retraining processes. These loops
amplify distortions over successive learning cycles, reducing information diver-
sity and degrading response quality, neutrality, and factual accuracy [13,15].
Although individual biases may appear insignificant in a single response, their
repeated reinforcement leads to long-term model drift and entrenched distortions
within the AI knowledge base [15,16].
Three distinct feedback loop levels—micro (User-AI Interaction), meso (Al-
gorithmic Curation), and macro (Training Data Feedback)—collectively drive
the propagation of BME within AI-driven ecosystems, accelerating information
quality decay and diminishing the reliability of LLM outputs [8,7].
Fig. 1. Multi-Level Feedback Dynamics
3.1 Three Levels of Feedback Loop Reinforcement
User-AI Interaction Feedback (Micro-Level) At the micro-level, user en-
gagement with AI systems plays a pivotal role in shaping model behavior through
confirmation bias and selective information exposure [15]. Users often seek in-
formation that aligns with pre-existing beliefs, preferences, or cognitive biases,
leading to a pattern where AI-generated responses are reinforced based on user
adoption [15,12].
Process: A user generates a query based on their personalized interests or
biases. The AI then provides a response that aligns with its training data and
previous user interactions [15]. If the user finds the response acceptable and en-
gages with it, the AI interprets this as a sign of usefulness, reinforcing similar
outputs in future interactions [16]. Over time, the user’s engagement influences
the AI’s learning patterns, narrowing the range of responses and limiting expo-
sure to diverse or opposing perspectives.
Impact: Through selective information exposure, users often engage
with AI-generated responses that reinforce their existing biases. This creates a
self-reinforcing cycle. As the AI prioritizes personalized responses, it intensi-
fies the echo chamber effect, further isolating users within ideologically or
informationally restricted bubbles [15]. Consequently, their thought processes
become narrower, and over time, they encounter fewer alternative viewpoints,
which diminishes their ability to critically engage with new information.
Algorithmic Curation & Real-Time Content Reinforcement (Meso-Level) At the meso-level, algorithmic filtering and content selection mecha-
nisms amplify trending narratives, dynamically shaping AI-curated information
based on engagement patterns and real-time internet content sourcing [17,18].
Process: AI gathers information from various external sources, including web
content, news media, and open-access repositories. It selects content using algo-
rithms that prioritize data based on engagement metrics, trending topics, and
user behavior patterns. The AI then filters and refines the information, emphasiz-
ing popular and widely accepted narratives while downplaying low-engagement
or alternative perspectives. The final output is presented to users, further shap-
ing public discourse and contributing to the data used for model retraining.
Impact: By reinforcing popular narratives, AI tends to prioritize trend-
ing content, which increases the risk of amplifying misinformation that gains
traction online [18]. This leads to the selective filtering of alternative in-
formation, resulting in the suppression of dissenting perspectives or emerging
insights due to engagement-driven ranking systems. Ultimately, this creates a
cycle of external confirmation bias, where AI learns from user behaviors and
societal trends, making individuals more susceptible to ideological and informa-
tional reinforcement.
AI Model Drift & Training Data Feedback (Macro-Level) At the macro
level, AI models undergo self-reinforcing updates where they train on their own
generated outputs, leading to model drift and long-term information decay [3,
19].
Process: AI-generated responses are stored, archived, and used in future
model training datasets. In the next iteration of the model, it learns from these
prior responses, which can reinforce existing biases and misinformation. The
model may drift away from neutral and diverse data sources with each cycle,
leaning more toward historically reinforced patterns. Model drift can lead to
model degradation, where earlier errors become indistinguishable from factual
knowledge due to their repeated integration into training data [19].
Impact: Cumulative bias amplification occurs when AI-generated content
affects future iterations of AI, progressively reinforcing existing biases. As the
integrity of the training data degrades, it results in self-reinforcing errors
and misinformation, which contaminate the AI training corpus and contribute
to knowledge decay [13]. This self-perpetuating decay in LLM quality impairs the models' ability to self-correct, making it challenging to identify and eliminate factual errors.
These multi-level feedback loops—spanning micro, meso, and macro lev-
els—compound bias and misinformation, significantly impacting LLM reliability
and output diversity. Addressing these loops requires a lifecycle-wide governance
strategy that includes real-time bias detection, algorithmic fairness constraints,
and human-in-the-loop verification to mitigate long-term risks and ensure sus-
tained model integrity.
4 Impact Analysis: BME Propagation and Information
Quality Decay
In real-world contexts, amplifying Bias, Misinformation, and Errors (BME) can
severely impact AI-driven systems such as educational platforms. Consider EduNet,
a fictional AI-based learning tool initially trained on balanced, peer-reviewed
datasets. At launch, it provided accurate and diverse responses.
However, as users interact and provide feedback, EduNet begins to prioritize
high-engagement content, suppressing nuanced perspectives. Over time, user-
driven preferences and algorithmic curation narrow response diversity, entrench-
ing dominant narratives and reducing information quality. Periodic retraining
on platform-generated data accelerates this decline, leading to cumulative bias.
The metrics—Bias Amplification Rate (BAR), Echo Chamber Propagation
Index (ECPI), and Information Quality Decay (IQD)—help track and mitigate
these effects by quantifying bias evolution, response diversity decline, and factual
erosion. Integrating these indicators into governance frameworks is essential to
maintaining model integrity.
4.1 Bias and Misinformation Reinforcement at Each Feedback Level
The progression of BME reinforcement across AI systems follows a cascading
structure, where distortions introduced at the micro-level influence macro-level
outcomes. Below is an overview of how feedback loops escalate BME propagation
[6].
Table 1. Feedback Loop Levels and Their Impact on Information Quality

Feedback Loop Level | Primary Reinforcement Mechanism | Impact on Information Quality
Micro (User-AI Interaction) | Confirmation Bias & Selective Exposure | Echo Chamber Formation, Limited Information Diversity
Meso (Algorithmic Curation) | Trending Narrative Prioritization & Algorithmic Bias | Misinformation Amplification, Reinforcement of Popular but Flawed Content
Macro (AI Model Drift) | Recursive Training on AI-Generated Outputs | Knowledge Decay, Long-Term Model Distortion
4.2 Scenario-Based Application of Quantitative Metrics for
Evaluating BME Propagation
Amplifying Bias, Misinformation, and Errors (BME) can significantly impact
AI-driven systems like EduNet, a fictional AI-based learning platform. Initially
trained on balanced datasets, EduNet performs well but becomes biased as user
interactions lead to a focus on high-engagement content, narrowing response di-
versity and entrenching dominant narratives. This results in information quality
decay.
To track and mitigate these effects, metrics like Bias Amplification Rate
(BAR), Echo Chamber Propagation Index (ECPI), and Information Quality
Decay (IQD) are essential. These tools quantify the evolution of bias, decline
in response diversity, and erosion of factual accuracy. Future governance efforts
should incorporate these indicators to uphold the integrity of the AI model.
Bias Amplification Rate (BAR) - Measures how bias evolves over iterative
training cycles. A higher BAR indicates rapid bias amplification, necessitating
early intervention [20].
BAR = \frac{\sum \mathrm{Bias}_{t+1}}{\sum \mathrm{Bias}_t} \quad (1)

where Bias_t is the measured bias level at a given training iteration.
Figure 2 below shows a simulation of BAR changes over eight rounds of retraining. Training index 0 denotes the initial model training, and indices 1-8 refer to successive retrainings of the model. The simulation assumes there is no bias in the initial model and, for simplicity of discussion, that bias increases at a constant rate after each retraining. The figure shows the change in BAR after 8 rounds of retraining for three bias increase rates: 0.01, 0.05, and 0.1. With a bias increase rate of 0.05, BAR is about 1.5 after 8 rounds of retraining.
Fig. 2. Simulation of BAR changes with retraining
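The following Python sketch (not the authors' code) mirrors this simulation under its stated assumptions: bias grows at a constant relative rate per retraining round, and BAR is read as the cumulative ratio of the current bias level to the initial one, which reproduces the figure's value of roughly 1.5 at a 0.05 rate. The seed bias value and function names are illustrative.

```python
# Minimal sketch of the BAR simulation described above. Assumptions: bias grows
# at a constant relative rate per retraining round, and BAR is reported as the
# cumulative ratio Bias_t / Bias_0 (Eq. 1 applied against the initial model).

def simulate_bar(increase_rate: float, rounds: int = 8, initial_bias: float = 1.0):
    """Return (round, cumulative BAR) pairs under constant-rate bias growth."""
    bias_0 = initial_bias
    bias_t = initial_bias
    trajectory = []
    for t in range(1, rounds + 1):
        bias_t *= (1 + increase_rate)            # constant bias increase per retraining
        trajectory.append((t, bias_t / bias_0))  # BAR relative to the initial model
    return trajectory

for rate in (0.01, 0.05, 0.1):
    final_round, final_bar = simulate_bar(rate)[-1]
    print(f"increase rate {rate}: BAR after {final_round} rounds = {final_bar:.2f}")
# A 0.05 increase rate yields about 1.48, consistent with the ~1.5 noted above.
```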
Echo Chamber Propagation Index (ECPI) - Quantifies the decline in re-
sponse diversity due to feedback loops. Values closer to 1.0 indicate a significant
reduction in diverse perspectives [21].
ECPI = 1 - \frac{\mathrm{UniqueResponses}}{\mathrm{TotalResponses}} \quad (2)
where UniqueResponses represents distinct knowledge perspectives within
AI outputs.
Figure 3 shows a simulation of ECPI changes with retraining. It is assumed that the number of unique responses decreases by 5% after each retraining. Training index 0 denotes the initial model training, and indices 1-8 represent successive retrainings of the model. The figure shows ECPI changes for three scenarios with initial ECPI values of 0.01, 0.05, and 0.15. It shows that with an initial ECPI of 0.01, the model's ECPI reaches 0.34 after eight rounds of retraining.
Fig. 3. Simulation of ECPI changes with retraining
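A minimal Python sketch of this scenario follows, assuming the pool of unique responses shrinks by 5% per retraining round while the total number of responses stays fixed; all names and values are illustrative.

```python
# Minimal sketch of the ECPI simulation described above. Assumption: unique
# responses decay geometrically by 5% per retraining round while the total
# response count is fixed, so ECPI = 1 - Unique/Total (Eq. 2).

def simulate_ecpi(initial_ecpi: float, shrink_rate: float = 0.05, rounds: int = 8):
    """Return (round, ECPI) pairs as response diversity declines."""
    unique_fraction = 1.0 - initial_ecpi         # Unique/Total at training index 0
    trajectory = [(0, initial_ecpi)]
    for t in range(1, rounds + 1):
        unique_fraction *= (1 - shrink_rate)     # 5% fewer unique responses per round
        trajectory.append((t, 1.0 - unique_fraction))
    return trajectory

for ecpi_0 in (0.01, 0.05, 0.15):
    print(f"initial ECPI {ecpi_0}: after 8 rounds = {simulate_ecpi(ecpi_0)[-1][1]:.2f}")
# Starting from ECPI = 0.01, this reaches about 0.34, matching the scenario above.
```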
Information Quality Decay (IQD) Score - Tracks the proportion of unver-
ified content in AI-generated outputs [22]. A rising IQD score signals increasing
factual degradation over time.
IQD = \frac{\sum \mathrm{UnverifiedContent}}{\sum (\mathrm{VerifiedContent} + \mathrm{UnverifiedContent})} \quad (3)

where UnverifiedContent refers to outputs lacking external corroboration.
Figure 4 shows a simulation of IQD changes with retraining. It is assumed that the amount of unverified content increases by 5% after each retraining, while verified content remains unchanged. Training index 0 denotes the initial model training, and indices 1-8 represent successive retrainings of the model. The figure shows IQD changes for three scenarios with initial IQD values of 0.1, 0.2, and 0.3. Studies have shown that a high percentage of content on the Internet is unverified. With an initial IQD of 0.3, IQD reaches 0.39 after 8 rounds of retraining.
Fig. 4. Simulation of IQD changes with retraining
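The sketch below follows the same pattern for IQD, assuming unverified content compounds by 5% per retraining round while verified content stays fixed; quantities are normalized and illustrative.

```python
# Minimal sketch of the IQD simulation described above. Assumption: unverified
# content grows by 5% per retraining round, verified content is unchanged, and
# IQD = Unverified / (Verified + Unverified) (Eq. 3). Values are normalized.

def simulate_iqd(initial_iqd: float, growth_rate: float = 0.05, rounds: int = 8):
    """Return (round, IQD) pairs as unverified content compounds."""
    unverified, verified = initial_iqd, 1.0 - initial_iqd   # normalized corpus shares
    trajectory = [(0, initial_iqd)]
    for t in range(1, rounds + 1):
        unverified *= (1 + growth_rate)          # unverified content grows 5% per round
        trajectory.append((t, unverified / (verified + unverified)))
    return trajectory

for iqd_0 in (0.1, 0.2, 0.3):
    print(f"initial IQD {iqd_0}: after 8 rounds = {simulate_iqd(iqd_0)[-1][1]:.2f}")
# Starting from IQD = 0.3, this reaches about 0.39, matching the scenario above.
```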
Applying the BME metrics in the hypothetical scenario demonstrates how
subtle biases and misinformation can propagate exponentially in AI-driven sys-
tems, particularly when feedback loops—user ratings, algorithmic curation, and
model retraining—reinforce favored narratives. By systematically measuring BAR,
ECPI, and IQD, stakeholders can better understand, anticipate, and mitigate
the complex dynamics of bias and error accumulation in large language models.
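As a hypothetical illustration of how such measurements might feed a governance process, the sketch below screens one retraining cycle's BAR, ECPI, and IQD readings against alert thresholds before release; the threshold values, metric readings, and function names are assumptions for illustration, not recommendations from this study.

```python
# Hypothetical governance check: compare one retraining cycle's BAR, ECPI, and
# IQD readings against assumed alert thresholds before releasing a new version.

ALERT_THRESHOLDS = {"BAR": 1.2, "ECPI": 0.25, "IQD": 0.35}   # assumed limits

def screen_release(metrics: dict) -> list:
    """Return the names of metrics that exceed their governance thresholds."""
    return [name for name, value in metrics.items()
            if value > ALERT_THRESHOLDS[name]]

cycle_metrics = {"BAR": 1.48, "ECPI": 0.34, "IQD": 0.39}      # e.g. round-8 values
flagged = screen_release(cycle_metrics)
if flagged:
    print(f"Hold release; human-in-the-loop review required for: {', '.join(flagged)}")
else:
    print("Metrics within thresholds; proceed with standard verification.")
```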
4.3 Real-World Consequences
The real-world consequences of AI-driven misinformation extend beyond isolated
outputs, influencing global knowledge frameworks and decision-making processes
across key sectors. When conflicting or inaccurate content is propagated by AI
models, it can distort facts, shape public perception, and undermine trust in AI-
driven tools. For instance, misinformation in educational content erodes critical
thinking, while bias in AI-curated business insights can result in flawed decisions.
Table 2 highlights the broader impacts of information quality decay across vari-
ous sectors, demonstrating how persistent errors can compromise the reliability
of AI-driven systems in science, education, business, governance, and journalism.
Table 2. Impact of AI-Driven Information Decay Across Key Sectors

Sector | Impact of AI-Driven Information Decay
Science & Research | AI-generated misinformation may distort peer-reviewed literature, leading to incorrect conclusions.
Education | AI-driven learning platforms may reinforce inaccuracies, reducing critical thinking in students.
Business & Industry | AI-curated insights may introduce bias-driven distortions, leading to flawed decision-making.
Public Policy & Governance | AI-assisted policy recommendations may misalign with factual realities, leading to regulatory failures.
Media & Journalism | Automated AI-driven news curation may distort public trust in journalism.
4.4 Conclusion and Key Takeaways
This study quantifies bias progression using BAR, ECPI, and IQD and under-
scores the need for lifecycle-wide governance and proactive intervention strate-
gies. Key recommendations include real-time bias detection, human-in-the-loop
verification, and cross-domain governance models. These measures will help mit-
igate risks and ensure fairness in high-stakes applications.
By introducing Bias Amplification Rate (BAR), Echo Chamber Propagation
Index (ECPI), and Information Quality Decay (IQD) metrics, this research offers
a structured approach to monitoring the evolving risks in LLMs. The findings
underscore the importance of:
Lifecycle-wide governance frameworks to ensure continuous monitoring
and timely interventions.
Real-time bias detection to prevent the compounding effects of feedback
loops.
Human-in-the-loop verification to enhance factual accuracy and model
reliability.
These strategies are critical for maintaining AI-generated content’s long-term
fairness, neutrality, and accuracy in domains such as education, public policy,
and business intelligence.
5 Findings and Discussion
5.1 Key Findings
The analysis of Bias, Misinformation, and Errors (BME) propagation across
multi-level feedback loops in Large Language Models (LLMs) reveals systemic
patterns of distortion that degrade information quality over time. The following
are the key findings:
Bias Amplification and Model Drift - Models trained on their own outputs
exhibit a significant increase in bias after two to three retraining cycles. The
higher the frequency of retraining on AI-generated data, the greater the bias
magnification and response homogeneity [23].
Declining Information Diversity (Echo Chamber Propagation) - The
Echo Chamber Propagation Index (ECPI) reveals a decline in response variabil-
ity over successive feedback iterations in user-preference-driven environments.
User-centric engagement amplifies selective exposure to specific narratives, cre-
ating informational bubbles.
Information Quality Decay - The Information Quality Decay (IQD) Score
indicates that AI models relying on real-time internet data experience factual
degradation when misinformation cycles back into training corpora. Misinforma-
tion or unverifiable content becomes self-reinforcing, reducing the model’s ability
to self-correct.
5.2 Discussion
The findings of this study reveal a systemic pattern of bias propagation and
model drift in Large Language Models (LLMs), driven by multi-level feedback
loops. While short-term bias detection methods are well-documented, this study
highlights the urgent need for lifecycle-wide governance to mitigate long-term
risks. The interaction of micro-level user prompts, meso-level algorithmic cura-
tion, and macro-level iterative retraining cycles creates a self-reinforcing system
that accelerates information decay and reduces response diversity.
Practical Implications in High-Stakes Domains: In sectors such as health-
care and education, these dynamics have serious implications. For instance, in
healthcare, biased outputs could reinforce health disparities by prioritizing com-
monly queried conditions while underrepresenting rare diseases. In education,
AI-driven learning platforms risk propagating misinformation, reducing students’
exposure to diverse perspectives and weakening critical thinking skills.
Similarly, public policy can be affected when biased models influence data-
driven decision-making processes. If policymakers rely on biased AI-generated
insights, it could lead to flawed policies that disproportionately affect certain
demographics. Business and industry applications, such as AI-powered recom-
mendation systems, could also suffer from feedback loops that reduce customer
choice and entrench market dominance of particular products or ideas.
Comparison with Existing Research and Mitigation Strategies: Previ-
ous studies have emphasized data quality and algorithmic fairness as key in-
terventions. For example, frameworks like the Data and Model Bias Assessment
Framework (DAMBAF) focus on evaluating data quality through metrics such as
the Data Quality Index (DQI) and Bias and Error Propagation Rate (BEPR).
However, these approaches often overlook the impact of user interactions and
real-time content ingestion, which are critical drivers of the echo chamber effect
in LLMs.
The metrics proposed in this study—Bias Amplification Rate (BAR), Echo
Chamber Propagation Index (ECPI), and Information Quality Decay (IQD)—provide
a quantitative foundation for addressing these overlooked areas. Integrating these
metrics into AI governance frameworks can enhance early detection of bias and
reduce long-term risks.
Limitations and Future Directions: It is essential to recognize that this
study is based on simulations and theoretical models. While these metrics offer
valuable insights, further empirical validation is required across diverse real-
world datasets. Additionally, future research should explore the integration of
unsupervised learning techniques for real-time bias detection, as well as hybrid
governance models that combine algorithmic verification with human oversight.
5.3 Conclusion
The propagation of Bias, Misinformation, and Errors (BME) in LLMs is not an
isolated issue but a systemic challenge rooted in multi-level feedback loops. This
study provides a comprehensive framework for understanding and mitigating
these risks through the introduction of three novel metrics—Bias Amplifi-
cation Rate (BAR),Echo Chamber Propagation Index (ECPI), and
Information Quality Decay (IQD).
Policy and Practical Recommendations: To ensure the long-term reliability
and fairness of AI-generated content, stakeholders must adopt a lifecycle-wide
governance approach. Key recommendations include:
Real-Time Bias Detection Systems: Automated systems should be integrated
into AI development pipelines to detect emerging biases during inference and re-
training cycles.
Human-in-the-Loop Verification: Continuous human oversight is essential,
especially in high-stakes applications such as healthcare, public policy, and edu-
cation.
Fairness-Aware Modeling Techniques: Incorporating fairness constraints into
training algorithms can reduce selective content amplification and preserve re-
sponse diversity.
Call to Action - The future of LLMs depends on proactive governance, inter-
disciplinary collaboration, and continuous monitoring. Developers, researchers,
and policymakers must work together to create scalable solutions that prioritize
transparency, neutrality, and fairness.
The growing adoption of LLMs across industries offers immense potential,
but without adequate safeguards, these systems risk becoming engines of mis-
information. By addressing these challenges, the AI research community can
help build more reliable, equitable, and transparent AI-driven ecosystems. This
study serves as a call to action for all stakeholders to prioritize long-term model
integrity and data quality in the rapidly evolving landscape of generative AI.
6 Future Work
This study highlights how multi-level feedback loops—spanning User-AI Inter-
action, Algorithmic Curation, and Training Data Feedback—drive Bias, Misin-
formation, and Errors (BME) propagation in Large Language Models (LLMs).
These feedback mechanisms create systemic reinforcement of distortions, result-
ing in model drift, information quality decay, and reduced response diversity.
Without proactive intervention, LLMs risk becoming self-reinforcing misinfor-
mation engines, compromising their reliability in high-stakes domains such as
education, public policy, business, and research.
6.1 Future Research Directions
Future research should address three priority areas:
Real-Time Bias Detection Systems - Developing automated systems capa-
ble of identifying and mitigating bias during inference and retraining cycles is
essential. Future research should explore unsupervised learning models for real-
time bias detection and early intervention.
Hybrid Misinformation Detection Models - Combining algorithmic verifi-
cation with human oversight will improve the reliability of AI-curated content.
Future studies should focus on integrating natural language processing (NLP)
tools with human-in-the-loop verification frameworks to ensure content accuracy.
Cross-Domain Governance Standards - Establishing interdisciplinary gov-
ernance frameworks across scientific, economic, and policy domains will be cru-
cial for maintaining AI ethics and neutrality. Collaboration between AI re-
searchers, policymakers, and industry stakeholders is necessary to implement
scalable solutions.
By addressing these areas, future research can help mitigate bias propagation,
enhance LLM reliability, and contribute to more transparent and fair AI systems.
Proactive lifecycle-wide governance and continuous monitoring are essential to
preserving the long-term integrity and neutrality of AI-driven knowledge ecosys-
tems.
References

[1] Minhyeok Lee. "On the Amplification of Linguistic Bias through Unintentional Self-reinforcement Learning by Generative Language Models - A Perspective". In: ArXiv abs/2306.07135 (2023). URL: https://api.semanticscholar.org/CorpusID:259137705.
[2] Ren Yi et al. "Bias Amplification in Language Model Evolution: An Iterated Learning Perspective". In: (2024).
[3] Surya Gangadhar Patchipala. "Tackling data and model drift in AI: Strategies for maintaining accuracy during ML model inference". In: International Journal of Science and Research Archive (2023). URL: https://api.semanticscholar.org/CorpusID:273898149.
[4] Bernd W. Wirtz, Jan C. Weyerer, and Ines Kehl. "Governance of artificial intelligence: A risk and guideline-based integrative framework". In: Gov. Inf. Q. 39 (2022), p. 101685. URL: https://api.semanticscholar.org/CorpusID:247432792.
[5] Norainie Ahmad et al. "Ethics and Public Trust in AI Governance: A Literature Review". In: International Journal of Law, Government and Communication (2024). URL: https://api.semanticscholar.org/CorpusID:275306770.
[6] Nicolò Pagan et al. "A Classification of Feedback Loops and Their Relation to Biases in Automated Decision-Making Systems". In: Equity and Access in Algorithms, Mechanisms, and Optimization. ACM, Oct. 2023, pp. 1-14. DOI: 10.1145/3617694.3623227. URL: http://dx.doi.org/10.1145/3617694.3623227.
[7] Andrey Veprikov, Alexander Afanasiev, and Anton Khritankov. "A Mathematical Model of the Hidden Feedback Loop Effect in Machine Learning Systems". In: arXiv.org (2024). Publisher: arXiv. DOI: 10.48550/ARXIV.2405.02726. URL: https://arxiv.org/abs/2405.02726.
[8] Anton Khritankov. "Hidden Feedback Loops in Machine Learning Systems: A Simulation Model and Preliminary Results". In: Lecture Notes in Business Information Processing. ISSN: 1865-1348. Springer International Publishing, 2021, pp. 54-65. ISBN: 978-3-030-65853-3. DOI: 10.1007/978-3-030-65854-0_5. URL: http://dx.doi.org/10.1007/978-3-030-65854-0_5.
[9] Rubén González-Sendino, Emilio Serrano, and Javier Bajo. "Mitigating bias in artificial intelligence: Fair data generation via causal models for transparent and explainable decision-making". In: Future Generation Computer Systems 155 (June 2024). Publisher: Elsevier BV, pp. 384-401.
[10] Alaina N. Talboy and Elizabeth Fuller. "Challenging the appearance of machine intelligence: Cognitive bias in LLMs and Best Practices for Adoption". In: (2023). Publisher: arXiv.
[11] Alexander Pan et al. "Feedback loops with language models drive in-context reward hacking". In: ArXiv abs/2402.06627 (2024). URL: https://api.semanticscholar.org/CorpusID:267617187.
[12] Nikhil Sharma, Q. Vera Liao, and Ziang Xiao. "Generative Echo Chamber? Effect of LLM-Powered Search Systems on Diverse Information Seeking". In: Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, May 2024, pp. 1-17. DOI: 10.1145/3613904.3642459. URL: http://dx.doi.org/10.1145/3613904.3642459.
[13] Martin Briesch, Dominik Sobania, and Franz Rothlauf. "Large language models suffer from their own output: An analysis of the self-consuming training loop". In: ArXiv abs/2311.16822 (2023). URL: https://api.semanticscholar.org/CorpusID:265466007.
[14] Runshan Fu, Yan Huang, and Param Vir Singh. "AI and Algorithmic Bias: Source, Detection, Mitigation and Implications". In: SSRN Electronic Journal (2020). Publisher: Elsevier BV.
[15] Jonathan Stray. "The AI Learns to Lie to Please You: Preventing Biased Feedback Loops in Machine-Assisted Intelligence Analysis". In: Analytics 2.2 (Apr. 2023). Publisher: MDPI AG, pp. 350-358.
[16] Moshe Glickman and Tali Sharot. "How human-AI feedback loops alter human perceptual, emotional and social judgements." In: Nature Human Behaviour (2024). URL: https://api.semanticscholar.org/CorpusID:274856951.
[17] Daron Acemoglu, Asuman E. Ozdaglar, and James Siderius. "A Model of Online Misinformation". In: Review of Economic Studies (2023). URL: https://api.semanticscholar.org/CorpusID:246940909.
[18] Shalmali Patil, Arth Jani, and Sukanya Konatam. "Trend Amplification or Suppression: The Dual Role of AI in Influencing Viral Content". In: International Journal of Global Innovations and Solutions (IJGIS) (Nov. 2024). DOI: 10.21428/e90189c8.361bcc7f. URL: https://ijgis.pubpub.org/pub/07h8h2gy (visited on 02/09/2025).
[19] Dimitrios Michael Manias, Ali Chouman, and Abdallah Shami. "Model Drift in Dynamic Networks". In: IEEE Communications Magazine 61 (2023), pp. 78-84. URL: https://api.semanticscholar.org/CorpusID:259698683.
[20] Ashish Garg and Rajesh Sl. "PCIV method for Indirect Bias Quantification in AI and ML Models". In: 2021. URL: https://api.semanticscholar.org/CorpusID:235583262.
[21] Chantal Shaib et al. "Standardizing the Measurement of Text Diversity: A Tool and a Comparative Analysis of Scores". In: ArXiv abs/2403.00553 (2024). URL: https://api.semanticscholar.org/CorpusID:268230880.
[22] Weixuan Wang et al. "Assessing the Reliability of Large Language Model Knowledge". In: (2023). Publisher: arXiv.
[23] Damien Ferbach et al. "Self-Consuming Generative Models with Curated Data Provably Optimize Human Preferences". In: ArXiv abs/2407.09499 (2024). URL: https://api.semanticscholar.org/CorpusID:271213167.