April 2025
Large Language Models (LLMs) are critical tools for knowledge generation and decision-making in fields such as science, business, governance, and education. However, these models are increasingly prone to Bias, Misinformation, and Errors (BME) due to multi-level feedback loops that exacerbate distortions over iterative training cycles. This paper presents a comprehensive framework for understanding these feedback mechanisms (User-AI Interaction, Algorithmic Curation, and Training Data Feedback) as primary drivers of model drift and information quality decay. We introduce three novel metrics, the Bias Amplification Rate (BAR), the Echo Chamber Propagation Index (ECPI), and the Information Quality Decay (IQD) Score, to quantify and track the impact of feedback-driven bias propagation. Simulations demonstrate how these metrics reveal evolving risks in LLMs over successive iterations. Our findings emphasize the urgency of implementing lifecycle-wide governance frameworks incorporating real-time bias detection, algorithmic fairness constraints, and human-in-the-loop verification to ensure the long-term reliability, neutrality, and accuracy of LLM-generated outputs.
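The abstract does not define BAR, ECPI, or IQD, so the following is only a minimal sketch of how feedback-driven drift metrics of this kind might be tracked across simulated training iterations; the metric formulas, the `feedback_strength` parameter, and the `simulate` function are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch only: the metric definitions below are placeholder
# assumptions, since the abstract names BAR, ECPI, and IQD without defining them.
import random

def simulate(iterations=10, feedback_strength=0.15, seed=0):
    rng = random.Random(seed)
    bias = 0.05     # assumed initial fraction of biased outputs
    quality = 1.0   # assumed initial information-quality score
    history = []
    for t in range(1, iterations + 1):
        prev_bias = bias
        # Assumed feedback loop: a fraction of the next training corpus is
        # model-generated, so existing bias is partially re-amplified.
        recycled = feedback_strength * (1 + rng.uniform(-0.2, 0.2))
        bias = min(1.0, bias * (1 + recycled))
        quality *= (1 - 0.5 * recycled * bias)

        bar = bias / prev_bias            # assumed BAR: bias ratio across successive iterations
        ecpi = 1 - (1 - recycled) ** t    # assumed ECPI: cumulative share of recycled content
        iqd = 1 - quality                 # assumed IQD: quality lost relative to the baseline
        history.append((t, round(bar, 3), round(ecpi, 3), round(iqd, 3)))
    return history

if __name__ == "__main__":
    for t, bar, ecpi, iqd in simulate():
        print(f"iter {t:2d}  BAR={bar}  ECPI={ecpi}  IQD={iqd}")
```

Under these assumptions, the printed trajectory shows BAR staying above 1 while ECPI and IQD rise monotonically, the qualitative pattern the abstract attributes to feedback-driven bias propagation over successive iterations.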