Book · PDF available

Algorithmic Democracy: A Critical Perspective Based on Deliberative Democracy


Abstract

Drawing on a two-way understanding of democracy, this book is intended as an aid for thinking through viable alternatives to the current state of democracy with regard to its ethical foundations and the moral knowledge implicit in, or assumed by, the way we perceive and understand democracy. It aims to stimulate reflection and discussion on the premise that, by addressing what we understand democracy to be, we inevitably influence the reality known as democracy. Democracy's evident regression in today's world makes this all too apparent: it has become hostage to all kinds of autocracies and technopopulisms, supported to a greater or lesser extent by the current algorithmic revolution.
... On the one hand, through the operating mechanisms of the digital public sphere, which are strongly tied to social influences, attempts have been made to impose a spiral of silence (Noelle-Neumann, 2010) that pushes individuals' opinions to converge and align around the majority opinion or, failing that, ensures that the value, importance, and weight of dissenting opinions in the digital public sphere are silenced, diminished, or eliminated (Sunstein, 2017, 2019). On the other hand, instruments and infrastructures have been developed for injecting synthetic data and content into the digital public sphere and disseminating it artificially, in order to generate noise, disrupt the algorithms of digital platforms, artificially distort public opinion, and create a «synthetic public opinion» (Saura García and Calvo, 2024). ...
... Over time, however, advancing hyperconnectivity, datafication, and algorithmization across every domain, task, and function of democracy in modern societies has reversed this situation, producing negative impacts on citizens' sovereignty, autonomy, and self-determination, deforming free and equal democratic participation, and concentrating the digital public sphere in a small group of platforms and digital services owned by large digital corporations. This has undermined the basic conditions for the proper functioning of democratic systems (Saura García, 2024; Habermas, 2023; García-Marzá and Calvo, 2024). ...
Article
Full-text available
This article examines the strategic shift in the manipulation of public opinion through algorithmic colonization, technological imperialism, data generation, and the creation of synthetic content. These are used by governmental organizations, large technology corporations, and economic powers to threaten the integrity of democracy by altering processes of rationalization and meaning-making, as well as the communication flows involved, leading to the emergence of social pathologies, anomalies, and distortions in the digital public sphere. The aim of this article is to present the main strategies linked to disruptive digital technologies for meaning-making and the manipulation of public opinion and, in particular, to critique the impacts and challenges that this new algorithmized, synthetized, and massive democratic context poses for public opinion itself and for democracy.
... GenAI models, while transformative, introduce complex challenges, particularly in ensuring transparency, accountability, and equity in their outputs and processes [21]. The notion of "trustworthy AI" must move beyond technical compliance to consider whose trust is prioritized, what ethical frameworks are employed, and how diverse stakeholders, including minority communities, are included in decision-making processes [124][125][126][127][128][129][130][131][132]. ...
... In a decentralized environment, the lack of a central authority to verify and validate content further amplifies the problem, necessitating the development of robust detection methodologies. However, it remains to be seen whether decentralization actually implies distributing power or, by contrast, is concentrated in a few tech-savvy elites [130,200]. ...
Article
Full-text available
As generative AI (GenAI) technologies proliferate, ensuring trust and transparency in digital ecosystems becomes increasingly critical, particularly within democratic frameworks. This article examines decentralized Web3 mechanisms—blockchain, decentralized autonomous organizations (DAOs), and data cooperatives—as foundational tools for enhancing trust in GenAI. These mechanisms are analyzed within the framework of the EU’s AI Act and the Draghi Report, focusing on their potential to support content authenticity, community-driven verification, and data sovereignty. Based on a systematic policy analysis, this article proposes a multi-layered framework to mitigate the risks of AI-generated misinformation. Specifically, as a result of this analysis, it identifies and evaluates seven detection techniques of trust stemming from the action research conducted in the Horizon Europe Lighthouse project called ENFIELD: (i) federated learning for decentralized AI detection, (ii) blockchain-based provenance tracking, (iii) zero-knowledge proofs for content authentication, (iv) DAOs for crowdsourced verification, (v) AI-powered digital watermarking, (vi) explainable AI (XAI) for content detection, and (vii) privacy-preserving machine learning (PPML). By leveraging these approaches, the framework strengthens AI governance through peer-to-peer (P2P) structures while addressing the socio-political challenges of AI-driven misinformation. Ultimately, this research contributes to the development of resilient democratic systems in an era of increasing technopolitical polarization.
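None of the detection techniques listed in the abstract above come with reference code here; as a purely illustrative sketch of what blockchain-based provenance tracking (technique ii) amounts to in practice, the following toy append-only hash chain records a digest of each piece of content together with a commitment to the previous entry, so that any later tampering with the ledger is detectable. The class name `ProvenanceLedger` and its fields are hypothetical, not taken from the ENFIELD project.

```python
import hashlib
import json
import time


def _hash(record: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding of a record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


class ProvenanceLedger:
    """Toy append-only hash chain: each entry commits to the content's
    digest and to the previous entry's hash, so altering any earlier
    entry invalidates every hash from that point on."""

    def __init__(self):
        self.entries = []

    def record(self, content: bytes, origin: str) -> str:
        """Append a provenance entry for `content` and return its hash."""
        entry = {
            "content_sha256": hashlib.sha256(content).hexdigest(),
            "origin": origin,
            "timestamp": time.time(),
            "prev": self.entries[-1]["hash"] if self.entries else None,
        }
        entry["hash"] = _hash({k: v for k, v in entry.items() if k != "hash"})
        self.entries.append(entry)
        return entry["hash"]

    def verify(self) -> bool:
        """Recompute every entry's hash and check the chain links."""
        prev = None
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev or e["hash"] != _hash(body):
                return False
            prev = e["hash"]
        return True
```

A real deployment would replace this single-process list with a replicated blockchain so that no single party can rewrite the ledger; the sketch only shows the tamper-evidence idea the abstract's "provenance tracking" refers to.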
... Drawing on broader discourses about the use of AI in political contexts and decision support in other domains, it examines the opportunities, possibilities, and risks of employing AI for political decision-making (Fitria Fatimah, 2024; Hudson, 2018; McEvoy, 2019; Vera Hoyos and Cárdenas Marín, 2024). It also examines potential consequences of such AI use, utilizing concepts like "algorithmic democracy" (García-Marzá and Calvo, 2024), develops legal and ethical frameworks for the lawful and responsible use of such systems (Fitria Fatimah, 2024; Kuziemski and Misuraca, 2020), and investigates public reactions to the potential deployment of prototypical AI-systems in political decision-making contexts (Starke and Lünich, 2020). The third research field delves into the relationship between political decision-making and uncertainty. ...
Article
Full-text available
Political decision-making is often riddled with uncertainties, largely due to the complexities and fluid nature of contemporary societies, which make it difficult to predict the consequences of political decisions. Despite these challenges, political leaders cannot shy away from decision-making, even when faced with overwhelming uncertainties. Thankfully, there are tools that can help them manage these uncertainties and support their decisions. Among these tools, Artificial Intelligence (AI) has recently emerged. AI-systems promise to efficiently analyze complex situations, pinpoint critical factors, and thus reduce some of the prevailing uncertainties. Furthermore, some of them have the power to carry out in-depth simulations with varying parameters, predicting the consequences of various political decisions, and thereby providing new certainties. With these capabilities, AI-systems prove to be a valuable tool for supporting political decision-making. However, using such technologies for certainty purposes in political decision-making contexts also presents several challenges—and if these challenges are not addressed, the integration of AI in political decision-making could lead to adverse consequences. This paper seeks to identify these challenges through analyses of existing literature, conceptual considerations, and political-ethical-philosophical reasoning. The aim is to pave the way for proactively addressing these issues, facilitating the responsible use of AI for managing uncertainty and supporting political decision-making. The key challenges identified and discussed in this paper include: (1) potential algorithmic biases, (2) false illusions of certainty, (3) presumptions that there is no alternative to AI proposals, which can quickly lead to technocratic scenarios, and (4) concerns regarding human control.
... On the other hand, as a response to the new control techniques based on intelligent algorithms that are shaking the foundations of democracy and the bases of citizenship (Caffarena, 2017; Sastre and Gordo, 2019). Finally, as a counter-power against the mass surveillance that governments and corporations exercise over civil society through cyber-physical ecosystems and the phenomena underlying them: hyperconnectivity, datafication, and algorithmization (van Dijck, 2014; Morozov, 2018; Calvo, 2020; García-Marzá and Calvo, 2024). ...
Article
Full-text available
This study critically analyzes the current sociopolitical and economic context of mass surveillance in order to reconstruct the keys and conditions of possibility that could steer its development in a just and responsible direction. On the one hand, the study warns of the danger of the disruptive impacts that mass social surveillance, exercised by states and large technology corporations, produces on society and its different functional spheres. On the other, more concretely, it suggests that this mass social surveillance is giving rise to despotic practices that pervert democratic processes, shrink spaces of freedom, and widen inequality gaps. Finally, from its normative presuppositions and conditions of possibility, it offers guidelines for confronting mass social surveillance: the promotion of a strong, dynamic, and critical civil society that acts as a counter-power to the state and big tech.
... Through initiatives presented at high-profile events such as the Smart City Expo World Congress, Barcelona has established itself as a leading example of responsible and ethical adoption of AI in public services [21]; crucially, though, ensuring that AI systems uphold human rights and operate ethically remains a significant challenge. To address this, Barcelona has implemented protocols designed to integrate human rights safeguards throughout the deployment of AI systems. ...
Article
Full-text available
This review paper examines how Generative AI (GAI) and Large Language Models (LLMs) can transform smart cities in the Industry 5.0 era. Through selected case studies and portions of the literature, we analyze these technologies' impact on industrial processes and urban management. The paper targets GAI as an enabler for industrial optimization and predictive maintenance, underlining how domain experts can work with LLMs to improve municipal services and citizen communication, while addressing the practical and ethical challenges in deploying these technologies. We also highlight promising trends, as reflected in real-world case studies ranging from factories to city-wide test-beds, and identify pitfalls to avoid. Widespread adoption of GAI still faces challenges, including infrastructure gaps and a lack of specialized knowledge, that limit proper implementation. While LLMs enable new services for citizens in smart cities, they also expose certain privacy issues, which we aim to investigate in this study. Finally, as a way forward, the paper suggests future research directions covering new ethical AI frameworks and long-term studies on societal impacts. Our paper is a starting point for industrial pioneers and urban developers to navigate the complexity of GAI and LLM integration, balancing the demands of technological innovation on one hand and ethical responsibility on the other.
... In a decentralized environment, the lack of a central authority to verify and validate content further amplifies the problem, necessitating the development of robust detection methodologies. However, it remains to be seen whether decentralization actually implies distributing power or is instead concentrated in a few tech-savvy elites [130,200]. ...
Preprint
Full-text available
As generative AI (GenAI) technologies proliferate, the need for trust and transparency in digital ecosystems intensifies, especially within democratic frameworks. This article investigates decentralized Web3 mechanisms, specifically those based on blockchain, decentralized autonomous organizations (DAOs), and data cooperatives, to establish robust detection techniques fostering trust in GenAI. These mechanisms are explored against the backdrop of the EU-funded Horizon Europe lighthouse project on Trustworthy AI entitled ENFIELD as foundational elements that support content authenticity, community-driven verification, and data sovereignty, aligning with the EU’s AI Act and Draghi Report policy framework. After a state-of-the-art deep analysis, this article presents a multi-layered framework to address the risks associated with AI-generated misinformation encompassing seven detection techniques of trust, including (i) federated learning for decentralized AI detection, (ii) blockchain-based provenance tracking, (iii) Zero-Knowledge Proofs for content authentication, (iv) DAOs for crowdsourced verification, (v) AI-powered digital watermarking, (vi) explainable AI (XAI) for content detection, and (vii) Privacy-Preserving Machine Learning (PPML). This approach not only strengthens AI governance through P2P frameworks but also mitigates the socio-political impacts of AI on public trust, offering a pathway through these seven techniques to allow resilient democratic systems in an era of increasing technopolitical polarization.
Article
Internet public spheres have been analyzed on the basis of a positive concept of democracy that provides variables according to which the Internet may or may not be an appropriate place for developing citizen competencies. This study proposes a materialist analysis of the political and social functioning of the Internet that immanently derives its determinations as well as its own contradictions. To that end, it takes as a starting point the presuppositions of the theory of the public sphere and the blockages of experience posed by Kluge and Negt. It then addresses Internet public spheres in the process of political legitimation. Finally, it concludes that the levels of experience involved in political legitimation point to a processing of human experience in which it is no longer only labor that is really subsumed by capital, but human life itself.
Article
Full-text available
Recent years have seen artificial intelligence (AI) technologies from large companies increasingly privatize people’s data, creating asymmetrical and undemocratic economic relations. Specifically, generative AI disseminates false information, distorts perceptions, and transforms the free and critical cultural public sphere into one that is privatized and undemocratic. This study examines the major Screen Actors Guild-American Federation of Television and Radio Artists strike in Hollywood in May 2023, focusing on the issues raised against actors’ digital replicas from a democratic perspective. The introduction of this technology, aiming to enhance the audience’s immersive experience, reinforces the cultural imperialistic and neoliberal hierarchical relation between companies and actors. Moreover, this study explains how digital replicas relegate actors to a subjugated state, damage their image, and demote them to the periphery of filmmaking, thereby resulting in undemocratic problems that deprive them of their subjectivity and creativity. The main findings are as follows: (1) Actors’ data, embedded in the data capitalism structure, are used to generate their digital replicas, thus causing economic and structural inequalities. Video companies’ monopolization and unapproved use of such data lead to the loss of these actors’ freedom and humanity. (2) Unauthorized digital replicas of actors through deepfakes globally damage their public image and social authority, and such false body representation has negative cultural and ontological effects on them. (3) The use of digital replicas excludes actors from the filmmaking process, eliminating their interaction and creativity in relation to other creators and audiences and preventing their participation in the critical and cultural public sphere of cinema. As humans and generative AI continue to coexist, using digital replicas with actors’ legal consent is important as it ensures their independence and expressive potential. 
This will develop a democratic film industry that enhances the interactive cinema–media cultural public sphere.