Olya Kudina’s research while affiliated with Delft University of Technology and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added these works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping.

Publications (18)


AI versus AI for democracy: exploring the potential of adversarial machine learning to enhance privacy and deliberative decision-making in elections
  • Article
  • Full-text available

October 2024 · 15 Reads · AI and Ethics

Olya Kudina · Ibo Van de Poel

Our democratic systems have been challenged by the proliferation of artificial intelligence (AI) and its pervasive usage in our society. For instance, by analyzing individuals’ social media data, AI algorithms may develop detailed user profiles that capture individuals’ specific interests and susceptibilities. These profiles are leveraged to derive personalized propaganda, with the aim of influencing individuals toward specific political opinions. To address this challenge, the value of privacy can serve as a bridge, as having a sense of privacy can create space for people to reflect on their own political stance prior to making critical decisions, such as voting in an election. In this paper, we explore a novel approach by harnessing the potential of AI to enhance the privacy of social media data. By leveraging adversarial machine learning, i.e., “AI versus AI,” we aim to fool AI-generated user profiles to help users hold a stake in resisting political profiling and preserve the deliberative nature of their political choices. More specifically, our approach probes the conceptual possibility of infusing people’s social media data with minor alterations that can disturb user profiling, thereby reducing the efficacy of the personalized influences generated by political actors. Our study delineates the boundary of ethical and practical implications associated with this ‘AI versus AI’ approach, highlighting the factors for the AI and ethics community to consider in facilitating deliberative decision-making toward democratic elections.
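The abstract describes the "AI versus AI" idea only at a conceptual level: infusing data with minor alterations that disturb an automated profiler. As a rough illustration of the underlying adversarial-machine-learning technique, the sketch below applies an FGSM-style perturbation to a toy profiling classifier. The logistic model, its weights, and the perturbation budget are all hypothetical stand-ins, not the authors' method:

```python
import numpy as np

# Toy stand-in for a political-profiling model: a fixed logistic-regression
# classifier over feature vectors derived from a user's social media activity.
# All weights and parameters here are hypothetical illustrations.
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # profiler weights (hypothetical)
b = 0.1                  # profiler bias (hypothetical)

def profile_score(x):
    """Profiler's confidence that the user fits a targeted profile."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def perturb(x, eps=0.3):
    """FGSM-style minor alteration: step against the gradient of the
    profiler's score, bounded by eps per feature, so the profile becomes
    less confident without large changes to the underlying data."""
    p = profile_score(x)
    grad = p * (1.0 - p) * w          # d(score)/dx for the logistic model
    return x - eps * np.sign(grad)

x = rng.normal(size=8)    # a user's feature vector
x_adv = perturb(x)        # minimally altered version; the profiler's score drops
print(profile_score(x), profile_score(x_adv))
```

In practice such perturbations would have to operate on text or behavioural data rather than raw feature vectors, and typically against a black-box profiler, which is exactly where the ethical and practical boundary questions the paper examines arise.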


Large language models, politics, and the functionalization of language

September 2024 · 51 Reads · AI and Ethics

This paper critically examines the political implications of Large Language Models (LLMs), focusing on the individual and collective ability to engage in political practices. The advent of AI-based chatbots powered by LLMs has sparked debates on their democratic implications. These debates typically focus on how LLMs spread misinformation and thus hinder the evaluative skills of people essential for informed decision-making and deliberation. This paper suggests that beyond the spread of misinformation, the political significance of LLMs extends to the core of political subjectivity and action. It explores how LLMs contribute to political de-skilling by influencing the capacities of critical engagement and collective action. Put differently, we explore how LLMs shape political subjectivity. We draw from Arendt’s distinction between speech and language and Foucault’s work on counter-conduct to articulate in what sense LLMs give rise to political de-skilling, and hence pose a threat to political subjectivity. The paper concludes by considering how to reconcile the impact of LLMs on political agency without succumbing to technological determinism, and by pointing to how the practice of parrhesia enables one to form one’s political subjectivity in relation to LLMs.




Figures:
  • Value dynamism
  • Value adaptation: the reinterpretation of a value arising from value dynamism becomes generalized and results in a change at the generalized level
  • Value emergence: in some cases, existing values may be irrelevant (rather than absent), and the articulation of a new value in judgement may become generalized, resulting in a change at the generalized level
  • Value change dynamics in the Google Glass case
  • Value change dynamics in the “right to be forgotten” case

In the first three diagrams, the terms on the left refer to the corresponding notions of value in Dewey’s writings.

Understanding Technology-Induced Value Change: a Pragmatist Proposal

June 2022 · 772 Reads · 22 Citations · Philosophy & Technology

We propose a pragmatist account of value change that helps to understand how and why values sometimes change due to technological developments. Inspired by John Dewey's writings on value, we propose to understand values as evaluative devices that carry over from earlier experiences and that are to some extent shared in society. We discuss the various functions that values fulfil in moral inquiry and propose a conceptual framework that helps to understand value change as the interaction between three manifestations of value distinguished by Dewey, i.e., "immediate value," "values as the result of inquiry" and "generalized values." We show how this framework helps to distinguish three types of value change: value dynamism, value adaptation, and value emergence, and we illustrate these with examples from the domain of technology. We argue that our account helps to better understand how technology may induce value change, namely through the creation of what Dewey calls indeterminate situations, and we show how our account can integrate several insights on (techno)moral change offered by other authors.


Bridging values: Finding a balance between privacy and control. The case of Corona apps in Belgium and the Netherlands

February 2022 · 59 Reads · 5 Citations · Journal of Contingencies and Crisis Management

This paper focuses on two examples of the introduction and use of COVID-19 contact tracing apps in the Netherlands (CoronaMelder) and Belgium (Coronalert). It aims to offer a critical, sociotechnical perspective on tracing apps to understand how social, technical, and institutional dimensions form the ingredients for increasing surveillance. While it is still too early to gauge the implications of surveillance-related initiatives in the fight against COVID-19, the “technology theatre” put in place worldwide has already shown that very little can be done to prevent the deployment of technologies, even if their effectiveness is yet to be determined. The context-specific perspective outlined here offers insights into the interests of many different actors involved in the technology theatre, for instance, the corporate interest in sociotechnical frameworks (both apps rely on the Google/Apple exposure notifications application programming interface). At the same time, our approach seeks to go beyond dystopian narratives that do not consider important sociocultural dimensions, such as choices made during app development and implementation to mitigate potential negative impacts on privacy.


Moral Uncertainty in Technomoral Change: Bridging the Explanatory Gap

January 2022 · 92 Reads · 36 Citations · Perspectives on Science

This paper explores the role of moral uncertainty in explaining the morally disruptive character of new technologies. We argue that existing accounts of technomoral change do not fully explain its disruptiveness. This explanatory gap can be bridged by examining the epistemic dimensions of technomoral change, focusing on moral uncertainty and inquiry. To develop this account, we examine three historical cases: the introduction of the early pregnancy test, the contraception pill, and brain death. The resulting account highlights what we call “differential disruption” and provides a resource for fields such as technology assessment, ethics of technology, and responsible innovation.


Speak, memory: the postphenomenological analysis of memory-making in the age of algorithmically powered social networks

January 2022 · 86 Reads · 3 Citations · Humanities and Social Sciences Communications

This paper explores the productive role that social network platforms, such as Facebook, play in the practice of memory-making. While such platforms facilitate interaction across distance and time, they also solidify human self-expression and memory-making by systematically confronting users with their digital past. By relying on the framework of postphenomenology, the analysis scrutinizes the mediating role of the Memories feature of Facebook, powered by recurring algorithmic scheduling and devoid of meaningful context. More specifically, it shows how this technological infrastructure mediates the concepts of memory, control and space, evoking a specific interpretation of the values of time, remembering and forgetting. As such, apart from preserving memories, Facebook appears as their co-producer, guiding users in determining the criteria for remembering and forgetting. The paper finishes with suggestions on how to critically appropriate the memory-making features of social network platforms in ways that would both enable their informed use and account for their mediating role in co-shaping good memories.


What is morally at stake when using algorithms to make medical diagnoses? Expanding the discussion beyond risks and harms

January 2022 · 66 Reads · 18 Citations · Theoretical Medicine and Bioethics

In this paper, we examine the qualitative moral impact of machine learning-based clinical decision support systems in the process of medical diagnosis. To date, discussions about machine learning in this context have focused on problems that can be measured and assessed quantitatively, such as by estimating the extent of potential harm or calculating incurred risks. We maintain that such discussions neglect the qualitative moral impact of these technologies. Drawing on the philosophical approaches of technomoral change and technological mediation theory, which explore the interplay between technologies and morality, we present an analysis of concerns related to the adoption of machine learning-aided medical diagnosis. We analyze anticipated moral issues that machine learning systems pose for different stakeholders, such as bias and opacity in the way that models are trained to produce diagnoses, changes to how health care providers, patients, and developers understand their roles and professions, and challenges to existing forms of medical legislation. Albeit preliminary in nature, the insights offered by the technomoral change and the technological mediation approaches expand and enrich the current discussion about machine learning in diagnostic practices, bringing distinct and currently underexplored areas of concern to the forefront. These insights can contribute to a more encompassing and better informed decision-making process when adapting machine learning techniques to medical diagnosis, while acknowledging the interests of multiple stakeholders and the active role that technologies play in generating, perpetuating, and modifying ethical concerns in health care.


Design Considerations for Data Daemons: Co-creating Design Futures to Explore Ethical Personal Data Management

June 2021 · 46 Reads

Wiebke Toussaint · Alejandra Gomez Ortega · [...]
Mobile applications and online service providers track our virtual and physical behaviour more actively and with a broader scope than ever before. This has given rise to growing concerns about ethical personal data management. Even though regulation and awareness around data ethics are increasing, end-users are seldom engaged when defining and designing what a future with ethical personal data management should look like. We explore a participatory process that uses design futures, the Future workshop method and design fictions to envision ethical personal data management with end-users and designers. To engage participants effectively, we needed to bridge their differential expertise and make the abstract concepts of data and ethics tangible. By concretely presenting personal data management and control as fictitious entities called Data Daemons, we created a shared understanding of these abstract concepts, and empowered non-expert end-users and designers to become actively engaged in the design process.


Citations (14)


... The adaptive alignment framework we proposed in this paper follows a retroactive approach to pluralistic AI, with some accompanying implications. We consider these implications through the sociotechnical systems perspective; in matters related to human users, AI algorithms are inseparable from the sociotechnical systems within which they are embedded [Kudina and van de Poel, 2024]. ...

Reference:

Adaptive Alignment: Dynamic Preference Adjustments via Multi-Objective Reinforcement Learning for Pluralistic AI
A sociotechnical system perspective on AI
  • Citing Article
  • June 2024

Minds and Machines

... Previous research has shown performance differences in the recognition rate of ASR models due to speaker variations in demographic attributes. Some works show better recognition for male speech [36,16,17,27], while most indicate better performance for female speech [14,32,28,1,25,13,15] and other report mixed findings or no (significant) differences [37,26,8]. Age-related disparities also surface, with studies generally showing superior ASR performance for teenagers over children [14,13,15]. ...

Towards inclusive automatic speech recognition

Computer Speech & Language

... As our remembering practices and experiences of the past are mediated, memorial technologies reveal and affirm specific mnemotechnic values related to how history might be brought to human attention, consideration, and conceptualization (see Kudina, 2021;Kudina & Verbeek, 2019;Van de Poel & Kudina, 2022). In this context, mnemotechnic values are normative concerns that arise from collective remembering: they are able to guide people in defining the criteria and standards for remembering and forgetting. ...

Understanding Technology-Induced Value Change: a Pragmatist Proposal

Philosophy & Technology

... The 'deeper penetration of social control into the social body' (Cohen, 1979, p. 356), revealed by the COVID-19's version of the all-seeing Panopticon (Bentham, 1995;Foucault, 1991), was reflected in the increased use of technologies, such as drones and apps, to monitor human behaviour. Belgium, for example, used drones to warn or remind citizens in parks of the new measures as well as to discipline and alert others (Van Brakel et al., 2022;Stonor, 2020). While the new technology was speedily incorporated into the fight against the virus, this also raised several concerns, particularly regarding privacy and data protection. ...

Bridging values: Finding a balance between privacy and control. The case of Corona apps in Belgium and the Netherlands

Journal of Contingencies and Crisis Management

... An example of this can be found in an ongoing conversation on techno-moral disruption (Nickel et al., 2022). A change in circumstances can be such that a certain understanding of a value is no longer wholly fitting. ...

Moral Uncertainty in Technomoral Change: Bridging the Explanatory Gap

Perspectives on Science

... We assist in an interaction between the human and the computational in a process in which algorithms are social actors that participate in the definition of collective memories. Artificial intelligence brings people into new relations with the past by recreating what counts as historical evidence (Kudina, 2022). Based on these rationales, the outputs produced by artificial intelligence might be seen as speculative narratives and semantic artefacts resulting from the industrialization of automatic and cheap textual occurrences (Floridi & Chiriatti, 2020). ...

Speak, memory: the postphenomenological analysis of memory-making in the age of algorithmically powered social networks

Humanities and Social Sciences Communications

... A critical challenge involves ensuring data privacy and security in IoT systems, which handle highly sensitive patient information. IoT devices are often vulnerable to cyberattacks, which could result in data breaches that compromise patient confidentiality and trust in these technologies [39]. Additionally, algorithmic bias remains a concern in AI-based healthcare systems. ...

What is morally at stake when using algorithms to make medical diagnoses? Expanding the discussion beyond risks and harms

Theoretical Medicine and Bioethics

... Unravelling the Connection between Phonetics and Resonance. The direct correlation between phonetic proficiency and vocal resonance, as identified through Bernac's principles, unveils the interplay of linguistic precision and acoustic qualities in vocal performance (Kudina & Coeckelbergh, 2021). This revelation propels the discussion into auditory aesthetics, where phonetics serves not only as a conduit for linguistic accuracy but also as a sculptor of vocal timbre. ...

“Alexa, define empowerment”: voice assistants at home, appropriation and technoperformances

Journal of Information Communication and Ethics in Society

... Data bias is encountered in many real-world applications, including medical diagnosis, 3 image or facial recognition, 17,18 text classification, 19 and speech recognition. 20,21 It can manifest THE BIGGER PICTURE Data bias occurs in the sampling processes of many real-world datasets. For health data, gender and ethnicity are common factors causing sampling bias. ...

Quantifying Bias in Automatic Speech Recognition

... As our remembering practices and experiences of the past are mediated, memorial technologies reveal and affirm specific mnemotechnic values related to how history might be brought to human attention, consideration, and conceptualization (see Kudina, 2021;Kudina & Verbeek, 2019;Van de Poel & Kudina, 2022). In this context, mnemotechnic values are normative concerns that arise from collective remembering: they are able to guide people in defining the criteria and standards for remembering and forgetting. ...

“Alexa, who am I?”: Voice Assistants and Hermeneutic Lemniscate as the Technologically Mediated Sense-Making

Human Studies