Merel Noorman’s research while affiliated with Tilburg University and other places


Publications (7)


The Governance of AI Technologies
  • Chapter

April 2025 · 10 Reads

Merel Noorman

Norms and values are not stable things. They change over time and in response to, among other things, the introduction of new technologies. This is particularly clear in the developing field of AI Governance, where new technologies trigger renegotiations of the very values and norms that they put under pressure. One element of Deborah Johnson’s more recent work is its focus on negotiations around values and norms as part of the changing practices that follow the introduction of computational technologies, such as AI. This work aligns with the practice turn in philosophy and the social sciences, where scholars have argued that a focus on practices provides a promising perspective for thinking about human activity and social life. The focus on practices offers opportunities for empirical philosophical research on how interventions can be made in these negotiations and practices ‘in the making’. This paper explores the potential role that empirical philosophers and computer ethicists can play in shaping the embedding and governance of AI technologies in practice. It examines this role in the context of AI governance clinics organized for municipalities to help smart city projects address ethical and social issues that go beyond formal legal and policy questions.


Editors’ Introduction

April 2025 · 5 Reads

Computer Ethics is a practical kind of philosophy aimed at investigating how computer technologies should be used. It has accompanied the development of these technologies for over half a century. One of the leading figures in this field has been Deborah G. Johnson, whose agenda-setting handbook on Computer Ethics has inspired generations of scholars to explore the new ethical questions that these technologies raise. In 2021, Johnson was awarded the Society for Philosophy and Technology Lifetime Achievement Award for her outstanding contribution to computer and engineering ethics. In recognition of this award and of her research on Computer Ethics, this edited volume brings together philosophers and scholars from other disciplines, including computer scientists, cognitive scientists, and STS scholars, who have engaged with Johnson’s extensive body of work. Some of the contributors, such as van den Hoven and Miller, have helped shape the field of computer ethics, while contributors from later generations who have benefitted from the work of these trailblazers are following their path, further exploring and expanding on their legacy. The volume seeks to introduce the lessons learned from Computer Ethics to a broader audience of scholars from different disciplines and to show how those lessons still resonate in today’s ethical discussions about newly emerging computer technologies. Each chapter illustrates how combining philosophy of technology, ethics, and different disciplinary perspectives can help analyze and clarify the complex entanglements of computer technologies and societies. The common thread in these chapters is a focus on issues of algorithmic accountability. As the only (for now) full-fledged moral agents, humans are called to action to discuss, propose, negotiate, and implement ethical ways to use computers, which most importantly includes the attribution of responsibility when something goes wrong.
This introductory chapter situates Johnson’s work within the broader discussion on Computer Ethics and provides short summaries of the contributions to this volume.


Regulating AI in the ‘twin transitions’: Significance and shortcomings of the AI Act in the digitalised electricity sector
  • Article
  • Full-text available

October 2024 · 33 Reads · Review of European

The use of artificial intelligence (AI) in the electricity sector is increasing as part of the twin digital and green transitions in the European Union (EU). AI technologies can be used for optimisation, prediction, and scheduling purposes, offering possible solutions to the challenges presented by the ongoing energy transition. At the same time, AI technologies in general, not only in the electricity sector, also raise concerns about (unintended) harms and risks for people. To respond to these concerns, the EU has taken steps to regulate the responsible development and use of AI. A significant milestone in this endeavour is the recent adoption of the AI Act, the first horizontal EU legislation introducing harmonised rules for AI systems. In this paper, we examine different uses of AI in the electricity domain and investigate the extent to which the AI Act applies to them. In addition, we identify the shortcomings of the AI Act in the context of the electricity sector, as well as aspects that are not covered by this legislation. We show that the AI Act will only cover a narrow subset of AI systems in the electricity sector and that addressing some of the challenges that these systems pose may require further regulation.


[Figure: Warren’s framework]
Democratizing AI from a Sociotechnical Perspective

November 2023 · 189 Reads · 8 Citations · Minds and Machines

Artificial Intelligence (AI) technologies offer new ways of conducting decision-making tasks that influence the daily lives of citizens, such as coordinating traffic, energy distribution, and crowd flows. They can sort, rank, and prioritize the distribution of fines or public funds and resources. Many of the changes that AI technologies promise to bring to such tasks pertain to decisions that are collectively binding. When these technologies become part of critical infrastructures, such as energy networks, citizens are affected by these decisions whether they like it or not, and they usually do not have much say in them. The democratic challenge for those working on AI technologies with collectively binding effects is to develop and deploy these technologies in such a way that the democratic legitimacy of the relevant decisions is safeguarded. In this paper, we develop a conceptual framework to help policymakers, project managers, innovators, and technologists assess and develop approaches to democratize AI. This framework embraces a broad sociotechnical perspective that highlights the interactions between technology and the complexities and contingencies of the context in which it is embedded. We start from the problem-based and practice-oriented approach to democracy theory developed by political theorist Mark Warren. We build on this approach to describe practices that can enhance or challenge democracy in political systems, and we extend it to integrate a sociotechnical perspective and make the role of technology explicit. We then examine how AI technologies can play a role in these practices to improve or inhibit the democratic nature of political systems. We focus in particular on AI-supported political systems in the energy domain.


AI and Energy Justice

February 2023 · 125 Reads · 14 Citations

Artificial intelligence (AI) techniques are increasingly used to address problems in electricity systems that result from the growing supply of energy from dynamic renewable sources. Researchers have started experimenting with data-driven AI technologies to, amongst other uses, forecast energy usage, optimize cost-efficiency, monitor system health, and manage network congestion. These technologies are said to, on the one hand, empower consumers, increase transparency in pricing, and help maintain the affordability of electricity in the energy transition, while, on the other hand, they may decrease transparency, infringe on privacy, or lead to discrimination, to name a few concerns. One key concern is how AI will affect energy justice. Energy justice is a concept that has emerged predominantly in social science research to highlight that energy-related decisions—in particular, as part of the energy transition—should produce just outcomes. The concept has been around for more than a decade, but research that investigates energy (in)justice in the context of digitalized and data-driven electricity systems is still rather scarce. In particular, there is a lack of scholarship focusing on the challenges and questions that arise from the use of AI technologies in the management of electricity systems. The central question of this paper is, therefore: what are the implications of the use of AI in smart electricity systems from the perspective of energy justice, and what do they mean for the design and regulation of these technologies?



Questioning the Normative Core of RI: The Challenges Posed to Stakeholder Engagement in a Corporate Setting

October 2017 · 237 Reads · 18 Citations

Responsible Innovation (RI) is a normative conception of technology development that aims to improve upon prevailing practices. One of its key principles is the active involvement of a broad range of stakeholders in deliberations in order to better embed innovations in society. In this paper, we examine the applicability of this principle in corporate settings and in smaller-scale technological projects. We do so in the context of a case study focused on an innovation project of a start-up organisation with social aspirations. We describe our failed attempts to introduce RI-inspired stakeholder engagement approaches and articulate the ‘reasonable reasons’ why the organisation rejected them. We then examine the methods that the organisation adopted instead to be responsive to various stakeholders’ needs and values. Based on our analysis, we argue that the field of RI needs to explore additional and alternative ways to address issues of stakeholder commitment and inclusion, in order to make RI’s deliberative ideals more applicable to the rapid, fluid, partial, and provisional style of deliberation and decision making that we found in corporate contexts.

Citations (3)


... Additionally, AI itself helps spread verifiable information by automating fact-checking processes. Noorman and Swierstra (2023) further articulate that the latest technological shift has accelerated democratization toward AI, hence enabling it to be built upon solutions to some of humanity's gigantic problems, including climate change and chronic diseases. It will, therefore, be much easier to bring about an infrastructure of collaboration that allows different AI systems to better coordinate for the greater good of humanity when AI is decentralized. ...

Reference:

Democratizing Artificial Intelligence for Social Good: A Bibliometric–Systematic Review Through a Social Science Lens
Democratizing AI from a Sociotechnical Perspective

Minds and Machines

... Policymakers should establish cybersecurity frameworks and ethical guidelines to protect AI systems from threats and ensure that the benefits of AI are distributed equitably across society. Ethical considerations, such as preventing algorithmic bias and promoting transparency, should be integrated into AI deployment to mitigate potential risks of discrimination or unfair outcomes [51]. By addressing these considerations, policymakers can support the successful deployment of AI/ML technologies in energy systems, maximizing their potential for sustainable development. ...

AI and Energy Justice

... Just as the 2008 financial crisis put stakeholder relationships under severe pressure, the recent COVID-19 crisis has demonstrated society's growing interest in stabilizing stakeholder relationships (Blustein et al., 2020), because these bonds have many beneficial effects for society (Post et al., 2002). However, since the beginning of stakeholder engagement practices, there have also been worries about their dark side (Kujala et al., 2022), about firms' capacity to genuinely and effectively engage with stakeholders in the long run (Banerjee, 2008), and about the extent to which stakeholder interests are balanced as a matter of strategic choice and are thus fragile (Noorman et al., 2017). Voice options in cooperative governance are marked by formal codetermination (FitzRoy & Kraft, 1993; Hansmann, 1990), and engagement practices in capitalist enterprises often resemble dialogic exchanges, which are opportunities for the organization and stakeholders to share preferences and concerns with each other, but are not formalized and often lack binding power (Passetti et al., 2019). ...

Questioning the Normative Core of RI: The Challenges Posed to Stakeholder Engagement in a Corporate Setting