Chapter

AI Ethics Needs Good Data


Abstract

We are entering a new era of technological determinism and solutionism in which governments and business actors are seeking data-driven change, assuming that Artificial Intelligence is now inevitable and ubiquitous. But we have not even started asking the right questions, let alone developed an understanding of the consequences. Urgently needed is debate that asks and answers fundamental questions about power. This book brings together critical interrogations of what constitutes AI, its impact and its inequalities in order to offer an analysis of what it means for AI to deliver benefits for everyone. The book is structured in three parts: Part 1, AI: Humans vs. Machines, presents critical perspectives on human-machine dualism. Part 2, Discourses and Myths About AI, excavates metaphors and policies to ask normative questions about what is ‘desirable’ AI and what conditions make this possible. Part 3, AI Power and Inequalities, discusses how the implementation of AI creates important challenges that urgently need to be addressed. Bringing together scholars from diverse disciplinary backgrounds and regional contexts, this book offers a vital intervention on one of the most hyped concepts of our times.


... As these autonomous agents traverse physical spaces and engage with diverse stakeholders, the ability to provide coherent and interpretable justifications for their behavior becomes indispensable [22]. ...
... Data quality plays a foundational role in AI ethics, a principle that assumes heightened significance in the context of machine learning migration [22]. Ensuring the integrity and fairness of the data that algorithms rely on becomes a paramount concern as they transition from centralized computing environments to distributed robotic systems. ...
Chapter
Full-text available
This chapter critically examines the ethical, legal, and societal implications of integrating large language models (LLMs) into political and economic structures. By drawing from interdisciplinary perspectives in psychology, AI ethics, and legal scholarship, it explores the evolving landscape of AI governance and its impact on society. The discussion focuses on the challenges of granting AI autonomy and agency, assessing how LLMs influence decision-making and governance. A key aspect of the chapter is its proposal for a framework of “Constitutional AI,” which seeks to align AI decision-making with constitutional values such as fairness, justice, and transparency. It highlights the need for explainable AI (XAI) techniques, robust governance policies, and ethical considerations in AI system design. The potential risks of AI misuse, manipulation, and opacity are also addressed, emphasizing accountability and user empowerment. The chapter further examines psychological and philosophical concepts like agency, normativity, and the metaphysics of self-constitution, linking them to AI’s role in human decision-making. Ultimately, it advocates for AI systems that operate in a safe, secure, and trustworthy manner, ensuring their development benefits society while maintaining ethical integrity and legal compliance.
... Interesting and important as such initiatives are, they remain unconvincing, at least for me. Paradigms for use and dissemination of technology have overwhelmingly been skewed towards favouring big American tech corporations, although this position has been contested by Couldry and Mejias who argue that "data colonialism involves not one pole of colonial power ('the West'), but at least two: the United States and China" (Couldry and Mejias, 2019, p. 337), and Gravett (2020) (Daly et al 2021). Crucially, there has been a significant emphasis on cases involving the use of Chinese AI in trials conducted in Xinjiang/East Turkestan. ...
Chapter
Full-text available
This chapter aims to explore the complexities and limitations of AI ethics, with a particular focus on the issue of whiteness. The authors will explore the ways in which AI ethics has been used as a tool to maintain and expand dominant power, as well as to shift questions of 'whether or not' and 'for the benefit of whom' to questions of 'how', thereby perpetuating systemic inequalities. Rather than following a traditional structure, the chapter will present the authors' collective conversation throughout, reflecting on the complexity of the issues at hand. Through this conversation, the authors will grapple with what it might mean to decolonize AI ethics, the uncertainties and challenges of such a process. The conversation will involve a diverse group of scholars with different positionalities, and will manifest a probing series of critical lines of inquiry marked by collective self-criticism and an implicit political orientation.
Article
Full-text available
Envisioning humans and (smart) robots collaboratively working on the manufacturing shop floor, sharing spaces, tasks and objectives, reflects the ambitious goal that the ideal factory of the future aspires to attain. However, ensuring the effective implementation of this novel form of labour organisation remains an ongoing area of research. Key aspects such as the future role of workers, potential psychological risks, and the overall ethical considerations of Human-Robot (H-R) collaboration warrant further investigation until the underpinning safety challenges have been addressed. This study presents a novel ethical framework for H-R collaboration in manufacturing, co-designed with 30 subject-matter experts in ethics within the European context through a year-long, three-round qualitative Delphi study. The ethical framework adopts a human-centric approach, recognising influences that extend beyond the specific context of H-R dynamics on the shop floor towards organisational and societal governance, for a more responsible integration of (smart) robotics into professional settings. Ethics, in this regard, aims to foster ethical awareness and accountability in the processes and practices of design and innovation, involving all stakeholders who play a role in shaping the future of Industry 5.0.
Chapter
Generative AI systems have acquired a remarkable ability to independently produce a wide variety of content types, textual, visual, and more. This shift has raised complex issues of copyright protection and intellectual property rights. With a focus on fostering responsible global governance, this research delves into the legal and ethical considerations underlying Generative AI. The chapter examines the complicated legal issues that arise from Generative AI's capacity to generate material on its own, and analyses current legal documents, legislation, and international treaties, with particular attention to ethical concerns. Ultimately, the authors aim to contribute to efforts to build responsible and efficient international frameworks for regulating Generative AI. The study makes a thorough case for legal frameworks that can effectively address the intricate legal and ethical quandaries posed by Generative AI while encouraging continued innovation and creativity.
Article
Full-text available
The history of high-tech regulation is a path studded with incidents. Each adverse event allowed the gathering of more information on high technologies and their impacts on people, infrastructure, and other technologies, posing the bases for their regulation. With the increasing diffusion of artificial intelligence (AI) use, it is plausible that this connection between incidents and high-tech regulation will be confirmed for this technology as well. This study focuses on the role of AI incidents and an efficient strategy of incident data collection and analysis to improve our knowledge of the impact of AI technologies and regulate them better. To pursue this objective, the paper first analyses the evolution of high-tech regulation in the aftermath of incidents. Second, the paper focuses on the recent developments in AI regulation through soft and hard laws. Third, this study assesses the quality of the available AI incident databases and their capacity to provide information useful for opening and regulating the AI black box. This study acknowledges the importance of implementing a strategy for gathering and analysing AI incident data and approving flexible AI regulation that evolves with such a new technology and with the information that we will receive from adverse events—an approach that is also endorsed by the European Commission and its proposal to regulate and harmonise rules on AI.
Article
Artificial intelligence (AI) and algorithmic decision making are having a profound impact on our daily lives. These systems are widely used in high-stakes applications such as healthcare, business, government, education, and justice, moving us toward a more algorithmic society. However, despite their many advantages, these systems sometimes directly or indirectly cause harm to users and society. It has therefore become essential to make them safe, reliable, and trustworthy. Several requirements, such as fairness, explainability, accountability, reliability, and acceptance, have been proposed in this direction. This survey analyzes these requirements through the lens of the literature. It provides an overview of approaches that can help mitigate AI risks and increase trust in and acceptance of these systems among users and society. It also discusses existing strategies for validating and verifying these systems and the current standardization efforts for trustworthy AI. Finally, we present a holistic view of recent advancements in trustworthy AI to help interested researchers grasp the crucial facets of the topic efficiently, and offer possible future research directions.
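To make one of those requirements concrete, the following is a minimal sketch, ours rather than the survey's, of one widely used fairness check: the demographic parity difference between the positive-prediction rates of two groups. The predictions and group labels are invented for illustration.

```python
# Minimal illustrative sketch (not from the survey): demographic parity
# difference, a common group-fairness metric, in plain Python.

def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rates across groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)  # share of positive predictions
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical model outputs (1 = favourable decision) and group labels.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# 0.75 vs 0.25 positive rate -> a 0.5 gap, signalling a fairness concern.
print(demographic_parity_difference(y_pred, groups))
```

A value near zero suggests the groups receive favourable predictions at similar rates; larger gaps flag the kind of disparity the fairness requirement is meant to catch.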
Article
Full-text available
The recent introduction of AI tools in the justice sector raises several ethical concerns, such as risks to judges' independence and procedural transparency, and discrimination biases. By developing ethical frameworks governing AI application, private and public agents have increasingly been dealing with risks pertaining to the use of AI. By inventorying and analyzing a set of ethical documents through content analysis, this study highlights the ethical implications involved in the application of AI. Moreover, by investigating the CEPEJ Charter (of the European Commission for the Efficiency of Justice of the Council of Europe), the only ethical document focusing specifically on AI in justice, we were able to clarify potential differences between justice and other contexts of AI application with respect to the risks anticipated and the protection of ethical principles. The analysis confirms that the regulation of AI is a complex subject involving very different aspects, and therefore needs a broad focus across all contexts of application.
Preprint
Full-text available
In this chapter we argue that discourses on AI must transcend the language of 'ethics' and engage with power and political economy in order to constitute 'Good Data'. In particular, we must move beyond the depoliticised language of 'ethics' currently deployed (Wagner 2018) in determining whether AI is 'good', given the limitations of ethics as a frame through which AI issues can be viewed. In order to circumvent these limits, we use instead the language and conceptualisation of 'Good Data', as a more expansive term to elucidate the values, rights and interests at stake when it comes to AI's development and deployment, as well as that of other digital technologies. Good Data considerations move beyond recurring themes of data protection/privacy and the FAT (fairness, transparency and accountability) movement to include explicit political economy critiques of power. Instead of yet more ethics principles (that tend to say the same or similar things anyway), we offer four 'pillars' on which Good Data AI can be built: community, rights, usability and politics. Overall, we view AI's 'goodness' as an explicitly political (economy) question of power, one which is always related to the degree to which AI is created and used to increase the wellbeing of society, and especially to increase the power of the most marginalised and disenfranchised. We offer recommendations and remedies towards implementing 'better' approaches to AI. Our strategies enable a different (but complementary) kind of evaluation of AI as part of the broader socio-technical systems in which AI is built and deployed.
Chapter
Full-text available
This chapter comes out of two separate research projects carried out in Colombia, South America. One, finished in 2017, was called City of Data: an exploration of government-led, centralised Smart City projects in the cities of Bogotá and Medellín. The other, still ongoing, is called Communication Practices in Medellín's Gardeners Network: an exploration of grass-roots gardening initiatives in Cali, Bogotá and Medellín. Both projects had to do with approaches to public data: some 'centralised' and government-led, in the form of Smart City projects, and others in the form of citizen-led initiatives. We analysed documents, conducted semi-structured interviews with dozens of officials and citizen group leaders, and carried out participatory research. Our main goal was to analyse government-led and grass-roots initiatives producing and managing data to empower citizens in Medellín and Bogotá. Our theoretical perspective comes from Critical Data Studies, Decoloniality and Relational Ontologies. We found very closed and centralised data production practices in the government-led Smart City initiatives studied, but discovered what could be described as promising 'good data' citizen-led approaches in Medellín's Gardeners Network (RHM). We also found some issues and challenges arising from the particular, non-Western, highly unequal context of these citizen-led initiatives.
Article
Full-text available
This article uses a socio-legal perspective to analyze the use of ethics guidelines as a governance tool in the development and use of artificial intelligence (AI). This has become a central policy area in several large jurisdictions, including China and Japan, as well as the EU, focused on here. Particular emphasis in this article is placed on the Ethics Guidelines for Trustworthy AI published by the EU Commission’s High-Level Expert Group on Artificial Intelligence in April 2019, as well as the White Paper on AI, published by the EU Commission in February 2020. The guidelines are reflected against partially overlapping and already-existing legislation as well as the ephemeral concept construct surrounding AI as such. The article concludes by pointing to (1) the challenges of a temporal discrepancy between technological and legal change, (2) the need for moving from principle to process in the governance of AI, and (3) the multidisciplinary needs in the study of contemporary applications of data-dependent AI.
Article
Full-text available
Ethics has powerful teeth, but these are barely being used in the ethics of AI today – it is no wonder the ethics of AI is then blamed for having no teeth. This article argues that ‘ethics’ in the current AI ethics field is largely ineffective, trapped in an ‘ethical principles’ approach and as such particularly prone to manipulation, especially by industry actors. Using ethics as a substitute for law risks its abuse and misuse. This significantly limits what ethics can achieve and is a great loss to the AI field and its impacts on individuals and society. This article discusses these risks and then highlights the teeth of ethics and the essential value they can – and should – bring to AI ethics now.
Article
Full-text available
This article explores technological sovereignty as a way to respond to anxieties of control in digital urban contexts, and argues that this may promise a more meaningful social license to operate smart cities. First, we present an overview of smart city developments with a critical focus on corporatization and platform urbanism. We critique Alphabet's Sidewalk Labs development in Toronto, which faces public backlash from the #BlockSidewalk campaign in response to concerns over not just privacy, but also lack of community consultation, the prospect of the city losing its civic ability to self‐govern, and its repossession of public land and infrastructure. Second, we explore what a more responsible smart city could look like, underpinned by technological sovereignty, which is a way to use technologies to promote individual and collective autonomy and empowerment via ownership, control, and self‐governance of data and technologies. To this end, we juxtapose the Sidewalk Labs development in Toronto with the Barcelona Digital City plan. We illustrate the merits (and limits) of technological sovereignty moving toward a fairer and more equitable digital society.
Article
Full-text available
The idea of artificial intelligence for social good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are essential for future AI4SG initiatives. The analysis is supported by 27 case examples of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.
Article
Full-text available
‘Don’t be evil’ had been part of Google’s corporate code of conduct since 2000; however, it was quietly removed in April or May 2018 and subsequently replaced with ‘do the right thing’. Questions were raised both within and outside the organisation regarding the substantive meaning of this imperative. Some have highlighted the company’s original intentions in creating the code of conduct, while others have used the motto as a basis for critiquing the company, such as for its advertising practices, failure to pay corporate tax or the manipulation of Google-owned content. The imperative’s removal occurred at a time when thousands of Google employees, including senior engineers, signed a letter protesting the company’s involvement in Project Maven, a Pentagon program that uses artificial intelligence to interpret video imagery, which could in turn be used to improve the targeting capability of drone strikes. Employees asserted their refusal to be involved in the business of war and expressed their wariness of the United States government’s use of technology. This article will examine the legal construct and concept of the corporation, and whether it is possible for corporations to not be evil in the twenty-first century.
Article
Full-text available
Current advances in research, development and application of artificial intelligence (AI) systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed at harnessing the “disruptive” potential of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, I also examine to what extent the respective ethical principles and values are implemented in the practice of research, development and application of AI systems, and how the effectiveness of the demands of AI ethics can be improved.
Article
Full-text available
Generally, regulation is thought of as a constant that carries with it both a formative and conservative power, a power that standardises, demarcates and forms an order, through procedures, rules and precedents. It is dominantly thought that the singularity and formalisation of structures like rules is what enables regulation to achieve its aim of identifying, apprehending, sanctioning and forestalling/pre-empting threats and crime or harm. From this point of view, regulation serves to firmly establish fixed and stable categories of what norms, customs, morals and behaviours are applicable to a particular territory, society or community in a given time. These fixed categories are then transmitted onto individuals by convention, ritual and enforcement through imperatives of law (and technology) that mark certain behaviours as permissible and others as forbidden, off bounds. In this manner, regulation serves a programming (i.e., a calculable or determinable) purpose. It functions as a pro-active management or as a mastery of threats, risks, crimes and harms that affect a society and its security both in the future and in the present. Regulation for instance, will inscribe and codify what it determines to constitute crime or harm such as pornography, incitement of terrorism, extremist speech, racial hatred etc. These determined or calculated/calculable categories will then be enforced and regulated (e.g. through automated filtering) in order to ensure a preservation of public order within society. Drawing mainly from deconstruction, this article situates law and technologies within a wider ecological process of texts, speech and writing i.e., communication. In placing regulation within disseminatory and iterable processes of communication, this article complicates, destabilises and critiques the dominant position of determinability and calculability within the regulatory operations of law.
Chapter
Full-text available
The authors are Australia-based energy researchers who view a close link between access to energy data and the country's transition to a sustainable and just community-based energy future, which they argue is currently hampered by some major incumbent energy sector businesses and politicians. Rooftop solar (PV) panels are popular additions to Australian homes but individuals do not have access to the data about the energy they produce and consume. Access to this data would empower individuals and collectives such as community energy groups, and accordingly could hasten Australia's take-up and implementation of sustainable energy in a sustainable, communal way. The authors provide a series of recommended actions in their manifesto which would lead to this goal.
Article
Full-text available
As datafication progressively invades all spheres of contemporary society, citizens grow increasingly aware of the critical role of information as the new fabric of social life. This awareness triggers new forms of civic engagement and political action that we term “data activism”. Data activism indicates the range of sociotechnical practices that interrogate the fundamental paradigm shift brought about by datafication. Combining Science and Technology Studies with Social Movement Studies, this theoretical article offers a foretaste of a research agenda on data activism. It foregrounds democratic agency vis-à-vis datafication, and unites under the same label ways of affirmative engagement with data (“proactive data activism”, e.g. data-based advocacy) and tactics of resistance to massive data collection (“reactive data activism”, e.g. encryption practices), understood as a continuum along which activists position and reposition themselves and their tactics. The article argues that data activism supports the emergence of novel epistemic cultures within the realm of civil society, making sense of data as a way of knowing the world and turning it into a point of intervention and generation of data countercultures. It offers the notion of data activism as a heuristic tool for the study of new forms of political participation and civic engagement in the age of datafication, and explores data activism as an evolving theoretical construct susceptible to contestation and revision.
Article
Full-text available
Citizen sensing, or the use of low-cost and accessible digital technologies to monitor environments, has contributed to new types of environmental data and data practices. Through a discussion of participatory research into air pollution sensing with residents of northeastern Pennsylvania concerned about the effects of hydraulic fracturing, we examine how new technologies for generating environmental data also give rise to new problems for analysing and making sense of citizen-gathered data. After first outlining the citizen data practices we collaboratively developed with residents for monitoring air quality, we then describe the data stories that we created along with citizens as a method and technique for composing data. We further mobilise the concept of ‘just good enough data’ to discuss the ways in which citizen data gives rise to alternative ways of creating, valuing and interpreting datasets. We specifically consider how environmental data raises different concerns and possibilities in relation to Big Data, which can be distinct from security or social media studies. We then suggest ways in which citizen datasets could generate different practices and interpretive insights that go beyond the usual uses of environmental data for regulation, compliance and modelling to generate expanded data citizenships.
Article
Full-text available
The Snowden leaks, first published in June 2013, provided unprecedented insights into the operations of state-corporate surveillance, highlighting the extent to which everyday communication is integrated into an extensive regime of control that relies on the ‘datafication’ of social life. Whilst such data-driven forms of governance have significant implications for citizenship and society, resistance to surveillance in the wake of the Snowden leaks has predominantly centred on techno-legal responses relating to the development and use of encryption and policy advocacy around privacy and data protection. Based on in-depth interviews with a range of social justice activists, we argue that there is a significant level of ambiguity around this kind of anti-surveillance resistance in relation to broader activist practices, and critical responses to the Snowden leaks have been confined within particular expert communities. Introducing the notion of ‘data justice’, we therefore go on to make the case that resistance to surveillance needs to be (re)conceptualized on terms that can address the implications of this data-driven form of governance in relation to broader social justice agendas. Such an approach is needed, we suggest, in light of a shift to surveillance capitalism in which the collection, use and analysis of our data increasingly comes to shape the opportunities and possibilities available to us and the kind of society we live in.
Article
Full-text available
By all accounts, 2016 is the year of the chatbot. Some commentators take the view that chatbot technology will be so disruptive that it will eliminate the need for websites and apps. But chatbots have a long history. So what's new, and what's different this time? And is there an opportunity here to improve how our industry does technology transfer?
Article
Full-text available
With the emergence of environmental sustainability and green business management, increasing demands have been made on businesses in the areas of environmental corporate social responsibility (ECSR). Furthermore, the influence of ECSR on green capital investment, environmental performance, and business competitiveness has also been the subject of attention from enterprises. However, in previous studies, the mediating role of green information technology (IT) capital in the relationship between ECSR, environmental performance, and business competitiveness, has not been investigated by researchers. In order to bridge this gap in the ECSR literature, this study aims to examine the influence of ECSR on green IT capital, and the consequent effect of green IT capital on environmental performance and business competitiveness. Data were collected from 358 companies from the top 1000 manufacturers in Taiwan. The results confirmed that ECSR has significant positive effects on green IT human capital, green IT structural capital, and green IT relational capital. Green IT structural capital and green IT relational capital have positive effects on environmental performance and business competitiveness, and environmental performance has a positive effect on business competitiveness. In addition, green IT structural capital and green IT relational capital have partial mediating effects on ECSR, environmental performance, and business competitiveness. The implications and suggestions for future research are discussed.
Article
Full-text available
Effectuation theory invests agency—intention and purposeful enactment—for new venture creation in the entrepreneurial actor(s). Based on the results of a 15-month in-depth longitudinal case study of Amsterdam-based social enterprise Fairphone, we argue that effectual entrepreneurial agency is co-constituted by distributed agency, the proactive conferral of material resources and legitimacy to an eventual entrepreneur by heterogeneous actors external to the new venture. We show how, in the context of social movement activism, an effectual network pre-committed resources to an inchoate social enterprise to produce a material artefact because it embodied the moral values of network members. We develop a model of social enterprise emergence based on these findings. We theorise the role of material artefacts in effectuation and suggest that, in the case, the artefact served as a boundary object, present in multiple social worlds and triggering commitment from actors not governed by hierarchical arrangements.
Article
Full-text available
To date, little attention has been given to the impact of big data in the Global South, about 60% of whose residents are below the poverty line. Big data manifests in novel and unprecedented ways in these neglected contexts. For instance, India has created biometric national identities for her 1.2 billion people, linking them to welfare schemes, and social entrepreneurial initiatives like the Ushahidi project have leveraged crowdsourcing to provide real-time crisis maps for humanitarian relief. While these projects are indeed inspirational, this article argues that in the context of the Global South there is a bias in the framing of big data as an instrument of empowerment. Here, the poor, or the “bottom of the pyramid” populace, are the new consumer base: agents of social change instead of passive beneficiaries. This neoliberal outlook of big data facilitating inclusive capitalism for the common good sidelines the critical perspectives urgently needed if we are to channel big data as a positive social force in emerging economies. This article proposes to assess these new technological developments through the lens of databased democracies, databased identities, and databased geographies to make evident the normative assumptions and perspectives at work in this under-examined context.
Article
Full-text available
‘Privacy by design’ is an increasingly popular paradigm. It is the principle or concept that privacy should be promoted as a default setting of every new ICT system and should be built into systems from the design stage. The draft General Data Protection Regulation embraces ‘privacy by design’ without detailing how it can or should be applied. This paper discusses what the proposed legal obligation for ‘privacy by design’ implies in practice for online businesses. In particular, does it entail hard-coding privacy requirements in system design? First, the ‘privacy by design’ provision in the proposed Regulation is analysed and interpreted. Next, we discuss an extreme interpretation – embedding data protection requirements in system software – and identify five complicating issues. On the basis of these complications, we conclude that ‘privacy by design’ should not be interpreted as trying to achieve rule compliance by techno-regulation. Instead, fostering the right mindset of those responsible for developing and running data processing systems may prove to be more productive. Therefore, in terms of the regulatory tool-box, privacy by design should be approached less from a ‘code’ perspective, but rather from the perspective of ‘communication’ strategies.
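To illustrate the ‘default setting’ reading of the paradigm discussed above, here is a minimal sketch, entirely our own and with hypothetical setting names, of what privacy-protective defaults can look like in application code.

```python
# Illustrative sketch only: 'privacy by design' as protective defaults.
# The class and field names are hypothetical, not from the paper.
from dataclasses import dataclass

@dataclass
class UserSettings:
    # Nothing is shared and retention is minimal unless the user opts in.
    share_analytics: bool = False
    personalised_ads: bool = False
    data_retention_days: int = 30

settings = UserSettings()        # a new account starts fully private
settings.share_analytics = True  # any sharing requires an explicit opt-in
print(settings)
```

This is, of course, the ‘code’ perspective the paper cautions against relying on exclusively; its argument is that such defaults work best alongside the right mindset in those who develop and run data processing systems.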
Book
Law and the Technologies of the Twenty-First Century provides a contextual account of the way in which law functions in a broader regulatory environment across different jurisdictions. It identifies and clearly structures the four key challenges that technology poses to regulatory efforts, distinguishing between technology as a regulatory target and tool, and guiding the reader through an emerging field that is subject to rapid change. By extensive use of examples and extracts from the texts and materials that form and shape the scholarly and public debates over technology regulation, it presents complex material in a stimulating and engaging manner. Co-authored by a leading scholar in the field with a scholar new to the area, it combines comprehensive knowledge of the field with a fresh approach. This is essential reading for students of law and technology, risk regulation, policy studies, and science and technology studies.
Article
This article considers the law’s response to the emergence of robots and artificial intelligence (AI), and whether they should be considered as legal persons and accordingly the bearers of legal rights. We analyse the regulatory issues raised by robot rights through three questions: (i) could robots be granted rights? (ii) will robots be granted rights? and (iii) should robots be granted rights? On the question of whether we can recognise robot rights we examine how the law has treated different categories of legal persons and non-persons historically, finding that the concept of legal personhood is fluid and so arguably could be extended to include robots. However, as can be seen from the current debate in Intellectual Property (IP) law, AI and robots have not been recognised as the bearers of IP rights despite their ability to create and innovate, suggesting that the answer to the question of whether we will grant rights to robots is less certain. Finally, whether we should recognise rights for robots will depend on the intended purpose of regulatory reform.
Chapter
This chapter is an update to the thinking framework for Group Decision Support Systems (GDSS) proposed by Colin Eden 30 years ago. Like the source paper, this chapter is a personal take on the topic; however, it is a personal take rooted in substantial experience in the broad area of decision making and modelling and in some specific narrow areas of decision support. There have been major developments in the broad context surrounding GDSS, including improved understanding of decisions on the conceptual side, and many aspects of computer development, such as artificial intelligence and big data, on the technical side. Considering the scale of these changes, it is surprising that the observations, arguments and conclusions offered in the source paper remain largely valid today. The most important component of any GDSS is still the facilitator, and the most valuable ingredients of the GDSS process are the participants’ intuitions, creativity, opinions, arguments, agendas, personalities and networks. The outcome of the GDSS process is only valuable if it is politically feasible. Today we have a better understanding of transitional objects and their role in the GDSS process; they are second in significance only to the facilitator. Artificial intelligence can be useful for GDSS in several different ways, but it cannot replace the facilitator.
Article
The European Commission recently published the policy recommendations of its “High-Level Expert Group on Artificial Intelligence”: a heavily anticipated document, particularly in the context of the stated ambition of the new Commission President to regulate in that area. This article argues that these recommendations have significant deficits in a range of areas. It analyses a selection of the Group’s proposals in the context of the governance of artificial intelligence more broadly, focusing on issues of framing, representation and expertise, and on the lack of acknowledgement of key issues of power and infrastructure underpinning modern information economies and practices of optimisation.
Article
Problems of bias and fairness are central to data justice, as they speak directly to the threat that ‘big data’ and algorithmic decision-making may worsen already existing injustices. In the United States, grappling with these problems has found clearest expression through liberal discourses of rights, due process, and antidiscrimination. Work in this area, however, has tended to overlook certain established limits of antidiscrimination discourses for bringing about the change demanded by social justice. In this paper, I engage three of these limits: 1) an overemphasis on discrete ‘bad actors’, 2) single-axis thinking that centers disadvantage, and 3) an inordinate focus on a limited set of goods. I show that, in mirroring some of antidiscrimination discourse’s most problematic tendencies, efforts to achieve fairness and combat algorithmic discrimination fail to address the very hierarchical logic that produces advantaged and disadvantaged subjects in the first place. Finally, I conclude by sketching three paths for future work to better account for the structural conditions against which we come to understand problems of data and unjust discrimination in the first place.
Chapter
The Good Data Manifesto is an exploration of what attributes ‘good data’ should have, from the viewpoint of practitioners dealing with massive geospatial datasets. It aims to provide guidance on these attributes and explores the concept of FAIRER data, expanding ‘findable, accessible, interoperable, reusable’ with ‘ethical and revisable’, aiming to guide data collection both with current knowledge of data impacts and as new viewpoints evolve.
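Purely as an illustration of how the FAIRER attributes might be operationalised, here is a sketch of a metadata record for a geospatial dataset; the field names and values are our invention, not the manifesto's.

```python
# Hypothetical FAIRER metadata record for a geospatial dataset:
# findable, accessible, interoperable, reusable, ethical, revisable.
fairer_record = {
    "findable":      {"doi": "10.5555/example", "keywords": ["lidar", "coastline"]},
    "accessible":    {"url": "https://example.org/data", "licence": "CC-BY-4.0"},
    "interoperable": {"format": "GeoTIFF", "crs": "EPSG:4326"},
    "reusable":      {"provenance": "aerial survey, 2019", "quality_notes": "cloud cover < 5%"},
    "ethical":       {"consent_documented": True, "known_harms": "none identified"},
    "revisable":     {"version": "1.2", "corrections_contact": "data@example.org"},
}
print(fairer_record["revisable"]["version"])
```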
Article
Australia is a country firmly part of the Global North, yet geographically located in the Global South. This North-in-South divide plays out internally within Australia given its status as a British settler-colonial society which continues to perpetrate imperial and colonial practices vis-à-vis the Indigenous peoples and vis-à-vis Australia’s neighboring countries in the Asia-Pacific region. This article draws on and discusses five seminal examples forming a case study on Australia to examine big data practices through the lens of Southern Theory from a criminological perspective. We argue that Australia’s use of big data cements its status as a North-in-South environment where colonial domination is continued via modern technologies to effect enduring informational imperialism and digital colonialism. We conclude by outlining some promising ways in which data practices can be decolonized through Indigenous Data Sovereignty but acknowledge these are not currently the norm; so Australia’s digital colonialism/coloniality endures for the time being.
Chapter
Theories that relate to digital technology and corporate social responsibility (CSR) have been dominated by online CSR communication and disclosure practices. Almost entirely absent in such CSR research is a consideration of new areas of responsibility that are emerging from digital technologies and related online communication platforms. We argue that responsibility in the use of digital technologies requires more than just legal compliance. We therefore ask what it means to be a responsible corporation in the digital economy. We then establish an extended agenda for responsibility in the digital economy by identifying potential areas of irresponsibility and highlighting new responsibilities related to, for example, use of consumer data, service continuation, control of digital goods, and the use of artificial intelligence. In doing so, we address a need to theorize responsibilities derived from the use of technologies that the CSR literature has previously been silent on, or discussed only tangentially within the domain of CSR communication, even as they are a focus in other fields (especially legal compliance, or organizational performance).
Article
Lecture 2: Metaphysical Objections. As I said in my first lecture, the idea that there are irreducibly normative truths about reasons for action, which we can discover by thinking carefully about reasons in the usual way, has been thought to be subject to three kinds of objections: metaphysical, epistemological, and motivational or, as I would prefer to say, practical. Metaphysical objections claim that a belief in irreducibly normative truths would commit us to facts or entities that would be metaphysically odd: incompatible, it is sometimes said, with a scientific view of the world. Epistemological objections maintain that if there were such truths we would have no way of discovering them. Practical objections maintain that if conclusions about what we have reason to do were simply beliefs in a kind of fact, they could not have the practical significance that reasons are commonly supposed to have. This is often put by saying that beliefs alone cannot motivate an agent to act. I think it is better put as the claim that beliefs cannot explain action, or make acting rational or irrational in the way that accepting conclusions about reasons is normally thought to do. I will concentrate in this lecture on metaphysical objections.
Article
Tracing the confusion surrounding the concept of sovereignty to its complex referent, to epistemological relativism, and to metaphorical uses of the term "sovereignty", this article proposes an interdisciplinary approach as a partial remedy to this problem. As a preliminary step, the article starts with an analysis of the intricate relationship that exists between legal stipulations, descriptive statements and normative theories. This is followed by a six-fold classification of theories of sovereignty on the basis of their unit of analysis and analytical goals. The article concludes with an attempt to identify some of the most important linkages between these classes of theories.
References

Benthall, S. 2018. The Politics of AI Ethics is a Seductive Diversion from Fixing our Broken Capitalist System. Digifesto.

Bietti, E. 2020. From Ethics Washing to Ethics Bashing: A View on Tech Ethics from Within Moral Philosophy. Proceedings of the ACM FAT* Conference (FAT* 2020). New York: ACM. DOI: https://doi.org/10.1145/3351095.3372860

Daly, A. 2016. Private Power, Online Information Flows and EU Law: Mind the Gap. Oxford: Hart.

Daly, A., Hagendorff, T., Li, H., Mann, M., Marda, V., Wagner, B., Wang, W.W. and Witteborn, S. 2019. Artificial Intelligence, Governance and Ethics: Global Perspectives. The Chinese University of Hong Kong Faculty of Law Research Paper No. 2019-15. Retrieved from: https://ssrn.com/abstract=3414805. DOI: https://doi.org/10.2139/ssrn.3414805

Donath, J. 2020. Ethical Issues in Our Relationship with Artificial Entities. In M.D. Dubber, F. Pasquale and S. Das (Eds.), The Oxford Handbook of Ethics of AI. Oxford: Oxford University Press.

DoNotPay. 2020. Do Not Pay Community. Retrieved from: https://donotpay.com/learn

Finley, K. 2019. EU Privacy Law Snares Its First Tech Giant: Google. Wired. Retrieved from: https://www.wired.com/story/eu-privacy-law-snares-first-tech-giant-google

Flintham, M., Goulden, M., Price, D. and Urquhart, L. 2019. Domesticating Data: Socio-Legal Perspectives on Smart Homes and Good Data Design. In A. Daly, S.K. Devitt and M. Mann (Eds.), Good Data, pp. 344-360. Amsterdam: Institute of Network Cultures.

Foth, M., Mann, M., Bedford, L., Fieuw, W. and Walters, R. 2020. A Capitalocentric Review of Technology for Sustainable Development: The Case for More-Than-Human Design. Melville, South Africa: Global Information Society Watch (GISWatch), Association for Progressive Communications (APC).

Gutierrez, M. 2019. The Good, the Bad and the Beauty of 'Good Enough Data'. In A. Daly, S.K. Devitt and M. Mann (Eds.), Good Data, pp. 54-76. Amsterdam: Institute of Network Cultures.

Heyd, D. 1982. Supererogation: Its Status in Ethical Theory. Cambridge: Cambridge University Press.

Ho, C.H. and Chuang, T.R. 2019. Governance of Communal Data Sharing. In A. Daly, S.K. Devitt and M. Mann (Eds.), Good Data, pp. 202-215. Amsterdam: Institute of Network Cultures.

Ihde, D. 2006. The Designer Fallacy and Technological Imagination. In J. Dakers (Ed.), Defining Technological Literacy: Towards an Epistemological Framework, pp. 121-131. London: Palgrave Macmillan.

Johnson, K. 2019. AI Ethics is All About Power. VentureBeat, 1 November. Retrieved from: https://venturebeat.com/2019/11/11/ai-ethics-is-all-about-power

Kalulé, P. and Joque, J. 2019. Law & Critique: Technology Elsewhere, (yet) Phantasmically Present. Critical Legal Thinking, 16 August. Retrieved from: https://criticallegalthinking.com/2019/08/16/law-critique-technology-elsewhere-yet-phantasmically-present

Krause, E. 2017. This Robot Will Handle Your Divorce Free of Charge. The Wall Street Journal, 26 October. Retrieved from: https://www.wsj.com/articles/this-robot-will-handle-your-divorce-free-of-charge-1522341583

Lovett, R., Lee, V., Kukutai, T., Cormack, D., Carroll Rainie, S. and Walker, J. 2019. Good Data Practices for Indigenous Data Sovereignty and Governance. In A. Daly, S.K. Devitt and M. Mann (Eds.), Good Data, pp. 26-36. Amsterdam: Institute of Network Cultures.