Conference Paper

From Within: A Reflective Equilibrium Outlook on the Ethics Policies of Artificial Intelligence

Abstract

As Artificial Intelligence (AI) technologies advance at a rapid pace, ethical considerations become increasingly important, including information privacy, bias, intellectual property rights, disinformation, and fake news. The potential risks and challenges of AI systems have prompted companies, organizations, and governments to develop AI Ethics policies to address these concerns. Analyzing these policies through content analysis can provide valuable insights into the ethical principles and values underlying AI development and use. This research therefore uses an AI-aided content analysis approach to explore the current practice of AI Ethics policies through the moral philosophy lens of wide reflective equilibrium, comparing how different countries enact their own policies and how different industries and sectors respond to the new wave of challenges. The approach combines human and machine coding in DiVoMiner® to analyze the policy documents and to identify trends and themes in AI Ethics policies across organizations, sectors, and countries, with the intention of contributing to an understanding of AI Ethics from a moral philosophy perspective.
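To make the coding workflow concrete, the following is a minimal sketch of the machine half of such a human-plus-machine content analysis: keyword-based coding of policy documents against a codebook of ethical principles, with the machine counts handed to human coders for verification. The codebook terms, file layout, and function names are illustrative assumptions; DiVoMiner® is a proprietary platform, so this stands in for the general approach rather than that platform's actual implementation.

```python
# Minimal sketch of machine-assisted coding of AI Ethics policy documents.
# The codebook and file layout are illustrative assumptions, not the
# actual DiVoMiner implementation.
import re
from collections import Counter
from pathlib import Path

# Hypothetical codebook: ethical principles mapped to indicator terms.
CODEBOOK = {
    "privacy": ["privacy", "personal data", "data protection"],
    "fairness": ["bias", "fairness", "discrimination"],
    "transparency": ["transparency", "explainability", "accountability"],
    "misinformation": ["disinformation", "fake news", "misinformation"],
}

def code_document(text: str) -> Counter:
    """Machine coding pass: count indicator-term hits per principle."""
    text = text.lower()
    counts = Counter()
    for principle, terms in CODEBOOK.items():
        for term in terms:
            counts[principle] += len(re.findall(re.escape(term), text))
    return counts

def code_corpus(folder: str) -> dict:
    """Code every .txt policy file in a folder; the resulting counts are
    then reviewed by human coders, the second half of the workflow."""
    return {path.name: code_document(path.read_text(encoding="utf-8"))
            for path in Path(folder).glob("*.txt")}

if __name__ == "__main__":
    for doc, counts in code_corpus("policies").items():
        print(doc, dict(counts))
```

A document-by-theme count matrix of this kind is what then supports the comparisons across countries, industries, and sectors described above.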

References
Article
Full-text available
This paper provides a taxonomy of the different kinds of theory that may be offered of an area of law. We distinguish two basic types of philosophical accounts in special jurisprudence: nonnormative accounts and normative accounts. Section II explains the two central subspecies of nonnormative accounts of areas of law: (i) conceptual and ontological theories and (ii) reason-tracking causal theories. Section III explores normative theories of areas of law. Normative accounts subdivide into detached and committed normative accounts. Detached or committed normative accounts can be subdivided further into the following cross-cutting categories: (i) pro tanto or all-things-considered, (ii) hyper-reformist or practice-dependent, (iii) taxonomical or substantive. Section IV shows that our taxonomy does not presume a prior commitment to any particular school in general jurisprudence. This paper clarifies methodological confusion that exists in theorizing about areas of law, and contributes to the subfield of thinking generally about special jurisprudence.
Article
Full-text available
Ethics is arguably the hottest product in Silicon Valley's hype cycle today, even as headlines decrying a lack of ethics in technology companies accumulate. After years of largely fruitless outside pressure to consider the consequences of digital technology products, the very recent past has seen a spike in the assignment of corporate resources in Silicon Valley to ethics, including hiring staff for roles we identify here as "ethics owners." In corporate parlance, "owning" a portfolio or project means holding responsibility for it, often across multiple divisions or hierarchies within the organization. Typically, the "owner" of a project does not bear sole responsibility for it, but rather oversees integration of that project across the organization.
Article
Full-text available
We measure the data sent to their back-end servers by five browsers: Google Chrome, Mozilla Firefox, Apple Safari, Brave Browser and Microsoft Edge, during normal web browsing on both desktop and mobile devices. Our aim is to assess the privacy risks associated with this data exchange between a browser and its back-end servers. With regard to shared services, all of the browsers make use of a safe browsing service to mitigate phishing attacks, and our measurements indicate that this raises few privacy concerns; the same holds for the Chrome extension update service accessed by Chromium-based browsers (Chrome, Brave, Edge). Overall, we find that both the desktop and mobile versions of Brave do not use any identifiers allowing tracking of IP address over time, and do not share details of web pages visited with back-end servers. In contrast, Chrome, Firefox, Safari and Edge all share details of web pages visited with back-end servers. Additionally, Chrome, Firefox and Edge all share long-lived identifiers that can be used to link connections together and so potentially allow tracking over time. In the case of Edge, these are device and hardware identifiers that are hard or impossible for users to change. On mobile devices, but not desktop devices, Firefox also shares device identifiers.
Article
Full-text available
Moral theories, such as the variations on virtue ethics, deontological ethics, contractualism, and consequentialism, are expected – inter alia – to explain the basic orientation of morality, give us principles and directives, justify those, and thereby (if all goes well) guide our actions. I examine some functions and characteristics of the extant moral theories from a moral metatheoretical point of view, in order to clarify the generally assumed rivalry between them. By thinking of moral theories in analogy to languages, it is argued that different moral theories are neither simply competing nor simply complementary; their respective orientations justify using them, in virtue of the problems they help to solve. But even if considerations about the functionality of a theory and the context in which it is created play an important role, they can neither be sufficient to determine these theories' relations to one another nor for choosing between them. The challenge is to set criteria for the quality of a moral theory on a moral metatheoretical level and, in particular, to make room for future views on morality.
Article
Full-text available
https://plato.stanford.edu/entries/ethics-ai/
Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these.
After the Introduction to the field (§1), the main themes (§2) of this article are: ethical issues that arise with AI systems as objects, i.e., tools made and used by humans, including issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7); then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9); and finally, the problem of a possible future AI superintelligence leading to a "singularity" (§2.10). We close with a remark on the vision of AI (§3).
For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how these play out with current technologies and, finally, what policy consequences may be drawn.
Article
Full-text available
Discussions about ethics of Big Data often focus on the ethics of data processing: collecting, storing, handling, analysing and sharing data. Data-based systems, however, do not come from nowhere. They are designed and brought into being within social spaces – or social milieu. This paper connects philosophical considerations of individual and collective capacity to enact practical reason to the influence of social spaces. Building a deeper engagement with the social imaginaries of technology development through analysis of two years of fieldwork with start-ups working on the Internet of Things, this paper suggests that different action positions can emerge, with consequences for how data is understood and valued. The Disengaged, Pragmatist and Idealist ethical action positions identified in the paper reveal the ways individuals and groups negotiate possibilities for ethical action, through justifications, explanations and structuring of system features.
Article
Full-text available
The introduction of new technologies in society is sometimes met with public resistance. Supported by public policy calls for "upstream engagement" and "responsible innovation," recent years have seen a notable rise in attempts to attune research and innovation processes to societal needs, so that stakeholders' concerns are taken into account in the design phase of technology. Both within the social sciences and in the ethics of technology, we see many interdisciplinary collaborations being initiated that aim to address tensions between various normative expectations about science and engineering and the actual outcomes. However, despite pleas to integrate social science research into the ethics of technology, effective normative models for assessing technologies are still scarce. Rawls's wide reflective equilibrium (WRE) is often mentioned as a promising approach to integrate insights from the social sciences in the normative analysis of concrete cases, but an in-depth discussion of how this would work in practice is still lacking. In this article, we explore to what extent the WRE method can be used in the context of technology development. Using cases in engineering and technology development, we discuss three issues that are currently neglected in the applied ethics literature on WRE. The first issue concerns the operationalization of abstract background theories to moral principles. The second issue concerns the inclusiveness of the method and the demand for openness. The third issue is how to establish whether or not an equilibrium has been reached. These issues should be taken into account when applying the method to real-world cases involving technological risks. Applying the WRE method in the context of engaged interdisciplinary collaboration requires sensitivity to issues of power and representativeness to properly deal with the dynamics between the technical and normative researchers involved as well as society at large.
Chapter
Full-text available
Feelings-as-information theory conceptualizes the role of subjective experiences – including moods, emotions, metacognitive experiences, and bodily sensations – in judgment. It assumes that people attend to their feelings as a source of information, with different feelings providing different types of information. Whereas feelings elicited by the target of judgment provide valid information, feelings that are due to an unrelated influence can lead us astray. The use of feelings as a source of information follows the same principles as the use of any other information. Most important, people do not rely on their feelings when they (correctly or incorrectly) attribute them to another source, thus undermining their informational value for the task at hand. What people conclude from a given feeling depends on the epistemic question on which they bring it to bear; hence, inferences from feelings are context-sensitive and malleable. In addition to serving as a basis of judgment, feelings inform us about the nature of our current situation and our thought processes are tuned to meet situational requirements. The chapter reviews the development of the theory, its core propositions and representative findings.
Article
Full-text available
The newly emerging field of machine ethics (Anderson and Anderson 2006) is concerned with adding an ethical dimension to machines. Unlike computer ethics—which has traditionally focused on ethical issues surrounding humans’ use of machines—machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. In this article we discuss the importance of machine ethics, the need for machines that represent ethical principles explicitly, and the challenges facing those working on machine ethics. We also give an example of current research in the field that shows that it is possible, at least in a limited domain, for a machine to abstract an ethical principle from examples of correct ethical judgments and use that principle to guide its own behavior.
Article
We develop a simple game-theoretic model to demonstrate that with the new General Data Protection Regulation's (GDPR) right to port data between content providers (CPs), (i) the incumbent CP has less incentive to preserve users' privacy, (ii) a new entrant CP will charge higher prices for its service, and (iii) customers of the new CP are worse off, while customers of the incumbent CP are better off.
Chapter
According to the empirical turn, we should take empirical facts into account in asking and answering philosophical, including ethical, questions about technology. In this chapter, the implications of the empirical turn for the ethics of technology are explored by investigating the relation between social acceptance (an empirical fact) and moral acceptability (an ethical judgement) of a technology. After discussing how acceptance is often problematically framed as a constraint to overcome, a preliminary analysis of the notions of acceptance and acceptability is offered. Next, the idea of a logical gap between acceptance and acceptability is explained. Although the gap is accepted, it is also argued that the distinction between acceptance and acceptability does not exactly map onto the descriptive/normative distinction and that both notions are perhaps best seen as thick concepts. Next, it is shown how a coherentist account of ethics, in particular John Rawls' model of wide reflective equilibrium, can account for the relation between acceptance and acceptability.
Article
Purpose – Enhancing customer participation behaviour (CPB) is critical for service firms. However, in a global context, cultural and local market factors are relevant. The purpose of this paper is to detail how and why global service firms can and should account for such factors. Prior research relied predominantly on cultural value differences to account for cross-national variation. The present study uses an index of consumers' institutional logics of market action (CILMA) as an alternative approach to segment international markets.
Design/methodology/approach – In total, 1,910 customers of financial services in 11 countries were surveyed on their CILMA as well as on customer participation behaviour intentions (CPBI) and on cognitive and affective trust as drivers. The 11 countries are then grouped according to their levels on the CILMA index. Finally, a structural equation model of the drivers of CPBI is tested for direct and moderating effects of the CILMA index by comparing the two segments with a relation- vs contract-dominated CILMA.
Findings – The study reveals that the CILMA index explains differences in customer participation behaviour intention and moderates relational mechanisms; in particular, in more relational vs contractual markets, CPBI is higher, and the effect of cognitive trust on CPBI is stronger in such settings. Global marketing managers thus should adjust CPB strategies according to observed CILMA index scores. Segmentation for CPB approaches could rely on CILMA index variations.
Originality/value – The newly proposed CILMA index combines both relation- and contract-based governance dimensions to describe complex institutional fields. This index differentiates relation- from contract-dominated markets and supports the application of the CILMA scale to many nations at the same time. The CILMA index can be applied to segment international markets to explain customer cocreation behaviour and its drivers.
Article
Existing research on information privacy has mostly relied on the privacy calculus model, which views privacy-related decision-making as a rational process where individuals weigh the anticipated risks of disclosing personal data against the potential benefits. In this research, we develop an extension to the privacy calculus model, arguing that the situation-specific assessment of risks and benefits is bounded by (1) pre-existing attitudes or dispositions, such as general privacy concerns or general institutional trust, and (2) limited cognitive resources and heuristic thinking. An experimental study, employing two samples from the USA and Switzerland, examined consumer responses to a new smartphone application that collects driving behavior data and provided converging support for these predictions. Specifically, the results revealed that a situation-specific assessment of risks and benefits fully mediates the effect of dispositional factors on information disclosure. In addition, the results showed that privacy assessment is influenced by momentary affective states, indicating that consumers underestimate the risks of information disclosure when confronted with a user interface that elicits positive affect.
Cools, H., & Koliska, M. (2023). Unfolding the limitations of internal and external algorithmic transparency in newsrooms. The Joint Computation + Journalism European Data & Computational Journalism Conference, ETH Zurich.
Renieris, E., Kiron, D., Mills, S., & Gupta, A. (2023, May 18). Are Responsible AI Programs Ready for Generative AI? Experts Are Doubtful. MIT Sloan Management Review.
Philosophy of Technology after the Empirical Turn (pp. 177-193). Dordrecht, the Netherlands: Springer.