Ethics and Information Technology

Published by Springer Nature
Online ISSN: 1572-8439
Recent publications
Article
With the advent of automated decision-making, governments have increasingly begun to rely on artificially intelligent algorithms to inform policy decisions across a range of domains of government interest and influence. The practice has not gone unnoticed among philosophers, worried about “algocracy” (rule by algorithm), and its ethical and political impacts. One of the chief issues of ethical and political significance raised by algocratic governance, so the argument goes, is the lack of transparency of algorithms. One of the best-known examples of philosophical analyses of algocracy is John Danaher’s “The threat of algocracy” (2016), arguing that government by algorithm undermines political legitimacy. In this paper, I will treat Danaher’s argument as a springboard for raising additional questions about the connections between algocracy, comprehensibility, and legitimacy, especially in light of empirical results about what we can expect the voters and policymakers to know. The paper has the following structure: in Sect. 2, I introduce the basics of Danaher’s argument regarding algocracy. In Sect. 3 I argue that the algocratic threat to legitimacy has troubling implications for social justice. In Sect. 4, I argue that, nevertheless, there seem to be good reasons for governments to rely on algorithmic decision support systems. Lastly, I try to resolve the apparent tension between the findings of the two preceding Sections.
 
Article
Many optimistic responses have been proposed to bridge the responsibility gaps which artificial systems threaten to create. This paper identifies a question which arises if this optimistic project proves successful. On a response-dependent understanding of responsibility, our responsibility practices themselves at least partially determine who counts as a responsible agent. On this basis, if AI or robot technology advances such that AI or robot agents become fitting participants within responsibility exchanges, then responsibility itself might be engineered. If we have good reason to think such technological advances are likely, then we should take steps to address the potential for engineering responsibility.
 
Article
Robots are becoming more visible parts of our life, a situation which prompts questions about their place in our society. One group of issues that is widely discussed is connected with robots’ moral and legal status as well as their potential rights. The question of granting robots rights is polarizing. Some positions accept the possibility of granting them human rights whereas others reject the notion that robots can be considered potential rights holders. In this paper, I claim that robots will never have all human rights, even if we accept that they are morally equal to humans. I focus on the role of embodiment in the content of the law. I claim that even relatively small differences in the ontologies of entities could lead to the need to create new sets of rights. I use the example of Neanderthals to illustrate that entities similar to us might have required different legal statuses. Then, I discuss the potential legal status of human-like robots.
 
Article
Intuitively, many people seem to hold that engaging in acts of virtual murder in videogames is morally permissible, whereas engaging in acts of virtual child molestation is morally impermissible. The Gamer’s Dilemma (Luck in Ethics Inf Technol 11:31–36, 2009) challenges these intuitions by arguing that it is unclear whether there is a morally relevant difference between these two types of virtual actions. There are two main responses in the literature to this dilemma. First, attempts to resolve the dilemma by defending an account of the relevant moral differences between virtual murder and virtual child molestation. Second, attempts to dissolve the dilemma by undermining the intuitions that ground it. In this paper, we argue that a narrow version of the Gamer’s Dilemma seems to survive attempts to resolve or dissolve it away entirely, since neither approach seems to be able to solve the dilemma for all cases. We thus provide a contextually sensitive version of the dilemma that more accurately tracks onto the intuitions of gamers. However, we also argue that the intuitions that ground the narrow version of the Dilemma may not have a moral foundation, and we put forward alternative non-moral normative foundations that seem to better account for the remaining intuitive difference between the two types of virtual actions. We also respond to proposed solutions to the Gamer’s Dilemma in novel ways and set out areas for future empirical work in this area.
 
Developed PRISMA flow diagram for ethical review of tracing apps technology
Article
We conducted a systematic literature review on the ethical considerations of the use of contact tracing app technology, which was extensively implemented during the COVID-19 pandemic. The rapid and extensive use of this technology during the pandemic, while benefiting public well-being by providing information about people's mobility and movements to control the spread of the virus, raised several ethical concerns for the post-COVID-19 era. To investigate these concerns for the post-pandemic situation and provide direction for future events, we analyzed the current ethical frameworks, research, and case studies about the ethical usage of tracing app technology. The results suggest there are seven essential ethical considerations in the ethical use of contact tracing technology: privacy, security, acceptability, government surveillance, transparency, justice, and voluntariness. In this paper, we explain and discuss these considerations and why they are needed for the ethical usage of this technology. The findings also highlight the importance of developing integrated guidelines and frameworks for the implementation of such technology in the post-COVID-19 world. Supplementary information: The online version contains supplementary material available at 10.1007/s10676-022-09659-6.
 
Article
The number of people with dementia is increasing worldwide. At the same time, family and professional caregivers' resources are limited. A promising approach to relieving these carers' burden and assisting people with dementia is assistive technology. In order to be useful and accepted, such technologies need to respect the values and needs of their intended users. We applied the value sensitive design approach to identify the values and needs of patients with dementia, family caregivers, and professional caregivers with respect to assistive technologies for people with dementia in institutionalized care. Based on semi-structured interviews of residents/patients with cognitive impairment, relatives, and healthcare professionals (10 each), we identified 44 values summarized by 18 core values. From these values, we created a values network to demonstrate the interplay between the values. At the core of this network were caring and empathy, the most strongly interacting values. Furthermore, we found 36 needs for assistance belonging to the four action fields of activity, care, management/administration, and nursing. Based on these values and needs for assistance, we created possible use cases for assistive technologies in each of the four identified action fields. All of these use cases are already technologically feasible today but are not currently being used in healthcare facilities. This underlines the need for the development of value-based technologies to ensure not only technological feasibility but also acceptance and implementation of assistive technologies. Our results help balance conflicting values and provide concrete suggestions for how engineers and designers can incorporate values into assistive technologies.
 
Article
Who is responsible when an AI machine causes something to go wrong? Or is there a gap in the ascription of responsibility? Answers range from claiming there is a unique responsibility gap, several different responsibility gaps, or no gap at all. In a nutshell, the problem is as follows: on the one hand, it seems fitting to hold someone responsible for a wrong caused by an AI machine; on the other hand, there seems to be no fitting bearer of responsibility for this wrong. In this article, we focus on a particular (aspect of the) AI responsibility gap: it seems fitting that someone should bear the legal consequences in scenarios involving AI machines with design defects; however, there seems to be no such fitting bearer. We approach this problem from the legal perspective, and suggest vicarious liability of AI manufacturers as a solution to this problem. Our proposal comes in two variants: the first has a narrower range of application, but can be easily integrated into current legal frameworks; the second requires a revision of current legal frameworks, but has a wider range of application. The latter variant employs a broadened account of vicarious liability. We emphasise the strengths of the two variants and finally highlight how vicarious liability offers important insights for addressing a moral AI responsibility gap.
 
Article
How should policymakers respond to the risk of technological unemployment that automation brings? First, I develop a procedure for answering this question that consults, rather than usurps, individuals’ own attitudes and ambitions towards that risk. I call this the insurance argument. A distinctive virtue of this view is that it dispenses with the need to appeal to a class of controversial reasons about the value of employment, and so is consistent with the demands of liberal political morality. Second, I appeal to the insurance argument to show that governments ought not simply to provide those who are displaced by machines with unemployment benefits. Instead, they must offer re-training programmes, as well as enact more general macroeconomic policies that create new opportunities for employment. My contribution is important not only because it helps us to resolve a series of urgent policy disputes—disputes that have been discussed extensively by labour market economists and policymakers, but less so by political philosophers—but also because my analysis sheds light on more general philosophical controversies relating to risk.
 
Schematic representation of the subjective and objective evaluation in Jaspers’ psychopathological approach
Article
Assistive systems based on Artificial Intelligence (AI) are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI systems understandable, rather than explainable. Yet, there is a grave lack of agreement concerning these terms in much of the literature on AI. We argue that the seminal distinction made by the philosopher and physician Karl Jaspers between different types of explaining and understanding in psychopathology can be used to promote greater conceptual clarity in the context of Machine Learning (ML). Following Jaspers, we claim that explaining and understanding constitute multi-faceted epistemic approaches that should not be seen as mutually exclusive, but rather as complementary ones as in and of themselves they are necessarily limited. Drawing on the famous example of Watson for Oncology we highlight how Jaspers’ methodology translates to the case of medical AI. Classical considerations from the philosophy of psychiatry can therefore inform a debate at the centre of current AI ethics, which in turn may be crucial for a successful implementation of ethically and legally sound AI in medicine.
 
Article
There is an ongoing debate about whether, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest the high accuracy and reliability of the systems are sufficient for providing epistemically justified beliefs without the need for explaining each individual decision. But, as we show, both positions have limitations, and it is unclear whether either addresses the epistemic worries of the medical professionals using these systems. We argue these systems do require an explanation, but an institutional explanation. These types of explanations provide the reasons why the medical professional should rely on the system in practice—that is, they focus on trying to address the epistemic concerns of those using the system in specific contexts and on specific occasions. But ensuring that these institutional explanations are fit for purpose means ensuring the institutions designing and deploying these systems are transparent about the assumptions baked into the system. This requires coordination with experts and end-users concerning how it will function in the field, the metrics used to evaluate its accuracy, and the procedures for auditing the system to prevent biases and failures from going unaddressed. We contend this broader explanation is necessary for either post hoc explanations or accuracy scores to be epistemically meaningful to the medical professional, making it possible for them to rely on these systems as effective and useful tools in their practices.
 
Article
While rapid advances in artificial intelligence (AI) hiring tools promise to transform the workplace, these algorithms risk exacerbating existing biases against marginalized groups. In light of these ethical issues, AI vendors have sought to translate normative concepts such as fairness into measurable, mathematical criteria that can be optimized for. However, questions of disability and access often are omitted from these ongoing discussions about algorithmic bias. In this paper, I argue that the multiplicity of different kinds and intensities of people’s disabilities and the fluid, contextual ways in which they manifest point to the limits of algorithmic fairness initiatives. In particular, existing de-biasing measures tend to flatten variance within and among disabled people and abstract away information in ways that reinforce pathologization. While fair machine learning methods can help mitigate certain disparities, I argue that fairness alone is insufficient to secure accessible, inclusive AI. I then outline a disability justice approach, which provides a framework for centering disabled people’s experiences and attending to the structures and norms that underpin algorithmic bias.
 
Schematic representation of hybrid decision making
Amended overview of the hybrid decision making process
Article
This paper approaches the interaction of a health professional with an AI system for diagnostic purposes as a hybrid decision making process and conceptualizes epistemo-ethical constraints on this process. We argue for the importance of understanding the underlying machine epistemology in order to raise awareness of, and facilitate realistic expectations from, AI as a decision support system, both among healthcare professionals and the potential beneficiaries (patients). Understanding the epistemic abilities and limitations of such systems is essential if we are to integrate AI into decision making processes in a way that takes into account its applicability boundaries. This will help to mitigate potential harm due to misjudgments and, as a result, to raise trust, understood here as a belief in the reliability of the AI system. We aim at a minimal requirement for AI meta-explanation which should distinguish machine epistemic processes from similar processes in human epistemology in order to avoid confusion and error in judgment and application. An informed approach to the integration of AI systems into decision making for diagnostic purposes is crucial given its high impact on the health and well-being of patients.
 
Article
Algorithmic decision-making based on profiling may significantly affect people’s destinies. As a rule, however, explanations for such decisions are lacking. What are the chances for a “right to explanation” to be realized soon? After an exploration of the regulatory efforts that are currently pushing for such a right, it is concluded that, at the moment, the GDPR stands out as the main force to be reckoned with. In cases of profiling, data subjects are granted the right to receive meaningful information about the functionality of the system in use; for fully automated profiling decisions even an explanation has to be given. However, the trade secrets and intellectual property rights (IPRs) involved must be respected as well. These conflicting rights must be balanced against each other; what will be the outcome? Looking back to 1995, when a similar kind of balancing was decreed in Europe concerning the right of access (DPD), Wachter et al. (2017) find that, according to judicial opinion, only generalities of the algorithm had to be disclosed, not specific details. This hardly augurs well for a future right of access, let alone to explanation. Thereupon the landscape of IPRs for machine learning (ML) is analysed. Spurred by new USPTO guidelines that clarify when inventions are eligible to be patented, the number of patent applications in the US related to ML in general, and to “predictive analytics” in particular, has soared since 2010—and Europe has followed. I conjecture that in such a climate of intensified protection of intellectual property, companies may legitimately claim that the more their application combines several ML assets that, in addition, are useful in multiple sectors, the more value is at stake when confronted with a call for explanation by data subjects. Consequently, the right to explanation may be severely crippled.
 
Article
The increased presence of medical AI in clinical use raises the ethical question of which standard of explainability is required for an acceptable and responsible implementation of AI-based applications in medical contexts. In this paper, we elaborate on the emerging debate surrounding the standards of explainability for medical AI. For this, we first distinguish several goods explainability is usually considered to contribute to the use of AI in general, and medical AI in particular. Second, we propose to understand the value of explainability relative to other available norms of explainable decision-making. Third, in pointing out that we usually accept heuristics and uses of bounded rationality in medical decision-making by physicians, we argue that the explainability of medical decisions should not be measured against an idealized diagnostic process, but according to practical considerations. Fourth, we conclude that the issue of explainability standards can be resolved by relocating it to the AI’s certifiability and interpretability.
 
Article
Rapid developments in Artificial Intelligence are leading to an increasing human reliance on machine decision making. Even in collaborative efforts with Decision Support Systems (DSSs), where a human expert is expected to make the final decisions, it can be hard to keep the expert actively involved throughout the decision process. DSSs suggest their own solutions and thus invite passive decision making. To keep humans actively ‘on’ the decision-making loop and counter overreliance on machines, we propose a ‘reflection machine’ (RM). This system asks users questions about their decision strategy and thereby prompts them to evaluate their own decisions critically. We discuss what forms RMs can take and present a proof-of-concept implementation of a RM that can produce feedback on users’ decisions in the medical and law domains. We show that the prototype requires very little domain knowledge to create reasonably intelligent critiquing questions. With this prototype, we demonstrate the technical feasibility to develop RMs and hope to pave the way for future research into their effectiveness and value.
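To make the reflection-machine idea concrete, here is a minimal, hypothetical sketch of template-based critiquing in Python (my own illustration of the general approach described in the abstract, not the authors' prototype; the Decision fields and question templates are invented for the example):

# Minimal sketch of a template-based "reflection machine" (illustrative only;
# not the authors' implementation). It turns a user's stated decision and the
# features they cite into generic critiquing questions.

from dataclasses import dataclass, field

@dataclass
class Decision:
    conclusion: str                      # e.g. "diagnosis: pneumonia"
    cited_features: list = field(default_factory=list)
    ignored_features: list = field(default_factory=list)

QUESTION_TEMPLATES = [
    "What evidence would make you revise the conclusion '{conclusion}'?",
    "You relied on {cited}. Could any of these be unreliable or outdated?",
    "You did not mention {ignored}. Could it change your assessment?",
    "Is there an alternative explanation that fits {cited} equally well?",
]

def reflect(decision: Decision) -> list:
    """Generate critiquing questions with almost no domain knowledge."""
    questions = []
    cited = ", ".join(decision.cited_features) or "the factors you cited"
    for template in QUESTION_TEMPLATES:
        if "{ignored}" in template:
            # Only ask about omissions if candidate features are known.
            for feat in decision.ignored_features:
                questions.append(template.format(ignored=feat))
        else:
            questions.append(template.format(conclusion=decision.conclusion, cited=cited))
    return questions

if __name__ == "__main__":
    d = Decision(conclusion="diagnosis: pneumonia",
                 cited_features=["fever", "chest X-ray"],
                 ignored_features=["recent travel history"])
    for q in reflect(d):
        print("-", q)

Even this toy version suggests why little domain knowledge is needed: the questions are generated from the structure of the user's stated decision rather than from medical or legal content.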
 
Article
The gamer’s dilemma offers three plausible but jointly inconsistent premises: (1) Virtual murder in video games is morally permissible. (2) Virtual paedophilia in video games is not morally permissible. (3) There is no morally relevant difference between virtual murder and virtual paedophilia in video games. In this paper I argue that the gamer’s dilemma can be understood as one of three distinct dilemmas, depending on how we understand two key ideas in Morgan Luck’s (2009) original formulation. The two ideas are those of (1) occurring in a video game and (2) being a virtual instance of murder or paedophilia. Depending on the weight placed on the gaming context, the dilemma is either about in-game acts or virtual acts. And depending on the type of virtual acts we have in mind, the dilemma is either about virtual representations or virtual partial reproductions of murder and paedophilia. This gives us three dilemmas worth resolving: a gaming dilemma, a representation dilemma, and a simulation dilemma. I argue that these dilemmas are about different issues, apply to different cases, and are susceptible to different solutions. I also consider how different participants in the debate have interpreted the dilemma in one or more of these three ways.
 
Article
The internet presents not just opportunities but also risks that range, to name a few, from online abuse and misinformation to the polarisation of public debate. Given the increasingly digital nature of our societies, these risks make it essential for users to learn how to wisely use digital technologies as part of a more holistic approach to promoting human flourishing. However, insofar as they are exacerbated by both the affordances and the political economy of the internet, this article argues that a new understanding of wisdom that is germane to the digital age is needed. As a result, we propose a framework for conceptualising what we call cyber-wisdom, and how this can be cultivated via formal education, in ways that are grounded in neo-Aristotelian virtue ethics and that build on three prominent existing models of wisdom. The framework, according to which cyber-wisdom is crucial to navigating online risks and opportunities through the deployment of character virtues necessary for flourishing online, suggests that cyber-wisdom consists of four components: cyber-wisdom literacy, cyber-wisdom reasoning, cyber-wisdom self-reflection, and cyber-wisdom motivation. Unlike the models on which it builds, the framework accounts for the specificity of the digital age and is both conceptual and practical. On the one hand, each component has conceptual implications for what it means to be wise in the digital age. On the other hand, informed by character education literature and practice, it has practical implications for how to cultivate cyber-wisdom in the classroom through teaching methods that match its different components.
 
Article
Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we seek to address in this paper. We outline a framework, called Explanatory Pragmatism, which we argue has three attractive features. First, it allows us to conceptualise explainability in explicitly context-, audience- and purpose-relative terms, while retaining a unified underlying definition of explainability. Second, it makes visible any normative disagreements that may underpin conflicting claims about explainability regarding the purposes for which explanations are sought. Third, it allows us to distinguish several dimensions of AI explainability. We illustrate this framework by applying it to a case study involving a machine learning model for predicting whether patients suffering disorders of consciousness were likely to recover consciousness.
 
Article
The introduction of automated vehicles promises an increase in traffic safety. Prior to their launch, proof of the anticipated reduction, in the sense of a positive risk balance compared with human driving performance, is required by various stakeholders such as the European Union Commission, the German Ethics Commission, and ISO TR 4804. To meet this requirement and to generate acceptance by the public and the regulatory authorities, a qualitative Risk-Benefit framework has been defined. This framework is based on literature research on approaches applied in other disciplines. This report depicts the framework, adapted from the pharmaceutical sector's PROACT-URL, which serves as a structured procedure to demonstrate a positive risk balance in an understandable and transparent manner. The qualitative framework needs to be turned into quantitative methods once it is applied. Therefore, two steps of the framework are discussed in more detail: first, the definition of adequate development thresholds that are required at an early stage of development; second, the simulation-based assessment to prove the positive risk balance prior to market introduction.
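As an illustration of what a simulation-based check of a positive risk balance could look like in the simplest possible terms, here is a sketch under assumed crash rates (the rates, mileage, and uncertainty proxy are invented; this is not the report's method):

# Illustrative sketch (my own, not the paper's method): a Monte Carlo comparison
# of an automated system against a human-driving baseline. A "positive risk
# balance" here means the automated vehicle's expected crash count is lower.

import random
import statistics

def simulate_crashes(crash_prob_per_km, km_per_run, runs, seed=0):
    """Monte Carlo estimate of crashes per run for a given per-km crash probability."""
    rng = random.Random(seed)
    results = []
    for _ in range(runs):
        crashes = sum(1 for _ in range(km_per_run) if rng.random() < crash_prob_per_km)
        results.append(crashes)
    return results

human = simulate_crashes(crash_prob_per_km=1e-4, km_per_run=10_000, runs=200, seed=1)
av    = simulate_crashes(crash_prob_per_km=5e-5, km_per_run=10_000, runs=200, seed=2)

balance = statistics.mean(human) - statistics.mean(av)   # > 0 means the AV is safer on average
spread  = statistics.stdev(human) + statistics.stdev(av) # crude uncertainty proxy
print(f"risk balance: {balance:.2f} crashes per 10,000 km (+/- {spread:.2f})")

A development threshold in the sense mentioned above could then be expressed as the minimum risk balance, net of its uncertainty, that must be demonstrated before market introduction.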
 
Bias mitigation through the model development pipeline: in-processing vs post-processing. The plot depicts a sub-portion of the model development pipeline and highlights: (i) fairness interventions for each phase (e.g. in-processing vs post-processing), and (ii) specific instances for each phase (e.g. Adversarial Debiasing, Reject Option based Classifier) considered in the paper
Adversarial Debiasing (AD) vs Rejection Option Classifier (ROC) at single data point level. The probability predictions of the AD and ROC models are plotted against each other at single data point level. The Y axis reports the predicted risk score by the AD model, and the X axis reports the predicted score by the ROC classifier. Solid black lines represent the acceptance threshold at 0.5. Dotted black lines represent the boundaries of the critical region for ROC. Empty circles represent the initial position of each single data point. Red circles represent the position of each single data point resulting, respectively, from AD or ROC
Credit risk loan application. The probability predictions of the AD and ROC interventions (on the same baseline logistic regression model) are plotted against each other for the same data points (e.g. validation set, 300 data points). The vertical axis reports the predicted score by the AD model, and the horizontal axis reports the predicted score by the ROC classifier. Solid black lines represent the acceptance decision threshold at 0.5. Dotted black lines represent the boundaries of the critical region for ROC. Black and blue circles correspond to data points with the "male" attribute; red and purple circles correspond to data points with the "female" attribute. The plot reports the classification results based on AD and ROC for the 300 data points in the validation set. Within this set, 104 data points have the "female" attribute and 196 the "male" attribute. For 186 out of 300 data points (47 with the "female" attribute and 139 with the "male" attribute), AD and ROC agree on the classification outcome. Regarding the remaining 114 data points for which the two methods disagree, we have: 88 data points (53 "female", 35 "male") rejected by AD but accepted by ROC, and 26 data points (4 "female" and 22 "male") accepted by AD but rejected by ROC
Article
The importance of fairness in machine learning models is widely acknowledged, and ongoing academic debate revolves around how to determine the appropriate fairness definition, and how to tackle the trade-off between fairness and model performance. In this paper we argue that besides these concerns, there can be ethical implications behind seemingly purely technical choices in fairness interventions in a typical model development pipeline. As an example we show that the technical choice between in-processing and post-processing is not necessarily value-free and may have serious implications in terms of who will be affected by the specific fairness intervention. The paper reveals how assessing the technical choices in terms of their ethical consequences can contribute to the design of fair models and to the related societal discussions.
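For readers unfamiliar with the post-processing intervention discussed in the figures above, the following sketch implements the standard reject-option idea on predicted scores. It is a generic illustration based on the technique's usual description, with an invented threshold, margin, and toy data, not the paper's experimental code:

# Sketch of a Reject Option based Classification step: inside a "critical region"
# around the 0.5 decision threshold, predictions are overridden in favour of the
# unprivileged group.

import numpy as np

def reject_option_classify(scores, is_unprivileged, threshold=0.5, margin=0.1):
    """
    scores:           predicted probabilities of the favourable outcome
    is_unprivileged:  boolean array marking the protected group (e.g. 'female' applicants)
    margin:           half-width of the critical region around the threshold
    """
    scores = np.asarray(scores, dtype=float)
    is_unprivileged = np.asarray(is_unprivileged, dtype=bool)

    decisions = scores >= threshold                      # default: plain threshold rule
    critical = np.abs(scores - threshold) <= margin      # low-confidence predictions
    decisions[critical & is_unprivileged] = True         # favour the unprivileged group
    decisions[critical & ~is_unprivileged] = False       # disfavour the privileged group
    return decisions

# Toy usage: two borderline applicants receive opposite outcomes after the intervention.
scores = np.array([0.55, 0.55, 0.90, 0.20])
group  = np.array([True, False, False, True])            # True = unprivileged
print(reject_option_classify(scores, group))             # [ True False  True False]

The ethically salient point made in the paper is visible even in this toy: which individuals are affected depends on whether the intervention acts on the trained model itself (in-processing, e.g. adversarial debiasing) or only on its outputs near the decision boundary (post-processing), even when aggregate fairness metrics look similar.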
 
The Design and Use Process for Bespoke Surgical Tools (3D printing stages from Geng and Bidana’s (2021) model shown in grey)
The bespoke surgical tool creation and use process diagram presented to participants
The revised bespoke surgical tool creation and use process
Article
Computational design uses artificial intelligence (AI) to optimise designs towards user-determined goals. When combined with 3D printing, it is possible to develop and construct physical products in a wide range of geometries and materials, encapsulating a range of functionality, with minimal input from human designers. One potential application is the development of bespoke surgical tools, whereby computational design optimises a tool’s morphology for a specific patient’s anatomy and the requirements of the surgical procedure to improve surgical outcomes. This emerging application of AI and 3D printing provides an opportunity to examine whether new technologies affect the ethical responsibilities of those operating in high-consequence domains such as healthcare. This research draws on stakeholder interviews to identify how a range of different professions involved in the design, production, and adoption of computationally designed surgical tools identify and attribute responsibility within the different stages of a computationally designed tool’s development and deployment. Those interviewed included surgeons and radiologists, fabricators experienced with 3D printing, computational designers, healthcare regulators, bioethicists, and patient advocates. Based on our findings, we identify additional responsibilities that surround the process of creating and using these tools. Additionally, the responsibilities of most professional stakeholders are not limited to individual stages of the tool design and deployment process, and the close collaboration between stakeholders at various stages of the process suggests that collective ethical responsibility may be appropriate in these cases. The role responsibilities of the stakeholders involved in developing the process to create computationally designed tools also change as the technology moves from research and development (R&D) to approved use.
 
Hashrate distribution on May 7th, 2021. ‘Unknown’ means that Blockchain.info was unable to determine the origin (Blockchain.com, 2021a)
Hashrate distribution over the last three years among the largest mining pools (until March 7th, 2021) (Blockchain.com, 2021b)
Bitcoin average transaction fees (all time, 7-day average). The SegWit activation led to decreasing fees per transaction as exchanges and wallets began adopting it (Kaminska, 2019). Fees started increasing again, exceeding in April 2021 the 2017 all-time high due to a combination of reasons, including increases in transactions and decreases in mining power (Harper, 2021)
Percentage of payments spending using SegWit per day (Transactionfee.info, 2021)
Article
In this study, I use the Critical Realism perspective of power to explain how the Bitcoin protocol operates as a system of power. I trace the ideological underpinnings of the protocol in the Cypherpunk movement to consider how notions of power shaped the protocol. The protocol by design encompasses structures, namely Proof of Work and Block Selection, that reproduce asymmetrical constraints on the entities that comprise it. These constraining structures generate constraining mechanisms, those of cost effectiveness and deanonymisation, which further restrict participating entities’ ‘power to act’, reinforcing others’ ‘power over’ them. In doing so, I illustrate that the Bitcoin protocol, rather than decentralising and distributing power across a network of numerous anonymous, trustless peers, has instead shifted it from the traditional actors (e.g., state, regulators) to newly emergent ones.
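For context, the Proof of Work structure referred to above can be reduced to a few lines: miners search for a nonce whose double-SHA256 hash falls below a difficulty target, which is what makes participation costly and concentrates effective power in well-resourced mining pools. The sketch below is a toy illustration with an invented header and very low difficulty, not Bitcoin's actual consensus code:

# Toy Proof of Work: finding a valid nonce is costly, verifying it is cheap.

import hashlib

def block_hash(header: bytes, nonce: int) -> bytes:
    data = header + nonce.to_bytes(8, "little")
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, difficulty_bits: int = 16, max_tries: int = 2_000_000):
    """Brute-force a nonce; expected work grows exponentially with difficulty_bits."""
    target = 2 ** (256 - difficulty_bits)
    for nonce in range(max_tries):
        if int.from_bytes(block_hash(header, nonce), "big") < target:
            return nonce
    return None

def verify(header: bytes, nonce: int, difficulty_bits: int = 16) -> bool:
    """Verification is a single hash comparison, i.e. effectively free."""
    return int.from_bytes(block_hash(header, nonce), "big") < 2 ** (256 - difficulty_bits)

header = b"toy block header"
nonce = mine(header)
print(nonce, nonce is not None and verify(header, nonce))

The asymmetry the article analyses is visible here: mining requires an expected 2^difficulty_bits attempts, while verification requires a single hash comparison.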
 
A schema of the relationships between key concepts in the conceptual model. (a) Means (box a) should be understood as the relevant (but not sufficient) conditions that allow capabilities to be created (box c). In this framework, Algorithmic Management practices are considered to be resources that could enhance (or hinder) the development of capabilities in working life. (b) What an individual worker does with the provided means depends on their individual conversion factors. The conversion factors listed in box b are the factors that a worker has and employs to convert AM-based means/resources into capabilities. How means are converted into capabilities (box c) thus differs for each worker. (c) When individual conversion factors allow for it, the use of means can help to build or develop a worker’s set of capabilities (box c), which are the freedoms a worker has in their working life. Without conversion factors, AM-based means/resources will not add to the development of capabilities. (d) Next, it depends on a worker’s choices and priorities whether their capabilities are turned into actual functionings (achieved beings and doings). The feedback loop in this framework reflects that a worker’s choices (box d) are, under AM, often directly influenced by nudging techniques and similar features that are part of AM systems (box a). However, the behaviour of workers is also fed back into the AM system. (e) The worker’s set of functionings (box e) are the realised capabilities: the actual beings and doings of the worker, which are the result of all the previous factors, and which together constitute a working life that is worthy of living. This means that a realised functioning adds to an agent’s dignity. (f) Finally, this development should be seen in the context of, and as impacted by, the contextual factors, which can be socio-legal and organisational (box f)
Article
This paper proposes a conceptual framework to study and evaluate the impact of ‘Algorithmic Management’ (AM) on worker dignity. While the literature on AM addresses many concerns that relate to the dignity of workers, a shared understanding of what worker dignity means, and a framework to study it, in the context of software algorithms at work is lacking. We advance a conceptual framework based on a Capability Approach (CA) as a route to understanding worker dignity under AM. This paper contributes to the existing AM literature which currently is mainly focused on exploitation and violations of dignity and its protection. By using a CA, we expand this focus and can evaluate the possibility that AM might also enable and promote dignity. We conclude that our CA-based conceptual framework provides a valuable means to study AM and then discuss avenues for future research into the complex relationship between worker dignity and AM systems.
 
Article
Different people have different perceptions about artificial intelligence (AI). It is extremely important to bring together all the alternative frames of thinking—from the various communities of developers, researchers, business leaders, policymakers, and citizens—to properly start acknowledging AI. This article highlights the ‘fruitful collaboration’ that sociology and AI could develop in both social and technical terms. We discuss how biases and unfairness are among the major challenges to be addressed in such a sociotechnical perspective. First, as intelligent machines reveal their nature of ‘magnifying glasses’ in the automation of existing inequalities, we show how the AI technical community is calling for transparency and explainability, accountability and contestability. Not to be considered as panaceas, they all contribute to ensuring human control in novel practices that include requirement, design and development methodologies for a fairer AI. Second, we elaborate on the mounting attention for technological narratives as technology is recognized as a social practice within a specific institutional context. Not only do narratives reflect organizing visions for society, but they also are a tangible sign of the traditional lines of social, economic, and political inequalities. We conclude with a call for a diverse approach within the AI community and a richer knowledge about narratives as they help in better addressing future technical developments, public debate, and policy. AI practice is interdisciplinary by nature and it will benefit from a socio-technical perspective.
 
Explainability, transparency, trustworthiness and fairness perception relations
Fairness and Understanding scores. Fairness and understanding scores in cases of negative recommendation are presented in the left part of the graph. Fairness and understanding scores in cases of positive recommendation are presented in the right part of the graph. The colors represent the different explanation styles
Article
In light of the widespread use of algorithmic (intelligent) systems across numerous domains, there is an increasing awareness of the need to explain their underlying decision-making processes and resulting outcomes. Since these systems are often considered black boxes, adding explanations to their outcomes may contribute to the perception of their transparency and, as a result, increase users’ trust and fairness perception towards the system, regardless of its actual fairness, which can be measured using various fairness tests and measurements. Different explanation styles may have a different impact on users’ perception of fairness towards the system and on their understanding of the system’s outcome. Hence, there is a need to understand how various explanation styles may impact non-expert users’ perceptions of fairness and understanding of the system’s outcome. In this study we aimed to fulfill this need. We performed a between-subject user study in order to examine the effect of various explanation styles on users’ fairness perception and understanding of the outcome. In the experiment we examined four known styles of textual explanations (case-based, demographic-based, input influence-based and sensitivity-based) along with a new style (certification-based) that reflects the results of an auditing process of the system. The results suggest that providing some kind of explanation contributes to users’ understanding of the outcome and that some explanation styles are more beneficial than others. Moreover, while explanations provided by the system are important and can indeed enhance users’ perception of fairness, their perception mainly depends on the outcome of the system. The results may shed light on one of the main problems in the explainability of algorithmic systems: choosing the best explanation to promote users’ fairness perception towards a particular system, with respect to the outcome of the system. The contribution of this study lies in the new and realistic case study that was examined, in the creation and evaluation of a new explanation style that can be used as the link between the actual (computational) fairness of the system and users’ fairness perception, and in the need to analyze and evaluate explanations while taking the outcome of the system into account.
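To illustrate two of the explanation styles examined in the study, here is a hedged sketch for a hypothetical linear credit-scoring model; the feature names, weights, and wording are invented, and the study's actual stimuli may differ:

# Two textual explanation styles for a toy linear model:
# - input-influence: per-feature contribution to the score
# - sensitivity: smallest change in a feature that would flip the outcome

import numpy as np

FEATURES = ["income", "debt", "years_employed"]
WEIGHTS = np.array([0.8, -1.2, 0.5])       # hypothetical model weights
BIAS = -0.1
THRESHOLD = 0.0                             # score >= 0 -> approved

def input_influence_explanation(x):
    contributions = WEIGHTS * x
    order = np.argsort(-np.abs(contributions))
    return [f"{FEATURES[i]} contributed {contributions[i]:+.2f} to the decision"
            for i in order]

def sensitivity_explanation(x):
    score = WEIGHTS @ x + BIAS
    lines = []
    for i, w in enumerate(WEIGHTS):
        if w == 0:
            continue
        delta = (THRESHOLD - score) / w     # change in feature i that crosses the threshold
        lines.append(f"changing {FEATURES[i]} by {delta:+.2f} would flip the outcome")
    return lines

applicant = np.array([0.4, 0.9, 0.2])       # standardised feature values
print(*input_influence_explanation(applicant), sep="\n")
print(*sensitivity_explanation(applicant), sep="\n")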
 
Article
Fairness is one of the most prominent values in the Ethics and Artificial Intelligence (AI) debate and, specifically, in the discussion on algorithmic decision-making (ADM). However, while the need for fairness in ADM is widely acknowledged, the very concept of fairness has not been sufficiently explored so far. Our paper aims to fill this gap and claims that an ethically informed re-definition of fairness is needed to adequately investigate fairness in ADM. To achieve our goal, after an introductory section aimed at clarifying the aim and structure of the paper, in section “Fairness in algorithmic decision-making” we provide an overview of the state of the art of the discussion on fairness in ADM and show its shortcomings; in section “Fairness as an ethical value”, we pursue an ethical inquiry into the concept of fairness, drawing insights from accounts of fairness developed in moral philosophy, and define fairness as an ethical value. In particular, we argue that fairness is articulated in a distributive and socio-relational dimension; it comprises three main components: fair equality of opportunity, equal right to justification, and fair equality of relationship; these components are grounded in the need to respect persons both as persons and as particular individuals. In section “Fairness in algorithmic decision-making revised”, we analyze the implications of our redefinition of fairness as an ethical value on the discussion of fairness in ADM and show that each component of fairness has profound effects on the criteria that ADM ought to meet. Finally, in section “Concluding remarks”, we sketch some broader implications and conclude.
 
Article
In recent years, increasingly advanced artificial intelligence (AI), and in particular machine learning, has shown great promise as a tool in various healthcare contexts. Yet as machine learning in medicine has become more useful and more widely adopted, concerns have arisen about the “black-box” nature of some of these AI models, or the inability to understand—and explain—the inner workings of the technology. Some critics argue that AI algorithms must be explainable to be responsibly used in the clinical encounter, while supporters of AI dismiss the importance of explainability and instead highlight the many benefits the application of this technology could have for medicine. However, this dichotomy fails to consider the particular ways in which machine learning technologies mediate relations in the clinical encounter, and in doing so, makes explainability more of a problem than it actually is. We argue that postphenomenology is a highly useful theoretical lens through which to examine black-box AI, because it helps us better understand the particular mediating effects this type of technology brings to clinical encounters and moves beyond the explainability stalemate. Using a postphenomenological approach, we argue that explainability is more of a concern for physicians than it is for patients, and that a lack of explainability does not introduce a novel concern to the physician–patient encounter. Explainability is just one feature of technological mediation and need not be the central concern on which the use of black-box AI hinges.
 
Number of articles per newspaper which discussed the app. Also includes percentage of article type
Number of mentions of each ethical issue concerning the app in the news articles, and extent to which the issue was discussed
Number of mentions of each ethical issue concerning the app in the grey literature, and extent to which the issue was discussed
Percentage of how much each ethical issue concerning either the app, or the test and trace programme, appeared in news articles, and extent to which the issue was discussed. Each ethical issue was categorised as referring to either the app or to the wider test and trace programme. For each ethical issue referring to the app, and for each ethical issue referring to the test and trace programme, we determined the percentage that the ethical issue was mentioned as a proportion of the total number of articles mentioning the app or the test and trace programme, respectively
Article
This paper explores ethical debates associated with the UK COVID-19 contact tracing app that occurred in the public news media and broader public policy, and in doing so, takes ethics debate as an object for sociological study. The research question was: how did UK national newspaper news articles and grey literature frame the ethical issues about the app, and how did stakeholders associated with the development and/or governance of the app reflect on this? We examined the predominance of different ethical issues in news articles and grey literature, and triangulated this using stakeholder interview data. Findings illustrate how news articles exceptionalised ethical debate around the app compared to the way they portrayed ethical issues relating to ‘manual’ contact tracing. They also narrowed the debate around specific privacy concerns. This was reflected in the grey literature, and interviewees perceived this to have emerged from a ‘privacy lobby’. We discuss the findings, and argue that this limited public ethics narrative masked broader ethical issues.
 
Article
This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is exercised by AI practitioners through the vehicle of an AI application. I conclude with four critical questions based on the discretionary account to determine if trust in particular AI applications is sound, and a brief discussion of the possibility that the main roles of the physician could be replaced by AI.
 
Article
Developers are often the engine behind the creation and implementation of new technologies, including in the artificial intelligence surge that is currently underway. In many cases these new technologies introduce significant risks to affected stakeholders, risks that can be reduced and mitigated by such a dominant party. This is fully recognized by texts that analyze risks in the current AI transformation, which suggest voluntary adoption of ethical standards and imposing ethical standards via regulation and oversight as tools to compel developers to reduce such risks. However, what these texts usually sidestep is the question of how aware developers are of the risks they are creating with these new AI technologies, and what their attitudes are towards such risks. This paper seeks to rectify this gap in research by analyzing an ongoing case study. Focusing on six Israeli AI startups in the field of radiology, I carry out a content analysis of their online material in order to examine these companies’ stances towards the potential threat their automated tools pose to patient safety and to the work-standing of healthcare professionals. Results show that these developers are aware of the risks their AI products pose, but tend to deny their own role in the technological transformation and to dismiss or downplay the risks to stakeholders. I conclude by tying these findings back to current risk-reduction recommendations with regard to advanced AI technologies, and suggest which of them hold more promise in light of developers’ attitudes.
 
The value alignment process is performed in two steps: a reward specification and an ethical embedding. Rectangles stand for objects whereas rounded rectangles correspond to processes
Possible initial state of a public civility game. The agent on the left must deal with a garbage obstacle ahead
(a) Example of convex hull CH(M), represented in objective space. (b) Identification of the points of CH(M) corresponding with the ethical-optimal value vector V* (highlighted in green) and the second-best value vector V'* (in yellow). (c) Representation in weight space of CH(M). The minimal weight value w_e for which V* is optimal is identified with a green vertical line. (Color figure online)
Left: Visualisation in objective space of the convex hull of the public civility game composed by 3 policies: E (Ethical), R (Regimented) and U (Unethical). Right: Visualisation in weight space of the same convex hull. The painted areas indicate which policy is optimal for the varying values of the ethical weight w_e: red for the Unethical policy, yellow for the Regimented one, and green for the Ethical one. (Color figure online)
Evolution of the accumulated rewards per episode that the agent obtains in the ethical environment
Article
AI research is being challenged with ensuring that autonomous agents learn to behave ethically, namely in alignment with moral values. Here, we propose a novel way of tackling the value alignment problem as a two-step process. The first step consists in formalising moral values and value-aligned behaviour based on philosophical foundations. Our formalisation is compatible with the framework of (Multi-Objective) Reinforcement Learning, to ease the handling of an agent’s individual and ethical objectives. The second step consists in designing an environment wherein an agent learns to behave ethically while pursuing its individual objective. We leverage our theoretical results to introduce an algorithm that automates our two-step approach. In the cases where value-aligned behaviour is possible, our algorithm produces a learning environment for the agent wherein it will learn a value-aligned behaviour.
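A rough numerical illustration of the scalarisation idea, under assumed value vectors for the three policies shown in the figures above (Ethical, Regimented, Unethical); the numbers and the simple grid search are mine, not the authors' algorithm:

# Each policy has a two-component value vector (individual objective, ethical objective).
# The scalarised value is V_ind + w_e * V_eth; we search for the smallest ethical weight
# w_e at which the Ethical policy becomes optimal.

# Hypothetical value vectors for the three policies of the public civility game:
POLICIES = {
    "Unethical":  (10.0, -5.0),
    "Regimented": ( 8.0,  0.0),
    "Ethical":    ( 7.0,  2.0),
}

def best_policy(w_e):
    scalarise = lambda v: v[0] + w_e * v[1]
    return max(POLICIES, key=lambda name: scalarise(POLICIES[name]))

def minimal_ethical_weight(step=0.01, w_max=10.0):
    w = 0.0
    while w <= w_max:
        if best_policy(w) == "Ethical":
            return round(w, 2)
        w += step
    return None

print(minimal_ethical_weight())   # smallest w_e making the Ethical policy optimal

The returned weight corresponds to the minimal ethical weight w_e at which the ethical policy becomes optimal in the scalarised single-objective problem, which is the quantity highlighted in the convex-hull figures above.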
 
Article
This article analyzes emerging artificial intelligence (AI)-enhanced lie detection systems from ethical and human resource (HR) management perspectives. I show how these AI enhancements transform lie detection, followed by analyses of how the changes can lead to moral problems. Specifically, I examine how these applications of AI introduce human rights issues of fairness, mental privacy, and bias and outline the implications of these changes for HR management. The changes that AI is making to lie detection are altering the roles of human test administrators and human subjects, adding machine learning-based AI agents to the situation and establishing invasive data collection processes, as well as introducing certain biases in results. I project that the potential for pervasive and continuous lie detection initiatives (“truth machines”) is substantial, displacing human-centered efforts to establish trust and foster integrity in organizations. I argue that if it is possible for HR managers to do so, they should cease using technologically-based lie detection systems entirely and work to foster trust and accountability on a human scale. However, if these AI-enhanced technologies are put into place by organizations because of law, agency mandate, or other compulsory measures, care should be taken that the impacts of the technologies on human rights and wellbeing are considered. The article explores how AI can displace the human agent in some aspects of lie detection and credibility assessment scenarios, expanding the prospects for inscrutable, “black box” processes and novel physiological constructs (such as “biomarkers of deceit”) that may increase the potential for such human rights concerns as fairness, mental privacy, and bias. Employee interactions with autonomous lie detection systems, rather than with human beings who administer specific tests, can reframe organizational processes and rules concerning the assessment of personal honesty and integrity. The dystopian projection of organizational life in which analyses and judgments of the honesty of one’s utterances are made automatically and in conjunction with one’s personal profile provides unsettling prospects for the autonomy of self-representation.
 
Article
This research examines how the Nation, Punch, Vanguard and Daily Trust newspapers reported the Southern Kaduna conflicts in terms of frequency, direction, placement and level of sensationalism between September 2020 and March 2021. The media, which are powerful tools of communication, can aid in promoting peace, unity and development as well as create conflict along ethnic, religious and political lines. The study, which was anchored on social responsibility and agenda-setting theories, used both content analysis and critical discourse analysis to code and interpret the data collected. A total of two hundred and twenty-four (224) editions of the newspapers under review were selected using a stratified random sampling technique by days of the week. Out of this sample, only 203 editions were accessed, coded and content analysed. The research reveals, among other things, that the reports on the Southern Kaduna conflicts were mostly straight news, constituting 92% of coverage, and lacked sufficient context and background. The reports were also given little prominence, as almost all of them were buried in the inside pages. In addition, most of the reports were inflammatory and sensational, keeping the audience glued to the newspapers at the expense of accuracy and professionalism. The study therefore suggests that media organisations should organise extensive training on conflict-sensitive reporting so as to arm reporters with the requisite professional knowledge of reporting conflicts, such that the reports do not trigger further conflict. Media organisations should also not report conflicts in a straight news format only; instead, they should use editorials and features, which are usually in-depth and analytical, with the context and background needed for conflict-sensitive journalism. Finally, the media should give prominence and priority to conflict incidents in order to attract the government intervention needed to bring about lasting solutions.
 
Ethical applications of artificial intelligence to HRM: a decision-making framework
Article
Artificial intelligence (AI) is increasingly inputting into various human resource management (HRM) functions, such as sourcing job applicants and selecting staff, allocating work, and offering personalized career coaching. While the use of AI for such tasks can offer many benefits, evidence suggests that without careful and deliberate implementation its use also has the potential to generate significant harms. This raises several ethical concerns regarding the appropriateness of AI deployment to domains such as HRM, which directly deal with managing sometimes sensitive aspects of individuals’ employment lifecycles. However, research at the intersection of HRM and technology continues to largely center on examining what AI can be used for, rather than focusing on the salient factors relevant to its ethical use and examining how to effectively engage human workers in its use. Conversely, the ethical AI literature offers excellent guiding principles for AI implementation broadly, but there remains much scope to explore how these principles can be enacted in specific contexts-of-use. By drawing on ethical AI and task-technology fit literature, this paper constructs a decision-making framework to support the ethical deployment of AI for HRM and guide determinations of the optimal mix of human and machine involvement for different HRM tasks. Doing so supports the deployment of AI for the betterment of work and workers and generates both scholarly and practical outcomes.
 
Article
Should we welcome social robots into interpersonal relationships? In this paper I show that an adequate answer to this question must take three factors into consideration: (1) the psychological vulnerability that characterizes ordinary interpersonal relationships, (2) the normative significance that humans attach to other people’s attitudes in such relationships, and (3) the tendency of humans to anthropomorphize and “mentalize” artificial agents, often beyond their actual capacities. I argue that we should welcome social robots into interpersonal relationships only if they are endowed with a social capacity that is functionally similar to our own capacity for social norms. Drawing on an interdisciplinary body of research on norm psychology, I explain why this capacity is importantly different from pre-programmed, top-down conformity to rules, in that it involves an open-ended responsiveness to social corrective feedback, such as that which humans provide to each other in expressions of praise and blame.
 
Microsoft Kinect sensor (on tripod) and projector create a virtual touchscreen on the floor of the orangutan enclosure (author photo)
Prototype of a KWO game for collaborative play between orangutans and humans (photo provided with permission by Melbourne Zoo)
Article
This paper examines how digital technologies might be used to improve ethical attitudes towards nonhuman animals, by exploring the case study of nonhuman apes kept in modern zoos. The paper describes and employs a socio-ethical framework for undermining anti-ape prejudice advanced by philosopher Edouard Machery, which draws on classic anti-racism strategies from the social sciences. We also discuss how digital technologies might be designed and deployed to enable and enhance rather than impede the three anti-prejudice strategies of contact and interaction, enlightenment, and individualization. In doing so, the paper illuminates the broad potential and limitations of digital technology to both harm and benefit animals via its effects on human ethical attitudes. This examination provides guidance for future projects and empirical work on using digital technologies to promote moral respect for a range of nonhuman animals in different settings.
 
Article
In this paper, we explore and describe what is needed to allow connected and automated vehicles (CAVs) to break traffic rules in order to minimise road safety risk and to operate with appropriate transparency (according to recommendation 4 in Bonnefon et al., European Commission, 2020). Reviewing current traffic rules with particular reference to two driving situations (speeding and mounting the pavement), we illustrate why current traffic rules are not suitable for CAVs and why making new traffic rules specifically for CAVs would be inappropriate. In defining an alternative approach to achieving safe CAV driving behaviours, we describe the use of ethical goal functions as part of hybrid AI systems, suggesting that functions should be defined by governmental bodies with input from citizens and stakeholders. Ethical goal functions for CAVs would enable developers to optimise driving behaviours for safety under conditions of uncertainty whilst allowing for differentiation of products according to brand values. Such functions can differ between regions according to preferences for safety behaviours within that region and can be updated over time, responding to continual socio-technological feedback loops. We conclude that defining ethical goal functions is an urgent and necessary step from governmental bodies to enable the safe and transparent operation of CAVs and accelerate the reduction in road casualties they promise to achieve.
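The abstract does not specify the mathematical form of an ethical goal function. The sketch below is a minimal illustration, assuming such a function is a weighted cost that a CAV planner minimises over candidate manoeuvres under uncertainty; the fields, weights and example values are hypothetical, with the region-specific weights standing in for the preferences the paper says governmental bodies should elicit from citizens and stakeholders.

```python
# Illustrative sketch only: the paper does not give this form. An "ethical goal
# function" is assumed here to be a weighted cost minimised over candidate
# manoeuvres, where the weights encode region-specific safety preferences.
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    """Hypothetical candidate behaviour with estimated (probabilistic) outcomes."""
    expected_harm_occupants: float   # expected injury cost to vehicle occupants
    expected_harm_others: float      # expected injury cost to other road users
    rule_deviation: float            # degree of deviation from codified traffic rules
    utility: float                   # driving utility (progress) of the manoeuvre

def ethical_goal_function(m: Manoeuvre, weights: dict) -> float:
    """Return a scalar cost; lower is better. Weights could be set by a
    governmental body with citizen input and may differ between regions."""
    return (weights["occupants"] * m.expected_harm_occupants
            + weights["others"] * m.expected_harm_others
            + weights["rules"] * m.rule_deviation
            - weights["utility"] * m.utility)

# Example: pick the least-cost manoeuvre under hypothetical region-specific weights.
region_weights = {"occupants": 1.0, "others": 1.2, "rules": 0.3, "utility": 0.1}
candidates = [
    Manoeuvre(0.02, 0.01, 0.0, 0.5),   # keep lane and slow down
    Manoeuvre(0.01, 0.03, 0.8, 0.9),   # briefly mount the pavement
]
best = min(candidates, key=lambda m: ethical_goal_function(m, region_weights))
```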
 
Article
During the last decade there has been burgeoning research concerning the ways in which we should think of and apply the concept of responsibility for Artificial Intelligence. Despite this conceptual richness, there is still a lack of consensus regarding what Responsible AI entails on both conceptual and practical levels. The aim of this paper is to connect the ethical dimension of responsibility in Responsible AI with Aristotelian virtue ethics, where notions of context and dianoetic virtues play a grounding role for the concept of moral responsibility. The paper starts by highlighting the important difficulties in assigning responsibility to either technologies themselves or to their developers. Top-down and bottom-up approaches to moral responsibility are then contrasted, as we explore how they could inform debates about Responsible AI. We highlight the limits of the former ethical approaches and build the case for classical Aristotelian virtue ethics. We show that two building blocks of Aristotle’s ethics, dianoetic virtues and the context of actions, although largely ignored in the literature, can shed light on how we could think of moral responsibility for both AI and humans. We end by exploring the practical implications of this particular understanding of moral responsibility along the triadic dimensions of ethics by design, ethics in design and ethics for designers.
 
Schematic architecture of a decentralized, modular automated driving approach. Adapted from Eggert, Klingelschmitt and Damerow (2015)
The graphic illustrates an overtaking maneuver from a birds-eye view. The red AV has started its overtaking maneuver of the white vehicle on a road with two-way traffic. The future positions of the AV itself and the other car are predicted including its uncertainties, illustrated by the green and blue ellipses. (Color figure online)
Article
Automated vehicles (AVs) are expected to operate on public roads, together with non-automated vehicles and other road users such as pedestrians or bicycles. Recent ethical reports and guidelines raise worries that AVs will introduce injustice or reinforce existing social inequalities in road traffic. One major injustice concern in today’s traffic is that different types of road users are exposed differently to risks of corporal harm. In the first part of the paper, we discuss the responsibility of AV developers to address existing injustice concerns regarding risk exposure as well as approaches on how to fulfill the responsibility for a fairer distribution of risk. In contrast to popular approaches on the ethics of risk distribution in unavoidable accident cases, we focus on low and moderate risk situations, referred to as routine driving. For routine driving, the obligation to distribute risks fairly must be discussed in the context of risk-taking and risk-acceptance, balancing safety objectives of occupants and other road users with driving utility. In the second part of the paper, we present a typical architecture for decentralized automated driving which contains a dedicated module for real-time risk estimation and management. We examine how risk estimation modules can be adjusted and parameterized to redress some inequalities.
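The paper's architecture and parameters are not reproduced here. The following sketch only illustrates the kind of parameterisation the abstract gestures at: predicted collision risk is scaled by a vulnerability weight per road-user type, so that risk imposed on unprotected users counts for more when candidate trajectories are compared. All names, weights and thresholds are hypothetical.

```python
# Illustrative sketch only: assumes a risk estimate of the form
# risk = probability x severity, with severity scaled by a vulnerability weight
# per road-user type, so risk imposed on unprotected users counts for more.

VULNERABILITY = {          # hypothetical weights; how to tune them is the ethical question
    "car_occupant": 1.0,
    "cyclist": 2.0,
    "pedestrian": 3.0,
}

def weighted_risk(collision_prob: float, base_severity: float, user_type: str) -> float:
    """Risk imposed on one road user, scaled by that user's vulnerability."""
    return collision_prob * base_severity * VULNERABILITY[user_type]

def trajectory_risk(encounters: list[tuple[float, float, str]]) -> float:
    """Aggregate weighted risk over all predicted encounters of a candidate trajectory."""
    return sum(weighted_risk(p, s, t) for p, s, t in encounters)

# A planner could then keep only trajectories whose aggregate weighted risk stays
# below an acceptance threshold agreed on as part of routine-driving policy.
candidate = [(0.001, 5.0, "pedestrian"), (0.01, 1.0, "car_occupant")]
print(trajectory_risk(candidate))
```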
 
Article
The paper has two goals. The first is to present the main results of the recent report Ethics of Connected and Automated Vehicles: recommendations on road safety, privacy, fairness, explainability and responsibility, written by the Horizon 2020 European Commission Expert Group set up to advise on specific ethical issues raised by driverless mobility, of which the author of this paper has been a member and rapporteur. The second is to present some broader ethical and philosophical implications of these recommendations, and to use them to contribute to the establishment of Ethics of Transportation as an independent branch of applied ethics. The recent debate on the ethics of Connected and Automated Vehicles (CAVs) presents a paradox and an opportunity. The paradox is the presence of a flourishing debate on the ethics of one very specific transportation technology without ethics of transportation being in itself a well-established academic discipline. The opportunity is that, now that a spotlight has been turned on the ethical dimensions of CAVs, it may be easier to establish a broader debate on the ethics of transportation. While the 20 recommendations of the EU report are grouped into three macro-areas: road safety, data ethics, and responsibility, in this paper they will be grouped according to eight philosophical themes: Responsible Innovation, road justice, road safety, freedom, human control, privacy, data fairness, responsibility. These are proposed as the first topics for a new ethics of transportation.
 
Article
Are acts of violence performed in virtual environments ever morally wrong, even when no other persons are affected? While some such acts surely reflect deficient moral character, I focus on the moral rightness or wrongness of acts. Typically it’s thought that, on Kant’s moral theory, an act of virtual violence is morally wrong (i.e., violates the Categorical Imperative) only if the act mistreats another person. But I argue that, on Kant’s moral theory, some acts of virtual violence can be morally wrong, even when no other persons or their avatars are affected. First, I explain why many have thought that, in general on Kant’s moral theory, virtual acts affecting no other persons or their avatars can’t violate the Categorical Imperative. For there are real-world acts that clearly do, but it seems that when we consider the same sorts of acts done alone in a virtual environment, they don’t violate the Categorical Imperative, because no other persons were involved. But then, how could any virtual acts like these, which affect no other persons or their avatars, violate the Categorical Imperative? I then argue that there indeed can be such cases of morally wrong virtual acts—some due to an actor’s having erroneous beliefs about morally relevant facts, and others due not to error, but to the actor’s intention leaving out morally relevant facts while immersed in a virtual environment. I conclude by considering some implications of my arguments for both our present technological context and the future.
 
Article
According to a recent survey by the HR Research Institute, as the presence of artificial intelligence (AI) becomes increasingly common in the workplace, HR professionals are worried that the use of recruitment algorithms will lead to a “dehumanization” of the hiring process. Our main goals in this paper are threefold: i) to bring attention to this neglected issue, ii) to clarify what exactly this concern about dehumanization might amount to, and iii) to sketch an argument for why dehumanizing the hiring process is ethically suspect. After distinguishing the use of the term “dehumanization” in this context (i.e., removing the human presence) from its more common meaning in the interdisciplinary field of dehumanization studies (i.e., conceiving of other humans as subhuman), we argue that the use of hiring algorithms may negatively impact the employee-employer relationship. We argue that there are good independent reasons to accept a substantive employee-employer relationship, as well as an applicant-employer relationship, both of which are consistent with a stakeholder theory of corporate obligations. We further argue that dehumanizing the hiring process may negatively impact these relationships because of the difference between the values of human recruiters and the values embedded in recruitment algorithms. Drawing on Nguyen’s (2021) critique of how Twitter “gamifies communication”, we argue that replacing human recruiters with algorithms imports artificial values into the hiring process. We close by briefly considering some ways to potentially mitigate the problems posed by recruitment algorithms, along with the possibility that some difficult trade-offs will need to be made.
 
Article
How do we ensure that future generally intelligent AI share our values? This is the value-alignment problem. It is a weighty matter. After all, if AI are neutral with respect to our wellbeing, or worse, actively hostile toward us, then they pose an existential threat to humanity. Some philosophers have argued that one important way in which we can mitigate this threat is to develop only AI that shares our values or that has values that ‘align with’ ours. However, there is nothing to guarantee that this policy will be universally implemented—in particular, ‘bad actors’ are likely to flout it. In this paper, I show how the predictive processing model of the mind, currently ascendant in cognitive science, may ameliorate the value-alignment problem. In essence, I argue that there is a plurality of reasons why any future generally intelligent AI will possess a predictive processing cognitive architecture (e.g. because we decide to build them that way; because it is the only possible cognitive architecture that can underpin general intelligence; because it is the easiest way to create AI). I also argue that if future generally intelligent AI possess a predictive processing cognitive architecture, then they will come to share our pro-moral motivations (of valuing humanity as an end, avoiding maleficent actions, etc.), regardless of their initial motivation set. Consequently, these AI will pose a minimal threat to humanity. In this way, I conclude, the value-alignment problem is significantly ameliorated under the assumption that future generally intelligent AI will possess a predictive processing cognitive architecture.
 
Distribution of human legal responsibility attribution preferred by participants (n = 1524)
Mean of the nine responses associated with the five levels of human legal responsibility attribution made by the transport authority. Error bars =  ± 2 standard error (SE)
Mean of the nine responses associated with the misattribution levels. Zero value means no misattribution. Negative values mean that the human driver was overly attributed liability from the participants’ perspective. Positive values mean the manufacturer was overly attributed liability from the participants’ perspective. Error bars =  ± 2 SE
Mean of the nine responses associated with the different misattribution levels in the case where the human driver was assigned full liability by the transport authority. Negative values mean that the human driver was overly attributed liability from the participants’ perspective. Error bars =  ± 2 SE
Article
A human driver and an automated driving system (ADS) might share control of automated vehicles (AVs) in the near future. This raises many concerns associated with the assignment of responsibility for negative outcomes caused by them; one is that the human driver might be required to bear the brunt of moral and legal responsibilities. The psychological consequences of responsibility misattribution have not yet been examined. We designed a hypothetical crash similar to Uber’s 2018 fatal crash (which was jointly caused by its distracted driver and the malfunctioning ADS). We incorporated five legal responsibility attributions (the human driver should bear full, primary, half, secondary, and no liability, that is, the AV manufacturer should bear no, secondary, half, primary, and full liability). Participants (N = 1524) chose their preferred liability attribution and then were randomly assigned into one of the five actual liability attribution conditions. They then responded to a series of questions concerning liability assignment (fairness and reasonableness), the crash (e.g., acceptability), and AVs (e.g., intention to buy and trust). Slightly more than 50% of participants thought that the human driver should bear full or primary liability. Legal responsibility misattribution (operationalized as the difference between actual and preferred liability attributions) negatively influenced these mentioned responses, regardless of overly attributing human or manufacturer liability. Overly attributing human liability (vs. manufacturer liability) had more negative influences. Improper liability attribution might hinder the adoption of AVs. Public opinion should not be ignored in developing a legal framework for AVs.
 
Article
The problem of fair machine learning has drawn much attention over the last few years and the bulk of offered solutions are, in principle, empirical. However, algorithmic fairness also raises important conceptual issues that would fail to be addressed if one relies entirely on empirical considerations. Herein, I will argue that the current debate has developed an empirical framework that has brought important contributions to the development of algorithmic decision-making, such as new techniques to discover and prevent discrimination, additional assessment criteria, and analyses of the interaction between fairness and predictive accuracy. However, the same framework has also suggested higher-order issues regarding the translation of fairness into metrics and quantifiable trade-offs. Although the (empirical) tools which have been developed so far are essential to address discrimination encoded in data and algorithms, their integration into society elicits key (conceptual) questions such as: What kind of assumptions and decisions underlies the empirical framework? How do the results of the empirical approach penetrate public debate? What kind of reflection and deliberation should stakeholders have over available fairness metrics? I will outline the empirical approach to fair machine learning, i.e. how the problem is framed and addressed, and suggest that there are important non-empirical issues that should be tackled. While this work will focus on the problem of algorithmic fairness, the lesson can extend to other conceptual problems in the analysis of algorithmic decision-making such as privacy and explainability.
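To make the "translation of fairness into metrics" concrete, the sketch below computes one widely used metric, the demographic parity difference, i.e. the largest gap in positive-decision rates across groups. It is offered only as an illustration of the empirical framework the paper examines, not as the paper's own proposal; choosing this metric over others is exactly the kind of conceptual decision the paper argues deserves deliberation. The data and group labels are hypothetical.

```python
# Illustrative sketch only: one common way fairness gets translated into a metric.
# Demographic parity difference: the gap in positive-decision rates between groups.

def positive_rate(decisions: list[int], groups: list[str], group: str) -> float:
    """Share of positive decisions (coded 1) received by members of `group`."""
    members = [d for d, g in zip(decisions, groups) if g == group]
    return sum(members) / len(members) if members else 0.0

def demographic_parity_difference(decisions: list[int], groups: list[str]) -> float:
    """Largest gap in positive-decision rates across all groups."""
    rates = [positive_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Example: hypothetical decisions of an approval algorithm for two groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # 0.75 - 0.25 = 0.5
```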
 
Prediction of sensitive from less sensitive information
Minimal model of typical data processing cycle for PA. Dashed lines: additional steps (7 + 8) and feedback loop to reduce Type A unfair bias (cf. the section “Collective ethical concerns”)
Article
Data analytics and data-driven approaches in Machine Learning are now among the most hailed computing technologies in many industrial domains. One major application is predictive analytics, which is used to predict sensitive attributes, future behavior, or cost, risk and utility functions associated with target groups or individuals based on large sets of behavioral and usage data. This paper stresses the severe ethical and data protection implications of predictive analytics if it is used to predict sensitive information about single individuals or treat individuals differently based on the data many unrelated individuals provided. To tackle these concerns in an applied ethics, first, the paper introduces the concept of “predictive privacy” to formulate an ethical principle protecting individuals and groups against differential treatment based on Machine Learning and Big Data analytics. Secondly, it analyses the typical data processing cycle of predictive systems to provide a step-by-step discussion of ethical implications, locating occurrences of predictive privacy violations. Thirdly, the paper sheds light on what is qualitatively new in the way predictive analytics challenges ethical principles such as human dignity and the (liberal) notion of individual privacy. These new challenges arise when predictive systems transform statistical inferences, which provide knowledge about the cohort of training data donors, into individual predictions, thereby crossing what I call the “prediction gap”. Finally, the paper summarizes that data protection in the age of predictive analytics is a collective matter as we face situations where an individual’s (or group’s) privacy is violated using data other individuals provide about themselves, possibly even anonymously.
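A toy illustration of the "prediction gap" may help: a model built from data volunteered by a cohort of donors is used to infer a sensitive attribute about an individual who never disclosed it. The behavioural features, attribute labels and nearest-neighbour rule below are entirely hypothetical and stand in for whatever predictive model an analytics provider might actually deploy.

```python
# Illustrative sketch only: a toy version of the "prediction gap".
# Behavioural data volunteered by a cohort of training-data donors is used to
# infer a sensitive attribute about a new individual who never disclosed it.
import math

# Hypothetical donor data: (hours of app usage per day, late-night sessions per week)
# paired with a self-reported sensitive attribute (abstractly, "condition").
donors = [((1.0, 0), "no_condition"), ((1.5, 1), "no_condition"),
          ((6.0, 9), "condition"),    ((5.5, 8), "condition")]

def predict_sensitive(behavior: tuple[float, float]) -> str:
    """1-nearest-neighbour inference: the individual's sensitive attribute is
    guessed from the donors' data alone -- the predictive-privacy concern."""
    _, label = min(donors, key=lambda d: math.dist(d[0], behavior))
    return label

# An individual who never revealed the attribute is nonetheless classified.
print(predict_sensitive((5.8, 7)))   # -> "condition"
```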
 
The moral costs of hit-and-runs. Notes: The figure shows participants’ responses to the vignettes in each treatment. a Shows the proportions of participants who would divert the vehicle towards the single worker to spare the five workers. b Shows the average ratings of the relative morality of diverting the vehicle compared to continuing straight. Boxes display the 95% confidence intervals. P-values are based on chi-squared tests in (a) and t-tests in (b)
Empirical and normative expectations regarding AVs’ post-collision behavior. The top row of the figure shows the participants’ empirical expectations regarding AVs’ post-collision behavior, and the bottom row shows the participants’ normative expectations. Questions regarding normative expectations contained a third response category “I don’t care,” which did not exist for the questions regarding empirical expectations
Preferences for AVs with or without post-collision capabilities. Notes: The figure shows participants’ average preferences for AVs with and without capabilities for corresponding post-collision behaviors. Participants could express their preferences on a slider between 0 and 100, with 50 labeled as “in between” in each case. Error bars display standard errors of the mean
Willingness to pay for devices enabling appropriate post-collision behavior. Notes: The figure shows participants’ average stated (un-)willingness to pay to provide AVs with the necessary equipment to be capable of appropriate post-collision behavior. Participants could express their (un-)willingness to pay for these devices on a slider between 0 and 100, with 50 labeled as “in between.” Error bars display standard errors of the mean
Article
We address the considerations of the European Commission Expert Group on the ethics of connected and automated vehicles regarding data provision in the event of collisions. While human drivers’ appropriate post-collision behavior is clearly defined, regulations for automated driving do not provide for collision detection. We agree it is important to systematically incorporate citizens’ intuitions into the discourse on the ethics of automated vehicles. Therefore, we investigate whether people expect automated vehicles to behave like humans after an accident, even if this behavior does not directly affect the consequences of the accident. We find that appropriate post-collision behavior substantially influences people’s evaluation of the underlying crash scenario. Moreover, people clearly think that automated vehicles can and should record the accident, stop at the site, and call the police. They are even willing to pay for technological features that enable post-collision behavior. Our study might begin a research program on post-collision behavior, enriching the empirically informed study of automated driving ethics that so far exclusively focuses on pre-collision behavior.
 
Article
Does cruel behavior towards robots lead to vice, whereas kind behavior does not lead to virtue? This paper presents a critical response to Sparrow’s argument that there is an asymmetry in the way we (should) think about virtue and robots. It discusses how much we should praise virtue as opposed to vice, how virtue relates to practical knowledge and wisdom, how much illusion is needed for it to be a barrier to virtue, the relation between virtue and consequences, the moral relevance of the reality requirement and the different ways one can deal with it, the risk of anthropocentric bias in this discussion, and the underlying epistemological assumptions and political questions. This response is not only relevant to Sparrow’s argument or to robot ethics but also touches upon central issues in virtue ethics.
 
Top-cited authors
Jeroen van den Hoven
  • Delft University of Technology
Amanda Sharkey
  • The University of Sheffield
Iyad Rahwan
  • Max Planck Institute for Human Development
Luciano Floridi
  • University of Oxford - University of Bologna
Engin Bozdag
  • Philips, Amsterdam