Article

Abstract

In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analysed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.


... Existing ethical AI guidelines have two issues: firstly, very few are specific to healthcare [14], despite the fact that AI for healthcare involves unique ethical issues [2]; and secondly, they emphasise adherence to 'ethical principles' [15] without complementary translations into actionable practices [16,17]. As such, there remains a pressing need to operationalise ethics throughout the development pipeline of AI for healthcare [18]. ...
... Of the ten total human values in this theory, we reference only the four found to be most cited in software engineering literature [23]. Acknowledging the broad nature of these human values, we further subcategorise them into specific, granular ethical principles, as outlined in a recent scoping review of AI ethics publications [14]. The one-to-many mapping of human values to ethical principles is arbitrary, and is used only as a foundation to present an organised overview of ethical issues identified in the AI literature. ...
... We chose Scopus and Google Scholar to identify relevant articles, searching for literature at the intersection of AI, healthcare, and existing guidelines for ethical AI. Noting that the publication of generic ethical AI guidelines has increased exponentially over recent years [14], we focussed on scholarship at the intersection of ethical AI and healthcare wherever possible. We assessed the first 200 articles identified by Scopus and Google Scholar, then adopted a forward and backward snowballing approach to identify papers offering actionable solutions for operationalising ethics throughout the AI lifecycle. ...
Article
Full-text available
Artificial intelligence (AI) offers much promise for improving healthcare. However, it runs the looming risk of causing individual and societal harms; for instance, exacerbating inequalities amongst minority groups, or enabling compromises in the confidentiality of patients’ sensitive data. As such, there is an expanding, unmet need for ensuring AI for healthcare is developed in concordance with human values and ethics. Augmenting “principle-based” guidance that highlights adherence to ethical ideals (without necessarily offering translation into actionable practices), we offer a solution-based framework for operationalising ethics in AI for healthcare. Our framework is built from a scoping review of existing ethical AI guidelines, frameworks and technical solutions that address human values such as self-direction in healthcare. Our view spans the entire length of the AI lifecycle: data management, model development, deployment and monitoring. Our focus in this paper is to collate actionable solutions (whether technical or non-technical in nature), which can serve as steps that enable and empower developers, in their daily practice, to ensure ethical practices in the broader picture. Our framework is intended to be adopted by AI developers, with recommendations that are accessible and driven by the existing literature. We endorse the recognised need for ‘ethical AI checklists’ co-designed with health AI practitioners, which could further operationalise the technical solutions we have collated. Since the risks to health and wellbeing are so large, we believe a proactive approach is necessary for ensuring human values and ethics are appropriately respected in AI for healthcare.
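The four lifecycle stages named above (data management, model development, deployment, monitoring) suggest a natural way to encode such a co-designed checklist so developers can track it in daily practice. A minimal sketch in Python; the items, value labels, and helper names are invented for illustration and are not taken from the paper's collated solutions.

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    stage: str        # one of the four lifecycle stages named in the paper
    value: str        # the human value / ethical principle being operationalised
    action: str       # the concrete, actionable step
    done: bool = False

# Illustrative items only; the paper's actual collated solutions differ.
checklist = [
    ChecklistItem("data management", "privacy",
                  "De-identify patient records before model training"),
    ChecklistItem("model development", "justice and fairness",
                  "Report performance stratified by demographic subgroup"),
    ChecklistItem("deployment", "transparency",
                  "Publish intended use and known failure modes"),
    ChecklistItem("monitoring", "non-maleficence",
                  "Alert on post-deployment performance drift"),
]

def open_items(items, stage):
    """Actions still to be addressed for a given lifecycle stage."""
    return [i.action for i in items if i.stage == stage and not i.done]

print(open_items(checklist, "deployment"))
```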
... In response to this reality, there has been a proliferation of policy and guideline proposals for ethical artificial intelligence and machine learning (AI/ML) research. Jobin et al. (2019) surveyed several global initiatives for AI/ML and found no fewer than 84 documents containing ethics principles for AI research, with 88% of these having been released since 2016. More broadly, the World Economic Forum has identified almost three hundred separate efforts to develop ethical principles for AI (Russell, 2019). ...
... Signatories are invited to commit to 'the development of AI at the service of the individual and the common good' (Université de Montréal, 2017). Proposals of this sort typically highlight issues concerning transparency, justice and fairness, non-maleficence, responsibility, and privacy, among others (Jobin et al., 2019). These initiatives generally take one of two approaches to foster the ethical practice of AI research: proposing principles to guide the socially-responsible development of AI or examining the societal impacts of AI (Luccioni & Bengio, 2019). ...
... These guidelines, codes, and principles for the responsible creation and use of new technologies come from a wide array of sources, including academia, professional associations, and non-profit organisations; governments; and industry, including for-profit corporations. Several researchers have noted that the very fact that a diverse set of stakeholders would exert such an effort to issue AI principles and policies is strongly indicative that these stakeholders have a vested interest in shaping policies on AI ethics to fit their own priorities (Wagner, 2018; Benkler, 2019; Greene et al., 2019; Jobin et al., 2019). ...
Article
Full-text available
Policy and guideline proposals for ethical artificial intelligence research have proliferated in recent years. These are supposed to guide the socially-responsible development of AI for a common good. However, there typically exist incentives for non-cooperation (i.e., non-adherence to such policies and guidelines); and, these proposals often lack effective mechanisms to enforce their own normative claims. The situation just described constitutes a social dilemma—namely, a situation where no one has an individual incentive to cooperate, though mutual cooperation would lead to the best outcome for all involved. In this paper, we use stochastic evolutionary game dynamics to model this social dilemma in the context of the ethical development of artificial intelligence. This formalism allows us to isolate variables that may be intervened upon, thus providing actionable suggestions for increased cooperation amongst numerous stakeholders in AI. Our results show how stochastic effects can help make cooperation viable in such a scenario. They suggest that coordination for a common good should be attempted in smaller groups in which the cost of cooperation is low, and the perceived risk of failure is high. This provides insight into the conditions under which we should expect such ethics proposals to be successful with regard to their scope, scale, and content.
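The abstract names the formalism (stochastic evolutionary game dynamics) without detailing the model. As a rough illustration of how such dynamics are typically simulated, here is a minimal sketch of pairwise-imitation (Fermi rule) dynamics in a collective-risk dilemma, where cooperation means bearing the cost of adhering to ethics guidelines; every parameter name and value is an illustrative assumption, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50          # population of AI stakeholders
GROUP = 6       # size of groups attempting to coordinate
COST = 0.1      # cost of cooperating (adhering to guidelines)
BENEFIT = 1.0   # payoff when the group coordinates successfully
RISK = 0.9      # perceived probability that collective failure is costly
THRESHOLD = 3   # cooperators needed for the group to succeed
BETA = 5.0      # selection strength for imitation
STEPS = 5000

def payoff(is_coop, n_coop_others):
    """Collective-risk payoff for one agent in one group."""
    n_coop = n_coop_others + (1 if is_coop else 0)
    base = BENEFIT if n_coop >= THRESHOLD else BENEFIT * (1 - RISK)
    return base - (COST if is_coop else 0.0)

def sampled_payoff(strategies, i):
    """Average payoff of agent i over a few randomly sampled groups."""
    total = 0.0
    for _ in range(10):
        others = rng.choice(np.delete(np.arange(N), i), GROUP - 1, replace=False)
        total += payoff(strategies[i], strategies[others].sum())
    return total / 10

strategies = rng.integers(0, 2, N).astype(bool)  # True = cooperate
for _ in range(STEPS):
    i, j = rng.choice(N, 2, replace=False)
    pi_i, pi_j = sampled_payoff(strategies, i), sampled_payoff(strategies, j)
    # Fermi rule: i imitates j with probability increasing in the payoff gap
    if rng.random() < 1.0 / (1.0 + np.exp(-BETA * (pi_j - pi_i))):
        strategies[i] = strategies[j]
    if rng.random() < 0.01:  # rare exploration keeps the dynamics stochastic
        strategies[rng.integers(N)] = rng.random() < 0.5

print(f"final fraction of cooperators: {strategies.mean():.2f}")
```

Consistent with the abstract's conclusion, cooperation in models of this kind tends to survive when groups are small, the cost of cooperating is low, and the perceived risk of collective failure is high.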
... Against the backdrop of ethical problems in AI development, some approaches were developed to tackle those issues that emerge with further development and implementation of AI in society. Jobin, Ienca, and Vayena (2019) collected and analyzed corresponding ethical AI guidelines from around the globe. Interestingly, many ethical guidelines were proposed by private companies or political institutions (e.g., the EU-Commission proposed a framework for the development of ethical AI (European Commission, 2019)), but also by academia and research institutions such as the Association for Computing Machinery (ACM) (Association for Computing Machinery, 2018; Jobin et al., 2019). However, many of these guidelines do not address the broad public or civil society and instead target specific stakeholder groups. ...
... Consequently, ethical AI guidelines cannot necessarily be equated with AI that benefits society as a whole or that recognizes and includes multiple societal perspectives. Looking at the sources and target groups of the ethical guidelines summarized by Jobin et al. (2019), only a small subset aims at AI for the Common Good or addresses the broad public. However, this does not imply that the other ethical guidelines exclude these aims; they simply do not mention or prioritize them as the main goal for their respective stakeholders. ...
Preprint
Full-text available
Building and implementing ethical AI systems that benefit the whole society is cost-intensive and a multi-faceted task fraught with potential problems. While computer science focuses mostly on the technical questions to mitigate social issues, social science addresses citizens' perceptions to elucidate social and political demands that influence the societal implementation of AI systems. Thus, in this study, we explore the salience of AI issues in the public with an emphasis on ethical criteria to investigate whether it is likely that ethical AI is actively requested by the population. Between May 2020 and April 2021, we conducted 15 surveys asking the German population about the most important AI-related issues (total of N=14,988 respondents). Our results show that the majority of respondents were not concerned with AI at all. However, it can be seen that general interest in AI and a higher educational level are predictive of some engagement with AI. Among those who reported having thought about AI, specific applications (e.g., autonomous driving) were by far the most mentioned topics. Ethical issues are voiced only by a small subset of citizens, with fairness, accountability, and transparency being the least mentioned ones. These have been identified in several ethical guidelines (including the EU Commission's proposal) as key elements for the development of ethical AI. The salience of ethical issues affects the behavioral intentions of citizens in the way that they 1) tend to avoid AI technology and 2) engage in public discussions about AI. We conclude that the low salience of ethical issues may pose a serious problem for the actual implementation of ethical AI for the Common Good, and emphasize that those who are presumably most affected by ethical issues of AI are especially unaware of ethical risks. Yet, once ethical AI is top of mind, there is some potential for activism.
... Although several review papers have been published during the past few years, each of them focuses on certain aspects of AI ethics, and there is still a lack of comprehensive reviews to provide a full picture of this field. For instance, a brief review of ethical issues in AI was provided in [11], AI ethics guidelines and principles were investigated in [12] and [13], [14] focused on bias and fairness in ML, [15] only reviewed safety in reinforcement learning, [16] reviewed the security and privacy of federated learning, [17] was dedicated to a survey of privacy and security issues in deep learning, [18] concentrated on explainable AI, and [19] covered the key ethical and privacy issues in AI and traced how such issues have changed over the past few decades using a bibliometric approach. Thus, this paper is dedicated to presenting a systematic and comprehensive overview of AI ethics from diverse aspects (or topics), thereby providing informative guidance for the community to practice ethical AI in the future. ...
... An excellent survey and analysis of the current principles and guidelines on ethical AI was given in 2019 by Jobin et al. [12], who conducted a review of 84 ethical guidelines released by national or international organizations from various countries. Jobin et al. [12] found strong widespread agreement on five key principles, namely transparency, justice and fairness, non-maleficence, responsibility, and privacy, among many others. However, many new guidelines and recommendations for AI ethics have been released in the past two years, making Jobin's review dated, as many important documents were not included. ...
Article
Full-text available
Artificial intelligence (AI) has profoundly changed and will continue to change our lives. AI is being applied in more and more fields and scenarios such as autonomous driving, medical care, media, finance, industrial robots, and internet services. The widespread application of AI and its deep integration with the economy and society have improved efficiency and produced benefits. At the same time, it will inevitably impact the existing social order and raise ethical concerns. Ethical issues brought about by AI systems, such as privacy leakage, discrimination, unemployment, and security risks, have caused great trouble to people. Therefore, AI ethics, which is a field related to the study of ethical issues in AI, has become not only an important research topic in academia, but also an important topic of common concern for individuals, organizations, countries, and society. This paper will give a comprehensive overview of this field by summarizing and analyzing the ethical risks and issues raised by AI, ethical guidelines and principles issued by different organizations, approaches for addressing ethical issues in AI, and methods for evaluating the ethics of AI. Additionally, challenges in implementing ethics in AI and some future perspectives are pointed out. We hope our work will provide a systematic and comprehensive overview of AI ethics for researchers and practitioners in this field, especially beginners in this research discipline.
... A lot of work is left for designers to translate such knowledge to their own practice. To illustrate this point we briefly summarize a number of prominent systematic reviews and meta-analyses drawn from across disciplines (Jobin et al., 2019; Morley et al., 2019; Shneiderman, 2020). Jobin et al. (2019) identify eleven overarching ethical values and principles. These are, in order of the number of sources featuring them: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, dignity, sustainability, and solidarity. ...
Article
Full-text available
As the use of AI systems continues to increase, so do concerns over their lack of fairness, legitimacy and accountability. Such harmful automated decision-making can be guarded against by ensuring AI systems are contestable by design: responsive to human intervention throughout the system lifecycle. Contestable AI by design is a small but growing field of research. However, most available knowledge requires a significant amount of translation to be applicable in practice. A proven way of conveying intermediate-level, generative design knowledge is in the form of frameworks. In this article we use qualitative-interpretative methods and visual mapping techniques to extract from the literature sociotechnical features and practices that contribute to contestable AI, and synthesize these into a design framework.
... This commonly repeated narrative suggests an ethical crisis in the design and adoption of data-driven technologies. Calls for ethical, responsible, fair, transparent and accountable technologies have proliferated; as have initiatives that seek to certify technology production as 'ethical', leading to a burgeoning field examining data and AI ethics, where AI (referring to 'Artificial Intelligence') is used as a broad catch-all term for a range of data-based automated systems (Floridi, 2009, 2013; Floridi and Cowls, 2019; Jobin et al., 2019; Whittlestone et al., 2019). This field seeks to retain the benefits of data-driven technology innovation while limiting, mitigating or responding to ethical problems. ...
... As this field of data and AI ethics expands, scholars and practitioners have begun to move beyond an 'ethical principles approach' to consider 'ethics in practice'. The shift towards focusing on ethics in practice addresses some of the weaknesses of the principles approach, including misuse by industry actors who interpret principles as 'softer version[s] of the law' (Jobin et al., 2019) as they fit their interests, and may hold the potential to build on a wider range of ethical principles. This could mean leveraging not only the consequentialist ethics often used to make ethical principles realizable and attainable, but also other ethical approaches. ...
Article
Full-text available
This paper identifies and addresses persistent gaps in the consideration of ethical practice in ‘technology for good’ development contexts. Its main contribution is to model an integrative approach using multiple ethical frameworks to analyse and understand the everyday nature of ethical practice, including in professional practice among ‘technology for good’ start-ups. The paper identifies inherent paradoxes in the ‘technology for good’ sector as well as ethical gaps related to (1) the sometimes-misplaced assignment of virtuousness to an individual; (2) difficulties in understanding social constraints on ethical action; and (3) the often unaccounted for mismatch between ethical intentions and outcomes in everyday practice, including in professional work associated with an ‘ethical turn’ in technology. These gaps persist even in contexts where ethics are foregrounded as matters of concern. To address the gaps, the paper suggests systemic, rather than individualized, considerations of care and capability applied to innovation settings, in combination with considerations of virtue and consequence. This paper advocates for addressing these challenges holistically in order to generate renewed capacity for change at a systemic level.
... Mason (1986) first addressed ethical issues of the information age such as privacy, accuracy, property, and accessibility. More recent studies have reviewed existing ethical AI principles and found that the missing considerations of ethics in IS are mainly due to the impaired linkage between abstract ethical principles and technical implementation (Hagendorff 2020), as well as the overall lack of implementation strategies (Jobin et al. 2019). Some studies have discussed the topic of ethical AI regarding systemic risks (e.g., Crawford & Calo, 2016) or unintended negative consequences such as algorithmic bias or discrimination (e.g., Veale & Binns, 2017). Although scientific literature is evolving, the most recognized publications are issued by private organizations that rely on AI for their business purposes, e.g., Google (2021) and Microsoft (2021), and by international organizations concerned with societal well-being, e.g., the European Commission (2019) (Jobin et al. 2019). However, since it is not evident that contemporary RAs are using AI for their automated recommendations (e.g., Bianchi and Briere 2021), the guidelines and considerations of the AI ethics literature are not directly applicable to the design of ethical RA. ...
Conference Paper
Full-text available
Automated investing in the form of Robo-Advice (RA) has promising qualities, e.g., mitigating personal biases through algorithms and enabling financial advice for less wealthy clients. However, RA is criticized for its rudimentary personalization ability, which calls its fiduciary duties into question, as well as for nontransparent recommendations and violations of data privacy and security. These ethical issues pose significant risks, especially for the less financially educated clients it targets, who could be exploited by RA as illustrated in the movie “Wolf of Wall Street”. Yet, a distinct ethical perspective on RA design is missing in the literature. Based on scientific literature on RA and international standards and guidelines of ethical financial advice, we derive eight meta-requirements and develop 15 design principles that can guide more ethical and trustworthy RA design. We further evaluated and enhanced the design artifact through interviews with domain experts from science and practice. With our study we provide design knowledge that enables more ethical RA outcomes.
... The broad emergence of ethics guidelines indicates a high demand for practical guidance which brings together theoretically derived ethical concerns and the everyday experience of researchers and practitioners (Stahl, Timmermans, & Mittelstadt, 2016). Besides hands-on guidance, additional meta-studies on the role of ethics guidelines also address the guidelines' shortcomings (Hagendorff, 2020; Jobin, Ienca, & Vayena, 2019; Mittelstadt, 2019). Due to the abundance of guidelines and research on ethical questions, SMA researchers face the challenge of identifying the ethical questions that are relevant to their specific research process. ...
... Other power issues can occur between third-party funding institutions or between researchers and research subjects, especially when researching more vulnerable groups (Leurs, 2017). The value of transparency is very prevalent in most ethics guidelines (Jobin et al., 2019) and is becoming increasingly important in research (e.g., open data initiatives). Thereby, a challenging question for SMA researchers is how much access to social media datasets is acceptable without violating individual privacy (Abbasi et al., 2016). ...
Article
Full-text available
En route to the unravelling of today’s multiplicity of societal challenges, making sense of social data has become a crucial endeavour in Information Systems (IS) research. In this context, Social Media Analytics (SMA) has evolved into a promising field of data-driven approaches, guiding researchers in the process of collecting, analysing, and visualising social media data. However, the handling of such sensitive data requires careful ethical considerations to protect data subjects, online communities, and researchers. Hitherto, the field lacks consensus on how to safeguard ethical conduct throughout the research process. To address this shortcoming, this study proposes an extended version of an SMA framework by incorporating ethical reflection phases as an addition to methodical steps. Following a design science approach, existing ethics guidelines and expert interviews with SMA researchers and ethicists serve as the basis for redesigning the framework. It was eventually assessed through multiple rounds of evaluation in the form of focus group discussions and questionnaires with ethics board members and SMA experts. The extended framework, encompassing a total of five iterative ethical reflection phases, provides simplified ethical guidance for SMA researchers and facilitates the ethical self-examination of research projects involving social media data.
... In this regard, different organizations and technology giants have set up committees to draft AI ethics guidelines. Google and SAP presented guidelines and policies to develop ethically aligned AI systems [7]. Similarly, the Association for Computing Machinery (ACM), Access Now, and Amnesty International jointly proposed principles and guidelines to develop ethically mature AI systems [7]. In Europe, the independent high-level expert group on artificial intelligence (AI HLEG) developed guidelines for promoting trustworthy AI [2]. ...
Preprint
Full-text available
Despite their commonly accepted usefulness, Artificial Intelligence (AI) technologies are beset by concerns about ethical unreliability. Various guidelines, principles, and regulatory frameworks are designed to ensure that AI technologies bring ethical well-being. However, the implications of AI ethics principles and guidelines are still being debated. To further explore the significance of AI ethics principles and relevant challenges, we conducted an empirical survey of 99 AI practitioners and lawmakers from twenty countries across five continents. Study findings confirm that transparency, accountability, and privacy are the most critical AI ethics principles. On the other hand, lack of ethical knowledge, missing legal frameworks, and a lack of monitoring bodies are found to be the most common AI ethics challenges. The impact analysis of the challenges across AI ethics principles reveals that conflict in practice is a highly severe challenge. Our findings stimulate further research, especially on empowering existing capability maturity models to support the quality assessment of ethics-aware AI systems.
... The ethical implications of AI have sparked concern from governments, the public, and even companies. According to some meta-studies on AI ethics guidelines, the most frequently discussed themes include fairness, privacy, accountability, transparency, and robustness [1][2][3]. Less commonly broached, but not entirely absent, are issues relating to the rights of potentially sentient or autonomous forms of AI [4,5]. One much more significant, and more immediately present, issue has, however, been almost entirely neglected: AI's impact on non-human animals. There have, we acknowledge, been discussions of AI in connection with endangered species and ecosystems, but we are referring to questions relating to AI's impact on individual animals. ...
Article
Full-text available
The ethics of artificial intelligence, or AI ethics, is a rapidly growing field, and rightly so. While the range of issues and groups of stakeholders concerned by the field of AI ethics is expanding, with speculation about whether it extends even to the machines themselves, there is a group of sentient beings who are also affected by AI, but are rarely mentioned within the field of AI ethics—the nonhuman animals. This paper seeks to explore the kinds of impact AI has on nonhuman animals, the severity of these impacts, and their moral implications. We hope that this paper will facilitate the development of a new field of philosophical and technical research regarding the impacts of AI on animals, namely, the ethics of AI as it affects nonhuman animals.
... Most approaches to AI ethics, including industrial AI ethics, are general. The current trend is to work at the level of broad ethical principles, generating sets of principles from expert working groups (Jobin et al. 2019), such as the Ethics Guidelines for Trustworthy AI of the EU High-Level Expert Group on Artificial Intelligence (HLEG, 2019), or collating such principles together under more general categories (Zhou et al. 2020). The application of these principles is typically developed into 'tools' or 'frameworks,' which are offered not as a series of definite suggestions, but merely as efforts to promote reflection in designers and developers. ...
... In AI ethics, a supposedly objective and neutral view, a product of the thinking of seventeenth century modern philosophy, has come to ground the working approaches of tech fields, such as data science and engineering, as Birhane notes (Birhane 2021). This in turn has focused the issues for general AI ethics away from what would be practically relevant to the industrial context, by evolving AI ethics in the direction of broad principles, which tend to be addressed to "multiple stakeholder groups" (Jobin et al. 2019). ...
Article
Full-text available
In this article we present a new approach to practical artificial intelligence (AI) ethics in heavy industry, which was developed in the context of an EU Horizon 2020 multi-partner project. We begin with a review of the concept of Industry 4.0, discussing the limitations of the concept, and of iterative categorization of heavy industry generally, for a practical human-centered ethical approach. We then proceed to an overview of actual and potential AI ethics approaches to heavy industry, suggesting that current approaches, with their emphasis on broad high-level principles, are not well suited to AI ethics for industry. From there we outline our own approach in two sections. The first suggests tailoring ethics to the time and space situation of the shop-floor-level worker from the ground up, including giving specific and evolving ethical recommendations. The second describes the ethicist’s role as an ethical supervisor immersed in the development process and interpreting between industrial and technological (tech) development partners. In presenting our approach we draw heavily on our own experiences in applying the method in the Use Cases of our project, as examples of what can be done.
... Companies involved in autonomous driving vehicles have developed a greater awareness and commitment to the ethical aspects of autonomous driving vehicles (Martinho et al. 2021). Indeed, ethics is considered one of the main elements of trust (Jobin, Ienca, and Vayena 2019; Panetta 2019), which plays an essential role in the formation of individual attitudes toward autonomous driving vehicles (Lackes et al. 2020), intention to use them (Bruckes et al. 2019), and their adoption (Lackes et al. 2020). ...
... Trust in autonomous driving vehicles has been found to play a critical role in individual attitudes toward autonomous driving vehicles (Lackes et al. 2020), intentions to use them (Bruckes et al. 2019), and their acceptance (Lackes et al. 2020), with ethics considered one of the main elements of trust (Jobin et al. 2019; Panetta 2019). Bruckes et al. also demonstrate the importance of institutional trust, composed of perceived technical protection and situational normality, as the main driver of technological trust (Bruckes et al. 2019). ...
Conference Paper
Full-text available
In recent years, artificial intelligence has contributed substantially to the progress of human society in various application areas. However, it has also given rise to questions about its ethical principles in application domains such as autonomous driving. Despite a plethora of research on the acceptance of autonomously driving vehicles in the MIS research community, how ethical principles are seen by individuals has mostly been left out of consideration so far. The goal of this study is to provide an understanding of how people would like AI-enabled autonomous vehicles to behave. Respondents are asked how they would like to see an autonomous car react in different scenarios, who they think should set the standards for the car's behavior, and who they think should be responsible for accidents and crashes involving driverless vehicles. The results of the survey are evaluated both in aggregated form and by means of a cluster analysis.
... Lessons can be learned from AI ethics across other sectors. Global AI ethics guidelines converge on five core themes: transparency; justice and fairness; non-maleficence; responsibility; and autonomy 13,14 . These themes provide broad guidance for those developing and utilising digital tools in surgery, but there is a lack of guidance covering the specific ethical and data governance issues related to the practice of surgery. ...
... A review of the literature surrounding data governance and ethical issues across the implementation of digital surgery identified key themes which formed the basis of the scoping round 13,33-37 . In addition, participants were asked about their understanding of the term digital surgery and to identify key barriers and future research goals concerning digital surgery (see Supplementary Methods for full questionnaire). ...
Article
Full-text available
The use of digital technology is increasing rapidly across surgical specialities, yet there is no consensus for the term ‘digital surgery’. This is critical as digital health technologies present technical, governance, and legal challenges which are unique to the surgeon and surgical patient. We aim to define the term digital surgery and the ethical issues surrounding its clinical application, and to identify barriers and research goals for future practice. 38 international experts, across the fields of surgery, AI, industry, law, ethics and policy, participated in a four-round Delphi exercise. Issues were generated by an expert panel and public panel through a scoping questionnaire around key themes identified from the literature and voted upon in two subsequent questionnaire rounds. Consensus was defined if >70% of the panel deemed the statement important and <30% unimportant. A final online meeting was held to discuss consensus statements. The definition of digital surgery as the use of technology for the enhancement of preoperative planning, surgical performance, therapeutic support, or training, to improve outcomes and reduce harm achieved 100% consensus agreement. We highlight key ethical issues concerning data, privacy, confidentiality and public trust, consent, law, litigation and liability, and commercial partnerships within digital surgery and identify barriers and research goals for future practice. Developers and users of digital surgery must not only have an awareness of the ethical issues surrounding digital applications in healthcare, but also the ethical considerations unique to digital surgery. Future research into these issues must involve all digital surgery stakeholders including patients.
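The consensus rule quoted above is simple enough to state directly in code. A minimal sketch; the three-way rating scale is an assumption, since the abstract does not specify how non-votes were recorded.

```python
def reaches_consensus(votes, important_cut=0.70, unimportant_cut=0.30):
    """Apply the stated Delphi rule to one statement.

    `votes` is a list of ratings per expert: 'important', 'unimportant',
    or 'neutral' (assumed). Consensus requires >70% 'important' AND
    <30% 'unimportant'.
    """
    n = len(votes)
    frac_important = votes.count("important") / n
    frac_unimportant = votes.count("unimportant") / n
    return frac_important > important_cut and frac_unimportant < unimportant_cut

# e.g., 30 of 38 experts rate a statement important, 3 unimportant, 5 neutral
panel = ["important"] * 30 + ["unimportant"] * 3 + ["neutral"] * 5
print(reaches_consensus(panel))  # True: 78.9% important, 7.9% unimportant
```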
... At that point, there is a critical problem in identifying who is responsible when AI takes control over humans [33]. Given these concerns and possibilities, a critical evaluation has to be made to determine how to interact with and manage AI technology [40] in order to strategize the world's future. Throughout this journey, humanity's first consideration has to be 'caution' [41]. ...
... With the emergence of the brand-new vision of artificial consciousness, a new ethical understanding will also emerge in the case of sustaining its own existence. Through ethical considerations, the will to survive [45], and the drive to expand its presence and claim dominance over other entities on Earth, AC will be challenged to design whole new social and artificially natural structures [40]. At that point, some ethical questions and responses that have been asked and answered by humans, both in egocentric and ecocentric visions, will be changed in a machine-centered way [46]. ...
Article
Full-text available
Artificial intelligence is as yet a beneficial agent for sustainable development actions, providing unique contributions to technological advancements focused on various wicked problems, such as the depletion of natural resources, social inequality, the climate crisis and neoliberal growth policies. Rather than a group of humans' biased, deficient actions and anthropocentric development strategies to reach a more sustainably designed future, AI is the one possible game-changer that may be the way of activating an alternative ecocentric mindset. However, there is also an unclear risk contingency about the way AI is integrated into planet-scale actions. The interference of AI in these processes may cause some authorization- and dominance-related problems, which are crucial in defining the dynamics of human-machine interaction and AI-ecology interaction. The aim of the study is to review and analyze the literature on current theories and the possible future interaction between artificial consciousness and human consciousness, in consideration of sustainability, by defining some speculative cause-and-effect relations. The human-machine interaction, the strategies for assigning roles to AC, and its potential negative and positive impacts have been investigated by considering some possible scenarios related to decisions about the future of AI in the context of sustainability. The positioning, authorization and limitations of AI are evaluated along with some possible future visions. As a result, it is crucial to manage and steer the development of AI and to identify the hierarchical and strategic actions of AI-integrated value creation and development processes to ensure the safety of a sustainable future.
... The development of autonomous systems has led to an increased focus on ethics (Himmelreich, 2018; Martinez-Martin, 2019), especially as it has been found that malevolent AI have demonstrable adverse effects on humans and present significant security issues (Brundage et al., 2018; Pistono & Yampolskiy, 2016). Researchers and practitioners have spent a great deal of effort attempting to develop ethical frameworks and guidelines for AI in recent years (Jobin et al., 2019). Many of these attempts have coalesced around human values like justice, fairness, privacy, non-maleficence, transparency, and responsibility (Jobin et al., 2019). Others have emphasized the importance of reliability, safety, and trustworthiness (Shneiderman, 2020). ...
Article
Advancements and implementations of autonomous systems coincide with an increased concern for the ethical implications resulting from their use. This is increasingly relevant as autonomy fulfills teammate roles in contexts that demand ethical considerations. As AI teammates (ATs) enter these roles, research is needed to explore how an AT’s ethics influences human trust. The current research presents two studies that explore how an AT’s ethical or unethical behavior impacts trust in that teammate. In Study 1, participants responded to scenarios of an AT recommending actions which violated or abided by a set of ethical principles. The results suggest that ethicality perceptions and trust are influenced by ethical violations, but only ethicality depends on the type of ethical violation. Participants in Study 2 completed a focus group interview after performing a team task with a simulated AT that committed ethical violations and attempted to repair trust (apology or denial). The focus group responses suggest that ethical violations worsened perceptions of the AT and decreased trust, but it could still be trusted to perform tasks. The AT’s apologies and denials did not repair damaged trust. The studies’ findings suggest a nuanced relationship between trust and ethics and a need for further investigation into trust repair strategies following ethical violations.
... These fall into three broad categories: binding agreements (8), voluntary commitments (44), and recommendations (115). Similarly, the OECD maintains a live database showing over 700 initiatives related to AI policy from 60 countries, territories and the EU. In a recent study, Jobin et al. (2019) identified 84 different ethical AI standards, produced by a range of private companies, government agencies, research institutions, and other organizations. They identified 11 overarching principles, namely (in order of popularity): transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, dignity, sustainability, and solidarity. ...
Article
Full-text available
Calls for “ethical Artificial Intelligence” are legion, with a recent proliferation of government and industry guidelines attempting to establish ethical rules and boundaries for this new technology. With few exceptions, they interpret Artificial Intelligence (AI) ethics narrowly in a liberal political framework of privacy concerns, transparency, governance and non-discrimination. One of the main hurdles to establishing “ethical AI” remains how to operationalize high-level principles such that they translate to technology design, development and use in the labor process. This is because organizations can end up interpreting ethics in an ad-hoc way with no oversight, treating ethics as simply another technological problem with technological solutions, and regulations have been largely detached from the issues AI presents for workers. There is a distinct lack of supra-national standards for fair, decent, or just AI in contexts where people depend on and work in tandem with it. Topics such as discrimination and bias in job allocation, surveillance and control in the labor process, and quantification of work have received significant attention, yet questions around AI and job quality and working conditions have not. This has left workers exposed to potential risks and harms of AI. In this paper, we provide a critique of relevant academic literature and policies related to AI ethics. We then identify a set of principles that could facilitate fairer working conditions with AI. As part of a broader research initiative with the Global Partnership on Artificial Intelligence, we propose a set of accountability mechanisms to ensure AI systems foster fairer working conditions. Such processes are aimed at reshaping the social impact of technology from the point of inception to set a research agenda for the future. As such, the key contribution of the paper is how to bridge from abstract ethical principles to operationalizable processes in the vast field of AI and new technology at work.
... Despite all these benefits, a major ethical issue remains, i.e., who holds ultimate responsibility for the outcome of an automated procedure? In the absence of regulatory guidelines, this remains an open question (Jobin et al. 2019). However, from the perspective of the patient, the neurosurgeon always holds the ultimate responsibility for the surgical outcome. ...
Article
Full-text available
Objective: Accurate identification of functional cortical regions is essential in neurological resection. The central sulcus (CS) is an important landmark that delineates functional cortical regions. Median nerve stimulation (MNS) is a standard procedure to identify the position of the CS intraoperatively. In this paper, we introduce an automated procedure that uses MNS to rapidly localize the CS and create functional somatotopic maps. Approach: We recorded electrocorticographic signals from 13 patients who underwent MNS in the course of an awake craniotomy. We analyzed these signals to develop an automated procedure that determines the location of the CS and that also produces functional somatotopic maps. Main results: The comparison between our automated method and visual inspection performed by the neurosurgeon shows that our procedure has a high sensitivity (89%) in identifying the CS. Further, we found substantial concordance between the functional somatotopic maps generated by our method and passive functional mapping (92% sensitivity). Significance: Our automated MNS-based method can rapidly localize the CS and create functional somatotopic maps without imposing additional burden on the clinical procedure. With additional development and validation, our method may lead to a diagnostic tool that guides the neurosurgeon and reduces postoperative morbidity in patients undergoing resective brain surgery.
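The abstract does not describe the underlying signal analysis. One classical way MNS reveals the CS is the polarity ('phase') reversal of the early somatosensory evoked potential (around 20 ms post-stimulus) between electrodes on opposite banks of the sulcus. The sketch below illustrates that generic idea only, not the authors' actual pipeline; the sampling rate, window, array shapes, and threshold are all assumptions.

```python
import numpy as np

FS = 1200                                          # sampling rate in Hz (assumed)
N20_WIN = slice(int(0.015 * FS), int(0.025 * FS))  # 15-25 ms post-stimulus

def n20_amplitude(epochs):
    """Mean evoked amplitude per channel in the N20 window.

    epochs: array (n_trials, n_channels, n_samples), with each trial
    time-locked to one median nerve stimulation pulse.
    """
    evoked = epochs.mean(axis=0)            # averaging suppresses non-locked activity
    return evoked[:, N20_WIN].mean(axis=1)  # one signed amplitude per electrode

def phase_reversal_pairs(amps, neighbors, min_amp):
    """Adjacent electrode pairs with strong, opposite-polarity responses.

    neighbors: list of (i, j) index pairs of physically adjacent electrodes
    on the grid or strip. The CS is expected to run between the electrodes
    of each returned pair.
    """
    return [(i, j) for i, j in neighbors
            if amps[i] * amps[j] < 0                        # opposite signs
            and min(abs(amps[i]), abs(amps[j])) > min_amp]  # both responses robust
```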
... Second, as already firmly established in other fields of applied ethics such as bioethics and medical ethics, the ethical discussion can work along the lines of formulating principles that are expected to provide orientation regarding what, morally, ought to be done-similar to a catalogue of duties (cf. Jobin et al., 2019). Third, the ethical evaluation can proceed with the formulation of ideals and a positive vision to establish goals either for a good, 'virtuous' use of AI or even a virtuous AI itself (cf. ...
Article
Full-text available
The paper presents an ethical analysis and constructive critique of the current practice of AI ethics. It identifies conceptual, substantive and procedural challenges, and it outlines strategies to address them. The strategies include countering the hype and understanding AI as ubiquitous infrastructure, including neglected issues of ethics and justice, such as structural background injustices, into the scope of AI ethics, and making the procedures and fora of AI ethics more inclusive and better informed with regard to philosophical ethics. These measures integrate the perspective of AI justice into AI ethics, strengthening its capacity to provide comprehensive normative orientation and guidance for the development and use of AI that actually improves human lives and living together.
... This is arguably particularly important for governing AI and data-intensive entities, as they are part of a relatively immature industry that, despite rapid growth, is struggling to find good governance approaches. There is, for example, an extreme proliferation of frameworks for responsible, trustworthy, and otherwise "ethical" AI (Floridi & Cowls, 2019; Jobin, Ienca, & Vayena, 2019; Mittelstadt, 2019), and ongoing debates about the relationships between ethics and politics and regulation, both in and of corporations using AI and data-based solutions (Floridi, 2018; Saetra & Fosch-Villaronga, 2021). ...
... Races for supremacy in a domain through AI may, however, have detrimental consequences, since participants in the race may well ignore ethical and safety checks in order to speed up development and reach the market first. AI researchers and governance bodies, such as the EU, are urging that both the normative and the social impact of the major technological advancements concerned be considered together (Declaration, 2018; Jobin et al., 2019; European Commission, 2020; Future of Life Institute, 2019). However, given the breadth and depth of AI and its advances, it is not an easy task to assess when and which AI technology in a concrete domain needs to be regulated. ...
... Organizations, government bodies, and scholars are developing and fine-tuning impact assessment tools for AI systems [128]- [130]. Such tools help translate relevant principles (such as privacy, transparency and fairness [131]) into practical evaluations. Efforts to identify risks via impact assessments are already conducted for data protection compliance in many countries, and similar initiatives can be helpful to deal with the challenges presented by AI systems. ...
Article
Full-text available
The capabilities of Artificial Intelligence (AI) evolve rapidly and affect almost all sectors of society. AI has been increasingly integrated into criminal and harmful activities, expanding existing vulnerabilities and introducing new threats. This article reviews the relevant literature, reports, and representative incidents, which allows us to construct a typology of the malicious use and abuse of systems with AI capabilities. The main objective is to clarify the types of activities and corresponding risks. Our starting point is to identify the vulnerabilities of AI models and outline how malicious actors can abuse them. Subsequently, we explore AI-enabled and AI-enhanced attacks. While we present a comprehensive overview, we do not aim for a conclusive and exhaustive classification. Rather, we provide an overview of the risks of enhanced AI application that contributes to the growing body of knowledge on the issue. Specifically, we suggest four types of malicious abuse of AI (integrity attacks, unintended AI outcomes, algorithmic trading, membership inference attacks) and four types of malicious use of AI (social engineering, misinformation/fake news, hacking, autonomous weapon systems). Mapping these threats enables advanced reflection on governance strategies, policies, and activities that can be developed or improved to minimize risks and avoid harmful consequences. Enhanced collaboration among governments, industries, and civil society actors is vital to increase preparedness and resilience against malicious use and abuse of AI.
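Of the abuse types listed, membership inference is perhaps the easiest to make concrete: the attacker probes a trained model to decide whether a given record was part of its training data, which is a privacy breach in itself. A minimal sketch of the standard confidence-threshold formulation (not a method taken from this article); the dataset, model, and threshold are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Victim model trained on "private" data
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def confidence_on_true_label(model, X, y):
    """Model's predicted probability for each example's true class."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

# Members (training points) tend to receive higher confidence than
# non-members, so a simple threshold already separates them somewhat.
conf_members = confidence_on_true_label(model, X_train, y_train)
conf_nonmembers = confidence_on_true_label(model, X_out, y_out)
threshold = 0.9  # illustrative; a real attacker calibrates via shadow models

tpr = (conf_members > threshold).mean()     # members correctly flagged
fpr = (conf_nonmembers > threshold).mean()  # non-members wrongly flagged
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}, advantage={tpr - fpr:.2f}")
```

The gap between TPR and FPR measures the leakage; the more a model overfits its training set, the larger that gap tends to be.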
... Now it is also required to encompass the medium- and long-term social impacts that their implementations may generate. Jobin et al. (2019) performed a benchmarking analysis of eighty-four documents containing ethical guidelines for the use of artificial intelligence (considering data preparation and processing, modeling, and evaluation of results, applied to many uses such as data analytics and decision-making) in public and private companies, and identified a global convergence around five ethical principles: (1) transparency, (2) justice, (3) non-maleficence, (4) responsibility and (5) privacy. In another benchmarking analysis, Fjeld et al. (2020) consider that thirty-two documents converged on eight ethical principles: (1) accountability, (2) equity and non-discrimination, (3) human control of technology, (4) privacy, (5) professional responsibility, (6) promotion of human values, (7) security and (8) transparency and explainability. ...
Chapter
This essay presents a deeper look into the impacts of using modeling to support water management decision-making, which rest on potential periodic reconsiderations of conceptual and mathematical model premises. Geoethics brings the relationship between geoscientists and modeling experts into the social responsibility of using modeling for water management and governance. The validation of those models is crucial to assess how trustworthy the model applied for decision-making is. Ready-to-go practices often do not help us to understand when we can call a model assessment a validation. This chapter suggests considering the validation process as an open question in water modeling, one that is more complex than merely calculating model assessment indexes. Current and future generations of geoscientists with expertise in artificial intelligence, machine learning, and/or geostatistics should clarify validation assumptions wherever possible. Thus, the validity of a model for one application can be harnessed for another, resulting in more flexible and creative usage and allowing increased interaction between the geoscientist and the decision-makers. Geoethics integrates ethics, geosciences, and human activities, which are combined here into a new tool for addressing the validation of water modeling.
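To make "model assessment indexes" concrete: one widely used index for hydrological models is the Nash-Sutcliffe efficiency. A minimal sketch with made-up numbers; the chapter's point is precisely that computing such an index alone does not settle whether a model is valid for a given decision.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency, a standard hydrological assessment index.

    1.0 is a perfect fit; 0.0 means the model is no better than predicting
    the mean of the observations; negative values are worse than that.
    """
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# e.g., monthly streamflow observations vs. model output (invented data)
obs = [12.0, 15.0, 30.0, 55.0, 40.0, 22.0]
sim = [10.0, 16.0, 28.0, 50.0, 44.0, 20.0]
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")  # ~0.959 for these numbers
```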
... In fact, SR models are particularly well suited for human interpretability and in-depth analysis (Otte, 2013; Virgolin et al., 2021b; La Cava et al., 2021). This aspect enables a safe and responsible use of machine learning models for high-stakes societal applications, as requested in the AI acts by the European Union and the United States (European Commission, 2021; 117th US Congress, 2022; Jobin et al., 2019). Moreover, it enables scientists to gain deeper knowledge about the phenomena that underlie the data. ...
Preprint
Symbolic regression (SR) is the task of learning a model of data in the form of a mathematical expression. By their nature, SR models have the potential to be accurate and human-interpretable at the same time. Unfortunately, finding such models, i.e., performing SR, appears to be a computationally intensive task. Historically, SR has been tackled with heuristics such as greedy or genetic algorithms and, while some works have hinted at the possible hardness of SR, no proof has yet been given that SR is, in fact, NP-hard. This begs the question: Is there an exact polynomial-time algorithm to compute SR models? We provide evidence suggesting that the answer is probably negative by showing that SR is NP-hard.
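To see why SR is a search problem at all, consider the crudest possible solver: sample random expression trees and keep the best fit. A minimal, purely illustrative sketch (practical SR systems use genetic programming or exact algorithms, as the abstract notes); the operator set, depth limit, and search budget are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
OPS = {"+": np.add, "-": np.subtract, "*": np.multiply}

def random_expr(depth):
    """Random expression tree over terminals {x, constants} and {+, -, *}."""
    if depth == 0 or rng.random() < 0.3:
        return "x" if rng.random() < 0.5 else round(rng.uniform(-2, 2), 2)
    return (rng.choice(list(OPS)), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(node, x):
    if isinstance(node, tuple):
        op, left, right = node
        return OPS[op](evaluate(left, x), evaluate(right, x))
    return x if node == "x" else np.full_like(x, node, dtype=float)

def to_str(node):
    if isinstance(node, tuple):
        return f"({to_str(node[1])} {node[0]} {to_str(node[2])})"
    return str(node)

# Data generated by a hidden ground truth: y = x^2 + x
x = np.linspace(-3, 3, 50)
y = x**2 + x

best, best_mse = None, np.inf
for _ in range(20000):  # brute random search over a combinatorial space
    expr = random_expr(depth=3)
    mse = float(np.mean((evaluate(expr, x) - y) ** 2))
    if mse < best_mse:
        best, best_mse = expr, mse

print(f"best found: {to_str(best)}  (MSE={best_mse:.4f})")
```

Even at depth 3 the space of trees is enormous, which is the intuition that the NP-hardness result makes precise.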
... Rather than examining how the knowledge produced by expert communities has structured AI policy interventions, existing analyses have primarily focused on analyzing political outcomes. First, a growing body of literature has studied the wave of AI ethics documents released by governments and private actors since 2016 [2,3,4,5,6]. These studies have found an emerging global consensus on ethical principles such as transparency, privacy, and non-maleficence-a high-level consensus that masks diverging ideas about the meaning of these principles and how to translate them into concrete AI policies. ...
Conference Paper
While the knowledge produced by experts has been widely recognized to play a salient role in shaping policy on technological issues, the interaction between AI expertise and the evolving AI governance landscape has received little attention thus far. To address this gap, the present paper leverages insights from STS and International Relations to explore how different expert communities have constructed AI as a governance problem. More specifically, it presents the preliminary results of a qualitative frame analysis of 90 policy documents published by experts from industry, civil society, and the research community. The analysis finds that AI expertise is a highly contested field, as experts not only disagree on why AI is problematic and what policies are required, but, more fundamentally, about which artifacts, ideas, and practices make up AI in the first place. The paper proposes that the epistemic disagreements concerning AI have political consequences, as they engender protracted ontological politics that jeopardize the development of effective governance interventions. Against this background, the findings raise critical questions about the prevailing tendency of governance interventions to target the elusive and contested object 'artificial intelligence'.
... As has been discussed, data is at the heart of contemporary approaches to AI, which raises numerous challenging issues centred on data protection, privacy, and ownership, and on data analysis. These ethical issues have received a great deal of attention (summarized by Jobin et al., 2019). Similarly, the ethics of educational data has also been the focus of much research (e.g. ...
Article
Full-text available
Artificial Intelligence (AI) has the potential to address some of the biggest challenges in education today, innovate teaching and learning practices, and ultimately accelerate the progress towards SDG 4. However, these rapid technological developments inevitably bring multiple risks and challenges, which have so far outpaced policy debates and regulatory frameworks. This publication offers guidance for policy-makers on how best to leverage the opportunities and address the risks, presented by the growing connection between AI and education. It starts with the essentials of AI: definitions, techniques and technologies. It continues with a detailed analysis of the emerging trends and implications of AI for teaching and learning, including how we can ensure the ethical, inclusive and equitable use of AI in education, how education can prepare humans to live and work with AI, and how AI can be applied to enhance education. It finally introduces the challenges of harnessing AI to achieve SDG 4 and offers concrete actionable recommendations for policy-makers to plan policies and programmes for local contexts.
... Therefore, domain-overarching research, especially reviews are needed, to amalgamate the scattered knowledge and identify underlying patterns, akin to efforts in artificial intelligence. 248 Otherwise, it is possible that the rapidly evolving digital health landscape outpaces scientific inquiry. ...
Article
Background Mental health conditions, such as depression and anxiety, affect a large proportion of the population in England; it has been estimated that one in six adults show symptoms of common mental health disorders in any given week. Additionally, the outbreak of COVID-19 in March 2020 and the measures that have been implemented to curb the spread of the disease have negatively affected some individuals' mental health and wellbeing. Therefore, to tackle the ongoing mental health crisis, the development of support mechanisms that are easy to access and embedded in the primary care network is required. Community pharmacies are accessible without the need for an appointment, and pharmacists are recognised as currently under-utilised, yet highly skilled, primary healthcare providers. Thus, community pharmacy presents as an ideal candidate for establishing an alternative source of mental health support within the primary care network. Additionally, preliminary evidence suggests that pharmacy-recorded transactional data, as registered on loyalty cards, can be indicative of underlying health conditions, including mental health issues. Therefore, the tracking and analysing of these data could facilitate the identification of individuals at risk and, in turn, enable pharmacists to offer targeted support. However, there is currently limited evidence pertaining to public attitudes towards mental health support provided in pharmacies and the utilisation of transactional data to identify individuals at risk. Aim To evaluate public attitudes towards mental health support provided in community pharmacy, using purchasing data as a tool to identify individuals at risk of developing mental health issues. Methods This study adopted an explanatory, sequential mixed-methods research design encapsulating two separate research streams. In research stream one, the views of pharmacy users towards mental health support provided in pharmacies were investigated. Research stream two evaluated the views of university students and pharmacy users towards utilising transactional data to identify individuals at risk of developing mental health conditions. Both research streams commenced with the development and subsequent distribution of surveys amongst the population of interest, in order to describe individuals' attitudes quantitatively. The obtained data were subjected to descriptive and inferential statistical analyses performed in Stata (Release 16). The results informed the subsequent qualitative research phases. Semi-structured interviews were conducted with pharmacy users (n=9) and university students (n=17) to provide an in-depth understanding of individuals' stances towards both topics. The obtained narrative data were analysed thematically, utilising the software NVivo (Version 12) to aid with data management. Results Pharmacy users' attitudes towards mental health support provided in pharmacies ranged from scepticism to moderately supportive in 2019 (n=3449) and 2020 (n=1474), respectively. Individuals who reported higher levels of trust in community pharmacists exhibited more positive attitudes; self-reporting a diagnosis of depression and/or anxiety was found to be predictive of more negative attitudes. Qualitatively, the importance of trust for public acceptance of mental health support provided in pharmacies was reiterated, and factors influencing individuals' stances were identified, such as facilitators, advantages and barriers for pharmacy-provided mental health care.
In research stream two, university students as well as pharmacy users exhibited greater support for the utilisation of aggregate-level loyalty card data in health research than for utilising these data to identify individuals specifically. Based on the student interviews, a preliminary framework of factors affecting individuals' stances was developed. First, aspects pertaining to the data provider, the prospective data user and the nature of the data itself were found to influence students' attitudes. Secondly, university students performed a risk-benefit assessment and, where the expected benefits outweighed the potential risks, supported the utilisation of loyalty card data for the proposed purpose. Thirdly, greater understanding of, and trust in, the prospective data user acted as facilitators in university students' thought processes. Pharmacy users' acceptance considerations appeared to be influenced by similar aspects. Conclusions and recommendations There is public support for establishing community pharmacy as an alternative source of mental health support within the primary care network; in particular, a role for pharmacists as an information hub and intermediary between pharmacy users and other healthcare professionals was endorsed. However, equipping pharmacists with the necessary toolkit to fulfil this role is crucial, e.g. by offering pharmacy-specific mental health first aid classes or expanding existing services, such as the new medicines service and the community pharmacy consultation service. Secondly, trust between pharmacy users and pharmacists was found to be fundamental for public acceptance of new services in pharmacies. Therefore, pharmacy-practice research that evaluates potential trust-enhancing mechanisms is required; the results should guide future policy. Thirdly, there appears to be public support for the tracking and analysing of transactional data, especially if the benevolence of the approach is emphasised. Likewise, building trust is fundamental to public acceptability. The importance of trust is widely recognised in the digital health landscape, and the implementation of trust-enhancing measures is a focal point of current policy. Pharmacy practice research and policymaking should draw lessons from these developments if the tracking and analysing of transactional data in the realm of pharmacy-provided mental healthcare is sought.
... Similarly, the Montréal Declaration for Responsible AI states the collective impact of realizing its enumerated principles (which allude to FAT though adopting slightly different terminology) as "lay[ing] the foundation for cultivating social trust toward artificially intelligent systems" [29]-their accompanying report [28] using the word "trust" over 40 times. In their review of documents containing ethical principles for AI, Jobin et al. [54] identify trust as one of the 11 common principles (featured in 28 out of 84 documents) alongside transparency, fairness, and responsibility/accountability; and specifically found that 12 of these documents viewed transparency as key to fostering trust. ...
Preprint
Full-text available
Efforts to promote fairness, accountability, and transparency are assumed to be critical in fostering Trust in AI (TAI), but extant literature is frustratingly vague regarding this 'trust'. The lack of exposition on trust itself suggests that trust is commonly understood, uncomplicated, or even uninteresting. But is it? Our analysis of TAI publications reveals numerous orientations which differ in terms of who is doing the trusting (agent), in what (object), on the basis of what (basis), in order to what (objective), and why (impact). We develop an ontology that encapsulates these key axes of difference to a) illuminate seeming inconsistencies across the literature and b) more effectively manage a dizzying number of TAI considerations. We then reflect this ontology through a corpus of publications exploring fairness, accountability, and transparency to examine the variety of ways that TAI is considered within and between these approaches to promoting trust.
... A small number of guidelines include oversight/enforcement mechanisms, with the vast majority of these guidelines emerging from Europe and the United States. In terms of geographic distribution, the data show high representation of the more economically developed countries (Jobin et al., 2019). Another case in point is the formation of the Advanced Technology External Advisory Council (ATEAC) by Google with the mandate to 'develop responsible AI' (Walker, 2019). ...
Article
Full-text available
The study seeks to understand how the AI ecosystem might be implicated in a form of knowledge production which reifies particular kinds of epistemologies over others. Using text mining and thematic analysis, this paper offers a horizon scan of the key themes that have emerged over the past few years in the debate on AI in education (AIEd). We begin with a discussion of the tools we used to experiment with digital methods for data collection and analysis. The paper then examines how AI-in-education systems are being conceived, hyped, and potentially deployed into global education contexts. Findings are categorised into three themes in the discourse: (1) geopolitical dominance through education and technological innovation; (2) creation and expansion of market niches; and (3) managing narratives, perceptions, and norms.
Chapter
This chapter provides an introduction to this book (Law and Artificial Intelligence: Regulating AI and Applying it in Legal Practice) and an overview of all the chapters. The book deals with the intersection of law and Artificial Intelligence (AI). Law and AI interact in two different ways, which are both covered in this book: law can regulate AI and AI can be applied in legal practice. AI is a new generation of technologies, mainly characterized by being self-learning and autonomous. This means that AI technologies can continuously improve without (much) human intervention and can make decisions that are not pre-programmed. Artificial Intelligence can mimic human intelligence, but not necessarily so. Similarly, when AI is implemented in physical technologies, such as robots, it can mimic human beings (e.g., socially assistive robots acting like nurses), but it can also look completely different if it has a more functional shape (e.g., like an industrial arm that picks boxes in a factory). AI without a physical component can sometimes be hardly visible to end users, but evident to those that created and manage the system. In all its different shapes and sizes, AI is rapidly and radically changing the world around us, which may call for regulation in different areas of law. Relevant areas in public law include non-discrimination law, labour law, humanitarian law, constitutional law, immigration law, criminal law and tax law. Relevant areas in private law include liability law, intellectual property law, corporate law, competition law and consumer law. At the same time, AI can be applied in legal practice. In this book, the focus is mostly on legal technologies, such as the use of AI in legal teams, law-making, and legal scholarship. This introductory chapter concludes with an overview of the structure of this book, containing introductory chapters on what AI is, chapters on how AI is (or could be) regulated in different areas of both public and private law, chapters on applying AI in legal practice, and chapters on the future of AI and what these developments may entail from a legal perspective.
Chapter
Discrimination and bias are inherent problems of many AI applications, as seen in, for instance, face recognition systems not recognizing dark-skinned women and content moderator tools silencing drag queens online. These outcomes may derive from limited datasets that do not fully represent society as a whole or from the AI scientific community's western-male configuration bias. Although it is a pressing issue, understanding how AI systems can replicate and amplify inequalities and injustice among underrepresented communities is still in its infancy in social science and technical communities. This chapter contributes to filling this gap by exploring the research question: what do diversity and inclusion mean in the context of AI? This chapter reviews the literature on diversity and inclusion in AI to unearth the underpinnings of the topic and identify key concepts, research gaps, and evidence sources to inform practice and policymaking in this area. Here, attention is directed to three different levels of the AI development process: the technical, the community, and the target user level. The latter is expanded upon, providing concrete examples of usually overlooked communities in the development of AI, such as women, the LGBTQ+ community, senior citizens, and disabled persons. Sex and gender diversity considerations emerge as the most at risk in AI applications and practices and are thus the focus here. To help mitigate the risks that missing sex and gender considerations in AI could pose for society, this chapter closes by proposing gendering algorithms, more diverse design teams, and more inclusive and explicit guiding policies. Overall, this chapter argues that by integrating diversity and inclusion considerations, AI systems can be created to be more attuned to all-inclusive societal needs, respect fundamental rights, and represent contemporary values in modern societies.
Article
Physicians and patients are overwhelmed by the number and variety of digital health technologies coming to market. Marketing authorizations by the U.S. FDA and its European counterparts normally carry a signal effect: a product has been tested and shown to be safe and efficacious for its intended purpose. This is currently not the case for digital health technologies (DHTs), given their characteristics, changes in actors and use contexts, and the lack of specific regulation addressing those challenges. This regulatory gap, i.e. the lack of effective regulation of such technologies, poses a threat to patient-consumers. Alternatives to regulatory agency-based assessments are evaluated and proposed; they offer some value in bridging the current regulatory gap until it is closed, but cannot replace the role of regulatory agencies.
Book
Full-text available
This is the Arabic version of the UNESCO book AI and Education: Guidance for Policy-makers. Artificial Intelligence (AI) has the potential to address some of the biggest challenges in education today, innovate teaching and learning practices, and ultimately accelerate progress towards SDG 4. However, these rapid technological developments inevitably bring multiple risks and challenges, which have so far outpaced policy debates and regulatory frameworks. This publication offers guidance for policy-makers on how best to leverage the opportunities and address the risks presented by the growing connection between AI and education. It starts with the essentials of AI: definitions, techniques and technologies. It continues with a detailed analysis of the emerging trends and implications of AI for teaching and learning, including how we can ensure the ethical, inclusive and equitable use of AI in education, how education can prepare humans to live and work with AI, and how AI can be applied to enhance education. It finally introduces the challenges of harnessing AI to achieve SDG 4 and offers concrete actionable recommendations for policy-makers to plan policies and programmes for local contexts.
Book
Full-text available
This is the Russian version of the UNESCO book AI and Education: Guidance for Policy-makers. Artificial Intelligence (AI) has the potential to address some of the biggest challenges in education today, innovate teaching and learning practices, and ultimately accelerate progress towards SDG 4. However, these rapid technological developments inevitably bring multiple risks and challenges, which have so far outpaced policy debates and regulatory frameworks. This publication offers guidance for policy-makers on how best to leverage the opportunities and address the risks presented by the growing connection between AI and education. It starts with the essentials of AI: definitions, techniques and technologies. It continues with a detailed analysis of the emerging trends and implications of AI for teaching and learning, including how we can ensure the ethical, inclusive and equitable use of AI in education, how education can prepare humans to live and work with AI, and how AI can be applied to enhance education. It finally introduces the challenges of harnessing AI to achieve SDG 4 and offers concrete actionable recommendations for policy-makers to plan policies and programmes for local contexts.
Article
Full-text available
The Fourth Industrial Revolution is the culmination of the digital age. Today, technologies such as robotics, nanotechnology, genetics and artificial intelligence promise to transform our world and the way we live. AI Safety and AI Ethics are emerging research areas that have gained popularity in recent years. Several private, public and non-governmental organizations have published guidelines proposing ethical principles for regulating the use and development of autonomous intelligent systems. Meta-analyses of the AI Ethics research field point to a convergence on certain ethical principles that supposedly govern the AI industry. However, little is known about the efficacy of this form of 'ethics'. In this study, we conduct a critical analysis of the current state of AI Ethics and suggest that this form of governance, based on principlist ethical guidelines, is not sufficient to regulate the AI industry and its developers. We believe that drastic changes are necessary, both in the training of professionals in fields related to the development of software and intelligent systems and in the increased regulation of these professionals and their industry. To this end, we suggest that law should draw on recent contributions from bioethics, in order to render the contributions of AI Ethics to governance explicit in legal terms.
Article
In this article, we address the broad issue of a responsible use of Artificial Intelligence in Human Resources Management through the lens of a fair-by-design approach to algorithm development illustrated by the introduction of a new machine learning-based approach to job matching. The goal of our algorithmic solution is to improve and automate the recruitment of temporary workers to find the best match with existing job offers. We discuss how fairness should be a key focus of human resources management and highlight the main challenges and flaws in the research that arise when developing algorithmic solutions to match candidates with job offers. After an in-depth analysis of the distribution and biases of our proprietary data set, we describe the methodology used to evaluate the effectiveness and fairness of our machine learning model as well as solutions to correct some biases. The model we introduce constitutes the first step in our effort to control for unfairness in the outcomes of machine learning algorithms in job recruitment, and more broadly a responsible use of artificial intelligence in Human Resources Management thanks to “safeguard algorithms” tasked to control for biases and prevent discriminatory outcomes.
Thesis
Full-text available
The paper covers the topic of artificial intelligence (AI) in the context of entrepreneurship in Europe and offers an unprecedented discussion and research on AI startups, more precisely on critical success factors (CSF) and innovation. This research aims to identify the impact CSF have on the success of AI startups and to determine the role of innovation within this framework. Overall, the study serves as an educational resource for entrepreneurs and academia. The preliminary factors were extracted from the final CSF model proposed by Chorev and Anderson (2006), as it reflects a similar environment. A quantitative research method was adopted and data were collected through an online questionnaire; the responses were then statistically analysed in SPSS. According to the 32 completed questionnaires, internal factors are more impactful than external factors on AI startup success, specifically factors related to the core team, such as commitment and expertise. Business and marketing strategy, product development and customer relations also show a high impact on business success. The significance of innovation was identified in the relation of internal CSF to business success, as well as in the association between the product development factor and success. Although the study has limitations due to sample size, it offers an introductory view of the CSF of AI startups and the innovation environment they face, serving as a benchmark for further studies. Keywords: Europe, AI startups, critical success factors, innovation
Article
Despite the tremendous promise offered by artificial intelligence (AI) for healthcare in South Africa, existing policy frameworks are inadequate for encouraging innovation in this field. Practical, concrete and solution-driven policy recommendations are needed to encourage the creation and use of AI systems. This article considers five distinct problematic issues which call for policy development: (i) outdated legislation; (ii) data and algorithmic bias; (iii) the impact on the healthcare workforce; (iv) the liability dilemma; and (v) a lack of innovation and development of AI systems for healthcare in South Africa. The adoption of a national policy framework that addresses these issues directly is imperative to ensure the uptake of AI development and deployment for healthcare in a safe, responsible and regulated manner.
Article
Full-text available
Background In recent years, innovations in artificial intelligence (AI) have led to the development of new healthcare AI (HCAI) technologies. Whilst some of these technologies show promise for improving the patient experience, ethicists have warned that AI can introduce and exacerbate harms and wrongs in healthcare. It is important that HCAI reflects the values that are important to people. However, involving patients and publics in research about AI ethics remains challenging due to relatively limited awareness of HCAI technologies. This scoping review aims to map how the existing literature on publics' views on HCAI addresses key issues in AI ethics and governance. Methods We developed a search query to conduct a comprehensive search of PubMed, Scopus, Web of Science, CINAHL, and Academic Search Complete from January 2010 onwards. We will include primary research studies which document publics' or patients' views on machine learning HCAI technologies. A coding framework has been designed and will be used to capture qualitative and quantitative data from the articles. Two reviewers will code a proportion of the included articles and any discrepancies will be discussed amongst the team, with changes made to the coding framework accordingly. Final results will be reported quantitatively and qualitatively, examining how each AI ethics issue has been addressed by the included studies. Discussion Consulting publics and patients about the ethics of HCAI technologies and innovations can offer important insights to those seeking to implement HCAI ethically and legitimately. This review will explore how ethical issues are addressed in literature examining publics' and patients' views on HCAI, with the aim of determining the extent to which publics' views on HCAI ethics have been addressed in existing research. This has the potential to support the development of implementation processes and regulation for HCAI that incorporates publics' values and perspectives.
Article
Internships are a common way for firms to hire college-educated workers, prompting concerns about how internship hiring affects various forms of inequality in the transition from school to work. Some of these concerns center on whether internships might be less accessible for workers from non-white racial groups. In this paper, I examine racial disparities in internship hiring and argue that, relative to full-time hiring, in internship hiring firms have less information about candidates’ qualifications and are also less motivated to screen candidates intensely. Therefore, group-based status beliefs play a larger role in the screening of intern candidates than in the screening of full-time candidates, leading to larger disadvantages for low-status workers (i.e., non-white workers). I examine these claims using data from a Silicon Valley software firm recruiting for both software engineering internships and entry-level software engineering positions. I find evidence consistent with such “cursory screening” of intern candidates leading to non-white (i.e., Asian, Hispanic, Black) job candidates being more strongly disadvantaged relative to white candidates in competing for internships as compared with full-time positions.
Article
Artificial Intelligence (AI) promises huge potential for businesses, but its black-box character also brings substantial drawbacks. This is a particular challenge in regulated use cases, where software needs to be certified or validated before deployment. Traditional software documentation is not sufficient to provide the required evidence to auditors, and AI-specific guidelines are not yet available. Thus, AI faces significant adoption barriers in regulated use cases, since the accountability of AI cannot be ensured to a sufficient extent. This interview study aims to determine the current state of documenting AI in regulated use cases. We found that the risk level of AI use cases has an impact on AI adoption and the scope of AI documentation. Further, we discuss how AI is currently documented and which challenges practitioners face when documenting AI.
Research
Full-text available
CONTENTS: 1. Introduction. 2. Artificial intelligence: concept and associated risks. 2.1. What is artificial intelligence? 2.2. AI as a sociotechnical system. 2.3. Risks associated with the use of intelligent systems. 3. The flourishing of documents addressing ethical principles for a human-centric AI. 3.1. Brief introduction: why is an ethics for artificial intelligence needed? 3.2. Guides, recommendations and other documents containing ethical principles. 3.3. Global and regional panorama of ethical principles for a human-centric AI. 3.3.1. Introduction. 3.3.2. Main private-sector initiatives. 3.3.2.1. The Asilomar AI Principles. 3.3.2.2. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. 3.3.2.3. Ethical artificial intelligence at Google. 3.3.2.4. Microsoft's initiative. 3.3.2.5. Meta's (formerly Facebook) initiative. 3.3.3. Public-sector initiatives. 3.3.3.1. Supranational initiatives. 3.3.3.1.1. UNESCO. 3.3.3.1.2. The OECD AI Principles. 3.3.3.1.3. The ethics guidelines of the European Commission's High-Level Expert Group. 3.3.3.2. Government initiatives. 3.3.3.2.1. Brazil. 3.3.3.2.2. China. 3.3.3.2.3. Colombia. 3.3.3.2.4. United States. 3.3.3.2.5. India. 3.3.3.2.6. Japan. 3.3.3.2.7. United Kingdom. 3.3.3.2.8. Spain. 3.3.3.2.9. Uruguay. 4. Consensus around the ethical principles needed for a human-centric AI. 5. Conclusions.
Book
Full-text available
This is the Portuguese version of the UNESCO report "K-12 AI curricula: A mapping of government-endorsed AI curricula". As AI technology represents a new subject area for K–12 schools worldwide, there is a lack of historical knowledge for governments, schools and teachers to draw from in defining AI competencies and designing AI curricula. This mapping exercise analyses existing AI curricula with a specific focus on the curriculum content and learning outcomes, and delineates development and validation mechanisms, curriculum alignment, the preparation of learning tools and required environments, the suggested pedagogies, and the training of teachers. Key considerations are drawn from the analysis to guide the future planning of enabling policies, the design of national curricula or institutional study programmes, and implementation strategies for AI competency development.
Article
Full-text available
Numerous AI ethics checklists and frameworks have been proposed focusing on different dimensions of ethical AI such as fairness, explainability, and safety. Yet, no such work has been done on developing transparent AI systems for real-world educational scenarios. This paper presents a Transparency Index framework that has been iteratively co-designed with different stakeholders of AI in education, including educators, ed-tech experts, and AI practitioners. We map the requirements of transparency for different categories of stakeholders of AI in education and demonstrate that transparency considerations are embedded in the entire AI development process from the data collection stage until the AI system is deployed in the real world and iteratively improved. We also demonstrate how transparency enables the implementation of other ethical AI dimensions in Education like interpretability, accountability, and safety. In conclusion, we discuss the directions for future research in this newly emerging field. The main contribution of this study is that it highlights the importance of transparency in developing AI-powered educational technologies and proposes an index framework for its conceptualization for AI in education.
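As a purely hypothetical illustration of how an index of this kind might be scored (the lifecycle stages and checklist items below are invented for the example, not taken from the paper), one could record which transparency items are satisfied at each stage and report per-stage coverage:

```python
# Hypothetical scoring sketch for a transparency index across the AI
# lifecycle. Stages and checklist items are assumptions for illustration.

LIFECYCLE_CHECKLIST = {
    "data_collection": {"data sources documented", "consent recorded"},
    "model_development": {"features explained", "evaluation reported"},
    "deployment": {"users informed an AI is used", "appeal route exists"},
}

def transparency_index(satisfied):
    """satisfied maps stage -> set of items met; returns per-stage
    coverage and an overall score in [0, 1]."""
    per_stage = {
        stage: len(satisfied.get(stage, set()) & items) / len(items)
        for stage, items in LIFECYCLE_CHECKLIST.items()
    }
    return per_stage, sum(per_stage.values()) / len(per_stage)

met = {
    "data_collection": {"data sources documented", "consent recorded"},
    "model_development": {"evaluation reported"},
}
print(transparency_index(met))  # deployment scores 0.0; overall is 0.5
```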
Article
Many commercial actors in the tech sector publish ethics guidelines as a means to 'wash away' concerns raised about their policies. For some academics, this phenomenon is reason to replace ethics with other tools and methods in an attempt to make sure that the tech sector does not cross any moral Rubicons. Others warn against the tendency to reduce a criticism of 'ethics washing' to one of ethics simpliciter. In this essay, I argue, first, that the dominant focus on principles, dilemmas, and theory in conventional ethical theories and practices may explain why ethics lacks resistance to abuse by dominant actors, and hence its rather disappointing capacity to stop, redirect, or at least slow down big tech's course. Second, drawing from research on casuistry and the political philosopher Raymond Geuss, this essay makes a case for a question-based, rather than theory- or principle-based, ethical data practice. The emphasis of this approach is placed on acquiring a thorough understanding of a social-political phenomenon like tech development. This approach is then replenished with one extra component in the picture of repoliticized data ethics drawn so far: the importance of 'exemplars', or stories. Precisely because one must acquire an in-depth understanding of the problem in practice, one can also look to the past, present, or future for similar and comparable stories from which to learn.
Conference Paper
Full-text available
The National Human Rights Council (CNDH) considers human rights relevant to the field of Artificial Intelligence within an international context characterized by a holistic reflection on the matter. Numerous initiatives from international, regional, and national bodies are currently under development. Approaching this topic from a systemic perspective requires establishing a definition of Artificial Intelligence. While it may prove challenging to find a comprehensive and conventional definition, given the multiple angles of approach, we have adopted the following definition: Artificial Intelligence is, on the one hand, a scientific field (integrating multiple disciplines: mathematics, informatics, neurology, psychology, engineering, sociology...) that aims to create a technological equivalent to human intelligence; and, on the other, autonomous intelligent systems with algorithms capable of performing actions that have so far been performed exclusively by humans, or that assist with or make decisions or self-learn through the data at their disposal. In today's world, where digitization is a lever for societies' growth and evolution, Artificial Intelligence is used in a wide array of fields, such as: mobility and image processing (facial recognition, automated archiving, localization, cryptography, etc.); education; data processing and decision-making assistance; maintenance; data transfers and documentation; banking and accounting; health and medicine; planning; mapping; building simulations; and information and communication. Artificial Intelligence is thus amongst the mechanisms that facilitate the enjoyment of fundamental rights and freedoms by citizens. However, the use of Artificial Intelligence is not devoid of risks to certain rights and freedoms, namely the right to physical integrity and integrity of data, the right to freedom of opinion and expression, the right to access information, the right to privacy, consumer rights, equality and non-discrimination, protection of vulnerable groups (e.g., children, persons with disabilities), the right to physical and psychological integrity, freedom of elections, the right to employment, freedom of assembly, and freedom of peaceful demonstration. The Council shares the conviction of the United Nations High Commissioner for Human Rights that "Artificial Intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if used without sufficient regard to how they affect people's human rights... This is why there needs to be systematic assessment and monitoring of the effects of AI systems to identify and mitigate human rights risks." Considering the enormous opportunities that Artificial Intelligence provides to facilitate access to rights and freedoms, on one side, and the risks that its use poses to certain rights and freedoms, on the other, the Council, through its human rights-based approach, seeks to propose ways to achieve the following objectives: the development of Artificial Intelligence in line with a constructive approach to human rights and the values of a democratic society; the study and adequate handling of the effects of artificial intelligence on human rights; the assumption of responsibility by Artificial Intelligence actors for its use; and the enjoyment by citizens of the benefits of technology associated with artificial intelligence, with respect for human rights.
After conducting broad consultations with all national stakeholders, the Council organized an international seminar in Rabat on December 3rd, 2021, to discuss international initiatives on the governance of artificial intelligence with regard to human rights, the various standards, guidelines and regulations, and the governing principles in the field.
Article
Full-text available
In the original publication of this article, Table 1 was published incorrectly. The corrected table is provided in this correction. The publisher apologizes for the error made during production.
Article
Full-text available
Current advances in research, development and application of artificial intelligence (AI) systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed at harnessing the "disruptive" potential of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, I also examine to what extent the respective ethical principles and values are implemented in the practice of research, development and application of AI systems, and how the effectiveness of the demands of AI ethics can be improved.
Article
Full-text available
Cancer is not just one disease, but a large group of almost 100 diseases. Its two main characteristics are uncontrolled growth of cells in the human body and the ability of these cells to migrate from the original site and spread to distant sites. If the dispersion is not controlled, cancer can result in death. One out of every four deaths in the United States (US) is from cancer; it is second only to heart disease as a cause of death in the US. About 1.2 million Americans are diagnosed with cancer per annum, and about 500,000 die of cancer every year. Palliative care is a well-established approach to maintaining quality of life in end-stage cancer patients. Palliative care nurses have to complete a basic diploma, degree or post-graduate qualification in nursing with special training or experience in palliative care. Palliative care nurses often work in collaboration with doctors, allied health professionals, social workers, physiotherapists, and other members of multidisciplinary clinical care teams. There is a unique body of knowledge with direct application to the practice of palliative care nursing. This includes pain and symptom management, end-stage disease processes, spiritual and culturally sensitive care of patients and their families, interdisciplinary collaborative practice, loss and grief issues, patient education and advocacy, ethical and legal considerations, and communication skills. Palliative care nursing is a model that is consistent with basic nursing values, combining caring for patients and their families regardless of their culture, age, socioeconomic status, or diagnoses, and engaging in caring relationships that transcend time, circumstances, and location.
Conference Paper
Full-text available
The last few years have seen a proliferation of principles for AI ethics. There is substantial overlap between different sets of principles, with widespread agreement that AI should be used for the common good, should not be used to harm people or undermine their rights, and should respect widely held values such as fairness, privacy, and autonomy. While articulating and agreeing on principles is important, it is only a starting point. Drawing on comparisons with the field of bioethics, we highlight some of the limitations of principles: in particular, they are often too broad and high-level to guide ethics in practice. We suggest that an important next step for the field of AI ethics is to focus on exploring the tensions that inevitably arise as we try to implement principles in practice. By explicitly recognising these tensions we can begin to make decisions about how they should be resolved in specific cases, and develop frameworks and guidelines for AI ethics that are rigorous and practically relevant. We discuss some different specific ways that tensions arise in AI ethics, and what processes might be needed to resolve them.
Article
Full-text available
This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.
Article
Full-text available
Effy Vayena and colleagues argue that machine learning in medicine must offer data protection, algorithmic transparency, and accountability to earn the trust of patients and clinicians.
Article
Full-text available
With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents’ demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics. All data used in this article are publicly available.
Article
Full-text available
This paper explores the question of ethical governance for robotics and artificial intelligence (AI) systems. We outline a roadmap—which links a number of elements, including ethics, standards, regulation, responsible research and innovation, and public engagement—as a framework to guide ethical governance in robotics and AI. We argue that ethical governance is essential to building public trust in robotics and AI, and conclude by proposing five pillars of good ethical governance. This article is part of the theme issue ‘Governing artificial intelligence: ethical, legal, and technical opportunities and challenges’.
Article
Full-text available
This article argues that an ethical framework will help to harness the potential of AI while keeping humans in control.
Article
Full-text available
This report surveys the landscape of potential security threats from malicious uses of AI, and proposes ways to better forecast, prevent, and mitigate these threats. After analyzing the ways in which AI may influence the threat landscape in the digital, physical, and political domains, we make four high-level recommendations for AI researchers and other stakeholders. We also suggest several promising areas for further research that could expand the portfolio of defenses, or make attacks less effective or harder to execute. Finally, we discuss, but do not conclusively resolve, the long-term equilibrium of attackers and defenders.
Article
Full-text available
Google Scholar and Google Search are considered to be important sources of grey literature, governmental and institutional reports (Haddaway et al. 2015; Hagstrom et al. 2015). Therefore, although Google Scholar and Google Search have their limitations and should not be used as the only source for systematic reviews, both seemed to be apt for the purposes of some types of qualitative systematic reviews.
Article
Full-text available
Decisions based on algorithmic, machine learning models can be unfair, reproducing biases in historical data used to train them. While computational techniques are emerging to address aspects of these concerns through communities such as discrimination-aware data mining (DADM) and fairness, accountability and transparency machine learning (FATML), their practical implementation faces real-world challenges. For legal, institutional or commercial reasons, organisations might not hold the data on sensitive attributes such as gender, ethnicity, sexuality or disability needed to diagnose and mitigate emergent indirect discrimination-by-proxy, such as redlining. Such organisations might also lack the knowledge and capacity to identify and manage fairness issues that are emergent properties of complex sociotechnical systems. This paper presents and discusses three potential approaches to deal with such knowledge and information deficits in the context of fairer machine learning. Trusted third parties could selectively store data necessary for performing discrimination discovery and incorporating fairness constraints into model-building in a privacy-preserving manner. Collaborative online platforms would allow diverse organisations to record, share and access contextual and experiential knowledge to promote fairness in machine learning systems. Finally, unsupervised learning and pedagogically interpretable algorithms might allow fairness hypotheses to be built for further selective testing and exploration. Real-world fairness challenges in machine learning are not abstract, constrained optimisation problems, but are institutionally and contextually grounded. Computational fairness tools are useful, but must be researched and developed in and with the messy contexts that will shape their deployment, rather than just for imagined situations. Not doing so risks real, near-term algorithmic harm.
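A minimal sketch of the trusted-third-party idea under assumptions invented for this example: the organisation submits only pseudonymous decisions, the third party holds the sensitive attributes, and only an aggregate disparity statistic crosses the boundary.

```python
from collections import defaultdict

# Sketch of privacy-preserving discrimination discovery via a trusted
# third party. Interface names are assumptions; a production design would
# add secure channels, aggregation thresholds and auditing of the TTP.

class TrustedThirdParty:
    def __init__(self, sensitive_attributes):
        # e.g. {"id123": "group_a", "id456": "group_b"}; never shared out
        self._attrs = sensitive_attributes

    def demographic_parity_gap(self, decisions):
        """decisions: {person_id: 1 (favourable) or 0}. Returns only the
        largest difference in favourable-outcome rates between groups."""
        totals, favourable = defaultdict(int), defaultdict(int)
        for person_id, outcome in decisions.items():
            group = self._attrs.get(person_id)
            if group is None:
                continue  # individuals with unknown attributes are skipped
            totals[group] += 1
            favourable[group] += outcome
        rates = [favourable[g] / totals[g] for g in totals]
        return max(rates) - min(rates) if rates else 0.0

ttp = TrustedThirdParty({"a1": "group_a", "a2": "group_a",
                         "b1": "group_b", "b2": "group_b"})
print(ttp.demographic_parity_gap({"a1": 1, "a2": 1, "b1": 1, "b2": 0}))  # 0.5
```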
Article
Full-text available
Artificial intelligence and brain–computer interfaces must respect and preserve people's privacy, identity, agency and equality, say Rafael Yuste, Sara Goering and colleagues.
Article
Full-text available
In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions of how to prepare society for the widespread use of artificial intelligence (AI). In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a 'good AI society'. To do so, we examine how each report addresses the following three topics: (a) the development of a 'good AI society'; (b) the role and responsibility of the government, the private sector, and the research community (including academia) in pursuing such a development; and (c) where the recommendations to support such a development may be in need of improvement. Our analysis concludes that the reports address various ethical, social, and economic topics adequately, but fall short of providing an overarching political vision and long-term strategy for the development of a 'good AI society'. To contribute to filling this gap, in the conclusion we suggest a two-pronged approach.
Article
Full-text available
The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender-neutral words are shown to be linearly separable from gender-definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to "debias" the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties, such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.
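The geometric core of the method can be sketched in a few lines of NumPy: estimate a bias direction from definitional word pairs, then project it out of gender-neutral vectors. The toy two-dimensional vectors below are fabricated for illustration, and this covers only the 'neutralise' step, not the paper's full pipeline.

```python
import numpy as np

# Simplified "neutralise" step: remove the component of a word vector
# that lies along an estimated gender direction. Toy vectors are made up.

def bias_direction(pairs, vectors):
    """Normalised mean difference vector over definitional word pairs."""
    d = np.mean([vectors[a] - vectors[b] for a, b in pairs], axis=0)
    return d / np.linalg.norm(d)

def neutralise(vec, direction):
    """Subtract the projection of vec onto the bias direction."""
    return vec - np.dot(vec, direction) * direction

vectors = {
    "she": np.array([0.9, 0.1]),
    "he": np.array([-0.9, 0.1]),
    "receptionist": np.array([0.5, 0.6]),  # leans toward "she"
}
d = bias_direction([("she", "he")], vectors)
debiased = neutralise(vectors["receptionist"], d)
print(np.dot(debiased, d))  # ~0.0: no residual component along the direction
```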
Article
Full-text available
The growing number of 'smart' instruments, those equipped with AI, has raised concerns because these instruments make autonomous decisions; that is, they act beyond the guidelines provided to them by programmers. Hence, the question the makers and users of smart instruments (e.g., driverless cars) face is how to ensure that these instruments will not engage in unethical conduct (not to be conflated with illegal conduct). The article suggests that to proceed we need a new kind of AI program, oversight programs, that will monitor, audit, and hold operational AI programs accountable.
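A hypothetical sketch of such an oversight program, with an interface and constraint invented for this example: a wrapper monitors each decision of an operational component, keeps an audit trail, and blocks outputs that violate a declared constraint, escalating them instead of acting.

```python
import logging

# Hypothetical oversight wrapper in the article's sense: it monitors,
# audits and holds an operational AI program accountable. The model,
# constraint and escalation policy here are invented for illustration.

logging.basicConfig(level=logging.INFO)

class OversightWrapper:
    def __init__(self, model, constraint, name="operational-ai"):
        self.model = model            # any callable producing a decision
        self.constraint = constraint  # callable: (inputs, decision) -> bool
        self.log = logging.getLogger(name)
        self.audit_trail = []         # full record for later auditing

    def decide(self, inputs):
        decision = self.model(inputs)
        allowed = self.constraint(inputs, decision)
        self.audit_trail.append((inputs, decision, allowed))
        if not allowed:
            self.log.warning("Blocked decision %r for %r", decision, inputs)
            return None  # escalate to a human instead of acting
        return decision

# Toy example: a speed controller that must never exceed the limit.
controller = OversightWrapper(
    model=lambda obs: obs["desired_speed"],
    constraint=lambda obs, speed: speed <= obs["speed_limit"],
)
print(controller.decide({"desired_speed": 50, "speed_limit": 60}))  # 50
print(controller.decide({"desired_speed": 90, "speed_limit": 60}))  # None
```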
Article
Full-text available
Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their field [1],[2], and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research [3], and some health care journals are moving in this direction [4]. As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers' ability to assess the strengths and weaknesses of those reviews. Several early studies evaluated the quality of review reports. In 1987, Mulrow examined 50 review articles published in four leading medical journals in 1985 and 1986 and found that none met all eight explicit scientific criteria, such as a quality assessment of included studies [5]. In 1987, Sacks and colleagues [6] evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in six domains. Reporting was generally poor; between one and 14 characteristics were adequately reported (mean = 7.7; standard deviation = 2.7). A 1996 update of this study found little improvement [7]. In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a guidance called the QUOROM Statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomized controlled trials [8]. In this article, we summarize a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (Box 1). Box 1: Conceptual Issues in the Evolution from QUOROM to PRISMA. Completing a systematic review is an iterative process: the conduct of a systematic review depends heavily on the scope and quality of included studies, and systematic reviewers may thus need to modify their original review protocol during its conduct. Any systematic review reporting guideline should recommend that such changes can be reported and explained without suggesting that they are inappropriate. The PRISMA Statement (Items 5, 11, 16, and 23) acknowledges this iterative process. Aside from Cochrane reviews, all of which should have a protocol, only about 10% of systematic reviewers report working from a protocol [22]. Without a protocol that is publicly accessible, it is difficult to judge between appropriate and inappropriate modifications.
Article
Full-text available
Systematic reviews and meta-analyses are essential to summarize evidence relating to the efficacy and safety of health care interventions accurately and reliably. The clarity and transparency of these reports, however, is not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users. Since the development of the QUOROM (QUality Of Reporting Of Meta-analysis) Statement, a reporting guideline published in 1999, there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realizing these issues, an international group that included experienced authors and methodologists developed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions. The PRISMA Statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this Explanation and Elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA Statement, this document, and the associated Web site (http://www.prisma-statement.org/) should be helpful resources to improve the reporting of systematic reviews and meta-analyses.
Article
Full-text available
Background The scoping review has become an increasingly popular approach for synthesizing research evidence. It is a relatively new approach for which a universal study definition or definitive procedure has not been established. The purpose of this scoping review was to provide an overview of scoping reviews in the literature. Methods A scoping review was conducted using the Arksey and O'Malley framework. A search was conducted in four bibliographic databases and the gray literature to identify scoping review studies. Review selection and characterization were performed by two independent reviewers using pretested forms. Results The search identified 344 scoping reviews published from 1999 to October 2012. The reviews varied in terms of purpose, methodology, and detail of reporting. Nearly three-quarters of reviews (74.1%) addressed a health topic. Study completion times varied from 2 weeks to 20 months, and 51% utilized a published methodological framework. Quality assessment of included studies was infrequently performed (22.38%). Conclusions Scoping reviews are a relatively new but increasingly common approach for mapping broad topics. Because of variability in their conduct, there is a need for their methodological standardization to ensure the utility and strength of evidence.
Article
Full-text available
We hypothesize that there is a general bias, based on both innate predispositions and experience, in animals and humans, to give greater weight to negative entities (e.g., events, objects, personal traits). This is manifested in four ways: (a) negative potency (negative entities are stronger than the equivalent positive entities), (b) steeper negative gradients (the negativity of negative events grows more rapidly with approach to them in space or time than does the positivity of positive events), (c) negativity dominance (combinations of negative and positive entities yield evaluations that are more negative than the algebraic sum of individual subjective valences would predict), and (d) negative differentiation (negative entities are more varied, yield more complex conceptual representations, and engage a wider response repertoire). We review evidence for this taxonomy, with emphasis on negativity dominance, including literary, historical, religious, and cultural sources, as well as the psychological literatures on learning, attention, impression formation, contagion, moral judgment, development, and memory. We then consider a variety of theoretical accounts of negativity bias. We suggest that one feature of negative events that makes them dominant is that negative entities are more contagious than positive entities.
Article
Full-text available
The paper investigates the ethics of information transparency (henceforth transparency). It argues that transparency is not an ethical principle in itself but a pro-ethical condition for enabling or impairing other ethical practices or principles. A new definition of transparency is offered in order to take into account the dynamics of information production and the differences between data and information. It is then argued that the proposed definition provides a better understanding of what sort of information should be disclosed and what sort of information should be used in order to implement and make effective the ethical practices and principles to which an organisation is committed. The concepts of “heterogeneous organisation” and “autonomous computational artefact” are further defined in order to clarify the ethical implications of the technology used in implementing information transparency. It is argued that explicit ethical designs, which describe how ethical principles are embedded into the practice of software design, would represent valuable information that could be disclosed by organisations in order to support their ethical standing.
Article
Full-text available
Current debates over the relation between climate change and conflict originate in a lack of data, as well as the complexity of pathways connecting the two phenomena.
Article
Technology companies are running a campaign to bend research and regulation for their benefit; society must fight back, says Yochai Benkler. "Inside an algorithmic black box, societal biases are rendered invisible and unaccountable."
Article
Artificial intelligence (AI) and deep learning are entering the mainstream of clinical medicine. For example, in December 2016, Gulshan et al¹ reported the development and validation of a deep learning algorithm for the detection of diabetic retinopathy in retinal fundus photographs. An accompanying editorial by Wong and Bressler² pointed out the limits of the study, the need for further validation of the algorithm in different populations, and unresolved challenges (eg, incorporating the algorithm into clinical workflows and convincing clinicians and patients to "trust a 'black box'"). Sixteen months later, the Food and Drug Administration (FDA)³ permitted marketing of the first medical device to use AI to detect diabetic retinopathy. The FDA reduced the risk of releasing the device by limiting the indication for use to the screening of adults without visual symptoms for greater-than-mild retinopathy, with referral to an eye care specialist.
Article
Computer scientists must identify sources of bias, de-bias training data and develop artificial-intelligence algorithms that are robust to skews in the data, argue James Zou and Londa Schiebinger.
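The de-biasing the authors call for can take many concrete forms. One simple, common instance is reweighting training examples so that under-represented groups contribute equally to the model's loss; the sketch below illustrates that idea under assumed data (the group labels and counts are hypothetical, and reweighting is only one of several possible techniques, not a method prescribed by the authors).

import numpy as np

def inverse_frequency_weights(groups: np.ndarray) -> np.ndarray:
    """Weight each example inversely to its group's frequency,
    so every group carries equal total weight in training."""
    values, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(values, counts))
    n, k = len(groups), len(values)
    return np.array([n / (k * freq[g]) for g in groups])

# Hypothetical training set in which group "A" dominates 9:1.
groups = np.array(["A"] * 900 + ["B"] * 100)
weights = inverse_frequency_weights(groups)
# Each group now sums to the same total weight (500.0 and 500.0):
print(weights[groups == "A"].sum(), weights[groups == "B"].sum())

Most learning libraries accept such weights directly, for example through a sample_weight argument to their fitting routines.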
Article
In this article, we recognize the profound effects that algorithmic decision making can have on people’s lives and propose a harm-reduction framework for algorithmic fairness. We argue that any evaluation of algorithmic fairness must take into account the foreseeable effects that algorithmic design, implementation, and use have on the well-being of individuals. We further demonstrate how counterfactual frameworks for causal inference developed in statistics and computer science can be used as the basis for defining and estimating the foreseeable effects of algorithmic decisions. Finally, we argue that certain patterns of foreseeable harms are unfair. An algorithmic decision is unfair if it imposes predictable harms on sets of individuals that are unconscionably disproportionate to the benefits these same decisions produce elsewhere. Also, an algorithmic decision is unfair when it is regressive, that is, when members of disadvantaged groups pay a higher cost for the social benefits of that decision.
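To make the disproportionality test concrete, imagine estimating, for each group, the expected harm a decision rule imposes and the expected benefit it produces, and then flagging rules whose harm-to-benefit ratio is markedly worse for a disadvantaged group. The toy sketch below uses invented numbers and an arbitrary tolerance threshold; it illustrates the idea, not the authors' method.

from dataclasses import dataclass

@dataclass
class GroupOutcome:
    name: str
    expected_harm: float     # e.g., estimated via counterfactual inference
    expected_benefit: float  # benefit attributable to the same decisions

def is_regressive(disadvantaged: GroupOutcome,
                  advantaged: GroupOutcome,
                  tolerance: float = 1.5) -> bool:
    """Flag a decision rule whose harm/benefit ratio for the
    disadvantaged group exceeds the advantaged group's ratio by
    more than `tolerance` (an assumed, adjustable threshold)."""
    r_dis = disadvantaged.expected_harm / max(disadvantaged.expected_benefit, 1e-9)
    r_adv = advantaged.expected_harm / max(advantaged.expected_benefit, 1e-9)
    return r_dis > tolerance * r_adv

# Invented estimates for a hypothetical screening algorithm:
print(is_regressive(GroupOutcome("group_1", 0.30, 0.40),
                    GroupOutcome("group_2", 0.10, 0.50)))  # True: flagged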
Book
The author investigates how to produce realistic and workable ethical codes or regulations in this rapidly developing field to address the immediate and realistic longer-term issues facing us. She spells out the key ethical debates concisely, exposing all sides of the arguments, and addresses how codes of ethics or other regulations might feasibly be developed, looking for pitfalls and opportunities, drawing on lessons learned in other fields, and explaining key points of professional ethics. The book provides a useful resource for those aiming to address the ethical challenges of AI research in meaningful and practical ways.
Article
As artificial intelligence puts many out of work, we must forge new economic, social and educational systems, argues Yuval Noah Harari.
Article
Fears about the future impacts of artificial intelligence are distracting researchers from the real risks of deployed systems, argue Kate Crawford and Ryan Calo.
Article
Machine learning addresses the question of how to build computers that improve automatically through experience. It is one of today’s most rapidly growing technical fields, lying at the intersection of computer science and statistics, and at the core of artificial intelligence and data science. Recent progress in machine learning has been driven both by the development of new learning algorithms and theory and by the ongoing explosion in the availability of online data and low-cost computation. The adoption of data-intensive machine-learning methods can be found throughout science, technology and commerce, leading to more evidence-based decision-making across many walks of life, including health care, manufacturing, education, financial modeling, policing, and marketing.
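The defining property described here, performance that improves automatically with experience, is straightforward to demonstrate: train the same model on growing subsets of data and watch held-out accuracy rise. A minimal sketch using scikit-learn's bundled digits dataset (any small classification task would do; the subset sizes are arbitrary):

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# More experience (training examples) should mean higher accuracy.
for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n:4d} examples -> "
          f"test accuracy {model.score(X_test, y_test):.3f}")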
Article
This paper focuses on scoping studies, an approach to reviewing the literature which to date has received little attention in the research methods literature. We distinguish between different types of scoping studies and indicate where these stand in relation to full systematic reviews. We outline a framework for conducting a scoping study based on our recent experiences of reviewing the literature on services for carers for people with mental health problems. Where appropriate, our approach to scoping the field is contrasted with the procedures followed in systematic reviews. We emphasize how including a consultation exercise in this sort of study may enhance the results, making them more useful to policy makers, practitioners and service users. Finally, we consider the advantages and limitations of the approach and suggest that a wider debate is called for about the role of the scoping study in relation to other types of literature reviews.
Article
The authors examine a number of examples of “soft law”: written and unwritten instruments and influences which shape administrative decision-making. Rather than rendering bureaucratic processes more transparent and cohesive, or fostering greater accountability and consistency among decision-makers, “soft law” in this context frequently reinforces artificial divisions. Moreover, it insulates decisions and decision-makers from the kinds of critical inquiry typically associated with “hard law.” If it is to realize its potential as a bridge between law and policy, and lend meaning to core principles – like fairness and reliability – soft law ought to be subjected to similarly critical consideration. The authors maintain that doing so allows one to preserve soft law’s promise of flexibility. Moreover, one avoids falling prey to the misleading dichotomies soft law tends to bolster in the absence of critical administrative, political, and judicial scrutiny.
Article
This article examines the legal status of "soft law" in the fields of medicine and medical research. Many areas of clinical practice and research involve complex and rapidly changing issues for which the law provides no guidance. Instead, guidance for physicians and researchers comes from what has often been called "soft law": non-legislative, non-regulatory sources, such as ethics policy statements, codes, and guidelines from professional or quasi-governmental bodies. This article traces the evolution of these "soft law" instruments: how they are created, how they are adopted within the professional community, and how they become accepted by the courts. It studies the relationship between soft law instruments and the courts. It includes an examination of the approaches to judicial analysis used by the courts in theory and in practice. The authors then examine the jurisprudence to see how courts will adopt professional norms as the legal standard of care in some circumstances and not others. They consider the legal concerns and ethical issues surrounding the weight attached to professional practices and norms in law. The authors demonstrate how practices and policies that guide professional conduct may ultimately bear weight as norms recognizable and enforceable within the legal sphere.
Science must examine the future of work
Science must examine the future of work. Nature 550, 301-302 (2017).
The Cambridge Handbook of Artificial Intelligence
  • N Bostrom
  • E Yudkowsky
Bostrom, N. & Yudkowsky, E. in The Cambridge Handbook of Artificial Intelligence (eds Frankish, K. & Ramsey, W. M.) 316-334 (Cambridge Univ. Press, 2014). https://doi.org/10.1017/CBO9781139046855.020
AI assisted ethics
  • A Etzioni
  • O Etzioni
Etzioni, A. & Etzioni, O. AI assisted ethics. Ethics Inf. Technol. 18, 149-156 (2016).
Linking artificial intelligence principles
  • Y Zeng
  • E Lu
  • C Huangfu
Zeng, Y., Lu, E. & Huangfu, C. Linking artificial intelligence principles. Preprint at https://arxiv.org/abs/1812.04814 (2018).
Alphabetical list of resources
  • P Boddington
Boddington, P. Alphabetical list of resources. Ethics for Artificial Intelligence https://www.cs.ox.ac.uk/efai/resources/alphabetical-list-of-resources/ (2018).
A round up of robotics and AI ethics
  • A Winfield
Winfield, A. A round up of robotics and AI ethics. Alan Winfield's Web Log http://alanwinfield.blogspot.com/2019/04/an-updated-round-up-of-ethical.html (2017).
Googling for grey: using Google and Duckduckgo to find grey literature
  • C Hagstrom
  • S Kendall
  • H Cunningham
Hagstrom, C., Kendall, S. & Cunningham, H. Googling for grey: using Google and Duckduckgo to find grey literature. In Abstracts of the 23rd Cochrane Colloquium Vol. 10, LRO 3.6, 40 (Cochrane Database of Systematic Reviews, 2015).