Conference Paper

What Do We Teach When We Teach Tech Ethics?: A Syllabi Analysis

... While computing has been practiced since early times, the rapidity with which calculations can be performed today is unprecedented [5]. Digitalisation, computerisation, and the related fourth industrial revolution (4IR) are shaping the world in fundamental ways, raising many new classes of challenges, opportunities, and ethical dilemmas [3], [4], [6]–[9]. Indeed, AI may bring a number of benefits to societies and help in tackling complex global challenges [2]. ...
... The new social and ethical dilemmas brought about by computing and computers, fundamentally different from any dilemmas known before, have resulted in a reshaping of AI and AI-ethics training in universities [3], [4]. A challenge for the CSER community is viewing AI ethics too narrowly, inside the epistemological and methodological boundaries of computer science only [4]. ...
... [14]). One challenge is isolating ethical perspectives in standalone AI-ethics courses rather than including ethics holistically in CSER curricula [3]. ...
... While academics have been studying the topics of teaching AI and AI ethics for more than half a century (e.g., Chand, 1974; Gehman, 1984; Martin et al., 1996; Applin, 2006; Ahmad, 2014), the systematic assessment of the topics, developments, and trends in teaching AI ethics is a relatively recent endeavor. However, most of the previous research that focused on a systematic analysis of teaching AI ethics suffered from one or more of the following limitations: 1) having a limited disciplinary scope (e.g., integration of ethics only in courses in machine learning, Saltz et al., 2019; engineering, Bielefeldt et al., 2019, Nasir et al., 2021; human-computer interaction, Khademi & Hui, 2020; software engineering, Towell, 2003; or distributed systems, Abad, Ortiz-Holguin, & Boza, 2021); 2) having limited geographical coverage and, as explained in Hughes et al. (2020) and Mohamed et al. (2020), being biased towards Western cultures (e.g., Moller & Crick, 2018; Fiesler et al., 2020; Garrett et al., 2020; Raji et al., 2021; Homkes & Strikwerda, 2009); or 3) including courses taught at only a single level (e.g., the introductory level, Becker & Fitzpatrick, 2019). ...
... Most importantly, all these previous attempts to map the teaching AI ethics field are human-driven approaches, with topics of interest manually identified based on grouping the instructor-described topics into higher-level categories (e.g., Fiesler et al., 2020) or on open coding (e.g., Garrett et al., 2020;Raji et al., 2021). However, such approaches are sensitive to the subjectivity and noise inherent in human decisions and the limited ability of human analysts to work effectively at very large scales. ...
... After doing that, we distil a model of current pedagogical practices in the domain of teaching AI ethics. Our approach to analysing the use of pedagogical concepts in AI ethics courses is unique in that, in contrast with previous research (e.g., Fiesler et al., 2020; Saltz et al., 2019; Bielefeldt et al., 2019), we anchor our study in well-recognized canons from pedagogy science, that is, Bloom's taxonomy (Krathwohl, 2002) and Biggs' constructive alignment principle (Biggs & Tang, 2011), as explained in Section 2.3. ...
Article
Full-text available
The domain of Artificial Intelligence (AI) ethics is not new, with discussions going back at least 40 years. Teaching the principles and requirements of ethical AI to students is considered an essential part of this domain, and an increasing number of technical AI courses taught at higher-education institutions around the globe now include content related to ethics. By using Latent Dirichlet Allocation (LDA), a generative probabilistic topic model, this study uncovers topics in teaching ethics in AI courses and their trends related to where the courses are taught, by whom, and at what level of cognitive complexity and specificity according to Bloom’s taxonomy. In this exploratory study based on unsupervised machine learning, we analyzed a total of 166 courses: 116 from North American universities, 11 from Asia, 36 from Europe, and 10 from other regions. Based on this analysis, we were able to synthesize a model of teaching approaches, which we call BAG (Build, Assess, and Govern), that combines specific cognitive levels, course content topics, and disciplines affiliated with the department(s) in charge of the course. We critically assess the implications of this teaching paradigm and provide suggestions about how to move away from these practices. We challenge teaching practitioners and program coordinators to reflect on their usual procedures so that they may expand their methodology beyond the confines of stereotypical thought and traditional biases regarding what disciplines should teach and how. This article appears in the AI & Society track.
... There are two primary approaches to integrating such components into the curriculum: (i) stand-alone courses that focus on ethical issues such as FACT-AI topics, and (ii) a holistic curriculum where ethics are introduced and tackled in each course (Fiesler, Garrett, and Beard 2020; Peck 2017). In general, the latter is rare (Saltz et al. 2019; Peck 2017; Fiesler, Garrett, and Beard 2020), and can be difficult to organize due to a lack of qualified faculty or relevant expertise (Raji, Scheuerman, and Amironesei 2021; Bates et al. 2020). Since we were designing one new course to be added into an existing program, we opted for the first approach. ...
... We analyze our course according to the criteria outlined in Fiesler, Garrett, and Beard (2020), where the authors analyze 202 courses on "tech ethics". Their survey examines (i) the departments the courses are taught from, as well as the home departments of the course instructors, (ii) the topics covered in the courses, organized into 15 categories, and (iii) the learning outcomes in the courses. ...
Preprint
Full-text available
In this work we explain the setup for a technical, graduate-level course on Fairness, Accountability, Confidentiality and Transparency in Artificial Intelligence (FACT-AI) at the University of Amsterdam, which teaches FACT-AI concepts through the lens of reproducibility. The focal point of the course is a group project based on reproducing existing FACT-AI algorithms from top AI conferences, and writing a report about their experiences. In the first iteration of the course, we created an open source repository with the code implementations from the group projects. In the second iteration, we encouraged students to submit their group projects to the Machine Learning Reproducibility Challenge, which resulted in 9 reports from our course being accepted to the challenge. We reflect on our experience teaching the course over two academic years, where one year coincided with a global pandemic, and propose guidelines for teaching FACT-AI through reproducibility in graduate-level AI programs. We hope this can be a useful resource for instructors to set up similar courses at their universities in the future.
... For example, Practical Data Ethics is a series of six free online lessons that was originally delivered as an evening course at the University of San Francisco and is now offered by Ai.Fast [16]. The syllabus (not exhaustive) is sourced from an analysis of 100 tech ethics syllabi [17] and covers misinformation, bias and fairness, and algorithmic colonialism. [18] maintains a list of over 292 tech ethics courses currently running at university level. ...
... Table I lists all demographic questions in the survey, whilst Table II lists the subset of questions from the survey relevant to the specific aim of this paper. RQ1 is analyzed using survey questions 2, 3, 4, 6, 7, 14, 15, 17, and 18; RQ2 using survey questions 2, 3, 4, 6, 7, 14, 15, 17, 18, and 22–28; and RQ3 using survey questions 32 and 33. ...
... Work-based introductory continuous professional development (CPD) modules were suggested (47.5%). 17.5% stated that they would like to learn about AI through a board game and 25% through playing a mobile app. 25% thought family workshops would be a good idea and 27.5% thought booklets would also support learning. ...
... Related issues, including the privacy, accuracy, property, and accessibility of information, have been widely discussed in previous studies (Mason, 1986; Fallis, 2007; Tavani, 2016), and the global coronavirus pandemic of the past two years has greatly sparked public concern and attention to information privacy and ethics. With the clearer tension between developing and using technology, the enhancement of information literacy education for all people, especially informaticists' knowledge and skills of information ethics, has received increasing research and practical attention from the industrial and education sectors (Stahl et al., 2016; Eskens, 2020; Fiesler et al., 2020; Stark et al., 2020; Wu et al., 2020). However, the nature, strategies, and pedagogies of ethics education are all parts of a longstanding debate within information and computer science (Saltz et al., 2019). ...
... However, the ways different educational institutions and individuals teach ethics are far from homogeneous in terms of format, content, and structure. In addition, reviews of school curricula have also found that instruction in professional ethics actually accounts for a rather small percentage of the total information ethics curriculum (Lin and Chou, 2014; Fiesler et al., 2020); most pre-service information professionals remained unfamiliar with professional ethics until they entered the workplace. Possible reasons could include school instructors' perceptions of discrepancies between their disciplinary expertise and the teaching of ethics (Martin, 1997), and, as instruction in information ethics was regarded as resource-intensive (Grosz et al., 2019; Fiesler et al., 2020), a lack of structured curriculum and learning mechanisms could have diminished the generalizability of professional ethics instruction (Al-Ansari and Yousef, 2002; Mora et al., 2017). ...
Article
Full-text available
Taking advantage of the nature of games to deal with conflicting desires through contextual practices, this study illustrated the formal process of designing a situated serious game to facilitate learning of information ethics, a subject that heavily involves decision making, dilemmas, and conflicts between personal, institutional, and social desires. A simulation game with four mission scenarios covering critical issues of privacy, accuracy, property, and accessibility was developed as a situated, authentic and autonomous learning environment. The player-learners were 40 college students majoring in information science and computer science as pre-service informaticists. In this study, they played the game and their game experiences and decision-making processes were recorded and analyzed. The results suggested that the participants’ knowledge of information ethics was significantly improved after playing the serious game. From the qualitative analysis of their behavioral features, including paths, time spans, and access to different materials, the results supported that the game designed in this study was helpful in improving participants’ understanding, analysis, synthesis, and evaluation of information ethics issues, as well as their judgments. These findings have implications for developing curricula and instructions in information ethics education.
... This study is framed within social contract theory as defined in the works of Curtis et al. (2017) as well as Fiesler, Garrett and Beard (2020). Aristotle first advocated this theory. ...
... Rather, it holds that man is accountable to society and is bound by societal moral expectations (Holden et al., 2012; Koenane & Mangena, 2017). At the core of this theory is the acknowledgment that absolute freedom could only result in vicious moral chaos, since individual desires are bound to clash with the societal common good (Thornburg & Oguz, 2015; Curtis et al., 2017; Fiesler et al., 2020). ...
... As confirmed by the findings, the above-mentioned forms of punishment not only cause distress to the learners but sometimes even push them to drop out of school. Apart from being tantamount to child abuse and compromising the ethics of the teaching profession (Gelmez-Burakgazi & Can, 2018; Devika & Dilip, 2019), they also contravene the social contract that teachers entered into by virtue of choosing the teaching career (Curtis et al., 2017; Fiesler et al., 2020). ...
... Fiesler et al. analyzed 115 syllabi from university tech ethics courses and found "that many topics within tech ethics are high level and conceptual when it comes to the impact of technology on society, e.g., how human decisions are built into code, how technology can reproduce and augment existing social inequalities, how data is created by and directly impacts people, and how choices made at both the level of companies and in small bits of code combine to create large-scale social consequences" [13]. Consequently, they argue that tech ethics "could be part of every computing course" [13]. Recent work in undergraduate computing ethics includes designs for standalone ethics courses [11,32]; integrated ethics across the curriculum [7,16]; and integrated ethics modules or lessons in courses such as machine learning [35], human-centered computing [37], and introductory CS [10,12]. ...
... Much of the research on justice-centered approaches to computing education focuses on K-12 learning environments [2,6,24,33,34,36,38,40]; much less research focuses on higher education. Of the three features (ethics, identity, and political vision), higher CS education has predominantly focused on ethics [7, 10-13, 16, 32, 35, 37], with some of the earliest work in academic communities appearing at SIGCSE Technical Symposium in 1972 [13]. Work on identity in higher CS education is relatively more recent (e.g. ...
Preprint
Full-text available
Justice-centered approaches to equitable computer science (CS) education prioritize the development of students' CS disciplinary identities toward social justice rather than corporations, industry, empire, and militarism by emphasizing ethics, identity, and political vision. However, most research in justice-centered approaches to equitable CS education focus on K-12 learning environments. In this position paper, we problematize the lack of attention to justice-centered approaches to CS in higher education and then describe a justice-centered approach for undergraduate Data Structures and Algorithms that (1) critiques sociopolitical values of data structure and algorithm design and dominant computing epistemologies that approach social good without design justice; (2) centers students in culturally responsive-sustaining pedagogies to resist dominant computing culture and value Indigenous ways of living in nature; and (3) ensures the rightful presence of political struggles through reauthoring rights and problematizing the political power of computing. Through a case study of this Critical Comparative Data Structures and Algorithms pedagogy, we argue that justice-centered approaches to higher CS education can help students not only critique the ethical implications of nominally technical concepts, but also develop greater respect for diverse epistemologies, cultures, and narratives around computing that can help all of us realize the socially-just worlds we need.
... However, little is known about such ethics courses in AI/tech curricula on a global scale. A recent review of 115 tech ethics syllabi from university technology ethics courses by Fiesler et al. [50] found a lack of consistency in the course content taught and a lack of standards. Course content may cover topics as diverse as law, policy, privacy, and surveillance, as well as social and environmental impact, cybersecurity, and medical/health [50]. For Fiesler et al. [50], this broad topic range and inconsistency in teaching content across syllabi do not come as a surprise, given the current lack of standards, which leaves educators with leeway to design courses at their own discretion. As Garrett et al. [56] note, "if AI education is in the infancy stage of development, then AI ethics education is barely an embryo." ...
Article
Full-text available
This paper proposes to generate awareness for developing Artificial intelligence (AI) ethics by transferring knowledge from other fields of applied ethics, particularly from business ethics, stressing the role of organizations and processes of institutionalization. With the rapid development of AI systems in recent years, a new and thriving discourse on AI ethics has (re-)emerged, dealing primarily with ethical concepts, theories, and application contexts. We argue that business ethics insights may generate positive knowledge spillovers for AI ethics, given that debates on ethical and social responsibilities have been adopted as voluntary or mandatory regulations for organizations in both national and transnational contexts. Thus, business ethics may transfer knowledge from five core topics and concepts researched and institutionalized to AI ethics: (1) stakeholder management, (2) standardized reporting, (3) corporate governance and regulation, (4) curriculum accreditation, and as a unified topic (5) AI ethics washing derived from greenwashing. In outlining each of these five knowledge bridges, we illustrate current challenges in AI ethics and potential insights from business ethics that may advance the current debate. At the same time, we hold that business ethics can learn from AI ethics in catching up with the digital transformation, allowing for cross-fertilization between the two fields. Future debates in both disciplines of applied ethics may benefit from dialog and cross-fertilization, meant to strengthen the ethical depth and prevent ethics washing or, even worse, ethics bashing.
... While various approaches exist to practicing and researching values in design (see, e.g., Cockton, 2006; Belman et al., 2009; Iversen and Leong, 2012; Friedman and Hendry, 2019; Nissenbaum, 2021b), there are only a few examples of how to teach students about values in design (Frauenberger and Purgathofer, 2019; Barendregt et al., 2020; Nilsson et al., 2020; Nissenbaum, 2021a; for recent overviews see Fiesler et al., 2020; Hendry et al., 2020). ...
... Similarly, Frauenberger and Purgathofer (2019) and Nilsson et al. (2020) are developing teaching materials for educating responsible designers. To both describe current trends in computing ethics coursework and provide guidance for further ethics inclusion in computing, Fiesler et al. (2020) present an in-depth qualitative analysis of syllabi from university technology ethics courses. Finally, Pillai et al. (2021) recently argued that, beyond defining ethics, an ethics curriculum must enable practitioners to reflect on, and allow consideration of, the intended and unintended consequences of the technologies they create from the ground up, rather than as a fix or an afterthought. ...
Article
Full-text available
There is an increasing awareness of the importance of considering values in the design of technology. There are several research approaches focused on this, such as e.g., value-sensitive design, value-centred human–computer interaction (HCI), and value-led participatory design, just to mention a few. However, less attention has been given to developing educational materials for the role that values play in HCI, why hands-on teaching activities are insufficient, and especially teaching activities that cover the full design process. In this article, we claim that teaching for ethics and values in HCI is not only important in some parts of the design and development process, but equally important all through. We will demonstrate this by a unique collection of 28 challenges identified throughout the design process, accompanied by inspirational suggestions for teaching activities to tackle these challenges. The article is based on results from applying a modified pedagogical design pattern approach in the iterative development of an open educational resource containing teaching and assessment activities and pedagogical framework, and from pilot testing. Preliminary results from pilots of parts of the teaching activities indicate that student participants experience achieving knowledge about how to understand and act ethically on human values in design, and teachers experience an increased capacity to teach for values in design in relevant and innovative ways. Hopefully, this overview of challenges and inspirational teaching activities focused on values in the design of technology can be one way to provide teachers with inspiration to sensitize their students and make them better prepared to become responsible designers by learning how to address and work with values in HCI.
... However, fairness is one of the prevailing ethical concerns with automated decision-making, and is a key stumbling block to advancing the confident use of AI in application areas where automated decisions impact human interests, rights, and lives. A few recent incidents of "ethical crisis" [9] in AI include Cambridge Analytica's involvement in influencing hundreds of elections globally [7], Google employee protests over military contracts [18], biased algorithms in Amazon's hiring processes [14], and racial discrimination in predictive policing [16]. In the context of decision-making, a fair decision is free from favoritism or prejudice towards individuals or groups based on their inherent or acquired characteristics. ...
... Fiesler et al. [9] analyze 115 syllabi from university technology ethics courses that advance the inclusion of ethics in the computing curriculum. Reich et al. ...
... A recent paper (Fiesler et al., 2020) surveying computer ethics classes in Computer Science at 94 universities, located mainly in the US, showed that there is much variability in the content of computer ethics courses, which the authors attribute to the lack of standards in this particular subject. This is not to say that there are no common patterns. ...
Article
Full-text available
Within the Computer Science community, many ethical issues have emerged as significant and critical concerns. Computer ethics is an academic field in its own right and there are unique ethical issues associated with information technology. It encompasses a range of issues and concerns including privacy and agency around personal information, Artificial Intelligence and pervasive technology, the Internet of Things and surveillance applications. As computing technology impacts society at an ever growing pace, there are growing calls for more computer ethics content to be included in Computer Science curricula. In this paper we present the results of a survey that polled faculty from Computer Science and related disciplines about teaching practices for computer ethics at their institutions. The survey was completed by respondents from 61 universities across 23 European countries. Participants were surveyed on whether or not computer ethics is taught to Computer Science students at each institution, the reasons why computer ethics is or is not taught, how computer ethics is taught, the background of staff who teach computer ethics and the scope of computer ethics curricula. This paper presents and discusses the results of the survey.
... The capabilities for human-centred AI need explicit attention. Human-centred design, ethics, and philosophy are commonly not taught extensively in K-12 or university curricula, and these topics are particularly absent from engineering and computer science degrees (Fiesler, Garrett, & Beard, 2020). This is a threat to the development of ethical AI. ...
Article
Full-text available
The proliferation of AI in many aspects of human life—from personal leisure, to collaborative professional work, to global policy decisions—poses a sharp question about how to prepare people for an interconnected, fast-changing world which is increasingly becoming saturated with technological devices and agentic machines. What kinds of capabilities do people need in a world infused with AI? How can we conceptualise these capabilities? How can we help learners develop them? How can we empirically study and assess their development? With this paper, we open the discussion by adopting a dialogical knowledge-making approach. Our team of 11 co-authors participated in an orchestrated written discussion. Engaging in a semi-independent and semi-joint written polylogue, we assembled a pool of ideas of what these capabilities are and how learners could be helped to develop them. Simultaneously, we discussed conceptual and methodological ideas that would enable us to test and refine our hypothetical views. In synthesising these ideas, we propose that there is a need to move beyond AI-centred views of capabilities and consider the ecology of technology, cognition, social interaction, and values.
... (ABET, 2021) To survey these impacts, students can be exposed to a variety of case studies that illustrate computing's discrete and granular effects as well as its more systemic and widespread consequences (Baecker and Ronald, 2019; Fiesler et al., 2020). While empirical case studies go a long way towards fulfilling ABET's "impact" requirements, these efforts can be supplemented and contextualized by asking students to consider a more general and fundamental question about the relationship that humans have with technology. In its simplest formulation, the question can be posed as follows: "Are we in control of our technology?" ...
Article
Full-text available
This paper describes an innovative learning activity for educating students about human-computer interaction. The goal of this learning activity is to familiarize students with the way instrumentalists on the one hand, and technological determinists on the other, conceive of human-technology interaction, and to assess which theory students favor. This paper describes and evaluates the efficacy of this learning activity and presents preliminary data on student responses. It also establishes a framework for understanding how students initially perceive human-technology interaction and how that understanding can be used to personalize and improve their learning. Instrumentalists believe that technology can be understood simply as a tool or neutral instrument that humans use to achieve their own ends. In contrast, technological determinists believe that technology is not fully under human control, that it has some degree of autonomy, and that it has its own ends. Exposing students to these two theories of human-technological interaction provides five benefits: First, the competing theories deepen students’ ability to describe how technology and humans interact. Second, they provide an ethical framework that students can use to describe how technology and humans should interact. Third, they provide students with a vocabulary that they can use to talk about human freedom and how the design of computing technology may constrain or expand that freedom. Fourth, by challenging students to articulate what theory they favor, the learning is personalized. Fifth, because the learning activity challenges students to express their personal beliefs about how humans and technology interact, the learning activity can help instructors develop a clearer understanding of those beliefs and whether they reinforce what Erin Cech has identified as a culture of depoliticization and disengagement in engineering culture.
... A small but growing body of research shows concrete examples of teaching ML to beginners [118], [119], [126], [127]. New social and ethical dilemmas created by AI also call for a reshaping of related training in AI ethics [128], [129]. ...
Article
Full-text available
The need for organized computing education efforts dates back to the 1950s. Since then, computing education research (CER) has evolved and matured from its early initiatives and separation from mathematics education into a respectable research specialization of its own. In recent years, a number of meta-research papers, reviews, and scientometric studies have built overviews of CER from various perspectives. This paper continues that approach by offering new perspectives on the past and present state of CER: analyses of influential papers throughout the years, of the theoretical backgrounds of CER, of the institutions and authors who create CER, and finally of the top publication venues and their citation practices. The results reveal influential contributions from early curriculum guidelines to rigorous empirical research of today, the prominence of computer programming as a topic of research, evolving patterns of learning-theory usage, the dominance of high-income countries and a cluster of 52 elite institutions, and issues regarding citation practices within the central venues of dissemination.
... Commonplace are (often implicit) burning questions such as what makes life worth living and how we can make the world a better place through design. Ethics-based methods for design are proliferating [31], and in universities there is a growing interest in teaching ethical approaches to sociotechnical design and evaluation [41]. The past few years have seen the inception of agencies dealing with such issues, including the Center for Humane Technology (founded in 2018) and the Sacred Design Lab (2020), as well as countless academic research centers, including the Technology Ethics Center at the University of Notre Dame (2019) and the Ethics, Technology, and Human Interaction Center at Georgia Tech (2020)-to say nothing of the myriad such initiatives that existed prior. ...
Preprint
Full-text available
Out of the three major approaches to ethics, virtue ethics is uniquely well suited as a moral guide in the digital age, given the pace of sociotechnical change and the complexity of society. Virtue ethics focuses on the traits, situations and actions of moral agents, rather than on rules (as in deontology) or outcomes (consequentialism). Even as interest in ethics has grown within HCI, there has been little engagement with virtue ethics. To address this lacuna and demonstrate further opportunities for ethical design, this paper provides an overview of virtue ethics for application in HCI. It reviews existing HCI work engaging with virtue ethics, provides a primer on virtue ethics to correct widespread misapprehensions within HCI, and presents a deductive literature review illustrating how existing lines of HCI research resonate with the practices of virtue cultivation, paving the way for further work in virtue-oriented design.
... Technology ethics is not a new research domain. It has been studied in different contexts, for example, online communities [30], ethics education [34], gender and tech [24,36]. Similarly, in HRI/CRI, researchers have reviewed various example settings where ethical issues arise and proposed specific principles that one should consider as an HRI/CRI practitioner [61]. ...
Article
Full-text available
Recent advancements in socially assistive robotics (SAR) have shown a significant potential of using social robotics to achieve increasing cognitive and affective outcomes in education. However, the deployments of SAR technologies also bring ethical challenges in tandem, to the fore, especially in under-resourced contexts. While previous research has highlighted various ethical challenges that arise in SAR deployment in real-world settings, most of the research has been centered in resource-rich contexts, mainly in developed countries in the ‘Global North,’ and the work specifically in the educational setting is limited. This research aims to evaluate and reflect upon the potential ethical and pedagogical challenges of deploying a social robot in an under-resourced context. We base our findings on a 5-week in-the-wild user study conducted with 12 kindergarten students at an under-resourced community school in New Delhi, India. We used interaction analysis with the context of learning, education, and ethics to analyze the user study through video recordings. Our findings highlighted four primary ethical considerations that should be taken into account while deploying social robotics technologies in educational settings; (1) language and accent as barriers in pedagogy, (2) effect of malfunctioning, (un)intended harms, (3) trust and deception, and (4) ecological viability of innovation. Overall, our paper argues for assessing the ethical and pedagogical constraints and bridging the gap between non-existent literature from such a context to evaluate better the potential use of such technologies in under-resourced contexts.
... While some universities have taught computing ethics courses (within both computer science and other fields) for many years,[250] the emphasis on ethics within computing education has increased dramatically in recent years.[251] When University of Colorado, Boulder information science professor Casey Fiesler tweeted a link to an editable spreadsheet of tech ethics classes in November 2017, it quickly grew to a crowdsourced list of more than 200 courses.[252] This plethora of courses represents a dramatic shift in computer science training and culture, with ethics becoming a popular topic of discussion and study after being largely ignored by the mainstream of the field just a few years prior. ...
Article
Full-text available
Artificial intelligence (AI), autonomous systems, and robotics are digital technologies that impact us all today, and will have momentous impact on the development of humanity and transformation of our society in the very near future. AI is implicated in the fields of computer science, law, philosophy, economics, religion, ethics, health, and more. This paper discusses the emerging field of AI ethics, how the tech industry is viewed by some as using AI ethics as window-dressing, or ethics-washing, and how employees have advanced corporate social responsibility and AI ethics as a check to big tech, with governments and public opinion often following with actions to develop responsible AI, in the aftermath of employee protests, such as against Google, Amazon, Microsoft, Salesforce, and others. This straightforward definition of ethics put forth by Walz and Firth-Butterfield is easiest to work with when discussing ethical applications and design of AI: “Ethics is commonly referred to as the study of morality. Morality... is a system of rules and values for guiding human conduct, as well as principles for evaluating those rules. Consequently, ethical behavior does not necessarily mean “good” behavior. Ethical behavior instead indicates compliance with specific values. Such values can be commonly accepted as being part of human nature (e.g., the protection of human life, freedom, and human dignity) or as a moral expectation characterizing beliefs and convictions of specific groups of people (e.g., religious rules). Moral expectations may also be of individual nature (e.g., an entrepreneur’s expectation that employees accept a company’s specific code of conduct).” This broad definition is used here because the benefit of this neutral definition of ethics is that it enables one to address the issue of ethical diversity from a regulatory and policymaking perspective.
Industry self-governance is unlikely to fully protect the public interest when it comes to powerful general-purpose technologies. It is encouraging to see that significant effort is being made by those in government, such as the US Department of Defense and the Joint Artificial Intelligence Center (JAIC), as well as by civil society, to promote responsible and trustworthy AI. U.S. federal government activity addressing AI accelerated during the 115th and 116th Congresses. President Donald Trump issued two executive orders, establishing the American AI Initiative (E.O. 13859) and promoting the use of trustworthy AI in the federal government (E.O. 13960). Federal committees, working groups, and other entities have been formed to coordinate agency activities, help set priorities, and produce national strategic plans and reports, including an updated National AI Research and Development Strategic Plan and a Plan for Federal Engagement in Developing Technical Standards and Related Tools in AI. In Congress, committees held numerous hearings, and Members introduced a wide variety of legislation to address federal AI investments and their coordination; AI-related issues such as algorithmic bias and workforce impacts; and AI technologies such as facial recognition and deepfakes. At least four laws enacted in the 116th Congress focused on AI or included AI-focused provisions: • The National Defense Authorization Act for FY2021 (P.L. 116-283) included provisions addressing various defense- and security-related AI activities, as well as the expansive National Artificial Intelligence Initiative Act of 2020 (Division E). • The Consolidated Appropriations Act, 2021 (P.L. 116-260) included the AI in Government Act of 2020 (Division U, Title I), which directed the General Services Administration to create an AI Center of Excellence to facilitate the adoption of AI technologies in the federal government.
• The Identifying Outputs of Generative Adversarial Networks (IOGAN) Act (P.L. 116-258) supported research on Generative Adversarial Networks (GANs), the primary technology used to create deepfakes. • P.L. 116-94 established a financial program related to exports in AI, among other areas.[1] Despite the differences we see and shall see between nations’ approaches to AI, there are also numerous synergies. There are many opportunities for governments and organizations to coordinate and collaborate internationally. This is likely to be increasingly important as many of the challenges and opportunities from AI extend well beyond national borders. AI regulation is hard for national governments to do by themselves. There are certainly issues of national competitiveness, but failing to partner internationally on AI development will not serve anyone's interests. Intergovernmental initiatives play a valuable role in supporting the development of responsible AI. The OECD AI recommendation is an encouraging example. The OECD Principles on Artificial Intelligence promote artificial intelligence that is innovative and trustworthy and that respects human rights and democratic values. They were adopted in May 2019 by OECD member countries when they approved the OECD Council Recommendation on Artificial Intelligence. The OECD AI Principles are the first such principles signed up to by governments. The OECD AI Principles set standards for AI that are practical and flexible enough to stand the test of time in a rapidly evolving field. They complement existing OECD standards in areas such as privacy, digital security risk management and responsible business conduct. To ensure the successful implementation of the Principles, the OECD launched the AI Policy Observatory in February 2020. The Observatory publishes practical guidance about how to implement the AI Principles, and supports a live database of AI policies and initiatives globally.
It also compiles metrics and measurement of global AI development and uses its convening power to bring together the private sector, governments, academia, and civil society. In June 2019, the G20 adopted human-centered AI Principles that draw from the OECD AI Principles. Over 40 countries, including the U.S. as well as some non-OECD members, have signed on to the OECD AI Principles. This is the first intergovernmental AI standard to date. Thus, international coordination on AI is not only critical but possible. AI will impact everyone, so everyone should have a say. It is really valuable and important at these relatively early stages of AI governance that we make the effort to hear from all people, including those who struggle to be heard. Keywords: AI, AI ethics, artificial intelligence, digital dementia, OECD AI Principles, corporate governance, the Joint Artificial Intelligence Center (JAIC), national security, Responsible AI, Privacy, Children, IoT, Sex Robots, Care Robots, Internet of Things, Internet of Toys, Smart Toys, COPPA, Cayla
... The urgency and profound importance of ethics in AI is signalled by recent landmark studies (Angwin et al. [57]; Buolamwini and Gebru [59]), seminal books (Noble [60]; O'Neil [61]; Eubanks [62]; Pasquale [63]), newly founded conferences exclusively dedicated to AI ethics (e.g. AIES and FAccT), the fast-growing adoption of ethics into syllabi in computational departments [64], increased attention to policy and regulation of AI [65], and increasing interest in ethics boards and research teams dedicated to ethics in major tech corporations. ...
Preprint
Full-text available
Machine learning (ML) and artificial intelligence (AI) tools increasingly permeate every possible social, political, and economic sphere; sorting, taxonomizing and predicting complex human behaviour and social phenomena. However, from fallacious and naive groundings regarding complex adaptive systems to datasets underlying models, these systems are beset by problems, challenges, and limitations. They remain opaque and unreliable, and fail to consider societal and structural oppressive systems, disproportionately negatively impacting those at the margins of society while benefiting the most powerful. The various challenges, problems and pitfalls of these systems are a hot topic of research in various areas, such as critical data/algorithm studies, science and technology studies (STS), embodied and enactive cognitive science, complexity science, Afro-feminism, and the broadly construed emerging field of Fairness, Accountability, and Transparency (FAccT). Yet, these fields of enquiry often proceed in silos. This thesis weaves together seemingly disparate fields of enquiry to examine core scientific and ethical challenges, pitfalls, and problems of AI. In this thesis I, a) review the historical and cultural ecology from which AI research emerges, b) examine the shaky scientific grounds of machine prediction of complex behaviour illustrating how predicting complex behaviour with precision is impossible in principle, c) audit large scale datasets behind current AI demonstrating how they embed societal historical and structural injustices, d) study the seemingly neutral values of ML research and put forward 67 prominent values underlying ML research, e) examine some of the insidious and worrying applications of computer vision research, and f) put forward a framework for approaching challenges, failures and problems surrounding ML systems as well as alternative ways forward.
... The other main lever in universities for computing ethics is its educational work. Ethics courses in computing cover issues such as law and policy, surveillance, inequality and justice, and often, AI and algorithms [29]. Alternatively, this content can be integrated broadly across the curriculum [35,43]-though it rarely arises in machine learning courses currently [68]. ...
Preprint
Full-text available
Artificial intelligence (AI) research is routinely criticized for its real and potential impacts on society, and we lack adequate institutional responses to this criticism and to the responsibility that it reflects. AI research often falls outside the purview of existing feedback mechanisms such as the Institutional Review Board (IRB), which are designed to evaluate harms to human subjects rather than harms to human society. In response, we have developed the Ethics and Society Review board (ESR), a feedback panel that works with researchers to mitigate negative ethical and societal aspects of AI research. The ESR's main insight is to serve as a requirement for funding: researchers cannot receive grant funding from a major AI funding program at our university until the researchers complete the ESR process for the proposal. In this article, we describe the ESR as we have designed and run it over its first year across 41 proposals. We analyze aggregate ESR feedback on these proposals, finding that the panel most commonly identifies issues of harms to minority groups, inclusion of diverse stakeholders in the research plan, dual use, and representation in data. Surveys and interviews of researchers who interacted with the ESR found that 58% felt that it had influenced the design of their research project, 100% are willing to continue submitting future projects to the ESR, and that they sought additional scaffolding for reasoning through ethics and society issues.
... Heightened public attention to data misuse and discrimination has prompted many university educators to prioritize technology and data ethics in curriculum design (Bates et al., 2020;Fiesler et al., 2020;Metcalf et al., 2015). While some have called for integrating curriculum on ethical codes of conduct into data science programs (Saltz et al., 2018), others have argued for supporting environments where students can grapple with ethical and political dilemmas when writing code (Malazita and Resetar, 2019;Martin and Weltz, 1999;Peck, 2019). ...
Article
Full-text available
All datasets emerge from and are enmeshed in power-laden semiotic systems. While emerging data ethics curriculum is supporting data science students in identifying data biases and their consequences, critical attention to the cultural histories and vested interests animating data semantics is needed to elucidate the assumptions and political commitments on which data rest, along with the externalities they produce. In this article, I introduce three modes of reading that can be engaged when studying datasets—a denotative reading (extrapolating the literal meaning of values in a dataset), a connotative reading (tracing the socio-political provenance of data semantics), and a deconstructive reading (seeking what gets Othered through data semantics and structure). I then outline how I have taught students to engage these methods when analyzing three datasets in Data and Society—a course designed to cultivate student competency in politically aware data analysis and interpretation. I show how combined, the reading strategies prompt students to grapple with the double binds of perceiving contemporary problems through systems of representation that are always situated, incomplete, and inflected with diverse politics. While I introduce these methods in the context of teaching, I argue that the methods are integral to any data practice in the conclusion.
... The urgency and profound importance of fairness, justice, and ethics in Artificial Intelligence (AI) has been underscored by recent landmark studies [6,13,60] and foundational books [17,22,25,29,58,61,62,86]. Furthermore, the critical importance of the topic is marked by newly founded conferences exclusively dedicated to AI ethics (e.g. AIES and FAccT), the fast-growing adoption of ethics into syllabi in computational departments [26], newly introduced requirements for the inclusion of broader impacts statements for AI and Machine Learning (ML) papers submitted to premier AI conferences such as NeurIPS,[1] increased attention to policy and regulation of AI [38], and increasing interest in ethics boards and research teams dedicated to ethics in major tech corporations. ...
Preprint
Full-text available
How has recent AI Ethics literature addressed topics such as fairness and justice in the context of continued social and structural power asymmetries? We trace both the historical roots and current landmark work that have been shaping the field and categorize these works under three broad umbrellas: (i) those grounded in Western canonical philosophy, (ii) mathematical and statistical methods, and (iii) those emerging from critical data/algorithm/information studies. We also survey the field and explore emerging trends by examining the rapidly growing body of literature that falls under the broad umbrella of AI Ethics. To that end, we read and annotated peer-reviewed papers published over the past four years in two premier conferences: FAccT and AIES. We organize the literature based on an annotation scheme we developed according to three main dimensions: whether the paper deals with concrete applications, use-cases, and/or people's lived experience; to what extent it addresses harmed, threatened, or otherwise marginalized groups; and if so, whether it explicitly names such groups. We note that although the goals of the majority of FAccT and AIES papers were often commendable, their consideration of the negative impacts of AI on traditionally marginalized groups remained shallow. Taken together, our conceptual analysis and the data from annotated papers indicate that the field would benefit from an increased focus on ethical analysis grounded in concrete use-cases, people's experiences, and applications as well as from approaches that are sensitive to structural and historical power asymmetries.
... Research on computer science education has revealed that the traditional approach of teaching ethics as distinct from the subject content often fails to prepare students for real-world work (Boss, 1994;Gardner, 1991). Fiesler et al. (2020) analyzed 51 university-level AI and ML courses and found that a majority of the courses cover ethics-related topics within the last two classes. Ethics topics are often considered as a part of these technical courses "if time allows." ...
Article
The rapid expansion of artificial intelligence (AI) necessitates promoting AI education at the K-12 level. However, educating young learners to become AI literate citizens poses several challenges. The components of AI literacy are ill-defined and it is unclear to what extent middle school students can engage in learning about AI as a sociotechnical system with socio-political implications. In this paper we posit that students must learn three core domains of AI: technical concepts and processes, ethical and societal implications, and career futures in the AI era. This paper describes the design and implementation of the Developing AI Literacy (DAILy) workshop that aimed to integrate middle school students' learning of the three domains. We found that after the workshop, most students developed a general understanding of AI concepts and processes (e.g., supervised learning and logic systems). More importantly, they were able to identify bias, describe ways to mitigate bias in machine learning, and start to consider how AI may impact their future lives and careers. At exit, nearly half of the students explained AI as not just a technical subject, but one that has personal, career, and societal implications. Overall, this finding suggests that the approach of incorporating ethics and career futures into AI education is age appropriate and effective for developing AI literacy among middle school students. This study contributes to the field of AI Education by presenting a model of integrating ethics into the teaching of AI that is appropriate for middle school students.
... In addition to offering insights to researchers and practitioners, we believe our work presents a concrete tool to educators and students interested in issues of bias and unfairness in ML. In recent years, calls for "greater integration of ethics across the computer science curriculum" have amplified (see, e.g., [14]). However, instructors without a background in the area may lack the necessary tools to cover these issues in depth [22,26]. ...
Preprint
Motivated by the growing importance of reducing unfairness in ML predictions, Fair-ML researchers have presented an extensive suite of algorithmic "fairness-enhancing" remedies. Most existing algorithms, however, are agnostic to the sources of the observed unfairness. As a result, the literature currently lacks guiding frameworks to specify conditions under which each algorithmic intervention can potentially alleviate the underpinning cause of unfairness. To close this gap, we scrutinize the underlying biases (e.g., in the training data or design choices) that cause observational unfairness. We present a bias-injection sandbox tool to investigate fairness consequences of various biases and assess the effectiveness of algorithmic remedies in the presence of specific types of bias. We call this process the bias(stress)-testing of algorithmic interventions. Unlike existing toolkits, ours provides a controlled environment to counterfactually inject biases in the ML pipeline. This stylized setup offers the distinct capability of testing fairness interventions beyond observational data and against an unbiased benchmark. In particular, we can test whether a given remedy can alleviate the injected bias by comparing the predictions resulting after the intervention in the biased setting with true labels in the unbiased regime -- that is, before any bias injection. We illustrate the utility of our toolkit via a proof-of-concept case study on synthetic data. Our empirical analysis showcases the type of insights that can be obtained through our simulations.
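The counterfactual bias-injection idea described above can be illustrated with a toy sketch. Everything below (the population model, the 30% label-flip rate, the parity metric) is an invented assumption for illustration, not the authors' toolkit: we generate an unbiased synthetic population, corrupt the observed labels of one group, and measure how the demographic parity gap diverges from the unbiased benchmark.

```python
import random

random.seed(0)

def make_population(n=2000):
    """Synthetic individuals: a group flag and a latent qualification
    score that determines the true (pre-bias) label."""
    people = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        score = random.random()          # qualification, independent of group
        people.append({"group": group, "true": score > 0.5})
    return people

def inject_label_bias(people, victim="B", flip_rate=0.3):
    """Counterfactually corrupt the *observed* labels: a fraction of
    qualified members of the victim group are recorded as negative."""
    biased = []
    for p in people:
        obs = p["true"]
        if p["group"] == victim and p["true"] and random.random() < flip_rate:
            obs = False
        biased.append({**p, "observed": obs})
    return biased

def selection_rate(people, group, key):
    members = [p for p in people if p["group"] == group]
    return sum(p[key] for p in members) / len(members)

pop = inject_label_bias(make_population())

# Demographic parity gap in the unbiased truth vs. the biased observations.
gap_true = abs(selection_rate(pop, "A", "true") - selection_rate(pop, "B", "true"))
gap_obs = abs(selection_rate(pop, "A", "observed") - selection_rate(pop, "B", "observed"))
print(f"gap before injection: {gap_true:.3f}, after: {gap_obs:.3f}")
```

Because the truth labels survive the injection, any fairness remedy trained on the biased observations can be scored against the unbiased regime, which is the distinct capability the abstract claims for the sandbox.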
Article
The design of new technologies is a cooperative task (between designers on teams, and between designers and users) with ethical import. Studying technology development teams' engagement with the ethical aspects of their work is important, but engagement with ethical issues is an unobservable construct without agreement on what observable factors comprise it. Ethical sensitivity (ES), a construct studied in medicine, accounting, and other professions, offers a framework of observable factors by operationalizing ethical engagement in workplaces into component parts. However, ES has primarily been studied as a property of individuals rather than groups and in professions outside of computing. This paper uses a corpus of 108 ES studies from 1985-2020 to adapt the framework for studies of technology design teams. From the ES corpus, we build an umbrella framework that conceptualizes ES as comprising the moment of noticing an ethical problem (recognition), the process of building understanding of the situation (particularization), and the decision about what to do (judgment). This framework makes theoretical and methodological contributions to the study of how ethics are operationalized on design teams. We find that ethical sensitivity provides useful language for studies of collaboration and communication around ethics; suggests opportunities for, and evaluations of, ethical interventions for design workplaces; and connects team members' backgrounds, educational experiences, work practices, and organizational factors to design decisions. Simultaneously, existing research in HCI and CSCW addresses the limited range of research methods currently employed in the ES literature, adding rich, contextualized data about situated and embodied ethical practice to the theory.
Article
Artificial Intelligence (AI) systems are embedded in institutions that are not diverse, that are inequitable, unjust, and exclusionary. How do we address the interface between AI systems and an unjust world, in service to human flourishing? One mechanism for addressing AI Ethics is AI Ethics Education: training those who will build, use, and/or be subject to AI systems to have clear moral reasoning, make responsible decisions, and take right actions. This paper presents, as part of a larger project, work on what AI Ethics instructors currently do and how they describe their motivating concerns. I find that although AI Ethics content and pedagogy is varied, there are some common motivating concerns particular to this kind of teaching, which may be useful in structuring future guidance for new AI Ethics teachers, evaluating existing pedagogy, guiding research on new pedagogies, and promoting discussion with the AI Ethics community.
Article
Full-text available
The daily influence of new technologies on shaping and reshaping human lives necessitates attention to the ethical development of the future computing workforce. To improve computer science students' ethical decision-making, it is important to know how they make decisions when they face ethical issues. This article contributes to the research and practice of computer ethics education by identifying the factors that influence ethical decision-making of computer science students and providing implications to improve the process. Using a constructivist grounded theory approach, the data from the text of the students' discussion postings on three ethical scenarios in computer science and the follow-up interviews were analyzed. Based on the analysis, relating to real-life stories, thoughtfulness about responsibilities that come from the technical knowledge of developers, showing care for users or others who might be affected, and recognition of fallacies contributed to better ethical decision-making. On the other hand, falling for fallacies and empathy for developers negatively influenced students' ethical decision-making process. Based on the findings, this study presents a model of factors that influence the ethical decision-making process of computer science students, along with implications for future researchers and computer ethics educators.
Preprint
Full-text available
This special issue interrogates the meaning and impacts of "tech ethics": the embedding of ethics into digital technology research, development, use, and governance. In response to concerns about the social harms associated with digital technologies, many individuals and institutions have articulated the need for a greater emphasis on ethics in digital technology. Yet as more groups embrace the concept of ethics, critical discourses have emerged questioning whose ethics are being centered, whether "ethics" is the appropriate frame for improving technology, and what it means to develop "ethical" technology in practice. This interdisciplinary issue takes up these questions, interrogating the relationships among ethics, technology, and society in action. This special issue engages with the normative and contested notions of ethics itself, how ethics has been integrated with technology across domains, and potential paths forward to support more just and egalitarian technology. Rather than starting from philosophical theories, the authors in this issue orient their articles around the real-world discourses and impacts of tech ethics--i.e., tech ethics in action.
Purpose
As data-driven tools increasingly shape our life and tech ethics crises become strikingly frequent, data ethics coursework is urgently needed. The purpose of this study is to map the field of data ethics curricula, tracking relations between courses, instructors, texts and writers, and present a proof-of-concept interactive website for exploring these relations. This method is designed to be used in curricular research and development and provides multiple vantage points on this multidisciplinary field.
Design/methodology/approach
The authors use data science methods to foster insights about the field of data ethics education and literature. The authors present a semantic, linked open data graph in the Resource Description Framework, along with proof-of-concept analyses and an exploratory website. Its framework is open-source and language-agnostic, providing the seed for future contributions of code, syllabi and resources from the global data ethics community.
Findings
This method provides a convenient means of exploring an overview of the field of data ethics’ social and textual relations. For educators designing or refining a course, the authors provide a method for curricular introspection and discovery of transdisciplinary curricula.
Research limitations/implications
The syllabi the authors have collected are self-selected and represent only a subset of the field. Furthermore, this method exclusively represents a course’s assigned literature rather than a holistic view of what courses teach. The authors present a prototype rather than a finished product.
Originality/value
This curricular survey provides a new way of modeling a field of study, using existing ontologies to organize graph data into a comprehensible overview. This framework may be repurposed to map the institutional knowledge structures of other disciplines, as well.
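The linked-data modeling the abstract describes rests on RDF-style subject–predicate–object triples connecting courses, instructors, and texts. A minimal stdlib sketch of that idea follows; the course names, text identifiers, and predicates are invented examples, not data from the study:

```python
# Tiny triple store standing in for the paper's RDF graph.
# All identifiers below are hypothetical illustrations.
triples = [
    ("course:DataEthics101", "assigns", "text:WeaponsOfMathDestruction"),
    ("course:DataEthics101", "taughtBy", "person:InstructorX"),
    ("course:CriticalDataStudies", "assigns", "text:WeaponsOfMathDestruction"),
    ("course:CriticalDataStudies", "assigns", "text:AlgorithmsOfOppression"),
    ("text:WeaponsOfMathDestruction", "writtenBy", "person:ONeil"),
]

def objects(subject, predicate):
    """All objects reachable from subject via predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

def subjects(predicate, obj):
    """All subjects pointing at obj via predicate."""
    return [s for s, p, o in triples if p == predicate and o == obj]

# The kind of relation the survey maps: which courses assign the same text?
shared = subjects("assigns", "text:WeaponsOfMathDestruction")
print(shared)
```

A real implementation would use an RDF library and published ontologies so the graph stays queryable with standard tools (e.g. SPARQL), but the course-to-text join above is the core operation behind the curricular-overlap analyses the abstract mentions.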
Preprint
Since any training in AI ethics is first and foremost indebted to a conception of ethics training in general, we identify the specific requirements related to the ethical dimensions of this cutting-edge technological innovation. We show how a pragmatist approach inspired by the work of John Dewey allows us to clearly identify both the essential components of such training and the specific fields related to the development of AI systems. More precisely, by focusing on some central characteristics of such a pragmatist approach, namely anti-foundationalism, anti-dualism and anti-skepticism, characteristics shared by the philosophies of the main representatives of the pragmatist movement, we will see how the different components of ethical competence (ethical sensitivity, reflexive capacities and dialogical capacities) can be conceived in a dynamic and interdependent way. We will then be able to examine the specific fields of training in AI ethics, insisting on the necessary complementarity between the specific moral dilemmas associated with this technology and the technical, social and normative (especially legislative) aspects in order to adequately grasp the ethical issues related to the design, development and deployment of AI systems. In doing so, we will be able to determine the requirements that should guide the implementation of adequate training in AI ethics, by providing benchmarks for the teaching of these issues.
Article
The Computer Science for All movement is bringing CS to K-12 classrooms across the nation. At the same time, new technologies created by computer scientists have been reproducing existing inequities that directly impact today's youth, while being “promoted and perceived as more objective or progressive than the discriminatory systems of a previous era” [1, p. 5–6]. Current efforts are being made to expose students to the social impact and ethics of computing at both the K-12 and university-level—which we refer to as “socially responsible computing” (SRC) in this paper. Yet there is a lack of research describing what such SRC teaching and learning actively involve and look like, particularly in K-12 classrooms. This paper fills this gap with findings from a research-practice partnership, through a qualitative study in an Advanced Placement Computer Science Principles classroom enrolling low-income Latino/a/x students from a large urban community. The findings illustrate 1) details of teaching practice and student learning during discussions about SRC; 2) the impact these SRC experiences have on student engagement with CS; 3) a teacher's reflections on key considerations for effective SRC pedagogy; and 4) why students’ perspectives and agency must be centered through SRC in computing education.
Article
In response to public scrutiny of data-driven algorithms, the field of data science has adopted ethics training and principles. Although ethics can help data scientists reflect on certain normative aspects of their work, such efforts are ill-equipped to generate a data science that avoids social harms and promotes social justice. In this article, I argue that data science must embrace a political orientation. Data scientists must recognize themselves as political actors engaged in normative constructions of society and evaluate their work according to its downstream impacts on people's lives. I first articulate why data scientists must recognize themselves as political actors. In this section, I respond to three arguments that data scientists commonly invoke when challenged to take political positions regarding their work. In confronting these arguments, I describe why attempting to remain apolitical is itself a political stance—a fundamentally conservative one—and why data science's attempts to promote “social good” dangerously rely on unarticulated and incrementalist political assumptions. I then propose a framework for how data science can evolve toward a deliberative and rigorous politics of social justice. I conceptualize the process of developing a politically engaged data science as a sequence of four stages. Pursuing these new approaches will empower data scientists with new methods for thoughtfully and rigorously contributing to social justice.
Article
Weaponized in support of deregulation and self-regulation, “ethics” is increasingly identified with technology companies' self-regulatory efforts and with shallow appearances of ethical behavior. So-called “ethics washing” by tech companies is on the rise, prompting criticism and scrutiny from scholars and the tech community. The author defines “ethics bashing” as the parallel tendency to trivialize ethics and moral philosophy. Underlying these two attitudes are a few misunderstandings: (1) philosophy is understood in opposition and as alternative to law, political representation, and social organizing; (2) philosophy and “ethics” are perceived as formalistic, vulnerable to instrumentalization, and ontologically flawed; and (3) moral reasoning is portrayed as mere “ivory tower” intellectualization of complex problems that need to be dealt with through other methodologies. This article argues that the rhetoric of ethics and morality should not be reductively instrumentalized, either by the industry in the form of “ethics washing”, or by scholars and policy-makers in the form of “ethics bashing”. Grappling with the role of philosophy and ethics requires moving beyond simplification and seeing ethics as a mode of inquiry that facilitates the evaluation of competing tech policy strategies. We must resist reducing moral philosophy's role and instead must celebrate its special worth as a mode of knowledge-seeking and inquiry. Far from mandating self-regulation, moral philosophy facilitates the scrutiny of various modes of regulation, situating them in legal, political, and economic contexts. Moral philosophy indeed can explain the relationship between technology and other worthy goals and can situate technology within the human, the social, and the political.
Article
This article introduces the special issue “Technology Ethics in Action: Critical and Interdisciplinary Perspectives”. In response to recent controversies about the harms of digital technology, discourses and practices of “tech ethics” have proliferated across the tech industry, academia, civil society, and government. Yet despite the seeming promise of ethics, tech ethics in practice suffers from several significant limitations: tech ethics is vague and toothless, has a myopic focus on individual engineers and technology design, and is subsumed into corporate logics and incentives. These limitations suggest that tech ethics enables corporate “ethics-washing”: embracing the language of ethics to defuse criticism and resist government regulation, without committing to ethical behavior. Given these dynamics, I describe tech ethics as a terrain of contestation where the central debate is not whether ethics is desirable, but what “ethics” entails and who gets to define it. Current approaches to tech ethics are poised to enable technologists and technology companies to label themselves as “ethical” without substantively altering their practices. Thus, those striving for structural improvements in digital technologies must be mindful of the gap between ethics as a mode of normative inquiry and ethics as a practical endeavor. In order to better evaluate the opportunities and limits of tech ethics, I propose a sociotechnical approach that analyzes tech ethics in light of who defines it and what impacts it generates in practice.
Thesis
Full-text available
While Artificial Intelligence (AI) is raising more and more ethical issues (fairness, explainability, privacy, etc.), a set of regulatory tools and methods have emerged over the past few years, such as fairness metrics, explanation procedures, anonymization methods, and so on. When data are granular, voluminous and behavioral, these “responsible tools” have trouble regulating AI models without using the normative categories usually applied to formulate critiques. How can we normalize AI that pretends to compute the world without categories? To answer this question, we have developed, using the technical literature from AI ethics, an algorithmic method to regulate AI regarding ethical issues, such as discrimination, opacity and privacy. We then formulate four empirical and theoretical critiques to highlight the limitations of technical tools to address ethics. First, we pinpoint the limitations of the methods that generate post-hoc explanations of "black box" models. Second, we show that we cannot only rely on transparency to address ethical issues, since explanations cannot always reveal discriminations. We then demonstrate, using concepts from Boltanski’s pragmatic sociology, that the methods for fighting discrimination tend towards a “complex domination system” in which AI constantly modifies the contours of reality without offering any outlet for criticism. Finally, we show that AI is more generally part of a movement of extension that dilutes the role of institutions. These four empirical and theoretical criticisms finally allow us to adjust our first proposal to normalize AI. Starting from a technical tool, we finally propose an open and material inquiry allowing us to constantly update the question of ends and means within AI collectives.
Article
Full-text available
Science fiction in particular offers students a way to cultivate their capacity for moral imagination.
Article
Full-text available
Computing technologies and artifacts are increasingly integrated into most aspects of our professional, social, and private lives. One consequence of this growing ubiquity of computing is that it can have significant ethical implications that computing professionals need to be aware of. The relationship between ethics and computing has long been discussed. However, this is the first comprehensive survey of the mainstream academic literature of the topic. Based on a detailed qualitative analysis of the literature, the article discusses ethical issues, technologies that they are related to, and ethical theories, as well as the methodologies that the literature employs, its academic contribution, and resulting recommendations. The article discusses general trends and argues that the time has come for a transition to responsible research and innovation to ensure that ethical reflection of computing has practical and manifest consequences.
Article
Full-text available
The second report from Project ImpactCS is given here, and a new required area of study - ethics and social impact - is proposed.
Article
Full-text available
Learning outcomes are broad statements of what is achieved and assessed at the end of a course of study. The concept of learning outcomes and outcome-based education is high on today's education agenda. The idea has features in common with the move to instructional objectives which became fashionable in the 1960s, but which never had the impact on education practice that it merited. Five important differences between learning outcomes and instructional objectives can be recognized: (1) Learning outcomes, if set out appropriately, are intuitive and user friendly. They can be used easily in curriculum planning, in teaching and learning and in assessment. (2) Learning outcomes are broad statements and are usually designed round a framework of 8-12 higher order outcomes. (3) The outcomes recognize the authentic interaction and integration in clinical practice of knowledge, skills and attitudes and the artificiality of separating these. (4) Learning outcomes represent what is achieved and assessed at the end of a course of study and not only the aspirations or what is intended to be achieved. (5) A design-down approach encourages ownership of the outcomes by teachers and students.
Article
This article establishes and addresses opportunities for ethics integration into Machine-learning (ML) courses. Following a survey of the history of computing ethics and the current need for ethical consideration within ML, we consider the current state of ML ethics education via an exploratory analysis of course syllabi in computing programs. The results reveal that though ethics is part of the overall educational landscape in these programs, it is not frequently a part of core technical ML courses. To help address this gap, we offer a preliminary framework, developed via a systematic literature review, of relevant ethics questions that should be addressed within an ML project. A pilot study with 85 students confirms that this framework helped them identify and articulate key ethical considerations within their ML projects. Building from this work, we also provide three example ML course modules that bring ethical thinking directly into learning core ML content. Collectively, this research demonstrates: (1) the need for ethics to be taught as integrated within ML coursework, (2) a structured set of questions useful for identifying and addressing potential issues within an ML project, and (3) novel course models that provide examples for how to practically teach ML ethics without sacrificing core course content. An additional by-product of this research is the collection and integration of recent publications in the emerging field of ML ethics education.
Article
A Harvard-based pilot program integrates class sessions on ethical reasoning into courses throughout its computer science curriculum.
Article
Use of a codebook to categorize meaning units is a well-known research strategy in qualitative inquiry. However, methodology for the creation of a codebook in practice is not standardized, and specific guidance for codebook ideation is sometimes unclear, especially to novice qualitative researchers. This article describes the procedure that was utilized to create a codebook, which adapted an affinity diagram methodology (Scupin, 1997), an approach used in user-centered design. For this research, affinity diagramming was applied to a method outlined by Kurasaki (2000) for codebook ideation. Annotations of a subset of military veterans’ transcripts were utilized in congruence with affinity diagramming to create a codebook to categorize the phenomenon of reintegration into the civilian community after service (Haskins Lisle, 2017). This method could be useful in exploratory research that utilizes a codebook generated in vivo from annotations, especially for novice researchers who are overwhelmed by the codebook creation phase.
Conference Paper
This special session will involve three related components. It will begin with a history of the ACM Code of Ethics and Professional Conduct (the Code), its evolving presence in the computing curriculum guidelines over time, and its documented use outside of academe. This will lead into an overview of the major changes to the Code that occurred in the most recent update. The third component and primary focus of the session will be to work with participants to identify ways that ACM and the ACM Committee on Professional Ethics (COPE) can help Computer Science educators integrate the Code as broadly as possible into diverse programs, ranging from Kindergarten to PhD-level. The outcome of the session would be a preliminary set of guidelines for programs and departments to adopt the Code, potential challenges to be addressed when formalizing those guidelines, and suggested approaches to resolve these difficulties. If attendance is sufficiently large, we would adopt a jigsaw model, breaking into smaller focus groups that are tasked with distinct portions of the Code. Each group reports back at the end, and members of COPE will collate the results into a document for future distribution and work.
Article
An important public discussion is underway on the values and ethics of digital technologies as designers work to prevent misinformation campaigns, online harassment, exclusionary tools, and biased algorithms. This monograph reviews 30 years of research on theories and methods for surfacing values and ethics in technology design. It maps the history of values research, beginning with critique of design from related disciplines and responses in Human-Computer Interaction (HCI) research. The review then explores ongoing controversies in values-oriented design, including disagreements around terms, expressions and indicators of values and ethics, and whose values to consider. Next, the monograph describes frameworks that attempt to move values-oriented design into everyday design settings. These frameworks suggest open challenges and opportunities for the next 30 years of values in HCI research.
Conference Paper
This paper presents Quantified Self: Immersive Data and Theater Experience (QSelf) as a case study in collaborative and interdisciplinary learning and toward a project-based education model that promotes technical art projects. 22 students from several departments engaged in a semester-long effort to produce an immersive theater show centered on ethical uses of personal data, a show that drew more than 240 people over 6 performances. The project was housed out of the computer science department and involved multiple computer science undergraduate and graduate students who had the chance to work with students from the department of theater and dance. By analyzing the technical artifacts students created and post-interviews, we found this project created a novel and productive space for computer science students to gain applied experience and learn about the social impacts of their work while the arts students gained a fluency and understanding around the technical issues presented.
Conference Paper
Our paper offers several novel activities for teaching ethics in the context of a computer science (CS) class. Rather than approaches that teach ethics as an isolated course, we outline and discuss multiple ethics education interventions meant to work in the context of an existing technical course. We piloted these activities in a Human Centered Computing course and found strong engagement and interest from our students in ethics topics without sacrificing core course material. Using a pre/post survey and examples from student assignments, we evaluate the impact of these interventions and discuss their relevance to other CS courses. We further make suggestions for embedding ethics in other CS education contexts.
Conference Paper
Data science is a new field that integrates aspects of computer science, statistics and information management. As a new field, ethical issues a data scientist may encounter have received little attention to date, and ethics training within a data science curriculum has received even less attention. To address this gap, this article explores the different codes of conduct and ethics frameworks related to data science. We compare this analysis with the results of a systematic literature review focusing on ethics in data science. Our analysis identified twelve key ethics areas that should be included within a data science ethics curriculum. Our research notes that none of the existing codes or frameworks covers all of the identified themes. Data science educators and program coordinators can use our results as a way to identify key ethical concepts that can be introduced within a data science program.
Article
Mock trials are an effective and fun way of eliciting thoughtful dialogue from students, and encouraging them to produce incisive analyses of current ethical dilemmas related to computers and society. This paper describes our experience using mock trials in two computer ethics courses. Each trial was centered on a specific controversial and ethically or legally ambiguous topic related to current computer usage in society. Students participated in a series of mock trials during the term, alternating their role in each trial between jury, proponent, and opponent. Class participation was nearly 100% for every trial, with many students electing to define their own sub-role within their assigned major role. The logistics of the trials were initially difficult to administer and monitor; however, they quickly became manageable as we gained more experience with the opportunities and pitfalls associated with the mock-trial system, and as students volunteered suggestions for improvements.
Article
Usability has been widely implemented in technical communication curricula and workplace practices, but little attention has focused specifically on how usability and its pedagogy are addressed in our literature. This study reviews selected technical communication textbooks, pedagogical and landmark texts, and online course syllabi and descriptions, and argues that meager attention is given to usability, thus suggesting the need for more in-depth and productive discussions on usability practices, strategies, and challenges.
Article
This paper presents the results of a study of the effect of a business ethics course in enhancing the ability of students to recognize ethical issues. The findings show that compared to students who do not complete such a course, students enrolled in a business ethics course experience substantial improvement in that ability.
Article
Most scholars, including Lawrence Kohlberg, have maintained that the principles of human development can mesh readily with the goals of the educational system. However, children's intuitive theories and conceptions turn out to be so powerful that they often undermine the overt goals of education. Indeed, there is typically a disjunction between early forms of understanding, the forms that school attempts to inculcate, and the kinds of knowledge required for expert performance in a domain. Though the issue has not been investigated, such disjunctions may obtain in the moral domain as well. It should be possible to bridge the gap between developmental and educational concerns; but such connection can only take place if the robustness of early conceptions is fully acknowledged and appropriate interventions are designed.
Article
The issue of responsibility on the part of the computer professional is one that has blossomed very recently. Members of the computing field have become very much aware of some of the issues, even if there isn't any consensus on how to handle them. The ACM debates on standards of professional conduct and on the desirability of a professional society such as the ACM taking formal positions on social issues are illustrative. It is the position of this paper that the faculty member should try to prepare the student to make better decisions for himself rather than try to persuade him to take particular stands on particular issues.
Article
‘the duty of the law schools is to help its students to understand the ultimate significance of the lifework they have undertaken: to see the ultimate purpose of a lawyer's work… .’ [Brainard Currie] The Lord Chancellor's Advisory Committee on Legal Education and Conduct (ACLEC) has recently called upon academic law teachers of the undergraduate degree in law to take more of an interest in professional ethics. This means that academic law teachers can no longer set the subject aside as something to be dealt with during vocational legal education. Professional ethics must be taught pervasively, ie at each stage of legal education. This paper argues, however, that professional ethics must be taught pervasively in a further sense: even within the undergraduate curriculum, the task of educating tomorrow's lawyers in professional ethics cannot be left to one or more specialists in the subject.
Article
Reports on a study of the effect of community service on 71 undergraduate students. Finds that community service work combined with discussion of relevant moral issues is an effective way of moving students into the postconventional stage of principled moral reasoning. Discusses other benefits of community service work. (CFR)
Article
A survey of 106 medical students assessing their interest in and attitudes to medical ethics in the curriculum is reported by the authors. Results indicate that 64 per cent of the students rated the importance of medical ethics to good medical care as high or critical and 66 per cent desired to learn more about the topic. However, in reports of patient encounters identifying ethical issues, less than six per cent of the students reported a frequency of more than one such patient encounter per week. The students also demonstrated a greater awareness of more obvious ethical issues than of more subtle, less publicised issues. When asked how medical ethics should be taught, the students clearly affirmed a desire for an integrated exposure to the subject throughout the medical curriculum. Possible implications of these findings for medical education are discussed.
Article
Despite attempts to describe the "ideal" medical ethics curriculum, few data exist describing current practices in medical ethics education to guide curriculum directors. This study aimed to determine the scope and content of required, formal ethics components in the curricula of U.S. medical schools. A questionnaire sent to all curriculum directors of four-year medical schools in the U.S. (n = 121) requested course syllabi for all required, formal ethics components in the four-year curriculum. Syllabi were coded and analyzed to produce a profile of course objectives, teaching methods, course contents, and methods for assessing students. Questionnaires were returned by 87 representatives of the schools (72%). A total of 69 (79%) required a formal ethics course, and 58 (84%) provided their ethics course syllabi. Analysis and codification of all syllabi identified ten course objectives, eight teaching methods, 39 content areas, and six methods of assessing students. The means for individual schools were three objectives, four teaching methods, 13 content areas, and two methods of assessment. The 58 syllabi either required or recommended 1,191 distinct readings, only eight of which were used by more than six schools. Ethics education is far from homogeneous among U.S. medical schools, in both content and extensiveness. While the study of syllabi demonstrates significant areas of overlap with recent efforts to identify an "ideal" ethics curriculum for medical students, several areas of weakness emerged that require attention from medical educators.
The Ethical Engine: Integrating Ethical Design into
  • Evan Peck
More than 50 tech ethics courses with links to syllabi
  • Cory Doctorow
What Our Tech Ethics Crisis Says About the State of Computer Science Education. How We Get to Next
  • Casey Fiesler
Tech's ethical 'dark side': Harvard, Stanford and others want to address it
  • Natasha Singer
Computer science faces an ethics crisis; the Cambridge Analytica scandal proves it. The Boston Globe
  • Yonatan Zunger
Amazon Reportedly Killed an AI Recruitment System Because It Couldn't Stop the Tool from Discriminating Against Women
  • David Meyer
'The Business of War': Google Employees Protest Work for the Pentagon
  • Scott Shane
  • Daisuke Wakabayashi