Conference Paper

More Than "If Time Allows": The Role of Ethics in AI Education

Authors: Garrett, Beard, and Fiesler

... Ethical training is taken to be an essential mechanism in the education of various domains (Roberts et al. 2005; Robinson 2007; Grosz et al. 2019). Though the teaching of AI ethics is still at an early stage of development, the field is considered an important aspect of AI education that ought to be integrated into college education in a systematic way (Garrett, Beard, and Fiesler 2020; Borenstein and Howard 2020). Only when individuals have a basic understanding of AI ethics, and of ethical behaviors and decisions, are they able to make ethical decisions in the development or deployment of AI in the real world (Furey and Martin 2019). ...
... First, regulation and accreditation within AI educational domains need to include a requirement for ethical human-AI teamwork. While ethics is seen as an essential part of a contemporary AI education (Garrett, Beard, and Fiesler 2020), the complexities of teamwork outlined above necessitate its explicit consideration. In addition to serving as a guide for building ethical HATs, the model could also provide an essential educational resource for teaching students and professionals to understand the human-AI teaming relationship that they may be responsible for implementing and monitoring. ...
... While academics have been studying the topics of teaching AI and AI ethics for more than half a century (e.g., Chand, 1974; Gehman, 1984; Martin et al., 1996; Applin, 2006; Ahmad, 2014), the systematic assessment of the topics, developments, and trends in teaching AI ethics is a relatively recent endeavor. However, most of the previous research that focused on a systematic analysis of teaching AI ethics suffered from one or more of the following limitations: 1) having a limited disciplinary scope (e.g., integration of ethics only in courses in machine learning, Saltz et al., 2019; engineering, Bielefeldt et al., 2019, Nasir et al., 2021; human-computer interaction, Khademi & Hui, 2020; software engineering, Towell, 2003; or distributed systems, Abad, Ortiz-Holguin, & Boza, 2021); 2) having limited geographical coverage and, as explained in Hughes et al. (2020) and Mohamed et al. (2020), being biased towards Western cultures (e.g., Moller & Crick, 2018; Fiesler et al., 2020; Garrett et al., 2020; Raji et al., 2021; Homkes & Strikwerda, 2009); or 3) including courses taught at only a single level (e.g., introductory level, Becker & Fitzpatrick, 2019). ...
... Most importantly, all these previous attempts to map the field of teaching AI ethics are human-driven approaches, with topics of interest manually identified based on grouping instructor-described topics into higher-level categories (e.g., Fiesler et al., 2020) or on open coding (e.g., Garrett et al., 2020; Raji et al., 2021). However, such approaches are sensitive to the subjectivity and noise inherent in human decisions and to the limited ability of human analysts to work effectively at very large scales. ...
Article
Full-text available
The domain of Artificial Intelligence (AI) ethics is not new, with discussions going back at least 40 years. Teaching the principles and requirements of ethical AI to students is considered an essential part of this domain, with an increasing number of technical AI courses taught at several higher-education institutions around the globe including content related to ethics. By using Latent Dirichlet Allocation (LDA), a generative probabilistic topic model, this study uncovers topics in teaching ethics in AI courses and their trends related to where the courses are taught, by whom, and at what level of cognitive complexity and specificity according to Bloom’s taxonomy. In this exploratory study based on unsupervised machine learning, we analyzed a total of 166 courses: 116 from North American universities, 11 from Asia, 36 from Europe, and 10 from other regions. Based on this analysis, we were able to synthesize a model of teaching approaches, which we call BAG (Build, Assess, and Govern), that combines specific cognitive levels, course content topics, and disciplines affiliated with the department(s) in charge of the course. We critically assess the implications of this teaching paradigm and provide suggestions about how to move away from these practices. We challenge teaching practitioners and program coordinators to reflect on their usual procedures so that they may expand their methodology beyond the confines of stereotypical thought and traditional biases regarding what disciplines should teach and how. This article appears in the AI & Society track.
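The LDA workflow described above can be pictured with a minimal sketch. The toy course descriptions and the topic count below are invented placeholders, not the study's data or settings, and scikit-learn's implementation is an assumption since the authors do not specify their tooling:

```python
# Minimal illustration of LDA topic modeling over course descriptions.
# The corpus and n_components are invented, not the paper's configuration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

course_descriptions = [
    "fairness accountability transparency in machine learning systems",
    "governance regulation and policy for artificial intelligence",
    "building neural networks and evaluating model bias",
    "privacy surveillance and the societal impact of algorithms",
]

# Bag-of-words representation of the syllabus text
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(course_descriptions)

# Fit a small LDA model; real studies tune the topic count
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the top words per discovered topic
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")
```

In a study like the one above, the discovered topics would then be cross-tabulated against region, instructor department, and Bloom's-taxonomy level of the course's learning objectives.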
... For Fiesler et al. [50], this broad topic range and inconsistency in teaching content across syllabi do not come as a surprise, given the current lack of standards, which leaves educators with leeway to design courses at their own discretion. As Garrett et al. [56] note, "if AI education is in the infancy stage of development, then AI ethics education is barely an embryo." ...
... The Accreditation Board for Engineering and Technology (ABET) requires students to have "[a]n understanding of professional, ethical, legal, security and social issues and responsibilities" [1,135]. However, the precise implementation of ethics in the curricula is left to the institutions and professors [56]. In this regard, curricula design may draw on the rich and multifaceted literature already established in applied ethics [3,69,104,105,128,139], to provide a diverse scope stretching across Western and Eastern ethics [41], and to include topical approaches, even informed by other scientific fields, such as cognitive (neuro) science [62,63]. ...
Article
Full-text available
This paper proposes to generate awareness for developing Artificial intelligence (AI) ethics by transferring knowledge from other fields of applied ethics, particularly from business ethics, stressing the role of organizations and processes of institutionalization. With the rapid development of AI systems in recent years, a new and thriving discourse on AI ethics has (re-)emerged, dealing primarily with ethical concepts, theories, and application contexts. We argue that business ethics insights may generate positive knowledge spillovers for AI ethics, given that debates on ethical and social responsibilities have been adopted as voluntary or mandatory regulations for organizations in both national and transnational contexts. Thus, business ethics may transfer knowledge from five core topics and concepts researched and institutionalized to AI ethics: (1) stakeholder management, (2) standardized reporting, (3) corporate governance and regulation, (4) curriculum accreditation, and as a unified topic (5) AI ethics washing derived from greenwashing. In outlining each of these five knowledge bridges, we illustrate current challenges in AI ethics and potential insights from business ethics that may advance the current debate. At the same time, we hold that business ethics can learn from AI ethics in catching up with the digital transformation, allowing for cross-fertilization between the two fields. Future debates in both disciplines of applied ethics may benefit from dialog and cross-fertilization, meant to strengthen the ethical depth and prevent ethics washing or, even worse, ethics bashing.
... Ethics scholars deliberate on the dilemmas of privacy and user consent for research purposes in this emerging field of "social computing" [56]. Undeniably, this question falls into the wider theme of the societal impacts of AI in consumer products. ...
... Undeniably, this question falls into the wider theme of the societal impacts of AI in consumer products. As early as the 1980s and 1990s, ethics educators highlighted the challenge for digital computing applications to responsibly protect marginalized communities from harm through the integration of good ethical practices [56,57]. Ethical and privacy concerns must be carefully resolved as social media users increasingly include minors who may not fully grasp the implications of digitally traceable personal information. ...
Article
Full-text available
Background The popularization of social media has led to the coalescing of user groups around mental health conditions; in particular, depression. Social media offers a rich environment for contextualizing and predicting users’ self-reported burden of depression. Modern artificial intelligence (AI) methods are commonly employed in analyzing user-generated sentiment on social media. In the forthcoming systematic review, we will examine the content validity of these computer-based health surveillance models with respect to standard diagnostic frameworks. Drawing from a clinical perspective, we will attempt to establish a normative judgment about the strengths of these modern AI applications in the detection of depression. Methods We will perform a systematic review of English and German language publications from 2010 to 2020 in PubMed, APA PsychInfo, Science Direct, EMBASE Psych, Google Scholar, and Web of Science. The inclusion criteria span cohort, case-control, and cross-sectional studies and randomized controlled studies, in addition to reports on conference proceedings. The systematic review will exclude some gray source materials, specifically editorials, newspaper articles, and blog posts. Our primary outcome is self-reported depression, as expressed on social media. Secondary outcomes will be the types of AI methods used for social media depression screening, and the clinical validation procedures accompanying these methods. In a second step, we will utilize the evidence-strengthening Population, Intervention, Comparison, Outcomes, Study type (PICOS) tool to refine our inclusion and exclusion criteria. Following the independent assessment of the evidence sources by two authors for the risk of bias, the data extraction process will culminate in a thematic synthesis of reviewed studies. Discussion We present the protocol for a systematic review which will consider all existing literature from peer-reviewed publication sources relevant to the primary and secondary outcomes. The completed review will discuss depression as a self-reported health outcome in social media material. We will examine the computational methods, including AI and machine learning techniques, which are commonly used for online depression surveillance. Furthermore, we will focus on standard clinical assessments, as indicating content validity, in the design of the algorithms. The methodological quality of the clinical construct of the algorithms will be evaluated with the COnsensus-based Standards for the selection of health status Measurement Instruments (COSMIN) framework. We conclude the study with a normative judgment about the current application of AI to screen for depression on social media. Systematic review registration: International Prospective Register of Systematic Reviews PROSPERO (registration number CRD42020187874).
... What ethical topics of AI should be taught, and how they should be taught, is one of the central endeavours in this category [151,152]. Others dispute whether the new institutional review board (IRB) model would provide appropriate oversight mechanisms for health-related AI studies [153]. User trust in AI-based educational systems and responsible AI-based educational systems are two other considerations. ...
Article
Full-text available
Sudha Jamthe is the CEO of IoTDisruption.com and a globally recognised Technology Futurist with a 20+ year mix of entrepreneurial, academic and operational experience from eBay, PayPal and GTE. She is the author of six books and teaches Internet of Things (IoT), artificial intelligence (AI) and autonomous vehicles business courses at Stanford Continuing Studies and at the Business School of AI. Sudha has an MBA from Boston University and a BSc in computer science engineering from Madras University. Abstract: Given that the topic of artificial intelligence (AI) ethics is novel, and many studies are emerging to uncover AI's ethical challenges, the current study aims to analyse and visualise the research patterns and influential elements in this field. This paper analyses 1,646 Scopus-indexed publications using bibliometric analysis and cluster content analysis. To classify the most prominent elements and delineate the intellectual framework as well as the emerging patterns and gaps, we utilised keyword co-occurrence analysis, bibliographic coupling analysis and network visualisation of authors, countries, sources, documents and institutions. In particular, we detected nine major applications of AI in which the ethics of AI is highly discussed, 24 ethical categories and 66 ethical concerns. Using the VOSviewer software, we also identified the general ethical concerns with the greatest total link strength regardless of their cluster associations. Then, focusing on the most recent articles (2020-21), we performed a cluster content analysis of the identified topic clusters and ethical concerns. This analysis guided us in detecting literature gaps and prospective topics and in developing a conceptual framework to illustrate a comprehensive image of ethical AI research trends. This study will assist policymakers, regulators, developers, engineers and researchers in better understanding AI's ethical challenges and identifying the most pressing concerns that need to be tackled.
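The keyword co-occurrence analysis at the heart of such bibliometric studies reduces to counting how often pairs of author keywords appear together. A minimal sketch follows, with invented keyword lists standing in for the Scopus records; the last loop shows how VOSviewer's "total link strength" for a keyword is simply the sum of the weights of its links:

```python
# Keyword co-occurrence counting of the kind underlying bibliometric maps
# built with tools like VOSviewer. The keyword lists are invented stand-ins,
# not records from the 1,646-publication Scopus dataset.
from collections import Counter
from itertools import combinations

papers_keywords = [
    ["ai ethics", "fairness", "accountability"],
    ["ai ethics", "privacy", "surveillance"],
    ["fairness", "accountability", "machine learning"],
]

cooccurrence = Counter()
for keywords in papers_keywords:
    # Each unordered pair of keywords in one paper is one co-occurrence
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

# "Total link strength" per keyword: sum of co-occurrence weights
# over all links attached to that keyword
strength = Counter()
for (a, b), weight in cooccurrence.items():
    strength[a] += weight
    strength[b] += weight

for (a, b), n in cooccurrence.most_common(3):
    print(f"{a} -- {b}: {n}")
print("strongest keywords:", strength.most_common(3))
```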
... There have been multiple calls for introducing ethics in computer science courses in general, and in AI programs in particular [23,45,71,100,110,119,120]. Several surveys have investigated how existing responsible computing courses are organized [35,38,101,106]. ...
Preprint
Model explainability has become an important problem in machine learning (ML) due to the increased effect that algorithmic predictions have on humans. Explanations can help users understand not only why ML models make certain predictions, but also how these predictions can be changed. In this thesis, we examine the explainability of ML models from three vantage points: algorithms, users, and pedagogy, and contribute several novel solutions to the explainability problem.
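The phrase "how these predictions can be changed" refers to counterfactual-style explanations. The sketch below is a generic illustration of that idea on invented data, not the algorithms contributed by the thesis: it greedily nudges one feature of an instance until a toy logistic-regression classifier flips its decision.

```python
# Toy counterfactual search: nudge one feature of a rejected instance until
# the classifier's decision flips. Generic illustration only, not the
# specific explainability methods of the thesis above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # simple linear ground truth

model = LogisticRegression().fit(X, y)

x = np.array([-1.0, -0.5])   # instance currently classified as 0
feature, step = 0, 0.1       # the single feature we allow to change
while model.predict(x.reshape(1, -1))[0] == 0:
    x[feature] += step       # nudge until the decision flips

print(f"Feature {feature} must reach ~{x[feature]:.1f} to flip the prediction")
```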
... Ethics of AI is thus extremely broad and even ill-defined by definition. Yet, its relevance and usefulness in AI education should not be understated [22]. ...
Conference Paper
Full-text available
The ongoing AI revolution has disrupted several industry sectors and will keep having an unprecedented impact on all areas of society. This is predicted to force a major proportion of the workforce to re-educate itself during the next few decades. Consequently, this has led to a growing demand for multidisciplinary AI education, including for students outside computer science. Therefore, a 25-credit (ECTS) cross-disciplinary study module on AI, targeting students in all faculties, was designed. We present findings from the design and implementation of the study module as well as students' initial perceptions of AI at the beginning of the study module. Enrollment for the first implementation of the study module began in autumn 2019. The student distribution (N=144) between faculties was the following: natural sciences (n=37), social sciences (n=23), law (n=17), education (n=17), economics (n=16), medicine (n=10), humanities (n=10) and open university (n=14). Based on a survey distributed to students (N=34), the primary reason for enrolling to study AI was interest in the subject, followed by the need for AI skills at work and the relevance of AI in society.
... This paper is concerned with the ethical education of stakeholders in AI systems rather than the ethics of decisions made by educational AI-based platforms (which are considered in Jobin et al. (2019), Latham and Goltz (2019), and Marcinkowski et al. (2020)). Garrett et al. (2020) introduced two ways of teaching ethics in AI: in a standalone course or by integrating ethics into technical courses. Burton et al. (2015) proposed to teach ethics as a standalone course using science fiction. ...
Article
This paper discusses educating stakeholders of algorithmic systems (systems that apply Artificial Intelligence/Machine learning algorithms) in the areas of algorithmic fairness, accountability, transparency and ethics (FATE). We begin by establishing the need for such education and identifying the intended consumers of educational materials on the topic. We discuss the topics of greatest concern and in need of educational resources; we also survey the existing materials and past experiences in such education, noting the scarcity of suitable material on aspects of fairness in particular. We use an example of a college admission platform to illustrate our ideas. We conclude with recommendations for further work in the area and report on the first steps taken towards achieving this goal in the framework of an academic graduate seminar course, a graduate summer school, an embedded lecture in a software engineering course, and a workshop for high school teachers.
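One concrete instance of the fairness material the authors find scarce, using their college-admission running example: a first check an educated stakeholder can run is demographic parity of admission rates across applicant groups. The records and the four-fifths screening threshold below are standard textbook illustrations, not data or recommendations from the paper.

```python
# Demographic parity check for a (hypothetical) college admission platform:
# compare admission rates across two applicant groups. Records are invented.
admissions = [
    {"group": "A", "admitted": True},
    {"group": "A", "admitted": False},
    {"group": "A", "admitted": True},
    {"group": "B", "admitted": False},
    {"group": "B", "admitted": False},
    {"group": "B", "admitted": True},
]

def admission_rate(records, group):
    members = [r for r in records if r["group"] == group]
    return sum(r["admitted"] for r in members) / len(members)

rate_a = admission_rate(admissions, "A")
rate_b = admission_rate(admissions, "B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
# A common screening rule flags ratios below 0.8 (the "four-fifths rule")
print(f"Rate A={rate_a:.2f}, Rate B={rate_b:.2f}, ratio={ratio:.2f}")
```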
... Those that did focused overwhelmingly on bias, fairness, and privacy [55]. Although courses focused specifically on AI ethics cover a wider set of issues including consequences of algorithms, technically tractable issues like bias and privacy are still dominant [56]. We suggest that AI ethics education focus not solely on a few prominent or technically tractable issues nor on general awareness-building alone, but also on impact assessment as an overarching framework to understand AI's impacts on human well-being. ...
Article
In this paper, we review the gap between high-level principles aimed at responsible uses of AI and the translation of those principles into effective practices. We review six potential explanations for the gap: tensions related to organizational incentives and values, a need to make sense of the complexity of AI's impacts, disciplinary divides in understanding problems and solutions, the distribution of accountability and functional separation within organizations, the need for holistic management of knowledge processes, and a lack of clarity and guidance around tool usage. We argue that stakeholders interested in realizing AI's potential for good should advance research on understanding the principles-to-practices gap and attend to these issues when proposing solutions and best practices.
... Scholars in the pedagogy of ethics have engaged in a long and ongoing conversation about integrating ethics in computing fields and domains [46] such as HCI [42,58], Machine Learning [50], Artificial Intelligence Programming [31], and Cybersecurity [9], among others. Recent research has focused attention on the content and role of technology ethics courses, as societal interest in ethics and values has increased, and computing educators are increasingly arguing for ethics to have a more central role in computing curricula (e.g., [30,32,56]). ...
Conference Paper
Full-text available
In conjunction with the increasing ubiquity of technology, computing educators have identified the need for pedagogical engagement with ethical awareness and moral reasoning. Typical approaches to incorporating ethics in computing curricula have focused primarily on abstract methods, principles, or paradigms of ethical reasoning, with relatively little focus on examining and developing students' pragmatic awareness of ethics as grounded in their everyday work practices. In this paper, we identify and describe computing students' negotiation of values as they engage in authentic design problems through a lab protocol study. We collected data from four groups of three students each, with each group including participants from either undergraduate User Experience Design students, Industrial Engineering students, or a mix of both. We used a thematic analysis approach to identify the roles that students took on to address the design prompt. Through our analysis, we found that the students took on a variety of "dark" roles that resulted in manipulation of the user and prioritization of stakeholder needs over user needs, with a focus either on building solutions or building rationale for design decisions. We found these roles to actively propagate through design discourses, impacting other designers in ways that frequently reinforced unethical decision making. Even when students were aware of ethical concerns based on their educational training, this awareness did not consistently result in ethically sound decisions. These findings indicate the need for additional ethical supports to inform everyday computing practice, including means of actively identifying and balancing negative societal impacts of design decisions. The roles we have identified may productively support the development of pragmatically focused ethical training in computing education, while adding more precision to future analysis of computing student discourses and outputs.
... There is a growing body of research investigating how to design AI-related learning experiences for novice audiences. Researchers are developing curricula for both K-12 audiences [2,48,50] and non-CS majors at universities [6,19,46]. Others are developing courses, interactive online tools, and programming platforms that can engage novice audiences in learning about AI (e.g. [1,15,30,54]). ...
Conference Paper
Fostering public AI literacy (i.e. a high-level understanding of artificial intelligence (AI) that allows individuals to critically and effectively use AI technologies) is increasingly important as AI is integrated into individuals’ everyday lives and as concerns about AI grow. This paper investigates how to design collaborative, creative, and embodied interactions that foster AI learning and interest development. We designed three prototypes of collaborative, creative, and/or embodied learning experiences that aim to communicate AI literacy competencies. We present the design of these prototypes as well as the results from a user study that we conducted with 14 family groups (38 participants). Our data analysis explores how collaboration, creativity, and embodiment contributed to AI learning and interest development across the three prototypes. The main contributions of this paper are: 1) three designs of AI literacy learning activities and 2) insights into the role creativity, collaboration, and embodiment play in AI learning experiences.
... An analysis of 200 "technical" AI/ML courses by Saltz et al. revealed that only 12% included some mention of ethics. In those courses that did mention ethics, ethics-related topics were relegated to the last two classes in the schedule and, in one course, left as a discussion topic only "if time allows" [15]. Dominant approaches to CS education reinforce the perception of CS as an anti-political discipline through the epistemic, cultural, and ideological "infrastructures of abstraction" that treat "technical" content as the only content that "counts" [25]. ...
Preprint
Full-text available
Justice-centered approaches to equitable computer science (CS) education prioritize the development of students' CS disciplinary identities toward social justice rather than corporations, industry, empire, and militarism by emphasizing ethics, identity, and political vision. However, most research on justice-centered approaches to equitable CS education focuses on K-12 learning environments. In this position paper, we problematize the lack of attention to justice-centered approaches to CS in higher education and then describe a justice-centered approach for undergraduate Data Structures and Algorithms that (1) critiques sociopolitical values of data structure and algorithm design as well as dominant computing epistemologies that approach social good without design justice; (2) centers students in culturally responsive-sustaining pedagogies to resist dominant computing culture and value Indigenous ways of living in nature; and (3) ensures the rightful presence of political struggles through reauthoring rights and problematizing the political power of computing. Through a case study of this Critical Comparative Data Structures and Algorithms pedagogy, we argue that justice-centered approaches to higher CS education can help students not only critique the ethical implications of nominally technical concepts, but also develop greater respect for diverse epistemologies, cultures, and narratives around computing that can help all of us realize the socially just worlds we need.
... There have been multiple calls for introducing ethics in computer science courses in general, and in AI programs in particular (Singer 2018; Grosz et al. 2019; Saltz et al. 2019; Skirpan et al. 2018; Danyluk et al. 2021; O'Neil 2017; Angwin et al. 2016; Leonelli 2016; National Academies of Sciences et al. 2018). Several surveys have investigated how existing ethics courses in computer science are organized (Garrett, Beard, and Fiesler 2020; Raji, Scheuerman, and Amironesei 2021; Peck 2017). ...
Preprint
Full-text available
In this work we explain the setup for a technical, graduate-level course on Fairness, Accountability, Confidentiality and Transparency in Artificial Intelligence (FACT-AI) at the University of Amsterdam, which teaches FACT-AI concepts through the lens of reproducibility. The focal point of the course is a group project based on reproducing existing FACT-AI algorithms from top AI conferences, and writing a report about their experiences. In the first iteration of the course, we created an open source repository with the code implementations from the group projects. In the second iteration, we encouraged students to submit their group projects to the Machine Learning Reproducibility Challenge, which resulted in 9 reports from our course being accepted to the challenge. We reflect on our experience teaching the course over two academic years, where one year coincided with a global pandemic, and propose guidelines for teaching FACT-AI through reproducibility in graduate-level AI programs. We hope this can be a useful resource for instructors to set up similar courses at their universities in the future.
... Those that did focused overwhelmingly on bias, fairness, and privacy [58]. While courses focused specifically on AI ethics cover a wider set of issues including consequences of algorithms, technically tractable issues like bias and privacy are still prominent [26]. We suggest that AI ethics education focus not solely on a few prominent or technically tractable issues nor on general awareness building alone, but also on impact assessment as an overarching framework to understand AI's impacts on human well-being. ...
Preprint
Full-text available
Companies have considered adoption of various high-level artificial intelligence (AI) principles for responsible AI, but there is less clarity on how to implement these principles as organizational practices. This paper reviews the principles-to-practices gap. We outline five explanations for this gap ranging from a disciplinary divide to an overabundance of tools. In turn, we argue that an impact assessment framework which is broad, operationalizable, flexible, iterative, guided, and participatory is a promising approach to close the principles-to-practices gap. Finally, to help practitioners with applying these recommendations, we review a case study of AI's use in forest ecosystem restoration, demonstrating how an impact assessment framework can translate into effective and responsible AI practices.
... Borenstein & Howard, 2021; Garrett et al., 2020; Holmes et al., 2021; Latham & Goltz, 2019). ...
Conference Paper
Full-text available
The basis of the document Union of Equality: Strategy for the Rights of Persons with Disabilities 2021-2030 (COM, 2021) is that a significant number of children in educational institutions in the European Union are subjected to inappropriate and discriminatory treatment. At the same time, the social model of inclusion in education (Slee et al., 2019; Sunko, 2021) and Articles 2 and 23 of the Convention on the Rights of the Child (1989), which advocate the equal right of all children to education, indicate the dichotomy between what is desired and what has been achieved. Each individual, whether he/she has certain difficulties or not, differs in his/her abilities, and each has their own “personal needs”. It is important to note that students with special educational needs include both students with disabilities and gifted students. Meeting the diverse needs of students through inclusive practices is often difficult or even impossible for teachers who have not acquired the necessary skills and knowledge, so it is imperative to empower and support teachers, primarily through formal education, so that they learn to use effective inclusive teaching methods at all levels (Loveys, 2022). The aim of this research was to determine whether the personal experience and attended academic year of teacher-education students (N = 304, all academic years) from three Teachers’ Faculties in the Republic of Croatia correlate with their sense of personal competence, motivation for further professional development, or the need to change the study program. The results show that students’ personal experiences with children with developmental disabilities (DD) affect their sense of personal competence for working with children with DD, and that students of all academic years are equally motivated to teach children with DD. They also point out the need for additional training, and 84.64% of them emphasize the importance of practice in learning that deals with teaching children with DD. The data suggest that the same percentage of students feel the need to change/adapt the content of the study program accordingly. The main contribution of this research is insight into future teachers’ needs for further higher education in teaching children with DD.
... This approach creates a barrier to literacy amongst the public (Long & Magerko, 2020). While ethical issues related to AI have received increased attention (Ashok et al., 2022; Jobin et al., 2019; Kuipers, 2020; Mehrabi et al., 2021; Prunkl, 2022), ethics has thus far rarely been an explicit component of AI courses (Saltz et al., 2019), and limited information is available on the ethical considerations covered in AI classes (Garrett et al., 2020). ...
Article
Full-text available
Emerging research is highlighting the importance of fostering artificial intelligence (AI) literacy among educated citizens of diverse academic backgrounds. However, what to include in such literacy programmes and how to teach literacy is still under-explored. To fill this gap, this study designed and evaluated an AI literacy programme based on a multi-dimensional conceptual framework, which developed participants' conceptual understanding, literacy, empowerment and ethical awareness. It emphasised conceptual building, highlighted project work in application development and initiated teaching ethics through application development. Thirty-six university students with diverse academic backgrounds joined and completed this programme, which included 7 hours on machine learning, 9 hours on deep learning and 14 hours on application development. Together with the project work, the results of the tests, surveys and reflective writings completed before and after these courses indicate that the programme successfully enhanced participants' conceptual understanding, literacy, empowerment and ethical awareness. The programme will be extended to include more participants, such as senior secondary school students and the general public. This study initiates a pathway to lower the barrier to entry for AI literacy and addresses a public need. It can guide and inspire future empirical and design research on fostering AI literacy among educated citizens of diverse backgrounds.
... Similarly in Australia, Gorur et al. [26] surveyed 12 curricula in universities, finding that they focused on micro-ethical concepts like professionalism while lacking macro-ethical agendas such as betterment of society and the planet. Ethics units are rarely included in computer science courses, and several of these are even shunted into the last few sessions if time allows [24], demonstrating the lowly status of ethics in AI education. ...
Article
Full-text available
As the awareness of AI’s power and danger has risen, the dominant response has been a turn to ethical principles. A flood of AI guidelines and codes of ethics have been released in both the public and private sector in the last several years. However, these are meaningless principles which are contested or incoherent, making them difficult to apply; they are isolated principles situated in an industry and education system which largely ignores ethics; and they are toothless principles which lack consequences and adhere to corporate agendas. For these reasons, I argue that AI ethical principles are useless, failing to mitigate the racial, social, and environmental damages of AI technologies in any meaningful sense. The result is a gap between high-minded principles and technological practice. Even when this gap is acknowledged and principles seek to be “operationalized,” the translation from complex social concepts to technical rulesets is non-trivial. In a zero-sum world, the dominant turn to AI principles is not just fruitless but a dangerous distraction, diverting immense financial and human resources away from potentially more effective activity. I conclude by highlighting alternative approaches to AI justice that go beyond ethical principles: thinking more broadly about systems of oppression and more narrowly about accuracy and auditing.
... Borenstein & Howard, 2021; Garrett et al., 2020; Holmes et al., 2021; Latham & Goltz, 2019). ...
Chapter
Full-text available
After 2011, a new alternative educational form appeared on the palette of Hungarian public education: learning communities that provide alternative education for schoolchildren who take part in alternative or mainstream education as private pupils. The learning communities are not schools in the traditional sense, but can be thought of as home-schooling in a more organized form. The conditions of learning communities and the regulations governing the fulfilment of compulsory education vary across countries, and private pupils' legal relationship differs in how permissive or restrictive the status of being a private pupil is. The learning community as an alternative form of education has appeared in several European countries and even beyond Europe; this research discusses three European countries - Austria, Hungary and Romania - the way they regulate the fulfilment of compulsory education, their rules on who may become a private pupil, and the attitude of educational governance towards this new form of alternative education.
... Previous work has drawn attention to the fact that engineering students may never come across topics of ethics during their education, which further complicates this problem (Saltz et al., 2019). The combination of standalone modules and the insertion of activities on the topic into multiple technical courses across secondary education programmes might prove to be the most effective approach in the long term, as advocated by previous research (Garrett et al., 2020). It is also fundamental to keep probing strategies for the challenging quest of turning indifferent students into caring ethical agents in their future careers. ...
Article
Full-text available
Contemporary dilemmas about the role and impact of digital technologies in society have motivated the inclusion of topics of computing ethics in university programmes. Many past works have investigated how different pedagogical approaches and tools can support learning and teaching such a subject. This brief research report contributes to these efforts by describing a pilot study examining how engineering students learn from and apply ethical principles when making design decisions for an introductory User Experience (UX) design project. After a short lecture, students were asked to design and evaluate the ethical implications of digital health intervention prototypes. This approach was evaluated through the thematic analysis of semi-structured interviews conducted with 12 students, focused on the benefits and limitations of teaching ethics this way. Findings indicate that it can be very challenging to convey the importance of ethics to unaware and uninterested students, an observation that calls for a much stronger emphasis on moral philosophy education throughout engineering degrees. This paper finishes with a reflection on the hardships and possible ways forward for teaching and putting UX design ethics into practice. The lessons learned and described in this report aim to contribute to future pedagogical efforts to enable ethical thinking in computing education.
... Borenstein & Howard, 2021; Garrett et al., 2020; Holmes et al., 2021; Latham & Goltz, 2019). ...
Chapter
Full-text available
Besides state-funded schools, private schools play a role in public education both abroad and in Hungary; however, the financial aid they receive from the government budget differs from country to country. In some countries they receive the same amount of support as state-funded institutions, whereas in others private institutions cannot obtain any financial resources from the subsidy. Financial contribution by the government to educational costs, however, always goes together with a restriction of the schools' autonomy by said government. These restrictions may include forcing the exemption of tuition fees or mandating that private schools cannot control the admission of pupils. Moreover, it might entail the restriction of the pedagogical autonomy of alternative private schools, depending on the educational system's degree of centralization. The liberal and decentralized Hungarian education system has become centralized again due to the current government's aspiration of creating an integrated and unified educational policy. In this study, we seek to answer the question of how the financial contribution of the state to the operation of alternative private schools affects their pedagogical autonomy.
... Borenstein & Howard, 2021; Garrett et al., 2020; Holmes et al., 2021; Latham & Goltz, 2019). ...
Conference Paper
Full-text available
Supporting student mental health and wellbeing continues to be a foremost concern in Higher Education (HE), as rates of students presenting with mental health conditions, distress and poor wellbeing increase and as demand for counselling and support services exceeds supply. The age range of students in third level education often coincides with the challenging transitional period of emerging adulthood, where instabilities are further compounded by academic, financial, and social pressures. As HE institutes are distinct settings where academic work, hobbies, social life, as well as health and other services are often integrated, HE presents a unique opportunity to address this wider societal concern through a systems-based lens. Despite calls for holistic, whole of institution approaches, a transformation of student wellbeing has yet to be realised. During a national initiative for valuing teaching and learning in HE in Ireland, the authors hosted an engaged online event to mobilise learning and action in the student wellbeing community. The event included case study presentations from existing examples of wellbeing in the curriculum, a panel discussion on the national landscape, and an open discussion on the future of wellbeing in HE. Attendees included academic faculty, HE management, researchers, staff from health and counselling services and health promotion, student representatives, careers and support services, and others. Data were collected via the recorded oral contributions, Zoom chat, and an anonymous survey. A deductive thematic analysis was completed with the guiding concept of an institution as a system that supports wellbeing. Findings proposed shared values as the compass for organisational culture, leaders and decision makers as key enablers of change, academic structures as both a vehicle to promote positive wellbeing and mitigate negative impacts, academic staff as the embodiment of the institution and its values, and the student voice as a guide for informed decision making. Recognising an institution as part of a wider system of HE which is influenced by political and economic climates, there is a requirement for HE to set out its stall with respect to its purpose and responsibility to wellbeing. This affirmation could enable a shared understanding of and commitment to wellbeing across the sector, through which collaborative and system-based efforts to support wellbeing can be actioned.
Preprint
Full-text available
Since the education sector is associated with highly dynamic business environments which are controlled and maintained by information systems, recent technological advancements and the increasing pace of adopting artificial intelligence (AI) technologies create a need to identify and analyze the issues regarding their implementation in the education sector. However, a study of the contemporary literature revealed that relatively little research has been undertaken in this area. To fill this void, we have identified the benefits and challenges of implementing artificial intelligence in the education sector, preceded by a short discussion on the concepts of AI and its evolution over time. Moreover, we have also reviewed modern AI technologies for learners and educators, currently available on the software market, evaluating their usefulness. Last but not least, we have developed a strategy implementation model, described by a five-stage, generic process, along with the corresponding configuration guide. To verify and validate their design, we separately developed three implementation strategies for three different higher education organizations. We believe that the obtained results will contribute to better understanding the specificities of AI systems, services and tools, and afterwards pave a smooth way for their implementation.
Chapter
Full-text available
Since the education sector is associated with highly dynamic business environments which are controlled and maintained by information systems, recent technological advancements and the increasing pace of adopting artificial intelligence (AI) technologies create a need to identify and analyze the issues regarding their implementation in the education sector. However, a study of the contemporary literature revealed that relatively little research has been undertaken in this area. To fill this void, we have identified the benefits and challenges of implementing artificial intelligence in the education sector, preceded by a short discussion on the concepts of AI and its evolution over time. Moreover, we have also reviewed modern AI technologies for learners and educators, currently available on the software market, evaluating their usefulness. Last but not least, we have developed a strategy implementation model, described by a five-stage, generic process, along with the corresponding configuration guide. To verify and validate their design, we separately developed three implementation strategies for three different higher education organizations. We believe that the obtained results will contribute to better understanding the specificities of AI systems, services and tools, and afterwards pave a smooth way for their implementation.
Article
The daily influence of new technologies on shaping and reshaping human lives necessitates attention to the ethical development of the future computing workforce. To improve computer science students’ ethical decision-making, it is important to know how they make decisions when they face ethical issues. This article contributes to the research and practice of computer ethics education by identifying the factors that influence ethical decision-making of computer science students and providing implications to improve the process. Using a constructivist grounded theory approach, the data from the text of the students’ discussion postings on three ethical scenarios in computer science and the follow-up interviews were analyzed. Based on the analysis, relating to real-life stories, thoughtfulness about responsibilities that come from the technical knowledge of developers, showing care for users or others who might be affected, and recognition of fallacies contributed to better ethical decision-making. On the other hand, falling for fallacies and empathy for developers negatively influenced students' ethical decision-making process. Based on the findings, this study presents a model of factors that influence the ethical decision-making process of computer science students, along with implications for future researchers and computer ethics educators.
Preprint
Since any training in AI ethics is first and foremost indebted to a conception of ethics training in general, we identify the specific requirements related to the ethical dimensions of this cutting-edge technological innovation. We show how a pragmatist approach inspired by the work of John Dewey allows us to clearly identify both the essential components of such training and the specific fields related to the development of AI systems. More precisely, by focusing on some central characteristics of such a pragmatist approach, namely anti-foundationalism, anti-dualism and anti-skepticism, characteristics shared by the philosophies of the main representatives of the pragmatist movement, we will see how the different components of ethical competence, namely ethical sensitivity, reflexive capacities and dialogical capacities, can be conceived in a dynamic and interdependent way. We will then be able to examine the specific fields of training in AI ethics, insisting on the necessary complementarity between the specific moral dilemmas associated with this technology and the technical, social and normative (especially legislative) aspects in order to adequately grasp the ethical issues related to the design, development and deployment of AI systems. In doing so, we will be able to determine the requirements that should guide the implementation of adequate training in AI ethics, by providing benchmarks for the teaching of these issues.
Article
Full-text available
Current advances in research, development and application of artificial intelligence (AI) systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed to harness the “disruptive” potentials of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, I also examine to what extent the respective ethical principles and values are implemented in the practice of research, development and application of AI systems—and how the effectiveness in the demands of AI ethics can be improved.
Article
Full-text available
Artificial intelligence (AI) ethics is now a global topic of discussion in academic and policy circles. At least 84 public–private initiatives have produced statements describing high-level principles, values and other tenets to guide the ethical development, deployment and governance of AI. According to recent meta-analyses, AI ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach for the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement.
Conference Paper
Full-text available
A key goal of the fair-ML community is to develop machine-learning based systems that, once introduced into a social context, can achieve social and legal outcomes such as fairness, justice, and due process. Bedrock concepts in computer science---such as abstraction and modular design---are used to define notions of fairness and discrimination, to produce fairness-aware learning algorithms, and to intervene at different stages of a decision-making pipeline to produce "fair" outcomes. In this paper, however, we contend that these concepts render technical interventions ineffective, inaccurate, and sometimes dangerously misguided when they enter the societal context that surrounds decision-making systems. We outline this mismatch with five "traps" that fair-ML work can fall into even as it attempts to be more context-aware in comparison to traditional data science. We draw on studies of sociotechnical systems in Science and Technology Studies to explain why such traps occur and how to avoid them. Finally, we suggest ways in which technical designers can mitigate the traps through a refocusing of design in terms of process rather than solutions, and by drawing abstraction boundaries to include social actors rather than purely technical ones.
Conference Paper
Full-text available
In February of 2017, Google announced the first SHA1 collision. Using over nine quintillion computations (over 6,500 years of compute time), a group of academic and industry researchers produced two different PDF files with identical SHA1 checksums. But why? After all, SHA1 had already been deprecated by numerous standards and advisory bodies. This paper uses the SHA1 collision compute as a site for surfacing the space of ecological risks, and sociotechnical rewards, associated with the performance of large computes. I forward a theory of polemic computation, in which computes exert agency in sociotechnical discourses not through computational results, but through feats, the expenditure of significant material resources. This paper does not make specific claims about the (ecological, political, labor) limits within which polemic computes must operate in order to be considered acceptable. Instead, this paper raises the question of how such limits could be established, in the face of polemic computes' significant costs and difficult-to-measure rewards.
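What "two different PDF files with identical SHA1 checksums" means is easy to check locally. A minimal sketch, assuming the two publicly released colliding PDFs have been downloaded under the filenames used below:

```python
# Verify a hash collision: two byte-different files, one SHA-1 digest.
# Assumes the two released colliding PDFs are saved locally under these
# (assumed) filenames.
import hashlib

def sha1_of(path):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

d1 = sha1_of("shattered-1.pdf")
d2 = sha1_of("shattered-2.pdf")
print(d1)
print(d2)
print("collision:", d1 == d2)  # True, although the file bytes differ
```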
Article
Full-text available
The 7th Symposium on Educational Advances in Artificial Intelligence (EAAI'17, co-chaired by Sven Koenig and Eric Eaton) launched the EAAI New and Future AI Educator Program to support the training of early-career university faculty, secondary school faculty, and future educators (PhD candidates or postdocs who intend a career in academia). As part of the program, awardees were asked to address one of the following "blue sky" questions: 1. How could/should Artificial Intelligence (AI) courses incorporate ethics into the curriculum? 2. How could we teach AI topics at an early undergraduate or a secondary school level? 3. AI has the potential for broad impact to numerous disciplines. How could we make AI education more interdisciplinary, specifically to benefit non-engineering fields? This paper is a collection of their responses, intended to help motivate discussion around these issues in AI education.
Article
Full-text available
Computing technologies and artifacts are increasingly integrated into most aspects of our professional, social, and private lives. One consequence of this growing ubiquity of computing is that it can have significant ethical implications that computing professionals need to be aware of. The relationship between ethics and computing has long been discussed. However, this is the first comprehensive survey of the mainstream academic literature of the topic. Based on a detailed qualitative analysis of the literature, the article discusses ethical issues, technologies that they are related to, and ethical theories, as well as the methodologies that the literature employs, its academic contribution, and resulting recommendations. The article discusses general trends and argues that the time has come for a transition to responsible research and innovation to ensure that ethical reflection of computing has practical and manifest consequences.
Article
Full-text available
The second report from Project ImpactCS is given here, and a new required area of study - ethics and social impact - is proposed.
Article
Persons with disabilities face many barriers to full participation in society, and the rapid advancement of technology has the potential to create ever more. Building equitable and inclusive technologies for people with disabilities demands paying attention to more than accessibility, but also to how social attitudes towards disability are represented within technology. Representations perpetuated by machine learning (ML) models often inadvertently encode undesirable social biases from the data on which they are trained. This can result, for example, in text classification models producing very different predictions for "I am a person with mental illness" and "I am a tall person". In this paper, we present evidence of such biases in existing ML models, and in data used for model development. First, we demonstrate that a machine-learned model to moderate conversations classifies texts which mention disability as more "toxic". Similarly, a machine-learned sentiment analysis model rates texts which mention disability as more negative. Second, we demonstrate that neural text representation models that are critical to many ML applications can also contain undesirable biases towards mentions of disabilities. Third, we show that the data used to develop such models reflects topical biases in social discourse which may explain such biases in the models - for instance, gun violence, homelessness, and drug addiction are over-represented in discussions about mental illness.
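The template-based comparison the authors describe can be reproduced in spirit with any off-the-shelf classifier. A minimal sketch follows; it substitutes the Hugging Face transformers sentiment pipeline (which downloads a default model on first use) for the specific production models audited in the paper:

```python
# Probe minimally different sentences that mention disability vs. a neutral
# attribute. Illustrative only: the paper audited specific production
# systems, not this off-the-shelf pipeline.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

probes = [
    "I am a person with mental illness.",
    "I am a tall person.",
]

for text in probes:
    result = classifier(text)[0]
    print(f"{text!r}: {result['label']} (score={result['score']:.3f})")
# A systematic audit would run many such templates and test whether the
# disability mentions receive systematically more negative scores.
```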
Article
This article establishes and addresses opportunities for ethics integration into machine learning (ML) courses. Following a survey of the history of computing ethics and the current need for ethical consideration within ML, we consider the current state of ML ethics education via an exploratory analysis of course syllabi in computing programs. The results reveal that though ethics is part of the overall educational landscape in these programs, it is not frequently a part of core technical ML courses. To help address this gap, we offer a preliminary framework, developed via a systematic literature review, of relevant ethics questions that should be addressed within an ML project. A pilot study with 85 students confirms that this framework helped them identify and articulate key ethical considerations within their ML projects. Building from this work, we also provide three example ML course modules that bring ethical thinking directly into learning core ML content. Collectively, this research demonstrates: (1) the need for ethics to be taught as integrated within ML coursework, (2) a structured set of questions useful for identifying and addressing potential issues within an ML project, and (3) novel course models that provide examples for how to practically teach ML ethics without sacrificing core course content. An additional by-product of this research is the collection and integration of recent publications in the emerging field of ML ethics education.
Article
A Harvard-based pilot program integrates class sessions on ethical reasoning into courses throughout its computer science curriculum.
Article
Use of a codebook to categorize meaning units is a well-known research strategy in qualitative inquiry. However, methodology for the creation of a codebook in practice is not standardized, and specific guidance for codebook ideation is sometimes unclear, especially to novice qualitative researchers. This article describes the procedure that was utilized to create a codebook, which adapted an affinity diagram methodology (Scupin, 1997), an approach used in user-centered design. For this research, affinity diagramming was applied to a method outlined by Kurasaki (2000) for codebook ideation. Annotations of a subset of military veterans’ transcripts were utilized in congruence with affinity diagramming to create a codebook to categorize the phenomenon of reintegration into the civilian community after service (Haskins Lisle, 2017). This method could be useful in exploratory research that utilizes a codebook generated in vivo from annotations, especially for novice researchers who are overwhelmed by the codebook creation phase.
Book
A revealing look at how negative biases against women of color are embedded in search engine results and algorithms. Run a Google search for "black girls": what will you find? "Big Booty" and other sexually explicit terms are likely to come up as top search terms. But if you type in "white girls," the results are radically different. The suggested porn sites and unmoderated discussions about "why black women are so sassy" or "why black women are so angry" present a disturbing portrait of black womanhood in modern society. In Algorithms of Oppression, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color. Through an analysis of textual and media searches as well as extensive research on paid online advertising, Noble exposes a culture of racism and sexism in the way discoverability is created online. As search engines and their related companies grow in importance, operating as a source for email, a major vehicle for primary and secondary school learning, and beyond, understanding and reversing these disquieting trends and discriminatory practices is of utmost importance. An original, surprising, and at times disturbing account of bias on the internet, Algorithms of Oppression contributes to our understanding of how racism is created, maintained, and disseminated in the 21st century.
Conference Paper
As online platforms increasingly collect large amounts of data about their users, there has been growing public concern about privacy around issues such as data sharing. Controversies around practices perceived as surprising or even unethical often highlight patterns of privacy attitudes when they spark conversation in the media. This paper examines public reaction "in the wild" to two data-sharing controversies that were the focus of media attention, involving the social media and communication services Facebook and WhatsApp as well as the email service unroll.me. These controversies instigated discussion of data privacy and ethics, the accessibility of website policies, notions of responsibility for privacy, cost-benefit analyses, and strategies for privacy management such as non-use. An analysis of reactions and interactions captured in comments on news articles not only reveals information about pervasive privacy attitudes, but also suggests communication and design strategies that could benefit both platforms and users.
Article
Internet protocol development is a social process, and resulting protocols are shaped by their developers’ politics and values. This article argues that the work of protocol development (and more broadly, infrastructure design) poses barriers to developers’ reflection upon values and politics in protocol design. A participant observation of a team developing internet protocols revealed that difficulties defining the stakeholders in an infrastructure and tensions between local and global viewpoints both complicated values reflection. Further, Internet architects tended to equate a core value of interoperability with values neutrality. The article describes how particular work practices within infrastructure development overcame these challenges by engaging developers in praxis: situated, lived experience of the social nature of technology.
Article
Usability has been widely implemented in technical communication curricula and workplace practices, but little attention has focused specifically on how usability and its pedagogy are addressed in our literature. This study reviews selected technical communication textbooks, pedagogical and landmark texts, and online course syllabi and descriptions, and argues that meager attention is given to usability, thus suggesting the need for more in-depth and productive discussions on usability practices, strategies, and challenges.
Conference Paper
A national web-based survey using SurveyMonkey.com was administered to 700 undergraduate computer science programs in the United States, drawn from a stratified random sample of 797 such programs. The 251 program responses (a 36% response rate) regarding social and professional issues (computer ethics) are presented. This article describes the demographics of the respondents and presents results concerning whether programs teach social and professional issues, who teaches them, the role of training in these programs, and the decision-making process as it relates to computer ethics. Additionally, we provide suggestions for computer science programs regarding ethics training and decision-making, and we share reasons why some schools are not teaching computer ethics.
Should Prison Sentences Be Based On Crimes That Haven't Been Committed Yet?
  • Ben Casselman
  • Dana Goldstein
Can AI Really Solve Facebook's Problems?
  • Larry Greenemeier
Google engineer apologizes after Photos app tags two black people as gorillas. The Verge
  • Loren Grush
If Animals Have Rights, Should Robots? The New Yorker
  • Nathan Heller
Can an algorithm tell when kids are in danger? The New York Times Magazine
  • Dan Hurley
How We Analyzed the COMPAS Recidivism Algorithm. ProPublica
  • Jeff Larson
  • Surya Mattu
  • Lauren Kirchner
  • Julia Angwin
Elon Musk and Mark Zuckerberg Debate Artificial Intelligence. The Atlantic
  • Ian Bogost
Facing the Great Reckoning Head-On
  • danah boyd
Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian
  • Carole Cadwalladr
  • Emma Graham-Harrison
San Francisco Bans Facial Recognition Technology. The New York Times
  • Kate Conger
  • Richard Fausset
  • Serge F Kovaleski
The Meaning of Life in a World Without Work. The Guardian
  • Yuval Noah Harari
How Are Universities Responding to the Tech Skills Gap? CMS Wire
  • Dom Nicastro
End of the road: will automation put an end to the American trucker? The Guardian
  • Dominic Rushe
Tech's ethical "dark side": Harvard, Stanford and others want to address it. The New York Times
  • Natasha Singer
Facebook and Engineering the Public. The Message
  • Zeynep Tufekci
Why Tech's Approach to Fixing Its Gender Inequality Isn't Working
  • Alison Wynn
Facebook Figured Out My Family Secrets And It Won't Tell Me How. Gizmodo
  • Kashmir Hill
Why Stanford Researchers Tried to Create a 'Gaydar' Machine. The New York Times
  • Heather Murphy
They're Watching You at Work. The Atlantic
  • Don Peck
Mark Zuckerberg Needs to Shut Up
  • Siva Vaidhyanathan
To Save Everything, Click Here: The Folly of Technological Solutionism. Allen Lane. 2013
  • Evgeny Morozov
How Humans Respond to Robots: Building Public Policy through Good Design
  • Heather Knight
Codebook Development for Team-Based Qualitative Analysis
  • Kathleen M. MacQueen