Article

Towards a framework for local interrogation of AI ethics: A case study on text generators, academic integrity, and composing with ChatGPT


Abstract

Ethical frameworks for text generators (TGs) in education are generally concerned with personalized instruction, a dependency on data, biases in training data, academic integrity, and lack of creativity from students. While broad-level, institutional guidelines provide value in understanding the ethical dimensions of artificial intelligence (AI) for the classroom, there is a need for a more ecological understanding of how AI ethics might be constructed locally, one that takes into account the negotiation of AI between teacher and student. This article investigates how an educational ethical framework for AI use emerges through a qualitative case study of one composition student's interaction with and understanding of using ChatGPT as a type of writing partner. Analysis of interview data and student logs uncovers what we term an emergent "local ethic": a framework that is capable of exploring unique ethical considerations, values, and norms that develop at the most foundational unit of higher education, the individual classroom. Our framework is meant to provide a heuristic for other writing teacher-scholars as they interrogate issues related to pedagogy, student criticality, agency, reliability, and access within the context of powerful AI systems.




... Critiques of this approach contend that adherence to fixed principles overlooks the fluid, context-dependent nature of ethical decision-making (Frisicaro-Pawlowski 2018; Vetter et al. 2024) and reject the expectation that students will act out of a sense of duty and comply willingly with predetermined ethical codes. For instance, Smith (2016) has argued for understanding ethical decision-making as dynamic, deeply intertwined with specific rhetorical and disciplinary contexts. ...
... This approach suggests that ethical sensibility should be adaptive and responsive to the unique demands of each discursive situation, rather than anchored in static codes of conduct. In alignment with this view, Vetter et al. (2024) call for an ecological understanding of AI ethics and introduce the concept of a 'local ethics' (2). This framework encourages the negotiation of AI ethics at the ground level between students, AI agents, and institutional codes, and so positions the classroom as the space for ethical exploration. ...
... Similarly, Frisicaro-Pawlowski (2018) advocates for a dialogical process for the teaching of ethics, one that evolves within local contexts and is subject to continuous negotiation. This shift mirrors broader theoretical advancements in AI ethics, where the emphasis moves away from indoctrinating students with predetermined values toward facilitating their ability to adapt ethically across different contexts (Vetter et al. 2024). ...
... Ethical implications are critical in integrating LLMs in education. AI tools must enhance educational practices while addressing ethical concerns (Vetter et al., 2024). The introduction of ChatGPT in academic settings has spurred ethical discussions and empirical studies on its use by students (Niloy et al., 2024). ...
... The ethical use of AI in assessments raises several important concerns, particularly around fairness, academic integrity, and the authenticity of student work. The capabilities of LLMs pose significant challenges to maintaining the integrity of student submissions (Vetter et al., 2024). Human instructors cannot consistently differentiate between AI-generated text and student-authored content, further complicating the issue. ...
... Rather than attempting to outlaw the use of AI in education, educators should consider the development of a comprehensive framework that integrates AI tools into their teaching practices (Vetter et al., 2024). Such a framework would allow instructors to adapt their pedagogy, using AI to enhance learning objectives rather than viewing it as a threat to academic integrity. ...
Article
The advancement of Artificial Intelligence (AI) and Large Language Models (LLMs) ushers in a new era in education, characterized by more adaptive, personalized learning experiences. This literature review examines the profound impact of these technologies on student engagement, achievement, and personalized learning within higher education institutions. Through a systematic analysis of scholarly articles from 2022 to 2024, this review explores how AI is reshaping educational practices through enhanced feedback mechanisms, predictive analytics, and innovative teaching methodologies. The findings indicate that AI significantly improves student support services by enabling early identification of at-risk students and by facilitating tailored educational interventions. Moreover, the deployment of chatbots and LLMs, such as GPT (generative pre-trained transformer) and BERT (bidirectional encoder representations from transformers), offers promising enhancements in instructional strategies and student assessments, fostering richer, interactive learning environments. However, the integration of these technologies also introduces ethical challenges, necessitating consideration of issues such as data privacy and bias. The review emphasizes the need for ethical frameworks and responsible AI usage to ensure technology enhances educational outcomes without compromising fairness or integrity. Future research directions are suggested, focusing on broader AI applications across various educational settings and the need for longitudinal studies to assess the long-term effects of AI integration in education.
... Given the prevalence of visual communication in an increasingly technologically-driven world, generative artificial intelligence (GenAI) design tools have started to prompt a rethinking of multimodal composition and learning. In composition and writing studies, AI text generators such as ChatGPT (AlAfnan et al., 2023; Cummings et al., 2024; Dobrin, 2023; Vee, 2023; Vetter et al., 2024; Yan, 2023) and AI image generators such as DALL⋅E and Midjourney (Kang & Yi, 2023; Liu et al., 2024) can be creative tools that influence and extend students' existing composition process. AI text generators have been found to enhance students' learning motivation (Deng & Yu, 2023; Erito, 2023), writing outcomes (Malik et al., 2023; Zhang, 2024), and conceptual understanding (Ng et al., 2021). ...
... The use of GenAI technologies, available through Adobe Firefly and OpenAI's DALL⋅E, has the potential to reshape students' multimodal design process. Indeed, GenAI provides an exciting opportunity to build on and add to the existing understanding of the composition process and the traditional framings of agency and authorship (Vetter et al., 2024). These perspectives challenge traditional notions of agency by suggesting that agency is not solely confined to human subjects; instead, they underscore the interconnectedness of humans and nonhumans. ...
... This study points toward human-machine collaboration in the creative process of multimodal composition. In this sense, the findings of this study also echo the posthumanist approach to critical AI literacy (Burriss & Leander, 2024; Leander & Burriss, 2020; Tham et al., 2022; Vetter et al., 2024), which extends beyond merely recognizing and analyzing computational and algorithmic agents. Instead, the posthumanist approach requires that individuals actively build rhetorical ecologies with these nonhuman agents (Leander & Burriss, 2020; Vetter et al., 2024). ...
... Moreover, integrating AI text generators necessitates ongoing discussions within academic communities about the ethical implications of technological advancements (Vetter et al., 2024). Institutions play a crucial role in establishing policies that promote responsible use of AI, ensuring transparency and accountability in research practices. ...
... However, these tools raise ethical concerns, particularly regarding authorship, academic integrity, and the potential for plagiarism (Dwivedi et al., 2023). AI's lack of critical thinking and contextual understanding challenges the rigor and authenticity of academic work, necessitating clear guidelines and educational strategies to ensure responsible use while preserving ethical standards in research (Miao et al., 2021; Vetter et al., 2024). ...
... As maintained by Miao et al. (2021) and Vetter et al. (2024), this study found that mechanical and repetitive use of these terms reflects how AI-generated content can produce refined but formulaic writing. This pattern of overuse implies that while AI tools can enhance the clarity and structure of academic arguments, they also risk homogenizing the language and style of academic theses. ...
Article
Full-text available
This paper investigates the use of Artificial Intelligence (AI) in MA thesis writing, addressing a notable gap in existing research that primarily focuses on broader academic contexts. While AI's role in undergraduate essays and general academic writing has been explored, the specific use in the genre of MA theses, characterized by rigorous academic inquiry and advanced scholarly engagement, remains underexplored. This study examines the frequency and contextual usage of specific lexical items in 53 MA theses in linguistics, literature, discourse, and culture studies, aiming to identify patterns indicative of AI-generated content. Employing a systematic comparison of MA theses defended before, and after the release of AI text generators, the research tracks the usage of targeted lexical items to discern deviations suggestive of AI influence. Through analyzing these patterns, the study seeks to provide empirical insights into integrating AI technologies in graduate-level writing, contributing to theoretical understanding and offering practical implications for educational institutions and policymakers. The findings indicate a dramatic increase in the salience of specific lexical items frequently used by ChatGPT compared to the frequency of their use before the release of this text generator. The findings inform the ethical considerations and pedagogical strategies necessary for responsibly incorporating AI into graduate writing instruction, ensuring the integrity of scholarly communication practices.
... Advocating a human-centered approach, these organizations laid the ground for conceiving ethical guidelines as safeguarding fundamental rights and prioritizing human autonomy. Despite the importance of such frameworks, it should be noted that they are merely directional and non-binding [16] and lack a localized approach that becomes relevant at the classroom level [35]. In effect, the "effectiveness of guidelines or ethical codes is almost zero and [...] ...
... While the advent of AI in higher education has influenced its various functions and processes, we observed that text-generation AI systems particularly targeted data handling and academic writing. We also noticed that most studies on AI in higher education focused on the pedagogical dimension, particularly teaching and learning processes [35]. Given the need for a contextualized conceptualization of AI readiness, we selected university-based researchers for this study, both graduate students and experienced faculty, who would be expected to be AI-ready for particular tasks and within specific socio-cultural contexts. ...
... While the norm has been to develop and regularly update ethical guidelines for AI use in research, we contend that such guidelines should be tailored to the specific needs and contexts of academic research. Therefore, we align our implications with Vetter et al. in their call for a localized ethical framework for AI use [35]. Researchers are encouraged to actively engage in the development of such frameworks within their specific disciplines. ...
Article
Full-text available
Taking a human-centered socio-cultural perspective, this study explored the manifold individual and structural processes that contribute to researchers’ AI readiness. Forty-three graduate students and faculty at one university in Qatar took part in this Q methodology study. The results represented participants’ collective perspectives on what they considered relevant to their AI readiness. A 5 + 1-factor solution was accepted, illustrating diverse perspectives and no consensus. The factors were termed based on their main foci, as follows, (F-1) how technical skills are acquired, (F-2) when it is all about ethics, (F-3) when technical skills meet ethical considerations, (F-4a and F-4b) when opposites concede, and (F-5) how collaborations reflect AI readiness. The results revealed the diversity of viewpoints among participants, and the interrelations among some factors. This study recommended a holistic approach to enhance AI readiness. It suggested integrating targeted educational initiatives and developing localized ethical frameworks to promote responsible AI use across various research disciplines.
... Authors in [97] provide an in-depth investigation of the ethical concerns related to the utilization of AI text generators in educational environments. The authors make a strong argument for the creation of ethical frameworks that are tailored to specific circumstances and consider the interactions between teachers and students. ...
... The ethical challenges posed by these AI models collectively point to a fundamental shift in the nature of trust and authenticity in digital spaces [96,97]. As the line between human-generated and AI-generated content blurs, we are entering an era of "computational authenticity," where the authenticity of content is increasingly determined by algorithmic means rather than human judgment. ...
Article
Full-text available
This review examines the ethical, social, and technical challenges posed by AI-generated text tools, focusing on their rapid advancement and widespread adoption. An exhaustive literature search across many databases, strict inclusion/exclusion criteria, and a rigorous analysis procedure are all parts of our systematic review technique. This guarantees an impartial and complete study of the current status of AI-generated text tools. The study analyzes prominent language models, including GPT-3, GPT-4, LaMDA, PaLM, Claude, Jasper, and Llama 2, evaluating their capabilities in natural language processing and generation. The analysis reveals significant advancements, with GPT-3 demonstrating a 92% accuracy rate on standard natural language understanding benchmarks, outperforming LaMDA (88%) and PaLM (85%). To illustrate real-world implications, the review presents a case study of ChatGPT's application in healthcare, where it achieved 80% consistency with expert opinions in assessing acute ulcerative colitis. This case highlights both the potential benefits and ethical concerns of AI in critical domains. Quantitative bias analysis shows that GPT-3 generated biased content in 15% of test cases involving sensitive topics, a higher rate than LaMDA (12%) and PaLM (10%). We provide an in-depth analysis of fairness and bias issues, particularly in image generation tasks depicting professional roles. Our research synthesizes insights from technical advancements, ethical considerations, and real-world applications across healthcare, education, and creative sectors. We address critical privacy concerns and data protection challenges, noting struggles in AI-generated text detection and investigating AI's potential in enabling cyberattacks. We underscore the need for comprehensive governance systems and multidisciplinary cooperation. 
To provide a cohesive analysis of the ethical considerations surrounding AI-generated text tools, we employ a multifaceted ethical framework drawing on established theories. Utilitarianism, which seeks to maximize happiness for everyone; deontology, which places an emphasis on right and wrong; and Virtue Ethics, which analyzes the moral nature of deeds and actors, are all included in this framework. In this article, we use this approach to investigate AI ethics from a variety of angles, including privacy, prejudice, and social implications, as well as concerns of justice and fairness. Moreover, the study critically examines existing and proposed legal frameworks addressing AI ethics, identifying regulatory gaps and proposing adaptive policy recommendations to address the unique challenges posed by AI-generated text tools. Our review contributes a critical analysis of AI-generated text tools, their impacts, and the need for responsible innovation. The study provides precise guidelines for the ethical development and implementation of AI, highlighting the need to strike a balance between technical progress and ethical concerns to guarantee that AI technologies have a beneficial effect on society while protecting human values. The emergence of generative artificial intelligence (AI) signifies a substantial revolution in our methods of interacting with language and information.
... Along with the impact on professions and jobs, AI systems can influence individuals through the potential for misinformation to be easily generated and spread, potentially harming individuals and democratic processes (Schick, 2023). Various stakeholders have attempted to define broad policy guidelines for AI application across disciplines, industries, and economic sectors (Vetter et al., 2024). Scholars have begun to track the status of regulatory initiatives regarding AI worldwide. ...
... Only two studies (Vandemeulebroucke, 2024; Wörsdörfer, 2024b) did not mention the ChatGPT model. Seven studies (Fassbender, 2024; Khan and Umer, 2024; Piller, 2023; Rojas, 2024; Sison et al., 2023; Stahl and Eke, 2024; Vetter et al., 2024) conducted their research on ChatGPT, while one study (Salah et al., 2023) researched ChatGPT and the Bard model. Two studies (Bartlett and Camba, 2024; Bendel, 2023) conducted their research on image-generative AI models such as DALL-E 2, Stable Diffusion, and Midjourney. ...
Article
Full-text available
As generative artificial intelligence (generative AI) technology rapidly develops, new tools are being introduced to the market, and its use in many areas, from education to healthcare, is quickly increasing. Therefore, ethical research must keep pace with these developments and address the new challenges. In this way, AI can benefit society and prevent potential harm. This study was conducted to identify ethical issues in the use of generative AI, highlight prominent issues, and provide an overview through a systematic literature review. A systematic search was conducted in Scopus, Web of Science, and ScienceDirect databases to retrieve articles examining ethical aspects of generative AI with no year restrictions. The search terms were "generative artificial intelligence," "generative AI," "GenAI," or "GAI," with the combination of "ethic," "ethics," or "ethical." Studies were selected using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Forty-three articles were included in the review after the screening process. According to the research results, the "justice and fairness" principle was emphasized in all the articles examined. The least examined ethical principles were the principle of "solidarity", which expresses unity in society or group, and the principle of "dignity", which means the value an individual feels for himself and his rights. The authors of the 43 articles are mainly from the United States (n = 31), followed by China (n = 15) and the United Kingdom (n = 13). Of the 43 articles reviewed, 41 mentioned ChatGPT, albeit as an example. This study reviews the literature on the ethical use of generative AI and presents challenges and solutions.
... It is important to note that when the study took place, access to ChatGPT 3.5 was free of charge, meaning that the students faced no financial burdens or barriers in utilizing this tool during their writing process. To encourage ethical uses of GenAI (Fiesler et al., 2020; Vetter et al., 2024), the course policy did not allow the students to generate an entire essay or complete paragraphs with ChatGPT and turn in the AI-generated texts as their drafts and final assignments. Instead, the instructor demonstrated how to use GenAI partially and selectively in their report project to mitigate the risk of plagiarism and the ethical concerns with academic integrity. ...
... The findings of this study, through this lens, highlight the need for technical writing instructors to guide students in the ethical and effective use of ChatGPT within established writing guidelines (J. V. Jiang et al., 2024; Vetter et al., 2024). Encouraging the use of ChatGPT, while emphasizing the importance of ethical parameters, can help students enhance their writing skills without compromising their own vision and voice. ...
Article
Full-text available
Guided by the scholarly understanding of generative artificial intelligence, this study explores technical writing students’ engagement with ChatGPT during their writing process. This study is informed by the classical five canons of rhetoric, along with the contemporary reinterpretations of the canons. Employing a qualitative analysis of interviews, this study argues that the students’ engagement with ChatGPT allows for reframing and expanding the notions of the writing process and rhetorical canons in the previous literature.
... Ethical considerations in education involve personalized instruction, data dependency, biases in training data, academic integrity, and creativity. Guidelines developed by organizations like OpenAI and the U.S. Department of Education address broader frameworks for AI ethics in education [25]. The joint MLA-CCCC Task Force on Writing and AI provides recommendations emphasizing academic integrity and support for educators [26]. ...
... • Promoting transparency, safety, and ethics in AI use [25]. ...
Article
Full-text available
The report provides an opportunity for educational enrichment by offering essential information and analysis on one of the most important and exciting topics in contemporary science and technology. The choice to write about this topic is motivated by its relevance, complexity, and significance for modern society and technological development.
... The potential for AI to replicate and produce works derivative of existing human-created content without clear attribution or compensation further complicates IP rights. Developing legal and ethical frameworks that recognize and protect the contributions of both human creators and AI technologies, while encouraging innovation, is essential in navigating these concerns (Andrieux et al., 2024;Patton et al., 2023;Ray, 2023b;Vetter et al., 2024). ...
... Academic integrity, authorship and plagiarism: One of the most pressing concerns is the challenge to academic integrity. Generative AI's ability to produce essays, reports and research papers can facilitate plagiarism and undermine the development of critical thinking and problem-solving skills in students (Casal and Kessler, 2023; Gallent-Torres et al., 2023; Lund et al., 2023; Vetter et al., 2024). Distinguishing between student-generated work and AI-generated content becomes increasingly difficult, complicating the evaluation of students' understanding and mastery of the subject matter (Alier et al., 2024; Song, 2024). ...
Article
Purpose The purpose of this study is to comprehensively examine the ethical implications surrounding generative artificial intelligence (AI). Design/methodology/approach Leveraging a novel methodological approach, the study curates a corpus of 364 documents from Scopus spanning 2022 to 2024. Using the term frequency-inverse document frequency (TF-IDF) and structural topic modeling (STM), it quantitatively dissects the thematic essence of the ethical discourse in generative AI across diverse domains, including education, healthcare, businesses and scientific research. Findings The results reveal a diverse range of ethical concerns across various sectors impacted by generative AI. In academia, the primary focus is on issues of authenticity and intellectual property, highlighting the challenges of AI-generated content in maintaining academic integrity. In the healthcare sector, the emphasis shifts to the ethical implications of AI in medical decision-making and patient privacy, reflecting concerns about the reliability and security of AI-generated medical advice. The study also uncovers significant ethical discussions in educational and financial settings, demonstrating the broad impact of generative AI on societal and professional practices. Research limitations/implications This study provides a foundation for crafting targeted ethical guidelines and regulations for generative AI, informed by a systematic analysis using STM. It highlights the need for dynamic governance and continual monitoring of AI’s evolving ethical landscape, offering a model for future research and policymaking in diverse fields. Originality/value The study introduces a unique methodological combination of TF-IDF and STM to analyze a large academic corpus, offering new insights into the ethical implications of generative AI across multiple domains.
... The widespread adoption of AI tools raises issues related to intellectual property, data security, and academic honesty. For example, when a student relies on AI-generated code to solve an assignment, the question of authorship and academic integrity becomes problematic (Vetter, 2024). Similarly, if AI systems are trained on biased or incomplete datasets, there is a risk that these biases will be replicated in the educational content delivered to students. ...
Article
Full-text available
Generative artificial intelligence (generative AI) has emerged as one of the most transformative technological advancements of the 21st century. In the realm of computer science education, its potential to revolutionize curriculum design, pedagogy, and the overall learning experience has generated considerable interest. This paper offers a comprehensive theoretical analysis of the multifaceted impact of generative AI on computer science education. Distinct from empirical studies, this research exclusively engages in a rigorous discussion anchored in existing theoretical frameworks and scholarly insights. Drawing from constructivist learning theory, technology acceptance models, and ethical considerations, the paper explores how generative AI tools might reshape the roles of educators and learners, transform the delivery of educational content, and stimulate innovation in computer science curricula. Furthermore, the analysis interrogates the potential challenges and risks associated with these technologies, including the dilemmas of academic integrity, algorithmic bias, and a possible overreliance on automation. The discussion concludes with reflections on the future trajectory of AI-enhanced learning environments and recommendations for theoretical development that may guide future empirical inquiries.
... Stakeholders try to formulate broad A.I. policy guidelines across diverse industries. They emphasize the need for the accountability of A.I. systems [60]. They suggest using NLP models as supplements and not replacements for human interaction. ...
Article
Full-text available
Open artificial intelligence (A.I.) applications, including ChatGPT, are gaining recognition across diverse research domains, including healthcare, due to their effective handling of inquiries related to A.I. implementation in healthcare. Despite the growing use of blockchain technology in healthcare systems, existing research struggles with storage limitations, computational efficiency, and scalability issues. Therefore, an optimized approach is required to address these critical challenges, including ethical risks. This study explores the synergistic potential of ChatGPT's integration with blockchain technology. This study used a mixed methods approach. Content analysis was used to analyze the qualitative and quantitative data generated by the ChatGPT application. The Byte Pair Encoding (BPE) strategy is used to compress the proposed versions of smart contracts. Code metrics are used to evaluate original and compressed versions of smart contracts. The research identifies seven primary overhead challenges in BCT, with maintenance cost being a less‐explored aspect. The BPE's compression results show that 23%–26% of data size was reduced in compressed smart contract versions. Moreover, the study's results show 32.5% and 35% performance improvement in two compressed versions of smart contracts, respectively. The study findings showed that ChatGPT removed the redundant checks by simplifying variable names and adjusting spacing for better smart contracts. Ethical implications are recognized, including privacy, biases, transparency, and academic integrity. ChatGPT demonstrates synergistic capabilities when integrated with BCT. The proposed research is better at overcoming overheads in blockchain‐based healthcare systems than the existing works. ChatGPT holds promise in addressing overhead challenges in healthcare BCT. Its potential role in healthcare presents valuable applications to improve the efficiency and effectiveness of BCT in the healthcare domain.
... Moreover, the local context plays a crucial role in how AI ethics are applied and understood. Vetter et al. (2024) propose a framework for local interrogation of AI ethics, focusing on text generators like ChatGPT. Their study highlights the necessity of adapting ethical guidelines to fit local educational contexts to address specific ethical issues effectively. ...
Article
Full-text available
Introduction: The advent of artificial intelligence in education has brought forward tools like ChatGPT, which can potentially enhance students' academic writing abilities. However, there is limited empirical evidence examining its effectiveness and students' perceptions of its utility in academic contexts. Purpose: This study aimed to measure the effect of using ChatGPT on students' academic writing abilities and to investigate students’ perceived experiences regarding the use of ChatGPT in their writing process. Method: An explanatory mixed-method design was employed, incorporating a quantitative experiment followed by a qualitative investigation. The quantitative phase involved 102 fifth-semester students from an English education department at a university in Indonesia. These students were randomized into clusters based on their proximate writing test scores, resulting in two homogenous classes of 25 students each. These classes were then assigned to either an experimental group, which received 14 sessions using ChatGPT as a learning tool for academic writing, or a control group, which received 14 sessions using non-generative tools. Pre-tests and post-tests were administered to both groups. The qualitative phase involved interviews with 10 selected students from the experimental group to explore their perceived experiences with ChatGPT. Results: The pre-test scores indicated homogeneity between the experimental and control groups, with scores of 57.15 and 56.35 respectively. After the intervention, the post-test scores revealed significant improvement in the experimental group, with an average score of 81.11 compared to 60.30 in the control group. Statistical analysis demonstrated a significant disparity between the two groups (p-value = 0.0000 < 0.05 for the experimental group and p-value = 0.067 > 0.05 for the control group), suggesting that the use of ChatGPT significantly enhanced students' academic writing abilities. 
The qualitative findings supported these results, with students reporting that ChatGPT facilitated idea generation, organization, and construction in their writing process. Conclusion: The study concludes that ChatGPT significantly improves students' academic writing abilities, as evidenced by both quantitative and qualitative data. The tool's capacity to assist in the formulation and organization of ideas presents substantial potential for its use in academic research and writing. Given these findings, ChatGPT could be a valuable addition to the educational toolkit for enhancing academic writing skills.
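The significance claim in the abstract above rests on comparing post-test means between an experimental and a control group. As a rough, self-contained illustration of that kind of test (using made-up scores, not the study's data), Welch's t-statistic can be computed with only the Python standard library:

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two independent samples,
    the kind of comparison implied by the post-test analysis."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    standard_error = math.sqrt(var_a / len(sample_a) + var_b / len(sample_b))
    return (mean_a - mean_b) / standard_error

# Hypothetical post-test scores (illustrative only, not the study's raw data)
experimental = [78, 84, 80, 85, 79, 83, 81, 82, 80, 77]
control = [58, 62, 61, 59, 63, 60, 57, 64, 59, 61]

t = welch_t(experimental, control)
# A |t| far above roughly 2.1 (the two-tailed critical value at
# alpha = 0.05 for samples of this size) indicates significance.
print(round(t, 2))
```

With SciPy available, `scipy.stats.ttest_ind(experimental, control, equal_var=False)` would return both the statistic and the p-value directly.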
... A balanced approach, supported by rules for the ethical, honest, and fair use of AI in education and learning, is advocated, allowing flexibility at both institutional and personal levels (Cacho, 2024). Integrating artificial intelligence into education also calls for a localized ethical framework emphasizing pedagogy, student agency, and access, one that accounts for the negotiation between teachers and students (Vetter et al., 2024). Information sessions presenting both the advantages and disadvantages of artificial intelligence aim to equip students to make informed decisions, yet students still lack clarity about AI use despite the development of such rules (Ross & Baines, 2024). ...
Article
Full-text available
The purpose of this paper is to investigate the complex relationship between AI and personal data used in educational settings. The paper also highlights the importance of a strong thesis statement to direct the research. With particular emphasis on data privacy, security, and algorithmic bias, this study investigates the ethical concerns associated with the integration of artificial intelligence. To protect students' right to privacy, it emphasizes the significance of obtaining informed consent and maintaining transparency in data collection and processing. While acknowledging the difficulties posed by data privacy and security concerns, the research highlights the potential benefits of artificial intelligence in improving educational outcomes and personalizing learning experiences. Additionally, the paper emphasizes the importance of developing ethical guidelines and policies that keep pace with rapid advancements in artificial intelligence technology, so that students' data rights are maintained and honoured. By addressing these important concerns, this paper seeks to contribute to a thorough understanding of the ethical and legal difficulties associated with integrating artificial intelligence into education. Furthermore, the study promotes a multidisciplinary approach to navigating these complexities properly.
... Participants stated that they could not distinguish whether the products created were prepared directly with AI or not. Vetter et al. (2024), in their study focusing on the ethical concerns that arise in writing texts with AI, stated that they were constantly in doubt about whether the work was the product of the student or the AI. It is evident that the ethical and safety aspects of AI-based learning tools are important for teachers. ...
Article
Full-text available
This study aims to reveal teachers' thoughts on AI: to expose, from the teacher's perspective, the positive and negative effects of AI technology on students and the educational process, and to create a source for predicting the problems that may be encountered during its integration into the education system. A phenomenological design, one of the qualitative research methods, was used. The participant group consisted of eight teachers from different branches, selected through maximum variation sampling, a type of purposeful sampling. In the focus group interview, participants were asked the questions in a semi-structured interview form prepared on the basis of expert opinions, and the data of the research were collected. The collected data were analyzed with descriptive analysis, and coding was carried out. The research findings indicate that AI is perceived differently by teachers, that its applications vary, that the technology presents both advantages and downsides, and that there are individual and societal concerns that warrant attention. Teachers generally regard AI as impressive, helpful, and useful, exhibit favorable attitudes toward its increasingly widespread usage, and demonstrate a high level of awareness of the technology. It is recommended to conduct applied research on the use of AI in education to clarify its positive and negative aspects.
... Supporting this, a recent study on U.S. universities' GenAI policies revealed an open yet cautious approach, prioritizing ethical usage, accuracy, and data privacy, while providing resources like workshops and syllabus templates to aid educators in adapting GenAI effectively in their teaching practices (Wang et al., 2024). Several studies have also highlighted the importance of involving students in the development of GenAI training curricula and policies and guidelines for the ethical and responsible use of GenAI that directly affect their academic work (Camacho-Zuñiga et al. 2024;Magrill & Magrill, 2024;Moya & Eaton, 2024;Vetter et al., 2024;Goldberg et al., 2024;Bannister et al., 2024;Chen et al., 2024;Malik et al., 2024). ...
Conference Paper
The integration of Generative Artificial Intelligence (GenAI) in higher education offers transformative opportunities alongside significant challenges for both educators and students. This study, part of the ERASMUS+ project Teaching and Learning with Artificial Intelligence (TaLAI), aims to explore the familiarity, usage patterns, and perceptions of GenAI in academic settings. A survey of 152 students (mainly from Germany, Belgium, and the Netherlands) and 118 educators (81 professors, 37 trainers) reveals widespread GenAI use, with ChatGPT being the most common tool. Findings indicate both enthusiasm for GenAI's potential benefits and concerns regarding ethical implications, academic integrity, and its impact on learning processes. While students and educators recognize GenAI's ability to enhance learning and productivity, uncertainties persist regarding assessment practices and its potential short- and long-term effects on various aspects such as decision making, creativity, and memory performance. The study also highlights gaps in institutional support and policy, emphasizing the need for clearer communication to ensure responsible AI adoption. This paper contributes to the ongoing discussions on GenAI in higher education and is aimed at educators, policymakers, and researchers concerned with its responsible use. By addressing both students' and educators' perspectives and concerns, institutions and policymakers can develop well-informed strategies and guidelines that promote responsible and effective use of GenAI, ultimately enhancing the overall teaching and learning experience in academic environments.
... Findings reveal the concept of a "local ethic," a dynamic, student-teacher negotiated framework that shapes AI's role in learning environments. The study recommends that educators foster critical discussions on AI authorship, agency, and reliability while adapting assignments and policies to reflect these evolving ethical considerations [12]. ...
Preprint
The growing use of generative AI tools like ChatGPT has raised urgent concerns about their impact on student learning, particularly the potential erosion of critical thinking and creativity. As students increasingly turn to these tools to complete assessments, foundational cognitive skills are at risk of being bypassed, challenging the integrity of higher education and the authenticity of student work. Existing AI-generated text detection tools are inadequate; they produce unreliable outputs and are prone to both false positives and false negatives, especially when students apply paraphrasing, translation, or rewording. These systems rely on shallow statistical patterns rather than true contextual or semantic understanding, making them unsuitable as definitive indicators of AI misuse. In response, this research proposes a proactive, AI-resilient solution based on assessment design rather than detection. It introduces a web-based Python tool that integrates Bloom's Taxonomy with advanced natural language processing techniques including GPT-3.5 Turbo, BERT-based semantic similarity, and TF-IDF metrics to evaluate the AI-solvability of assessment tasks. By analyzing surface-level and semantic features, the tool helps educators determine whether a task targets lower-order thinking such as recall and summarization or higher-order skills such as analysis, evaluation, and creation, which are more resistant to AI automation. This framework empowers educators to design cognitively demanding, AI-resistant assessments that promote originality, critical thinking, and fairness. It offers a sustainable, pedagogically sound strategy to foster authentic learning and uphold academic standards in the age of AI.
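The tool described above evaluates whether an assessment task targets lower-order or higher-order Bloom's levels using surface and semantic features. A heavily simplified, purely illustrative sketch of the surface-level idea (hypothetical verb lists; not the authors' actual GPT-3.5/BERT/TF-IDF pipeline) might look like:

```python
# Toy verb lists mapping task prompts to Bloom's taxonomy orientation.
# These sets are illustrative assumptions, not the tool's real lexicon.
LOWER_ORDER = {"define", "list", "summarize", "describe", "recall", "identify"}
HIGHER_ORDER = {"analyze", "evaluate", "design", "critique", "create", "justify"}

def bloom_orientation(prompt: str) -> str:
    """Classify a prompt as lower-order (more AI-solvable) or
    higher-order (more AI-resistant) by counting matched verbs."""
    words = {w.strip(".,!?;:").lower() for w in prompt.split()}
    lower = len(words & LOWER_ORDER)
    higher = len(words & HIGHER_ORDER)
    if higher > lower:
        return "higher-order (more AI-resistant)"
    if lower > higher:
        return "lower-order (more AI-solvable)"
    return "mixed/unclear"

print(bloom_orientation("Summarize the main causes of World War I."))
print(bloom_orientation("Design and justify an experiment to test the claim."))
```

The actual framework augments this kind of lexical signal with semantic similarity scores, which is what lets it catch paraphrased prompts a pure keyword match would miss.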
... Localized and human-centered frameworks are pivotal for addressing AI's ethical challenges. Vetter et al. (2024) introduce the concept of a "local ethic," advocating for classroom-specific ethical considerations to address the unique dynamics between teachers, students, and AI tools. ...
Article
Full-text available
This systematic literature review examines the ethical frameworks surrounding the adoption of artificial intelligence (AI) in the professional practices of higher education lecturers. As AI becomes increasingly prevalent in academia, it presents ethical challenges, including concerns about academic integrity, data privacy, algorithmic fairness, and its responsible implementation in teaching and research. These issues underscore the need for robust ethical guidelines to support educators in navigating the complexities of AI integration. The study aims to analyze existing ethical frameworks for AI adoption in higher education, identify key challenges and opportunities in AI integration, and develop comprehensive guidelines for responsible AI implementation. The methodology followed the PRISMA guidelines, employing qualitative systematic review through content analysis and thematic synthesis. Advanced searches in Scopus and Web of Science databases identified 34 primary studies that met the inclusion criteria, focusing on peer-reviewed articles published in 2024 about AI ethics in higher education. The findings were divided into three themes: (1) Ethical Concerns and Academic Integrity, (2) Pedagogical Strategies and Educational Impact, and (3) Policies and Frameworks for AI Integration. Results indicate a growing need for standardized ethical frameworks, with 85% of studies emphasizing the importance of balancing innovation with ethical considerations. Conclusions highlight the necessity for adaptable and inclusive frameworks that prioritize accountability, transparency, and equity in AI use within higher education.
... Researchers can leverage this structure to develop AI systems that are capable of identifying patterns, relationships, and hierarchies within their datasets. Ethical uses of algorithms also carry additional communal benefits (Vetter et al. 2024b). For instance, researchers can develop LLMs to reduce Wikipedia's community workload (Smith et al. 2020), maintain human judgment in decision-making, support diverse workflows, foster positive engagement with editors (especially newcomers), and establish trust in both people and algorithms. ...
Article
Full-text available
As a collaboratively edited and open-access knowledge archive, Wikipedia offers a vast dataset for training artificial intelligence (AI) applications and models, enhancing data accessibility and access to information. However, reliance on the crowd-sourced encyclopedia raises ethical issues related to data provenance, knowledge production, curation, and digital labor. Drawing on critical data studies, feminist posthumanism, and recent research at the intersection of Wikimedia and AI, this study employs problem-centered expert interviews to investigate the relationship between Wikipedia and large language models (LLMs). Key findings include the unclear role of Wikipedia in LLM training, ethical issues, and potential solutions for systemic biases and sustainability challenges. By foregrounding these concerns, this study contributes to ongoing discourses on the responsible use of AI in digital knowledge production and information management. Ultimately, this article calls for greater transparency and accountability in how big tech entities use open-access datasets like Wikipedia, advocating for collaborative frameworks prioritizing ethical considerations and equitable representation.
... Emphasises the need for careful consideration and integration of GenAI technologies in education.
134. van den Berg and du Plessis (2023). Contribution of GenAI tools in lesson planning and critical thinking: ChatGPT provides lesson plans and support mechanisms, enhancing critical thinking.
135. Vetter et al. (2024). Ethical frameworks for text generators (TGs) in education: an emergent 'local ethic' framework explores unique ethical considerations, values, and norms in GenAI use in education.
Objective optimisation problem in education: significant improvement in GenAI knowledge and skill mastery among students; natural language understanding, computer vision, and biometrics are the most suitable technologies for higher education.
137. Wang et al. (2023). GenAI's impact on students' creativity and learning performance: GenAI capability in HEIs significantly affects self-efficacy, creativity, and learning ...
Article
This systematic review investigates the impact of generative artificial intelligence (GenAI) tools on developing academic skills in higher education. Analysing 158 studies published between 2021 and 2024, it focuses on the impact of GenAI tools on the development of cognitive, technical and interpersonal skills. The results reveal that 94% of the sampled studies reported significant improvements in cognitive skills, like critical thinking, problem‐solving, analytical and metacognitive abilities, facilitated by personalised learning and feedback. Indeed, the development of technical skills was reported in research (24%), writing (26%), data analysis (33%) and technical literacy (18%). Additionally, GenAI tools were found to promote interpersonal skills by fostering interactive and engaging learning environments, with notable skills development in communication (24%), organisation (26%), empathy (5%) and teamwork (45%). Hence, this review underscores the importance of ethical and responsible use of GenAI tools, ongoing monitoring and active stakeholder engagement to maximise their benefits in developing cognitive, technical and interpersonal skills in higher education. They offer a promising avenue for academic advancement by fostering critical thinking, enhancing technical proficiency and promoting effective communication and teamwork. Therefore, GenAI tools significantly enhance academic skills; however, their integration requires a robust ethical framework and sustained examination of their long‐term impacts.
... While accuracy is a common area of generative AI research, another important area of research relates to the ethics of these generative systems. Ethical AI research is generally broken down into a number of common topic areas including: Bias, Robustness, Reliability, and Toxicity (Zhuo, Huang, Chen, & Xing, 2023; Vetter et al., 2024; Pant et al., 2024). ...
Article
Full-text available
The prevalence of Artificial Intelligence (AI), in particular Large Language Models (LLMs) in multiple areas of society is rapidly growing. This surge in popularity has attracted users of all age groups, leading to a substantial increase in the number of individuals interacting with AI tools. This research aims to examine the way both current and new users engage with ChatGPT (a generative AI chatbot developed by OpenAI and launched in 2022) specifically for academic purposes, and evaluate the effectiveness of this engagement. The project seeks to scrutinize the accuracy of the results produced by ChatGPT, as well as the functionality of the interface and prompt generation within ChatGPT. Additionally, concerns regarding the ethical implications of employing an AI agent for academic research and writing along with accessibility and availability are examined. The study involves college students specializing in the field of biological science as well as their use of ChatGPT to develop research reports. This study finds that despite advances in ChatGPT, users struggle with creating effective inputs due to user interface challenges. It calls for improved LLM interfaces and user education while emphasizing the need for equitable access and ethical considerations by developers.
... The study generated important implications for balancing the strengths and limitations of ChatGPT and human feedback for assessing student essays. Other research also examined the role of generative AI tools in L1 multimodal writing instruction (Tan et al., 2024), L1 student writers' perceptions of ChatGPT as writing partner and AI ethics in college composition classes (Vetter et al., 2024), and the collaborative experience of writing instructors and students in integrating generative AI into writing (Bedington et al., 2024). ...
Article
Full-text available
Contemporary Filipino literature, while evolving, often reflects and reinforces gender norms. This study delves into how gender bias is linguistically constructed in these narratives, examining the potential impact of such representations on societal perceptions and behaviours. To investigate this, a Systemic Functional Linguistics approach, specifically the ideational metafunction, was employed to analyse seven selected Filipino literary stories. The analysis focused on linguistic choices, including vocabulary, personal noun usage and the depiction of physical attributes, emotions and behaviour. The findings revealed that the use of simple and label-like terms frequently reinforces traditional gender stereotypes, while more complex and nuanced language could either reproduce or challenge these stereotypes. Male personal nouns were often associated with masculine occupations, while female nouns were linked to stereotypical roles. Moreover, the portrayal of physical attributes, emotions and behaviour often adhered to traditional gender norms, with males depicted as strong and powerful, and females as fragile and emotionally reactive. Such representations can limit individual expression and perpetuate gender inequality. This study underscores the power of language in shaping gender perceptions and behaviours. It highlights the need for critical analysis of literary texts to identify and challenge gender bias. By understanding how language constructs gender, we can work towards more equitable and inclusive representations in literature and society.
... Another notable trend in terms of timing was related to academic integrity. As corroborated in the literature, this concern was concentrated primarily on generative AI, the creation of texts of various kinds, and education (Vetter et al., 2024). ...
Article
Full-text available
This article explores the impact and trends of artificial intelligence (AI) in higher education through a bibliographic study of the Scopus database. A three-stage documentary review methodology was employed: a bibliometric analysis of publications, a thematic analysis of the ten most recent open-access articles, and the integration of the data into a holistic view of the topic. The findings reveal that the transformation of higher education following the introduction of AI has reshaped the personalization of learning, the automation of assessment, and the optimization of university management. The adoption of AI in universities faces challenges such as technological inequality and inadequate infrastructure, and has brought to light various ethical concerns related to malpractice, privacy, and equity. The findings provide a solid basis for future research and for the effective implementation of AI in educational contexts.
... Additionally, as noted by Al-kfairy et al. [41], in order to use AI systems in education, it is essential that ethical concerns be addressed, such as ensuring accountability and fairness in explainable AI. Similarly, Vetter et al. [42] highlighted the significance of a local interrogation of AI ethics, especially where sensitive data are involved and explainability is essential for fostering trust and supporting both educators and students in decision-making. Integrating these perspectives into future research could help bridge the gap between practical model performance and the ethical considerations of deploying AI in educational settings. ...
Article
Full-text available
Many of the articles on AI in education compare the performance and fairness of different models, but few specifically focus on quantitatively analyzing their explainability. To bridge this gap, we analyzed key evaluation metrics for two machine learning models, an artificial neural network (ANN) and a decision tree (DT), with a focus on their performance and explainability in predicting student outcomes using the OULAD. The methodology involved evaluating the DT, an intrinsically explainable model, against the more complex ANN, which requires post hoc explainability techniques. The results show that, although the feature-based and structured decision-making process of the DT facilitates natural interpretability, it struggles to model complex data relationships, often leading to misclassification. In contrast, the ANN demonstrated higher accuracy and stability but lacked transparency. Crucially, the ANN's predictions could be explained with high fidelity using the LIME and SHAP methods. The results of the experiments verify that the ANN consistently outperformed the DT in prediction accuracy and stability, especially when paired with the LIME method. However, improving the interpretability of ANN models remains a challenge for future research.
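The contrast the abstract draws between the DT's intrinsic interpretability and the ANN's post hoc explanations can be illustrated with a toy decision path. The feature names below loosely echo OULAD-style activity data but are hypothetical, not the study's actual model:

```python
# Minimal sketch of why a decision tree is "intrinsically explainable":
# the prediction *is* a readable chain of rules, unlike an ANN's weights,
# which need post hoc tools such as LIME or SHAP to interpret.
def predict_outcome(clicks_per_week: float, assessments_submitted: int):
    """Return a (label, rule_path) pair for a hand-built two-level tree."""
    path = []
    if assessments_submitted < 3:
        path.append("assessments_submitted < 3")
        label = "at risk"
    else:
        path.append("assessments_submitted >= 3")
        if clicks_per_week < 20:
            path.append("clicks_per_week < 20")
            label = "at risk"
        else:
            path.append("clicks_per_week >= 20")
            label = "likely to pass"
    return label, path

label, path = predict_outcome(clicks_per_week=35, assessments_submitted=4)
print(label, "because", " AND ".join(path))
```

A fitted `sklearn.tree.DecisionTreeClassifier` exposes the same kind of rule chain via `export_text`, which is what makes the DT's reasoning directly auditable where the ANN's is not.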
... Moreover, the use of artificial intelligence in education raises significant ethical questions (Kajiwara & Kawabata, 2024). AI operates on the basis of algorithms and datasets that, although technically advanced, are not free of biases (Corrêa et al., 2023; Vetter et al., 2024). Personalizing learning through AI, for example, can generate inequalities if the algorithms do not adequately account for contextual differences or if the data used to train the system contain biases. ...
Article
Full-text available
This article explores the impact of artificial intelligence (AI) on legal education, analyzing its benefits and the ethical and pedagogical challenges it poses for the university training of future jurists. The objective was to evaluate how AI can transform legal learning without compromising the ethical and pedagogical integrity of the educational process. Using a qualitative methodology with a documentary design, a content analysis of various educational AI tools was carried out, evaluating aspects such as personalization of learning, accessibility, automation of feedback, and usability. The results indicate that AI makes it possible to personalize learning and optimize feedback and assessment in real time, although it poses risks of algorithmic bias and limited accessibility. Moreover, the use of AI can alter classroom dynamics and reduce direct interaction with teachers, affecting students' ethical development. In conclusion, while AI holds great potential for legal education, its implementation must be accompanied by active supervision and a solid ethical framework that guarantees inclusive and equitable education, preserving quality and pedagogical values in the learning of law.
... For example, the use of ChatGPT in education may lead to ethical and academic integrity issues. 7,8 Previous studies have shown that under certain conditions, ChatGPT can sometimes produce inaccurate outputs, including references, citations, mathematical expressions, and scientific conclusions. [8][9][10] Nevertheless, despite challenges, students are still eager to use ChatGPT due to its significant usefulness in assisting with their academic tasks. ...
... Additionally, the growing accessibility of AI-generated content has blurred the boundaries of originality in academic work, making it more difficult to maintain and enforce academic standards [58]. The potential misuse of AI tools threatens to undermine the core values of higher education, such as critical thinking, ethical scholarship, and independent learning [59]. ...
Article
Full-text available
This paper explores the potential of generative artificial intelligence (AI) to transform higher education. Generative AI is a technology that can create new content, like text, images, and code, by learning patterns from existing data. As generative AI tools become more popular, there is growing interest in how AI can improve teaching, learning, and research. Higher education faces many challenges, such as meeting diverse learning needs and preparing students for fast-changing careers. Generative AI offers solutions by personalizing learning experiences, making education more engaging, and supporting skill development through adaptive content. It can also help researchers by automating tasks like data analysis and hypothesis generation, making research faster and more efficient. Moreover, generative AI can streamline administrative tasks, improving efficiency across institutions. However, using AI also raises concerns about privacy, bias, academic integrity, and equal access. To address these issues, institutions must establish clear ethical guidelines, ensure data security, and promote fairness in AI use. Training for faculty and AI literacy for students are essential to maximize benefits while minimizing risks. The paper suggests a strategic framework for integrating AI in higher education, focusing on infrastructure, ethical practices, and continuous learning. By adopting AI responsibly, higher education can become more inclusive, engaging, and practical, preparing students for the demands of a technology-driven world.
... This aligns with the prior study by Bag et al. [56], which emphasizes that reliability is a key component of ethics. The interplay between trust and distrust expresses the ethical values associated with reliability [57]. Hence, the hypothesis is stated as: ...
Chapter
Developing academic policies responsive to an institution's specific needs requires skills similar to those needed by a writing tutor, or a writing center director. In this paper, a Writing Center Director and a Technology Ethicist describe how they developed an AI policy that reflected best practices in both fields, and the ways in which writing together allowed them to clarify the educational goals for the institution as well. This act of collaborative writing ultimately outlines new possibilities for writing center advocacy that go beyond outreach and rely on enacting writing center praxis within university administration.
Article
Full-text available
The rapid integration of artificial intelligence (AI) into language education presents complex ethical challenges that demand critical examination. One way to explore the multifaceted ethical implications of AI tools in language teaching and learning through the lens of transparency, accountability, and equity is by drawing upon the OECD principles for responsible AI implementation. The study investigates three primary ethical dimensions: transparency in AI tool usage, accountability for AI-mediated learning outcomes, and equity in access and implementation. Through a comprehensive review of current literature and practical implementations, the paper explores guidelines for ethical AI integration into language education that prioritize student learning while mitigating potential technological risks. Recommendations emerging from the analysis include emphasizing the need for transparent disclosure protocols, honing students’ awareness of AI capabilities and limitations, and establishing responsible accountability mechanisms. The research ultimately argues for a balanced approach that leverages AI's transformative potential while maintaining human-centered pedagogical principles, highlighting the critical role of ongoing evaluation and adaptive strategies in navigating the ethical frontiers of AI in language education.
Article
Information literacy (IL) is a fundamental skill on the threshold of a new era of massive artificial intelligence (AI) technology usage. In 2023, the Chinese government highlighted its importance by releasing new IL standards with four dimensions: information metacognition, information knowledge, information application and creation, and information ethics. Building the standards took a long time and was based on investigation of the vast majority of education institutions and public schools. However, international schools were barely noticed, despite such settings having evident advantages in implementing inquiry-based projects that can naturally cultivate students' IL abilities. Therefore, taking a typical international high school in Beijing as the case, the aim was to identify the starting point for carrying out IL education by revealing the current IL status and the correlations among the four IL dimensions. Employing a mixed-methods approach, a questionnaire was distributed in the target school, followed by interviews with high school students who had conducted their research following certain IL instructions and completed their first academic essays. The results indicated the importance of information knowledge, which can have a significant positive impact on information application and creation; high school students also valued information knowledge greatly and emphasized AI as a new form of information access tool, one that demands greater attention to critical thinking.
Article
A significant portion of the academic community, including students, teachers, and researchers, has incorporated generative artificial intelligence (GenAI) tools into their everyday tasks. Alongside increased productivity and numerous benefits, specific challenges that are fundamental to maintaining academic integrity and excellence must be addressed. This paper examines whether ethical implications related to copyrights and authorship, transparency, responsibility, and academic integrity influence the usage of GenAI tools in higher education, with emphasis on differences across academic segments. The findings, based on a survey of 883 students, teachers, and researchers at University North in Croatia, reveal significant differences in ethical awareness across academic roles, gender, and experience with GenAI tools. Teachers and researchers demonstrated the highest awareness of ethical principles, personal responsibility, and potential negative consequences, while students—particularly undergraduates—showed lower levels, likely due to limited exposure to structured ethical training. Gender differences were also significant, with females consistently demonstrating higher awareness across all ethical dimensions compared to males. Longer experience with GenAI tools was associated with greater ethical awareness, emphasizing the role of familiarity in fostering understanding. Although strong correlations were observed between ethical dimensions, their connection to future adoption was weaker, highlighting the need to integrate ethical education with practical strategies for responsible GenAI tool use.
Book
Full-text available
Bringing together leading scholars and practitioners, Rethinking Writing Education in the Age of Generative AI offers a timely exploration of pressing issues in writing pedagogies within an increasingly AI-mediated educational landscape. From conceptual and empirical work to theory-guided praxis, the book situates the challenges we face today within the historical evolution of writing education and our evolving relationship with AI technologies. Covering a range of contexts such as L2/multilingual writing, first-year writing, writing centers, and writing program administration and faculty development, the book examines various AI-informed writing pedagogies and practices. Drawing on interdisciplinary perspectives from writing studies, education, and applied linguistics, the book bridges theory and practice to address critical questions of innovation, ethics, and equity in AI-supported teaching. This book is essential for writing educators and researchers looking to leverage AIs to facilitate the teaching and learning of writing in critical and transformative ways.
Digital multimodal composing (DMC) has garnered attention in second language (L2) writing classrooms. The introduction of artificial intelligence (AI) has been a game-changer in this field, providing tools that amplify DMC by translating text into images and videos. However, there is a research gap on how these tools are utilised by learners. This study aims to fill this gap by applying a resemiotisation perspective to examine how learners employ AI tools to integrate linguistic, semiotic, and technological elements in translating written genres into video formats. Conducted at a comprehensive university in China, this research involved 75 undergraduates in an English writing course. During the course, students used an AI-powered text-to-video platform called Pictory to convert technical proposals into videos. Pictory facilitated the students’ video creation by enabling them to craft search queries to generate video clips. Data sources included students’ technical proposal outlines, video composition plans, reflections on the text-to-video generation process, and the produced video compositions. Through a content analysis of students’ reasons for revising search queries during their text-to-video conversion processes as well as a comparative analysis of their original and revised search texts alongside the multimodal elements within the resulting videos, the study revealed that text-to-video resemiotisation in AI-enhanced DMC involved students modifying written texts (transformation) and the AI technology converting them into videos (transduction), diverging from traditional DMC where both the transformation and transduction stages are overseen by human agents. Specifically, during the text-to-video resemiotisation processes, students implemented various customisation initiatives targeting search credibility, scope, relevance, and modality to generate suitable video clips. 
This study enriches our understanding of DMC in AI-enhanced learning contexts, providing insights for future DMC curriculum development that effectively leverages AI tools to improve learners’ DMC skills.
Article
The integration of artificial intelligence (AI) technologies into education presents both significant opportunities and challenges for educators. For AI to be effectively and ethically implemented in the educational environment, teachers must possess the necessary competencies and an understanding of the ethical implications associated with these technologies. The article examines critical aspects of teacher competence and the ethical considerations that must be addressed for the successful implementation of AI in education. Using data from a survey conducted among 102 participants, we analyse the distribution of teachers’ qualification levels and their views on the ethical challenges, advantages, and disadvantages arising from the use of AI in education, as well as teachers’ readiness to use the new technology (opinions on how AI will improve their work, how easy AI is to use, their general attitude toward and intention to use the technology in the future, and the organisational support available alongside teachers’ individual characteristics). The results show that the majority of teachers have the highest qualifications but only an average level of digital competence and a low level of AI literacy, which indicates the need to improve teachers’ skills in implementing AI; teachers’ self-assessment of the ethics of using AI is also higher than their assessed understanding, knowledge, and practical consideration of ethical norms. It is important to plan and develop critical competencies for the effective use of AI in education, ensuring its safe and ethical implementation based on technological and pedagogical training and the formation of ethical literacy.
Article
Full-text available
This collective systematic literature review is part of an Erasmus+ project, “TaLAI: Teaching and Learning with AI in Higher Education”. The review investigates the current state of Generative Artificial Intelligence (GenAI) in higher education, aiming to inform curriculum design and further developments within digital education. Employing a descriptive, textual narrative synthesis approach, the study analysed literature across four thematic areas: learning objectives, teaching and learning activities, curriculum development, and institutional support for ethical and responsible GenAI use. The review analysed 93 peer-reviewed articles from eight databases using a keyword-based search strategy, a collaborative coding process involving multiple researchers, in vivo coding and transparent documentation. The findings provide an overview of recommendations for integrating GenAI into teaching and learning, contributing to the development of effective and ethical AI-enhanced learning environments in higher education. The literature reveals consensus on the importance of incorporating GenAI into higher education. Common themes like mentorship, personalised learning, creativity, emotional intelligence, and higher-order thinking highlight the persistent need to align human-centred educational practices with the capabilities of GenAI technologies.
Article
Artificial intelligence (AI) has developed extensively, impacting different sectors of society, including higher education, and has attracted the attention of various educational stakeholders, leading to a growing body of research on its integration into education. Hence, this systematic literature review examines the impact of integrating AI tools in higher education on students' personal and collaborative learning environments. Analysis of 148 articles published between 2021 and 2024 indicates that AI tools improve personalised learning and assessment, communication and engagement, and the scaffolding of performance and motivation. Additionally, they promote a collaborative learning environment by providing peer-learning opportunities, enhanced learner-content interaction and cooperative learning support. Effective integration depends on strategies such as skills development, ethical use, academic integrity and instructional content design. Acknowledged limitations include ethical considerations, particularly privacy and bias, which require ongoing attention. Hence, it is recommended to strike a good balance between AI-mediated and human interaction in learning environments, a key area of future exploration.
Chapter
This chapter examines AI's role in enhancing cultural intelligence in transnational higher education. AI tools personalize learning, enhance cross-cultural communication, and develop responsive curricula. The chapter addresses ethical concerns like data privacy and algorithmic bias while advocating for AI to foster inclusive learning environments. It explores AI's impact on curriculum design, assessment, and professional development, stressing ethical implementation and adaptation. The conclusion forecasts AI's influence on educational practices and cultural diversity appreciation.
Article
Full-text available
This systematic literature review explored the impact of integrating AI tools in higher education using the Zone of Proximal Development (ZPD) by Lev Vygotsky. It examined how AI tools assist students in identifying and operating within their ZPD, how to create and facilitate a collaborative learning environment, and how to provide the necessary scaffolding for effective learning. The sample included 158 empirical studies which were retrieved from Web of Science, Scopus, and ERIC, published between 2021 and 2024. Findings indicated that AI tools assist learners in personalising their self-assessment through social and technological interactions and effective communication; they improve motivation, learning engagement, and learning support, which leads to better academic performance, student maturation, and development. Additionally, AI tools were found suitable for creating collaborative learning environments, empowering learners, and facilitating meaningful interactions. Furthermore, the results emphasise the need for educators’ professional development, ethical AI deployment, and the integration of AI into designing meaningful learning experiences. These results indicate that there should be equitable access to training and effective resolution of ethical challenges that may undermine the integrity of the learning process. The study highlights strategic AI integration to significantly enhance learning outcomes and student engagement, focusing on academic integrity and complementing traditional educational methods. It recommends longitudinal studies to assess the long-term impact of AI on learning outcomes, student engagement, and the development of critical thinking and problem-solving skills.
Article
This study explores how confidence levels in user prompts affect AI-generated resume text. Using six varied prompts for AI models ChatGPT-3.5, Gemini, and Perplexity, it examines how AI interprets and responds to different confidence levels. The findings reveal significant differences in AI-generated resumes based on prompt confidence, highlighting the need to adapt resume pedagogy for the AI age. Emphasizing the importance of teaching genre conventions and developing critical AI literacies, the study offers practical recommendations for integrating AI tools into resume writing instruction to better prepare students for an increasingly digital world.
Article
Full-text available
Almost two years after the initial shock of generative artificial intelligence, as a community of professionals in the teaching of writing we are still wondering how to integrate it into our practice, what implications it may have, and how best to prepare for it. While the conversation has only just begun, we can already draw on experiences, research, and best practices associated with taking advantage of this type of technology. For this thematic issue, we wanted to include the perspective of Dr. Matthew A. Vetter, who, along with his research group, has begun to outline valuable paths and considerations to guide us in the process of adapting to generative artificial intelligence. Dr. Vetter is a Professor of English at Indiana University of Pennsylvania, where he has taught undergraduate and graduate writing and rhetoric for the past eight years. A scholar in writing, rhetoric, and digital communication, Vetter researches how technologies shape writing and writing pedagogy. He is interested in the ideological and epistemological functions of technologies and digital communities and the possibilities for human intervention and praxis within those spaces. Following this agenda, in recent years his attention has turned to generative AI ethics and practices, and his work offers useful pedagogical criteria for integrating generative AI into the teaching of writing. For example, he proposes an update to the writing process as well as heuristic considerations toward an ethics in the use of generative AI. Given the accelerated growth and spread of generative AI, educators in composition face the challenge of integrating this technology into their teaching while navigating a steep learning curve. In this context, what are your thoughts about balancing a situated approach to teaching while keeping in tune with technology affordances and global trends?
This is a very challenging time for teachers of writing, yes, but we should also keep in mind that the sudden widespread availability of generative AI (genAI) provides us an
Article
Full-text available
ChatGPT is an AI tool that has sparked debates about its potential implications for education. We used the SWOT analysis framework to outline ChatGPT's strengths and weaknesses and to discuss its opportunities for and threats to education. The strengths include using a sophisticated natural language model to generate plausible answers, self-improving capability, and providing personalised and real-time responses. As such, ChatGPT can increase access to information, facilitate personalised and complex learning, and decrease teaching workload, thereby making key processes and tasks more efficient. The weaknesses are a lack of deep understanding, difficulty in evaluating the quality of responses, a risk of bias and discrimination, and a lack of higher-order thinking skills. Threats to education include a lack of understanding of context, threatening academic integrity, perpetuating discrimination in education, democratising plagiarism, and declining higher-order cognitive skills. We provide an agenda for educational practice and research in times of ChatGPT.
Article
Full-text available
This empirical study examines ChatGPT as an educational and learning tool. It investigates the opportunities and challenges that ChatGPT provides to the students and instructors of communication, business writing, and composition courses, and it strives to provide recommendations. After conducting 30 theory-based and application-based ChatGPT tests, it is found that ChatGPT has the potential to replace search engines, as it provides accurate and reliable input to students. Regarding opportunities, the study found that ChatGPT provides a platform for students to seek answers to theory-based questions and generate ideas for application-based questions. It also provides a platform for instructors to integrate technology in classrooms and conduct workshops to discuss and evaluate generated responses. Regarding challenges, the study found that ChatGPT, if used unethically by students, may lead to human unintelligence and unlearning. This may also present a challenge to instructors, as the use of ChatGPT negatively affects their ability to differentiate between meticulous and automaton-dependent students, on the one hand, and to measure the achievement of learning outcomes, on the other. Based on the outcome of the analysis, this study recommends that communication, business writing, and composition instructors (1) refrain from making theory-based questions take-home assessments, (2) provide communication and business writing students with detailed case-based and scenario-based assessment tasks that call for personalized answers utilizing critical, creative, and imaginative thinking incorporating lectures and textbook material, (3) enforce submitting all take-home assessments through plagiarism detection software, especially for composition courses, and (4) integrate ChatGPT-generated responses in classes as examples to be discussed in workshops.
Remarkably, this study found that ChatGPT skillfully paraphrases regenerated responses in a way that is not detected by similarity detection software. To maintain their effectiveness, similarity detection software providers need to upgrade their software to avoid such incidents from slipping unnoticed.
Article
Full-text available
Keying in on Ring Fit Adventure as a game of analysis, we trace the new materialist rhetoric and thing-power of gaming assemblage across physical and cultural borders. Our new materialist analysis at once builds on and extends beyond the existing scholarship on the hybridity of gaming procedure and bodily movement already made available through a previous generation of embodied gameplay. We take such materialist theorization a bit further to forward a co-constitutive examination of the game's design mechanism in relation to diverse players’ embodied gaming experience. Based on our multimodal analysis of the gaming procedure and diverse YouTube player video reviews of the game, this article reveals that a pluralistic perspective on embodiment—including linguistic, cultural, and corporeal diversity—has the potential to disrupt the dominant rhetoric of physical wellness. We conclude this article with implications for the design of a more accessible gaming experience that attends to the thing-power in exercise as well as suggestions for the development of robust digital rhetoric practices and pedagogies.
Article
Full-text available
This paper shares results from a pedagogical experiment that assigns undergraduates to “cheat” on a final class essay by requiring their use of text-generating AI software. For this assignment, students harvested content from an installation of GPT-2, then wove that content into their final essay. At the end, students offered a “revealed” version of the essay as well as their own reflections on the experiment. In this assignment, students were specifically asked to confront the oncoming availability of AI as a writing tool. What are the ethics of using AI this way? What counts as plagiarism? What are the conditions, if any, we should place on AI assistance for student writing? And how might working with AI change the way we think about writing, authenticity, and creativity? While students (and sometimes GPT-2) offered thoughtful reflections on these initial questions, actually composing with GPT-2 opened their perspectives more broadly on the ethics and practice of writing with AI. In this paper, I share how students experienced those issues, connect their insights to broader conversations in the humanities about writing and communication, and explain their relevance for the ethical use and evaluation of language models.
Article
Full-text available
We examine Van Wynsberghe and Robbins’ (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins’ (JAMA 25:719-735, 2019) essay nor Formosa and Ryan’s (JAMA 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) for AMAs, and the latter opts for “argumentative breadth over depth”, meaning to provide “the essential groundwork for making an all things considered judgment regarding the moral case for building AMAs” (Formosa and Ryan 2019, pp. 1–2). Although this strategy may benefit their acceptability, it may also detract from their ethical rootedness, coherence, and persuasiveness, characteristics often associated with consolidated ethical traditions. Neo-Aristotelian ethics, backed by a distinctive philosophical anthropology and worldview, is summoned to fill this gap as a standard to test these two opposing claims. It provides a substantive account of moral agency through the theory of voluntary action; it explains how voluntary action is tied to intelligent and autonomous human life; and it distinguishes machine operations from voluntary actions through the categories of poiesis and praxis, respectively. This standpoint reveals that while Van Wynsberghe and Robbins may be right in rejecting the need for AMAs, there are deeper, more fundamental reasons. In addition, despite disagreeing with Formosa and Ryan’s defense of AMAs, their call for a more nuanced and context-dependent approach, similar to neo-Aristotelian practical wisdom, becomes expedient.
Book
Full-text available
Living in a networked world means never really getting to decide in any thoroughgoing way who or what enters your “space” (your laptop, your iPhone, your thermostat . . . your home). With this as a basic frame-of-reference, James J. Brown’s Ethical Programs examines and explores the rhetorical potential and problems of a hospitality ethos suited to a new era of hosts and guests. Brown reads a range of computational strategies and actors, from the general principles underwriting the Transmission Control Protocol (TCP), which determines how packets of information can travel through the internet, to the Obama election campaign’s use of the power of protocols to reach voters, harvest their data, incentivize and, ultimately, shape their participation in the campaign. In demonstrating the kind of rhetorical spaces networked software establishes and the access it permits, prevents, and molds, Brown makes a significant contribution to the emergent discourse of software studies as a major component of efforts in broad fields including media studies, rhetorical studies, and cultural studies.
Article
Full-text available
The fact that real-world decisions made by artificial intelligences (AI) are often ethically loaded has led a number of authorities to advocate the development of “moral machines”. I argue that the project of building “ethics” “into” machines presupposes a flawed understanding of the nature of ethics. Drawing on the work of the Australian philosopher, Raimond Gaita, I argue that ethical dilemmas are problems for particular people and not (just) problems for everyone who faces a similar situation. Moreover, the force of an ethical claim depends in part on the life history of the person who is making it. For both these reasons, machines could at best be engineered to provide a shallow simulacrum of ethics, which would have limited utility in confronting the ethical and policy dilemmas associated with AI.
Article
Full-text available
This article uses a socio-legal perspective to analyze the use of ethics guidelines as a governance tool in the development and use of artificial intelligence (AI). This has become a central policy area in several large jurisdictions, including China and Japan, as well as the EU, focused on here. Particular emphasis in this article is placed on the Ethics Guidelines for Trustworthy AI published by the EU Commission’s High-Level Expert Group on Artificial Intelligence in April 2019, as well as the White Paper on AI, published by the EU Commission in February 2020. The guidelines are reflected against partially overlapping and already-existing legislation as well as the ephemeral concept construct surrounding AI as such. The article concludes by pointing to (1) the challenges of a temporal discrepancy between technological and legal change, (2) the need for moving from principle to process in the governance of AI, and (3) the multidisciplinary needs in the study of contemporary applications of data-dependent AI.
Article
Full-text available
Deep learning techniques are growing in popularity within the field of artificial intelligence (AI). These approaches identify patterns in large-scale datasets and make classifications and predictions, which have been celebrated as more accurate than those of humans. But for a number of reasons, including the nonlinear path from inputs to outputs, there is a dearth of theory that can explain why deep learning techniques work so well at pattern detection and prediction. Claims about “superhuman” accuracy and insight, paired with the inability to fully explain how these results are produced, form a discourse about AI that we call enchanted determinism. To analyze enchanted determinism, we situate it within a broader epistemological diagnosis of modernity: Max Weber’s theory of disenchantment. Deep learning occupies an ambiguous position in this framework. On one hand, it represents a complex form of technological calculation and prediction, phenomena Weber associated with disenchantment. On the other hand, both deep learning experts and observers deploy enchanted, magical discourses to describe these systems’ uninterpretable mechanisms and counter-intuitive behavior. The combination of predictive accuracy and mysterious or unexplainable properties results in myth-making about deep learning’s transcendent, superhuman capacities, especially when it is applied in social settings. We analyze how discourses of magical deep learning produce techno-optimism, drawing on case studies from game-playing, adversarial examples, and attempts to infer sexual orientation from facial images. Enchantment shields the creators of these systems from accountability while its deterministic, calculative power intensifies social processes of classification and control.
Article
Full-text available
Writing studies uses the case study as its primary method and methodology. This article offers a theoretical grounding for case study research and especially internet case study research. It argues that boundaries and spheres of influence are crucial to constructing an effective case study. It advocates for avoiding overstatement and overload when constructing internet case studies. It then discusses the ethics of case studies, focusing on searchability. It concludes by discussing the value of case studies in general.
Article
Full-text available
In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analysed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.
Article
Full-text available
As a composition teacher endeavoring to spark controversial and yet interesting discussions in class, I have been drawn to the recent “monkey selfie” lawsuit (Slotkin, 2017), which productively adds to the theoretical framing of nonhuman authorship in digital media spaces. It started in an Indonesian forest, when a macaque monkey named Naruto took a series of photograph selfies with a camera belonging to British photographer David Slater. The selfie image ended up being uploaded on Wikipedia Commons as a public domain photograph. Citing copyright, Slater asked Wikipedia to remove the image, but was later sued by People for the Ethical Treatment of Animals (PETA), an animal protection organization, for violating the copyright of the monkey. The lawsuit soon swept through social media, culminating in Twitter posts with thousands of retweets by a vlogger named Calum McSwiggan, who condemned PETA for “ruining the photographer” in the “monkey selfie” case (Gladwell, 2017). Even though the case has been settled with Naruto being denied his copyright, the lawsuit has drawn public attention to the issue of animal authorship and copyright. When I first introduced this news story to the students in my composition classes, their reaction was a mixture of surprise and amusement, as if they were trying to say “What? Are you serious?” The story may earn a similar reception from teachers and scholars, too. In the academic sphere, there has not been a shared understanding among postmodern and poststructural theorists of who assumes authorship of a text, i.e., whether authorship is in the hands of the putative author, the reader, or the text itself (Barthes, 1977; Derrida, 1981; Foucault, 1987). Regardless of the disparate takes on the issue, the philosophical debate surrounding authorship has to be extended to nonhumans.
Article
Full-text available
This paper explores Wikipedia bots and problematic information in order to consider implications for cultivating students’ critical media literacy. While we recognize the key role of Wikipedia bots in addressing and reducing problematic information (misinformation and disinformation) on the encyclopedia, it is ultimately reductive to construe bots as merely having benign impacts. In order to understand bots and other algorithms as more than just tools, we turn towards a postdigital theorization of these as ‘agents’ that co-produce knowledge in conjunction with human editors and actors. This paper presents case studies of three specific bots on Wikipedia, including ClueBot NG, AAlertbot, and COIBot, each of which engages in some type of information validation in the encyclopedia. The activities involving these bots, illustrated in these case studies, ultimately support our argument that information validation processes in Wikipedia are complicated by their distribution across multiple human-computer relations and agencies. Despite the programming of these bots for combating problematic information, their efficacy is challenged by social, cultural, and technical issues related to misogyny, systemic bias, and conflict of interest. Studying the function of Wikipedia bots makes space for extending educational models for critical media literacy. In the postdigital era of problematic information, students should be on the alert for how the human and the nonhuman, the digital and the nondigital, interfere and exert agency in Wikipedia’s complex and highly volatile processes of information validation.
Article
Full-text available
This article examines the role that algorithms may play as audiences when teaching writing on the World Wide Web. It argues that introducing the provisional term “algorithmic audience” reflects three prior conceptions of audience, including concrete situations, discourse community, and participatory audiences. It then offers a three-part classroom approach: identifying the biases of those who design algorithms, managing metadata, and anticipating audience response. I argue that the term “algorithmic audience” may help students to write for audiences beyond the instructor from within the confines of the classroom.
Article
Full-text available
Whereas composition studies tends to use ethics and morality interchangeably, these terms may work better when explicitly distinguished, rearticulated as a topic, and kept in heuristic conflict. The more the tension between them is exploited, the closer our approach to a pedagogy not so much ethical as just. © 2017 by the National Council of Teachers of English. All rights reserved.
Article
Full-text available
The author proposes a concept of ethics for the writing course, one derived from a moral theory that is both old and new and one that engages us when we teach such practices as making claims, providing evidence, and choosing metaphors in corollary discussions of honesty, accountability, generosity, intellectual courage, and other qualities. These and similar qualities are what Aristotle called 'virtues,' and they are the subject of that branch of moral philosophy known as 'virtue ethics' today. While the word virtue may sound strange to us today, Duffy argues that the tradition of the virtues has much to offer teachers and students and can clarify what it means, in an ethical sense, to be a 'good writer' in a skeptical, postmodern moment. © 1998-2017 National Council of Teachers of English. All rights reserved in all media.
Article
Full-text available
This essay focuses on new materialist reconfigurations of social theory that alter understandings of agency, identity, subjectivity, and power. This research lends itself to recognizing writing as radically distributed across time and space, and as always entwined with a whole host of others. After overviewing new materialist efforts to draft a robust concept of matter, I explore the value of this work for twenty-first-century writing studies through the lens of acknowledgments, a genre wherein relationality is dramatized.
Article
Translingual approaches to composition promise to nudge the field fully away from outdated concepts of linguistic diversity, replacing judgments of correctness and assumptions about discrete languages with analyses of local, situational negotiations and pragmatic competence. Yet in fully displacing the monolingual “native speaker” with the translingual composer, the approach replaces one linguistic hero with another: a fully competent “user” who shuttles between languages. This article seeks to extend translingualism’s analysis of (metaphorical) language ecologies into the material surroundings of language contact situations. Drawing on scholarship on affect, vital materialism, and material rhetorics, it suggests an empirical reorientation that diffuses attention beyond human language-using rhetors in order to account for shared rhetorical agency.
Article
Since its maiden release into the public domain on November 30, 2022, ChatGPT garnered more than one million subscribers within a week. The generative AI tool ChatGPT took the world by surprise with its sophisticated capacity to carry out remarkably complex tasks. The extraordinary abilities of ChatGPT to perform complex tasks within the field of education have caused mixed feelings among educators, as this advancement in AI seems to revolutionize existing educational praxis. This is an exploratory study that synthesizes recent extant literature to offer some potential benefits and drawbacks of ChatGPT in promoting teaching and learning. Benefits of ChatGPT include, but are not limited to, the promotion of personalized and interactive learning and the generation of prompts for formative assessment activities that provide ongoing feedback to inform teaching and learning. The paper also highlights some inherent limitations of ChatGPT, such as generating incorrect information, biases in training data that may amplify existing biases, and privacy concerns. The study offers recommendations on how ChatGPT could be leveraged to maximize teaching and learning. Policy makers, researchers, educators, and technology experts could work together and start conversations on how these evolving generative AI tools could be used safely and constructively to improve education and support students’ learning.
Book
Reassembling the Social is a fundamental challenge from one of the world’s leading social theorists to how we understand society and the ‘social’. Bruno Latour’s contention is that the word ‘social’, as used by social scientists, has become laden with assumptions to the point where it has become a misnomer. When the adjective is applied to a phenomenon, it is used to indicate a stabilized state of affairs, a bundle of ties that in due course may be used to account for another phenomenon. But Latour also finds the word used as if it described a type of material, in a comparable way to an adjective such as ‘wooden’ or ‘steely’. Rather than simply indicating what is already assembled together, it is now used in a way that makes assumptions about the nature of what is assembled. It has become a word that designates two distinct things: a process of assembling, and a type of material distinct from others. Latour shows why ‘the social’ cannot be thought of as a kind of material or domain, and disputes attempts to provide ‘social explanations’ of other states of affairs. While these attempts have been productive (and probably necessary) in the past, the very success of the social sciences means that they are largely no longer so. At the present stage it is no longer possible to inspect the precise constituents entering the social domain. Latour returns to the original meaning of ‘the social’ to redefine the notion, and allow it to trace connections again. It will then be possible to resume the traditional goal of the social sciences, but using more refined tools. Drawing on his extensive work examining the ‘assemblages’ of nature, Latour finds it necessary to scrutinize thoroughly the exact content of what is assembled under the umbrella of Society.
This approach, a ‘sociology of associations’, has become known as Actor-Network-Theory, and this book is an essential introduction both for those seeking to understand Actor-Network Theory and for those interested in the ideas of one of its most influential proponents.
Article
This article follows up on the conversation about new streams of approaches in technical communication and user experience (UX) design, i.e., design thinking, content strategy, and artificial intelligence (AI), which afford implications for professional practice. By extending such implications to technical communication pedagogy, we aim to demonstrate the importance of paying attention to these streams in our programmatic development and provide strategies for doing so.
Chapter
The European Parliament recently proposed to grant the personhood of autonomous AI, which raises fundamental questions concerning the ethical nature of AI. Can they be moral agents? Can they be morally responsible for actions and their consequences? Here we address these questions, focusing upon, inter alia, the possibilities of moral agency and moral responsibility in artificial general intelligence; moral agency is a precondition for moral responsibility (which is, in turn, a precondition for legal punishment). In the first part of the paper we address the moral agency status of AI in light of traditional moral philosophy, especially Kant’s, Hume’s, and Strawson’s, and clarify the possibility of Moral AI (i.e., AI with moral agency) by discussing the Ethical Turing Test, the Moral Chinese Room Argument, and Weak and Strong Moral AI. In the second part we address the moral responsibility status of AI, and thereby clarify the possibility of Responsible AI (i.e., AI with moral responsibility). These issues would be crucial for AI-pervasive technosociety in the (possibly near) future, especially for post-human society after the development of artificial general intelligence.
Article
We understand sociotechnical systems (STSs) as uniting the social and technical tiers to provide abstractions for capturing how autonomous principals interact with each other. Accountability is a foundational concept in STSs and an essential component of achieving ethical outcomes. In simple terms, accountability involves identifying who can call whom to account and who must provide an accounting of what and when. Although accountability is essential in any application involving autonomous parties, established methods don't support it. We formulate an accountability requirement as one where one principal is accountable to another regarding some conditional expectation. Our metamodel for STSs captures accountability requirements as relational constructs inspired from legal concepts, such as commitments, authorization, and prohibition. We apply our metamodel to a healthcare process and show how it helps address the problems of ineffective interaction identified in the original case study.
Book
In Still Life with Rhetoric, Laurie Gries forges connections between new materialism, actor network theory, and rhetoric to explore how images become rhetorically active in a digitally networked, global environment. Rather than study how an already-materialized “visual text” functions within a specific context, Gries investigates how images often circulate and transform across media, genre, and location at viral rates. A four-part case study of Shepard Fairey’s now iconic Obama Hope image elucidates how images reassemble collective life as they actualize in different versions, enter into various relations, and spark a firework of activity across the globe. While intent on tracking the rhetorical life of a single, multiple image, Still Life with Rhetoric is most concerned with studying rhetoric in motion. To account for an image’s widespread circulation and emergent activities, Gries introduces iconographic tracking, a digital research method for tracing an image’s divergent rhetorical becomings. Yet Gries also articulates a dynamic set of theoretical principles for studying rhetoric as a distributed, generative, and unforeseeable event that is applicable beyond the study of visual rhetoric. With an eye toward futurity (the strands of time beyond a thing’s initial moment of production and delivery), Still Life with Rhetoric intends to be taken up by those interested in visual rhetoric, research methods, and theory. © 2015 by the University Press of Colorado. All rights reserved.
Article
Public rhetoric pedagogy can benefit from an ecological perspective that sees change as advocated not through a single document but through multiple mundane and monumental texts. This article summarizes various approaches to rhetorical ecology, offers an ecological read of the Montgomery bus boycotts, and concludes with pedagogical insights on a first-year composition project emphasizing rhetorical ecologies.
Book
Humanity has sat at the center of philosophical thinking for too long. The recent advent of environmental philosophy and posthuman studies has widened our scope of inquiry to include ecosystems, animals, and artificial intelligence. Yet the vast majority of the stuff in our universe, and even in our lives, remains beyond serious philosophical concern. This book develops an object-oriented ontology that puts things at the center of being—a philosophy in which nothing exists any more or less than anything else, in which humans are elements but not the sole or even primary elements of philosophical interest. And unlike experimental phenomenology or the philosophy of technology, this book’s alien phenomenology takes for granted that all beings interact with and perceive one another. This experience, however, withdraws from human comprehension and becomes accessible only through a speculative philosophy based on metaphor.
Article
Science studies has often been against the normative dimension of epistemology, which made a naturalistic study of science impossible. But this is not to say that a new type of normativity cannot be detected at work in science studies. This is especially true in the second wave of studies dealing with the body, which has aimed at criticizing the physicalization of the body without falling into the various traps of a phenomenology simply added to a physical substrate. This article explores the work of Isabelle Stengers and Vinciane Despret in that respect, and shows how it can be used to rethink the articulation between the various levels that make up a body.
Article
Both first-order creative, intuitive thinking and second-order critical thinking can and should be encouraged in writing instruction. The first helps generate ideas, and the second is useful in refining expression. The two kinds of thinking enhance different writing skills and can be mutually reinforcing. (MSE)
Baum, J., & Villasenor, J. (2023). The politics of AI: ChatGPT and political bias. Brookings, 8 May. Retrieved December 13, 2023, from https://www.brookings.edu/articles/the-politics-of-ai-chatgpt-and-political-bias/.
Bolukbasi, T., Chang, K.-W., Zou, J., Saligrama, V., & Kalai, A. (2016). Quantifying and reducing stereotypes in word embeddings. arXiv preprint arXiv:1606.06121.
Dobrin, S. I. (Ed.). (2015). Writing posthumanism, posthuman writing. Parlor Press.
Harman, G. (2018). Object-oriented ontology: A new theory of everything. Penguin.
MLA-CCCC Joint Task Force on Writing and AI. (2023). Overview of the issues, statement of principles, and recommendations. https://hcommons.org/app/uploads/sites/1003160/2023/07/MLA-CCCC-Joint-Task-Force-on-Writing-and-AI-Working-Paper-1.pdf.
Niklas, J., & Lina, D. (2020). European artificial intelligence policy: Mapping the institutional landscape. Cardiff: Data Justice Lab, Cardiff University.
OpenAI. (2023a). ChatGPT (May 24 version) [Large language model]. https://chat.openai.com/chat.
Palmquist, M. (2018). The Bedford researcher (6th ed.). Bedford/St. Martin's.
Perrigo, B. (2023). Exclusive: The $2 per hour workers who made ChatGPT safer. Time, January 18. https://time.com/6247678/openai-chatgpt-kenya-workers/.