Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way
Abstract
In this book, the author examines the ethical implications of Artificial Intelligence systems as they integrate and replace traditional social structures in new sociocognitive-technological environments. She discusses issues related to the integrity of researchers, technologists, and manufacturers as they design, construct, use, and manage artificially intelligent systems; formalisms for reasoning about moral decisions as part of the behavior of artificial autonomous systems such as agents and robots; and design methodologies for social agents based on societal, moral, and legal values.
Throughout the book the author discusses related work, conscious of both classical, philosophical treatments of ethical issues and the implications in modern, algorithmic systems, and she combines regular references and footnotes with suggestions for further reading. This short overview is suitable for undergraduate students, in both technical and non-technical courses, and for interested and concerned researchers, practitioners, and citizens.
... While these tools enhance efficiency and accessibility, they may inadvertently discourage students from engaging in the deep, reflective thinking necessary for mastering challenging concepts (Luckin et al., 2016). Similarly, in professional settings, reliance on AI for decision-making can erode critical thinking skills, particularly in industries that demand rigorous analysis and independent judgment, such as healthcare and finance (Dignum, 2020). Moreover, cognitive offloading through AI tools raises questions about long-term cognitive development and resilience. ...
... The implications of these findings extend to societal trust in AI and its governance. Studies by Winfield (2019) and Dignum (2020) emphasise that fostering transparency and accountability in AI systems is critical to alleviating public fears. For instance, the adoption of explainable AI (systems capable of articulating the rationale behind their decisions) is increasingly seen as a necessary step toward building trust. ...
... Enhancing explainability and fostering public education about AI's capabilities and limitations are critical for mitigating fears and promoting responsible AI adoption (Gerlich, 2024a; Zarsky, 2016). Dignum (2020) advocates for the integration of ethical AI frameworks, emphasising the necessity of cultural and institutional adaptations to overcome implementation barriers. Public education campaigns focusing on AI's limitations and potential can empower individuals to engage critically with AI systems. ...
The rapid proliferation of artificial intelligence (AI) across societal domains marks a pivotal transformation in how individuals interact with technology, solve problems, and perceive the world. AI’s ability to augment decision-making, automate processes, and provide tailored services is lauded as a cornerstone of progress. However, this development also triggers significant concerns regarding its cognitive, social, and ethical implications. While some embrace AI’s promise of innovation, others express anxieties rooted in fears of job displacement, privacy erosion, and diminished human cognition. This paper synthesises findings from two studies—“Public Anxieties About AI” and “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking”—to explore these challenges and their implications for critical thinking, business, and society at large. In doing so, it integrates extensive references from these works to substantiate the analysis.
... As a historical legacy, I argue that the coloniality of language will also predictably affect GenAI. These limitations demand critical literacy approaches (Leander & Burriss, 2020) to ensure a responsible use of AI (Dignum, 2019). In a context where discursive dehumanization is increasingly becoming more subtle (Kteily & Landry, 2022), there is a need for linguists, language teachers, and learners to become more aware of it. ...
... Applying LCT specialization codes to GenAI explanations of linguistic identities reveals sociolinguistic hierarchies that rank languages from less valuable to more valuable. These findings contribute to research on responsible AI (Boxleitner, 2023; Dignum, 2019) by offering an interdisciplinary framework to critically approach AIGC. Specifically, the study informs responsible AI research from the sociology of knowledge and decolonial thought. ...
... They call for critical toolkits such as the one demonstrated in this study to be adapted as required and subsequently adopted in language learning settings. In addition, they underline the central role of user awareness and critical literacy (Leander & Burriss, 2020) for achieving responsible AI goals (Dignum, 2019), inviting language teachers, learners, and career advisors to adopt critical approaches in their engagement with GenAI. ...
The growing implementation of Generative AI (GenAI) in education has implications for the representation of knowledge and identity across languages. In a context where content biases have been reported in AI-generated content, it becomes relevant to interrogate the ways in which AI technologies represent different linguistic identities. This article conducts a systematic analysis of AI-generated content to identify the potential discursive strategies that can contribute to the perpetuation of existing sociolinguistic hierarchies. Data for this study consist of a set of GenAI explanations of assorted linguistic identities comprising dominant and non-dominant languages. The method combines specialization codes from the sociology of knowledge with discourse analysis. Specialization codes are composed of two axes with a differing degree of emphasis (+/−) on epistemic and social relations (ER/SR). This tool is useful for understanding explanations because it focuses on what sort of information is considered legitimate knowledge and what kinds of knowers are considered valid. The analysis of epistemic and social relations reveals a sociolinguistic hierarchy articulated across three definitory aspects of identity: relationship to the world, structuring across time and space, and possibilities for the future.
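The two-axis scheme described above can be pictured as a plane whose quadrants yield the four specialization codes of Legitimation Code Theory. The following toy sketch illustrates that mapping; the code names follow Maton's LCT, but the function itself is a hypothetical illustration, not the analytical tool used in the study.

```python
# Illustrative sketch of the LCT specialization-code plane: each combination
# of (+/-) emphasis on epistemic relations (ER) and social relations (SR)
# corresponds to one of four codes. Hypothetical helper, for illustration only.

def specialization_code(er: str, sr: str) -> str:
    """Map (+/-) emphasis on ER and SR to a specialization code."""
    codes = {
        ("+", "-"): "knowledge code",   # what you know is what matters
        ("-", "+"): "knower code",      # who you are is what matters
        ("+", "+"): "elite code",       # both specialist knowledge and disposition
        ("-", "-"): "relativist code",  # neither is strongly emphasised
    }
    return codes[(er, sr)]

print(specialization_code("+", "-"))  # knowledge code
print(specialization_code("-", "+"))  # knower code
```

An explanation that legitimates a language through grammatical complexity would sit in the knowledge-code quadrant (ER+, SR−), while one that legitimates it through the identity of its speakers would sit in the knower-code quadrant (ER−, SR+).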
... Since user satisfaction is a significant factor that determines the future success and popularity of AI systems, this research is intended to shed light on how human-centred design can be incorporated into AI design and development. Furthermore, this research discusses the ethical issues that AI systems raise, including bias and explainability, as well as how HCAI principles can address these issues (Dignum, 2019; Russell & Norvig, 2016). ...
... For instance, studies in the application of AI in the education sector revealed that self-adaptive learning platforms sometimes replicated the inequality problem because the content adaptation was based on a presumption of equal potentiality among learners (Chen, 2020). This points to the importance of augmenting AI systems with monitoring and supervision to guarantee that the systems are both equitable and accountable (Dignum, 2019). ...
... Although HCAI fosters trust through end-user control and information transparency, the growing use of AI-based systems has implications for the collection, storage, and processing of personal information. With the increasing use of AI systems, high attention should be paid to privacy concerns and the protection of personal data during the development of AI systems, while also implementing consent and anonymization tools (Dignum, 2019). ...
This research examines Human-Centered AI (HCAI) and its contribution to user involvement and user experience with intelligent systems in the healthcare, finance, and education industries. HCAI focuses on deploying AI in a fashion that considers human abilities, values, and emotions in a way that enhances convenience, adaptability, and reliability. The study adopts a mixed qualitative and quantitative methodology, conducting surveys and interviews with users and AI developers to examine the effects of the human-centered design attributes of explanation and user control on user satisfaction. The results imply that AI systems which follow the principles of human-centered design enjoy greater acceptance and endorsement among users. In particular, participants stressed the need for AI tools to articulate their decision-making processes effectively and to provide some degree of manual control. At the same time, the analysis uncovered persistent ethical problems, such as bias, privacy, and the reliability of AI systems, which need to be addressed further. According to the findings, applying the HCAI approach can greatly boost users' experience of and confidence in AI devices. Practical implications relate to addressing bias and fairness, enhancing explainability, feedback, and privacy, and integrating ethics into these systems. The paper also presents further directions for the application of ethical artificial intelligence.
... These considerations include supporting AI systems that prioritize human values, provide openness and interpretability and consider AI technology's social and ethical drawbacks. However, ethical considerations also extend to ensuring that AI systems respect data privacy, human rights, and accountability (Dignum, 2019). In addition, UNESCO emphasized the urgent need to provide fair access to AI technology and eliminate the inherent biases that can potentially hinder its equitable use. ...
... In addition, this study asserted the need for policies and guidelines at the level of institutions to regulate the use of these emerging technologies for teachers and students alike. These findings align with those of previous studies (Amado et al., 2024;Dignum, 2019;Pedro et al., 2019;UNESCO, 2021). Surprisingly, this study showed that GAI tools limited students' creativity because they made students dependent on these smart tools to create content. ...
This study aims to explore the implications of using generative AI (GAI) tools in teaching and learning practices in higher education settings. This exploratory study employs a mixed-methods approach. Data were collected through focus-group discussions, participants' reflections, and questionnaires. The participants were 65 undergraduate students enrolled at a university. The GAI tools were integrated into the course assignments. The study found that most students chose to use GAI tools alongside traditional tools to perform their assignments and exhibited a positive attitude towards using GAI tools to accomplish their tasks. The most significant impacts of integrating these emerging-technology tools in the course included a reduction in the time needed to complete the assignments as well as efficiency and creativity in producing different types of interactive digital content. However, notable challenges were identified regarding the quality and authenticity of the new content. In addition, the findings revealed significant differences between the pre- and post-test mean scores when using GAI tools in students' learning, further reinforcing the effectiveness of these tools. Finally, it is necessary to develop clear policies and guidelines for using GAI in higher education.
... How can these capacities be implemented? Moral AI indeed aims to create mechanisms for embedding ethical behavior in artificial agents, considering the technical tools used to achieve this (Dignum, 2019). In addition to this classification, other studies (Bickley & Torgler, 2023; Dignum, 2019) pursue ethics in AI with a focus on three scopes: ...
The increasing influence of artificial intelligence (AI) on most aspects of life has, despite its advantages in providing a better quality of life for humans, led to numerous worries. Mechanisms are therefore required to enhance individuals' trust in computer systems and prevent adverse autonomous behaviors by intelligent agents. Considering the cognitive capacities of humans and drawing on them in intelligent systems is an undeniable principle and a useful shortcut. Accordingly, morality and ethics in AI have received attention in theory and practice over recent decades. This study investigates attempts to develop artificial moral agents from an engineering viewpoint that focuses on technical aspects. Current challenges and gaps are described, and some recommendations are proposed for those interested in further studies in this field.
... It should also make periodic algorithmic-bias audits and evaluations of the AI systems used in mediation mandatory, in order to identify and correct possible drifts. Again, we draw on Virginia Dignum, who in her book Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way addresses the development of specific regulations and ethical practices for implementing AI, highlighting the importance of data protection, fairness, informed consent, and algorithmic-bias audits (Dignum, 2019). ...
This article examines the potential of artificial intelligence (AI)-assisted mediation to address and resolve conflicts in critical areas, mainly education, health, and security. The objective is to explore how integrating AI tools into mediation processes can improve the effectiveness, accessibility, and outcomes of peacemaking efforts. It emphasizes that AI-assisted mediation offers a promising approach to conflict resolution in the essential fields mentioned. It also highlights that, to maximize its effectiveness and guarantee fairness, it is essential to address the multidisciplinary ethical and practical challenges associated with the technology: as collaboration among mediators, technologists, and field professionals advances, it becomes key to developing solutions that balance technological innovation, particularly AI, with human needs and fundamental rights.
... 6.2.1 Accountability and Responsibility Boundary in AI Responses. Researchers have outlined principles for responsible AI, emphasizing fairness, explainability, and accountability [8,29], which in clinical contexts extend to opacity, responsibility, and reliability [86]. Building on these foundations, our study demonstrates practical implementations of responsible AI in RPM. ...
Cancer surgery is a key treatment for gastrointestinal (GI) cancers, a group of cancers that account for more than 35% of cancer-related deaths worldwide, but postoperative complications are unpredictable and can be life-threatening. In this paper, we investigate how recent advancements in large language models (LLMs) can benefit remote patient monitoring (RPM) systems through clinical integration by designing RECOVER, an LLM-powered RPM system for postoperative GI cancer care. To closely engage stakeholders in the design process, we first conducted seven participatory design sessions with five clinical staff and interviewed five cancer patients to derive six major design strategies for integrating clinical guidelines and information needs into LLM-based RPM systems. We then designed and implemented RECOVER, which features an LLM-powered conversational agent for cancer patients and an interactive dashboard for clinical staff to enable efficient postoperative RPM. Finally, we used RECOVER as a pilot system to assess the implementation of our design strategies with four clinical staff and five patients, providing design implications by identifying crucial design elements, offering insights on responsible AI, and outlining opportunities for future LLM-powered RPM systems.
... It is in this context that the expanding field of responsible AI in journalism emphasises the necessity of ethical and accountable AI systems development, deployment, and usage inside newsrooms (Trattner et al. 2022). Rather than exclusively focusing on the faults and foibles of how media workers use these systems, the concept of responsible AI demands that AI-based systems and the companies that develop and deploy them (see Kak, Myers West, and Whittaker 2023) adhere to societal values and human rights and prevent detrimental outcomes (Dignum 2019). Particularly for journalists and news organisations, for whom AI brings forth both unique challenges and opportunities (Fridman, Krøvel, and Palumbo 2023), there is a pressing demand for further exploration into the creation of ethical media technology guardrails. ...
The effective adoption of responsible AI practices in journalism requires a concerted effort to bridge different perspectives, including technological, editorial, and managerial. Among the many challenges that could impact information sharing around responsible AI inside news organisations are knowledge silos, where information is isolated within one part of the organisation and not easily shared with others. This study examines how knowledge silos might affect the adoption of responsible AI practices in journalism through a cross-case study of four Dutch media outlets. We examine individual and organisational barriers to AI knowledge sharing and the extent to which knowledge silos could impede the operationalisation of responsible AI initiatives inside these newsrooms. To address this question, we conducted 14 semi-structured interviews with a strategic sample of editors, managers, and journalists at de Telegraaf, de Volkskrant, NOS, and RTL Nederland. The interviews aimed to uncover insights into the existence of knowledge silos, their effects on responsible AI practice adoption, and the organisational practices influencing these dynamics. Our results emphasise the importance of creating better structures for sharing information on AI across all layers of news organisations and highlight the need for research on knowledge silos as an impediment to responsible AI production.
... This marks a departure from the traditional top-down approach of translating specifications into formal algorithms. This evolution is inherently tied to the challenge of developing responsible AI (Dignum, 2019), which demands integrating social, ethical, and contextual considerations beyond purely technical imperatives. The developer's role has evolved from designing deterministic algorithms to mediating between human intentions, ethical considerations, and AI capabilities. ...
Generative Artificial Intelligence (GenAI) represents a fundamental shift in AI development, moving from rule-based systems to neural networks capable of creating novel content and solving complex problems through pattern recognition and contextual understanding. This evolution challenges traditional Computer Science (CS) paradigms, as evidenced by innovations in large language models and diffusion-based image generation. This paper investigates how GenAI's emergence affects education and research in computer science and related fields. Through White's cultural model—examining technological, societal, and institutional dimensions—we analyse how GenAI's capabilities diverge from traditional CS approaches in both theory and practice. Our research reveals specific challenges for higher education, including the need to teach contextual reasoning, handle emergent behaviors, and develop adaptive problem-solving skills. We propose educational strategies such as project-based learning with GenAI tools and cross-disciplinary integration. These recommendations aim to establish GenAI as a distinct academic discipline while preparing students and researchers for its increasing role in scientific and professional practices.
... A key principle that underpins implementation frameworks is Responsible AI, "the practice of developing, using, and governing AI in a human-centered way to ensure that AI is worthy of being trusted and adheres to fundamental human values" (Vassilakopoulou, 2020). It signifies the development of intelligent systems that maintain fundamental human values to ensure human flourishing and well-being in a sustainable world (Dignum, 2019). AI systems should also contribute toward global sustainability challenges by ensuring effective computational models through energy-aware solutions and greener data centers and promoting AI use to help achieve sustainability goals (Chatterjee & Rao, 2020). ...
Artificial Intelligence (AI) has emerged as a transformative technology with the potential to revolutionize various sectors, from healthcare to finance, education, and beyond. However, successfully implementing AI systems remains a complex challenge, requiring a comprehensive and methodologically sound framework. This paper contributes to this challenge by introducing the Trustworthy, Optimized, Adaptable, and Socio-Technologically harmonious (TOAST) framework. It draws on insights from various disciplines to align technical strategy with ethical values, societal responsibilities, and innovation aspirations. The TOAST framework is a novel approach designed to guide the implementation of AI systems, focusing on reliability, accountability, technical advancement, adaptability, and socio-technical harmony. By grounding the TOAST framework in healthcare case studies, this paper provides a robust evaluation of its practicality and theoretical soundness in addressing operational, ethical, and regulatory challenges in high-stakes environments, demonstrating how adaptable AI systems can enhance institutional efficiency, mitigate risks like bias and data privacy, and offer a replicable model for other sectors requiring ethically aligned and efficient AI integration.
... This mediating role of fairness is particularly relevant in educational contexts, where the ethicality of AI systems is closely tied to their ability to deliver equitable and unbiased outcomes [65]. Thus, this study suggests that ... and justice in AI interactions [76]. Changes in Teacher-Student Dynamics (CTSD), the fourth variable, are measured through four items that consider the reduction in meaningful face-to-face interactions due to the use of ChatGPT, the facilitation of administrative tasks by ChatGPT allowing more time for student engagement, the negative impact of ChatGPT reliance on the development of students' social skills, and the belief that AI technologies like ChatGPT should complement rather than replace teacher-student interactions. ...
This study delves into the ethical implications of implementing ChatGPT in Cambodian educational settings chosen for their unique pedagogical challenges in integrating AI tools. The research explores the use of ChatGPT as a supplementary educational tool to create study materials, facilitate discussions, and provide student feedback. Data from 297 students and teachers in various Cambodian educational institutions were collected through structured questionnaires and analyzed using partial least squares structural equation modeling (PLS-SEM). The study systematically investigates the influence of data privacy concerns, perceived bias, fairness, and teacher-student dynamics on the perceived ethicality and subsequent adoption of ChatGPT. The results show that concerns about data privacy and perceived bias significantly and negatively impact ethical perceptions. However, fairness is also a mediating factor in mitigating these adverse effects. For instance, when AI tools provide equitable support, concerns about bias tend to diminish, thereby improving ethical perception. Furthermore, the reduction in face-to-face interactions, including personalized guidance, spontaneous discussions, and non-verbal cues, negatively affects the perceived ethicality of AI tools by undermining trust and reducing meaningful human connections. These insights provide practical recommendations for educational institutions to ensure responsible and equitable integration of AI technologies, ultimately supporting an ethically sound and effective learning environment.
... This contrasts with economics-focused applications, where privacy remains essential but is balanced with other ethical considerations like transparency in auditing contexts. In healthcare, protecting user data aligns with legal mandates like HIPAA, making privacy a non-negotiable dimension, while in economic applications, transparency takes precedence due to regulatory oversight needs [53]. ...
Generative AI technologies, particularly Large Language Models (LLMs), have transformed numerous domains by enhancing convenience and efficiency in information retrieval, content generation, and decision-making processes. However, deploying LLMs also presents diverse ethical challenges, and their mitigation strategies remain complex and domain-dependent. This paper aims to identify and categorize the key ethical concerns associated with using LLMs, examine existing mitigation strategies, and assess the outstanding challenges in implementing these strategies across various domains. We conducted a systematic mapping study, reviewing 39 studies that discuss ethical concerns and mitigation strategies related to LLMs. We analyzed these ethical concerns using five ethical dimensions that we extracted based on various existing guidelines, frameworks, and an analysis of the mitigation strategies and implementation challenges. Our findings reveal that ethical concerns in LLMs are multi-dimensional and context-dependent. While proposed mitigation strategies address some of these concerns, significant challenges remain. Our results highlight that ethical issues often hinder the practical implementation of the mitigation strategies, particularly in high-stakes areas like healthcare and public governance; existing frameworks often lack adaptability, failing to accommodate evolving societal expectations and diverse contexts.
... As AI primarily is developed by computer scientists and engineers within academia, and by industrial developers, in the following we argue that these actors carry a responsibility to engage with the values of communities from which their models draw material and data (Dignum, 2019). This requires an interdisciplinary approach, and especially integrating ethnographic perspectives to engage with the people involved: both actors within the community and actors outside it, such as researchers, developers, and company employees. ...
... AI has the potential to revolutionise CALL by offering personalised and adaptive learning experiences that can accelerate language acquisition (Kohnke et al., 2023). However, its rapid advancement also raises significant concerns that must be addressed to ensure that it is used responsibly and ethically (Dignum, 2019). Addressing these challenges requires rigorous testing, continuous improvement, responsible deployment practices, content moderation, data privacy protection and fact-checking. ...
... Cognitive continuity, achieved through neural mapping technologies, ensures the digitization of complex thought patterns and decision-making processes [11]. Emotional simulation, as detailed by Hanson et al. [12], leverages algorithms to replicate empathy and social interaction, bridging the gap between humans and robots. Additionally, advancements in sensory perception allow robots to exceed human limitations, processing tactile, auditory and visual stimuli with unprecedented precision [13]. ...
The preservation of human essence in robotics represents a groundbreaking confluence of neuroscience, artificial intelligence and advanced sensory technologies. This article explores how cognitive continuity, emotional simulation, sensory perception and memory preservation contribute to maintaining individuality and human identity within robotic entities. Neural mapping digitizes thought patterns, algorithms replicate emotional responses and advanced sensors emulate human senses. Ethical considerations surrounding consent and equitable access are crucial to this transformative journey. By enabling humanity to transcend biological limitations, this approach redefines identity and legacy in a technologically evolved world.
... According to Dignum (2017, 2019, 2020), this responsible approach to AI must be supported by ethical attributes and human values that, at a minimum, reflect the following basic principles: 1) responsibility, 2) transparency, and 3) accountability. For Dignum (2020), such principles must also frame the entire sociotechnical system of AI. ...
This conceptual study, organized as a literature survey, aims to reflect on and critically observe the impacts and transformations that artificial intelligence (AI), supported by algorithmic systems and machine learning, has brought to sociocultural relations and processes in contemporary Brazil. It also outlines the concept of responsible artificial intelligence, emphasizing the key role of social participation in the regulatory constructions encompassing the development, application, and social uses of AI. As a result, this study provides an informative theoretical and technical framework that signals and discusses some points of attention and recommendations on the expressions of AI in Brazil in a critical and interdisciplinary way. These recommendations can inform and support future interventions and research on the topics in focus.
Keywords: responsible AI; communication; social participation; regulation.
... Several other research areas, like AI accountability [45,46], responsibility [47,48], or explainability [49], are closely connected to the pursuit of trustworthiness. These discussions also fueled the recently passed AI regulations, such as the EU AI Act [50] as well as the United States' California initiatives [51] and presidential executive order [52]. ...
As artificial intelligence (AI) becomes integral to the economy and society, communication gaps between developers, users, and stakeholders hinder trust and informed decision-making. High-level AI labels, inspired by frameworks like EU energy labels, have been proposed to make the properties of AI models more transparent. Without requiring deep technical expertise, they can inform on the trade-off between predictive performance and resource efficiency. However, the practical benefits and limitations of AI labeling remain underexplored. This study evaluates AI labeling through qualitative interviews along four key research questions. Based on thematic analysis and inductive coding, we found a broad range of practitioners to be interested in AI labeling (RQ1). They see benefits for alleviating communication gaps and aiding non-expert decision-makers, though limitations, misunderstandings, and suggestions for improvement were also discussed (RQ2). Compared to other reporting formats, interviewees positively evaluated the reduced complexity of labels, which increases overall comprehensibility (RQ3). Trust was influenced most by usability and the credibility of the responsible labeling authority, with mixed preferences for self-certification versus third-party certification (RQ4). Our insights highlight that AI labels pose a trade-off between simplicity and complexity, which could be resolved by developing customizable and interactive labeling frameworks to address diverse user needs. Transparent labeling of resource efficiency also nudged interviewee priorities towards paying more attention to sustainability aspects during AI development. This study validates AI labels as a valuable tool for enhancing trust and communication in AI, offering actionable guidelines for their refinement and standardization.
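The energy-label analogy lends itself to a small sketch. The following is a minimal, hypothetical illustration of how such a label might condense two metrics into letter grades; the thresholds, field names, and grading scheme are all assumptions for illustration, not the study's actual instrument.

```python
from dataclasses import dataclass

# Hypothetical AI label in the spirit of EU energy labels: it condenses
# predictive performance and resource efficiency into letter grades A-E.
# All thresholds and field names below are illustrative assumptions.

GRADES = ["A", "B", "C", "D", "E"]

def to_grade(value, thresholds):
    """Map a metric to a letter grade given descending cutoffs."""
    for grade, cutoff in zip(GRADES, thresholds):
        if value >= cutoff:
            return grade
    return GRADES[-1]

@dataclass
class AILabel:
    model_name: str
    accuracy: float            # predictive performance, 0..1
    kwh_per_1k_queries: float  # energy use as a resource-efficiency proxy

    def performance_grade(self):
        return to_grade(self.accuracy, [0.95, 0.90, 0.80, 0.70, 0.0])

    def efficiency_grade(self):
        # Lower energy use is better, so grade the negated value.
        return to_grade(-self.kwh_per_1k_queries,
                        [-0.1, -0.5, -1.0, -2.0, float("-inf")])

label = AILabel("demo-classifier", accuracy=0.92, kwh_per_1k_queries=0.4)
print(label.performance_grade(), label.efficiency_grade())  # B B
```

Such a scheme makes the performance/efficiency trade-off visible at a glance, which is exactly the communication gap the interviewed practitioners wanted labels to close.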
... Concerns regarding privacy and data security, openness in algorithmic decision-making, fairness in targeting, and accountability for AI-driven decisions are particularly significant. The study by [59] emphasises the ethical issues raised by AI in marketing, underlining the significance of responsible AI governance and ethical norms. Similarly, [33] examine the ethical implications of AI and machine learning in a variety of disciplines, underlining the need to anticipate ethical issues. ...
The use of AI-driven personalisation methods has become a revolutionary force in the quickly changing world of digital marketing and e-commerce. This chapter offers a thorough analysis of how AI-driven personalisation methods are revolutionising digital client targeting. To set the context for this significant influence, the chapter provides an overview of the importance of consumer targeting in e-commerce and digital marketing, underlining its critical role in attracting and retaining customers. To explore the difficulties organisations encounter in this endeavour, the research emphasises the shortcomings of conventional approaches in managing massive amounts of data and adjusting to shifting consumer preferences. The main focus is on AI-driven personalisation methods: the chapter analyses these methods in depth, including chatbots, machine learning, recommender systems, and natural language processing (NLP). It also describes the effects of AI-driven personalisation on product sales, including its part in boosting consumer loyalty, decreasing cart abandonment, and raising conversion rates. Incorporating a comprehensive literature review, this is recent research in the area. Personalised suggestions, predictive analytics for customer segmentation, natural language processing for content personalisation, and customer churn prediction with retention tactics are covered in detail. The work also addresses the important ethical issues that come with using AI for customer targeting: openness, data protection, informed consent, bias reduction, user control, and long-term customer value are examined in light of their importance and effect on consumer trust and regulatory compliance.
This study offers a comprehensive look at AI-driven personalisation in e-commerce and digital marketing, which must be investigated in the constantly evolving digital landscape while taking opportunities and ethical issues into account.
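One of the personalisation techniques the chapter surveys, recommender systems, can be illustrated with a minimal sketch of item-based collaborative filtering. The toy purchase data and the cosine co-occurrence measure below are illustrative assumptions, not the chapter's method.

```python
from math import sqrt

# Toy purchase history: which items each customer has bought.
# Entirely hypothetical data for illustration.
purchases = {
    "alice": {"shoes", "socks"},
    "bob":   {"shoes", "hat"},
    "carol": {"socks", "hat", "scarf"},
}

def cosine(a, b):
    """Cosine similarity between two items' sets of buyers."""
    buyers_a = {u for u, items in purchases.items() if a in items}
    buyers_b = {u for u, items in purchases.items() if b in items}
    if not buyers_a or not buyers_b:
        return 0.0
    return len(buyers_a & buyers_b) / sqrt(len(buyers_a) * len(buyers_b))

def recommend(user):
    """Suggest the unowned item most similar to the user's past purchases."""
    owned = purchases[user]
    catalog = set().union(*purchases.values()) - owned
    scores = {item: sum(cosine(item, o) for o in owned) for item in catalog}
    return max(scores, key=scores.get)

print(recommend("alice"))  # hat
```

Production recommenders add matrix factorisation, implicit feedback, and real-time updating, but the core idea of scoring unseen items by similarity to a customer's history is the same.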
... But how do you make sure that AI systems make ethical decisions, in particular, in high stakes situations? [39]. ...
In this study, we integrate the implications of artificial intelligence integration into the global workforce from both optimistic and pessimistic scenarios and suggest how we should and could pragmatically adapt to such changes. We seek to understand how artificial intelligence technologies are changing employment patterns, required skills, and organisational structure using a systematic review of current literature and emerging trends. We find that while artificial intelligence holds the promise of lifting productivity and allowing the rise of new industries, it also threatens to displace jobs and polarise the skill set of the remaining workforce. Several key factors that will drive the impact of artificial intelligence on the workforce are identified in the study, including the rate of technological adoption, the evolution of human-artificial intelligence hybrid workflows, and the resultant efficiency of educational and regulatory responses. The evidence suggests that workforce adaptation to the new environment will demand a balanced process combining continuous learning endeavours, sound regulatory frameworks, and amplified global cooperation. Rather than abrupt displacement, our findings indicate that artificial intelligence will lead to a gradual transformation of work and to complementary human-artificial intelligence collaboration, which yields the most successful outcomes. This paper contributes to a growing body of literature on technological unemployment by developing a framework for comprehending and managing artificial intelligence in the workforce and by highlighting the urgent need for policy measures that proactively mitigate the social impact of job displacement and strategic responses that will acclimatise the labor force for a fair transition in the labor market.
... Frameworks for developing responsible AI based on 10 principles, namely well-being, respect for autonomy, privacy and intimacy, solidarity, democratic participation, equity, diversity inclusion, prudence, responsibility, and sustainable development, are elucidated (Dignum, 2017, 2019a; Liu et al., 2022). Explainable AI is a suite of algorithmic techniques generating high-performance, explainable, and trustworthy models (Adadi & Berrada, 2018; Kaur et al., 2022; Li et al., 2021; Zou & Schiebinger, 2018). Trustworthy AI ...
The widespread and rapid diffusion of artificial intelligence (AI) into all types of organizational activities necessitates the ethical and responsible deployment of these technologies. Various national and international policies, regulations, and guidelines aim to address this issue, and several organizations have developed frameworks detailing the principles of responsible AI. Nevertheless, the understanding of how such principles can be operationalized in designing, executing, monitoring, and evaluating AI applications is limited. The literature is disparate and lacks cohesion, clarity, and, in some cases, depth. Consequently, this scoping review aims to synthesize and critically reflect on the research on responsible AI. Based on this synthesis, we developed a conceptual framework for responsible AI governance (defined through structural, relational, and procedural practices), its antecedents, and its effects. The framework serves as the foundation for an agenda for future research and for critical reflection on the notion of responsible AI governance.
... Therefore, GenAI systems need to be fair, explainable, and accountable to reduce model behavior risks and provide insight into what occurs inside the algorithmic black box (Jovanovic & Campbell, 2022). These principles are familiar from previous AI ethics guidelines, which tend to emphasize fairness, accountability, and transparency as the key principles (Dignum, 2019; Jobin et al., 2019; Mirbabaie et al., 2022). However, advanced AI chatbots need to meet additional expectations, such as dealing with misinformation, malicious uses, unintended unethical behaviors, chatbots being mistaken for humans, and environmental inefficiency and harms (Jovanovic & Campbell, 2022; Zhuo et al., 2023). ...
This scoping review develops a conceptual synthesis of the ethics principles of generative artificial intelligence (GenAI) and large language models (LLMs). In regard to the emerging literature on GenAI, we explore 1) how established AI ethics principles are presented and 2) what new ethical principles have surfaced. The results indicate that established ethical principles continue to be relevant for GenAI systems but their salience and interpretation may shift, and that there is a need to recognize new principles in these systems. We identify six GenAI ethics principles: 1) respect for intellectual property, 2) truthfulness, 3) robustness, 4) recognition of malicious uses, 5) sociocultural responsibility, and 6) human-centric design. Addressing the challenge of satisfying multiple principles simultaneously, we suggest three meta-principles: categorizing and ranking principles to distinguish fundamental from supporting ones, mapping contradictions between principle pairs to understand their nature, and implementing continuous monitoring of fundamental principles due to the evolving nature of GenAI systems and their applications. To conclude, we suggest increased research emphasis on complementary ethics approaches to principlism, ethical tensions between different ethical viewpoints, end-user perspectives on the explainability and understanding of GenAI, and the salience of ethics principles to various GenAI stakeholders.
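The review's second meta-principle, mapping contradictions between principle pairs, can be sketched as a simple lookup structure. The six principles are taken from the abstract above; the recorded tensions are illustrative assumptions, not the review's findings.

```python
# Sketch of a principle-tension map for the six GenAI ethics principles
# identified in the review. The example tension entries are hypothetical.

principles = [
    "respect for intellectual property", "truthfulness", "robustness",
    "recognition of malicious uses", "sociocultural responsibility",
    "human-centric design",
]

# Each entry records a pair of principles that can pull in opposite
# directions, with a short note on the nature of the contradiction.
tensions = {
    ("truthfulness", "human-centric design"):
        "blunt factual output may conflict with user well-being",
    ("robustness", "respect for intellectual property"):
        "broader training data aids robustness but raises IP concerns",
}

def conflicts_with(principle):
    """List principles recorded as being in tension with the given one."""
    return sorted({other for pair in tensions for other in pair
                   if principle in pair and other != principle})

print(conflicts_with("truthfulness"))  # ['human-centric design']
```

Making the tension map explicit supports the review's other meta-principles too: fundamental principles can be ranked by how many tensions they participate in, and the map can be re-audited as GenAI systems evolve.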
... From virtual assistants for personalised education to systems for monitoring students or teachers, the potential benefits of AI for education are usually accompanied by a debate on its impact on privacy and well-being. At the same time, the social transformation brought about by AI demands reform of traditional education systems (Dignum, 2019) and a new field of study emerging within education (Carrión-Sánchez and Porto-Pedrosa, 2023). ...
Digital competence, according to the OECD and the EU, refers to the skills needed to live and work in a digitalised society. The omnipresence of algorithms in individuals' interactions challenges the models developed so far, highlighting the need for a new algorithmic competence focused on understanding and using algorithms and artificial intelligence. Two dimensions are identified for assessing algorithmic competence: conceptual skills and human skills. Conceptual skills include the creation of digital resources, data management, critical thinking, digital pedagogy, and student assessment. Human skills refer to organisational communication, professional collaboration, ethical and responsible conduct, reflective practice, and digital training. Algorithmic competence seeks to improve critical thinking, autonomy, and problem-solving through the appropriate use of AI tools. This comprehensive approach provides a clear framework for developing and assessing algorithmic competence in the educational context.
... In addition, it comprises awareness of AI's societal consequences in terms of employment, social interactions, and democratic processes. (39) Students should be able to reason about algorithmic bias and how artificial intelligence can worsen or improve social inequalities. It is designed to inculcate responsible AI citizenship so that students are prepared to participate in informed debates about AI governance and make ethical decisions as developers, users, and policymakers of AI technology. ...
Introduction: As artificial intelligence (AI) has become increasingly integrated into daily life, traditional digital literacy frameworks must be revised to address the modern challenges. This study aimed to develop a comprehensive framework that redefines digital literacy in the AI era by focusing on the essential competencies and pedagogical approaches needed in AI-driven education. Methods: This study employed a constructivist and connectivist theoretical approach combined with Jabareen's methodology for a conceptual framework analysis. A systematic literature review from 2010-2024 was conducted across education, computer science, psychology, and ethics domains, using major databases including ERIC, IEEE Xplore, and Google Scholar. The analysis incorporated a modified Delphi technique to validate the framework’s components. Results: The developed framework comprises four key components: technical understanding of AI systems, practical implementation skills, critical evaluation abilities, and ethical considerations. These components are integrated with traditional digital literacy standards through a meta-learning layer that emphasises adaptability and continuous learning. This framework provides specific guidance for curriculum design, pedagogical approaches, assessment strategies, and teacher development. Conclusions: This framework offers a structured approach for reconceptualising digital literacy in the AI era, providing educational institutions with practical guidelines for implementation. Integrating technical and humanistic aspects creates a comprehensive foundation for preparing students for an AI-driven world, while identifying areas for future empirical validation.
... The answers that AI returns, however, result from the probabilistic resolution of the analysis of the data used to train the machine and can therefore yield unpredictable and unexpected results, generating consequences that may affect fundamental human rights [Cassino 2021], [Couldry e Mejias 2019], including amplifying biases [Cozman e Kaufman 2022] and prejudices [Silva 2022]. From this perspective, thinking about AI development guided by ethics also means designing responsible AI [Dignum 2019]. Approaches such as Ethics by Design for AI, proposed by Brey and Dainow [2023] on the basis of extensive research, help AI developments take ethics into account from the design stage by instantiating fundamental moral values of a general nature and converting them into ethical requirements within products and systems. ...
Artificial Intelligence has advanced and reached people's daily lives through products, systems, and services resulting from design projects. Today, then, we have a technology that forces an acceleration in the development of the projects that embody it, and creating ethical solutions in these circumstances, positioning the human being at the center of the process, is what highlights the role of designers as humanising agents of technologies. Seeking to establish these relationships, this article offers reflections based on bibliographic research grounded in relevant authors, with the aim of contributing to research in the areas of design, artificial intelligence, and ethics.
... The focus on ethics and fairness in AI has gained traction in recent years, both by researchers (e.g., Holmes et al. (2021); Slade and Prinsloo (2013); Cerratto Pargman and McGrath (2021); Yang et al. (2021)) and governmental institutions such as UNESCO and the European Parliament. AI models should be used responsibly and should align with fundamental human values and principles to safeguard human well-being (Dignum, 2019). This responsibility should cover all steps of the AI lifecycle, including data acquisition, model implementation and deployment of systems, as well as how end-users understand and adopt the system. ...
... Usability became an issue of growing importance [48]. ...
This review explores the integration of Human-Computer Interaction (HCI) principles in AI to advance Human-Centered Artificial Intelligence (HCAI). It highlights how these fields intersect to create user-friendly AI systems that enhance human capabilities and align with human values. Given the recent interest of HCI in user-centered design and AI in technical innovation, this paper bridges this divide by infusing principles from HCI into AI systems. Relevant peer-reviewed articles, conference papers, and case studies have been selected from leading databases like IEEE Xplore, ACM Digital Library, ScienceDirect, and Google Scholar, encompassing publications from 2017 to 2024. The inclusion criteria for the review focus on interdisciplinary approaches, real-world applications, and challenges of HCAI, while studies that do not have a clear methodology or lack relevance to HCAI were excluded. This paper identifies some of the key gaps, highlights the successful applications of HCAI across healthcare, education, and entertainment, and discusses various challenges that have arisen, such as bias, transparency, and balancing automation with human control. Findings reveal that iterative design and human-centered frameworks will lead to better usability and ethical fit for HCAI, but significant challenges remain. This study proposes an integrative framework for bringing HCI principles into AI design through interdisciplinary collaboration in developing systems that will enhance human capabilities while considering ethical aspects. Future directions include responsible AI, personalized healthcare, and effective human-AI collaboration.
... It focuses on building multi-dimensional trust, enhancing user and societal confidence in AI systems. This framework integrates principles from Responsible AI [56], Ethical AI [57], and Safety AI [58], while placing additional emphasis on robustness, which provides a foundation for the more advanced stages of FAI. ...
As Artificial Intelligence (AI) continues to advance rapidly, Friendly AI (FAI) has been proposed to advocate for more equitable and fair development of AI. Despite its importance, there is a lack of comprehensive reviews examining FAI from an ethical perspective, as well as limited discussion on its potential applications and future directions. This paper addresses these gaps by providing a thorough review of FAI, focusing on theoretical perspectives both for and against its development, and presenting a formal definition in a clear and accessible format. Key applications are discussed from the perspectives of eXplainable AI (XAI), privacy, fairness and affective computing (AC). Additionally, the paper identifies challenges in current technological advancements and explores future research avenues. The findings emphasise the significance of developing FAI and advocate for its continued advancement to ensure ethical and beneficial AI development.
Algorithm-driven financial systems significantly influence monetary stability and payment transactions. While these systems bring opportunities like automation and predictive analytics, they also raise ethical concerns, particularly biases embedded in historical data. Recognizing the critical role of governance, ethics, legal considerations, and social implications (GELSI), this study introduces a framework tailored for algorithmic systems in financial services, focusing on Indonesia's evolving regulatory environment. Using the Multiple Streams Approach (MSA) as our theoretical lens, we offer a framework that augments existing quantitative methodologies. Our study provides a nuanced, qualitative perspective on algorithmic trust and regulation. We proffer actionable strategies for the Central Bank of Indonesia (BI), emphasizing stringent data governance, system resilience, and cross-sector collaboration. Our findings highlight the critical importance of ethical guidelines and robust governmental policies in mitigating algorithmic risks. We combine theory and practical advice to show how to align problems, policies, and politics to create practical opportunities for algorithmic governance. This study contributes to the evolving discourse on responsible financial technology. Our study recommends a balanced way to manage the challenges of innovation, regulation, and ethics in the age of algorithms.
There is no consensus on what constitutes human-centeredness in AI, and existing frameworks lack empirical validation. This study addresses this gap by developing a hierarchical framework of 26 attributes of human-centeredness, validated through practitioner input. The framework prioritizes ethical foundations (e.g., fairness, transparency), usability, and emotional intelligence, organized into four tiers: ethical foundations, usability, emotional and cognitive dimensions, and personalization. By integrating theoretical insights with empirical data, this work offers actionable guidance for AI practitioners, promoting inclusive design, rigorous ethical standards, and iterative user feedback. The framework provides a robust foundation for creating AI systems that enhance human well-being and align with societal values. Future research should explore how these attributes evolve across cultural and industrial contexts, ensuring the framework remains relevant as AI technologies advance.
In the current artificial intelligence (AI) era, intense scrutiny is needed to set the framework for AI, which is driving rapid advancements in different fields. Careful consideration, adaptation, and the amalgamation of diverse ethical perspectives are required to tackle the ethical issues AI presents effectively. Therefore, the present study explores essential dimensions of Aristotelian ethics, such as well-being, virtues, and practical wisdom, for AI-driven implementations and examines the relevance of these ancient ethical principles in addressing the ethical challenges of AI through a literature review and a focus group discussion. The insights of the focus group discussion were aggregated and explained through the three-letter acronym "WHY" (Why, How, and Yield, the last representing the outcome). First, we delve into why these Aristotelian ethical dimensions matter; second, we examine how these ancient principles can shape AI systems by critically analysing the suitability of these ethics for designing and using AI technologies; and third, we consider the implications of integrating Aristotle's ethics with AI. Integrating the two can contribute to human flourishing, enhancing collective well-being through virtuous action; however, the direct implementation of Aristotle's ethical framework, such as virtues, moral character, and the pursuit of the good life, or eudaimonia, poses challenges for the implementation of AI ethics. The aim is to create AI systems that are not only intelligent and efficient but also moral and virtuous.
This chapter examines AI's impact on employment's future, analyzing whether its workforce integration benefits or harms society. The study explores AI adoption's implications across sectors, investigating how it reshapes job markets, skill requirements, and workplace dynamics. It addresses job displacement concerns and emerging employment opportunities while considering workplace AI ethics. The analysis examines education and training adaptation strategies for an AI-driven economy, alongside AI's economic impact on productivity, innovation, and global competitiveness. The chapter evaluates policy frameworks for ensuring AI integration in employment benefits society, balancing technological progress with human welfare and social equity.
Advances in assistive technologies have opened up new possibilities for the inclusion of individuals with visual impairments, especially in the academic context. This article addresses the use of virtual assistants in university libraries to facilitate information retrieval for these users. The general objective is to evaluate the effectiveness of these assistants in overcoming the barriers faced by people with visual impairments when searching for academic information. Specific objectives include identifying the obstacles faced by these users, analysing the functionalities of virtual assistants, and discussing the challenges and prospects of their application in the university environment. The methodology adopted was a narrative literature review, consulting scientific articles, books, and official documents on the topic. The research showed that, although virtual assistants have the potential to facilitate accessibility, challenges remain in the adequate implementation of these technologies in university libraries. It is concluded that, for virtual assistants to be effective in information retrieval, a combination of technological improvements, professional training, and institutional policies promoting the use of inclusive resources is necessary.
https://www.pbcib.com/index.php/pbcib/article/view/62567
In our current technological context, the virtues needed to live an ethically good life must be acquired in the context of practices in which technologies, particularly artificial intelligence systems, play an increasingly central role. To be good practitioners, humans must now be competent users of such technologies. MacIntyre’s virtue theory offers a way of looking at practices and other social structures as enablers of (or barriers to) moral agency. However, MacIntyre has not developed an account of the impact of technologies on practices, nor the way in which virtues can be developed in technology-mediated practices.
This chapter aims to address this gap by discussing, first, how so called intelligent technologies are intertwined with human agency within practices; and then, how virtues can be developed in such “sociomaterial” practices. Thus, the chapter aims to help extend MacIntyrean virtue ethics, on the one hand, to the study of the moral impact of intelligent technologies on human action and, on the other hand, to the identification of the virtues necessary to seize the opportunities and overcome the harms, dangers, temptations, and distractions which smart technologies pose on human flourishing.
This paper proposes a framework, "Virtuous AI," which aims to cultivate ethics and self-regulation in AI systems. The framework is based on stages of self-development: the impulsive self, the self-reproaching self, and the self at peace. The suggested framework develops AI towards this final stage, from a rule-based stage focused on rules of ethical conduct, to one that is adaptive and self-regulating, and finally to advanced ethical consciousness. The proposed Virtuous AI model addresses the challenges posed by static, rule-based ethics in complex human-centered scenarios by suggesting a shift from static ethics in AI to a progressive approach. The first stage focuses on fixing basic and fundamental ethics for AI so as to avoid harm through strict guidelines with set boundaries. The second stage reinforces the first by embedding AI with self-regulatory capabilities through mechanisms such as reinforcement learning and self-assessment algorithms, allowing it to use situational information and adjust its responses accordingly. The third stage moves to interactions in which ethical maturity is acquired, allowing AI systems to autonomously demonstrate, through sound decision-making processes, the virtues of empathy, fairness, and humility, creating trust and ethics in transactions within rapidly changing environments. The implementation strategies include using reinforcement learning to increase ethical flexibility, natural language processing and emotional engagement through response generation, and employing moral feedback loops to assist people and AI in cultivating virtue. Through case studies in healthcare, law enforcement, and social media, this paper illustrates the practical uses of Virtuous AI in promoting humane, fair, and trustworthy engagements in various spheres.
Given the specific barriers present, such as virtue language development, side effects, and the issue of interfaces with human morality, this paper calls for cross-domain collaboration, research, and policy frameworks toward achieving Virtuous AI. This framework seeks to limit artificial intelligence's adverse impact on people and society.
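The staged progression the paper describes, static rules first, then self-regulation via a moral feedback loop, can be sketched roughly as follows. The rule names, risk scores, and thresholds are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the first two Virtuous AI stages:
# stage 1 applies a fixed rule set; stage 2 adapts a caution
# threshold from moral feedback. All values are illustrative.

FORBIDDEN = {"cause harm", "deceive user"}  # stage 1: static rules

class VirtuousAgent:
    def __init__(self):
        self.caution = 0.5  # stage 2: self-regulated risk threshold

    def act(self, action, risk_score):
        """Permit an action only if it passes both stages."""
        if action in FORBIDDEN:            # stage 1: hard prohibition
            return False
        return risk_score < self.caution   # stage 2: adaptive judgment

    def feedback(self, was_harmful):
        # Moral feedback loop: tighten caution after harmful outcomes,
        # relax it slowly after benign ones, clamped to [0.1, 0.9].
        self.caution += -0.1 if was_harmful else 0.01
        self.caution = min(max(self.caution, 0.1), 0.9)

agent = VirtuousAgent()
print(agent.act("recommend content", risk_score=0.3))   # permitted at first
agent.feedback(was_harmful=True)                        # caution tightens
print(agent.act("recommend content", risk_score=0.45))  # now rejected
```

The third stage the paper envisions, autonomous ethical maturity, has no agreed-upon computational form; the sketch only shows how a feedback loop can make rule-following behaviour progressively more situational.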
This chapter explores how artificial intelligence (AI) augments human capabilities across sectors like healthcare, education, and business, emphasizing ethical considerations. It addresses challenges such as bias in algorithms and workforce displacement while discussing future trends like natural language interfaces and brain-computer interfaces. It advocates for ethical governance, proactive reskilling, and inclusive AI development to ensure equitable societal benefits and sustainable progress.
Adversarial Attacks are prompting discussion and investment in the field of Information Technology. However, they can cause profound ethical problems, because these Adversarial Attacks trigger hallucinations in machines equipped with Generative Artificial Intelligence through the input of tampered or merely misinterpreted data. The objective is to carry out a bibliographic survey, not only of the technical aspects, but also of proposals for ethical implementations in Artificial Intelligence. The results indicate that the greatest challenges lie in the potentially irreparable damage arising from the distortions that Adversarial Attacks cause, demanding new ways of approaching the ethical problem in Artificial Intelligence.
Attention-Deficit/Hyperactivity Disorder (ADHD) is a neurodevelopmental condition marked by inattention, impulsivity, and hyperactivity, significantly affecting well-being and workplace productivity. Current management strategies often lack scalability, personalization, and continuous support. Advances in Artificial Intelligence (AI) present new opportunities to address these gaps through enhanced diagnosis, treatment, monitoring, and mentorship. Based on existing research, this study proposes a Human-AI Collaboration Model for ADHD Support encompassing Diagnosis, Therapeutic Support, and Post-Therapy Monitoring and Mentoring. Combining AI's precision with human empathy, the model provides a scalable, holistic approach to ADHD care in everyday life and organizational contexts. The study recommends the future development of a Minimum Viable Product (MVP) of the proposed model to validate its effectiveness and support future innovation.
The objective of this article is to outline and explain the concept of Responsible Artificial Intelligence, engaging, in this exercise of conceptual contribution, the spaces of advertising and their ethical implications. It also aims to identify, describe, and reflect on the actions and ethical framings that Brazilian entities in the advertising field, specifically the Conselho Nacional Autorregulamentação Publicitária, have been deliberating in order to foster and disseminate key principles and recommendations for a responsible approach to Artificial Intelligence. The methodology adopted combines a bibliographic survey and documentary research, guided by qualitative content analysis, which explores the first ethical proceedings against advertising campaigns reported to and evaluated by the Conselho Nacional Autorregulamentação Publicitária that invoke the theme of Artificial Intelligence. These cases are documented, between 2019 and 2023, in the database of case decisions judged by the entity, available on its website for public consultation. As a result, in general, it was found that practices by advertising-sector entities in Brazil to guide the field on the repercussions of Artificial Intelligence are still rare. In addition, based on the analyses of the aforementioned ethical proceedings, it was possible to develop the descriptive and explanatory framework "Regulatory gaps and techno-ethical points of attention on the use of AI in Brazilian advertising".
The approach of empowering learners as subjects in the use of AI is in line with the United Nations Educational, Scientific and Cultural Organization (UNESCO)'s AI and Education: Guidance for Policy-Makers, and is pursued in recognition of the three paradigmatic shifts in the use of AI in educational settings. To strengthen the role of learners as leaders in the use of AI, this article draws on Karol Wojtyla's idea of the acting person. The concept of the acting person centers on moral responsibility, founded in human consciousness and carried out through human action. The moral act of an acting person leads to responsible use, which requires a commitment to the common good. The first part describes the history and development of AI technologies. The second part discusses the idea of the acting person and AI as an acting machine. The last part presents an analysis of the importance of grounding educational policy on the use of AI in learners' ethical role as acting persons.
Considerable concerns have been raised regarding the potential adverse effects of artificial intelligence (AI) on privacy and ethics. Issues surrounding ethics and privacy have long been linked to a variety of digital technologies, but the rise of AI has heightened their visibility. To demonstrate how AI might exacerbate or affect privacy issues, three particular instances are analyzed: the first focuses on the use of private data by authoritarian governments; the second investigates the implications of AI's use of genetic data; the third concerns the difficulties presented by biometric surveillance. Following this examination, contemporary strategies for tackling privacy issues through data protection regulation are discussed, along with the new challenges that AI could introduce to existing data protection systems. Current European data protection law requires the execution of a data protection impact assessment, and various cases are presented in relation to its ethical considerations. This chapter suggests that an enhanced AI impact assessment could broaden the scope of such evaluations, thereby offering a more comprehensive analysis of the potential privacy concerns associated with AI.
Artificial intelligence (AI) and its recent advancements pervade vast areas of education, the workplace, and society. As a driver of technological progress, AI has the potential to transform entire business areas, optimize the way we work and live together, and promote creativity. At the same time, it harbors the risk of biased algorithms, discrimination, and misinformation. Accordingly, it is now more important than ever to teach students, as potential future designers and users of AI systems, how to deal responsibly and ethically with AI. To support educators in conveying the responsible and ethical use of AI, we conducted a systematic literature review based on the PRISMA guidelines. As a result, we present an overview of established and innovative methods of teaching AI ethics in K-12 and academic settings. We discuss these in terms of their effectiveness and their grounding in learning theories, and derive implications for theory and practice.
This paper argues that artificial intelligence is uniquely positioned to confront climate change, as it offers distinctive methods for adaptation and mitigation. Climate change threatens the protection of the environment and the future of society, making the use of AI crucial in supporting sustainable development. This paper discusses the use of AI, particularly in understanding climate change and the various tasks in this area. In support of ecological conservation, AI improves weather forecasting and environmental monitoring, and legal frameworks such as the UN climate change agreements increasingly draw on AI to reduce emissions. However, ethical questions and regulatory issues remain crucial for ensuring that AI is used sensibly and does not itself pose a threat. Technological change alone is not enough to harness AI for climate strategies; organizational change and policy adjustment are also required. This paper therefore advocates the proactive integration of AI into climate action, adhering to legal and ethical guidelines, to prevent the emergence of an AI-driven catastrophe.
Communicating insights from data effectively requires design skills, technical knowledge, and experience. Data must be accurately represented, with aesthetically pleasing visuals and engaging text, to communicate effectively to the intended audience. Data storytelling has received much attention lately, but it does not yet have a theoretical and practical foundation in information science. A data story adds context, narrative, and structure to the visual representation of data, providing audiences with character, plot, and a holistic narrative experience. This paper proposes a methodological approach to transform a data visualization into a data story based on the Data-Information-Knowledge-Wisdom (DIKW) pyramid and the S-DIKW Framework. Starting from the bottom of the pyramid, the proposed approach defines a strategy to represent insights extracted from data. Data is first turned into information by identifying character(s) facing a problem and adding textual and graphic content; information is then turned into knowledge by organizing what happens into a plot. Finally, a call to wise action, always informed by cultural and community values, completes the transformation into a data story. This article contributes to the theoretical understanding of data stories as emerging information forms, supporting richer understandings of the story as information within the information sciences.