Figure A: Overview of the four core opportunities offered by AI, four corresponding risks, and the opportunity cost of underusing AI.
Source publication
This article reports the findings of AI4People, a year-long initiative designed to lay the foundations for a "Good AI Society". We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations – to assess, to develo...
Similar publications
This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to d...
Citations
... Recently, after surveying the principles of the European Group on Ethics in Science and New Technologies (EGE) as well as 36 other ethical principles proposed so far, the AI4People task force summarized them into nine main principles [15]. Subsequently, the AI HLEG, in response to these, refined them and selected four as the basic ethical principles that trustworthy AI should always adhere to. Abstract ethical principles need to be translated into tangible operational requirements to establish a solid foundation. ...
In recent years, Artificial Intelligence technology has excelled in various applications across all domains and fields. However, the complexity of the algorithms in neural networks makes it difficult to understand the reasons behind their decisions. For this reason, trustworthy AI techniques have started gaining popularity. The concept of trustworthiness is cross-disciplinary: it must meet societal standards and principles, and technology is used to fulfill these requirements. In this paper, we first survey developments from various countries and regions on the ethical elements that make AI algorithms trustworthy, and then focus our survey on the state-of-the-art research into the interpretability of AI. We have conducted an intensive survey of the technologies and techniques used to make AI explainable. Finally, we identify new trends in achieving explainable AI. In particular, we elaborate on the strong link between the explainability of AI and the meta-reasoning of autonomous systems. The concept of meta-reasoning is to 'reason about the reasoning', which coincides with the intention and goal of explainable AI. The integration of these approaches could pave the way for future interpretable AI systems.
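To make the abstract's topic concrete, here is a minimal, hedged sketch of one common model-agnostic interpretability technique of the kind such surveys cover: permutation feature importance with scikit-learn. The dataset, model choice, and parameters are illustrative assumptions and are not taken from the surveyed work.

```python
# Illustrative sketch: permutation feature importance, a common
# model-agnostic explainability technique (dataset and model are
# arbitrary choices for demonstration only).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in validation score;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
for name, mean_drop in sorted(zip(X.columns, result.importances_mean),
                              key=lambda p: p[1], reverse=True)[:5]:
    print(f"{name}: {mean_drop:.3f}")
```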
... • Security Testing: AI-driven penetration testing identified previously unknown vulnerabilities in patient data security protocols, ensuring compliance with HIPAA and GDPR regulations (Floridi et al., 2018). ...
AI technology brought into the field of healthcare is a matter of significant importance, as it has contributed to qualitative improvements in patient care, diagnostics, and treatment planning. The integrity of AI-driven healthcare applications, including their accuracy, reliability, and safety, is therefore central. Even small software bugs can have severe consequences, such as a wrong diagnosis, patients being discharged with the wrong medicines, or a data breach. Traditional testing techniques, such as manual testing and rule-based automation, are often unsatisfactory because they lack the adaptability needed to cope with the ever-increasing complexity of AI-based healthcare applications. Deploying AI in software testing has proved an effective way to address these challenges: ensuring proper test coverage of machine learning algorithms, automating defect detection, and making healthcare software systems more robust. Automated functional testing, performance testing, security testing, and usability testing are the AI-powered testing methodologies that underpin the development of reliable software. This research emphasizes AI-powered software testing methodologies and their impact on healthcare applications, as well as the challenges facing widespread adoption. It also looks ahead to explainable AI (XAI) in testing, continuous integration with DevOps, and AI-powered real-time validation frameworks to ensure the reliability and security of AI-driven healthcare systems.
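As a rough illustration of the automated functional testing this abstract describes, the sketch below applies a metamorphic-style check to a stand-in diagnostic model: changing a non-clinical field must not change the predicted risk. The model, feature layout, and tolerance are assumptions introduced only for the example and are not part of any real healthcare system.

```python
# Illustrative sketch only: a metamorphic-style functional test for a
# hypothetical diagnostic model. `load_model` and the feature layout are
# assumptions, not part of any real clinical software.
import numpy as np


def load_model():
    # Stand-in for loading a trained diagnostic model; exposes a
    # predict_proba(features) -> risk score interface.
    class DummyModel:
        def predict_proba(self, features):
            # Toy risk score computed from the clinical features only.
            clinical = features[:, :3]
            return 1.0 / (1.0 + np.exp(-clinical.sum(axis=1)))
    return DummyModel()


def test_prediction_invariant_to_patient_id():
    """Changing a non-clinical field (e.g. a record identifier) must not
    change the predicted risk — a simple metamorphic relation."""
    model = load_model()
    base = np.array([[0.2, 1.5, -0.3, 101.0]])   # last column: record id
    variant = base.copy()
    variant[0, 3] = 999.0                         # perturb the id only
    assert np.allclose(model.predict_proba(base),
                       model.predict_proba(variant), atol=1e-9)
```

Run under pytest (or call the test function directly); a failing assertion would flag a model that leaks non-clinical fields into its predictions.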
... As organizations contemplate integrating AI in HRM, the moral implications become paramount. Such concerns include fairness, transparency, accountability, and employee privacy [25,26]. However, caution should be observed when deploying technology in "sensitive social and political contexts," including employment, education, and policing [27]. ...
... The evaluation of H5 indicated that ethical considerations of responsible AI governance (ECR) had a solid and direct impact on sustainable HRM performance (SHP) in higher education institutions. According to [26] and [33], the use of ethics in the application of AI is vital in enhancing the trust invested in organizations and the longevity of their existence. OCR's impact as a moderator of the relationship between AI-driven insights and SHP is negative and nonsignificant, which goes against some of the literature's expectations. ...
This study examines the impact of artificial intelligence (AI)-driven insights on sustainable Human Resource Management (HRM) performance in higher education institutions. It explores the mediating roles of HRM practices optimization and decision-making enhancement, as well as the moderating effects of organizational culture, AI adoption readiness, and ethical considerations. By incorporating innovative insights into artificial intelligence technologies, this research contributes to career and HRM literature by revealing how AI transforms workforce dynamics in academic institutions. Data was collected via a quantitative survey from 215 participants, including administrators, AI and data science experts, data analysts, human resource management professionals, institutional leaders, IT staff, and policymakers from higher education institutions in the UAE, using non-probability quota sampling. Data analysis was conducted through descriptive statistics, t-tests, correlation analysis, and structural equation modeling (SEM) using Partial Least Squares SEM (PLS-SEM) in SmartPLS software. Findings confirm that AI-driven insights significantly enhance sustainable HRM performance. Both HRM practices optimization and decision-making enhancement mediate this relationship. While ethical considerations, organizational culture, and AI adoption readiness directly influence HRM sustainability, they do not significantly moderate AI's impact, suggesting that individual engagement with AI technologies plays a more pivotal role than broader organizational factors. This research uniquely integrates the Resource-Based View (RBV) and Technology Acceptance Model (TAM) to examine AI adoption in HRM within higher education—an area previously underexplored. It underscores the importance of HRM optimization and decision-making in maximizing AI's benefits. The findings offer strategic insights for higher education institutions seeking to enhance HRM sustainability through effective AI integration.
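For readers unfamiliar with the analysis steps listed in this abstract, the sketch below illustrates the simpler ones (descriptive statistics, Pearson correlation, and an independent-samples t-test) on synthetic data with SciPy. The variable names and numbers are assumptions, not the study's dataset, and the PLS-SEM step itself is not sketched here.

```python
# Illustrative sketch of the simpler analysis steps named in the abstract
# (descriptives, correlation, t-test). The data below is synthetic and the
# variable names are assumptions, not the study's measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical 5-point Likert scores for two constructs, 215 respondents.
ai_insights = rng.normal(3.8, 0.6, 215).clip(1, 5)
shp = (0.5 * ai_insights + rng.normal(1.9, 0.5, 215)).clip(1, 5)

print(f"AI insights: mean={ai_insights.mean():.2f}, sd={ai_insights.std():.2f}")

# Pearson correlation between AI-driven insights and sustainable HRM performance.
r, p_corr = stats.pearsonr(ai_insights, shp)
print(f"r = {r:.2f}, p = {p_corr:.4f}")

# Independent-samples t-test comparing SHP across two hypothetical groups
# (e.g. high vs. low AI adoption readiness, split at the median).
readiness = rng.normal(3.5, 0.8, 215)
high = shp[readiness >= np.median(readiness)]
low = shp[readiness < np.median(readiness)]
t, p_t = stats.ttest_ind(high, low)
print(f"t = {t:.2f}, p = {p_t:.4f}")
```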
... The ethics frameworks from different countries were designed to identify the ethical concerns raised by AI technologies that are new or likely to arise in the near future, and to describe the steps that can be taken to mitigate them [30,44,45,55,72,96,97]. There is still disagreement about whether AI should be held to human standards of accountability or whether humans are supposed to have final authority over technical creations [35]. ...
In recent times, the term AI ethics has caught the attention of academics, legislators, developers, and AI users seeking to promote ethical AI development. While countries in the Global North have led the way in discussions about the direction of ethical and responsible artificial intelligence development and deployment, perspectives from developing countries like Bangladesh are underrepresented. Based on 32 qualitative interviews with different stakeholders, including machine learning practitioners, academic researchers, and policymakers in the emerging AI ecosystem in Bangladesh, this work closely examines the ongoing challenges and opportunities for ensuring AI ethics in Bangladesh as AI usage grows. In Bangladesh, the government has not yet fully implemented measures to empower citizens with AI-related skills, policies, resources, and data ethics, and a significant portion of the population lacks knowledge of AI. In this paper, we present the findings of the AI4Bangladesh project, which intends to create a roadmap for ethical AI in Bangladesh. We outline the core challenges, the present situation, and the risks of AI for Bangladesh; propose seven AI ethics principles; and offer suggestions to ensure a transparent, accountable, and fair AI ecosystem for Bangladesh.
... In this process, clear criteria must be established to evaluate the social and environmental impacts of proposed solutions. This includes identifying and mitigating possible negative externalities, such as the digital divide, excessive energy consumption in AI training, and the impact on workers replaced or affected by technology implementations [40,41]. ...
... Prospects for MAISTRO include its application in diverse sectors such as healthcare, finance, and manufacturing, where AI systems are gaining increasing importance [40]. In addition, future research can explore the integration of MAISTRO with MLOps and DevOps practices, expanding its applicability in continuous AI operations [1]. ...
... Its application in the PreçoBomAquiSim project resulted in an intelligent recommendation system that improved customer experience and increased the company's revenue. Previous studies have already pointed to the importance of methodologies that integrate technical and ethical aspects in AI development, which was corroborated by this practical application [40]. MAISTRO stood out by providing a flexible and iterative structure, perfectly aligning with the specific needs of the project [164]. ...
The MAISTRO methodology introduces a comprehensive, integrative, and agile framework for managing Artificial Intelligence (AI) system development projects, addressing familiar challenges such as technical complexity, multidisciplinary collaboration, and ethical considerations. Designed to align technological capabilities with business objectives, MAISTRO integrates iterative practices and governance frameworks to enhance efficiency, transparency, and adaptability throughout the AI lifecycle. This methodology encompasses seven key phases, from business needs understanding to operation, ensuring continuous improvement and alignment with strategic goals. A comparative analysis highlights MAISTRO's advantages over traditional methodologies such as CRISP-DM and OSEMN, particularly in flexibility, governance, and ethical alignment. This study applies MAISTRO in a simulated case study of the PreçoBomAquiSim supermarket, demonstrating its effectiveness in developing an AI-powered recommendation system. Results include a 20% increase in product sales and a 15% rise in average customer ticket size, highlighting the methodology's ability to deliver measurable business value. By emphasizing iterative development, data quality, ethical governance, and change and risk management, MAISTRO provides a robust approach for AI projects and suggests directions for future research across diverse industry contexts to facilitate large-scale adoption.
... By utilizing human-AI collaboration in these domains, businesses can gain a competitive edge, make more informed decisions, and drive innovation and growth. However, it is crucial to ensure the responsible and ethical development and deployment of AI systems, addressing concerns around data privacy, algorithmic bias, and transparency in decision-making processes (Arrieta et al., 2020; Floridi et al., 2018). ...
... Additionally, ethical guidelines and codes of conduct should be developed to outline the responsibilities and expected behavior of the various stakeholders involved in human-AI systems, including developers, deployers, and end-users (Arrieta et al., 2020; Floridi et al., 2018). ...
This chapter explores how artificial intelligence (AI) augments human capabilities across sectors like healthcare, education, and business, emphasizing ethical considerations. It addresses challenges such as bias in algorithms and workforce displacement while discussing future trends like natural language interfaces and brain-computer interfaces. It advocates for ethical governance, proactive reskilling, and inclusive AI development to ensure equitable societal benefits and sustainable progress.
... 9. (Floridi et al., 2018): the objective of the study was to explore the opportunities and risks of AI for society; knowledge on how to apply AI in clinical practice is still being developed. ...
Artificial Intelligence (AI) has the potential to transform the healthcare ecosystem, but further research is needed to understand how it can enhance healthcare capabilities. This study analyzes the literature on AI and healthcare capability using the PRISMA approach, applying specific search keywords and inclusion/exclusion criteria. The findings indicate that AI benefits the healthcare ecosystem, significantly influences health outcomes, and transforms medical practices. However, there is limited literature and a lack of understanding regarding how AI enhances healthcare capabilities. Most studies date from 2019, suggesting that COVID-19 has accelerated the adoption of AI systems in healthcare. This research contributes theoretically by developing a framework that clarifies AI’s role in enhancing healthcare capabilities, serving as a foundational model for future studies. It identifies critical gaps in the literature, especially in the Global South, and encourages exploration in under-researched areas where healthcare professionals can benefit from AI. Additionally, it bridges the gap between AI and healthcare, enriching interdisciplinary dialogue relevant to emerging economies facing financial constraints. Practically, the study provides actionable insights for healthcare practitioners and policymakers in the Global South on leveraging AI to improve service delivery. It sets the stage for empirical research, promoting the testing and refinement of the proposed framework in resource-limited contexts, while raising awareness among healthcare staff, managers, and technology developers about AI’s role in healthcare.
... The main reasons are the following: (1) Human-AI Collaboration: with AI handling large datasets and automating routine tasks, human employees working alongside AI can maximize strengths and improve overall performance [29,30]. (2) Ethical Oversight: AI can assist in decision-making, but human oversight is crucial for interpreting AI outcomes, and employee involvement ensures ethical issues are monitored and addressed [31]. (3) Enhanced AI Adaptation: employee involvement helps increase acceptance of technological changes and adaptability to AI-driven shifts [30]. ...
In the context of globalization and rapid technological advancement, the introduction of Artificial Intelligence (AI) has brought new opportunities and challenges to Human Resource Management (HRM). This study constructs an evolutionary game model to explore the strategy choices and evolutionary paths of enterprises and employees in HRM value co-creation with AI involvement. We numerically simulated the dynamic evolution of strategies under different scenarios, revealing the equilibrium characteristics of strategic interactions between enterprises and employees in the AI context. The study finds that, first, the evolutionary game system between enterprises and employees converges to two equilibrium points: {cooperation, active} and {non-cooperation, passive}. Overall, the probability of the former is 2.39 times greater than that of the latter. Second, higher initial probabilities of cooperation and active involvement, along with lower costs for cooperation and active involvement, facilitate the system’s evolution towards the {cooperation, active} equilibrium. Third, enterprises are more sensitive to the benefit distribution ratio than employees. This study provides theoretical support for effectively conducting HRM practices in the AI era through systematic analysis of HRM value co-creation behavior, along with practical policy recommendations.
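To illustrate the kind of model this abstract describes, the sketch below simulates two-population replicator dynamics for the enterprise (cooperate vs. not cooperate) and employees (active vs. passive). The payoff matrices and starting points are invented for demonstration and are not the study's parameters, though the two equilibria the toy system converges to mirror the {cooperation, active} and {non-cooperation, passive} outcomes reported above.

```python
# Illustrative sketch: two-population replicator dynamics for an
# enterprise/employee game. All payoff numbers below are invented for
# demonstration; they are not the study's parameters.
import numpy as np

# Payoff matrices (rows: enterprise {cooperate, not}; cols: employee {active, passive}).
A = np.array([[4.0, 1.0],    # enterprise payoffs
              [2.0, 2.5]])
B = np.array([[3.5, 0.5],    # employee payoffs
              [1.0, 2.0]])

def simulate(x0, y0, steps=5000, dt=0.01):
    """x: probability the enterprise cooperates; y: probability employees are active."""
    x, y = x0, y0
    for _ in range(steps):
        # Expected payoff of each pure strategy against the other population's mix.
        fx = A @ np.array([y, 1 - y])          # enterprise: [cooperate, not]
        fy = np.array([x, 1 - x]) @ B          # employee:   [active, passive]
        # Replicator update for a two-strategy population.
        x += dt * x * (1 - x) * (fx[0] - fx[1])
        y += dt * y * (1 - y) * (fy[0] - fy[1])
    return x, y

# Starting points with higher/lower initial cooperation and involvement.
for x0, y0 in [(0.7, 0.6), (0.2, 0.1)]:
    print((x0, y0), "->", tuple(round(v, 3) for v in simulate(x0, y0)))
```

With these toy payoffs, the first starting point converges toward (1, 1), i.e. {cooperation, active}, and the second toward (0, 0), i.e. {non-cooperation, passive}, matching the bistable structure described in the abstract.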
... Policymakers play a crucial role in shaping the regulatory framework for responsible AI integration. Studies by Floridi et al. (2018) emphasize the need for policymakers to strike a balance between encouraging innovation and ensuring ethical use of AI. ...
This research delves into the application of responsible AI in healthcare, with a focus on ethical frameworks, equity, and openness. In addition to addressing potential ethical and bias issues, it provides ways of integrating AI technologies that improve patient outcomes and operational efficiency. The study examines real-world issues and regulatory ramifications, making the case that strong governance and cross-disciplinary cooperation are necessary for ethical AI applications. The results support ongoing enhancement and stakeholder involvement in order to make full use of AI's advantages and guarantee fair and long-lasting progress in healthcare.
... AI's impact on human agency is a double-edged sword. While AI tools have contributed to improving people's quality of life, often by employing their data to provide tailored recommendations (Logg et al., 2019), there are concerns about how the data is obtained, stored, and utilized, and controversies regarding manipulation and surveillance (Floridi et al., 2021; Ienca, 2023). AI tools like chatbots may seem impressive at mimicking human interactions that seemingly display feelings of empathy (Stark & Hoey, 2021), and that characteristic can add to the automation bias problem (where humans overly trust AI recommendations), which undermines critical thinking and accountability (Ienca, 2023; Suresh et al., 2020). ...