Figure - available from: AI and Ethics
EPS seeks to bridge the gap between ethical principles and practical implementations, giving developers tools to use throughout the development cycle of an AI system. The general workflow of this method consists of an evaluation (Impact Assessment) stage and a Recommendation stage, structured in a WHY, SHOULD, and HOW format.
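As a rough, hypothetical sketch (not taken from the source paper), the two-stage workflow in the caption can be pictured in code: an Impact Assessment stage that scores a system against ethical principles, followed by a Recommendation stage that emits guidance in WHY, SHOULD, and HOW form. All names below (Recommendation, impact_assessment, recommend, the example principles and threshold) are illustrative assumptions, not the EPS tool itself.

```python
# Illustrative sketch of the two-stage EPS workflow described in the caption:
# an Impact Assessment (evaluation) stage followed by a Recommendation stage
# structured as WHY / SHOULD / HOW. Names, principles, and scoring are hypothetical.
from dataclasses import dataclass


@dataclass
class Recommendation:
    principle: str
    why: str     # why the principle is at risk in this system
    should: str  # what the development team should aim for
    how: str     # concrete practices or tools to apply during development


def impact_assessment(system_profile: dict) -> dict:
    """Evaluation stage: score each principle (0 = no concern, 1 = high concern)."""
    # In practice these scores would come from questionnaires or audits;
    # here the profile is passed in directly for illustration.
    return dict(system_profile)


def recommend(scores: dict, threshold: float = 0.5) -> list:
    """Recommendation stage: emit WHY/SHOULD/HOW guidance for high-impact principles."""
    guidance = {
        "transparency": Recommendation(
            principle="transparency",
            why="Users cannot inspect how the system reaches its outputs.",
            should="Decisions affecting users should be explainable on request.",
            how="Adopt model documentation and post-hoc explanation tooling in the release cycle.",
        ),
        "privacy": Recommendation(
            principle="privacy",
            why="Training data may contain personal information.",
            should="Personal data exposure should be minimized and documented.",
            how="Apply data minimization and document data provenance before training.",
        ),
    }
    return [guidance[p] for p, s in scores.items() if s >= threshold and p in guidance]


if __name__ == "__main__":
    scores = impact_assessment({"transparency": 0.8, "privacy": 0.3})
    for rec in recommend(scores):
        print(f"[{rec.principle}] WHY: {rec.why}\n  SHOULD: {rec.should}\n  HOW: {rec.how}")
```

The point of the sketch is the separation of concerns the figure shows: the evaluation stage only measures impact, while the recommendation stage translates high-impact principles into actionable WHY/SHOULD/HOW guidance for developers.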
Source publication
The past years have seen a surge in artificial intelligence (AI) development, fueled by breakthroughs in deep learning, increased computational power, and substantial investments in the field. Given the generative capabilities of more recent AI systems, the era of large-scale AI models has transformed various domains that intersect our daily lives. However, this prog...
Similar publications
Purpose
The growing demand for rare-earth elements has led to interest in assessing the environmental impacts of their production. However, most existing procedures only consider a small part of the life cycle of rare-earth elements. Therefore, this study proposed an allocation model for the resource consumption of multi-output products in hydromet...
Citations
... For example, in an autonomous vehicle network, several vehicles may need to communicate to avoid accidents and make ethical safety and risk allocation decisions [11]. This machine-to-machine collaboration allows AI systems to efficiently address ethical difficulties while guaranteeing that their decisions adhere to established ethical norms [12]. EAIFT envisions a future in which AI systems adhere to ethical rules and actively collaborate to ensure ethical decision-making. ...
The rapid development of artificial intelligence (AI) has created several ethical issues, including bias, a lack of transparency, and privacy concerns, demanding the incorporation of ethical governance directly into AI systems. This study introduces the Ethical Artificial Intelligence Framework Theory (EAIFT), a novel approach to incorporating ethical reasoning into AI. It emphasizes real-time oversight, open decision-making, bias detection, and the ability to adapt to evolving ethical and legal norms. EAIFT advocates for establishing "ethical AI watchdogs" that automatically monitor and ensure the ethical operation of AI systems, together with dynamic compliance algorithms that can adapt to regulatory changes. The paradigm also encourages transparency and explainability to build user trust and detect and correct biases to ensure fairness. This paper employs a qualitative methodology that combines stakeholder interviews, content analysis, and expert commentary to evaluate EAIFT's potential to increase ethical accountability in various areas, including healthcare, banking, and criminal justice. The findings suggest that EAIFT outperforms existing ethical frameworks by proactively reducing biases, increasing transparency, and ensuring adherence to ethical standards. While presenting a comprehensive and adaptable technique, the study also acknowledges limitations in empirical testing and the need for additional research to widen EAIFT's applicability to future ethical challenges in artificial intelligence. The paper suggests future research subjects, such as empirical testing in different scenarios, a more in-depth examination of ethical risks, and the inclusion of the framework into new AI technologies to promote responsible AI governance in line with societal norms and values.
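As a purely hypothetical illustration (the cited study describes EAIFT qualitatively and does not publish an implementation), the "ethical AI watchdog" and dynamic-compliance ideas could be sketched as a monitor that reviews individual decisions against a rule set that can be updated when regulations change. The class, rule names, and example decision below are all assumptions made for the sketch.

```python
# Hypothetical sketch of an "ethical AI watchdog": a monitor that checks model
# decisions against compliance rules which can be registered or replaced at
# runtime as ethical or regulatory requirements change. All names are illustrative.
from typing import Callable, Dict, List

ComplianceRule = Callable[[dict], bool]


class EthicsWatchdog:
    def __init__(self) -> None:
        self._rules: Dict[str, ComplianceRule] = {}

    def register_rule(self, name: str, rule: ComplianceRule) -> None:
        """Dynamic compliance: rules can be added or swapped without retraining the model."""
        self._rules[name] = rule

    def review(self, decision: dict) -> List[str]:
        """Return the names of all rules the given decision violates."""
        return [name for name, rule in self._rules.items() if not rule(decision)]


if __name__ == "__main__":
    watchdog = EthicsWatchdog()
    # Example rule: decisions must record an explanation for the affected person.
    watchdog.register_rule("explainability", lambda d: bool(d.get("explanation")))
    # Example rule: protected attributes must not appear among the input features.
    watchdog.register_rule("no_protected_attributes",
                           lambda d: "ethnicity" not in d.get("features", []))

    decision = {"features": ["income", "ethnicity"], "explanation": ""}
    print(watchdog.review(decision))  # -> ['explainability', 'no_protected_attributes']
```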
... As already stated by Fjeld et al. [181], there is a gap between established principles and their actual application. In the WAIE sample, most of the documents only prescribe normative claims without the means to achieve them, while the effectiveness of more practical methodologies, in most cases, remains empirically untested [115]. ...
The critical inquiry pervading the realm of Philosophy, and perhaps extending its influence across all Humanities disciplines, revolves around the intricacies of morality and normativity. Surprisingly, in recent years, this thematic thread has woven its way into an unexpected domain, one not conventionally associated with pondering "what ought to be": the field of artificial intelligence (AI) research. Central to morality and AI, we find "alignment", a problem related to the challenges of expressing human goals and values in a manner that artificial systems can follow without leading to unwanted adversarial effects. More explicitly and with our current paradigm of AI development in mind, we can think of alignment as teaching human values to non-anthropomorphic entities trained through opaque, gradient-based learning techniques. This work addresses alignment as a technical-philosophical problem that requires solid philosophical foundations and practical implementations that bring normative theory to AI system development. To accomplish this, we propose two sets of necessary and sufficient conditions that, we argue, should be considered in any alignment process. While necessary conditions serve as metaphysical and metaethical roots that pertain to the permissibility of alignment, sufficient conditions establish a blueprint for aligning AI systems under a learning-based paradigm. After laying such foundations, we present implementations of this approach by using state-of-the-art techniques and methods for aligning general-purpose language systems. We call this framework Dynamic Normativity. Its central thesis is that any alignment process under a learning paradigm that cannot fulfill its necessary and sufficient conditions will fail in producing aligned systems.