Pre-print of article published 15 February 2019 in Nature Electronics:
https://doi.org/10.1038/s41928-019-0213-6
Ethical standards in robotics and AI
A new generation of ethical standards in robotics and artificial intelligence is emerging as a
direct response to a growing awareness of the ethical, legal and societal impact of the fields.
But what exactly are these ethical standards and how do they differ from conventional
standards?
Alan Winfield
Standards are a vital part of the infrastructure of the modern world: invisible, but no
less important than roads, airports and telephone networks. It is hard to think of any
aspect of everyday life untouched by standards. The International Organization for
Standardization (ISO), just one of several standards bodies, lists a total of 22,482
published standards. Take the simple act of brushing your teeth in the morning: there
are standards for your toothbrush (both manual, ISO 20126, and electric, ISO 20127),
your toothpaste and its packaging (ISO 11609), and the quality of your tap water
(ISO 5667-5). Although it might seem odd to wax lyrical on standards, they do
represent a truly remarkable body of work drafted by countless expert volunteers
with an extraordinary impact on individual and societal health and safety.
All standards embody a principle, and often it is an ethical principle or value. Safety
standards are founded on the general principle that products and systems should do
no harm, that is, that they should be safe; ISO 13482, for instance, sets out safety
requirements for personal care robots. Quality management standards, such as ISO
9001, describe how things should be done, and can be thought of as expressing the
principle that shared best practice leads to improved quality. And technical
standards, like IEEE 802.11 (better known as WiFi), can be thought of as embodying
the benefits of interoperability. Even the basic idea of standards as codifying shared
ways of doing things can be thought of as expressing the values of cooperation and
harmonisation. All standards can therefore be thought of as implicit ethical standards.
We can define an explicit ethical standard as one that addresses clearly articulated
ethical concerns and seeks, through its application, to at best remove, hopefully
reduce, or at the very least highlight the potential for unethical impacts or their
consequences.
What are the ethical principles which underpin these new ethical standards? An
informal survey1 in December 2017 listed a total of ten different sets of ethical
principles for robotics and AI. The earliest (1950) are Asimov’s laws of robotics:
important because they established the principle that robots should be governed by
principles. Very recently we have seen a proliferation of principles; of the ten sets
surveyed, seven were published in 2017.
Perhaps not surprisingly, these ethical principles have much in common. In summary:
robots and artificial intelligences (AIs) should do no harm, while being free of bias
and deception; respect human rights and freedoms, including dignity and privacy,
while promoting well-being; and be transparent and dependable while ensuring that
the locus of responsibility and accountability remains with their human designers or
operators. Just as interesting is the increasing frequency of their publication: clear
evidence for a growing awareness of the urgent need for ethical principles for
robotics and AI. But, while an important and necessary foundation, principles are not
practice. Ethical standards are the next important step toward ethical governance in
robotics and AI2.
Ethical risk assessment
Almost certainly the world’s first explicit ethical standard in robotics is BS 8611 Guide
to the Ethical Design and Application of Robots and Robotic Systems3, which was
published in April 2016. Incorporating the EPSRC principles of robotics4, BS 8611 is
not a code of practice, but instead guidance on how designers can undertake an
ethical risk assessment of their robot or system, and mitigate any ethical risks so
identified. At its heart is a set of 20 distinct ethical hazards and risks, grouped under
four categories: societal, application, commercial & financial, and environmental.
Advice on measures to mitigate the impact of each risk is given, along with
suggestions on how such measures might be verified or validated. The societal
hazards include, for example, loss of trust, deception, infringements of privacy and
confidentiality, addiction, and loss of employment. The idea of ethical risk
assessment is of course not new (it is essentially what research ethics committees
do), but a method for assessing robots for ethical risks is a powerful new addition to
the ethical roboticist’s toolkit.
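To make this concrete, the sketch below shows one way a BS 8611-style ethical risk
register might be captured in software. It is a minimal illustration only: the example
hazard entries, the numeric scoring scheme and the field names are assumptions made
for the purpose of the sketch, not definitions taken from the standard itself.

```python
from dataclasses import dataclass
from enum import Enum


class HazardCategory(Enum):
    """The four hazard categories of BS 8611."""
    SOCIETAL = "societal"
    APPLICATION = "application"
    COMMERCIAL_FINANCIAL = "commercial & financial"
    ENVIRONMENTAL = "environmental"


@dataclass
class EthicalRisk:
    """One entry in an ethical risk register (fields are illustrative)."""
    hazard: str               # e.g. "loss of trust", "deception"
    category: HazardCategory
    likelihood: int           # 1 (rare) to 5 (almost certain) -- assumed scale
    severity: int             # 1 (negligible) to 5 (severe) -- assumed scale
    mitigation: str           # measure intended to reduce the risk
    validation: str           # how the mitigation will be checked

    @property
    def score(self) -> int:
        """Simple likelihood x severity rating, as in familiar risk matrices."""
        return self.likelihood * self.severity


# A fragment of a register for a hypothetical assisted living robot.
register = [
    EthicalRisk("deception (user comes to believe the robot cares)",
                HazardCategory.SOCIETAL, 4, 3,
                mitigation="robot states clearly that it is a machine",
                validation="user trials confirm users are not misled"),
    EthicalRisk("infringement of privacy",
                HazardCategory.SOCIETAL, 3, 4,
                mitigation="camera data processed on board, never stored",
                validation="design review plus data audit"),
]

# Report the register, highest-rated risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.category.value:<12}  {risk.hazard}")
```

Sorting by score is one simple way to surface the risks that most urgently need
mitigation; a real assessment would of course also record the residual risk that
remains after mitigation.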
In April 2016, the IEEE Standards Association launched a global initiative on the
Ethics of Autonomous and Intelligent Systems5. The significance of this initiative
cannot be overstated; coming from a professional body with the standing and reach
of the IEEE Standards Association, it marks a watershed in the emergence of ethical
standards. And it is a radical step. As I’ve argued above, all standards are, even if
not explicitly, based on ethical principles. But for a respected standards body to
launch an initiative which explicitly aims to address the deep ethical challenges that
face the whole of autonomous and intelligent systems (from driverless car autopilots
to medical diagnosis AIs, drones to deep learning, and care robots to chatbots) is
both ambitious and unprecedented.
Humanity first
The IEEE initiative positions human well-being as its central tenet6. This is a bold and
political stance since it explicitly seeks to reposition robotics and AI as technologies
for improving the human condition rather than simply vehicles for economic growth.
The initiative’s mission is “to ensure every stakeholder involved in the design and
development of autonomous and intelligent systems is educated, trained, and
empowered to prioritize ethical considerations so that these technologies are
advanced for the benefit of humanity”.
The first major output from the IEEE Standards Association’s global ethics initiative is
a discussion document called Ethically Aligned Design (EAD)7, developed through an
iterative process which invited public feedback. The published second edition of EAD
sets out more than 100 ethical issues and recommendations, and a third edition will
be launched early in 2019. The work of more than 1000 volunteers across thirteen
committees, EAD covers: general (ethical) principles; how to embed values into
autonomous intelligent systems; methods to guide ethical design; safety and
beneficence of artificial general intelligence and artificial superintelligence; personal
data and individual access control; reframing autonomous weapons systems;
economics and humanitarian issues; law; affective computing; classical ethics in AI;
policy; mixed reality; and well-being.
Each EAD committee was additionally tasked with identifying, recommending and
promoting new candidate standards, and to date a total of 14 new IEEE standards
working groups have started work on drafting so-called ‘human standards’
(Box 1).
Box 1: IEEE P7000 series human standards in development
P7000 Model Process for Addressing Ethical Concerns During System Design
P7001 Transparency of Autonomous Systems
P7002 Data Privacy Process
P7003 Algorithmic Bias Considerations
P7004 Standard for Child and Student Data Governance
P7005 Standard for Transparent Employer Data Governance
P7006 Standard for Personal Data Artificial Intelligence (AI) Agent
P7007 Ontological Standard for Ethically Driven Robotics and Automation Systems
P7008 Standard for Ethically Driven Nudging for Robotic, Intelligent and
Autonomous Systems
P7009 Standard for Fail-Safe Design of Autonomous and Semi-Autonomous
Systems
P7010 Wellbeing Metrics Standard for Ethical Artificial Intelligence and
Autonomous Systems
P7011 Standard for the Process of Identifying and Rating the Trustworthiness of
News Sources
P7012 Standard for Machine Readable Personal Privacy Terms
P7013 Inclusion and Application Standards for Automated Facial Analysis
Technology
The importance of transparency and explainability
Consider P7001 as a case study. One of the general principles8 of EAD asks ‘how
can we ensure that autonomous and intelligent systems are transparent?’, and
recommends a new standard for transparency. P7001 Transparency of Autonomous
Systems was initiated as a direct response. IEEE P7001 directly addresses the
straightforward ethical principle that it should always be possible to find out why an
autonomous system made a particular decision.
A robot or AI is transparent if it is possible to find out why it behaves in a certain way.
We might for instance want to discover why it made a particular decision, especially if
that decision caused an accident or for the less serious reason that the robot or
AI’s behaviour is puzzling. Transparency is not intrinsic to robots and AIs, but must
be designed for, and it is a property which autonomous systems might have more or
less of. And full transparency might be very challenging to provide, for instance in
systems based on artificial neural networks (deep learning systems), or systems that
are continually learning.
There are two reasons transparency is so important.
First, because modern robots and AIs are designed to work with or alongside
humans, who need to be able to understand what they are doing and why. If we take
an assisted living robot as an example, transparency (or, to be precise, explainability)
means the user can understand what the robot might do in different circumstances.
An elderly person might be very unsure about robots, so it is important that her robot
is helpful, predictable (never does anything that frightens her) and, above all, safe. It
should be easy for her to learn what the robot does and why, in different
circumstances. An explainer system that allows her to ask the robot ‘why did you just
do that?’ and receive a simple natural-language explanation would be very helpful in
providing this kind of transparency. A higher level of transparency would be the
ability to ask questions like ‘what would you do if I fell down?’ or ‘what would you do
if I forget to take my medicine?’ This allows her to build a mental model of how the
robot will behave in different situations.
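A toy sketch of such an explainer follows. The action names and canned explanations
are invented for illustration; a real system would need to generate explanations from
its actual control logic rather than from a fixed lookup table.

```python
# Toy "why did you just do that?" explainer: each action the robot takes is
# logged, and a natural-language explanation can be retrieved on demand.
# All action names and phrasings here are invented examples.

EXPLANATIONS = {
    "fetch_medicine": "I brought your medicine because it is 8 am, "
                      "your usual time for taking it.",
    "call_for_help": "I called your daughter because you did not answer "
                     "when I asked whether you were all right.",
    "stop_moving": "I stopped because you were standing in my path and "
                   "I must never risk bumping into you.",
}

decision_log: list[str] = []


def decide(action: str) -> None:
    """Record an action so that it can be explained later."""
    decision_log.append(action)


def why_did_you_do_that() -> str:
    """Answer the user's question about the most recent action."""
    if not decision_log:
        return "I have not done anything yet."
    return EXPLANATIONS.get(decision_log[-1], "I cannot explain that action.")


decide("fetch_medicine")
print(why_did_you_do_that())
```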
And second, because robots and AIs can and do go wrong. If physical robots go
wrong, they can cause physical harm or injury. Real-world trials of driverless cars
have already resulted in several fatalities9. Even a software AI can cause harm. A
medical diagnosis AI might, for instance, give the wrong diagnosis, or a biased credit
scoring AI might cause someone’s loan application to be wrongly rejected. Without
transparency, discovering what went wrong is extremely difficult and may in some
cases be impossible. The ability to find out what went wrong and why is not only
important to accident investigators, it might also be important to establish who is
responsible, for insurance purposes, or in a court of law. And, following high-profile
accidents, wider society needs the reassurance of knowing that problems have been
found and fixed.
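One practical aid here is the ‘ethical black box’ proposed by Winfield and Jirotka10:
a continuous record of sensor and relevant internal state data, analogous to an
aircraft flight data recorder. The sketch below is a minimal illustration of the idea;
the record fields and the bounded-memory retention policy are assumptions, not a
specification.

```python
import json
import time
from collections import deque

# Sketch of an "ethical black box": a bounded, append-only log of what the
# robot sensed, what it decided and why, in the spirit of an aircraft flight
# data recorder. Field names and retention policy are illustrative assumptions.


class EthicalBlackBox:
    def __init__(self, capacity: int = 10_000):
        # Keep only the most recent records, like a loop recorder.
        self._records = deque(maxlen=capacity)

    def log(self, sensors: dict, decision: str, reason: str) -> None:
        """Append a timestamped snapshot of what the robot saw and decided."""
        self._records.append({
            "t": time.time(),
            "sensors": sensors,
            "decision": decision,
            "reason": reason,
        })

    def export(self) -> str:
        """Dump the full log for an accident investigator, oldest first."""
        return "\n".join(json.dumps(record) for record in self._records)


ebb = EthicalBlackBox()
ebb.log({"lidar_min_dist_m": 0.4, "speed_m_s": 0.2},
        decision="stop_moving",
        reason="obstacle within the 0.5 m safety envelope")
print(ebb.export())
```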
Transparency and explainability measured
But transparency is not one thing. Clearly an elderly relative does not require the
same level of understanding of a care robot as the engineer who repairs it. The
P7001 working group has defined five distinct groups of stakeholders (the
beneficiaries of the standard): users, safety certifiers or agencies, accident
investigators, lawyers or expert witnesses, and the wider public. For each of these
stakeholder groups, P7001 is setting out measurable, testable levels of transparency
so that autonomous systems can be objectively assessed and levels of compliance
determined, in a range that defines minimum levels up to the highest achievable
standards of transparency.
Of course, the way in which transparency is provided is very different for each group.
Safety certification agencies need access to technical details of how the system
works, together with verified test results. Accident investigators will need access to
data logs of exactly what happened prior to and during an accident, most likely
provided by something akin to an aircraft flight data recorder10. Lawyers and expert
witnesses will need access to the reports of safety certifiers and accident
investigators, along with evidence of the developer or manufacturer’s quality
management processes. And wider society needs accessible documentary-type
science communication to explain autonomous systems and how they work. P7001
will provide system designers with a toolkit for self-assessing transparency, and
recommendations for how to achieve greater transparency and explainability.
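As a rough illustration of what such a self-assessment might produce, the sketch
below scores a hypothetical system against the five stakeholder groups. The groups
are those defined by the working group; the 0-4 scale, the level names and the
example scores are assumptions made purely for illustration, since P7001 was still in
draft at the time of writing.

```python
# Sketch of a P7001-style transparency self-assessment. The five stakeholder
# groups are those named above; the 0-4 levels and their meanings are
# illustrative assumptions, not the working group's definitions.

LEVELS = {0: "none", 1: "minimal", 2: "partial", 3: "substantial", 4: "highest"}

# Example self-assessment of a hypothetical assisted living robot.
assessment = {
    "users": 3,                         # natural-language explainer provided
    "safety certifiers": 4,             # full design docs and verified tests
    "accident investigators": 2,        # partial data logging only
    "lawyers and expert witnesses": 2,  # relies on certifiers' reports
    "wider public": 1,                  # a short web page, nothing more
}

for group, level in assessment.items():
    bar = "#" * level + "." * (4 - level)
    print(f"{group:<30} [{bar}] {LEVELS[level]}")

print(f"\nminimum transparency level across groups: {min(assessment.values())}")
```

Reporting the minimum level across groups reflects the intuition that a system is only
as transparent as its least-served stakeholder; a real assessment under P7001 need
not aggregate scores this way.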
Outlook
How might these new ethical standards be applied when, like most standards, they
are voluntary? First, standards which relate to safety (and especially safety-critical
systems) can be mandated by licensing authorities, so that compliance with those
standards becomes a de facto requirement of obtaining a licence to operate that
system; for the P7000 series candidates might include P7001 and P7009. Second, in
a competitive market, compliance with ethical standards can be used to gain market
advantage, especially among ethically aware consumers. Third, there is growing
pressure from professional bodies for their members to behave ethically. Emerging
professional codes of ethical conduct such as the recently published ACM11 and
IEEE12 codes of ethics and professional conduct are very encouraging; in turn, those
professionals are increasingly likely to exert internal pressure on their employers to
adopt ethical standards. And fourth, soft governance plays an important role in the
adoption of new standards: by requiring compliance with standards as a condition of
awarding procurement contracts, governments can and do influence and direct the
adoption of standards across an entire supply chain without explicit regulation.
For data- or privacy-critical applications, a number of the P7000 standards
(P7002/3/4/5/12 and 13, for instance) could find application in this way.
While some argue over the pace and level of impact of robotics and AI (on jobs, for
instance), most agree that increasingly capable intelligent systems create significant
ethical challenges, as well as great promise. This new generation of ethical
standards takes a powerful first step toward addressing those challenges. Standards,
like open science13, are a trust technology. Without ethical standards, it is hard to see
how robots and AIs will be trusted and widely accepted, and without that acceptance
their great promise will not be realised.
Alan Winfield is Professor of Robot Ethics at the Bristol Robotics Laboratory, UWE
Bristol, and visiting professor at the University of York. He chairs IEEE Standards
Working Group P7001.
e-mail: Alan.Winfield@brl.ac.uk
The views expressed in this article are those of the author only, and do not represent
the opinions of any organisation mentioned, or with which he is affiliated.
References
1. http://alanwinfield.blogspot.com/2017/12/a-round-up-of-robotics-and-ai-ethics.html
2. Winfield, A. F. & Jirotka, M. Phil. Trans. R. Soc. A 376, 20180085 (2018);
http://dx.doi.org/10.1098/rsta.2018.0085
3. British Standards Institution, BS 8611:2016 Robots and robotic devices. Guide to
the ethical design and application of robots and robotic systems (2016);
https://shop.bsigroup.com/ProductDetail/?pid=000000000030320089
4. EPSRC, Principles of Robotics (2011);
https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/
5. https://standards.ieee.org/industry-connections/ec/autonomous-systems.html
6. http://standards.ieee.org/develop/indconn/ec/ec_about_us.pdf
7. IEEE Standards Association, Ethically Aligned Design (2017)
https://ethicsinaction.ieee.org/
8. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_general_principles_v2.pdf
9. Stilgoe, J. & Winfield, A. Self-driving car companies should not be allowed to
investigate their own crashes. The Guardian (13 April 2018);
https://www.theguardian.com/science/political-science/2018/apr/13/self-driving-car-companies-should-not-be-allowed-to-investigate-their-own-crashes
10. Winfield, A. F. & Jirotka, M. The case for an ethical black box. In Gao, Y., Fallah,
S., Jin, Y. & Lekakou, C. (eds) Towards Autonomous Robotic Systems (TAROS 2017),
Lecture Notes in Computer Science, vol. 10454 (Springer, Cham, 2017).
11. https://www.acm.org/code-of-ethics
12. https://www.ieee.org/about/corporate/governance/p7-8.html
13. Grand, A., Wilkinson, C., Bultitude, K. & Winfield, A. F. Open Science: A new
‘trust technology’? Science Communication 34, 679–689 (2012).