Article · PDF available

Professional Ethics in Computing and Intelligent Systems

Abstract

Research and engineering have a decisive impact on the development of society, providing not only material artifacts but also the ideas and other "tools of thought" used to conceptualize and relate to the world. Scientists and engineers are therefore required to take into consideration the welfare, safety and health of the public affected by their professional activities. Research and Engineering Ethics are highly relevant for the field of computing (with Intelligent Systems/AI as its sub-field). Computing Ethics has thus been developed as a particular branch of Applied Ethics. Professional organizations consider ethical judgment an essential component of professionalism. This paper points out the significance of teaching ethics, especially for future AI professionals. It argues that education in Ethics should be incorporated in computing curricula. Experience from the course "Professional Ethics in Science and Engineering" given at Mälardalen University in Sweden is presented as an illustration.
... Table 22.3 shows a list of some organizations that have developed codes of conduct along with their corresponding websites. The codes differ considerably in content, owing to their specific focus and origins, but the main idea and the general ethical values they express are similar (Dodig-Crnkovic, 2006). ...
... On a more specific level of abstraction, intelligent artifacts are the focus of interest of Roboethics, a new field of applied ethics, which has brought about many interesting novel insights [Veruggio and Operto, 2008] [Roboethics]. Ethical challenges addressed within Roboethics include the use of robots, ubiquitous sensing systems and ambient intelligence, direct neural interfaces and invasive nano-devices, intelligent softbots, robots aimed at warfare, and the like, which actualize ethical issues such as values, responsibility, liability, accountability, control, privacy, the self and (human) rights [Dodig-Crnkovic, 2006b], [Dodig-Crnkovic and Persson, 2008]. ...
Chapter
Full-text available
Luciano Floridi’s Information Ethics (IE) is a new theoretical foundation of Ethics. According to Floridi, ICT with all its informational structures and processes generates our new informational habitat, the Infosphere. For IE, moral action is an information-processing pattern. IE addresses the fundamentally informational character of our interaction with the world, including interactions with other agents. Information Ethics is a macro-ethics, as it focuses on systems/networks of agents and their behavior. IE’s capacity to study ethical phenomena at the basic level of underlying information patterns and processes makes it unique among ethical theories in providing a conceptual framework for fundamental-level analysis of the present globalised, ICT-based world. It allows computational modeling, a powerful tool of study that increases our understanding of the informational mechanisms of ethics. Computational models help capture behaviors invisible to the unaided mind, which relies exclusively on shared intuitions. The article presents an analysis of the application of IE as interpreted within the framework of Info-Computationalism. The focus is on responsibility/accountability distribution and similar phenomena of information communication in networks of agents. Agent-based modeling enables studying the increasing complexity of behavior in multi-agent systems whose agents (actors) range from cellular automata to softbots, robots and humans. Autonomous, learning, artificially intelligent systems technologies are developing rapidly, resulting in a new division of tasks between humans and robots/softbots. The biggest present-day concern about autonomous intelligent systems is the fear of humans losing control and of robots acting inappropriately and causing harm. Among the inappropriate kinds of behavior is the ethically unacceptable one.
In order to assure ethically adequate behavior of autonomous intelligent systems, artifactual ethical responsibility/accountability should be one of the built-in features of intelligent artifacts. Adding the requirement for artifactual ethical behavior to a robot/softbot does not by any means take responsibility away from the humans designing, producing and controlling autonomous intelligent systems. On the contrary, it makes explicit the necessity for all those involved with such intelligent technology to assure its ethical conduct. Today’s robots are used mainly as complex electromechanical tools and have no capability of taking moral responsibility. But technological progress is remarkable; robots are quickly improving their sensory and motor competencies, and the development of artifactual (synthetic) emotions adds new dimensions to robotics. Artifactual reasoning and other information-processing skills are advancing, all of which is driving significant progress in the field of Social Robotics. We thus have strong reasons to analyze a future technological development in which robots/softbots are so intelligent and responsive that they possess artifactual morality alongside artifactual intelligence. Technological artifacts are always part of a broader socio-technological system with distributed responsibilities. The development of autonomous, learning, morally responsible intelligent agents consequently relies on several responsibility feedback loops: the awareness and preparedness for handling risks on the side of designers, producers, implementers, users and maintenance personnel, as well as the support of society at large, which will respond to the consequences of the use of technology. This complex system of shared responsibilities should secure the safe functioning of hybrid systems of humans and intelligent machines. Information Ethics provides a conceptual framework for computational modeling of such socio-technological systems.
Apart from examples of specific applications of IE, an interpretation of several widely debated questions, such as the role of Levels of Abstraction, naturalism, and complexity/diversity in Information Ethics, is offered through an Info-Computationalist analysis.
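As a rough illustration of the kind of agent-based modeling of distributed responsibility the chapter describes, a minimal simulation might apportion responsibility for an outcome across the socio-technological network in proportion to each agent's degree of autonomy. All names, roles and weights below are hypothetical assumptions for illustration, not taken from the original work:

```python
# Illustrative sketch: responsibility as a matter of degree,
# distributed over a socio-technological network of agents.
# Roles and autonomy weights are assumed values, not from the source.

class Agent:
    def __init__(self, name, autonomy):
        self.name = name
        self.autonomy = autonomy  # 0.0 (pure tool) .. 1.0 (fully autonomous)

def distribute_responsibility(agents):
    """Share responsibility for an outcome in proportion to each agent's autonomy."""
    total = sum(a.autonomy for a in agents)
    return {a.name: a.autonomy / total for a in agents}

network = [
    Agent("designer", 0.9),
    Agent("producer", 0.7),
    Agent("user", 0.6),
    Agent("robot", 0.3),   # limited artifactual agency
]

shares = distribute_responsibility(network)
for name, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {share:.2f}")
```

Even this toy model exhibits the point made above: assigning a share of "responsibility" to the robot does not remove any share from the human stakeholders; it only makes the distribution across the whole network explicit.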
... Recently, Roboethics, a field of applied ethics, has developed with many interesting, novel insights [3]. Topics addressed within Roboethics include the use of robots, ubiquitous sensing systems and ambient intelligence, direct neural interfaces and invasive nano-devices, intelligent softbots, robots aimed at warfare, and the like, which actualize ethical issues of responsibility, liability, accountability, control, privacy, the self and (human) rights [2]. This article deals specifically with the issue of (moral) responsibility in artificial intelligent systems. ...
Conference Paper
Full-text available
Roboethics is a recently developed field of applied ethics which deals with the ethical aspects of technologies such as robots, ambient intelligence, direct neural interfaces, invasive nano-devices and intelligent softbots. In this article we look specifically at the issue of (moral) responsibility in artificial intelligent systems. We argue for a pragmatic approach, where responsibility is seen as a social regulatory mechanism. We claim that having a system which takes care of certain tasks intelligently, learning from experience and making autonomous decisions, gives us reasons to talk about a system (an artifact) as being "responsible" for a task. No doubt, technology is morally significant for humans, so the "responsibility for a task" with moral consequences could be seen as moral responsibility. Intelligent systems can be seen as parts of socio-technological systems with distributed responsibilities, where responsible (moral) agency is a matter of degree. Knowing that all possible abnormal conditions of a system's operation can never be predicted, and that no system can ever be tested for all possible situations of its use, the responsibility of a producer is to assure the proper functioning of a system under reasonably foreseeable circumstances. Additional safety measures must, however, be in place in order to mitigate the consequences of an accident. The socio-technological system aimed at assuring a beneficial deployment of intelligent systems has several functional responsibility feedback loops which must function properly: the awareness of and procedures for handling risks and responsibilities on the side of designers, producers, implementers and maintenance personnel, as well as the understanding of society at large of the values and dangers of intelligent technology. The basic precondition for developing this socio-technological control system is the education of engineers in ethics and a living democratic debate on preferences about the future society.
Conference Paper
Full-text available
As a global community we are facing a number of existential challenges, such as global warming, shortages of basic commodities, environmental degradation and other threats to life on earth, as well as possible unintended consequences of AI, nanotechnology, biotechnology, and similar. Among worldwide responses to those challenges, the framework programme for European research and technological development, Horizon 2020, has formulated the Science with and for Society Work Programme, based on Responsible Research and Innovation, with the goal of supporting research that contributes to the progress of humanity and prevents catastrophic events and their consequences. This goal can only be reached if we educate responsible researchers and engineers with both deep technical knowledge and broad disciplinary and social competence. From the perspective of experiences at two Swedish universities, this paper argues for the benefits of teaching professional ethics and sustainable development to engineering students.
Article
Full-text available
Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the affirmative, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems like cognitive robots are being developed that are expected to become part of our everyday lives in the coming decades. Thus, it is necessary to ensure that their behaviour is adequate. In an analogy with artificial intelligence, which is the ability of a machine to perform activities that would require intelligence in humans, artificial morality is considered to be the ability of a machine to perform activities that would require morality in humans. The capacity for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions, artificial (synthetic) emotions, etc., comes in varying degrees and depends on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. In the same way that the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to look at artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system.
This does not take away the responsibilities of the other stakeholders in the system, but facilitates an understanding and regulation of such networks. It should be pointed out that the process of development must assume an evolutionary form with a number of iterations, because the emergent properties of artifacts must be tested in real-world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level Requirements Engineering through a discussion and analysis of general requirements for the design of ethical robots.

Keywords: Artificial morality, Machine ethics, Machine morality, Roboethics, Autonomous agents, Artifactual responsibility, Functional responsibility
Article
Full-text available
Play and games are among the basic means of expression in intelligent communication, influenced by the relevant cultural environment. Games have found a natural expression in the contemporary computer era, in which communications are increasingly mediated by computing technology. The widespread use of e-games results in conceptual and policy vacuums that must be examined and understood. Humans involved in designing, administering, selling and playing computer games encounter new situations in which good and bad, right and wrong, are not defined by the experience of previous generations. This article gives an account of the historical necessity of games and the development of e-games, their pros and cons, threats and promises, focusing on the ethical awareness and attitudes of game developers.
Article
Full-text available
Most discussions of engineering ethics dismiss the idea of codes of ethics from the outset. Codes are described as self-serving, unrealistic, inconsistent, mere guides for novices, too vague, or unnecessary. This article does not do that. Instead, it argues that a code of professional ethics is central to advising individual engineers how to conduct themselves, to judging their conduct, and ultimately to understanding engineering as a profession. The article begins with a case now commonly discussed in engineering ethics (the Challenger disaster), finding its general argument in a detailed analysis of a particular choice. While the analysis should be applicable to all professions, that claim is not argued in this article.
Article
The purpose of this essay is to determine what exactly is meant by the claim that computer ethics is unique, a position that will henceforth be referred to as the CEIU thesis. A brief sketch of the CEIU debate is provided, and an empirical case involving a recent incident of cyberstalking is briefly considered in order to illustrate some controversial points of contention in that debate. To gain a clearer understanding of what exactly is asserted in the various claims about the uniqueness of computer ethics, and to avoid many of the confusions currently associated with the term "unique", a precise definition of that term is proposed. We then differentiate two distinct and radically different interpretations of the CEIU thesis, based on arguments that can be found in the relevant computer ethics literature. The two interpretations are critically analyzed and both are shown to be inadequate in establishing the CEIU thesis. We then examine and reject two assumptions implicit in arguments advanced both by CEIU advocates and their opponents. In exposing and rejecting these assumptions, we see why it is not necessary to accept the conclusions reached by either side in this debate. Finally, we defend the view that computer ethics issues are both philosophically interesting and deserving of our attention, regardless of whether those issues might also happen to be unique ethical issues.