Article

Abstract

ACM and the IEEE Computer Society created a code of professional practices for the software engineering industry. The code contains eight key principles relating to the behavior of, and decisions made by, professional software engineers, as well as trainees and students of the profession. The principles identify the various relationships in which individuals, groups, and organizations participate, and the primary obligations within those relationships.
... (1) development of design principles for ethical software development (e.g. Al-A'ali (2008), Collins and Miller (1994), Gotterbarn et al. (1997), Hameed (2009), and Cary et al. (2003)); ...
... "A motive can be […] individual characters who drive this forward, and a corporate culture." (Interviewee 7) Cary et al. (2003); Spiekermann and Winkler (2020); Thomson and Schmoldt (2001); Gotterbarn et al. (1997) ...
... This includes the attitude that higher initial investment costs for ethical development pay off in the long run. Gotterbarn et al. (1997) ...
Thesis
Digitalization is a socio-technical phenomenon that shapes our lives as individuals, economies, and societies. The perceived complexity of technologies continues to increase, and technology convergence makes a clear separation between technologies impossible. A good example of this is the Internet of Things (IoT) with its embedded Artificial Intelligence (AI). Furthermore, a separation of the social and the technical component has become all but impossible, a fact of which there is increasing awareness in the Information Systems (IS) community. Overall, emerging technologies such as AI or IoT are becoming less understandable and transparent, which is evident, for instance, when AI is described as a “black box”. This opacity undermines humans’ trust in emerging technologies, which is crucial for both their usage and spread, especially as emerging technologies start to perform tasks that bear high risks for humans, such as autonomous driving. Critical perspectives on emerging technologies are often discussed in terms of ethics, including aspects such as the responsibility for decisions made by algorithms, limited data privacy, and the moral values that are encoded in technology. In sum, the varied opportunities that come with digitalization are accompanied by significant challenges. Research on the negative ramifications of AI is crucial if we are to foster a human-centered technological development that is driven not simply by opportunities but by utility for humanity. As the IS community is positioned at the intersection of the technological and the social context, it plays a central role in answering the question of how to ensure that the advantages of emerging technologies outweigh their challenges. These challenges are examined under the label of the “dark side of IS”, a research area that receives considerably less attention in the existing literature than the positive aspects (Gimpel & Schmied, 2019).
With its focus on challenges, this dissertation aims to counterbalance that imbalance. Since the remit of IS research is the entire information system rather than merely the technology, humanistic and instrumental goals ought to be considered in equal measure. This dissertation follows calls for a healthy distribution of research along the so-called socio-technical continuum (Sarker et al., 2019), broadening its focus to include the social as well as the technical rather than looking at one or the other. With that in mind, this dissertation aims to advance knowledge on IS with regard to opportunities and, in particular, the challenges of two emerging technologies, IoT and AI, along the socio-technical continuum. It provides novel insights that help individuals better understand the opportunities and, above all, the possible negative side effects. It guides organizations on how to address these challenges and suggests not only the necessity of further research along the socio-technical continuum but also several ideas on where to take this future research. Chapter 2 contributes to research on the opportunities and challenges of IoT. Section 2.1 identifies and structures the opportunities that IoT devices provide for retail commerce customers: a structured literature review identifies affordances, and an examination of a sample of 337 IoT devices validates their completeness and parsimony. Section 2.2 takes a close look at the ethical challenges posed by IoT, also known as IoT ethics. Based on a structured literature review, it first identifies and structures IoT ethics and then provides detailed guidance for further research in this important yet under-appreciated field of study. Together, these two research articles underline that IoT has the potential to radically transform our lives, but they also illustrate the urgent need for further research on the possible ethical issues associated with IoT’s specific features.
Chapter 3 contributes to research on AI along the socio-technical continuum. Section 3.1 examines the algorithms underlying AI. Through a structured literature review and semi-structured interviews analyzed with a qualitative content analysis, this section identifies, structures, and communicates concerns about algorithmic decision-making that is supposed to improve offers and services. Section 3.2 takes a deep dive into the concept of moral agency in AI to discuss whether responsibility in human-computer interaction can be grasped better with the concept of “agency”. In Section 3.3, data from an online experiment with a self-developed AI system is used to examine the role of a user’s domain-specific expertise in trusting and following suggestions from AI decision support systems. Finally, Section 3.4 draws on design science research to present a framework for ethical software development that considers ethical issues from the beginning of the design and development process. By looking at the multiple facets of this topic, these four research articles ought to guide practitioners in deciding which challenges to consider during product development. With a view to subsequent steps, they also offer first ideas on how these challenges could be addressed. Furthermore, the articles offer a basis for further, solution-oriented research on AI’s challenges and encourage users to form their own informed opinions.
... However, professional responsibility concerns the added responsibilities associated with one's profession, such as engineering, medicine, or law. Social responsibility refers to the obligations that one has towards society, which corresponds to Principle 1 in [1]. Furthermore, as alluded to above, the software engineering code of ethics (SWECoE) (see [9]) clearly outlines the responsibilities that come with the software engineering profession. These include proper and adequate testing of software, addressing ethical issues related to one's work, working according to industry-approved standards, and the responsibility to put the public interest above all other interests. ...
... The fact that all respondents agreed (even though a few of them agreed only somewhat) that software engineers have an obligation to understand the potential of their software products to either assist or harm society, as also stipulated in [1] and [18], signals the educators' awareness of the responsibility that SEs have in ensuring that their products function not only as expected but also ethically. Software products should be intended to do good, and this is in line with Principle 2 of the IEEE/ACM code of ethics (see [9]). This finding is by and large in line with the findings of a study conducted by Paradice [4], which established that software professionals should take responsibility for their software products (however, not for software behavior beyond their control). ...
Preprint
Software engineering codes of ethics and standards of practice are a means by which the profession expresses its position concerning the obligations and associated responsibilities expected to be shouldered by software engineers in the production of software. It is therefore expected that, apart from software engineering companies taking responsibility at the organizational level, software engineers should equally take ethical responsibility for their work at the individual level. Studies indicate the importance of evaluating the apportionment of such responsibilities to ensure that the relevant role players are aware of the kind and level of responsibilities they are obligated to undertake. The study surveyed educators in a computing faculty to determine what they perceive as levels of responsibility and the role players to whom they attribute those responsibilities. The paper reveals a distributive order of responsibility in terms of the measures used in the study. It also shows that educators perceive software engineers as obliged to take responsibility for their work to ensure quality in their products, and that public concern is of paramount importance. Furthermore, people in authority who approved an unethical system, and other role players such as designers, test engineers, and users, need to take responsibility as well.
... Two works headed by him in the field of SE are of remarkable note. In Gotterbarn [22], the object of analysis is the Code of Ethics and its formulation and implementation in collectivized organizations of computing professionals; Gotterbarn has several papers on this topic, and we cite only this one as an example. In Gotterbarn [21], the topic is specifically and properly ethics in software engineering: the author indicates that this area was in its "infancy" in the early 2000s, and the paper resembles a position paper and research agenda reflecting the author's interpretative vision and the contextual scope of the encounter between ethics and software engineering. ...
Conference Paper
Many practitioners and researchers still consider software as program code: composed of algorithms and associated documentation, written in a programming language, and finally compiled or interpreted in a computational environment. The different facets of the software question, explored in interdisciplinary fields such as the Digital Humanities, seem largely unexplored in Software Engineering and other traditional Computing research fields. This paper aims to present an overview of ethical aspects in the publications of the Brazilian Symposium on Software Quality (SBQS). We followed a Systematic Literature Review (SLR) approach and present quantitative, qualitative, and in-depth results, analyzing fifteen editions of the SBQS between 2006 and 2020. We adopted the concept of Ethics in a primary way, through searches for directly associated terms, and in a secondary way, through terms such as Informed Consent, aligned with the definitions and concepts of Computational Ethics combined with the episteme and good practices of Software Engineering. The results point to a minor occurrence of ethical aspects in SBQS publications, growing only timidly over the years. We conclude that there is broad space to explore the theme of Computational Ethics combined with Software Quality.
... Whether they work fully autonomously or in human-machine teams, artificial agents given rules to follow (for rules ranging from international laws, to company ethical policies, to mission-specific orders) can benefit tremendously by understanding how to use interpretive reasoning to determine the applicability of open-textured phrases [55]. For example, the ACM/IEEE-CS Software Engineering Code of Ethics [20] states that software engineers should "[m]oderate the interests of the software engineer, the employer, the client and the users with the public good." But the phrase 'public good' is highly open-textured, and people may disagree about whether certain plausible actions are in service of the public good. ...
Preprint
Full-text available
Can artificially intelligent systems follow rules? The answer might seem an obvious `yes', in the sense that all (current) AI strictly acts in accordance with programming code constructed from highly formalized and well-defined rulesets. But here I refer to the kinds of rules expressed in human language that are the basis of laws, regulations, codes of conduct, ethical guidelines, and so on. The ability to follow such rules, and to reason about them, is not nearly as clear-cut as it seems on first analysis. Real-world rules are unavoidably rife with open-textured terms, which imbue rules with a possibly infinite set of possible interpretations. Narrowing down this set requires a complex reasoning process that is not yet within the scope of contemporary AI. This poses a serious problem for autonomous AI: If one cannot reason about open-textured terms, then one cannot reason about (or in accordance with) real-world rules. And if one cannot reason about real-world rules, then one cannot: follow human laws, comply with regulations, act in accordance with written agreements, or even obey mission-specific commands that are anything more than trivial. But before tackling these problems, we must first answer a more fundamental question: Given an open-textured rule, what is its correct interpretation? Or more precisely: How should our artificially intelligent systems determine which interpretation to consider correct? In this essay, I defend the following answer: Rule-following AI should act in accordance with the interpretation best supported by minimally defeasible interpretive arguments (MDIA).
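The idea of preferring the interpretation "best supported by minimally defeasible interpretive arguments" can be loosely illustrated with a toy sketch. This is not the essay's formalism: all names, candidate readings, and numeric strengths below are hypothetical, and real defeasibility is a matter of structured argumentation, not a single score. The sketch merely shows the selection principle: among candidate readings of an open-textured term, prefer the one whose supporting argument has the weakest known counterargument.

```python
# Toy sketch of "minimally defeasible" interpretation selection.
# Assumption: each candidate reading carries the strengths (0..1) of the
# counterarguments known to defeat its supporting argument.
from dataclasses import dataclass, field

@dataclass
class Interpretation:
    label: str  # a candidate reading of an open-textured term
    defeater_strengths: list = field(default_factory=list)

    def defeasibility(self) -> float:
        # An argument with no known defeaters counts as indefeasible (0.0);
        # otherwise its defeasibility is its strongest known counterargument.
        return max(self.defeater_strengths, default=0.0)

def least_defeasible(candidates):
    # Prefer the reading whose supporting argument is hardest to defeat.
    return min(candidates, key=lambda c: c.defeasibility())

# Hypothetical readings of "public good" for some contested decision:
candidates = [
    Interpretation("maximise aggregate engagement", [0.9, 0.6]),
    Interpretation("minimise demonstrable harm", [0.3]),
    Interpretation("defer to local regulation", [0.5, 0.4]),
]
print(least_defeasible(candidates).label)  # → minimise demonstrable harm
```

Reducing an interpretive argument to one number is of course a drastic simplification; the essay's point is precisely that producing and comparing such arguments requires a reasoning process contemporary AI does not yet have.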
Article
Pattern recognition, machine learning, and artificial intelligence offer tremendous opportunities for efficient operations, management, and governance. They can optimise processes for object, text, graphics, speech, and pattern recognition. In doing so, the algorithmic processing may be subject to unknown biases that do harm rather than good. We examine how this may happen, what damage may occur, and the resulting ethical/legal impact and newly manifest obligations to avoid harm to others from these systems. But what are the risks, given the Human Condition?
Article
This paper provides a novel theoretical framework of engineering ethics, specifically of how professional engineering practitioners make sense of ethics. Multiple regression was applied to 2009 survey data (N=2276, 38% return) from practising engineers to identify groups of competencies correlated with ethics. These were identified as values within an ethics worldview model. The model has two compartments. One is the development of a professional worldview, whereby professional engineers reconstruct their own values over time and then seek to embody those values in their own lives. The other is an awareness of the need for professional judgement in complex decision-making. All the significant variables identified in the survey may be accommodated in this model. While the raw data (from 2009) were dated, the method and findings help move the field forward by providing new insights into how practising engineers make sense of ethics.