Article

The epistemic condition for moral responsibility

Authors: Fernando Rudy-Hiller

Abstract

An article on the epistemic or knowledge condition for moral responsibility, written for the Stanford Encyclopedia of Philosophy. Available at https://plato.stanford.edu/entries/moral-responsibility-epistemic/


... For example, someone who took a drug that can affect alertness may have been involved in an accident, but in this particular case the drug may not have causally contributed to the accident. Such questions, which are about singular causation (Cheng & Novick, 2005; Stephan & Waldmann, 2018), are highly relevant because holding someone responsible for some outcome generally requires that their action caused it (Alicke, 2000; Driver, 2008; Rudy-Hiller, 2018). ...
... It also has to be reasonably foreseeable to them that the harmful outcome might be produced by their actions. This requirement is reflected in the so-called epistemic condition in philosophical theories of moral responsibility (Rudy-Hiller, 2018), and in definitions of negligent or reckless behaviour in the law (see, e.g., Dubber, 2015, pp. 42-46). ...
... That is, we retrospectively asked participants how confident they were that the action actually caused the harmful outcome in this situation. Singular causation is generally seen as a prerequisite for assigning moral responsibility (see, e.g., Driver, 2008; Rudy-Hiller, 2018). Thus, an additional reason why moral responsibility is lower in chains than in direct relations (apart from lower outcome foreseeability) could be that participants are less confident that agents actually have caused the harmful outcomes in chains. ...
Article
Causal analysis lies at the heart of moral judgment. For instance, a general assumption of most ethical theories is that people are only morally responsible for an outcome when their action causally contributed to it. Considering the causal relations between our acts and potential good and bad outcomes is also of crucial importance when we plan our future actions. Here, we investigate which aspects of causal relations are particularly influential when the moral permissibility of actions and the moral responsibility of agents for accidental harms are assessed. Causal strength and causal structure are two independent properties of causal models that may affect moral judgments. We investigated whether the length of a causal chain between acts and accidental harms, a structural feature of causal relations, affects people's moral evaluation of action and agent. In three studies (N = 2285), using a combination of vignettes and causal learning paradigms, we found that longer chains lead to more lenient moral evaluations of actions and agents. Moreover, we show that the reason for this finding is that harms are perceived to be less likely, and therefore less foreseeable for agents, when the relation is indirect rather than direct. When harms are considered equally likely and equally foreseeable, causal structure largely ceases to affect moral judgments. The findings demonstrate a tight coupling between causal representations, mental state inferences, and moral judgments, and show that reasoners process and integrate these components in a largely rational manner.
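A back-of-the-envelope illustration (not the authors' model) makes the foreseeability mechanism concrete. Assuming each link in a causal chain independently transmits the cause with strength $w_i$, the probability that an act produces the harm is multiplicative:

\[
P(\text{harm} \mid \text{act}) \;=\; \prod_{i=1}^{n} w_i ,
\]

so with a per-link strength of 0.8, a direct relation yields 0.8 while a three-link chain yields $0.8^3 \approx 0.51$. Under such a model, longer chains mechanically render the harm less probable, and hence less foreseeable to the agent, which is the pattern the studies report.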
... Questions of vulnerability and moral agency have generated contentious debates in moral philosophy (e.g., Strawson, 1962; Goodin, 1985a; Rudy-Hiller, 2018). Therefore, both vulnerability and moral responsibility are at the heart of the issues addressed in this paper. ...
... First, the agent must have the capacity and act freely to deserve blame or praise (Rudy-Hiller, 2018; Björnsson, 2017). An agent who had no control over an action is excused according to this condition. ...
Article
Full-text available
The article addresses issues at the nexus of physician industrial action, moral agency, and responsibility. There are situations in which we find ourselves best placed to offer aid to those who may be in vulnerable positions, a behavior that is consistent with our everyday moral intuitions. In both our interpersonal relationships and social life, we make frequent judgments about whether to praise or blame someone for their actions when we determine that they should have acted to help a vulnerable person. While the average person is unlikely to confront these kinds of situations often, those in the medical professions, physicians especially, may confront these and similar situations regularly. Therefore, when physicians withhold their services for whatever reason in support of industrial action, it raises issues of moral responsibility to patients who may be in a vulnerable position. Using theories of moral responsibility, vulnerability, and ethics, this paper explores the moral implications of physician industrial action. We explore issues of vulnerability of patients, as well as the moral responsibility and moral agency of doctors to patients. Determining when a person is vulnerable, and when an individual becomes a moral agent, worthy of praise or blame for an act or non-action, is at the core of the framework. Notwithstanding the right of physicians to act in their self-interest, we argue that vulnerability leads to moral obligations, that physicians are moral agents, and that the imperatives of their obligations to patients are clear, even if limited by certain conditions. We suggest that both doctors and governments have a collective responsibility to prevent harm to patients and present the theoretical and practical implications of the paper.
... Philosophers typically believe two conditions to be necessary and only jointly sufficient for moral responsibility (Rudy-Hiller, 2018). First, the agent needs some sort of control over what ... (In the philosophical literature, derivative moral responsibility is discussed as the result of a tracing strategy.) ...
Article
Moral philosophers draw an important distinction between two kinds of moral responsibility. An agent can be directly morally responsible, or they can be derivatively morally responsible. Direct moral responsibility, so many believe, presupposes that the agent could have behaved differently. However, in some situations, we hold agents responsible even though they could not have behaved differently, such as when they recklessly cause an accident or do not take adequate precautions to avoid harmful consequences. Moral philosophers typically argue that what we ascribe in these cases is derivative moral responsibility. In this paper, I apply this conceptual distinction to the experimental debate about so-called folk-compatibilism. I argue that experimental philosophers have failed to consider this distinction when designing experiments and interpreting their results. I demonstrate that while compatibilism requires judgments of direct moral responsibility, participants in some of the most influential studies ascribed derivative moral responsibility. For this reason, these studies do not speak in favour of compatibilism at all.
... He emphasizes that decisions reached under coercion ... See, e.g., Fischer and Ravizza (1998, pp. 12-13) and Rudy-Hiller (2018). This is also how Williams (1995h, pp. ...
Article
Full-text available
Is the idea of the voluntary important? Those who think so tend to regard it as an idea that can be metaphysically deepened through a theory about voluntary action, while those who think it a superficial idea that cannot coherently be deepened tend to neglect it as unimportant. Parting company with both camps, I argue that the idea of the voluntary is at once important and superficial—it is an essentially superficial notion that performs important functions, but can only perform them if we refrain from deepening it. After elaborating the contrast between superficial and deepened ideas of the voluntary, I identify the important functions that the superficial idea performs in relation to demands for fairness and freedom. I then suggest that theories trying to deepen the idea exemplify a problematic moralization of psychology—they warp psychological ideas to ensure that moral demands can be met. I offer a three-tier model of the problematic dynamics this creates, and show why the pressure to deepen the idea should be resisted. On this basis, I take stock of what an idea of the voluntary worth having should look like, and what residual tensions with moral ideas this leaves us with.
... They do wrong negligently when they act through culpable ignorance: when they do not acquire knowledge they should have. (On the epistemic conditions of moral responsibility, see Rudy-Hiller 2018.) With this background in place, I turn to consider disability counselling. ...
Article
In this paper I argue that selective abortion for disability often involves inadequate counselling on the part of reproductive medicine professionals who advise prospective parents. I claim that prenatal disability clinicians often fail in intellectual duty—they are culpably ignorant about intellectual disability (or do not disclose known facts to parents). First, I explain why a standard motivation for selective abortion is flawed. Second, I summarize recent research on parent experience with prenatal professionals. Third, I outline the notions of epistemic excellence and deficiency. Fourth, I defend culpable ignorance as the best explanation of inadequate disability counselling. Fifth, I rebut alternative explanations. My focus is pregnancies diagnosed with mild or moderate intellectual disability.
... That is to say, one cannot be blamed for something she or he is forced to do, or for an action performed without knowing that it involves harm. More broadly, the epistemic condition is conceived as concerning "whether the agent's epistemic or cognitive state was such that she can properly be held accountable for the action and its consequences", such that it is equivalent to asking "was this person aware of what she was doing (of its consequences, moral significance, etc.)?" [32]. Explainability can be understood as a kind of requirement for ethical principles in AI. ...
Article
Full-text available
It has been recently claimed that explainability should be added as a fifth principle to AI ethics, supplementing the four principles that are usually accepted in Bioethics: Autonomy, Beneficence, Nonmaleficence and Justice. We propose here that with regard to AI, on the one hand explainability is indeed a new dimension of ethical concern that should be paid attention to, while on the other hand, explainability in itself should not necessarily be considered an ethical “principle”. We think of explainability rather (i) as an epistemic requirement for taking into account ethical principles, but not as an ethical principle in itself; (ii) as an ethical demand that can be derived from ethical principles. We do agree that explainability is a key demand in AI Ethics, with practical importance for stakeholders to take into account; but we argue that it should not be considered as a fifth ethical principle, to maintain a philosophical consistency in the organization of AI ethical principles.
... The level of luck is proportional, and the level of control inversely proportional, to the degree to which knowledge (the epistemic limit) and agency (the control limit) are constrained. Constraints on knowledge (epistemic limits) are constitutive of luck when an action succeeds despite the agent's lack of awareness: at the moment of decision-making, the agent lacks full knowledge either of the causes of the action's future success or of its nature (Rudy-Hiller 2018). In the former case, luck means that the agent does not have sufficient information about the means that may guarantee the future success of the action at hand. ...
Article
Full-text available
The study discusses the three roles of normative assumptions in the theory and practice of innovation management: (1) they define the value of innovation, (2) they specify its luck, and (3) they determine some goals and methodologies of managing the luck of innovations. The crucial questions of the investigation are as follows: What does ‘luck’ mean in theories of innovation management? And what is luck in the practice of innovation management? The conceptual analyses present logical links which occur between the normative premises of some canonical theories of metaethics and definitions of luck. In the context of these analyses the study discusses some prerequisites for responsible decisions relating to innovations. The paper illustrates some ways of using philosophical methods in the theory of innovation management.
... The epistemic condition typically concerns itself with awareness. For an action to be culpable, according to Rudy-Hiller (2018), the agent needs to be aware of the situation in which they are doing a, they need to be aware of the consequences of a, and they need to be aware that more permissible alternatives existed. Rudy-Hiller also lists a requirement that the agent should know the moral significance of their actions. ...
Preprint
Full-text available
Intent modifies an actor's culpability for many types of wrongdoing. Autonomous algorithmic agents have the capability of causing harm, and whilst their current lack of legal personhood precludes them from committing crimes, it is useful for a number of parties to understand under what type of intentional mode an algorithm might transgress. From the perspective of the creator or owner, they would like to ensure that their algorithms never intend to cause harm by doing things that would otherwise be labelled criminal if committed by a legal person. Prosecutors might have an interest in understanding whether the actions of an algorithm were internally intended according to a transparent definition of the concept. The presence or absence of intention in the algorithmic agent might inform the court as to the complicity of its owner. This article introduces definitions for direct, oblique (or indirect) and ulterior intent which can be used to test for intent in an algorithmic actor.
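The three intentional modes suggest a rough computational reading. Below is a minimal, purely illustrative Python sketch of how a test over an algorithmic agent's decision record might look; every name, field, and threshold is hypothetical rather than drawn from the preprint.

```python
# Illustrative sketch only: names, fields, and thresholds are hypothetical.
from dataclasses import dataclass, field
from enum import Enum, auto


class IntentMode(Enum):
    DIRECT = auto()    # the harm was the goal the agent optimized for
    OBLIQUE = auto()   # the harm was foreseen as a near-certain side effect
    ULTERIOR = auto()  # the harm was a further aim beyond the immediate act
    NONE = auto()


@dataclass
class ActionTrace:
    """Hypothetical record of an algorithmic agent's decision."""
    goal_outcomes: set      # outcomes the planner explicitly pursued
    foreseen: dict          # outcome -> predicted probability
    further_aims: set = field(default_factory=set)


def classify_intent(outcome: str, trace: ActionTrace,
                    certainty: float = 0.95) -> IntentMode:
    """Classify the intentional mode under which `outcome` was produced."""
    if outcome in trace.goal_outcomes:
        return IntentMode.DIRECT
    if trace.foreseen.get(outcome, 0.0) >= certainty:
        return IntentMode.OBLIQUE
    if outcome in trace.further_aims:
        return IntentMode.ULTERIOR
    return IntentMode.NONE


# Example: the agent did not aim at the harm but predicted it as 99% likely,
# so the harm counts as obliquely intended.
trace = ActionTrace(goal_outcomes={"maximize_throughput"},
                    foreseen={"data_leak": 0.99})
assert classify_intent("data_leak", trace) is IntentMode.OBLIQUE
```

The 0.95 threshold stands in for the "virtual certainty" standard often associated with oblique intent in criminal law; where to set it, and how to extract goals and forecasts from a real system, are substantive questions the sketch deliberately leaves open.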
... One could argue that designers and providers of gamified systems are responsible not merely for the consequences of intended actions, but also for at least some of the unintended ones. Many philosophers and ethicists, for example, believe that people should be held morally responsible not only for wrongdoings they are aware of, but also in cases where they should have known better [14]. Ascriptions of such moral responsibility (for should-have-known cases) may be even more justified where a person's professional role morally requires them to have known certain things. ...
Article
Full-text available
The use of game-like elements has become increasingly popular in the context of fitness and health apps. While such “gamified” apps hold great potential in motivating people to improve their health, they also come with a “darker side”. Recent work suggests that these gamified health apps raise a number of ethical challenges that, if left unaddressed, are not only morally problematic but also have adverse effects on user health and engagement with the apps. However, studies highlighting the ethical challenges of gamification have also met with criticism, indicating that they fall short of providing guidance to practitioners. In avoiding this mistake, this paper seeks to advance the goal of facilitating a practice-relevant guide for designers of gamified health apps to address ethical issues raised by use of such apps. More specifically, the paper seeks to achieve two major aims: (a) to propose a revised practice-relevant theoretical framework that outlines the responsibilities of the designers of gamified health apps, and (b) to provide a landscape of the various ethical issues related to gamified health apps based on a systematic literature review of the empirical literature investigating adverse effects of such apps.
Thesis
Full-text available
In the 1970s Bernard Williams and Thomas Nagel formally introduced the problem of moral luck. Moral luck can be understood as the seeming paradox between the control principle and the moral judgements we confer on others. The control principle states that an agent can be held morally responsible for an action if, and only if, said agent had control over it. Contrary to this, we often do judge people for many things out of their control. The consequences of our actions, the circumstances we find ourselves in, and our own characters are all things we either wholly or partially lack control over; yet, we hold people responsible for these things. This lack of control and accompanying moral judgements are what is referred to as “moral luck”, and we must therefore either conclude that agents cannot be held responsible for their actions, or that we can hold people responsible for things out of their control, both being framed as problems. Here, I will attempt to give a solution to the problem of moral luck. I will do this by discussing some of the most influential writings on the problem, each section of the thesis focusing on a separate type of luck, addressing the mistakes philosophers have made while inferring that moral luck is real. I will argue that each type of moral luck only exists because we have misunderstood important concepts, and once we revise our conception of control, agency, and responsibility the problem of moral luck disappears. In particular, I will argue that 1) Resultant luck is only a problem because we are focusing on the consequences of actions rather than the intentions of the agent, 2) Circumstantial luck is only a problem because we fallaciously transfer the luck of the world onto moral considerations, and 3) Constitutive luck is only a problem because we are misapplying the concept of control onto character. The thesis will also include a section on the theoretical and practical implications of my solution, should it succeed. My conclusion will thus be, contrary to the thesis of moral luck, that we can still hold agents morally responsible without having to reject the control principle; however, this is only possible if we accept revisions to important moral concepts.
Article
Full-text available
This paper argues that two single-factor accounts of exploitation are inadequate and instead defends a two-factor account. Purely distributive accounts of exploitation, which equate exploitation with unfair transaction, make exploitation pervasive and cannot deliver the intuition that exploiters are blameworthy. Recent, non-distributive alternatives, which make unfairness unnecessary for exploitation, largely avoid these problems, but their arguments for the non-necessity of unfairness are unconvincing. This paper defends a two-factor account according to which A exploits B iff A gains unfairly from B and either A believes that the gains he receives in the transaction wrong B, or A is culpably unaware that the gains he receives in the transaction wrong B. This account avoids the problems of non-distributive approaches and also delivers the intuition that exploiters are blameworthy.
Article
Full-text available
What does logic tell us about how we ought to reason? If P entails Q, and I believe P, should I believe Q? I will argue that we should embed the issue in an independently motivated contextualist semantics for ‘ought’, with parameters for a standard and a set of propositions. With the contextualist machinery in hand, we can defend a strong principle expressing how agents ought to reason while accommodating conflicting intuitions. I then show how our judgments about blame and guidance can be handled by this machinery.
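A minimal formalization may make the shape of such a principle vivid; the following is an illustrative closure-style bridge principle under assumed notation, not the author's exact proposal. Writing $\mathrm{B}p$ for "the agent believes $p$" and $\mathrm{O}_{s,\Pi}$ for an 'ought' relativized to a contextually supplied standard $s$ and set of propositions $\Pi$:

\[
\bigl(\mathrm{B}p \wedge (p \vDash q)\bigr) \;\rightarrow\; \mathrm{O}_{s,\Pi}\,\mathrm{B}q .
\]

Relativizing the 'ought' to $s$ and $\Pi$ is what lets a strong principle of this shape hold in demanding contexts while weaker contexts excuse the agent, accommodating the conflicting intuitions about blame and guidance.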
Article
Full-text available
I argue that there are liberal reasons to reject what I call “Global Individualism”, which is the conjunction of two views strongly associated with liberalism: moral individualism and social individualism. According to the first view, all moral properties are reducible to individual moral properties. The second holds that the social world is composed only of individual agents. My argument has the following structure: after suggesting that Global Individualism does not misrepresent liberalism, I draw on some recent insights in social ontology to show that it is inconsistent with the satisfaction of an important liberal principle related to the protection of individual rights over time. As I hold, to solve this problem we need to accept group agents acting as moral agents, which in turn commits us to the weaker notion of normative individualism (a view that is consistent with the existence of some group moral properties). I conclude with the suggestion that even this solution is costly for liberalism, for the conjunction of group moral agency and normative individualism makes the latter unstable and compels liberals to a much less individualistic stance than expected.
Article
Full-text available
In this paper we investigate which of the main conditions proposed in the moral responsibility literature are the ones that spell trouble for the idea that Artificial Intelligence Systems (AISs) could ever be full-fledged responsible agents. After arguing that the standard construals of the control and epistemic conditions don’t impose any in-principle barrier to AISs being responsible agents, we identify the requirement that responsible agents must be aware of their own actions as the main locus of resistance to attribute that kind of agency to AISs. This is because this type of awareness is thought to involve first-person or de se representations, which, in turn, are usually assumed to involve some form of consciousness. We clarify what this widespread assumption involves and conclude that the possibility of AISs’ moral responsibility hinges on what the correct theory of de se representations ultimately turns out to be.
Article
Full-text available
I consider three challenges to the traditional view according to which moral responsibility involves an epistemic condition in addition to a freedom condition. The first challenge holds that if a person performs an action A freely, then she thereby knows that she is doing A. The epistemic condition is thus built into the freedom condition. The second challenge contends that no epistemic condition is required for moral responsibility, since a person may be blameworthy for an action that she did not know was wrong. The third challenge invokes the quality of will view. On this view, a person is blameworthy for a wrong action just in case the action manifests a bad quality of will. The blameworthy person need not satisfy an additional epistemic condition. I will argue that contrary to appearances, none of these challenges succeeds. Hence, moral responsibility does require a non-superfluous epistemic condition.
Article
Full-text available
Moralization is a social-psychological process through which morally neutral issues take on moral significance. Often linked to health and disease, moralization may sometimes lead to good outcomes; yet moralization is often detrimental to individuals and to society as a whole. It is therefore important to be able to identify when moralization is inappropriate. In this paper, we offer a systematic normative approach to the evaluation of moralization. We introduce and develop the concept of ‘mismoralization’, which is when moralization is metaethically unjustified. In order to identify mismoralization, we argue that one must engage in metaethical analysis of moralization processes while paying close attention to the relevant facts. We briefly discuss one historical example (tuberculosis) and two contemporary cases related to COVID-19 (infection and vaccination status) that we contend to have been mismoralized in public health. We propose a remedy of de-moralization that begins by identifying mismoralization and that proceeds by neutralizing inapt moral content. De-moralization calls for epistemic and moral humility. It should lead us to pull away from our tendency to moralize—as individuals and as social groups—whenever and wherever moralization is unjustified.
Article
Full-text available
AI as decision support supposedly helps human agents make ‘better’ decisions more efficiently. However, research shows that it can, sometimes greatly, influence the decisions of its human users. While there has been a fair amount of research on intended AI influence, there seem to be great gaps within both theoretical and practical studies concerning unintended AI influence. In this paper I aim to address some of these gaps, and hope to shed some light on the ethical and moral concerns that arise with unintended AI influence. I argue that unintended AI influence has important implications for the way we perceive and evaluate human-AI interaction. To make this point approachable from both the theoretical and practical side, and to avoid anthropocentrically-laden ambiguities, I introduce the notion of decision points. Based on this, the main argument of this paper will be presented in two consecutive steps: i) unintended AI influence does not allow for an appropriate determination of decision points (this will be introduced as the decision-point dilemma), and ii) this has important implications for the ascription of responsibility.
Book
Assertion is the central vehicle for the sharing of knowledge. Whether knowledge is shared successfully often depends on the quality of assertions: good assertions lead to successful knowledge sharing, while bad ones don't. In Sharing Knowledge, Christoph Kelp and Mona Simion investigate the relation between knowledge sharing and assertion, and develop an account of what it is to assert well. More specifically, they argue that the function of assertion is to share knowledge with others. It is this function that supports a central norm of assertion according to which a good assertion is one that has the disposition to generate knowledge in others. The book uses this functionalist approach to motivate further norms of assertion on both the speaker and the hearer side and investigates ramifications of this view for other questions about assertion.
Article
Full-text available
After introducing the new field of cultural evolution, we review a growing body of empirical evidence suggesting that culture shapes what people attend to, perceive and remember as well as how they think, feel and reason. Focusing on perception, spatial navigation, mentalizing, thinking styles, reasoning (epistemic norms) and language, we discuss not only important variation in these domains, but emphasize that most researchers (including philosophers) and research participants are psychologically peculiar within a global and historical context. This rising tide of evidence recommends caution in relying on one's intuitions or even in generalizing from reliable psychological findings to the species, Homo sapiens. Our evolutionary approach suggests that humans have evolved a suite of reliably developing cognitive abilities that adapt our minds, information-processing abilities and emotions ontogenetically to the diverse culturally-constructed worlds we confront.
Article
Full-text available
Most accounts of responsibility focus on one type of responsibility, moral responsibility, or address one particular aspect of moral responsibility such as agency. This article outlines a broader framework to think about responsibility that includes causal responsibility, relational responsibility, and what I call “narrative responsibility” as a form of “hermeneutic responsibility”, connects these notions of responsibility with different kinds of knowledge, disciplines, and perspectives on human being, and shows how this framework is helpful for mapping and analysing how artificial intelligence (AI) challenges human responsibility and sense-making in various ways. Mobilizing recent hermeneutic approaches to technology, the article argues that next to, and interwoven with, other types of responsibility such as moral responsibility, we also have narrative and hermeneutic responsibility—in general and for technology. For example, it is our task as humans to make sense of, with and, if necessary, against AI. While from a posthumanist point of view, technologies also contribute to sense-making, humans are the experiencers and bearers of responsibility and always remain in charge when it comes to this hermeneutic responsibility. Facing and working with a world of data, correlations, and probabilities, we are nevertheless condemned to make sense. Moreover, this also has a normative, sometimes even political aspect: acknowledging and embracing our hermeneutic responsibility is important if we want to avoid our stories being written elsewhere—through technology.
Article
Hsiao has recently developed what he considers a ‘simple and straightforward’ argument for the moral permissibility of corporal punishment. In this article we argue that Hsiao's argument is seriously flawed for at least two reasons. Specifically, we argue that (i) a key premise of Hsiao's argument is question-begging, and (ii) Hsiao's argument depends upon a pair of false underlying assumptions, namely, the assumption that children are moral agents, and the assumption that all forms of wrongdoing demand retribution.
Article
Blame skeptics argue that we have strong reason to revise our blame practices because humans do not fulfill all the conditions for it being appropriate to blame them. This paper presents a new challenge for this view. Many have objected that blame plays valuable roles such that we have strong reason to hold on to our blame practices. Skeptics typically reply that non-blaming responses to objectionable conduct, like forms of disappointment, can serve the positive functions of blame. The new challenge is that skeptics need to show that it can be appropriate (or less inappropriate) to respond with this kind of disappointment to people’s conduct if it is inappropriate to respond with blame. The paper argues that current blame-skeptical views fail to meet this challenge.
Article
It is widely accepted that there is what has been called a non-hypocrisy norm on the appropriateness of moral blame; roughly, one has standing to blame only if one is not guilty of the very offence one seeks to criticize. Our acceptance of this norm is embodied in the common retort to criticism, “Who are you to blame me?” But there is a paradox lurking behind this commonplace norm. If it is always inappropriate for x to blame y for a wrong that x has committed, then all cases in which x blames x (i.e., cases of self-blame) are rendered inappropriate. But it seems to be ethical common-sense that we are often, sadly, in a position (indeed, an excellent, privileged position) to blame ourselves for our own moral failings. And thus, we have a paradox: a conflict between the inappropriateness of hypocritical blame, and the appropriateness of self-blame. We consider several ways of resolving the paradox and contend none is as defensible as a position that simply accepts it: we should never blame ourselves. In defending this starting position, we defend a crucial distinction between self-blame and guilt.
Chapter
Full-text available
Scholarship of teaching and learning (SoTL) presents the vital intersection between teaching, learning and research in the Higher Education context. However, ethical requirements applicable to SoTL research are mistrusted and remain a challenge. This results in lecturers not engaging in SoTL research towards transformative pedagogies. In addition, clear guidelines for ethics in SoTL are lacking. In this chapter, the authors critically reflect on ethical mindedness specifically relevant to SoTL research. The scientific gap identified in the literature implies the provision of more guidance on ethical issues to enhance SoTL research. Applying ethical mindedness to SoTL research may provide a stronger coherence between the ethical application process and the scientific approach of SoTL. The study followed a qualitative research approach using design thinking as research methodology. This chapter provides ethical principles and guidelines to the wider SoTL community, including academics, academic developers, scientific committees and RECs to close this gap. Guidelines include aspects such as how to address the power relations in SoTL research, important aspects of informed consent and the process, autonomy to choose freely to participate or not, selection of participants, benefits and risk ratio, protecting participants and the integrity of the research as well as safeguarding data.
Article
Full-text available
New technologies are the source of uncertainties about the applicability of moral and morally connotated concepts. These uncertainties sometimes call for conceptual engineering, but it is not often recognized when this is the case. We take this to be a missed opportunity, as a recognition that different researchers are working on the same kind of project can help solve methodological questions that one is likely to encounter. In this paper, we present three case studies where philosophers of technology implicitly engage in conceptual engineering (without naming it as such). We subsequently reflect on the case studies to find out how these illustrate conceptual engineering as an appropriate method to deal with pressing concerns in the philosophy of technology. We have two main goals. We first want to contribute to the literature on conceptual engineering by presenting concrete examples of conceptual engineering in the philosophy of technology. This is especially relevant, because the technologies that are designed based on the conceptual work done by philosophers of technology potentially have crucial moral and social implications. Secondly, we want to make explicit what choices are made when doing this conceptual work. Making explicit that some of the implicit assumptions are, in fact, debated in the literature allows for reflection on these questions. Ultimately, our hope is that conscious reflection leads to an improvement of the conceptual work done.
Article
Full-text available
This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence (AI) technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws attention to the issues of transparency and explainability. In contrast to standard discussions, however, it is then argued that this knowledge problem regarding agents of responsibility is linked to the other side of the responsibility relation: the addressees or “patients” of responsibility, who may demand reasons for actions and decisions made by using AI. Inspired by a relational approach, responsibility as answerability thus offers an important additional, if not primary, justification for explainability based, not on agency, but on patiency.