AI & SOCIETY (2023) 38:2437–2450
https://doi.org/10.1007/s00146-021-01375-x
ORIGINAL PAPER
Narrative responsibility andartificial intelligence
How AI challenges human responsibility and sense-making
MarkCoeckelbergh1
Received: 25 August 2021 / Accepted: 7 December 2021 / Published online: 30 December 2021
© The Author(s) 2021
Abstract
Most accounts of responsibility focus on one type of responsibility, moral responsibility, or address one particular aspect
of moral responsibility such as agency. This article outlines a broader framework to think about responsibility that includes
causal responsibility, relational responsibility, and what I call “narrative responsibility” as a form of “hermeneutic respon-
sibility”, connects these notions of responsibility with different kinds of knowledge, disciplines, and perspectives on human
being, and shows how this framework is helpful for mapping and analysing how artificial intelligence (AI) challenges human
responsibility and sense-making in various ways. Mobilizing recent hermeneutic approaches to technology, the article argues
that next to, and interwoven with, other types of responsibility such as moral responsibility, we also have narrative and her-
meneutic responsibility—in general and for technology. For example, it is our task as humans to make sense of, with and,
if necessary, against AI. While from a posthumanist point of view, technologies also contribute to sense-making, humans
are the experiencers and bearers of responsibility and always remain in charge when it comes to this hermeneutic respon-
sibility. Facing and working with a world of data, correlations, and probabilities, we are nevertheless condemned to make
sense. Moreover, this also has a normative, sometimes even political aspect: acknowledging and embracing our hermeneutic
responsibility isimportant if we want to avoid that our stories are written elsewhere—through technology.
Keywords Responsibility· Narrative responsibility· Hermeneutic responsibility· Artificial intelligence· Hermeneutics·
Philosophy of technology
1 Introduction
Most philosophical accounts of responsibility focus on
moral responsibility, to the extent that both terms are often
used interchangeably. This is understandable, since, as Tal-
bert puts it, ‘holding others and ourselves responsible for
actions and the consequence of actions, is a fundamental and
familiar part of our moral practices and our interpersonal
relationships.’ (Talbert 2019). This is also the case in the
domain of technology. In particular, automation technolo-
gies driven by artificial intelligence (AI) and robotics pose
the question of who or what is responsible for the actions of
these technologies, given that we may not be able to control
them and predict their outcomes and consequences (Matthias
2004). For example, who is responsible when a self-driving
car or the autopilot of an airplane causes an accident, and
is it possible to ascribe responsibility at all in such cases?
One way to answer such questions is to draw on clas-
sic theory of responsibility. From Aristotle to contempo-
rary analytic moral philosophy (Aristotle 1984; Fischer and
Ravizza 1998; McKenna 2008; Rudy-Hiller 2018), it has
been held that there are at least two types of conditions for
holding someone responsible and for exercising responsi-
bility: humans need to be in control of and know what they are
doing. However, these conditions are not always fulfilled
when technologies such as AI take over human tasks. For
example, a user of a fully automated self-driving car is not
in control of the steering of the car and may not be able
to react quickly when something goes wrong. And some
types of AI, in particular deep learning that uses neural nets,
work in ways that are not transparent, creating ignorance
on the part of the user. Does this mean no human can and
should be held responsible for the actions and consequences
of these technologies? These are important questions, which
are being discussed in the literature on ethics of robotics and
AI (Hakli 2019; Johnson 2014; Santoro et al. 2008; Coeckel-
bergh 2020; Yeung 2018; Santoni de Sio and Mecacci 2021).
These discussions already show how AI, here in the form of
an automation technology, poses a challenge to our human-
istic notions and to the kind of control and knowledge condi-
tions connected with them. While there is a lot of variation
in historical humanism, it has always put humans in the cen-
tre, and since the Enlightenment humanist moral philosophy
has stressed human autonomy and agency. AI threatens such
views of human being, morality, and responsibility.
Yet, there are also other notions of responsibility: con-
cepts that are usually not included in discussions about AI
and responsibility, but are equally important if we want to
understand human responsibility in its full richness and how
AI challenges our existing ways of thinking and doing. In
this paper, I identify three further notions: causal responsi-
bility, relational responsibility, and what I call “hermeneutic
responsibility”. Next to making this distinction, my further
aim is to show (1) that in spite of connections between them
(moral responsibility is related to causal and relational
responsibility), each of these notions is connected to dif-
ferent kinds of knowledge and perspectives on human being,
which are often in tension, and (2) how each of them is
challenged by AI, in particular machine learning AI. Spe-
cial attention will be paid to a specific form of hermeneu-
tic responsibility, what I call “narrative responsibility”: I
will explain why we need such a notion and what it means
to exercise it, how it differs from moral responsibility, and
how it links to hermeneutic approaches to human being
and technology. With a nod to tensions between humanism
and posthumanism (and, to some extent, transhumanism),
I will argue that AI and meaning are already entangled,
since there are ways in which AI “participates” in meaning-
making (thus supporting criticisms of humanist approaches),
but insist that whatever may be said about other notions of
responsibility, it is always up to humans to make sense of AI,
with AI, and, if necessary, against AI. In this way, I aim to
make an original contribution to thinking about responsibil-
ity (in general and especially in the context of thinking about
AI) and respond to, and further develop, recent literature
on technology and hermeneutics (Romele 2020; Reijers and
Coeckelbergh 2020; Kudina 2021), which has proposed revi-
sions of, or alternatives to, existing postphenomenological
accounts of technology (Ihde 1990; 1998; Rosenberger and
Verbeek 2015).
Note that while most of the effort in this paper goes into
analytically distinguishing the different notions of respon-
sibility—I aim to establish hermeneutic responsibility
and narrative responsibility as distinct concepts—I will
also acknowledge that at least some, if not all, notions are
inextricably interwoven. For example, I will note that moral
responsibility is linked to causal and relational responsibility and argue
that hermeneutic responsibility and moral responsibility
need each other.
Let me start with causal responsibility and moral
responsibility.
2 Causal responsibility andmoral
responsibility introuble: thebattle
forthemind
Causal responsibility of agents refers to agents being the
cause of an outcome. While many moral philosophers hold
that moral responsibility requires, or is grounded in, causal
responsibility (Sartorio 2007; see also again the control
condition), causal responsibility does not necessarily entail
moral responsibility. For example, a young child may cause
harm to someone, and is causally responsible for that harm,
but typically we do not hold that child morally responsible.
AI is also a case in point, at least if we assume that it cannot
be morally responsible: if AI takes the form of an artificial
agent (e.g., an autopilot), then that artificial agent may cause
a particular outcome, but we do not hold that agent mor-
ally responsible; instead, we look for a human to bear the
responsibility for what the agent does, has done, or might
do. Moreover, in technological action, causal responsibility
is usually a matter of degree and involves many hands (van
de Poel etal. 2015): an outcome (e.g., a recommendation
or decision by the AI) is often not directly caused by one
agent, but may be the result of a long causal chain and the
causal responsibility of a particular agent (human or artifi-
cial) depends on the extent and directness of the agent’s con-
tribution to the causal chain. For example, the outcome of
what an AI system does (a recommendation, an action) may
be the result of several programmers and data scientists doing
part of the work, and those who did more work and directly
influenced the outcome will carry more causal responsibil-
ity. However, it is not clear how these degrees of causal
responsibility translate into moral responsibility. Our cur-
rent moral and legal ways of thinking do not seem very well
adapted to dealing with causal chains that are temporally
stretched, vary in degree, and involve many agents.
Causation is itself a long-standing topic in philosophy,
and in discussions about moral responsibility, it is con-
nected to debates about free will and determinism (van
Inwagen 1983; Fischer and Ravizza 1998; Frankfurt 1969;
Pereboom 2001; Dennett 1984). In general, the tension
between moral responsibility and causal responsibility
is related to two different views of human beings. One,
usually defended in moral philosophy, is that of human
beings as rational and free beings who wish to preserve
their autonomy and wish to be in control of their actions.
As Berlin (1997) puts it in his famous paper on liberty:
there is ‘the wish on the part of the individual to be his
own master. I wish my life and decisions to depend on
myself, not on external forces of whatever kind…. I wish,
above all, to be conscious of myself as a thinking, willing,
active being, bearing responsibility for my choices and
able to explain them by reference to my own ideas and
purposes.’ (Berlin 1997, p. 203) Another is the scientific
view, present in positive psychology, neuroscience, cogni-
tive science, and so on, that explains human actions and
that shows opportunities for the manipulation of human
choices and behaviour. Consider nudging for example,
which aims to influence our choices subconsciously by
changing our decision environment, the so-called ‘choice
architecture’ (Thaler and Sunstein 2008). This goes
against the view of humans as autonomous choosers and
reasoners.
AI technology seems to be situated firmly on the side
of the scientific view: it does not support the view that
humans are autonomous reasoners and instead categorizes,
profiles, and enables manipulation. The AI we are usually
talking about today is machine learning that relies on sta-
tistics. Humans are analysed in terms of their data. From
the epistemic gateway offered by AI, they are not seen as
human beings that want to be autonomous and masters of
their lives. Current AI does not care about your motives,
your reasoning, and your plans. It will categorize you
statistically, compare you with others, and make predic-
tions. It is not about you as a person, and not even about
you as a rational agent—the favourite of moral philoso-
phers. Reasoning is no longer required when we have data
analysis. It is also not necessary to introspect. As Harari
(2015) suggested: AI knows you better than you know yourself. This
kind of claim was already made by positive (behavioural)
psychology; now, AI joins this project of rendering the
self entirely transparent and knowable. Ethically dubi-
ous experiments with humans and animals—from the
infamous Milgram (1963) experiment and the Stanford
Prison Experiment (Haney et al. 1973) to experiments with
apes with brain chips implanted today (e.g., Serruya et al.
2002 and Elon Musk’s recent Neuralink experiments)—
are replaced by, and supplemented with, computer models
and datasets. AI is the new technology to “read” humans.
Behaviour or minds are no longer of direct interest; what
matters is the data produced by that behaviour and those
minds. The precise methods and goals of such research
may differ considerably. However, the resulting claim
about knowledge and self-knowledge is the same. It seems
that there is no longer any need for humanistic reading and
writing, meant since at least the Renaissance as a tool that
enables us to attain self-knowledge. According to adher-
ents of these positivist views, AI can do that job based
on data about us. And from this perspective, reasoning
about moral responsibility seems at best an epiphenom-
enon when we have data about how humans actually make
moral choices and how they behave.
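To make this contrast concrete, the following minimal sketch (in Python, with invented behavioural features and labels, and scikit-learn as one possible library) illustrates the kind of statistical "reading" of persons described above: the model does not ask for motives or reasons; it places a new person in a category by comparison with others and outputs a probability.

```python
# A minimal sketch (hypothetical data and features) of how machine learning
# "reads" people: no motives or reasons, only a statistical category and a
# predicted probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical behavioural data: hours online per day, purchases per month,
# average scrolling speed. Labels: whether past users clicked on an ad.
X = np.array([[2.0, 1, 0.4],
              [6.5, 9, 1.2],
              [4.0, 3, 0.8],
              [7.0, 12, 1.5],
              [1.5, 0, 0.3],
              [5.5, 7, 1.1]])
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# A new person is not asked for reasons or plans; they are compared with
# others and assigned a predicted probability.
new_person = np.array([[5.0, 6, 1.0]])
print(model.predict_proba(new_person))  # e.g. something like [[0.2, 0.8]]
```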
However, due to the type of knowledge it produces and
relies on, AI does not offer causal explanations and there
is no assumption of determinism. AI makes predictions,
but it gives us correlations and probabilities. In this sense,
AI is a threat to both classic notions of causal respon-
sibility and moral responsibility. It is a threat to moral
responsibility, because by enabling statistical knowledge,
manipulation, and automation, it seems to undermine the
agency, autonomy, and responsibility of humans. However,
it is also a threat to classic causal responsibility. Causes
are something humans think of, for example when they
make a causal model and construct a theory. Causes can
be doubted, as many have done since Hume. AI,
by contrast, only works with correlations and probabilities.
These are not a matter of beliefs (or so it seems); they are
calculated. AI is not about (old-style) physics but statis-
tics. It does not need theory; it only needs data—your data,
the data of millions of other people.
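The following sketch, again with invented numbers, illustrates the kind of knowledge at stake: a correlation is simply calculated from the data, and nothing in the calculation supplies a causal model or a theory; any causal reading is something humans would have to add.

```python
# A minimal sketch of calculated (not believed) knowledge: a correlation
# computed from hypothetical data, carrying no causal model whatsoever.
import numpy as np

screen_time = np.array([1.0, 2.5, 3.0, 4.5, 6.0, 7.5])   # hypothetical
poor_sleep  = np.array([0.8, 1.9, 2.7, 4.0, 5.6, 7.1])   # hypothetical

r = np.corrcoef(screen_time, poor_sleep)[0, 1]
print(f"correlation: {r:.2f}")  # close to 1, but it says nothing about why
```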
In this way, AI seems to undermine both causal and
moral responsibility, and the respective views of human
being connected to them. Philosophers continue to talk
about agents, reasons, and causes. However, both physics
and the human sciences have moved on. What (literally)
counts now in the age of AI is data extracted from our
behaviour and their analysis in terms of correlations and
probabilities. This adds AI to the history of disenchant-
ments and disappointments that humanists have had to cope
with since Darwin and Freud. The “human” of the Renais-
sance humanists, Enlightenment thinkers, and nineteenth
century romantics, with its free will, rational autonomy,
and mysterious mind, seems to be an illusion. AI seems to
set us on a path towards the ‘Palace of Crystal’ sketched
by Dostoyevsky (1972) in Notes from Underground: one
in which science will teach us that we ‘are no more than a
sort of piano keyboard or barrel-organ cylinder,’ a world
in which everything has been mathematically worked out
and where there is no room for fancy, ‘individual deeds
or adventures’ (pp. 32–33). A world in which humans and
their minds become fully transparent. Dostoyevsky is still
struggling with determinism. However, the kind of tension
is the same. Humanist philosophers and writers defend
the human, and some of them may want to ‘send all these
logarithms to the devil and be able to live our own lives at
our own sweet will’ (p. 33), as Dostoyevsky put it. How-
ever, AI, neuroscience, and behavioural psychology and
economics are here to stay, and can easily be used for the
manipulation of people and the destruction of the auton-
omy and morality cherished by those who, at least from
the perspective of these positivist sciences, might be seen
as old-style philosophers and psychologists.
3 From minds tosocial relations: relational
responsibility toothers
However, those engaged in this battle for the mind tend
to neglect another kind of responsibility—or, as we will
see, another aspect of moral responsibility—and a differ-
ent way of looking at human beings: we are also social
beings, and as such, we have a relational kind of respon-
sibility. We do not only have responsibility as an agent, a
causal and moral responsibility for our actions, but also a
responsibility to others. Whereas most accounts of moral
responsibility relate the agent to a moral demand, rela-
tional responsibility highlights the relation to others, to
‘responsibility patients’ (Coeckelbergh 2020).
While we can analytically distinguish moral and rela-
tional responsibility in this way, one could argue that
moral responsibility always involves relations with oth-
ers, and that a richer and more plausible notion of moral
responsibility includes relational responsibility: we are
responsible for our actions to others. In this sense, rela-
tional responsibility can be seen as an aspect of moral
responsibility. Nevertheless, this aspect is often silent and
silenced in the above-mentioned discussions on moral
responsibility, and many authors have found it necessary
to develop new or alternative accounts of responsibility in
response to this gap, thereby sometimes radically changing
the account of moral responsibility (see below). Therefore,
I have chosen to mark this aspect by giving it a separate
name: relational responsibility.
Both the link between moral and relational responsibil-
ity and the potential for seeing this as an entirely different
view become clearer when we look at some sources we
may use to develop this conception of relational respon-
sibility. One is responsibility as answerability, which has
been proposed in a criminal legal philosophy context (Duff
2005). Duff has proposed an account of criminal respon-
sibility according to which ‘to be responsible is to be held
responsible for something by some person or body within
a social practice.’ (Duff 2005, p. 441). For example, in a
trial, a defendant has to answer a charge of wrongdoing.
Duff connects this with reasons: the defendant has to have
the capacity to engage with reasons for action (p. 446).
Now, one could generalize this notion of responsibility to
a richer, relational view of moral responsibility (a move in
line with Duff’s work), or supplement moral responsibil-
ity strictly speaking with another type of responsibility,
relational responsibility, which gives us a responsibility
in addition to moral responsibility: we do not only have
responsibility as agents but also as social beings, social
actors, who in our specific roles and social contexts have
to answer to others for what we do (to them). In both
options, we put responsibility in a social context, without
necessarily losing the link to moral responsibility. In the
remainder of the paper, I will assume that the first option
holds: moral responsibility implies relational responsibil-
ity, relational responsibility is an aspect of moral respon-
sibility, and moral responsibility must be interpreted in a
relational way.
Yet, recognizing this relation between moral and rela-
tional responsibility should not hide the potential radicality
of a shift towards a more relational view of moral responsi-
bility. Another range of sources for thinking in a relational
way about moral responsibility can be found in theoretical
directions that question the individualism and focus on the
self that is inherent in much modern normative theory, and
that are more relational and other-directed such as ethics
of care and Levinasian ethics. Here, the emphasis is not on
the agent’s will, control, and autonomy, but on the other
and on what the other may need, ask, or demand. Here, too,
moral responsibility is interpreted in a relational way. For
example, Gilligan’s ethics of care connects responsibility to
human relationships and stresses being responsive to people
and caring about them instead of focusing on one’s autonomy
(Gilligan 1982, p. xiii); in nursing ethics, it has been argued
that health care professionals have relational responsibilities
towards their patients, which depend on professional roles
and may be very particular (Nortvedt et al. 2011); and ear-
lier, Levinas (1969) proposed an ethics that starts from
(the face of) the other: not the self but the other, and the
ethical relationship constituted by the other, is primary. Once
again, we arrive at more relational views of moral responsi-
bility. However, in the case of Levinas, this implies a radi-
cal shift from a self-oriented to an other-directed account
of moral responsibility. In that case, it becomes more dif-
ficult to see relational responsibility as merely an aspect of
moral responsibility: moral responsibility implies relational
responsibility, but that changes the entire picture. Levinas
radically revises the usual accounts of moral responsibility.
Furthermore, beyond philosophy, social-scientific
approaches—also to science and technology—question
mainstream moral philosophy’s obsession with the mind,
its psychologism. They point to the social context of respon-
sibility and to the power structures at play: Who asks this
question of responsibility, who is supposed to be responsi-
ble, who is included and excluded in this game of responsi-
bility? Seeing people as autonomous and individual agents
is itself a cultural construction, and in particular a Western
obsession. While social constructivism does not neces-
sarily deny (the importance of) moral agency, it criticizes
the tendency to understand agents in isolation from their
social contexts and the claim to universality made by stand-
ard accounts of agency. And important for the topic of this
paper: social studies of science and technology show that
neither science nor technology is politically, morally, or cul-
turally neutral. This insight has been taken up in philosophy
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
2441AI & SOCIETY (2023) 38:2437–2450
1 3
of technology. For example, in dialogue with STS, Win-
ner (1980) has famously argued that technical things ‘have
politics.’ And inspired by a by now decades long tradition
initiated by Bijker and colleagues (1987), Johnson (2014)
has argued that responsibility arrangements regarding robots
will have to be negotiated between actors as the technology
is developed, tested, and used. Some actors will push others
in their direction; they, rather than others, get what they
want. Here, we move from physics (causes and determinism),
individualist moral theory (will, minds, autonomous selves,
reasons, etc.), and statistics (correlations and probabilities)
to the social, cultural, and political sciences. This is about
social actors, relationships, roles, groups, and power.
With regard to AI, this relational approach means asking
to whom we are responsible if we develop and use AI (not
just asking who is responsible for what): what is the context
of social relationships, and what responsibilities does who
have towards whom? This can be further unpacked in a num-
ber of ways. First, one problem with AI is that it is often
divorced from such a social context and ecology of social
relations; it is seen as a purely technical matter. And philo-
sophical discussions in terms of agency do the same: they
highlight the moral responsibility of agents (human and arti-
ficial) without asking the “to whom” question, thus leaving
out an important if not essential dimension of the ethical
relationship. Second, a relational approach to AI also means
evaluating again the knowledge provided by AI. And here
we encounter the next challenge: if AI gives us a recom-
mendation and we make a decision based on this recom-
mendation, but the AI process does not enable us to explain
to those affected by the decision why a particular decision
was made in their case, then we cannot fulfil our relational
responsibility (Coeckelbergh 2020). Third, relational respon-
sibility could also mean responsible innovation, which
means, among other things, that stakeholders are involved
in the development and decisions about the use of AI. The
idea is to have a transparent process by which societal actors
become mutually responsive to each other (von Schomberg
2011). This normative view of innovation contrasts sharply
with the fact that much innovation in this area is done within
companies who develop their technologies more or less in
isolation from the rest of society, let alone that decisions
about their design or development are democratic. Fourth,
this more social and political perspective on responsibility
enables us to open up a pandora’s box of political interests
and power relations that surround and interact with respon-
sibility. For example, if moral philosophy asks individuals
to act responsibly, but these individuals find themselves in
contexts that do not enable them to exercise this responsibil-
ity, because they are over-powered by their company, their
government, and so on, then all the theories about causal
and moral responsibility seem less relevant—at least in the
first instance. Then, at the very least, an analysis of these power
structures is also needed—inspired, for instance, by Marx or
Foucault.
Since AI is usually seen as a technical and scientific mat-
ter, discourses on AI tend to obscure these social and politi-
cal relationships and therefore render it difficult to talk about
responsibility with regard to AI in a relational way. There
is a gap between, on one hand, the usual discourse about AI
and responsibility in technical literature and in moral philos-
ophy, and on the other hand, the political issues raised by AI,
which remain unaddressed or at least underdeveloped. This
gap can be closed in at least two ways. First, we can
use social science and political philosophy to talk about AI.
Currently, awareness about political issues concerning AI is
growing (Véliz 2020; Bartoletti 2020; Crawford 2021), but
often this is not matched by in-depth academic analysis using
political theories developed within the social sciences and
humanities. Second, even considered at the technological
level and within the range of the existing discourse, there is
potential for highlighting social and political issues, since AI
can show much about our social world, sometimes things we
are not aware of. For example, by means of purely quantita-
tive, statistical analysis, AI can reveal that there are existing
and historical biases in our language, our texts (Caliskan
etal. 2017), our organization, and our society. Bias in AI
may well be an ethical problem, but AI also contributes to
more knowledge about our society by revealing this bias in
the first place and by thus “inviting” us to talk about it. AI
ethics, together with other developments and in specific con-
texts (e.g., the Black Lives Matter movement in the U.S.),
has succeeded in putting bias high on the political agenda.
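As an illustration of how such purely quantitative analysis can surface bias, consider a sketch in the spirit of Caliskan et al. (2017). The word vectors below are invented for the example; actual studies use embeddings trained on large text corpora, but the measurement idea, comparing cosine similarities between word vectors, is the same.

```python
# A minimal sketch, in the spirit of Caliskan et al. (2017), of how purely
# quantitative analysis can reveal bias in language. The 3-dimensional word
# vectors are invented for illustration; real studies use embeddings trained
# on large text corpora.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embedding vectors
vec = {
    "engineer": np.array([0.9, 0.1, 0.3]),
    "nurse":    np.array([0.2, 0.8, 0.4]),
    "he":       np.array([0.8, 0.2, 0.3]),
    "she":      np.array([0.3, 0.9, 0.4]),
}

# If "engineer" sits closer to "he" and "nurse" closer to "she", the
# embedding has absorbed a gendered association from the texts it was
# trained on—an existing bias made visible by calculation alone.
print(cosine(vec["engineer"], vec["he"]) - cosine(vec["engineer"], vec["she"]))
print(cosine(vec["nurse"], vec["she"]) - cosine(vec["nurse"], vec["he"]))
```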
Yet, the tension between humanistic and technical
approaches remains, especially if we consider again the
knowledge provided by AI that is abstracted from human
contexts. The knowledge here is not gained by sociologists
and intellectuals that come up with big theories and heavy
volumes of analysis; it is offered by AI and is, again, of
a specific quantified kind that does not stand in need of
theory. As a technology that, like all technologies, is more
than an instrument, AI “suggests” that the correlations and
probabilities it gives us are enough. In this way, AI in a sense
bypasses not only human agency (moral responsibility)
and human reasoning about causes (causal responsibility);
it also circumvents at least part of human social analysis.
Similar to developments in psychology, that analysis was
already getting increasingly quantitative and statistical. But
now, the machine also takes over, or rather seems to render
superfluous, the only part that was left for the human social
scientist: theory.
From a humanist point of view that puts humans in
the centre, things start looking rather bleak now. Is there
still a place for humans and what (only) humans can do?
This way of putting it is too one-sided, as if it is a mat-
ter of either human responsibility or AI taking over. The
picture must be revised in at least two ways. First, from a
humanist point of view, we can insist on the human role
and demand that humans still decide, in the face of
probabilities, and that, even if agency is delegated, we
remain morally responsible, and so on. In other words, here, we
accept the picture or narrative of the battle between humans
and technology, and—unsurprisingly—choose the side of
the humans. Second, however, we can develop theories that
criticize this humans/technology binary and bring together
humans and technologies, while not losing the capacity to
criticize technology. We can emphasize the human side of
technology and the technological side of humans. Postphe-
nomenology, posthumanist theory, the work of Latour, and
indeed, the STS already mentioned may be considered to
develop this. In this section, I will not say more about this,
but in the next section, I will further discuss this issue and
try to find a middle way between these extreme positions.
This will come in two versions. The humanist version insists
on the hermeneutic centrality of the human: the only form
of anthropocentrism that is still viable is one that gives a
special place to humans as interpreters. While the assertion
of human exceptionality becomes increasingly challenging
in the light of posthumanist, environmentalist, and postphe-
nomenological insights, one could argue that there is one
thing that is and should be the responsibility of humans: to
make sense of the world. In the posthumanist version, this
sense-making is more intimately connected to technology.
It could be argued that we have to embrace the entangle-
ment of sense-making and technology: technology partici-
pates in meaning-making and contributes to hermeneutic
responsibility. Nevertheless, I will argue that even in this
posthumanist version, humans carry and should carry the
(end-)responsibility.
Let me unpack this and say more about hermeneu-
tic responsibility, especially about what I call “narrative
responsibility”.
4 Narrative responsibility asaform
ofhermeneutic responsibility:
theresponsibility tomake sense
We have (moral) responsibility for what we (causally) do
and we have responsibility to others, to those to whom we
are related. However, we also have a responsibility that is
usually not mentioned in discussions about the topic: the
responsibility to make sense, to interpret, and to narrate.
Whereas relational responsibility is a second-person kind
of responsibility, directed to others, and whereas causal
and, often, moral responsibility are formulated from a
third-person point of view (as ethicists, we look at the whole
from an outside perspective and then ascribe causal and
moral responsibility), what I propose to call “hermeneutic
responsibility” is mainly a kind of first-person responsibility:
a responsibility that we have mainly to ourselves as persons
(first-person singular) and to “us” as communities, societies,
and cultures (first-person plural). It is not a moral responsi-
bility strictly speaking, if that means that making sense is a
moral “ought”. It is not so much something that we can or
should be blamed for or that can be demanded. It is rather
a responsibility that emerges from my and our existential
situation as humans. It is a “have to” or “cannot do but”, not
an “ought to”. And I have to do it, as the person I am. Or
we have to do it, as the community and culture we are. It is
not about a responsibility that we have as universal moral
subjects—although moral responsibility can also sometimes
be about problems for particular people, as Sparrow (2021),
inspired by Gaita, recently reminded the robot ethics com-
munity. It is a responsibility we have as the particular per-
sons we are living in a particular community, society, and
culture. Hermeneutic responsibility also has nothing to do
with causes and explanations, or with general laws of psy-
chological and social behaviour. It is usually applied to the
human and social world, but it is about interpretation and
verstehen: a term meaning “understanding” that was already
used by Weber and Simmel against sociological positivism.
It is relational, in the sense that we have to communicate
the sense to others, but it is not only or primarily to others that
we have to answer: we have to answer to ourselves. We have
to provide answers to what happens to us and to others, and,
ultimately, we have to answer the question mark that we
ourselves are as human beings and as persons.
This “hermeneutic responsibility” typically takes the
form of what we may call “narrative responsibility.” Our
sense-making and our answer to what we are usually come
in the form of a narrative: a story about ourselves, about
others, about events, and about how we respond to those
events. Here, the disciplinary field is not moral philosophy,
psychology, or sociology; we move to literature, music,
art, film, games, etc. With regard to exercising this kind of
responsibility, reading or writing a novel is not superfluous
or simply a matter of entertainment; it is part of the herme-
neutic work. The humanistic culture we inherited still uses
the technology of writing and the medium of text and books.
However, I mentioned games, because in principle, we could
also use new, digital technologies to do this work, to exercise
our narrative responsibility. All kinds of media and cultural
practices can be used and developed—indeed have to be
developed—since it is and remains our responsibility.
If this kind of responsibility sounds abstract and not
much related to what we usually mean by “responsibil-
ity”, consider examples drawn from the discussion about
moral responsibility: a car crash or airplane accident. If we
approach these from a moral point of view, we (from a third-
person and backward-looking perspective) try to
find the responsible agent or agents who caused the accident
directly. For example, someone investigating a crash will
face this task. From a first-person point of view, I may also
explain to others why I have done what I did—thus fulfilling
my relational responsibility. This could be the responsibil-
ity of the driver or pilot. Forward-looking and from a first-
person singular perspective, I will try to act as a responsible
agent and try to avoid accidents by fulfilling the conditions
of responsibility: I will need to make sure that I am in con-
trol and know what I am doing, e.g., I make sure that I am
not drunk—to pick up an Aristotelian example. For example,
a car driver will make sure she is not drunk (next time). In
the plural: we need to create responsible technologies and
societies by developing technologies and building structures
and infrastructures that make fulfilling these conditions eas-
ier. All this may involve trying to gain knowledge in terms
of causes and probabilities. To be responsible for something
and explain to others, we need to know what happened, how
things work, who did what, and who needs to do something.
We need to sort out the causal responsibility and, with the
current science and technology including AI, we need to
know about probabilities and risk. For example, exercising
forward-looking moral and causal responsibility in the case
of airplanes means that a lot of knowledge about risk and
probabilities needs to be acquired and produced.
If, however, we look at the same kind of cases from a
hermeneutic and narrative angle, the main issue is not about
agency and not even immediately about responding to oth-
ers (responsibility patients). From a backward-looking point
of view, this means: something happened or might happen
and we are faced with the task of making sense of what hap-
pened or might happen. For example, an airplane crashed
and more than 300 people died. Or a particular airplane has
a high risk of failure, but is still widely used. In such cases,
we need to sort out the other kinds of responsibility (includ-
ing moral responsibility and legal responsibility), but we
also need to make sense of this and cope with this as inter-
pretative human beings, human beings who are mortal and
fear death, love their relatives, and so on. People involved
andother stakeholders, journalists, readers, and so on, do
this by creating a story about the accident or the risk. Facts,
causes, probabilities, reasons, obligations, and so on may
be part of that story, but they do not make a full story. It
needs to be a story that makes sense and that helps us to
make sense. For the sake of ourselves and others, we need to
make sense of what happened and—ideally—in the process
make sense of ourselves as persons and as humans. Perhaps
afterwards, we will see the world in a new light. And from
a forward-looking point of view, we need to make sense
of what might happen. As humans, we are always directed
towards the future. And that future is uncertain and risky
by definition. The knowledge needed here—if it can still
be called knowledge at all—is of an entirely different kind
than the knowledge offered by the sciences or the reasons and
discussions offered by moral philosophy. If we only have
science or moral philosophy, we face a hermeneutic gap: we
already know many things about what happened (e.g., causes
or correlations) or what might happen (e.g., calculated risk,
probabilities), and we have reasoned about those and we
have explanations, but we still need to make sense of it all,
and we need to make that sense for ourselves and for the
people around us and the community and perhaps thesociety
and culture we belong to.
For example, when someone near to us dies in an acci-
dent or when we suddenly and unexpectedly become seri-
ously ill, we want to know what exactly happened and, in
some cases, we will want to talk about the causal and moral
responsibility (e.g., the other driver was drunk or I may have
contributed to my bad health). Usually, we will get medi-
cal information, for example the probability that a family
member will survive given a particular condition and time
frame. We may want to have the data available (and a medical
professional’s interpretation, which is already hermeneutic).
However, that knowledge, even if necessary, is not sufficient
for making sense. That sense may come with making, tell-
ing, or hearing the story and after the story. Narratives can
help, albeit without guarantee. Narratives are hermeneutic
tools that help us to make sense. And they do not just con-
sist of numbers and statistics; they are about persons and
events, about personal experiences, personal transformation,
personal relationships, meaning, and existence. My personal
sense-making will also relate to the meanings and practices
of sense-making that are already given in my community,
society, and culture. To make sense can be a very personal
matter (for example making sense of an accident in which
a loved one was involved); but the way I do it will link to a
wider whole, what I will below refer to as a ‘form of life.’
For example, someone might refer to religious meanings
and other meanings available in one’s family, community,
and culture.
In philosophy, one of the key figures of a hermeneu-
tic approach to human being is Ricoeur, who argued that
human lives and human experience are not only fundamen-
tally social but also have a narrative character. He also theo-
rized narrativity. Based on his reading of Aristotle’s Poetics,
Ricoeur offered in Time and Narrative a theory of ‘emplot-
ment’ and mimesis: he argued that the plot of a story config-
ures and organizes characters, motivations, and events in a
meaningful whole. There is action and there are events, but
in the end, the narrative as a whole makes sense and leads to
a new understanding (Ricoeur 1983). This can be read as a
theory about fictional stories (Aristotle wrote about theatre
and in his thinking, renewal comes in the form of the famous
catharsis), but it can also be used as a theory about how we
make sense in our lives: narrativity, in particular emplot-
ment, leads us to see things in a new light and helps us to make
sense of things. The knowledge, or rather know-how, that
is thus needed for exercising hermeneutic responsibility is
narrative. By organizing events, characters, motivations, and
other elements in a meaningful whole, we can make sense.
Now, this theory seems to work well for backward-look-
ing responsibility, where we already have the different ele-
ments of our story and where part of the story may already
be available in narrative form (consider for example a news-
paper story about the airplane accident). However, what
about forward-looking narrative responsibility? It seems
that we then need to use our imagination. We can create
narratives about possible futures. This is what with Ricoeur
we could call a form of ‘productive’ imagination that does
not just copy but shows us new possibilities. We can imag-
ine various scenarios and, in that way, we can try to make
sense of what we are doing now and what might happen in
the future. Narration does not mean that we can only create
one story, even if eventually we might have to choose one;
there is an inherent plurality and openness in this stage of
the responsibility exercise.
With regard to this forward-looking, imaginative exercise
of responsibility, it is also important to distinguish its nar-
rative character from that of other forms of responsibility.
Even when faced with a moral choice, it is not enough to
discuss this in terms of my moral responsibilities (e.g., my
obligations towards others, the reasons I have as a moral
and rational being); the choice also has to make sense to me.
Using narrative imagination, I must explore and create these
meanings. This is my (or our) hermeneutic responsibility:
no-one else can do it in my place and no community or soci-
ety can do it in our place. I have to make sense given my own
personal history and given the person that (narratively) I
am. The same is true for communities or societies. Consider
societies that struggle with, say, their colonial past. This is
not only a moral question, as it is usually understood. There
is a moral aspect, for sure. For example, that society with
a colonial past may well have the obligation to apologize,
to make sure it never happens again, and to be particularly
sensitive to new instances of racism, imperialism, bias, and
so on in the present. However, dealing with such a past is
also hermeneutic homework. That society and the previ-
ously or presently involved and affected groups—agents and
patients, doers and victims—have the hermeneutic respon-
sibility to deal with their past and to find and make mean-
ing today and for the future in the light of what happened.
This time, it requires not just a forward-looking but also a
backward-looking imagination. It requires linking the past
to the present. This can be done in the form of narrative:
stories concerning the past need to be told, perhaps revised,
and made to bear on the present.
Furthermore, the purpose of this hermeneutic work is
not only to make sense of the present but also to shape the
future. The future needs to be approached narratively to
imagine new possibilities for organizing people and events—in
the example, this could be a non-colonial form of organiza-
tion. In that sense, hermeneutic work has a normative dimen-
sion: not necessarily or at least not just moral in the sense
of obligations and blame, but still related to normative ideas
about how we should do things, how we should lead our
lives, and how we should live together.
To distinguish hermeneutic and narrative responsibil-
ity from moral responsibility does not imply that both are
unrelated. On the contrary, to think about what is right and
about what the good life is without involving the question
regarding meaning seems problematic. As mentioned, a
moral solution should also make sense to me and to others.
And vice versa: to shape the narrative of our lives without
taking into account moral responsibility and other kinds of
responsibility would not be desirable and not good—if it
is possible at all. The relations between the different kinds
of responsibility and their respective domains of life and
thinking are complex, and a full discussion of this issue is
beyond the scope of this paper. More work is needed on this.
For now, let me conclude that clearly all notions or aspects
of responsibility are important, and that moral responsibility
and narrative responsibility are interwoven in the sense that
both seem to need one another.
5 Narrative responsibility andAI
What does this concept of narrative responsibility mean for
AI? At first sight, AI has little to do with all this meaning-
making. One could argue that AI is not conscious and not
self-conscious, and that it therefore lacks subjectivity and
experience, which is assumed to be needed for meaning-
making. Whether or not AI may achieve consciousness in
the future, AI as we know it lacks consciousness; it does not
experience, let alone tell stories about that experience. At
first sight, therefore, meaning-making is not a matter for sci-
ence or technology at all. We had better call in the poets and the
writers. It is a humanistic, not a scientific project. And that is
partly true. Like in the case of moral and relational responsi-
bility, there are and remain fundamental tensions with regard
to the kind of knowledge and responsibility required. And
it is all too easy to see this in a binary way. To make sense
by means of narrativity is not about probabilities but about
meaning. It is not about data and correlations but about
emplotment of persons/characters and events. It is not about
gathering data and analysing data; it is about creating, read-
ing, and interpreting texts. Once again AI seems to totally
circumvent human knowledge, experience, and imagina-
tion. It offers statistical analysis and probabilities, whereas
humans need to make sense. Once more the humanist project
finds itself in tension with, if not in radical opposition to,
science and technology.
Yet, this picture is again too one-sided and distorted. AI,
like other technologies, has a lot to do with human culture
and human meaning-making. First, within academia, there
are new and interesting interactions between AI and the
humanities. Consider the field of digital humanities, which
is situated at the intersection of computing science and
humanities disciplines such as history and linguistics. It uses
digital tools such as data mining to study the humanities.
This does not mean that other, classic humanities methods
are abandoned, but they are combined with the digital meth-
ods. As already suggested, we may think about how to use
new technologies and media for doing humanistic work—as
long as we do not forget that interpretation and narration by
humans is always required. Second, there is a much more
internal relation between technologies and hermeneutics:
technologies are part of our stories and even shape these
stories. Recently, this has been conceptualized in at least
three ways:
First, I have mobilizedWittgenstein’s concepts (games
and forms of life) and approach (meaning is in use), used
in the Philosophical Investigations (2009), to argue that
not only does meaning in language depend on use and context,
but the meaning of technologies also depends on their
use and is related to our activities, games, and form of life:
technologies thus contribute to culture and are at the same
time shaped by it (Coeckelbergh 2018). For AI, this means
for example that the biases it may (re-)produce are often
related to the biases that are present in the language and
other games that are played in our societies. Another meta-
phor is grammar: there is already bias in the grammar of
our society: in the way we speak about one another; in the
way we treat one another. These meanings are then repro-
duced and performed in and through AI—without AI itself
having consciousness, experience, or subjectivity, and with
humans involved as necessary co-makers and interpreters of
the meaning. For example, if there are forms of gender bias
in a particular society, then AI that is developed and used
in such a society is, through its use, likely to contribute to
this game or form of life, together with humans. Changing
the game might well be possible, but is a long and difficult
process. Ethics of AI would then have to understand itself
as a game changer by producing meanings that differ from
those enacted in our present games and form(s) of life. For
example, it may try to shape the development of AI in a way
that does not exacerbate, or even avoids supporting at all,
binary ways of thinking about gender in society.
Second, drawing on the work of Ricoeur and Gadamer
and responding to postphenomenology’s claim that technol-
ogies mediate human–world relations (Ihde 1990; Rosen-
berger and Verbeek 2015), several authors in philosophy
of technology have argued for a hermeneutic approach to
digital technologies. Ricoeur argued that human experi-
ence is mediated by language and narrative; this has now
been expanded to technologies. Connecting technologies
to meaning-making, it has recently been argued that digi-
tal technologies mediate and modify our world (Romele
2020), co-configure our narratives (Reijers and Coeckel-
bergh 2020), and mediate our sense-making: people try to
comprehend new technologies and fit them in their daily
practices (Kudina 2021). Seeing digital technologies as hav-
ing nothing to do with human culture and meaning-making
and creating a strong opposition between them is then itself
one (problematic) way of coping with, and making meaning
of, these technologies. With regard to AI, one could argue
that AI is integrated in our lifeworld and participates in our
sense-making as it shapes our experience and actions and
configures our narratives. For example, if AI monitors my
health (through all kinds of apps and devices), then this co-
writes the narrative of my day (e.g., get up and go running,
don’t eat between this time and that time, do a particular
kind of exercise and yoga, etc.) but will also influence and
shape the sense I make of myself: the stories I tell to others
(for exampleon social media) but also the story that I tell
to myself: the kind of person that I am and become, and
the sense I make of my life. In this sense, AI becomes co-
narrator of my stories. Again, no conscious AI is needed for
these mediations and participations in meaning-making. It
suffices that humans have consciousness and subjectivity.
AI can only participate in, and co-shape, sense-making and
narrative processes through humans.
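A small sketch can make this co-writing concrete. The thresholds and data below are hypothetical, but they show how a health app turns tracked numbers into a script for the day, which is then lived and retold by the human user.

```python
# A minimal sketch (hypothetical thresholds and data) of how a health app
# "co-writes" the narrative of a day: sensor data goes in, a small script
# for how the day should unfold comes out.
def plan_day(steps_yesterday: int, hours_slept: float) -> list[str]:
    plan = []
    if hours_slept < 7:
        plan.append("Go to bed earlier tonight.")
    if steps_yesterday < 8000:
        plan.append("Get up and go running before work.")
    plan.append("No eating between 20:00 and 08:00.")
    return plan

# The app's output reads like instructions for the day's story.
for line in plan_day(steps_yesterday=5200, hours_slept=6.5):
    print(line)
```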
Third, both environmental philosophy and posthumanist
theory have questioned anthropocentric views. For exam-
ple, Braidotti (2017) has explored post-anthropocentric
directions in the form of posthuman critical theory and
Puig de la Bellacasa (2017) has argued that care is not just
a ‘human-only matter’ (p. 2). With regard to meaning-mak-
ing, such views at least invite us to consider the idea that
meaning-making is something in which non-humans can
also participate. McCormack (2018) has argued that anthro-
pocentric accounts of meaning-making are untenable if we
situate human meaning-making in an ecological context and
understand it in a non-binary way. While it is not clear what
this means for machines, it is worth considering meaning
as a process in which non-humans also participate. Even
if these non-humans are not conscious and hence do not
have experience, the view that meaning-making is exclu-
sively human seems at least problematic if we consider that
the human is always related to its environment, and that
meaning-making therefore is also always relational. For
example, making meaning of our society today requires that
I somehow also make sense of AI, since AI is now part of
the meaning-full world of my society. In this sense, AI “par-
ticipates” in our collective meaning-making. And, when AI
writes a text (consider the language model GPT-3 that uses
deep learning to produce text), then one could argue that this
contributes to meaning-making, even if only humans can
complete and lead the meaning-making process, since they
have consciousness and experience.
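As an illustration of such machine text production, the following sketch uses the freely available GPT-2 (an earlier, smaller model of the same family as GPT-3) via the Hugging Face transformers library; the library choice and the prompt are assumptions for the sake of the example, not part of the argument. The model continues a prompt on a purely statistical basis; completing and leading the meaning-making remains the task of the human reader.

```python
# A minimal sketch of machine text generation, assuming the Hugging Face
# `transformers` library is installed. GPT-2 stands in here for larger models
# such as GPT-3, which are accessed via a commercial API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Responsibility in the age of AI means",
                   max_length=40, num_return_sequences=1)

# The model outputs a statistically plausible continuation; whether it makes
# sense is something only the human reader can decide.
print(result[0]["generated_text"])
```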
Thus, these directions in hermeneutics imply that AI
is not just the object of our stories, but also contributes
in important ways to these stories and to the meaning that
arises in the process. This enables us to revise the picture
and narrative that emerged so far when we considered how
AI challenges our notions of responsibility: the human-
ist narrative that responsibility in all its forms involves a
kind of battle between humans and machine unnecessarily
exaggerates and misconceives the tensions between, on one
hand, human pursuits such as morality and sense-making,
and technology. Technology is itself human-made and
human-used, and is entangled in various ways with human
beings and what they do. This includes morality and making
sense. AI can contribute to exercising our moral, relational,
and hermeneutic responsibility, for example by making us
aware of existing bias in society or by becoming integrated
in our daily lives. And even if humanists write against AI,
for example in an ultimate humanist effort to win the bat-
tle “against the machine,” AI still shapes their thinking and
sense-making; it is still “with” them—albeit as an opponent
or even enemy. More generally, AI is part of our narratives:
personal narratives and larger, cultural narratives.
Another example of such larger narratives is the tran-
shumanist narrative of increasing intelligence. It tells a par-
ticular history of technological progress, which is a history
of humans, a history of increasing computer power, and a
history of AI (as a kind of hero) doing things and of AI events,
e.g., winning the game of Go, writing texts, interpreting brain
waves, etc., and which seems to be “driven” by AI. In this
sense, AI is also a “character” (a hero or helper) and even
co-creates the narrative. At the personal level, AI config-
ures our lives and gets integrated in our lifeworld as we use
various AI-powered technologies. For example, as our cal-
endars and phones get increasingly “smart”, they organize
the stories of our daily lives. And perhaps AI will soon
literally write many of our texts, or at least co-author them.
The relation between technology and meaning is thus far
more complex than presented in standard humanist accounts
that defend meaning and humanity against the invasion and takeover by technology. As far as the creation of nar-
ratives goes, meaning-making can be a shared or distrib-
uted activity between human and non-human “authors” and
“readers”. Even if AI, as a non-human author and reader, is
not conscious and is a different kind of author and reader
than humans, since it uses and produces a different kind of
knowledge (see also below), it still co-shapes meaning and
narrative. AI is not just an element of our story; it also co-
creates that story.
That being said, even if we accept that AI and other
technologies contribute to meaning-making in the ways
described, we may still want to insist that the responsibility
for this meaning-making remains with the human. AI and
other machines cannot themselves have or take responsibility
for making meaning (since, so I assume, they cannot take
responsibility for anything given that they lack conscious-
ness and subjectivity) and therefore cannot take the her-
meneutic lead, so to speak. Humans have to take that lead:
they carry the hermeneutic responsibility for making sense
of themselves and their social and cultural world (which
includes technology). AI is part of our narratives and helps
to shape them, but it is our responsibility to define its role as co-creator, and it is up to us what place and role we give it in the narratives that we co-create. And in the end,
it is our responsibility to decide what narratives we want
to (co-)write—including narratives about AI, with AI, and
sometimes against AI.
Asking this question about which narrative we want
is important, since there is a normative dimension to this
responsibility. The precise ways in which AI shapes us and
our stories may be very problematic. Consider for instance
the ‘quantified self’: this is not only a specific phenomenon of
technology use (self-tracking using digital technologies and
data); the term can also be used to point to a more-than-
instrumental effect of AI and data science that has to do
with meaning and with stories: quantification of the self in
the sense that the self comes to be experienced and shaped
in terms of data, numbers, and statistics, and that the story of
our selves becomes one about data, numbers, and statistics.
Moreover, as I will stress below, when we use such technolo-
gies, there is also the danger that we live a narrative that is
written by someone else (programmers, designers, corpora-
tions, governments)—through technology. For example, a
health app may try to shape how I live my life. Acknowledg-
ing that AI co-shapes our selves and co-writes our narratives
does not mean that we must uncritically accept the specific
self-formation and story. On the contrary, once we become
aware of the hermeneutic and narrative role of AI and other
digital technologies, we can evaluate what happens and try
to re-shape ourselves and re-write the story. Without know-
ing and acknowledging AI’s hermeneutic role, by contrast,
we risk being delivered over to whatever selves and stories these technologies (and those who design and employ them) co-create. Understanding and evaluating the narrative role of AI is thus both a normative and a hermeneutic responsibility.
Moreover, while AI can be meaning-full and can be
meaningfully integrated in our lifeworlds and perhaps even
contribute to our narratives (literally and figuratively), and hence to meaning in the sense explained, it is not meaning-making in a strong, human and social-existential sense of the term captured by Ricoeur and other hermeneutic philosophers. Machines lack consciousness and there-
fore the experience that was a starting point in Ricoeur’s
analysis of narrativity and that is theorized by phenom-
enology and sometimes forgotten by postphenomenology.
Without having experience in the first place, one cannot rely on that experience in narration and achieve a new understanding that transforms that experience. AI can only
derivatively rely on human experience by extracting and
analysing data that are supposed to represent human experi-
ence, for example human texts or images. And it participates
in the meaning-making process in the sense that it offers
this kind of knowledge. However, it cannot complete the
mimetic and transformative hermeneutic process of narrative
meaning-making; the transformation of understanding, the new understanding that emerges in the process, needs to happen through human experience, interpretation, and subjectivity.
Moreover, meaning-making always happens in a situation
and requires implicit and embodied knowledge that cannot
be formalized. In Dreyfus’s (1972) Heideggerian language,
machines lack being-in-the-world.
In addition, considering further the epistemic dimension
of responsibility (moral and hermeneutic), tensions will
remain between, on one hand, human experience and sense-
making and, on the other hand, what AI “knows” and does,
since it remains a challenge for humans to make sense of the
kind of knowledge provided by AI—at least the kind of AI
that is not based on human reasoning and decision-making
(e.g., decision trees), but that is based on machine learning.
In particular, deep learning with neural nets seems to pose a problem here, since it is often not possible for humans to understand how the machine arrives at a decision. More generally,
we must ask what we can do and should do with this kind of
statistical knowledge produced by machine learning. Shall
we (co-)create narratives in which correlations and prob-
abilities play an increasing role? What place do we give this
kind of quantitative knowledge in our lives? What narratives
about our personal future do we want? As AI moves into the
medical sphere (e.g., diagnosis based on data from image
recognition, the genome, etc.) and into our daily lives (analysis of data from our self-monitoring; consider again the quantified self), this becomes a very practical and urgent question that may soon become relevant to everyone. Both
at a personal and a cultural level, it may well transform our self-understanding dramatically. Are these tensions here to stay, perhaps tragically so, or can they be overcome without denying our humanity as social, experiencing, and interpreting beings? We will be faced with these problems whether we want it or not. Next to the responsibility to deal with moral issues, we have the hermeneutic-existential¹ task of
making sense of ourselves and our lives given this new form
of knowledge.
For this reason, next to taking care of moral and relational
responsibility, it is important to take up this normative (but
not necessarily moral) task at all levels. At a cultural level,
we need to scrutinize the grand narratives about AI that cur-
rently pervade AI discourse. Consider for example again the
transhumanist narrative about superintelligence, but also the
humanist Frankenstein-like narrative of AI taking over and
the posthumanist fairy tales of friendly AI-others we live
with: all these narratives stand in need of interpretation and
evaluation. We should therefore give both technology developers and citizens an education that provides them with the hermeneutic tools to take critical distance from these narratives, to revise them, or to imagine new and better narratives for
the technological future. Classic humanist media such as
books can and should still play a role in such an educa-
tion. At the personal, interpersonal, and community level,
we need to discuss what kind of plots and stories we want to
create with or without AI, and what role we give AI and the
kind of knowledge it creates in our lives and communities.
If we fail to exercise this hermeneutic and narrative
responsibility, if we fail to make sense, AI may emplot and
organize us in stories created by those who make profit from
the technology² or who may have other aims that are not
in line with our own aims. Then we leave the creation of
narrative to big tech and its transhumanist supporters, to
governments, and to other players that wish to shape our lives.
We may even end up playing the non-human character in the
story: raw resources for data. And if we do not even know
that story or if we are suddenly confronted with it when
it is already too late, it may leave us in a state of herme-
neutic ignorance and existential crisis, potentially leading
to despair and anxiety. We would then be living in a narrative that someone else wrote for us, without even (fully) knowing it. Or worse: we may
expect meaning from technology, but technology alone will
not and cannot provide narrative meaning. Then we end up
in a situation of nihilism, in particular the passive nihilism
Gertz (2018) warns against: we are unwilling to take responsibility for our lives and try to make the machines responsible (which, as I have assumed in this paper, is not even possi-
ble). The danger of falling into this void is not only a moral
problem but also a hermeneutic one, especially in modernity.
While having someone else write the story of our lives and communities was quite a familiar experience in pre-modern times, when people believed that there was a divine Author or Authors who would eventually pull things together and enable meaning, it is, hermeneutically speaking, hard for us to live and think like this today. And even many pre-modern people thought that they had a role to play, that they had to participate in the divine meaning-making process via rituals. In (late?) modern times, we wish to be the first author of our lives, even if it may no longer be possible to be their sole author. Today, AI is our co-author, and often an uninvited one. Yet we wish to make sense of our lives and we wish to create a narrative that we can live with, regardless of further moral, relational, and scientific considerations. Therefore, and also taking into account the mentioned limits of AI due to its lack of consciousness, subjectivity, and experience (while acknowledging its participation and mediation in the relevant senses explained), this hermeneutic work cannot and should not be outsourced: neither to technology nor to the tech barons and politicians of this world, who might be expected to play the role of a deus ex machina that will save us. Again: humans are the main meaning-makers and, given that, unlike AI, they have consciousness and experience, meaning-making always has to take place through them. Humans are the experiencers and meaning-makers. With a nod to Sartre (2007), who made a similar claim in the moral sphere when talking about freedom, I conclude that we are condemned to make sense. And in the age of AI, we are condemned to make sense of, with, or against AI. No one else and nothing else will do that, or can do that, in our place, least of all AI itself. It is our own narrative responsibility.

¹ Note that more work is needed on the existential dimension of hermeneutic responsibility and, more generally, on the existential dimension of our relation to digital technologies and AI. Some points of departure may be Lagerkvist's Existential Media (2022) and my Human Being @ Risk (2013).

² This point could be further developed by making connections with political economy.
6 Conclusion
This paper has distinguished various notions of responsi-
bility: causal responsibility, moral responsibility, relational
responsibility, and what I have called “narrative respon-
sibility” as a form of “hermeneutic responsibility.” I have
noted that moral responsibility implies relational responsi-
bility and shown how, in some accounts at least, this radi-
cally changes what we understand by moral responsibility.
I also asked some political questions, usually neglected by
standard accounts of moral responsibility. Yet my main
aim in this paper was to develop the notion of “narrative
responsibility” and to show how AI challenges these dif-
ferent notions of responsibility and underlying assump-
tions about humans and (knowledge of) the world. For
example, the notion of causal responsibility becomes prob-
lematic when AI produces a different kind of knowledge.
The new notion developed in this paper, however, was
narrative responsibility as a specific form of hermeneutic
responsibility. Using Ricoeur (and making a connection to
Wittgenstein) and building on ongoing work in hermeneu-
tics of technology, I have argued why we need the notions
of narrative responsibility and hermeneutic responsibility
next to other notions of responsibility, I have highlighted
the role of imagination in exercising these responsibilities,
and I have discussed the role of AI vis-à-vis these kinds of
responsibility. This has led me to argue that while (1) AI
participates in meaning-making and narration, (2) humans
as conscious beings and beings-in-the-world are the nec-
essary and main meaning-makers and narrators through whom the hermeneutic process of meaning-making, for example by means of narrative, always has to pass and in whom it attains its completion. Therefore, I concluded that humans
have narrative responsibility and, more generally, a her-
meneutic responsibility: we are responsible for creating a narrative we can live with, and for telling a story that makes
sense of, with, or against AI. This revealed the normative
dimension of hermeneutic responsibility.
Which narrative should we create? Answering this
question goes beyond the scope of this paper, but through-
out the paper, I have already raised the question and indicated a number of conflicting "grand" narratives: humanist, posthumanist, and transhumanist ones. Each of them not only relates to a particular view of humans and the world, but also carries normative visions about the future of humans and the future of technology. However, noth-
ing said here limits our narrative–hermeneutic space to
these narratives. On the contrary, if we have the narrative
responsibility I conceptualized, we must critically discuss
the narratives and explore new narratives.
My general conclusion is therefore that to exercise our
responsibility for AI and towards others, it is not sufficient to
exercise our causal, moral, and relational responsibility. It is
also important to connect moral and relational responsibility
to the hermeneutic role we have as humans. Taking seriously
this hermeneutic responsibility—and understanding the
ways it is woven together with other kinds of responsibility
and with normativity—is essential to our further efforts to
engage with AI not only in morally and politically responsi-
ble ways, but also in meaningful ways, ways that make sense
to us as humans. In the form of narrative responsibility, the
concept of hermeneutic responsibility invites us to make our
narratives about humans and technology explicit, interpret
them, argue about them, and mobilize them in normatively
relevant contexts—for instance, democratic discussions
about technology. And if we reject the current narratives,
for example those written by big tech, we have to create new
onesand better ones. No one will do this for us. It is up to us
to create the new stories and, for example, define the role of
AI in those stories and in the writing of those stories. Thus,
while anthropocentrism might be morally and politically
problematic, hermeneutically speaking it is unavoidable:
we are the main meaning-makers and storytellers. Meaning
has to pass through us. However, if the more posthumanist
directions in the hermeneutics of technology mobilized in
this article are right, this will always have to be done in
“co-authorship” with AI and other technologies of our time.
Funding Open access funding provided by University of Vienna. No
funding was received for this work.
Data and code availability Not applicable.
Declarations
Conflict of interest No conflict of interest.
Open Access This article is licensed under a Creative Commons Attri-
bution 4.0 International License, which permits use, sharing, adapta-
tion, distribution and reproduction in any medium or format, as long
as you give appropriate credit to the original author(s) and the source,
provide a link to the Creative Commons licence, and indicate if changes
were made. The images or other third party material in this article are
included in the article's Creative Commons licence, unless indicated
otherwise in a credit line to the material. If material is not included in
the article's Creative Commons licence and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will
need to obtain permission directly from the copyright holder. To view a
copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
Aristotle (1984) Nicomachean ethics. In: Barnes J (ed) The complete
works of Aristotle, vol 2. Princeton University Press, Princeton,
pp 1729–1867
Bartoletti I (2020) An artificial revolution: on power, politics and AI.
The Indigo Press, London
Berlin I (1997) Two concepts of liberty. In: Berlin I (ed) The proper
study of mankind. Chatto & Windus, London, pp 191–242
Bijker WE, Hughes TP, Pinch T (eds) (1987) The social construction
of technological systems: new directions in the sociology and
history of technology. The MIT Press, Cambridge, MA
Braidotti R (2017) Posthuman critical theory. Journal of Posthuman
Studies 1(1):9–25
Coeckelbergh M (2013) Human being @ risk: enhancement, tech-
nology, and the evaluation of vulnerability transformations.
Springer, Dordrecht, New York
Coeckelbergh M (2018) Technology games: using Wittgenstein
for understanding and evaluating technology. Sci Eng Ethics
24(5):1503–1519
Coeckelbergh M (2020) Artificial intelligence, responsibility attri-
bution, and a relational justification of explainability. Sci Eng
Ethics 26:2051–2068
Crawford K (2021) Atlas of AI: power, politics, and the planetary
costs of artificial intelligence. Yale University Press, New
Haven, London
Dennett D (1984) Elbow room: the varieties of free will worth want-
ing. MIT Press, Cambridge, MA
Dostoyevsky F (1972) Notes from underground (trans. Coulson J).
Penguin Books, London
Dreyfus H (1972) What computers can’t do: the limits of artificial
intelligence. Harper & Row, New York
Duff RA (2005) Who is responsible, for what, to whom? Ohio State
J Crim Law 2:441–461
Fischer JM, Ravizza M (1998) Responsibility and control: a theory
of moral responsibility. Cambridge University Press, Cambridge
Frankfurt H (1969) Alternate possibilities and moral responsibility.
J Philos 66(23):829–839
Gertz N (2018) Nihilism and technology. Rowman & Littlefield,
London
Gilligan C (1982) In a different voice: psychological theory and
women’s development. Harvard University Press, Cambridge,
MA
Hakli R (2019) Moral responsibility of robots and hybrid agents.
Monist 102(2):259–275
Harari YN (2015) Homo Deus. Vintage, London
Haney C, Banks W, Zimbardo P (1973) Interpersonal dynamics in a
simulated prison. Int J Criminol Penol 1:69–97
Ihde D (1990) Technology and the lifeworld: from garden to earth.
Indiana University Press, Bloomington
Ihde D (1998) Expanding hermeneutics: visualism in science. North-
western University Press, Evanston, IL
Johnson DG (2014) Technology with no human responsibility? J Bus
Ethics 127:707–715
Kudina O (2021) “Alexa, who am I?”: voice assistants and hermeneu-
tic lemniscate as the technologically mediated sense-making.
Hum Stud. https://doi.org/10.1007/s10746-021-09572-9
Lagerkvist A (2022) Existential media: a media theory of the limit
situation. Oxford University Press, New York
Matthias A (2004) The responsibility gap: ascribing responsibil-
ity for the actions of learning automata. Ethics Inf Technol
6:175–183
Milgram S (1963) Behavioral study of obedience. J Abnorm Soc
Psychol 67:371–378
Nortvedt P, Hem MH, Skirbekk H (2011) The ethics of care: role
obligations and moderate partiality in health care. Nurs Ethics
18(2):192–200
Pereboom D (2001) Living without free will. Cambridge University
Press, Cambridge
Puig de la Bellacasa M (2017) Matters of care: speculative ethics
in more than human worlds. University of Minnesota Press,
Minneapolis, London
Reijers W, Coeckelbergh M (2020) Narrative and technology ethics.
Palgrave, New York
Ricoeur P (1983) Time and narrative—volume 1 (McLaughlin K,
Pellauer D, trans.). The University of Chicago Press, Chicago
Romele A (2020) Digital hermeneutics: philosophical investiga-
tions in new media and technologies. Routledge, New York,
Abingdon
Rosenberger R, Verbeek P-P (eds) (2015) Postphenomenological
investigations: essays on human-technology relations. Lexing-
ton Books, London
Rudy-Hiller F (2018) The epistemic condition for moral responsibility. Stanford Encyclopedia of Philosophy. Retrieved on April 13, 2021 from https://plato.stanford.edu/entries/moral-responsibility-epistemic/
Santoni de Sio F, Mecacci G (2021) Four responsibility gaps with
artificial intelligence: why they matter and how to address them.
Philos Technol. https://doi.org/10.1007/s13347-021-00450-x
Santoro M, Marino D, Tamburrini G (2008) Learning robots inter-
acting with humans: from epistemic risk to responsibility. AI
Soc 22(3):301–314
Sartorio C (2007) Causation and responsibility. Philos Compass
2(5):749–765
Sartre J-P (2007) Existentialism is a humanism (Macomber C,
trans.). Yale University Press, New Haven, CT and London,
England
Serruya MD, Hatsopoulos NG, Paninski L, Fellows MR, Donoghue
JP (2002) Instant neural control of a movement signal. Nature
416(6877):141–142. https://doi.org/10.1038/416141a
Sparrow R (2021) Why machines cannot be moral. AI Soc. https://doi.org/10.1007/s00146-020-01132-6
Talbert M (2019) Moral responsibility. Stanford Encyclopedia of Philosophy. Retrieved on April 12, 2021 from https://plato.stanford.edu/entries/moral-responsibility/
Thaler RH, Sunstein CR (2008) Nudge: improving decisions about
health, wealth, and happiness. Yale University Press, New
Haven, CT
van de Poel I, Royakkers L, Zwart SD (2015) Moral responsibility
and the problem of many hands. Routledge, New York
van Inwagen P (1983) An essay on free will. Oxford University Press,
New York
von Schomberg R (ed) (2011) Towards responsible research and
innovation in the information and communication technolo-
gies and security technologies fields. European Commission,
Brussels. Retrieved on April 13, 2021 from http://ec.europa.eu/research/science-society/document_library/pdf_06/mep-rapport-2011_en.pdf
Véliz C (2020) Privacy is power: why and how you should take back
control of your data. Penguin/Bantam Press, London
Winner L (1980) Do artifacts have politics? Daedalus
109(1):121–136
Wittgenstein L (2009) Philosophical investigations (revised 4th
edn, Anscombe GEM, Hacker PMS, Schulte J, trans.). Wiley,
Malden, MA
Yeung K (2018) A study of the implications of advanced digital
technologies (including AI systems) for the concept of respon-
sibility within a human rights framework. Retrieved on April
12, 2021 from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3286027
Publisher's Note Springer Nature remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.