International Journal of Social Robotics (2022) 14:1057–1066
https://doi.org/10.1007/s12369-021-00852-z
Should Violence Against Robots be Banned?
Kamil Mamak1
Accepted: 24 November 2021 / Published online: 3 January 2022
© The Author(s) 2022
Abstract
This paper addresses the following question: “Should violence against robots be banned?” Such a question is usually associated
with a query concerning the moral status of robots. If an entity has moral status, then concomitant responsibilities toward
it arise. Despite the possibility of a positive answer to the title question on the grounds of the moral status of robots, legal
changes are unlikely to occur in the short term. However, if the matter regards public violence rather than mere violence, the
issue of the moral status of robots may be avoided, and legal changes could be made in the short term. Prohibition of public
violence against robots focuses on public morality rather than on the moral status of robots. The wrongness of such acts is
not connected with the intrinsic characteristics of robots but with their performance in public. This form of prohibition would
be coherent with the existing legal system, which eliminates certain behaviors in public places through prohibitions against
acts such as swearing, going naked, and drinking alcohol.
Keywords Robots · Prohibition · Violence · Human–robot interactions
1 Introduction
The increasing number and sophistication of robots are bringing about changes in humans' lives on many levels. Changes are discussed at the macro level—for example, how robots are changing the job market (c.f. [1,2])—but also at the micro level, such as appropriate responses to individual robots. Some human–robot interactions seem relevant to
our moral practice and are already under discussion. Experts
believe that the discussion is unlikely to stop at ethics and will
probably precipitate changes to the scope of the law [3,4,5].
Various guidelines have already been proposed on artificial
intelligence (AI) ethics issues [6,7,8]. One ethical problem
on which to focus is the question of violence against robots
and possible legal recourses. There are examples of both pos-
itive and negative treatment of robots by humans [9], but the
negative behaviors are often ethically more interesting than
the positive ones.
Questions about whether it is acceptable to mistreat robots
are not new. In 2014, Knight stated that it might seem ridicu-
lous to consider whether the treatment of machines should
be regulated ([10], 9). The issue has long been deliberated by
ethicists, philosophers, and lawyers, as well as raised in pop-
ular media (c.f. [11,12]). However, these questions become
more pressing with the increasing complexity of robots and
the fact that they share an expanding number of features with
humans and animals. A recent example of public concern on
this topic relates to the Boston Dynamics robots (c.f. [13,
14]). Videos in which robots were kicked and pushed caused
a significant stir, and PETA, an organization fighting for ani-
mal rights, was even asked to intervene [15]. A more recent
example, described in Wired, was seen in California in 2019, when a drunk human knocked down and repeatedly kicked K5, a security robot from Knightscope ([16], see also [17]). Wired discussed the K5 case under the title "Of Course Citizens Should Be Allowed to Kick Robots." This paper defends the opposite view: violence against robots should be prohibited, at least partially.
The first section of this paper is devoted to the question
of requirements for introducing a general ban on violence
against robots. It is difficult to imagine such a ban coming into
force until there is a consensus on the issue of robots’ moral
status. The second section considers the implications of a
focus on public violence, in particular how this amendment
shifts the point of the discussion from robots’ moral status
to protecting the public sphere against antisocial behaviors.
The final part of the paper contemplates how such a change
in the law could affect the position of robots.
2 A Ban on Violence and the Moral Status of Robots
Introducing a prohibition on violence against robots to the
legal system might be perceived as an example of grant-
ing robots rights [18]. Granting rights to robots could seem
unthinkable, but, as Stone has pointed out, rights have been extended to new kinds of entities throughout the history of law ([19], 2). Discussion of whether robots can possess rights is strongly connected with deliberation on their moral status, more precisely on their moral patiency, that is, the capacity to be a target of right or wrong ([20], 505). Four possible approaches can be
considered. The first is based on the intrinsic properties of a
candidate for the moral circle, the second is based on an inter-
pretation of Kantian indirect duties of humans to animals, the
third is virtue ethics, and the fourth adopts a relational per-
spective.
The usual procedure for attributing moral status, which
could take the form of some rights, to an entity entails look-
ing at the entity’s intrinsic properties. Properties are a robot’s
intrinsic characteristics, defining what it is ([21], 16). If the
entity has a number of crucial features, it could be said to have
moral status and deserve certain rights (cf. [22]). A candidate for inclusion in the moral circle needs to be characterized by a certain ontology [23]. It is necessary to consider what properties are important in determining moral status, for example consciousness or the ability to feel pain (c.f. [24–30]). A position based on properties is often accepted as legitimate in a discussion on rights. Commentators rarely contest that if a robot has particular qualities, it is acceptable to give it rights (c.f. [31,32]).
In 1964, Hilary Putnam argued that the material of which an entity is composed should not determine the possibility of its possessing rights, but rather that its properties matter
[33]. Granting moral status based on the possession of cer-
tain qualities could be an encouraging stance to those who
believe robots need legal protection, but closer examination
reveals problems. For example, some question which qual-
ities would be sufficient, how to understand qualities (c.f.
[34]), and even whether we should create robots which have
qualities in the first place (c.f. [35–37]). This approach also
attempts to defer discussion of this topic to an unspecified
future time [38]. The key problem relates to epistemological
limitations ([39], 212), for example how to assess whether
an entity possesses certain qualities, such as the ability to
feel pain. In Why You Can't Make a Computer That Feels Pain, Daniel Dennett asserted that there must be doubt that such knowledge of another entity is possible ([40], see also [41]). As Gunkel has explained, however, Dennett does not prove machines' inability to suffer but rather points to our difficulty in explaining the experience of pain in the first place ([18], 147).
The other popular argument for attributing moral status to
robots centers on indirect duties, which are usually associ-
ated with Immanuel Kant and his views on nature and our
obligations toward animals. Kant stated,
So if a man has his dog shot, because it can no longer
earn a living for him, he is by no means in breach of
any duty to the dog, since the latter is incapable of judg-
ment, but he damages the kindly and humane qualities
in himself, which he ought to exercise in virtue of his
duties to mankind. Lest he extinguish such qualities, he
must already practice a similar kindliness toward ani-
mals; for a person who already displays such cruelty
to animals is also no less hardened toward men. ([42],
212)
Kant believes that we have indirect duties toward animals not
because of their qualities, but because of our own. As we can
see in this argument, an analogy can be drawn between such discussion of animals and the debate over the status of robots (c.f. [43,44], in some sense also [22]); in that context, it could be asked whether we should treat robots as Kantian dogs [45].
However, other commentators have critiqued such an analogy as being an unreasonable starting point for a discussion of the moral status of robots (c.f. [46], see also [21]). Another
work with implications for this debate is the recent book by
Joshua Smith, who argues that the proper treatment of robots
could positively impact humans’ dignity and value [47]. Like
the aforementioned approaches, this approach aims to avoid
causing harm to humans by the mistreatment of robots.
The third approach is virtue ethics. Virtue ethics focuses
on the character of the agents, not on their individual actions
(see more [48–50]). In the context of robots' moral status, we
could ask what the mistreatment of robots tells us about a per-
son’s character. Sparrow argues from virtue ethical premises
that even if “cruel” treatment of a robot has no implications
for a person’s future behavior towards people or animals,
it may reveal something about their character, offering us
reason to criticize those actions [51]. Sparrow states, more-
over, that “Viciousness towards robots is real viciousness”
([52], 23). This approach does not claim that mistreatment
of robots is “bad” for robots or bad in a utilitarian sense, but
in the sense that it is incompatible with the model of the vir-
tuous agent [53]. As Coeckelbergh puts it, mistreatment of
robots damages the moral character of the person engaging
in the behavior [54].
The final approach to robots’ moral status is relational and
is mostly represented by Coeckelbergh and Gunkel (c.f. [18,38,39,55–57]). In their view, the source of moral consideration lies not in how an entity is built but in the bonds of our relationships with it. The key is social relations between
humans and robots ([39], 217). As Gunkel writes,
According to this alternative way of thinking, moral
status is decided and conferred not on substantiative
characteristics or internal properties that have been
identified in advance of social interactions but accord-
ing to empirically observable, extrinsic relationships.
([18], 241)
In answer to the question of whether violence against robots should be banned on the grounds of robots' moral status, each of the four approaches presented above can theoretically justify such a ban.
Practical aspects of the topic are even more problematic. A
view on this question based on intrinsic properties is the most
widely accepted in the literature, but it is difficult to show that
robots have certain qualities. The other approaches are not
widely accepted, and it is difficult to imagine that legislators
would introduce changes in law when there is no common
acceptance of their necessity among experts in the field. As
discussed above, changes in the law could be introduced from
a different perspective, avoiding a discussion of moral status.
It is necessary to consider how exactly a robot can be
defined. Many machines could be considered robots: Vac-
uum cleaners, autonomous vehicles, humanoids, military
robots, bots on social media, and even smartphones might
be classified as such. Jordan identifies three main reasons
for the difficulty in defining the term “robot.” The first is
that the definition is not settled, even among experts in the
field; the second is that a definition is continually evolving
due to changing social contexts and technologies; and the
third is that science fiction determined the conceptual frame-
work before engineers addressed it [58]. Science fiction’s
role might seem irrelevant, but as Adams et al. have pointed
out, science fiction’s influence over AI and robotics is sub-
stantial. Science fiction has inspired research when, usually,
inspiration would be assumed to travel in the opposite direc-
tion ([59], 30). Considering such aspects is also important
in the regulation of robots. A particular regulatory decision
could depend on how society perceives robots. Considera-
tion of this issue cannot focus only on the technical aspects
of such entities but must also regard social perceptions and
the fact that those perceptions depend upon the portrayal of
robots.
Some definitions of robots exclude different types of arti-
facts. For example, Winfield’s definition of a robot empha-
sizes embodiment, entailing that bots are not robots: "A robot is an AI with a physical body" ([60], 8). As Gunkel has
pointed out, researchers wrestling with definitions when writ-
ing on robots usually resort to operational definitions, which
are used for further deliberation [18]. This approach is also
evident in the discussion of legal implications in robotics. For
example, in his introduction to Robot Law [61], Froomkin
defined a robot thus:
The three key elements of this relatively narrow, likely
under-inclusive, working definition are: (1) some sort
of sensor or input mechanism, without which there can
be no stimulus to react to; (2) some controlling algo-
rithm or other system that will govern the responses to
the sensed data; and (3) some ability to respond in a
way that affects or at least is noticeable by the world
outside the robot itself. ([62], XI)
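To make the structure of this working definition easier to inspect, the following minimal sketch (my own illustration in Python; the class and method names are hypothetical and do not appear in Froomkin's text) renders the three elements as an interface: a sensor or input mechanism, a controlling algorithm that governs responses to sensed data, and an ability to act on the world outside the robot.

```python
# Illustrative sketch only: Froomkin's three definitional elements
# expressed as an abstract interface. Names are hypothetical.
from abc import ABC, abstractmethod
from typing import Any


class Robot(ABC):
    """A device counts as a robot, on this working definition, if it has all three elements."""

    @abstractmethod
    def sense(self) -> Any:
        """Element (1): a sensor or input mechanism gathering a stimulus to react to."""

    @abstractmethod
    def decide(self, stimulus: Any) -> Any:
        """Element (2): a controlling algorithm governing responses to the sensed data."""

    @abstractmethod
    def act(self, response: Any) -> None:
        """Element (3): an ability to affect, or be noticeable by, the world outside the robot."""

    def step(self) -> None:
        # One sense-decide-act cycle; an artifact lacking any of the three
        # elements would fall outside this relatively narrow definition.
        self.act(self.decide(self.sense()))
```

On this reading, an artifact missing any one element, for example a sensor with no way of acting on the world, would fall outside the working definition.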
My own position in relation to any proposed regulation is
that a decision on what a robot is must be based not on the
intrinsic internal qualities of a robot but on its appearance.
The argument supporting this claim is introduced in the next
section.
3 A Ban on Public Violence Against Robots
From a practical viewpoint, policymakers could ignore philo-
sophical deliberations, as they occasionally seem to do, and
could immediately introduce legal protection against violence toward robots. Law is conventional, and rapid changes are possible. However, certain commentators
have argued that, even if it were possible to do so, legal rights
should not be given to robots [35,63]. Brożek and Jakubiec
have argued that legal responsibility should not be introduced for autonomous machines [64]. The authors have observed that any such law could only be "law in books" and could not be used in real life (i.e., could not be "law in action"). If a policymaker wants to change the law in this area, the change should cohere with folk psychology [64]. According to Hutto and Ravenscroft, "Folk psychology is a name traditionally used to denote our everyday way of understanding, or rationalizing, intentional actions in mentalistic terms" [65]. Brożek
and Jakubiec’s argument indicates that the decision should
depend on consensus. In the case of contemporary robots,
even experts are extremely divided on both the moral status
of robots and how the law should react.
This paper focuses on violent behavior toward robots. Pro-
posals have already been made to legally limit the treatment
of robots, but these proposals are based, at least partially, on
different arguments than those presented in this paper. Two
papers are particularly relevant to this question: Whitby’s
“Sometimes it’s hard to be a robot: A call for action on the
ethics of abusing artificial agents” [66] and Kate Darling’s
“Extending legal protection to social robots: The effects of
anthropomorphism, empathy, and violent behavior towards
robotic objects” [43]. Whitby has emphasized that his publi-
cation is only an invitation to discussion. However, he does
not merely formulate an abstract idea but argues for changes
in the treatment of robots. He suggests that we should limit
our behavior toward them because if we do not do so, the
outcome could be violence against humans. To support his
position, Whitby refers to the allegedly negative impact of
violent video games on gamers’ behavior ([66], 329). He is
concerned that if there is a possibility of mistreating robots,
violence against humans could ensue. Similar arguments in
other studies focus on the abuse of robots by children [67].
In Whitby’s logic, if we do not stop children from abusing
robots, they could become violent toward other people. This
argument is uncertain (see in that context [68]). Many studies
have examined the impact of video games on violence, and
the results have not provided strong evidence that games have
such an impact (see, e.g., [69]). Darling has proposed extend-
ing legal protection to one specific kind of robot, namely
social robots, which are designed to interact with humans on
a social level ([43], 213). She uses the argument of indirect
duties discussed above. The justification for my own thesis
differs from that presented in these two papers. However,
it is partially connected with the "indirect duties" approach
in the form used by Darling. Violence against robots could
be partially prohibited if the question of the moral status of
robots is avoided. For example, we can ask instead, “Should
we ban public violence against robots?” Such an amend-
ment removes the discussion of the moral status of robots. It
is “public morality” that is being protected, rather than the
robots’ moral status. My view should not be seen as indirect
protection of humans’ moral status because I distinguish the
legal treatment of public and private violence against robots.
In my view, private violence should not be banned. In my
approach, there is a focus on regulating robots’ presence in
public spaces (see also: [70,71]).
I argue that public violence against robots is contrary
to public morality and should be prohibited. To support
this argument, I consider other prohibitions and show that
the logic behind the existing rules should include violence
against robots, if such violence is similarly perceived. Con-
sidering that this is an argument for the "coherence of the legal system," the Polish criminal law system is used as an
illustration, but similar provisions are present in many other
legal systems.
In Polish criminal law, prohibited acts are crimes or con-
traventions [72]. More serious behaviors are crimes, usually situated in the criminal code, and less serious behaviors are contraventions. There is a special code of contraventions, but many contraventions are also defined in other legal acts. Three examples of contraventions can be considered, two of which are listed in the chapter of the code of contraventions entitled "Contraventions against public decency":
Article 140. Whoever publicly commits an indecent act
shall be punishable by detention, restriction of liberty, a fine
of up to PLN 1,500, or a reprimand.
Article 141. Whoever displays in a public place an indecent advertisement, caption, or drawing, or uses indecent words, shall be subject to restriction of liberty, a fine of up to PLN 1,500, or a reprimand.
The third provision is from the Act on Sober Upbringing
and Combating Alcoholism of 26 October 1982:
Article 14 clause 2a. It is forbidden to consume alcoholic beverages in a public place, except in places intended for their consumption on the spot, at points of sale of such beverages.
Legal scholars have commented on the provision prohibit-
ing an “indecent act,” addressing not only the wording of the
provision but also the name of the chapter in which it is
situated. They have argued that public decency is, in other
words, public morality [73]. The perpetrator of an indecent
act violates the moral norm, leading to feelings of shame and
embarrassment on the part of the witnesses [74]. According
to the Polish Supreme Court,
An indecent act is a behavior that, in the specific cir-
cumstances of time, place, and surroundings, should
not be expected due to the customary norms of human
coexistence, which therefore causes widespread nega-
tive social assessments and feelings of disgust, anger,
and indignation. An indecent act is therefore charac-
terized by a sharp contradiction to generally accepted
norms of behavior. (see the judgment of the Supreme
Court of 2 December 1992, III KRN 189/92)
Public nudity is an example of the behavior prohibited under
this provision. As we can see from how the provision is understood, the wrongness of the action lies beyond the act as such. Any person can be naked in their own home. The illegality of such activity derives from the context in which it happens, because it causes discomfort for witnesses of the act. The same is
true in regard to the public use of swear words or drinking
alcohol. These behaviors are neither intrinsically bad nor ille-
gal regardless of context: People may do these things, but
these acts are limited in public spaces because such acts can
discomfort potential observers. In other words, we protect
potential witnesses from acts that could be unpleasant for
them to watch in public spaces. The common grounding of
these prohibitions is that they are all perceived to be morally
wrong if enacted in public spheres, making them contrary to
public morality. Anyone can enact these actions in private
spaces, and there is nothing intrinsically immoral about any
of the listed behaviors. The prohibition regards only their
public performance.
Looking at the public sphere as something special in this
way is not new. The Italian thinker Cesare Beccaria, in his
treatise “On Crimes and Punishments” (first edition 1764),
mentions crimes against “public peace” as one of the three
basic types of crime: “Finally, in the third type of crimes we
find particularly those which disturb the public peace and
the calm of the citizenry, such as brawls and revels in the
public streets which are meant for the conduct of business
and traffic” ([75], 29). His understanding also covers more
serious behaviors, but he identifies that the public sphere, as a place of calm for citizens, should be protected by law.
It is necessary to question whether violence against robots
is an act similar to those subject to the prohibitions discussed
above and whether it is perceived by the public as some-
thing morally bad. The media have reported on a number
of reactions to violence against robots, as discussed above,
for example when the Boston Dynamics robots were kicked
[76]. Another example of violence against robots was the
case of hitchBOT, a hitchhiking robot destroyed by vandals
in the United States after successfully travelling through sev-
eral other countries [77]. It is interesting to consider why
the media highlight the story of the destruction of hitchBOT
but do not cover the destruction of a toaster. As Coeckel-
bergh has pointed out, “Many people respond to robots in
a way that goes beyond thinking of the robot as a mere
machine” ([78], 142). Robots are not perceived simply as
a tool [79], and the key concept here is empathy. Humans
can empathize with robots [80]. In the debate on granting
rights to robots, Turner has termed this an "argument from compassion" ([96], 155). Psychological research has demon-
strated that humans can empathize with robot “pain” [81],
and other research has confirmed this finding [82,83]. Many
people believe that it is morally questionable to act violently
against robots. Researchers have analyzed tweets about hitchBOT, and a qualitative analysis of Twitter users' reactions to the destruction of the robot suggests that they perceived the actions of the vandals to be morally corrupt [84]. People experienced discomfort at the idea of robot pain and believed that the perpetrators' acts were immoral. These responses do not depend on the intrinsic qualities of robots, such as the ability to feel pain.
One might wonder, if robots cannot suffer, why not edu-
cate people about that instead of banning such behaviors?
The proposed provision responds to how people react to
such acts, independent of whether that reaction is justified
on the grounds of the moral status of robots. Violence is per-
ceived as something unpleasant to see. It is also not entirely
obvious that this feeling results from a lack of knowledge
about whether robots can suffer. A recent study by Lima
et al. reports that “Even though people did not recognize any
mental state for automated agents, they still attributed punishment and responsibility to these entities" ([85], 1; see also [86]). This observation suggests that the reaction to robots
is independent of knowledge about their ontology. Even if
people understand that robots do not have an inner life, they
still feel empathy toward them. What the discussed law aims
to achieve is to protect public spheres from behaviors that
make people feel uncomfortable. In this way, there is no contradiction between banning violence against robots and declining to recognize their moral status. The law should consider how people function and the fact that an empathic response to robot pain seems to be part of us.
It is unclear whether people would accept such a law. Findings from one survey that asked whether robots should have rights, listing a range of specific rights including one addressing violence, indicated that "even though online users mainly disfavor AI and robot rights, they are supportive of protecting electronic agents from cruelty (i.e., they favor the right against cruel treatment)" [87]. Thus, public acceptance of such a law seems possible, but before any legal decision is made, more research is necessary. For example, the extent to which people would accept limitations on humans' behavior toward robots needs to be empirically investigated.
In summary, violence against robots is regarded as morally
wrong by the public. It is perceived as something unpleas-
ant to watch, like other acts against public morality and,
therefore, should be prohibited. This approach complements
existing rules, making the system more coherent.
It is reasonable to question why a proposal prohibiting
violence against robots should be advanced rather than a
proposal in relation to other (embarrassing or discomfort-
ing) behaviors that could be perceived in a similar way.
Other behaviors could also be interpreted as contrary to
public morality (such as spitting or screaming) but are not
currently prohibited widely. One issue is timing. Violence
against robots is a relatively new phenomenon for society to
deal with. Discussions of this topic and of how the law should react continue. This social problem might also become more
important as robots become more animal-like and human-
like and as we encounter more of them in everyday situations.
It is important to discuss potential solutions to this problem
well in advance, and my proposal offers a possible solution
that could be introduced at any time.
The other issue to contemplate is why we should consider using repressive instruments to regulate human behavior.
Some scholars have argued that our life is already over-
criminalized (c.f. [88]). The literature attests that enforcing
morality by law is a direct path to over-criminalization (cf.
[89,90]). Significant social effects are created by excessive
use of criminal law, for instance extensive incarceration (c.f.
[91,92]). These problems are critical, but rejecting the possi-
bility of regulating human behavior toward robots could also
entail the rejection of the validity of other prohibitions (such
as the prohibition of public indecency). I argue from the standpoint of the existing legal system and its internal coherence.
The other question to consider is whether criminal law should
regulate such behaviors. The consequences of the answer to
that question concern not only behaviors against robots but
also similar behaviors against public morality. Importantly,
the behavior subject to the proposed ban will not be a crime:
A person punished for such an act will neither have a criminal
record nor end up in jail.
It is useful at this point to return to the issue of definition.
Introducing a provision against violence toward robots, especially in the sphere of prohibitions, requires the use of terms that allow a relatively easy distinction between prohibited and non-prohibited acts. For that reason, the criterion of distinc-
tion should not lie in the intrinsic qualities of robots or their
social role but in their external appearance. I propose to use
the term “life-like” robot in the provision. That choice is
coherent with the justification of the ban mentioned earlier.
This descriptor also marks the point at which my justifica-
tion differs from Darling’s view on protecting social robots.
According to Darling, "A social robot is a physically embod-
ied, autonomous agent that communicates and interacts with
humans on a social level” ([43], 215). Arguably, social robots
need special treatment (c.f. [93,94]), including protection
against violent behavior, but there are practical obstacles to
introducing such a postulate into law. The social role of the
robot is almost impossible to recognize ex ante in every case.
It would be challenging to introduce a provision based on that
feature of robots. In some cases, it would be possible only
ex post to determine whether the attacked robot has a social
role or not. In the proposal advanced in this paper, this prob-
lem does not exist. An ex ante assessment of whether a robot is protected could be made simply by looking at it.
The justification based on wanting to eradicate antisocial
patterns in the public sphere is phenomenological: It relates
to how the behavioral act is perceived. Under such a prohi-
bition, many robots, including social robots, would be protected, but not only social robots and not all robots, since a demarcation criterion is needed given the multitude of forms in which robots are created. There could be a sophisticated robot that looks like a stone or a loaf of bread. Even if it is kicked, this robot is
unlikely to provoke empathic feelings. People show empa-
thy particularly towards robots when their outer appearance
is similar to that of living beings [5]. Therefore, a criterion
should be based on appearance. This criterion should cover
human-like and animal-like robots, and the “life-like” cate-
gory includes both. This part of the deliberation resembles
John Danaher’s view that robots should be welcomed into a
moral circle based on empirical behaviorism, which focuses
on how we perceive robots [22]. However, Danaher is dis-
cussing moral, not legal, status. In a critique of Danaher’s
approach, Smids identifies that knowledge of the design pro-
cess and the robot’s ontology are also highly relevant [95].
Danaher is not necessarily arguing against Smids’ position,
but he focuses on observable behavior, a feature to which we
have access.
Some arguments used here would support a ban on vio-
lence against virtual robots (bots); however, in my proposal I
use the legal concept of public spaces (generally understood
by lawyers as physical places) and the regulation of behav-
iors in such places. Until there is a change in how public
places are perceived to include public virtual places, the pro-
posed provision will not cover violence against bots. Thus,
the proposal concerns only embodied robots. Embodiment is
not mentioned directly in the proposal but results from the
earlier justification. The previously discussed provisions reg-
ulating behaviors in public spaces (swearing, drinking, and
indecent acts) are concerned with physical places. The argu-
ment concerning the coherence of the legal system is valid
as far as it is concerned with such physical places. Addition-
ally, the provision will ban the performance of violence by
the perpetrator, and only such behaviors will be prohibited.
Projecting acts of violence on the big screen in public places
is something for further and separate consideration; it might
also upset bystanders, but it will not be covered by this pro-
posed provision. Some may argue in support of such a ban,
but it is beyond the scope of the proposed law.
In regard to how the situation concerning the regulation of
violence might change after the proposed provision is intro-
duced, it should be recalled that violence against humans
is already a crime under the criminal code. Violence against
animals is, in general, prohibited under the Act on the Protec-
tion of Animals. Violence against owned robots to the point
of their destruction is already a crime under provisions for
property protection. The proposed change will include vio-
lent behavior toward robots that does not cause significant
destruction. It should be added that it concerns only inten-
tional violence. That is, the perpetrator must have a specific mental attitude [72], meaning that unintentional violent behaviors, as well as artistic ones (even if intentional), would not be treated as prohibited acts. Punishments should fit with existing provisions.
Under the mentioned legal system, the punishment would be
a reprimand or fine, thus symbolic rather than harsh.
To summarize, the ban should concern embodied life-like
robots, regardless of their level of sophistication and intrinsic
qualities. There could be other reasons to change the law,
aiming to prohibit misbehavior toward robots, but the reason
presented in this paper is limited to public violence and is
independent of the internal features of robots or their role.
Based on the abovementioned examples of provisions stating
prohibitions, the new provision, focused on robots, could read
roughly as follows: Whoever publicly treats life-like robots
violently shall be subject to punishment.
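To show how the elements of the proposed contravention fit together, the following sketch (my own illustration in Python; the data structure, field names, and function are hypothetical, not a draft of the provision) encodes the conditions discussed above as a single conjunction: a violent, intentional act, performed in a physical public place, against an embodied robot whose appearance is life-like, with an exclusion for artistic performances.

```python
# Illustrative sketch only: the joint elements of the proposed contravention,
# as discussed in this section. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class Act:
    in_public_physical_place: bool   # the ban covers physical public spaces only
    target_is_embodied_robot: bool   # virtual bots are outside the proposal
    target_appears_life_like: bool   # human-like or animal-like appearance, assessable ex ante
    is_violent: bool
    is_intentional: bool             # unintentional conduct is excluded
    is_artistic_performance: bool    # artistic conduct, even if intentional, is excluded


def falls_under_proposed_ban(act: Act) -> bool:
    """Return True only if every element of the proposed contravention is present."""
    return (
        act.in_public_physical_place
        and act.target_is_embodied_robot
        and act.target_appears_life_like
        and act.is_violent
        and act.is_intentional
        and not act.is_artistic_performance
    )


# Example: kicking a humanoid delivery robot on a public street.
example = Act(True, True, True, True, True, False)
assert falls_under_proposed_ban(example)
```

Destruction of an owned robot would remain a matter for property-law provisions, and the sanction contemplated here is the symbolic one mentioned above, a reprimand or a fine.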
4 Moral Side Effects of Prohibition
On my analysis, the justification for making changes in law
and banning public violence against robots does not relate to
the moral status of robots. Such a regulation could be intro-
duced to law immediately, based on the existing logic of the
legal system and in the interest of legal coherence. However,
such changes would not entail that the question of robots
as entities with moral status would become irrelevant. Such
changes would partially satisfy demands to grant moral status
to robots and address concerns about human epistemological
limitations. At the same time, the deliberations presented in
this section could be seen as arguments against the proposed
provision by those opposed to granting moral status to robots.
Such laws could raise moral considerations toward entities
that are seen, by critics of granting rights to robots, as undeserving of special treatment (see e.g. [35]). The prohibition
of public violence against robots has two possible moral side
effects.
One effect of the legislative decision outlined above is
the granting of protection by law to robots, at least partially.
Although the aim of the provision differs (it should directly
protect public morality, translated as protecting society from
inconvenient experiences in public places), if people obey
that law, robots gain protection in public places. This out-
come could be treated as a side effect. It is an extremely
limited protection, but it is a starting point for granting robots
stronger protections. Other entities whose moral status is not
in doubt, such as animals, are also not totally protected. On
the one hand, the mistreatment of animals in many countries
is legally curtailed; on the other, in the same countries it is
possible to kill animals for safety reasons, for food, and for
clothing. For example, a farmer could be criminally liable
for mistreatment of a cow and be sentenced for the same, but
retain the right to kill the same cow for meat without any
legal consequences. From that perspective, other provisions
protect robots as a side effect. Robots are others’ property
and are usually expensive. If someone kicks and damages
a robot on the street, that act could be considered a crime.
The value protected in such cases is not the moral status of the robot but the owner's property (see [96], 165). There is a
possibility that the system of norms could introduce actual
protection for robots on the grounds of public or private law,
without a single provision aimed at improving the situation
of robots as entities with moral status.
The other possible moral side effect is a potential change
in public morality manifesting itself through the recognition
of the moral significance of robots. Such a change could
be helpful for the wider acceptance of laws whose sole justification is connected with the moral status of robots.
For the law to be effective, it should be based on widely
accepted views. It may be possible for the law to change the
perception of robots and cause people to see their intrinsic
moral significance. The law can change morality.
The legal philosopher H.L.A. Hart has deliberated on the
connection between law and morality, stating that morality
impacts law and that law impacts the development of moral-
ity ([97], 1). One way to understand that the law has moral
value is to say it has the potential to achieve moral goals.
Green considers this an instrumentalist thesis of the law [98].
Brownlee and Child have debated the instrumental value of
law in three categories: law as a moral advisor, law as a moral
example, and law as a moral motivator. From the perspective
under discussion, the most important of the three categories
is the “moral advisor” function, and there are two ways for
law to play that role. The authors refer to the “coordination
function” and the “moral leadership function” of law, the
second of which is crucial. The law can change public per-
ception of public values, and the authors give examples of
such change, including the recognition that rape should be
prohibited in marriage or that children should be provided
with special protections ([99], 33), along with the promotion
of moral norms endorsed by only part of the population. In
their words, “the law serves us by expanding our moral hori-
zons and promoting us to attend to issues to which we might
not otherwise have given much thought” ([99], 3).
Research on opinions about granting rights to robots sug-
gests there is no common belief that robots are entities
deserving rights based on their intrinsic qualities [87]. A
number of experts simultaneously promote the recognition
of the moral significance of robots (c.f. [18,22,39]), and
tension results from the variation in positions on the topic.
It seems that few people hold the view that robots should be
recognized as possessing moral status, at least for now, and
the majority of people think otherwise. However, attitudes
might change, and the wider public might think differently
about robots if the proposed change in the law to directly
include robots is implemented. This possibility problematizes my legal proposition, especially from the perspective of positions that are against treating robots as belonging to the moral circle (c.f. [36]).
5 Conclusions
This paper has considered whether violence against robots
should be banned, a question usually connected with the mat-
ter of the moral status of robots. Limitations on how humans can behave toward robots are one consequence of their possessing moral status. I have noted that a positive answer to
this question is possible on the grounds of the four discussed
approaches, which focus on the intrinsic properties of the
robot, indirect duties toward them, virtue ethics, and a rela-
tional approach. The most widely accepted way of ascribing
moral status is the first approach, but it is simultaneously the
most problematic. There is no consensus on which properties are crucial, what exactly they mean philosophically, or how we could know whether robots have those properties. For
now, at least, it seems that robots lack the qualities required
to grant them status based on their ontology. Indirect duties,
virtue ethics, and relational approaches are less widely accepted, and it could be problematic to apply them directly in law with-
out achieving an acceptance threshold among both experts
and the public. However, if the concern is public violence
rather than violence more generally, a discussion of robots’
moral status can be avoided. Prohibition of public violence
against robots focuses not on the robots themselves but on
public morality. The wrongness of such acts is not connected
with the intrinsic characteristics of the acts but the fact that
they are carried out in public. Furthermore, such a prohibi-
tion is coherent with existing regulations in the legal system
that aim to eliminate certain behaviors in public places, such
as prohibitions against swearing, going naked, and drinking
alcohol. The proposed change could be introduced into law
immediately. Despite the fact that this regulation would be
detached from the discussion of robots’ moral status, it could
bring about “moral side effects” for robots and afford them
certain partial rights. If people behaved according to the new
regulation, robots would be protected in some spheres. The
change of law suggested in this paper could prompt changes
in moral attitudes toward robots, based on the notion that
changes in the law can lead to changes in morality. However,
this could also be seen as an argument against introducing
such laws by those opposed to granting moral status to robots.
Data availability Data sharing not applicable to this article as no
datasets were generated or analysed during the current study.
Declarations
Conflict of interest The author declares that there is no conflict of interest.
Open Access This article is licensed under a Creative Commons
Attribution 4.0 International License, which permits use, sharing, adap-
tation, distribution and reproduction in any medium or format, as
long as you give appropriate credit to the original author(s) and the
source, provide a link to the Creative Commons licence, and indi-
cate if changes were made. The images or other third party material
in this article are included in the article’s Creative Commons licence,
unless indicated otherwise in a credit line to the material. If material
is not included in the article’s Creative Commons licence and your
intended use is not permitted by statutory regulation or exceeds the
permitted use, you will need to obtain permission directly from the copy-
right holder. To view a copy of this licence, visit http://creativecomm
ons.org/licenses/by/4.0/.
References
1. Danaher J (2019) Automation and utopia: human flourishing in a
world without work. Harvard University Press, Cambridge, Mas-
sachusetts
2. Ford M (2016) Rise of the robots: technology and the threat of a
jobless future, Reprint. Basic Books, New York
3. Coeckelbergh M (2020) AI ethics. The MIT Press, Cambridge, MA
4. Darling K (2017) Who’s Johnny?’ anthropomorphic framing in
human-robot interaction, integration, and policy. In: Lin P, Abney
K, Jenkins R (eds) Robot Ethics 2.0: from autonomous cars to
artificial intelligence, 1st edn. Oxford University Press, New York,
NY
5. Müller VC (2020) Ethics of Artificial Intelligence and Robotics. In:
Zalta EN (Ed.), The Stanford Encyclopedia of Philosophy, Sum-
mer 2020. Metaphysics Research Lab, Stanford University. https://
plato.stanford.edu/archives/sum2020/entries/ethics-ai/
6. Hagendorff T (2020) The ethics of ai ethics: an evaluation of guide-
lines. Mind Mach 30(1):99–120. https://doi.org/10.1007/s11023-
020-09517-8
7. Fosch Villaronga E, Angelo G Jr (2019) Robots, standards and
the law: rivalries between private standards and public policymak-
ing for robot governance. Comput Law Secur Rev 35(2):129–144.
https://doi.org/10.1016/j.clsr.2018.12.009
8. Bennett B, Daly A (2020) Recognising rights for robots: Can we?
Will we? Should we? Law Innov Technol 12(1):60–80. https://doi.
org/10.1080/17579961.2020.1727063
9. Nyholm S (2020) Humans and robots: ethics, agency, and anthro-
pomorphism. Rowman & Littlefield Publishers, London, New York
10. Knight H (2014) How humans respond to robots: building
public policy through good design. Brookings (blog) 29 Jul
2014. https://www.brookings.edu/research/how-humans-respond-
to-robots-building-public-policy-through-good-design/
11. Fisher M (2012) A different justice: why Anders Breivik only got
21 years for killing 77 people. The Atlantic 24 Aug 2012. https://
www.theatlantic.com/international/archive/2012/08/a-different-
justice-why-anders-breivik-only-got-21-years-for-killing-77-
people/261532/
12. Roh D (2009) “Do Humanlike Machines Deserve Human Rights?”
Wired, 2009. https://www.wired.com/2009/01/st-essay-16/
13. Parke P (2015) “Is it cruel to kick a robot dog?” CNN. 13
Dec 2015. https://www.cnn.com/2015/02/13/tech/spot-robot-dog-
google/index.html.
14. Tiku N (2015) Stop kicking the robots before they start kicking
us. The Verge. 12 Feb 2015. https://www.theverge.com/2015/2/12/
8028905/i-really-dont-think-we-should-be-kicking-the-robots
15. Graham DA (2017) What interacting with robots might reveal
about human nature. The Atlantic 30 Jun 2017. https://www.
theatlantic.com/technology/archive/2017/06/kate-darling-robots-
aspen/532194/
16. Harrison S (2019) Of course citizens should be allowed to
kick robots. Wired, 29 Aug 2019. https://www.wired.com/story/
citizens-should-be-allowed-to-kick-robots/
17. Keijsers M, Kazmi H, Eyssel F, Bartneck C (2021) Teaching
robots a lesson: determinants of robot punishment. Int J Soc Robot
13(1):41–54. https://doi.org/10.1007/s12369-019-00608-w
18. Gunkel DJ (2018) Robot rights. The MIT Press, Cambridge, Mas-
sachusetts
19. Stone CD (2010) Should trees have standing?: Law, morality, and
the environment, 3rd edn. Oxford University Press, Oxford, New
York
20. Gray K, Wegner DM (2009) Moral typecasting: divergent per-
ceptions of moral agents and moral patients. J Pers Soc Psychol
96(3):505–520. https://doi.org/10.1037/a0013748
21. Gellers JC (2020) Rights for robots: artificial intelligence, animal
and environmental law. Routledge, England
22. Danaher J (2020) Welcomingrobots into the moral circle: a defence
of ethical behaviourism. Sci Eng Ethics. https://doi.org/10.1007/
s11948-019-00119-x
23. Floridi L (2013) The ethics of information, Reprint. Oxford Uni-
versity Press, Oxford
24. Floridi L, Sanders JW (2004) On the morality of artificial
agents. Mind Mach 14(3):349–379. https://doi.org/10.1023/B:
MIND.0000035461.63578.9d
25. Hildt E (2019) Artificial intelligence: does consciousness Matter?
Front Psychol. https://doi.org/10.3389/fpsyg.2019.01535
26. Himma KE (2009) Artificial agency,consciousness, and the criteria
for moral agency: what properties must an artificial agent have to
be a moral agent? Ethics Inf Technol 11(1):19–29. https://doi.org/
10.1007/s10676-008-9167-5
27. Levy D (2009) The ethical treatment of artificially conscious
robots. Int J Soc Robot 1(3):209–216. https://doi.org/10.1007/
s12369-009-0022-6
28. Mosakas K (2020) On the moral status of social robots: consid-
ering the consciousness criterion. AI Soc. https://doi.org/10.1007/
s00146-020-01002-1
29. Neely EL (2014) Machines and the moral community. Philos Tech-
nol 27(1):97–111. https://doi.org/10.1007/s13347-013-0114-y
30. Sparrow R (2004) The turing triage test. Ethics Inf Technol
6(4):203–213. https://doi.org/10.1007/s10676-004-6491-2
31. Gordon J-S (2020) What do we owe to intelligent robots? AI Soc
35(1):209–223. https://doi.org/10.1007/s00146-018-0844-6
32. Santosuosso A (2015) The human rights of nonhuman artificial
entities: an oxymoron? Jahrb für Wiss Ethik 19(1):203–238. https://
doi.org/10.1515/jwiet-2015-0114
33. Putnam H (1964) Robots: machines or artificially created life? J
Philos 61(21):668–691. https://doi.org/10.2307/2023045
34. Umbrello S, Sorgner SL (2019) Nonconscious cognitive
suffering: considering suffering risks of embodied artifi-
cial intelligence. Philosophies 4(2):24. https://doi.org/10.3390/
philosophies4020024
35. Bryson JJ (2010) Robots should be slaves. Close Engagem Artif
Companions Key Soc Psychol Ethical Des Issues 8:63–74
36. Bryson JJ (2018) Patiency is not a virtue: the design of intelligent
systems and systems of ethics. Ethics Inf Technol 20(1):15–26.
https://doi.org/10.1007/s10676-018-9448-6
37. van Wynsberghe A, Robbins S (2019) Critiquing the reasons for
making artificial moral agents. Sci Eng Ethics 25(3):719–735.
https://doi.org/10.1007/s11948-018-0030-8
38. Gunkel DJ (2018) The other question: can and should robots have
rights? Ethics Inf Technol 20(2):87–99. https://doi.org/10.1007/
s10676-017-9442-4
39. Coeckelbergh M (2010) Robot rights? Towards a social-
relational justification of moral consideration. Ethics Inf Technol
12(3):209–221. https://doi.org/10.1007/s10676-010-9235-5
40. Dennett DC (1978) Why you can’t make a computer that feels pain.
Synthese 38(3):415–456
41. Bishop M (2009) Why computers can’t feel pain. Mind Mach
19(4):507. https://doi.org/10.1007/s11023-009-9173-3
42. Kant I (1997)Lectures on Ethics. In: Heath P, Schneewind JB (eds)
The Cambridge Edition of the Works of Immanuel Kant (trans:
Heath P) Cambridge University Press, Cambridge
43. Darling K (2016) Extending legal protection to social robots:
the effects of anthropomorphism, empathy, and violent behavior
towards robotic objects. In: Calo R, Froomkin AM, Kerr I (eds)
Robot law. Edward Elgar Pub, Cheltenham, UK
44. Darling K (2021) The new breed: what our history with animals
reveals about our future with robots. Henry Holt and Co, New York,
NY
45. Coeckelbergh M (2020) Should we treat teddy bear 2.0 as a Kantian
Dog? Four arguments for the indirect moral standing of personal
social robots, with implications for thinking about animals and
humans. Minds Mach. https://doi.org/10.1007/s11023-020-09554-
3
46. Johnson DG, Verdicchio M (2018) Why robots should not be
treated like animals. Ethics Inf Technol 20(4):291–301. https://doi.
org/10.1007/s10676-018-9481-5
47. Smith JK (2021) Robotic persons: our future with social robots.
Westbow Press, S.l. Bloomington
48. Cappuccio ML, Sandoval EB, Mubin O, Obaid M, Velonaki M
(2021) Can robots make us better humans? Int J Soc Robot
13(1):7–22. https://doi.org/10.1007/s12369-020-00700-6
49. Russell DC (ed) (2013) The cambridge companion to virtue ethics.
Cambridge University Press, Cambridge
50. Vallor S (2016) Technology and the virtues: a philosophical guide
to a future worth wanting, 1st edn. Oxford University Press, New
York , N Y
51. Sparrow R (2017) Robots, rape, and representation. Int J Soc Robot
9(4):465–477. https://doi.org/10.1007/s12369-017-0413-z
52. Sparrow R (2021) Virtue and vice in our relationships with robots:
is there an asymmetry and how might it be explained? Int J Soc
Robot 13(1):23–29. https://doi.org/10.1007/s12369-020-00631-2
53. Cappuccio ML, Peeters A, McDonald W (2020) Sympathy for
dolores: moral consideration for robots based on virtue and recog-
nition. Philos Technol 33(1):9–31. https://doi.org/10.1007/s13347-
019-0341-y
54. Coeckelbergh M (2021) How to use virtue ethics for thinking
about the moral standing of social robots: a relational interpreta-
tion in terms of practices, habits, and performance. Int J Soc Robot
13(1):31–40. https://doi.org/10.1007/s12369-020-00707-z
55. Coeckelbergh M (2012) Growing moral relations: critique of moral
status ascription. Palgrave Macmillan, UK
56. Coeckelbergh M (2014) The moral standing of machines: towards a
relational and non-cartesian moral hermeneutics. Philoso Technol
27(1):61–77. https://doi.org/10.1007/s13347-013-0133-8
57. Coeckelbergh M, Gunkel DJ (2014) Facing animals: a relational,
other-oriented approach to moral standing. J Agric Environ Ethics
27(5):715–733. https://doi.org/10.1007/s10806-013-9486-3
58. Jordan JM (2016) Robots, 1st edn. The MIT Press, Cambridge,
MA
59. Adams B, Breazeal C, Brooks RA, Scassellati B (2000) Humanoid
robots: a new kind of tool. IEEE Intell Syst Appl 15(4):25–31.
https://doi.org/10.1109/5254.867909
60. Winfield A (2012) Robotics: a very short introduction. Oxford Uni-
versity Press, Oxford, New York
61. Calo R, Michael Froomkin A, Kerr I (2016) Robot Law, 1st edn.
Edward Elgar Pub, Cheltenham, UK
62. Froomkin AM (2016) Introduction. In: Calo R, Froomkin AM,
Kerr I (eds) Robot Law, 1st edn. Edward Elgar Pub, Cheltenham,
UK
63. Chesterman S (2020) Artificial intelligence and the limits of legal
personality. Int Comp Law Q 69(4):819–844. https://doi.org/10.
1017/S0020589320000366
64. Bro˙zek B, Jakubiec M (2017) On the legal responsibility of
autonomous machines. Artif Intell Law 25(3):293–304. https://doi.
org/10.1007/s10506-017-9207-8
65. Hutto D, Ravenscroft I (2021) Folk Psychology as a Theory. In: Zalta EN
(eds) The Stanford Encyclopedia of Philosophy, Fall 2021. Meta-
physics Research Lab, Stanford University. https://plato.stanford.
edu/archives/fall2021/entries/folkpsych-theory/
66. Whitby B (2008) Sometimes it’s hard to be a robot: a call for
action on the ethics of abusing artificial agents. Interact Comput
20(3):326–333. https://doi.org/10.1016/j.intcom.2008.02.002
67. Brščić D, Hiroyuki K, Yoshitaka S, Takayuki K (2015) Escaping
from children’s abuse of social robots. In: 2015 10th ACM/IEEE
International Conference on Human-Robot Interaction (HRI),
pp. 59–66
68. Coghlan S, Vetere F, Waycott J, Neves BB (2019) Could social
robots make us kinder or crueller to humans and animals? Int J Soc
Robot 11(5):741–751. https://doi.org/10.1007/s12369-019-00583-
2
69. Przybylski AK, Weinstein N (2019) Violent video game engage-
ment is not associated with adolescents’ aggressive behaviour: evi-
dence from a registered report. R Soc Open Sci 6(2):171474.
https://doi.org/10.1098/rsos.171474
70. Mamak K (2021) Whether to save a robot or a human: on the ethical
and legal limits of protections for robots. Front Robot AI. https://
doi.org/10.3389/frobt.2021.712427
71. Thomasen K (2020) Robots, Regulation, and the changing nature
of public space. SSRN Scholarly Paper ID 3589896. Social Sci-
ence Research Network, Rochester, NY https://papers.ssrn.com/
abstract=3589896
72. Wróbel W, Zoll A (2014) Polskie prawo karne: część ogólna. Społeczny Instytut Wydawniczy Znak, Kraków
73. Grzegorczyk T, Jankowski W, Zbrojewska M (2013) Kodeks Wykroczeń. 2 wyd., stan prawny: 1 stycznia 2013 r. Komentarz Lex. Lex a Wolters Kluwer business, Warszawa
74. Budyn-Kulik M, Mozgawa M (eds) (2009) Kodeks Wykroczeń: Komentarz. 2 wyd., stan prawny na 1 września 2009 r. Komentarze. Lex a Wolters Kluwer business, Warszawa
75. Beccaria C (1995) Beccaria: “On crimes and punishments” and
other writings. In: Bellamy R (ed). (trans: Davies R). Digital Print
ed. ed. Cambridge, Cambridge University Press
76. Ulanoff L (2018) Please Do Kick Your Robot. Medium. 22 Feb
2018. https://medium.com/@LanceUlanoff/please-do-kick-your-
robot-cc1cd828452d
77. Victor D (2015) Hitchhiking robot, safe in several countries, meets its end in Philadelphia. The New York Times, 3 Aug 2015, sec. U.S. https://www.nytimes.com/2015/08/04/us/hitchhiking-robot-safe-in-several-countries-meets-its-end-in-philadelphia.html
78. Coeckelbergh M (2018) Why care about robots? Empathy, moral
standing, and the language of suffering. Kairos J Philos Sci
20(1):141–158. https://doi.org/10.2478/kjps-2018-0007
79. Prescott TJ (2017) Robots are not just tools. Connect Sci
29(2):142–149. https://doi.org/10.1080/09540091.2017.1279125
80. Schmetkamp S (2020) Understanding A.I. Can and should
we empathize with robots? Rev Philos Psychol. https://doi.org/10.
1007/s13164-020-00473-x
81. Suzuki Y, Galli L, Ikeda A, Itakura S, Kitazaki M (2015) Measuring
empathy for human and robot hand pain using electroencephalog-
raphy. Sci Rep 5(1):15924. https://doi.org/10.1038/srep15924
82. Rosenthal-von der Pütten AM, Krämer NC, Hoffmann L, Sobieraj S, Eimler SC (2013) An experimental study on emotional reactions towards a robot. Int J Soc Robot 5(1):17–34. https://doi.org/10.1007/s12369-012-0173-8
83. Rosenthal-von der Pütten AM, Schulte FP, Eimler SC, Sobieraj S, Hoffmann L, Maderwald S, Brand M, Krämer NC (2014) Investigations on empathy towards humans and robots using fMRI. Comput Hum Behav 33:201–212. https://doi.org/10.1016/j.chb.2014.01.004
84. Fraser KC, Zeller F, Smith DH, Mohammad S, Rudzicz F (2019) How do we feel when a robot dies? Emotions expressed on Twitter before and after hitchBOT's destruction. In: Proceedings of the Tenth Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pp. 62–71
85. Lima G, Cha M, Jeon C, Park KS (2021) The punishment gap: the infeasible public attribution of punishment to AI and robots. arXiv:2003.06507
86. Lee M, Ruijten PAM, Frank LE, de Kort YAW, IJsselsteijn WA (2021) People may punish, but not blame robots. In: Conference on Human Factors in Computing Systems. Association for Computing Machinery, Inc. https://research.tue.nl/en/publications/people-may-punish-but-not-blame-robots
87. Lima G, Kim C, Ryu S, Jeon C, Cha M (2020) Collecting the public perception of AI and robot rights. Proc ACM Hum-Comput Interact 4(CSCW2):135:1–135:24. https://doi.org/10.1145/3415206
88. Husak D (2009) Overcriminalization: the limits of the criminal law,
1st edn. Oxford University Press, Oxford
89. Beale S (2005) The many faces of overcriminalization: from morals and mattress tags to overfederalization. Am Univ Law Rev 54:747–780
90. Kadish SH (1967) The crisis of overcriminalization. Ann
Am Acad Pol Soc Sci 374(1):157–170. https://doi.org/10.1177/
000271626737400115
91. Chiao V (2017) Mass incarceration and the theory of punishment.
Crim Law Philos 11(3):431–452. https://doi.org/10.1007/s11572-
015-9378-x
92. Reiter KA (2017) Mass incarceration, 1st edn. Oxford University
Press, New York
93. Subramanian R (2017) Emergent AI, social robots and the law: security, privacy and policy issues. SSRN Scholarly Paper ID
3279236. Social Science Research Network, Rochester, NY https://
papers.ssrn.com/abstract=3279236
94. Tavani HT (2018) Can social robots qualify for moral consid-
eration? Reframing the question about robot rights. Information
9(4):73. https://doi.org/10.3390/info9040073
95. Smids J (2020) Danaher’s ethical behaviourism: an adequate
guide to assessing the moral status of a robot? Sci Eng Ethics
26(5):2849–2866. https://doi.org/10.1007/s11948-020-00230-4
96. Turner J (2018) Robot rules: regulating artificial intelligence. Pal-
grave Macmillan, London
97. Hart HLA (1963) Law, liberty, and morality. Stanford University
Press, Palo Alto
98. Green L (2010) Law as a means. In: Cane P (ed) The Hart-Fuller
debate in the twenty-first century. Hart Publishing, London
99. Brownlee K, Child R (2018) Can the law help us to
be moral? Jurisprudence 9(1):31–46. https://doi.org/10.1080/
20403313.2017.1352317
Publisher’s Note Springer Nature remains neutral with regard to juris-
dictional claims in published maps and institutional affiliations.
Kamil Mamak, Ph.D., is a philosopher and a lawyer. He is a postdoctoral
researcher at the RADAR group at the University of Helsinki and an
assistant professor at the Department of Criminal Law at the Jagiel-
lonian University. He is also a Member of the Board of the Cracow
Institute of Criminal Law. He holds PhDs in law (2018) from Jagiel-
lonian University and philosophy (2020) from the Pontifical Univer-
sity of John Paul II in Cracow. Kamil’s current research focuses on
the ethical and legal issues concerning social robots. He is especially
interested in robots’ moral and legal status, robot rights, responsibil-
ity gap, laws of robots, and human-robot interactions. Funding: This article is a result of the research project 'A New Model of Law of Contraventions: Theoretical, Normative, and Empirical Analysis', financed by the National Science Centre (grant OPUS 14, no. 2017/27/B/HS5/02137).