Response to Cappelen and Dever's summary of "Action Without the First Person Perspective"
Cappelen and Dever¹ offer a description of a system of agency by introducing the notion of the “Action Inventory”, which operates as a repository of the range of actions agents can perform. Each agent has third-person beliefs and desires, which trigger the agent’s intentions; these intentions in turn serve as rationalisations or motivations of the agent’s actions. When there is no match between an intention and the range of actions the agent can perform, the agent is unmotivated to perform the required action. The authors propose a selection function for actions based on certain psychophysical constraints of the agent. I will examine the examples of agency provided by the authors in support of their systemic theory.
Take Francois. His actions are motivated by his ability to rationally
integrate his beliefs and desires with each other. When his belief-desires
are stripped away, he needs an “occasionalist god” to motivate him to
action. The authors claim that the agent is able to act without first-person
beliefs, but, in fact, this is not possible without the intervention of a deus ex machina. The original hypothesis fails.
Take Jeeve Stobs. His actions are motivated by his desire for beer
and his belief in cloud computing. Stobs is nevertheless constrained by his
inability to avoid getting distracted by his beliefs. Without first-person beliefs and desires, he is also unable to act. In this case, his assistants act as Stobs’s surrogates by tracking his first-person mental states and moving into action for Stobs. The original hypothesis fails.
Take Stobs’s assistants. They lack first-person beliefs and desires,
but, somehow, acting as surrogates for Stobs, they are motivated by
Stobs’s beliefs and desires as if they were their own. The assistants are constrained by external corporate demands; however, recruiting more assistants to assist them in turn solves the constraint problem. Do they
act on “borrowed” first-person beliefs or on third-person beliefs? It is
unclear how the original hypothesis applies in this example.
Take Semiramis. Her actions are motivated by her belief in and
desire for optimism, or her optimistic disposition. However, her divine
capacities have been constrained by Pavlovian training, which has had a
restrictive effect on her intentions. It is not clear in this example whether the agent is motivated into action by first-person beliefs-desires, nor what effect the training has on the agent’s holding of first-person beliefs-desires. The original hypothesis cannot be applied.
Take Robbie the Robot. Robbie is programmed to act without first-person beliefs-desires. The indexical “we” is here used to refer to the
builders of the robot; so, the use of the first-person indexical indicates
that the builders have first-person beliefs-desires, which motivate them to
action. The original hypothesis applies to the automated machine, the
robot, but not to the robot’s builders.
Take David-and-Susan. Their habitual dispositions to ‘stop, drop and roll’ (David), or ‘skip and run’ (Susan), trigger the same third-person beliefs-desires, which motivate them into action. However, the sameness of their beliefs-desires-intentions, which the authors also consider as a “Siamese Twins” case or a brain-in-a-vat experience (Torre), does not motivate them to
perform the same actions. It is not obvious what the constraints upon
these agents are. Therefore, any disjunction between intention and action
in this example is not explained by the original hypothesis.
In addition, the authors’ own intention to provide a systemic analysis of agency in line with their notion of the “Action Inventory” is supported by the seemingly inductive claim: “To defeat the claim that indexical attitudes are essential for action, it suffices that there could even be one successful impersonal action rationalisation”, which is not plausible.
The inductive reasoning applied here (i.e. that one successful case suffices to justify refuting an accepted position) is flawed, because it does not follow the general rules of induction. One cannot generalise simply from a particular case, however successful it may be (take Robbie the Robot, or David-and-Susan). For induction to apply, one needs a sample of cases from which to draw a generally valid inference.
In my view, a more accurate interpretation of the systemic theory
offered by Cappelen and Dever would be that they present a fictive
system, following third-person descriptivist theory. The problem of doing
away with the essential indexical for describing agency is encountered,
but not solved, in the use of the indexical “we”. It can be argued that the
authors’ claim for the (in-)essentiality of indexicality depends on their
nominalistic theorisation. For instance, it makes no difference whether agency is described in the first person, the third person, or otherwise; instead, “actions” are drawn from the so-called “Action Inventory” in correspondence with the agents’ names, which also stand for subverted proper names (e.g. Jeeve Stobs, Zark Muckerberg), no longer serving as rigid designators in Kripkean terms. The inconsistency in the authors’
theorisation of intentionality also points to their presenting us with an impossible world: “If worlds were stories, or ‘set-ups’, or suitably constructed models, or representations of some other sort, there could very well be impossible ones.”²
1 Herman Cappelen and Josh Dever, “Action Without the First Person Perspective”, draft paper, October 2017.
2 David K. Lewis, On the Plurality of Worlds, Oxford: Blackwell, 1986, p. 21.