Turing’s Rules for the Imitation Game
GUALTIERO PICCININI
Department of History and Philosophy of Science, University of Pittsburgh, 1017 Cathedral of
Learning, Pittsburgh, PA 15260, USA; E-mail: gupst1@pitt.edu
Abstract. In the 1950s, Alan Turing proposed his influential test for machine intelligence, which
involved a teletyped dialogue between a human player, a machine, and an interrogator. Two read-
ings of Turing’s rules for the test have been given. According to the standard reading of Turing’s
words, the goal of the interrogator was to discover which was the human being and which was the
machine, while the goal of the machine was to be indistinguishable from a human being. According
to the literal reading, the goal of the machine was to simulate a man imitating a woman, while the
interrogator – unaware of the real purpose of the test – was attempting to determine which of the
two contestants was the woman and which was the man. The present work offers a study of Turing’s
rules for the test in the context of his advocated purpose and his other texts. The conclusion is that
there are several independent and mutually reinforcing lines of evidence that support the standard
reading, while fitting the literal reading in Turing’s work faces severe interpretative difficulties. So,
the controversy over Turing’s rules should be settled in favor of the standard reading.
Key words: Turing test
1. Introduction
In his 1950 Mind paper, Alan Turing proposed replacing the question "Can ma-
chines think?" with the question "Are there imaginable digital computers which
would do well in the imitation game?" (Turing, 1950, p. 442). The setup for what
came to be known as the Turing test was introduced in the following famous pas-
sage:
[The imitation game] is played with three people, a man (A), a woman (B),
and an interrogator (C) who may be of either sex. The interrogator stays in a
room apart from the other two. The object of the game for the interrogator is
to determine which of the other two is the man and which is the woman. He
knows them by labels X and Y, and at the end of the game he says either "X
is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put
questions to A and B thus:
C: Will X please tell me the length of his or her hair?
Now suppose X is actually A, then A must answer. It is A’s object in the game
to try to cause C to make the wrong identification. His answer might therefore
be
"My hair is shingled, and the longest strands are about nine inches long."
Minds and Machines 10: 573–582, 2000.
© 2001 Kluwer Academic Publishers. Printed in the Netherlands.
In order that tones of voice may not help the interrogator the answers should be
written, or better still, typewritten. The ideal arrangement is to have a teleprinter
communicating between the two rooms. Alternatively the question and answers
can be repeated by an intermediary. The object of the game for the third player
(B) is to help the interrogator. The best strategy for her is probably to give
truthful answers. She can add such things as "I am the woman, don’t listen to
him!" to her answers, but it will avail nothing as the man can make similar
remarks.
We now ask the question, "What will happen when a machine takes the part of
A in this game?" Will the interrogator decide wrongly as often when the game
is played like this as he does when the game is played between a man and a
woman? These questions replace our original, "Can machines think?" (Turing,
1950, pp. 433–434).
When the imitation game involved two human beings, Turing explained the
rules in some detail. However, after introducing machines into the game, Turing
did not make the rules explicit. According to the traditional interpretation of this
passage, when a machine and a human being are playing the game, the goal of the
interrogator is to discover which is the human being and which is the machine,
while the goal of the machine is to be mistaken for a human being. I will refer to
this as the standard reading. Under the standard reading, the Turing test is squarely
a comparison between human beings and machines, where a skillful interrogator
can require the machine to demonstrate mastery of human language, knowledge,
and inferential capacities. Possessing these abilities is, by most standards, a clear
sign of intelligence or thinking.1 So, the question of whether a machine can do
well at the imitation game can be seen as a sensible replacement for the question
of whether a machine can think.
Some authors have read Turing’s passage in a more literal way, suggesting that
the goal of the machine is to simulate a man imitating a woman, while the interrog-
ator – unaware of the real purpose of the test – is still attempting to determine which
of the two players is the woman and which is the man. I will call this the literal
reading.2 Supporters of the literal reading disagree over which of the machine’s
capacities are being uncovered by Turing’s game. Some argue that his point is
testing the machine’s ability to utilize language like a person; the blindness of
the interrogator and the gender impersonation are introduced for methodological
reasons – in order to make the test unbiased.3 Others suggest that Turing’s point
was not to test the machine’s ability to utilize language like a human, but literally
to test the machine’s competence at replicating the abilities of a human male who
is attempting to imitate a human female.4
As far as I know, no one has defended the standard reading against this revisionist line. The present work offers a thorough study of Turing’s rules for the imitation
game in the context of his advocated purpose and his other texts. Several independ-
ent and mutually reinforcing lines of evidence that support the standard reading
will be presented, while fitting the literal reading in Turing’s work will face severe
interpretative difficulties. The evidence supporting the standard reading is found by
considering other sections of the Mind paper, its overall argumentative structure,
and relevant statements made by Turing on other occasions. So, the controversy
over Turing’s rules should be settled in favor of the standard reading.
2. How Literal is the Literal Reading?
An opponent might accuse the standard reading of unnecessarily attributing ambi-
guity to Turing’s description of the game’s rules. If Turing meant the machine to
simulate not a woman, as his words seem to suggest, but a generic human being,
why didn’t he say so from the start? The standard reading makes Turing’s descrip-
tion of the rules appear confusingly incomplete, while the literal reading seems to
take Turing’s words at face value. Other things being equal, this opponent would
conclude, the literal reading should be preferred over the standard one. Before
turning to the evidence in favor of the standard reading, let me dispense with this
potential objection.
It turns out that, when examined closely, the literal reading generates an inter-
pretative problem similar to the one just mentioned. Suppose the literal reading is
correct. Turing’s words would still fall short of fixing the rules of the game, this
time with respect to the interrogator’s role. Does the interrogator know that she is
dealing with a machine and a woman, or does she incorrectly think she is dealing
with a woman and a man? Turing doesn’t say anything in this respect. This ques-
tion is far from irrelevant, as we would expect the interrogator’s strategy, and the
chances of making correct guesses, to be different in each of the two cases. So, the
literal reading is also committed to attributing ambiguity to Turing’s explanation
of the rules. Usually, the proponents of the literal reading assume that the interrogator should not know that she is talking to a machine.5 But deceiving the interrogator in this way changes the game’s original setting, in which the interrogator was correctly informed that the players were a woman and a man. This change
does resolve the ambiguity resulting from the literal reading, but generates the
following question: if Turing meant the interrogator to be unaware of the real purpose of the game, why didn’t he say so? The ambiguity resulting from the literal reading,
and the inference required to resolve the ambiguity, make the literal reading no
longer literal. As both readings attribute an ambiguity to Turing’s description of
the rules, no reading is better off than the other in this respect.
3. The Turing Test as a Replacement for the Question "Can Machines
Think?"
Any reading of Turing’s rules must explain how the imitation game fulfills his
goal of replacing the question of whether machines can think. As I said, the stand-
ard reading’s account is straightforward: if a machine can demonstrate mastery
of human language, knowledge, and inferential capacities to the point that it is
mistaken for a human being, most people would consider it intelligent – or so
they should according to Turing. With respect to this replacement goal, though, the
literal reading generates more questions than answers. Presumably, any successful
simulation of a human being includes a simulation of both a human gender and the
human ability to imitate other human beings. Under the standard reading, this fact
could be exploited by the interrogator. Given the rules of the Turing test under the
standard reading, the interrogator can ask both players to impersonate a member
of any gender to see how they compare on that task. However, it is not obvious
how this gender imitation task relates to the question of human intelligence. The
literal reading restricts the entire test to the question of whether a human male or a mechanical male can imitate the opposite sex better. In what way is this ability
relevant to whether machines think? Why is proficiency at this task sufficient to
prove that a machine is intelligent? One possible answer is that the machine needs
to simulate the mental processes of a human male to a degree of sophistication that
is sufficient to also simulate the human male impersonating a human female. This
might convince one that the machine is intelligent. But if the machine is able to
simulate a man to such a degree, why not ask it a broad range of questions rather
than limiting the task to the impersonation of the opposite gender?
These questions illustrate that it is not obvious how the test – as defined by the
literal reading – fulfills Turing’s replacement goal. The proponents of the literal
reading owe us not only an answer to these questions, but also an explanation of why Turing did not address them at all.6 If, as some have argued, he was simply introducing an experimental design to make the test unbiased, why didn’t he say so?
Again, recall that, in the imitation game played by humans, the interrogator knows
that both players are human. If Turing thought the game with the machine needed
the extra precaution of deceiving the interrogator as to the nature of the game,
he should have – and presumably would have – both said it and explained why
he thought so.7 Instead, he spent most of his rather long text considering various
general attributes of human beings, and various general reasons for believing that
machines cannot think. In each case, he argued that none of those reasons were
obstacles to the conclusion that a digital computer would eventually be able to
play the imitation game. His punchline was that a machine could be developed
to match all the elements that are relevant to human intelligence, including the
ability to learn from experience and one’s own mistakes. Turing never discussed
any elements that would make the machine able or unable to do the gender imit-
ation, nor did he mention how his detailed discussions of various human abilities
related to gender imitation. As a result, it is natural to read the Mind paper as being
entirely devoted to the motivation and explication of the test as understood under
the standard reading. Turing was neither a sloppy thinker nor a sloppy writer. If
he wanted to propose his test under the literal reading, it’s likely that he would
have motivated and explained it in detail, instead of just concentrating on what
potentially makes humans and machines intellectually different, or intellectually
equal.
4. The Turing Test in Section 2 of the Mind Paper
The passage describing the test constitutes most of section 1 of the Mind paper.
In section 2, a few lines after introducing the game, Turing wrote that "[t]he new
problem has the advantage of drawing a fairly sharp line between the physical and
the intellectual capacities of a man" (Turing, 1950, p. 434).8He did not mention the
capacities of human beings as gender imitators, but he did give "specimen questions
and answers" between the interrogator and the other players. The questions were no longer about being a man or a woman, as the examples given by Turing for the game with only human players had been. Now, the "specimen questions and answers"
involved writing a sonnet, adding numbers, and playing chess. These were some
of the paradigmatic intelligent human activities that Turing referred to – in other
papers – as some of the tasks on which computers needed to be tested to show
they were intelligent.9Turing’s examples are in line with the standard reading,
according to which the goal of the interrogator is to distinguish the human being
from the machine. If one, instead, takes the literal reading, one should explain why
Turing’s examples are not about gender differences, but about general intellectual
abilities of human beings, at which men and women hardly differ.
At the end of section 2, Turing made two additional points that are hard to
reconcile with the literal reading. The first is that the "counterpart" to the imitation
game is for a "man" (i.e. a human being) to "pretend to be the machine" (Turing,
1950, p. 435). This makes sense if Turing meant to test a machine simulating a
human being. If he meant to test a machine simulating a man imitating a woman,
he should have said that the counterpart to his test is for a woman to imitate a man
imitating a machine. Turing’s second point is a suggestion that the best strategy
for the machine is giving answers that would naturally be given by a "man."10
This is not in direct contradiction with the literal reading, for the claim might be
that the machine should literally simulate the answers given by a man imitating
a woman. But the literal reading makes this assumption oddly unwarranted. For,
under the literal reading, the machine could follow a strategy that appears at least
equally good, if not better: imitate the woman directly, giving answers that would
be given by a woman qua woman, rather than by a man imitating a woman. In
contrast, under the standard reading, given that Turing’s "man" can stand for a
generic human being, the assumption that the machine should attempt to provide
human-like answers becomes straightforward.
5. Turing’s Second Description of the Test
The most serious problem with the literal reading is that, in section 5 of the Mind
paper, Turing described the test again, in accordance with the standard reading. He
described the game as that of a machine imitating a human being, which, as usual,
he called "man."11 When they haven’t ignored section 5, the proponents of the
literal reading have suggested that, in it, Turing described a different "version" of
the test (as understood under the literal reading). In the first "version," the machine
was playing against a woman; this time, the test is alleged to involve a machine
playing against a human male, and both the mechanical and the human player
must pretend to be women.12 This suggestion is entirely ad hoc, for there is no
independent evidence supporting it. Nowhere in the paper did Turing mention any
change in his description of the test, or in the rules of the game. If he meant to
describe two tests rather than one, we should expect him to say so, and to explain
why he was making such a change. Moreover, there is textual evidence against
the hypothesis that, in sections 1 and 5, Turing was describing two different tests.
In section 3, Turing discussed the already introduced game (from section 1 of the
paper), and then pointed in the direction of section 5, where the game was described
as being between a computer and a human being.13 In section 3, as throughout
the paper, Turing referred to the game, or test, without ever using the plural, and
mentioned no change in the rules. In section 3, as in the rest of the Mind paper,
Turing was writing about one and the same test, namely the test as defined by the
standard reading.
6. The Turing Test Outside of the Mind Paper
The Turing test was foreshadowed in a report on mechanical intelligence written
by Turing a few years before the Mind paper. Even before the actual construction
of digital computers, Turing and others began writing computer programs for chess
playing and other activities.14 The performance of such programs could be tested
by asking a person to compute, at each stage of the game, what the next move
should be according to the program. A human being who is given paper, pencil,
and a set of instructions to carry out was called by Turing a "paper machine." A
paper machine behaves like a digital computer executing a program:
The extent to which we regard something as behaving in an intelligent manner
is determined as much by our state of mind and training as by the properties of
the object under consideration. If we are able to explain and predict its beha-
viour or if there seems to be little underlying plan, we have little temptation to
imagine intelligence. With the same object therefore it is possible that one man
would consider it as intelligent and another would not; the second man would
have found out the rules of its behaviour.
It is possible to do a little experiment on these lines, even at the present stage
of knowledge. It is not difficult to devise a paper machine which will play a
not very bad game of chess. Now get three men as subjects for the experiment
A, B, C. A and C are to be rather poor chess players, B is the operator who
works the paper machine. (In order that he should be able to work it fairly fast
it is advisable that he be both mathematician and chess player.) Two rooms are
used with some arrangement for communicating moves, and a game is played
between C and either A or the paper machine. C may find it quite difficult to
tell which he is playing [sic]. (This is a rather idealized form of an experiment
I have actually done.) (Turing, 1948, p. 23).
The "experiment" described here closely resembles the imitation game under
the standard reading. In this passage, Turing suggested that playing chess against a
machine could generate the feeling that the machine was intelligent. In the heading of the section containing this passage, Turing called intelligence an "emotional concept," meaning that
there is no objective way to apply it. Direct experience with a machine that plays chess in a way indistinguishable from human play could convince
one to attribute intelligence to the machine. This is very likely to be an important
part of the historical root for Turing’s proposal of the imitation game. Believing that
"[t]he extent to which we regard something as behaving in an intelligent manner
is determined as much by our state of mind and training as by the properties of
the object under consideration," he hoped that, by experiencing the versatility of
digital computers at tasks normally thought to require intelligence, people would
modify their usage of terms like "intelligence" and "thinking," so that such terms
apply to the machines themselves.15
Finally, Turing described his test on two other public occasions. The first was
a talk broadcast on the BBC Third Programme on May 15, 1951. The relevant
portion went as follows:
I think it is probable for instance that at the end of the century it will be possible
to programme a machine to answer questions in such a way that it will be
extremely difficult to guess whether the answers are being given by a man or by
the machine. I am imagining something like a viva-voce examination, but with
the questions and answers all typewritten in order that we need not consider
such irrelevant matters as the faithfulness with which the human voice can be
imitated (Turing, 1951, pp. 4–5).
The second was a discussion between Turing, M.H.A. Newman, Sir Geoffrey Jef-
ferson, and R.B. Braithwaite, broadcast on the BBC Third Programme on January
14 and 23, 1952. Turing said:
I would like to suggest a particular kind of test that one might apply to a ma-
chine. You might call it a test to see whether the machine thinks, but it would
be better to avoid begging the question, and say that the machines that pass are
(let’s say) Grade A machines. The idea of the test is that the machine has to try
to pretend to be a man, by answering questions put to it, and it will only pass
if the pretence is reasonably convincing... (Turing, 1952, pp. 4–5, italics in the
original).
The topic of these radio broadcasts was whether digital computers could be said
to think. Turing’s advocated purpose was the same on these occasions as in his
Mind paper: to replace the question of whether machines could think with the
question of whether machines could pass his test. The terms used, and the gist
of Turing’s speeches, closely resembled the Mind paper. Yet, on both occasions,
Turing unambiguously described the test as understood under the standard reading.
7. Conclusion
According to those who have witnessed or studied his life, Turing was often a
surprisingly fast thinker. He would get frustrated when others took a long time
to get points that seemed obvious to him.16 Perhaps because of this, his writing
was lucid but not always easily understood. In his logic papers, some apparent
obscurities resulted from him skipping some of the inferential steps, and can be
clarified by adding the missing steps.17 In light of this, the most likely explanation
for the ambiguity in Turing’s rules is that he expected his readers to fill in the
details in accordance with the game’s purpose. Given that the test is a replacement
for the question of whether machines can think, the machine must pretend to be
human, while the interrogator tries to determine which of the two players is the
machine and which is the human being. A careful examination of Turing’s work,
at any rate, provides plenty of evidence that the standard reading of his rules is
correct. Turing’s own imitation game did not involve a machine simulating a man
who is pretending to be a woman, but a machine simulating a human being.
Acknowledgements
The writing of this paper was prompted by a discussion with Susan Sterrett, for
which I am very grateful. Thanks to the participants of The Future of the Turing
Test for the fruitful discussion that took place there, and to Becka Skloot for many
helpful comments.
Notes
1For the present purpose of understanding Turing’s text, following his usage, I use the terms "intelli-
gence" and "thinking" interchangeably. This is not meant to suggest that, in other contexts, no useful
distinction can be drawn between the two.
2Webb, 1980, p. 238; Haugeland, 1985, p. 6; Genova, 1994, pp. 313–315; Cowley and MacDorman,
1995, p. 122, esp. n. 10; Hayes and Ford, 1995, p. 972; Saygin et al., 2000; Traiger, 2000.
3Haugeland, 1985, pp. 6–8; Saygin et al., 2000; Traiger, 2000.
4Genova, 1994, p. 315; Hayes and Ford, 1995, p. 977. According to Genova, the literal reading
accounts for Turing’s replacement proposal because Turing held the view that thinking is imitating;
thus, a machine successful at imitating must be thinking (Genova, 1994, pp. 315–322). According to
Genova, the request that the machine specifically simulate a human male imitating a human female
is explained by what she takes to be Turing’s views on sexual identity, due to his own experience as
a homosexual (ib., esp. pp. 314–315).
5The ambiguity is recognized by Haugeland, 1985, p. 6, n. 2. The additional claim that the interrog-
ator must be deceived about the purpose of the game is explicitly made by Hayes and Ford, 1995, p.
972; Saygin et al., 2000; Traiger, 2000.
6These questions have actually been answered at length by Sterrett (2000), who argues that the test
defined by the literal reading makes a better test for intelligence than the test defined by the standard
reading. Of course, Sterrett does not attribute her arguments to Turing. The issue of what is the best
test for machine intelligence is irrelevant to the topic of the present paper. Here, I concentrate on
what Turing said, and didn’t say.
7Genova’s account in terms of Turing’s alleged view that thought is imitation is even more problem-
atic. First, such a view is no reason to restrict a test for thought to the simulation of a human male
imitating a human female, rather than allowing for a wider range of simulations. Second, and more
importantly, Genova provides no textual evidence to warrant her attribution to Turing of the view that
thought is imitation. In studying his work, I have found no evidence that Turing held such a view.
8Since Turing’s language – as that of most of his colleagues – was not politically correct by today’s
standards, he generally used "man" to refer to a generic human being.
9The following examples are from papers written before the Mind paper. The first time he mentioned
machine intelligence in a paper, Turing did so in a discussion of mechanical chess-playing (1945,
p. 41). In a more extensive discussion of machine intelligence for an audience of mathematicians,
he suggested that machines could prove their intelligence by both playing chess and doing math-
ematical derivations in a formal logical system (1947, pp. 122–123). In a report entirely devoted
to machine intelligence, Turing discussed the possibility of programming machines to play various
games, to learn languages, to do translations, cryptanalysis (which he used to call "cryptography"),
and mathematics (1948, p. 13).
10The text goes as follows:
It might be urged that when playing the ‘imitation game’ the best strategy for the machine may
possibly be something other than imitation of the behaviour of a man. This may be, but I think it is
unlikely that there is any great effect of this kind. In any case there is no intention to investigate here
the theory of the game, and it will be assumed that the best strategy is to try to provide answers that
would naturally be given by men (Turing, 1950, p. 435).
11Notice the initial reference to section 3, which turns out to have some importance:
We may now consider again the point raised at the end of §3. It was suggested tentatively that
the question, ‘Can machines think?’ should be replaced by ‘Are there imaginable digital computers
which would do well in the imitation game?’ If we wish we can make this superficially more general,
and ask ‘Are there discrete state machines which would do well?’ But in view of the universality
property we see that either of these questions is equivalent to this, ‘Let us fix our attention on one
particular digital computer C. Is it true that by modifying this computer to have an adequate storage,
suitably increasing its speed of action, and providing it with an appropriate programme, C can be
made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?’
(Turing, 1950, p. 442).
Recall also that "[t]he object of the game for the third player (B) is to help the interrogator" (ib., p.
434).
12Genova, 1994, p. 314; Saygin et al., 2000. According to Traiger’s (2000) reading of this passage,
in the modified test the human player can be either a man or a woman, but he or she has to play the
role of a woman.
13Here is the relevant excerpt:
There are already a number of digital computers in working order, and it may be asked, ‘Why not
try the experiment straight away? It would be easy to satisfy the conditions of the game. A number
of interrogators could be used, and statistics compiled to show how often the right identification was
given.’ The short answer is that we are not asking whether all digital computers would do well in the
game nor whether the computers at present available would do well, but whether there are imaginable
computers which would do well. But this is only the short answer. We shall see this question in a
different light later (Turing, 1950, p. 436).
The word "later" is a clear reference to the quote taken from section 5, which – as noted in n. 11 – begins with a cross-reference to section 3, and which comes after Turing’s explanation of digital computers, their property of universality, and the importance of programs.
14See Hodges, 1983, chapt. 6.
15In this respect, this is what he said in the Mind paper: "I believe that at the end of the century the
use of words and general educated opinion will have altered so much that one will be able to speak
of machines thinking without expecting to be contradicted" (Turing, 1950, p. 442).
16See Newman, 1955, p. 255; Turing, 1959, pp. 13, 27–28; numerous relevant episodes are also
reported by Hodges, 1983.
17For some examples, see Piccinini, 2001.
References
Cowley, S.J. and MacDorman, K.F. (1995), ‘Simulating Conversations: The Communion Game’, AI
and Society 9, pp. 116–139.
Genova, J. (1994), ‘Turing’s Sexual Guessing Game’, Social Epistemology 8, pp. 313–326.
Haugeland, J. (1985), Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press.
Hayes, P. and Ford, K. (1995), ‘Turing Test Considered Harmful’, Proceedings of the Fourteenth
International Joint Conference on Artificial Intelligence, Montreal, Quebec, Canada, pp. 972–
977.
Hodges, A. (1983), Alan Turing: The Enigma. New York: Simon and Schuster.
Ince, D.C., ed. (1992), Collected Works of A.M. Turing: Mechanical Intelligence. Amsterdam: North
Holland.
Newman, M.H.A. (1955), ‘Alan Mathison Turing’, in Biographical Memoirs of Fellows of the Royal
Society. London: Royal Society, pp. 253–263.
Piccinini, G. (2001), ‘Turing and the Mathematical Objection’, Forthcoming in Minds and Machines.
Saygin, A.P., Cicekli, I. and Akman, V. (2000), ‘Turing Test: 50 Years Later’, Minds and Machines 10,
pp. 463–518.
Sterrett, S. (2000), ‘Turing’s Two Tests for Intelligence’, Minds and Machines 10, pp. 541–559.
Traiger, S. (2000), ‘Making the Right Identification in the Turing Test’, Minds and Machines 10, pp.
561–572.
Turing, A.M. (1945), ‘Proposal for Development in the Mathematical Division of an Automatic
Computing Engine (ACE)’, reprinted in Ince (1992), pp. 1–86.
Turing, A.M. (1947), ‘Lecture to the London Mathematical Society on 20 February 1947’, reprinted
in Ince (1992), pp. 87–105.
Turing, A.M. (1948), ‘Intelligent Machinery’, reprinted in Ince (1992), pp. 107–127.
Turing, A.M. (1950), ‘Computing Machinery and Intelligence’, Mind 59, pp. 433–460.
Turing, A.M. (1951), ‘Can digital computers think?’ Typescript of talk broadcast on BBC Third
Programme, 15 May 1951, AMT B.5, Contemporary Scientific Archives Centre, King’s College
Library, Cambridge.
Turing, A.M. (1952), ‘Can automatic calculating machines be said to think?’ Typescript of broadcast
discussion on BBC Third Programme, 14 and 23 January 1952, between M.H.A. Newman, A.M.
Turing, Sir Geoffrey Jefferson, R.B. Braithwaite, AMT B.6, Contemporary Scientific Archives
Centre, King’s College Library, Cambridge.
Turing, E.S. (1959), Alan M. Turing. Cambridge: W. Heffer & Sons.
Webb, J.C. (1980), Mechanism, Mentalism, and Metamathematics. Dordrecht: D. Reidel.