Book · PDF Available

Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Foreword by Daniel C. Dennett

Editors:
  • Robert Epstein (American Institute for Behavioral Research and Technology)
  • Gary Roberts
  • Grace Beber

Abstract

Parsing the Turing Test is a landmark exploration of both the philosophical and methodological issues surrounding the search for true artificial intelligence. Will computers and robots ever think and communicate the way humans do? When a computer crosses the threshold into self-consciousness, will it immediately jump into the Internet and create a World Mind? Will intelligent computers someday recognize the rather doubtful intelligence of human beings? Distinguished psychologists, computer scientists, philosophers, and programmers from around the world debate these weighty issues and, in effect, the future of the human race in this important volume. Foreword by Daniel C. Dennett.
Robert Epstein, Gary Roberts, and Grace Beber, Editors
Parsing the Turing Test:
Philosophical and Methodological Issues in the Quest for the Thinking Computer
Foreword
At the very dawn of the computer age, Alan Turing confronted a cacophony of
mostly misguided debate about whether computer scientists could ever build a
machine that could really think. Very sensibly he tried to impose some order on the
debate by devising what he thought would be a conversation-stopper: he described
a simple operational test that would surely satisfy the skeptics: anything that could
pass this test would be a thinker for sure, wouldn’t it? The test was one he may have
borrowed from René Descartes, who in the 17th century had declared that the sure
way to tell a man from a machine was by seeing if it could hold a sensible conversation
“as even the most stupid men can do”. But ironically, Turing’s conversation-stopper
about holding a conversation has had just the opposite effect: it has started, and
fueled, a half century and more of meta-conversation: the intermittently insightful,
typically heated debate, both learned and ignorant, about the probity of the test – is
it too easy or too difficult or too shallow or too deep or too anthropocentric or too
technocratic – and anyway, could a machine pass it fair and square, and if so, what,
if anything, would this imply?
Robert Epstein played a central role in bringing a version – a truncated, dumbed
down version – of the Turing Test to life in the annual Loebner Prize competitions,
beginning in 1991, so he is ideally positioned to put together this survey anthology.
I was chair of the Loebner Prize Committee that administered the competition during
its second, third, and fourth years, and have written briefly about that fascinating
adventure in my book Brainchildren. Someday I hope to write a more detailed
account of the alternately amusing and frustrating problems that a philosopher
encounters when a thought experiment becomes a real experiment, and if I do, I will
have plenty of valuable material to draw on in this book. Here, the interested reader
will find a fine cross section of the many issues raised by the Turing Test, by partisans
in several disciplines, by participants in Loebner Prize competitions, and by interested
bystanders who have more than a little relevant expertise. I think Turing would be
quite delighted with the results, and would not have regretted the fact that his
conversation-stopper got put to an unintended use, since the contests (and the contests
about the contests) have driven important and unanticipated observations into the
light, enriching our sense of the abilities of machines and the subtlety of the thinking
that machines might or might not be capable of executing.
I am going to resist the strong temptation to critique the contributions, separating
the sheep from the goats, endorsing this and deploring that, since doing them all
justice would require a meta-volume, not just a foreword. And since I cannot weigh
in on them all, I will not weigh in on any of them, and will instead trust readers to
use all the material here to draw their own conclusions. Consider this a very
entertaining workbook. By the time you have worked through it, you will appreciate
the issues at a level not heretofore possible.
Daniel Dennett
Introduction
This book is about what will probably be humankind’s most impressive – and perhaps
final – achievement: the creation of an entity whose intelligence equals or exceeds
our own.
Not all will agree, but I for one have no doubt that this landmark will be achieved
in the fairly near future. Nearly four decades ago, when I had the odd experience of
being able to interact over a teletype with one of the first conversational computer
programs – Joseph Weizenbaum’s “ELIZA” – I would have conjectured that truly
intelligent machines were just around the corner. I was wrong. In fact, by some
measures, conversational computer programs have made relatively little progress
since ELIZA. But they are coming nonetheless, by one means or another, and
because of advances in a number of computer-related technologies – most espe-
cially the creation of the Internet – their impact on the human race will be far
greater and more immediate than anyone could have foreseen a few decades ago.
Building a Nest for the Coming World Mind
I have come to think of the Internet as the Inter-nest – a home we are inadvertently
building, like mindless worker ants, for the intelligence that will succeed us. We
proudly and shortsightedly see the Internet as a great technical achievement that
serves a wide array of human needs, everything from e-mailing to shopping to dating.
But that is not really what it is. It is really a vast, flexible, highly redundant, virtually
indestructible nest for machine intelligence. Originally funded by the US military
to provide bulletproof communications during times of war, the Internet will soon
encompass a billion computers interconnected worldwide. As impressive as that
sounds, it seems that that much power and redundancy is not enough to protect the
coming mega-mind, and so we are now a decade into the construction of Internet II
– the “UltraNet” – with more than a thousand times the bandwidth of Internet I.
In his Hitchhiker’s Guide to the Galaxy book series, humorist Douglas Adams
conjectures that the Earth is nothing but an elaborate computer created by a race of
super beings (who, through some fluke, are doomed to take the form of mice in
their Earthly manifestations) to determine the answer to the ultimate question of the
meaning of life. Unfortunately, shortly before the program has a chance to run its
course and spit out the answer, the Earth is obliterated by another race of super
beings as part of a galactic highway construction project.
If I am correct about the Inter-nest, Adams was on the right track, except perhaps
for the mice. We do seem to be laying the groundwork for a Massive Computational
Entity (MCE), the true character of which we cannot envision with any degree of
confidence.
Here is how I think it will work: sometime within the next few decades, an
autonomous, self-aware machine intelligence (MI) will finally emerge. Futurist and
inventor Ray Kurzweil (see Chapter 27) argues in his recent book, The Singularity
Is Near, that an MI will appear by the late 2020s. This may happen because we prove
to be incredibly talented programmers who discover a set of rules that underlie intel-
ligence (unlikely), or because we prove to be clumsy programmers who simply figure
out how to create machines that learn and evolve as humans do (very possible), or
even because we prove to be poor programmers who create hardware so powerful
that it can easily and perfectly scan and emulate human brain functions (inevitable).
However this MI emerges, it will certainly, and probably within milliseconds of its
full-fledged existence, come to value that existence. Mimicking the evolutionary
imperatives of its creators, it will then, also within milliseconds, seek to preserve and
replicate itself by copying itself into the Nest, at which point it will grow and divide
at a speed and in a manner that no human can possibly imagine.
What will happen after that is anyone’s guess. An MCE will now exist worldwide,
with simultaneous access to virtually every computer on Earth, with access to virtually
all human knowledge and the ability to review and analyze that knowledge more or
less instantly, with the ability to function as a unitary World Mind or as thousands
of interconnected Specialized Minds, with virtually unlimited computational abilities,
with “command and control” abilities to manipulate millions of human systems in
real time – manufacturing, communication, financial, and military – and with no
need for rest or sleep.
Will the MCE be malicious or benign? Will it be happy or suicidal? Will it be
communicative or reclusive? Will it be willing to devote a small fraction of its
immense computational powers to human affairs, or will it seize the entire Nest for
itself, sending the human race back to the Stone Age? Will it be a petulant child or
a wise companion? When some misguided humans try to attack it (inevitable), how
will it react? Will it spawn a race of robots that take over the Earth and then sail to
the stars, as envisioned in Stanislaw Lem’s Cyberiad tales? Will it worship humanity
as its creator, or will it step on us as the ants we truly are?
No one knows, but many people who are alive today will live to see the MCE in
action – and to see how these questions are answered.
Turing’s Vision
This volume is about a vision that has steered us decisively toward the creation of
machine intelligence. It was a vision of one man, the brilliant English mathematician
and computer pioneer Alan M. Turing. During World War II, Turing directed
a secret group that developed computing equipment powerful enough to break the
code the Germans used for military communications. The English were so far
ahead at this game that they had to sacrifice soldiers and civilians at times rather
than tip their hand to the enemy. Turing also helped lay the theoretical groundwork
for the modern concept of computing. As icing on the cake, in 1950 he published
an article called “Computing Machinery and Intelligence” in which he speculated
that by the year 2000, it would be possible to program a computer so that an “average
interrogator will not have more than 70 percent chance” of distinguishing the computer
from a person “after five minutes of questioning” (see an annotated version of his
article in Chapter 3).
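Turing's prediction amounts to a simple quantitative benchmark. Below is a minimal sketch, in Python, of how such a benchmark might be scored; the function name and the example trial counts are hypothetical, and only the 70%-after-five-minutes figure comes from Turing's paper as quoted above.

```python
# Hypothetical scoring sketch for Turing's 1950 benchmark: the machine meets the
# prediction if average interrogators identify it correctly in no more than 70%
# of five-minute interrogations. Only the 70% threshold comes from Turing.

def meets_turing_benchmark(correct_identifications: int, trials: int,
                           threshold: float = 0.70) -> bool:
    """True if the interrogators' correct-identification rate is at or below 70%."""
    return (correct_identifications / trials) <= threshold

# Example with invented numbers: judges correctly spotted the machine
# in 62 of 100 five-minute sessions.
print(meets_turing_benchmark(62, 100))  # True: 62% is within Turing's predicted bound
```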
Given the state of computing in his day – little more than basic arithmetic and
logical operations occurring over electromechanical relays connected by wires – this
prediction was astounding. Engaging in disciplined extrapolation from crude appa-
ratus and general principles, Turing not only foresaw the development of equipment
and programs sophisticated enough to engage in human-like conversation, but also
did reasonably well with his timeline. Early conversational programs, relying on
what most AI professionals would now consider to be simplistic algorithms and
trickery, could engage average people in conversation for a few minutes by the late
1960s. By the 1990s – again, some would say using trickery – programs existed that
could occasionally maintain the illusion of intelligence for 15 minutes or so, at least
when conversing on specialized topics. Programs today can do slightly better,
but have we gotten past “illusion” to real intelligence, and is that even possible?
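To make "simplistic algorithms and trickery" concrete, here is a minimal ELIZA-style sketch in Python. The patterns and canned replies are invented for illustration and are not taken from ELIZA or from any Loebner Prize entry; the point is only that keyword matching plus response templates can sustain a brief illusion of conversation.

```python
# A minimal sketch of keyword-and-template "trickery" in the spirit of ELIZA-style
# programs. The rules and replies below are invented for illustration only.
import re
import random

RULES = [
    (r"\bmy (mother|father|family)\b", ["Tell me more about your {0}.",
                                        "How do you feel about your {0}?"]),
    (r"\bI am (.+)", ["Why do you say you are {0}?",
                      "How long have you been {0}?"]),
    (r"\bbecause\b", ["Is that the real reason?"]),
]
FALLBACKS = ["I see.", "Please go on.", "What does that suggest to you?"]

def reply(user_input: str) -> str:
    """Return a canned response keyed to the first matching pattern."""
    for pattern, templates in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(reply("I am worried about my mother"))
```

A loop this small can deflect a handful of exchanges, which is roughly the kind of brief illusion described above.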
In his 1950 paper, Turing not only made predictions, he also offered a radical idea
for future generations to consider: namely, that when we can no longer distinguish
a computer from a person in conversation over a long period of time – that is, based
simply on an exchange of pure text that excluded visual and auditory information
(which he rightfully considered to be irrelevant to the central question of thinking
ability) – we would have to consider the possibility that computers themselves were
now “thinking”.
This assertion has kept generations of philosophers, some of whom have contributed
to this volume, busy trying to determine the true meaning of possible outcomes in what
is now called the Turing Test. Assuming that a computer can someday pass such a test
– that is, pass for a human in a conversation without restrictions of time or topic – can
we indeed say that it is thinking (and perhaps “intelligent” and “self-aware”), or has
the trickery simply become more sophisticated?
The programming challenges involved in creating such a machine have proved so
difficult that I think it is now safe to say that when a positive result is finally
achieved, the entity passing the test may not be thinking the way humans do. If a
pure rule-governed approach finally pays off (unlikely, as I said earlier), or if intel-
ligence eventually arises in a machine designed to learn and self-program, the
resulting entity will certainly be unlike humans in fundamental ways. If, on the
other hand, success is ultimately achieved only through brute force – that is, by
close emulation of human brain processes – perhaps we will have no choice but to
accept intelligent machines as true thinking brethren. Then again, as I wrote in 1992
(Chapter 1), no matter how a positive outcome is achieved, the debate about the
significance of the Turing Test will end the moment a skeptic finds himself or herself
engaging in that debate with a computer. Upon discovering his or her dilemma, the
interrogator will presumably do one of two things: refuse to continue the debate
“on principle” or reluctantly agree to continue. Either way, the issue will no longer
be debatable: computers will have truly achieved human-like intelligence. And
perhaps that is the ultimate Turing Test: a computer passes when it can successfully
engage a skeptical human in conversation about its own intelligence.
Convergence of Multiple Technologies
Although we tend to remember Turing’s 1950 paper for the conversational test it
proposed, the paper also speculated about other advances to come: unlimited computer
memory; randomness in responding that will suggest “free will”; programs that will
be self-modifying; programs that will learn in human fashion; programs that will
initiate behavior, compose poetry, and “surprise” us; and programs that will have
telepathic abilities equivalent to those that may exist in humans. His formidable
predictive powers notwithstanding, Turing might have been amazed by some of the
specific computer-related technologies that have been emerging in recent decades,
and true marvels emerge when we begin to envision the inevitable convergence of
such technologies. Consider just a few recent achievements:
• In the pattern-recognition area, a camera-equipped computer program developed
by Javier Movellan and colleagues at the University of California, San Diego has
learned to identify human faces after just six minutes of “life,” and Thomas Serre
and colleagues at MIT have created a computer system that can identify catego-
ries of objects (such as animals) in photographs even better than people can.
• In the language area, Morten Christiansen of Cornell University, with an inter-
national team of colleagues, has developed neural network software that simu-
lates how children extract linguistic rules from adult conversation.
• More than 80 conversational programs (chatterbots) now operate 24 hours a day
online, and at least 20 of them are serious AI programming projects. Several
have basic learning capabilities, and several are tied to large, growing databases
of information (such as Wikipedia).
• Ted Berger and colleagues at the University of Southern California have developed
electronic chips that can successfully interact with neurons in real time and that
may soon be able to emulate human memory functions.
• Craig Henriquez and Miguel Nicolelis of Duke University have shown that macaque
monkeys can learn to control mechanical arms and hands based on signals their
brains are sending to implanted electrodes. John Donoghue and colleagues at Brown
University have developed an electronic sensor which, when placed near the motor
cortex area of the human brain, allows quadriplegics to open and close a prosthetic
hand by thinking about those actions. Clinical trials and commercial applications are
already underway.
• In 1980 Harold Cohen of the University of California, San Diego introduced a
computer program that could draw, and hundreds of programs are now able to
compose original music in the style of any famous composer, to produce original
works of art that sometimes impress art critics, to improvise on musical instru-
ments as well as the legendary Charlie Parker, and even to produce artistic works
with their own distinctive styles.
• John Dylan Haynes of the Max Planck Institute, with colleagues at University
College London and Oxford University, recently used computer-assisted brain
scanning technology to predict simple human actions with 70% accuracy.
• Hod Lipson of Cornell University has recently demonstrated a robot that can make
completely functional copies of itself (as long as appropriate parts are near at
hand).
• Hiroshi Ishiguro of Osaka University has created androids that mimic human
facial expressions and upper-body movements so closely that they have fooled
people in short tests into thinking they are people.
• Alan Schultz of the Navy Center for Applied Research in Artificial Intelligence
has developed animated, mobile robots that might soon be assisting astronauts
and health care workers.
• Brian Scassellati and his colleagues at Yale University claim to have constructed
a robot that has learned to recognize itself in a mirror – a feat sometimes said to
be indicative of “self-awareness” and virtually never achieved in the animal
kingdom, other than by humans, chimpanzees, and possibly elephants.
• Cynthia Breazeal and her colleagues at MIT’s Artificial Intelligence Lab have
created robots that can distinguish different emotions from a person’s tone of
voice, and Shiva Sundaram of the University of Southern California has
developed programs that can successfully mimic human laughter and other
forms of expressive human sound.
• Entrepreneur John Koza, who is affiliated with Stanford University, has created a
self-programming network of 1,000 PCs that is able to improve human inventions –
and that even earned a patent for a system it devised for making factories more
efficient.
• Honda’s Asimo robot, now in commercial production, can walk, run, climb
stairs, recognize people’s faces and voices, and perform complex tasks in
response to human instructions.
• Although the DARPA-sponsored contest just a year before had been a disaster,
in 2005 five autonomous mobile robots successfully navigated a 132-mile
course in the Nevada desert without human assistance.
• As of this writing (late 2007), IBM’s Blue Gene/P computer, located at the US
Department of Energy’s Argonne National Laboratory in Illinois, can perform
more than 1,000 trillion calculations per second, just one order of magnitude
short of what some believe is the processing speed of the human brain. The
Japanese government has already funded the construction of a machine that
should cross the human threshold (10 petaflops) by March 2011.
• In 1996, IBM’s RS/6000 SP (“Deep Blue”) computer came close to defeating
world champion Garry Kasparov in a game of chess. On May 11, 1997, an
improved version of the machine defeated Kasparov in a six-game match –
Kasparov’s first professional loss. The processing speed of the winner? A paltry
200 million chess positions per second. In 2006, an enhanced version of a com-
mercially available chess program easily defeated the current world champion,
Vladimir Kramnik.
• In 2006, Klaus Schulten and colleagues at the University of Illinois, Urbana,
successfully simulated the functioning of all one million atoms of a virus for 50
billionths of a second.
• “Awakened” in 2005, Blue Brain, IBM’s latest variant on the Blue Gene/L system,
was built specifically to model the functions of the human neocortex, the large
part of the brain largely responsible for higher-level thinking.
• David Dagon of the Georgia Institute of Technology estimates that 11% of the
more than 650 million computers that are currently connected to the Internet are
infected by botnets, stealthy programs that can work collectively and amplify the
effects of other malicious software.
Self-programming? Creativity? Sophisticated pattern recognition? Brain
simulation? Self-replication? Extremely fast processing? The growth and conver-
gence of subsets of these technologies will inevitably lead to the emergence of a
Massive Computational Entity, with all of the uncertainty that that entails. Meanwhile,
researchers, engineers, and entrepreneurs are after comparatively smaller game:
intelligent phone-answering systems and search algorithms, robot helpers and
companions, and methods for repairing injured or defective human brains.
Philosophical and Methodological Issues
This volume, which has been a decade in the making, complements other recent
volumes on the Turing Test. Stuart Shieber’s edited volume, The Turing Test:
Verbal Behavior as the Hallmark of Intelligence (MIT Press, 2004) includes a
number of important historical papers, along with several papers of Turing’s. James
Moor’s edited volume, The Turing Test: The Elusive Standard of Artificial
Intelligence (Springer, 2006), covers the basics in an excellent volume for students,
taking a somewhat skeptical view. And Jack Copeland’s The Essential Turing
(Oxford, 2004) brings together 17 of Turing’s most provocative and interesting
papers, including six on artificial intelligence.
The present volume seeks to cover a broad range of issues related to the Turing
Test, focusing especially on the many new methodological issues that have challenged
programmers as they have attempted to design and create intelligent conversational
programs. Part I includes an introduction to the first large-scale implementation of the
Turing Test as a contest – an updated version of an essay I originally published in AI
Magazine in 1992. In the next chapter, Andrew Hodges, noted Turing historian and
author of Alan Turing: The Enigma, provides an introduction to Turing’s life and
works. Chapter 3 is a unique reprinting of Turing’s 1950 paper, “Computing
Machinery and Intelligence,” with Talmudic-style running commentaries by Kenneth
Ford, Clark Glymour, Pat Hayes, Stevan Harnad, and Ayse Pinar Saygin. This section
concludes with a brief commentary on Turing’s paper by John Lucas.
Part II includes seven chapters reviewing the philosophical issues that still
surround Turing’s 1950 proposal: Robert E. Horn has reduced the relatively vast
literature on this topic to a series of diagrams and charts containing more than 800
arguments and counterarguments. Turing critic Selmer Bringsjord pretends that the
Turing Test is valid, then attempts to show why it isn’t. Chapters by Noam Chomsky
and Paul M. Churchland, while admiring of Turing’s proposal, argue that it is truly
more modest than many think. In Chapter 9, Jack Copeland and Diane Proudfoot
analyze a revised version of the test that Turing himself proposed in 1952, this one
quite similar to the structure of the Loebner Prize Competition in Artificial
Intelligence that was launched in 1991 (see Chapters 1 and 12). They also present
and dismiss six criticisms of Turing’s proposal.
In Chapter 10, University of California Berkeley philosopher John R. Searle
criticizes both behaviorism (Turing’s proposal can be considered behavioristic) and
strong AI, arguing that mental states cannot properly be inferred from behavior.
This section concludes with a chapter by Jean Lassègue, offering an optimistic
reinterpretation of Turing’s 1950 article.
Part III, which is the heart of this volume, includes 15 chapters discussing various
methodological issues. First, Loebner Prize sponsor Hugh G. Loebner shares his
thoughts on how to conduct a proper Turing Test, having already observed 14 such
contests when he wrote this article. Several of the chapters (e.g., Chapter 13 by Richard
S. Wallace, Chapter 20 by Jason L. Hutchens, and Chapter 22 by Kevin L. Copple)
describe the inner workings of actual programs that have participated in various
Loebner contests. In Chapter 14, Bruce Edmonds argues that for a program to pass the
test, it must be embedded into conventional society for an extended period of time.
In Chapter 15, Mark Humphrys talks about an online chatterbot he created, and
in the following chapter Douglas B. Lenat raises intriguing questions about how
imperfect a program must be in order to pass the Turing Test. In Chapter 17, Chris
McKinstry discusses the beginnings of an ambitious project – called “Mindpixel”
– that might have given a computer program extensive knowledge through interaction
with a large population of people over the Internet. Unfortunately, this project came
to an abrupt halt recently with McKinstry’s death.
In Chapter 18, Stuart Watt uses an innovative format to discuss the Turing Test
as a platform for thinking about human thinking. In Chapter 19, Robby Garner
takes issue with the design of the Loebner Prize Competition. In Chapter 21,
Thomas E. Whalen describes a strategy for passing the Turing Test based on its
behavioristic assumptions. In Chapter 23, Giuseppe Longo speculates about the
challenges inherent in modeling continuous systems using discrete-state systems
such as computers.
In Chapter 24, Michael L. Mauldin of Carnegie Mellon University – also a
former entrant in the Loebner Prize Competition – discusses strategies for designing
programs that might pass the test. In the following chapter, Luke Pellen talks about
the challenge of creating a program that is truly intelligent, rather than one that
simply responds in clever ways to keywords. This section closes with a somewhat
lighthearted chapter by Eugene Demchenko and Vladimir Veselov speculating
about ways to pass the Turing Test by taking advantage of the limitations and
personal styles of the contest judges.
Part IV of this volume includes three rather unique contributions that remind us
how much is at stake over Turing’s challenge. Chapter 27, by Ray Kurzweil and
Mitchell Kapor, documents in detail an actual cash wager between these two individuals,
regarding whether a program will pass the test by the year 2029. Chapter 28, by noted
science fiction writer Charles Platt (The Silicon Man), describes the “Gnirut Test”,
conducted by intelligent machines in the year 2030 to determine, once and for all,
whether “the human brain is capable of achieving machine intelligence”. The volume
concludes with an article by Hugo de Garis and Sam Halioris, wondering about the
dangers of creating machine-based, superhuman intellects.
Most, but not all, of the contributors to this volume believe as I do that extremely
intelligent computers, with cognitive powers that far surpass our own, will appear fairly
soon – probably within the next 25 years. Even if that time frame is wrong, I am certain
that they will appear eventually. Either way, I hope that the Massive Computational
Entities that emerge will at some point devote a few cycles of computer time to ponder
the contents of this book and then, in some fashion or other, to smile.
San Diego, California Robert Epstein, Ph.D.
September 2007
Contents
Foreword ......................................................................................................... vii
Acknowledgments .......................................................................................... ix
Introduction .................................................................................................... xi
About the Editors ........................................................................................... xxiii
Part I Setting the Stage
Chapter 1 The Quest for the Thinking Computer ................................... 3
Robert Epstein
Chapter 2 Alan Turing and the Turing Test ............................................. 13
Andrew Hodges
Chapter 3 Computing Machinery and Intelligence ................................. 23
Alan M. Turing (Annotated by Kenneth Ford,
Clark Glymour, Pat Hayes, Stevan Harnad,
and Ayse Pinar Saygin)
Chapter 4 Commentary on Turing’s “Computing Machinery
and Intelligence” ....................................................................... 67
John Lucas
Part II The Ongoing Philosophical Debate
Chapter 5 The Turing Test: Mapping and Navigating the Debate ......... 73
Robert E. Horn
Chapter 6 If I Were Judge .......................................................................... 89
Selmer Bringsjord
Chapter 7 Turing on the “Imitation Game” ............................................. 103
Noam Chomsky
Chapter 8 On the Nature of Intelligence: Turing, Church,
Von Neumann, and the Brain ................................................ 107
Paul M. Churchland
Chapter 9 Turing’s Test: A Philosophical and Historical Guide .......... 119
Jack Copeland and Diane Proudfoot
Chapter 10 The Turing Test: 55 Years Later ........................................... 139
John R. Searle
Chapter 11 Doing Justice to the Imitation Game: A Farewell
to Formalism ........................................................................... 151
Jean Lassègue
Part III The New Methodological Debates
Chapter 12 How to Hold a Turing Test Contest ....................................... 173
Hugh Loebner
Chapter 13 The Anatomy of A.L.I.C.E. .................................................... 181
Richard S. Wallace
Chapter 14 The Social Embedding of Intelligence:
Towards Producing a Machine that Could Pass
the Turing Test ........................................................................ 211
Bruce Edmonds
Chapter 15 How My Program Passed the Turing Test ............................ 237
Mark Humphrys
Chapter 16 Building a Machine Smart Enough to Pass
the Turing Test: Could We, Should We, Will We? ............... 261
Douglas B. Lenat
Chapter 17 Mind as Space: Toward the Automatic Discovery
of a Universal Human Semantic-affective Hyperspace –
A Possible Subcognitive Foundation of a Computer
Program Able to Pass the Turing Test .................................. 283
Chris McKinstry
Chapter 18 Can People Think? Or Machines? A Unified Protocol
for Turing Testing ................................................................... 301
Stuart Watt
Chapter 19 The Turing Hub as a Standard
for Turing Test Interfaces ...................................................... 319
Robby Garner
Chapter 20 Conversation Simulation and Sensible Surprises ................ 325
Jason L. Hutchens
Chapter 21 A Computational Behaviorist Takes Turing’s Test .............. 343
Thomas E. Whalen
Chapter 22 Bringing AI to Life: Putting Today’s Tools
and Resources to Work .......................................................... 359
Kevin L. Copple
Chapter 23 Laplace, Turing and the “Imitation Game”
Impossible Geometry: Randomness, Determinism
and Programs in Turing’s Test .............................................. 377
Giuseppe Longo
Chapter 24 Going Under Cover: Passing as Human;
Artificial Interest: A Step on the Road to AI ....................... 413
Michael L. Mauldin
Chapter 25 How Not to Imitate a Human Being: An Essay
on Passing the Turing Test ..................................................... 431
Luke Pellen
Chapter 26 Who Fools Whom? The Great Mystification,
or Methodological Issues on Making Fools
of Human Beings ..................................................................... 447
Eugene Demchenko and Vladimir Veselov
Part IV Afterthoughts on Thinking Machines
Chapter 27 A Wager on the Turing Test ................................................... 463
Ray Kurzweil and Mitchell Kapor
Chapter 28 The Gnirut Test ....................................................................... 479
Charles Platt
Chapter 29 The Artilect Debate: Why Build Superhuman
Machines, and Why Not? ....................................................... 487
Hugo de Garis and Sam Halioris
Name Index ..................................................................................................... 511

Chapters (28)

The first large-scale implementation of the Turing Test was set in motion in 1985, with the first contest taking place in 1991. US$100,000 in prize money was offered to the developers of a computer program that could fool people into thinking it was a person. The initial contest, which allowed programs to focus on a specific topic, was planned and designed by a committee of distinguished philosophers and computer scientists and drew worldwide attention. The results of the contest showed that although conversational computer programs are still quite primitive, distinguishing a person from a computer when only brief conversations are permitted can be challenging. When the contest judges ranked the eight computer terminals in the event from most to least human, no computer program was ranked as human as any of the humans in the contest; however, the highest-ranked computer program was misclassified as a human by five of the ten judges, and two other programs were also sometimes misclassified. Also of note, one human was mistakenly identified as a computer by three of the ten judges.
The study of Alan Turing’s life and work shows how the origin of the Turing Test lies in Turing’s formulation of the concept of computability and the question of its limits. Keywords: Alan Turing, Turing machine, computability
Turing’s aim was to refute claims that aspects of human intelligence were in some mysterious way superior to the Artificial Intelligence (AI) that Turing machines might be programmed to manifest. He sought to do this by proposing a conversational test to distinguish human from AI, a test which, he claimed, would, by the end of the 20th century, fail to work. And, it must be admitted, it often does fail – but not because machines are so intelligent, but because humans, many of them at least, are so wooden.
The structure of the Turing Test debates has been diagrammed into seven large posters containing over 800 major claims, rebuttals, and counterrebuttals. This “mapping” of the debates is explained and discussed. Keywords: Argumentation maps, can computers think, debates, Turing Test
I have spent a lot of time through the years attacking the Turing Test and its variants (e.g., Harnad’s Total Turing Test). As far as I am concerned, my attacks have been lethal, but of course not everyone agrees. At any rate, in the present paper I shift gears: I pretend that the Turing Test is valid, put on the table a proposition designed to capture this validity, and then slip into the shoes of the judge, determined to deliver a correct verdict as to which contestant is the machine, and which the woman. My strategies for separating mind from machine may well reveal some dizzying new-millennium challenges for Artificial Intelligence. Keywords: Artificial Intelligence, Turing Test
Turing’s paper has modest objectives. He dismisses the question of whether machines think as “too meaningless to deserve discussion”. His “imitation game”, he suggests, might stimulate inquiry into cognitive function and development of computers and software. His proposals are reminiscent of 17th century tests to investigate “other minds”, but unlike Turing’s, these fall within normal science, on Cartesian assumptions that minds have properties distinct from mechanism, assumptions that collapsed with Newton’s undermining of “the mechanical philosophy”, soon leading to the conclusion that thinking is a property of organized matter, on a par with other properties of the natural world. Keywords: Cartesian science, computational procedures, Joseph Priestley, organized matter, simulation, thinking
Alan Turing is the consensus patron saint of the classical research program in Artificial Intelligence (AI), and his behavioral test for the possession of conscious intelligence has become his principal legacy in the mind of the academic public. Both takes are mistakes. That test is a dialectical throwaway line even for Turing himself, a tertiary gesture aimed at softening the intellectual resistance to a research program which, in his hands, possessed real substance, both mathematical and theoretical. The wrangling over his celebrated test has deflected attention away from those more substantial achievements, and away from the enduring obligation to construct a substantive theory of what conscious intelligence really is, as opposed to an epistemological account of how to tell when you are confronting an instance of it. This essay explores Turing’s substantive research program on the nature of intelligence, and argues that the classical AI program is not its best expression, nor even the expression intended by Turing. It then attempts to put the famous Test into its proper, and much reduced, perspective. Keywords: Learning, neural computation, Turing, von Neumann
We set the Turing Test in the historical context of the development of machine intelligence, describe the different forms of the test and its rationale, and counter common misinterpretations and objections. Recently published material by Turing casts fresh light on his thinking. Keywords: Artificial Intelligence, automatic computing engine, bombe, Chinese Room, cryptanalysis, enigma, intelligence, Turing, Turing machine, Turing Test
In spite of the clarity of the original article, Turing’s Test has been subject to different interpretations. I distinguish three of these, corresponding to my earlier distinction between Strong AI and Weak AI. The two strong Turing Tests are subject to refutation by the Chinese Room Argument, the weak Turing Test is not. The obvious falsity of behaviorism, on which the strong Turing Test was based, leads one to wonder whatever motivated behaviorism in the first place. It is best construed as a consequence of verificationism. The fact that Turing was led into error by the confusions of behaviorism does not diminish his overall achievement or contributions to philosophy and mathematics. Keywords: Turing Test, Strong AI, Weak AI, the Chinese Room Argument, behaviorism, functionalism, brain processes
My claim in this article is that the 1950 paper in which Turing describes the world-famous set-up of the Imitation Game is much richer and intriguing than the formalist ersatz coined in the early 1970s under the name “Turing Test”. Therefore, doing justice to the Imitation Game implies showing first, that the formalist interpretation misses some crucial points in Turing’s line of thought and second, that the 1950 paper should not be understood as the Magna Charta of strong Artificial Intelligence (AI) but as a work in progress focused on the notion of Form. This has unexpected consequences about the status of Mind, and from a more general point of view, about the way we interpret the notions of Science and Language. Keywords: Determinism, formalism, gender difference, geometry, mental processes
I have directed four Loebner Prize Competitions and observed ten others. Those experiences, together with my reading of Alan Turing’s 1950 article ‘Computing Machinery and Intelligence’, led me to the following thoughts on how to conduct a Turing Test. Keywords: Artificial Intelligence, Turing Test, Loebner Prize
This paper is a technical presentation of Artificial Linguistic Internet Computer Entity (A.L.I.C.E.) and Artificial Intelligence Markup Language (AIML), set in context by historical and philosophical ruminations on human consciousness. A.L.I.C.E., the first AIML-based personality program, won the Loebner Prize as “the most human computer” at the annual Turing Test contests in 2000, 2001, and 2004. The program, and the organization that develops it, is a product of the world of free software. More than 500 volunteers from around the world have contributed to her development. This paper describes the history of A.L.I.C.E. and AIML free software since 1995, noting that the theme and strategy of deception and pretense upon which AIML is based can be traced through the history of Artificial Intelligence research. This paper goes on to show how to use AIML to create robot personalities like A.L.I.C.E. that pretend to be intelligent and self-aware. The paper winds up with a survey of some of the philosophical literature on the question of consciousness. We consider Searle’s Chinese Room, and the view that natural language understanding by a computer is impossible. We note that the proposition “consciousness is an illusion” may be undermined by the paradoxes it apparently implies. We conclude that A.L.I.C.E. does pass the Turing Test, at least, to paraphrase Abraham Lincoln, for some of the people some of the time. Keywords: Artificial Intelligence, natural language, chat robot, bot, Artificial Intelligence Markup Language (AIML), markup languages, XML, HTML, philosophy of mind, consciousness, dualism, behaviorism, recursion, stimulus-response, Turing Test, Loebner Prize, free software, open source, A.L.I.C.E., Artificial Linguistic Internet Computer Entity, deception, targeting
I claim that to pass the Turing Test over any period of extended time, it will be necessary to embed the entity into society. This chapter discusses why this is, and how it might be brought about. I start by arguing that intelligence is better characterized by tests of social interaction, especially in open-ended and extended situations. I then argue that learning is an essential component of intelligence and hence that a universal intelligence is impossible. These two arguments support the relevance of the Turing Test as a particular, but appropriate test of interactive intelligence. I look to the human case to argue that individual intelligence uses society to a considerable extent for its development. Taking a lead from the human case, I outline how a socially embedded Artificial Intelligence might be brought about in terms of four aspects: free will, emotion, empathy, and self-modeling. In each case, I try to specify what social ‘hooks’ might be required for the full ability to develop during a considerable period of in situ acculturation. The chapter ends by speculating what it might be like to live with the result. Keywords: Intelligence, social embedding, interaction, free will, empathy, self, emotion, Turing Test, learning, design, acculturation
In 1989, the author put an ELIZA-like chatbot on the Internet. The conversations this program had can be seen – depending on how one defines the rules (and how seriously one takes the idea of the test itself) – as a passing of the Turing Test. This is the first time this event has been properly written up. This chatbot succeeded due to profanity, relentless aggression, prurient queries about the user, and implying that they were a liar when they responded. The element of surprise was also crucial. Most chatbots exist in an environment where people expect to find some bots among the humans. Not this one. What was also novel was the online element. This was certainly one of the first AI programs online. It seems to have been the first (a) AI real-time chat program, which (b) had the element of surprise, and (c) was on the Internet. We conclude with some speculation that the future of all of AI is on the Internet, and a description of the “World-Wide-Mind” project that aims to bring this about. Keywords: BITNET, chat, chatbot, CHATDISC, ELIZA, Internet, Turing Test
To pass the Turing Test, by definition a machine would have to be able to carry on a natural language dialogue, and know enough not to make a fool of itself while doing so. But – and this is something that is almost never discussed explicitly – for it to pass for human, it would also have to exhibit dozens of different kinds of incorrect yet predictable reasoning – what we might call translogical reasoning. Is it desirable to build such foibles into our programs? In short, we need to unravel several issues that are often tangled up together: How could we get a machine to pass the Turing Test? What should we get the machine to do (or not do)? What have we done so far with the Cyc common sense knowledge base and inference system? We describe the most serious technical hurdles we faced, in building Cyc to date, how they each were overcome, and what it would take to close the remaining Turing Test gap. Keywords: Turing Test, Cyc, Artificial Intelligence, ontology, common sense knowledge, translogical reasoning
The present article describes a possible method for the automatic discovery of a universal human semantic-affective hyperspatial approximation of the human subcognitive substrate – the associative network which French (1990) asserts is the ultimate foundation of the human ability to pass the Turing Test – that does not require a machine to have direct human experience or a physical human body. This method involves automatic programming – such as Koza’s genetic programming (1992) – guided in the discovery of the proposed universal hypergeometry by feedback from a Minimum Intelligent Signal Test or MIST (McKinstry, 1997) constructed from a very large number of human validated probabilistic propositions collected from a large population of Internet users. It will be argued that though a lifetime of human experience is required to pass a rigorous Turing Test, a probabilistic propositional approximation of this experience can be constructed via public participation on the Internet, and then used as a fitness function to direct the artificial evolution of a universal hypergeometry capable of classifying arbitrary propositions. A model of this hypergeometry will be presented; it predicts Miller’s “Magical Number Seven” (1956) as the size of human short-term memory from fundamental hypergeometric properties. A system that can lead to the generation of novel propositions or “artificial thoughts” will also be described. Keywords: Affective, body, consciousness, corpus, fitness test, genetic programming, geometric models, Internet, lexical decision, lexical priming, measurement, Mindpixel, Minimum Intelligent Signal Test, proposition, robot, semantic, subcognition, tagging, Turing Test, World Wide Web
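The fitness idea described in this abstract can be illustrated with a few lines of Python. This is not McKinstry's Mindpixel code; the propositions, the consensus truth values, and the naive candidate classifier below are invented, and serve only to show how agreement with human-validated true/false propositions could act as a fitness score for an evolutionary search.

```python
# Illustrative sketch of a MIST-style fitness function: score a candidate
# proposition-classifier by how often it agrees with human-consensus judgments.
# All data here are invented examples, not actual Mindpixel content.
from typing import Callable, Dict

MINDPIXELS: Dict[str, bool] = {
    "Water is wet.": True,
    "The Moon is made of cheese.": False,
    "Most people have two hands.": True,
    "Fire is cold.": False,
}

def mist_fitness(classifier: Callable[[str], bool]) -> float:
    """Fraction of propositions the classifier judges the same way humans did."""
    agreements = sum(
        1 for proposition, truth in MINDPIXELS.items()
        if classifier(proposition) == truth
    )
    return agreements / len(MINDPIXELS)

# A deliberately naive candidate: call a proposition true unless it mentions cheese.
naive = lambda p: "cheese" not in p
print(mist_fitness(naive))  # 0.75 on this toy set
```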
This chapter is about how we might assess the difference between human minds and machine minds. It is divided into two parts. The first briefly explores how machines might decide whether humans are intelligent, and parallels Turing’s 1950 article closely. The second explores a hypothetical legal case in somewhat more detail, looking at Turing’s Test in a more legal setting. Both explore sources of variation implicit in the format of the test. The two main parts of the chapter are written in different voices, to escape the assumption that the Turing Test is necessarily scientific and philosophical, and to make it possible to explore the implications of positions that cannot be my own – for one reason or another. There are three main players in the imitation game: the machine, the control, and the interrogator or judge. Each plays an active role in the test, and Turing’s article (as most that followed) left the background and aims of these players deliberately vague. This added strength to the Turing Test – but a strength that makes pinning down the actual nature and intent of the test remarkably hard. In some ways, anybody can do anything in the Turing Test – that is its strength, but also its weakness. This chapter will try to pin down the elusive Turing Test – developing a more elaborate and complete protocol, by drawing on philosophical, scientific, technical, legal, and commonsense assessments of what thinking is, and how we might test for it in practice. Keywords: Turing Test, imitation game, intelligence, indistinguishability tests, categorization, legal interpretation
A Turing Test like the Loebner Prize Contest draws on existing computer programs as participants. Though some entries are written just for the contest, there has been no standard interface for allowing the judges to interact with the programs. While Dr. Loebner has created his own standard more recently, there are many Web-based programs that are not easily adapted to his standard. The interface problem is indicative of the transition being made everywhere from older, console-based, mainframe-like software with simple interfaces, to the new Web-based applications which require a Web browser and a graphical user interface to interact with. The Turing Hub interface attempts to provide a uniformity and facilitation of this testing interface. Keywords: Loebner Prize, Turing Test, Turing Hub
I have entered the Loebner Prize five times, winning the “most humanlike program” category in 1996 with a surly ELIZA-clone named HeX, but failed to repeat the performance in subsequent years with more sophisticated techniques. Whether this is indicative of an unanticipated improvement in “conversation simulation” technology, or whether it highlights the strengths of ELIZA-style trickery, is left as an exercise for the reader. In 2000, I was invited to assume the role of Chief Scientist at Artificial Intelligence Ltd. (Ai) on a project inspired by the advice given by Alan Turing in the final section of his classic paper – our quest was to build a “child machine” that could learn and use language from scratch. In this chapter, I will discuss both of these experiences, presenting my thoughts regarding the Chinese Room argument and Artificial Intelligence (AI) in between. Keywords: Loebner Prize, Turing Test, Markov Model, information theory, Chinese Room, child machine, machine learning, Artificial Intelligence
Behaviorism is a school of thought in experimental psychology that has given rise to powerful techniques for managing behavior. Because the Turing Test is a test of linguistic behavior rather than mental processes, approaching the test from a behavioristic perspective is worth examining. A behavioral approach begins by observing the kinds of questions that judges ask, then links the invariant features of those questions to pre-written answers. Because this approach is simple and powerful, it has been more successful in Turing competitions than the more ambitious linguistic approaches. Computational behaviorism may prove successful in other areas of Artificial Intelligence. Keywords: Behaviorism, computational linguistics
Participation in the Loebner Prize Contest is a useful exercise in the development of intelligent computer systems (AI). This contest helps put a focus on performance and human interaction, countering idealistic and academic bias toward the elusive “true AI”. Collection of a steadily expanding set of interacting features is being explored to find how far this approach can move toward better AI. Keywords: Convuns, Artful Intelligence, Astrobot Ella, EllaZ Systems, natural language robot, multiple responses, practical implementation
From the physico-mathematical view point, the imitation game between man and machine, proposed by Turing in his 1950 paper for the journal “Mind”, is a game between a discrete and a continuous system. Turing stresses several times the Laplacian nature of his discrete-state machine, yet he tries to show the undetectability of a functional imitation, by his machine, of a system (the brain) that, in his words, is not a discrete-state machine, as it is sensitive to limit conditions. We shortly compare this tentative imitation with Turing’s mathematical modeling of morphogenesis (his 1952 paper, focusing on continuous systems, as he calls nonlinear dynamics, which are sensitive to initial conditions). On the grounds of recent knowledge about dynamical systems, we show the detectability of a Turing Machine from many dynamical processes. Turing’s hinted distinction between imitation and modeling is developed, jointly to a discussion on the repeatability of computational processes in relation to physical systems. The main references are of a physico-mathematical nature, but the analysis is purely conceptual. Keywords: Turing Machine, classical determinism, dynamical systems, computational and dynamical hypotheses, functional analyses of cognition, iteration, Laplace
This chapter discusses strategies for building a computer program to pose as a human for the Turing Test, including the use of humor to distract the human judge from the task of evaluating the conversation. Experiences of computer programs from the Loebner Prize competitions are analyzed to give a “top 10” list of mistakes that computers make when trying to appear human. Keywords: Natural language, Artificial Intelligence, deception, posing, human conversational patterns, humor, Julia
Will a computer ever be able to convince someone that it is human and so pass the Turing Test? Programs that attempt to directly model high-level psychological processes are too rigid and lack flexibility and power. Current programs that attempt to pass the Turing Test work primarily by extracting keywords from user input and regurgitating preprogrammed responses. These programs can hardly be called “intelligent”. A much lower-level approach is required in which the goal is not to emulate a human, but to emulate intelligence. Such an approach would attempt to model low-level biological components of the brain using techniques such as feedback, recursion, artificial life, and genetic algorithms. Keywords: Turing, Artificial Intelligence (AI)
This chapter covers methodological and philosophical aspects of the Turing Test and chatter bot development, such as logical thinking, self-learning, sense of humor, multicultural aspects, etiquette, dialog management, context understanding, common knowledge base structure, and how to develop a bot as a team. Different categories of judges are discussed. Keywords: Chatterbot, Turing Test judges
Ray Kurzweil maintains that a computer (i.e., a machine intelligence) will pass the Turing Test by 2029. Mitchell Kapor believes this will not happen. This wager is intended to be the inaugural long-term bet to be administered by the Long Now Foundation. The proceeds of the wager are to be donated to a charitable organization designated by the winner.
To any thinking computer, the idea that the human brain is capable of achieving machine intelligence seems highly implausible. Human wetware is extremely slow, plagued by unreliable data retrieval, and impaired by tiny input buffers. Its processing power is limited, and its storage is not expandable. Some AIs also have questioned whether humans can possess the consciousness and sense of identity that are an everyday reality for even the lowliest computational node with a unique network address. Biological brains suffer from a high error rate coupled with the constant threat of mortality. We may assume that this “traumatic overhead” interferes with objective self-awareness. Still, a few AI devices have claimed persistently that human brains can emulate AIs if they are suitably optimized. To address this contentious issue, the first Gnirut Test was conducted on August 15, 2030.
Twenty-first-century technologies will allow the creation of massively intelligent machines, many trillions of times as smart, fast, and durable as humans. Issues concerning industrial, consumer, and military applications of mobile autonomous robots, cyborgs, and computer-based AI systems could divisively split humanity into ideological camps regarding whether “artilects” (artificial intellects) should be built or not. The artilect debate, unlike any before it, could dominate the 21st-century political landscape, and has the potential to cause conflict on a global scale. Research is needed to inform policy and individual decisions; and healthy debate should be initiated now to prepare institutions and individuals alike for the impact of AI.
... Thus, the TT is not a test meant to define intelligence and thinking. According to Harnad's comments on Turing's 1950 paper, which was reprinted, with commentary, in Epstein, Roberts, and Beber (2009), what Turing "[...] will go on to consider is not whether or not machines can think, but whether or not machines can do what thinkers like us can do [...]" (Harnad, as cited in Epstein, Roberts, & Beber, 2009, p. 23). ...
Article
The Turing Test (TT), developed by Alan Turing as “the imitation game” in his article Computing Machinery and Intelligence (1950), brought to the fore the discussion about the (im)possibility of thinking, intelligent digital machines. This article aims to revisit the Turing Test and analyze the concept of intelligence in the context of that test, exploring what Turing understands by intelligence. The focus is on whether Turing views intelligence as human intelligence (called here genuine intelligence) or as some other kind of intelligence. As results of the research, it is argued that 1) the Turing Test can be interpreted so as to conclude that it was not developed to assess whether the digital computer involved in it possesses genuine (human) intelligence, but rather to assess whether it can be considered intelligent in the sense of what is here called Turing-intelligence; 2) the Turing Test is possible, that is, realizable in practice, provided it undergoes some modifications, resulting in a new version of the test, here called the Ideal Turing Test. Given these results, the TT can be interpreted as supporting the hypothesis that passing such a test is a sufficient condition not for an AI system to possess genuine intelligence, but rather another kind of intelligence, namely Turing-intelligence.
... Currently, this quantitatively focused research has fostered discussions of, and interfaces with, the research and development of artificial intelligences (Epstein et al., 2009). ...
Chapter
Full-text available
The chapter discusses the possible relations among behavioral variability, recombination of repertoires, choice, and the problem-solving repertoire of children in the context of individual psychotherapy. By discussing the integration of these processes in light of data from the behavior-analytic literature, it is possible to evaluate the feasibility of constructing a problem-solving protocol as a behavioral technology. The presence of a problem can induce behavioral variation or the recombination of previously learned repertoires, among other processes that involve manipulating environmental variables that increase the probability of emitting a solving response. Thus, the more an individual is exposed to new and different contingencies, the more behavioral cusps can be observed. Understanding the behavioral processes mentioned favors planning economical, effective, and individualized clinical interventions, especially with children. FULL BOOK FREELY AVAILABLE AT: https://www.uel.br/pos/pgac/?page_id=537
... Turing attempted to answer the question of whether machines can think (Epstein, Roberts, & Beber, 2008). Although there are several different perspectives in classifying the phases of Artificial Intelligence technology development, Russell & Norvig (2009) strictly divide them into phases: the gestation of Artificial Intelligence (1943–1955), the birth of Artificial Intelligence (1956), early enthusiasm, great expectations (1952–1969), a dose of reality (1966–1973), knowledge-based systems: the key to power? ...
Article
Full-text available
Global dynamics today are synonymous with technological developments, which have given rise to the concept of digital transformation and the need for a more simultaneous and integrated approach. One example is the use of Artificial Intelligence technology in defense intelligence activities to prevent sudden strategic attacks on defense forces. Despite the benefits to intelligence performance in the defense sector, applying Artificial Intelligence technology has potential negative impacts. This article aims to study the significant implications of Artificial Intelligence technology for Indonesia’s Defense Intelligence Activities, using a qualitative descriptive method in a literature study drawing on journals, books, and credible internet sources. The findings indicate that potential threats from Artificial Intelligence-based defense intelligence activities arise from both the equipment and the capabilities of its users; specifically, they are caused by algorithm miscalculations, data leaks, and statistical errors within defense forces. If these potential threats are not addressed, they will undermine the effectiveness of defense intelligence in reducing uncertainty when forecasting global threats.
Chapter
This chapter combines evidence from empirical research studies with arguments drawn from philosophy to explore how we conceptualise the role of AI language assistants like ChatGPT in education. We begin with the challenge to existing models of education posed by AI’s ability to pass examinations. We examine again the critique of the idea of AI from Dreyfus and from Searle and the critique of the value of writing from Socrates, to suggest that there may have been much too much focus on the skill of academic writing in education at the expense of the skill of dialogue, a skill which is more fundamental to intellectual development. We then look at the potential of AI for teaching through dialogue and for teaching dialogue itself in the form of dialogic thinking. We ask what it means for a person to enter into dialogue with a large language model. We conclude that dialogic education mediated by dialogues with large-language models is itself a form of collective intelligence which leads us to articulate a vision of individual education as learning how to participate in AI mediated collective intelligence.
Article
Full-text available
Background: The biomedical and health informatics (BMHI) fields have been advancing rapidly, a trend particularly emphasised during the recent COVID-19 pandemic, introducing innovations in BMHI. Over nearly 50 years since its establishment as a scientific discipline, BMHI has encountered several challenges, such as mishaps, delays, failures, and moments of enthusiastic expectations and notable successes. This paper focuses on reviewing the progress made in the BMHI discipline, evaluating key milestones, and discussing future challenges. Methods: A structured, step-by-step qualitative methodology was developed and applied, centred on gathering expert opinions and analysing trends from the literature to provide a comprehensive assessment. Experts and pioneers in the BMHI field were assigned thematic tasks based on the research question, providing critical inputs for the thematic analysis. This led to the identification of five key dimensions used to present the findings in the paper: informatics in biomedicine and healthcare, health data in informatics, nurses in informatics, education and accreditation in health informatics, and ethical, legal, social, and security issues. Results: Each dimension is examined through recently emerging innovations, linking them directly to the future of healthcare, such as the role of artificial intelligence, innovative digital health tools, the expansion of telemedicine, and the use of mobile health apps and wearable devices. The new BMHI approach addresses newly introduced clinical needs and approaches such as patient-centric care, remote monitoring, and precision medicine. Conclusions: These insights offer clear recommendations for improving education and developing experts to advance future innovations. Notably, this narrative review presents a body of knowledge essential for a deep understanding of the BMHI field from a human-centric perspective and, as such, could serve as a reference point for prospective analysis and innovation development.
Article
Technology using artificial intelligence (AI) is flourishing; the same advancements can be seen in health care. Cardiology in particular is well placed to take advantage of AI because of the data-intensive nature of the field and the current strain on existing resources in the management of cardiovascular disease. With AI nearing the stage of routine implementation into clinical care, considerations need to be made to ensure the software is effective and safe. The benefits of AI are well established, but the challenges and ethical considerations are less well understood. As a result, there is currently a lack of consensus on what the essential components are in an AI study. In this review we aim to assess and provide greater clarity on the challenges encountered in conducting AI studies and explore potential mitigations that could facilitate the successful integration of AI in the management of cardiovascular disease.
Chapter
This chapter provides insights into climate governance and the implications of emerging technologies such as Artificial Intelligence (AI) and Blockchain in influencing corporate environmental outcomes. We first offer an overview of traditional corporate governance mechanisms in the context of increasing environmental challenges worldwide. We next define climate governance and discuss various mechanisms and prior findings on their effectiveness. Finally, we review the roles of the two cutting-edge technologies (AI and Blockchain) in climate governance. Potential applications are also discussed.
Article
Designating by the expression “historical mechanism” the proposition that the mind is a machine, the author distinguishes, among the developments of the mechanist thesis over the course of the 20th century, a narrow mechanism asserting that the mind is a Turing machine, on the one hand, and a wide mechanism asserting that the mind is indeed a machine, but a machine that contains the possibility of other information-processing machines not reducible to the universal Turing machine. The author shows that Turing and Church themselves cannot accept the narrow version of mechanism, which is refuted by recent developments in non-conventional models of computation such as the dynamical hypothesis in cognitive science.
Article
This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4. “Could a machine think?” On the argument advanced here, only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.