August 1982
Report No. STAN-CS-82-956
Also numbered HPP-82-29

Artificial Intelligence: Cognition as Computation

by

Avron Barr

Department of Computer Science
Stanford University
Stanford, CA 94305
SECURITY CLASSIFICATION OF THIS PAGE (When Data Entered)

REPORT DOCUMENTATION PAGE (DD Form 1473) - READ INSTRUCTIONS BEFORE COMPLETING FORM

1. Report Number: see face of report. 2. Govt Accession No. 3. Recipient's Catalog Number.
4. Title (and Subtitle): see report. 5. Type of Report and Period Covered: see report. 6. Performing Org. Report Number.
7. Author(s): see report and work unit. 8. Contract or Grant Number(s): see part 1 of work unit.
9. Performing Organization Name and Address: coordinate with work unit. 10. Program Element, Project, Task Area and Work Unit Numbers: see part 10 of work unit.
11. Controlling Office Name and Address: see report; ONR code in part 1 of work unit. 12. Report Date: see report. 13. Number of Pages.
14. Monitoring Agency Name and Address (if different from Controlling Office). 15. Security Class. (of this report): UNCLASSIFIED. 15a. Declassification/Downgrading Schedule.
16. Distribution Statement (of this Report): Approved for public release; distribution unlimited.
17. Distribution Statement (of the abstract entered in Block 20, if different from Report).
18. Supplementary Notes: This report was generated with the use of government funds. Work unit is attached.
19. Key Words: see report and work unit.
20. Abstract: see report and work unit.

DD Form 1473. Edition of 1 Nov 65 is obsolete. UNCLASSIFIED.
RESEARCH AND TECHNOLOGY WORK UNIT SUMMARY (DD Form 1498)

DATE OF SUMMARY: 22 JUN 82. KIND OF SUMMARY: TERMINATED. SUMMARY SECURITY: UNCLASSIFIED. WORK SECURITY: UNCLASSIFIED. DISTRIBUTION LIMITATION: UNLIMITED. CONTRACTOR ACCESS: YES. WORK UNIT NUMBER: NR 154-436.

TITLE: (U) ... TECHNOLOGY: TUTORING AND PROBLEM-SOLVING STRATEGIES IN INTELLIGENT COMPUTER-AIDED INSTRUCTION.

WORK UNIT START DATE: JAN 79. PRIMARY FUNDING AGENCY: DEPT. OF DEFENSE (NAVY). PERFORMANCE METHOD: CONTRACT. CONTRACT TYPE: COST TYPE.

RESPONSIBLE DOD ORGANIZATION: OFFICE OF NAVAL RESEARCH, ARLINGTON, VA 22217. RESPONSIBLE INDIVIDUAL PHONE: 202-696-4504.

PERFORMING ORGANIZATION: STANFORD UNIVERSITY, DEPT. OF COMPUTER SCIENCE, STANFORD, CA. PRINCIPAL INVESTIGATOR PHONE: 415-497-0355. ASSOCIATE INVESTIGATORS: CLANCEY, W.; BARR, A.

KEYWORDS: (U) PRODUCTION SYSTEMS; (U) COMPUTER-AIDED INSTRUCTION; (U) CAI; (U) ARTIFICIAL INTELLIGENCE; (U) MEDICAL DIAGNOSIS; (U) MEDICAL TRAINING; (U) MYCIN.

TECHNICAL OBJECTIVE: (U) NAVY ... JOBS (E.G., ELECTRONIC TROUBLESHOOTING) REQUIRE HIGHLY DEVELOPED DIAGNOSTIC PROBLEM-SOLVING SKILLS. THIS WORK HAS SEVERAL PRIMARY OBJECTIVES: (1) TO INVESTIGATE ... METHODS FOR TEACHING ...; (2) TO DEVISE FORMAL, COMPUTATIONAL MODELS FOR PLANNING AND EXECUTING TUTORIAL DIALOGS; (3) TO EVALUATE THE EFFECTIVENESS OF TUTORIAL INTERACTION; AND (4) TO ... HOW LEARNING OF ... KNOWLEDGE ...

APPROACH: (U) AN INITIAL VERSION OF AN INSTRUCTIONAL SYSTEM FOR TEACHING THE DIAGNOSIS OF BACTERIAL INFECTIONS WILL BE TESTED ON STUDENTS OF VARYING ABILITY, AND THE RESULTS USED TO ... A SET OF DOMAIN-INDEPENDENT TEACHING RULES ... TO EVALUATE THE EFFECTIVENESS OF ... TUTORIAL STRATEGIES ... CAPABILITIES FOR PLANNING AND EXECUTING INSTRUCTIONAL DIALOGS, WITH SPECIFIC ATTENTION TO INDIVIDUAL ... NEEDS, WILL BE INCORPORATED INTO THE INSTRUCTIONAL SYSTEM. ONE OR MORE ADDITIONAL SYSTEMS FOR TEACHING DIAGNOSIS IN THE CONTEXT OF ELECTRONIC OR MECHANICAL TROUBLESHOOTING PROBLEMS WILL BE DEVELOPED AND EXPERIMENTED WITH TO EVALUATE THE GENERALITY OF THE KERNEL INSTRUCTIONAL SYSTEM.

PROGRESS: (U) A DOMAIN-INDEPENDENT MODEL OF DIAGNOSTIC REASONING WAS FORMALIZED THAT THE GUIDON TUTORIAL MONITOR WILL EXPLICITLY CONVEY TO STUDENTS. THE TOP-LEVEL PLAN EMPHASIZES COMPLETENESS AND CAREFUL REFINEMENT OF THE HYPOTHESIS SPACE. GUIDON'S MODEL OF THE STUDENT HAS BEEN REDESIGNED TO DETECT USAGE PATTERNS OF THE STUDENT, AS THE FIRST STEP IN CONSTRUCTING A MODEL OF HIS PROBLEM-SOLVING: EXPLORATIVE PROBLEM-SOLVING, KNOWLEDGE-LIMITED PROBLEM-SOLVING, AND STRATEGICAL ...

DESCRIPTORS: (U) BACTERIAL DISEASES; (U) ARTIFICIAL INTELLIGENCE; (U) TRAINING; (U) TEACHING METHODS; (U) STUDENTS; (U) SKILLS; (U) PRODUCTION; (U) PROBLEM SOLVING; (U) PERSONNEL; (U) MEDICINE; (U) MATHEMATICAL MODELS; (U) LEARNING; (U) INDIVIDUALIZED TRAINING; (U) DIAGNOSIS (MEDICINE); (U) DIAGNOSIS (GENERAL); (U) COMPUTER AIDED INSTRUCTION; (U) COMPUTATION.

PROCESSING DATE: 30 JUN 82
"August
1982
Artificial
Intelligence:
Cognition
as
Computation'
Avron
Barr
Ii.
The
ability
and
compulsion
to
know
are
as
characteristic
of
our
human
nature
as
are
our
physical
posture
and
our
languages.
Knowledge
and
intelligence,
as
scientific
concepts,
are
used
to
describe
how
an
organism's
experience
appears
to
mediate
its
behavior.
This
report
discusses
the
relation
between
artificial
intelligence
(AI)
research
in
computer
science
and
the
approaches
of
other
disciplines
that
study
the
nature
of
intelligence, cognition, and mind. The state
of
AI
after
25
years
of
work
in
the
field
is
reviewed, as are
the
views
of
its
practitioners
about
its relation
to
cognate
disciplines.
The
report
concludes
with
a
discussion
of
some
possible
effects
on
our
scientific
work
of
emerging
commercial
applications
of
Al
technology,
that
is,
machines
that
can
know
and
can
take
part
in
human
cognitive activities.
Artificial
Intelligence
Artificial
intelligence
is
the
part
of
computer
science
concerned
with
creating
and
studying
computer
programs
that
exhibit
behavioral
characteristics
we
identify
as
intelligent
in
human
behavior: knowing, reasoning,
learning, problem
solving,
language
understanding,
and
so
on.
Since
the
field's
emergence
in
the
mid-1950s,
AI
researchers
have
developed
dozens
of
programs
and
programming
techniques that
support
some
sort
of
"intelligent"
behavior.
Although
there
are
many
attitudes
expressed
by
researchers
in
the
field,
most
of
these
people
are
motivated
in
their
work
on
intelligent
computer
programs
by
the
thought
that
this
work
may
lead
to
a
new
understanding
of
mind:
AI
has
also
embraced
the
larger
scientific
goal
of
constructing
an
information-processing
theory
of
intelligence.
If
such
a
science
of
intelligence
could
be
developed,
it
could
guide
the
design
of
intelligent
machines
as
well
as
explicate
intelligent
behavior
as
it
occurs
in
humans
and other
animals. (Nilsson, 1980, p. 2)
¹To appear in The Study of Information: Interdisciplinary Messages, edited by Fritz Machlup and Una Mansfield, and published by John Wiley and Sons, New York, 1983.
Whether
or
not
it
leads
to
a
better
understanding
of
the
mind,
there
is
every
evidence
that
current
work
in
AI
will
lead
to
a
new
intelligent
technology
that
may
have
dramatic
effects
on
our
society.
Experimental
AI
systems
have
already
generated
interest
and
enthusiasm
in
industry
and
are
being
developed
commercially.
These experimental systems include programs that

* solve some hard problems in chemistry, biology, geology, engineering, and medicine at human-expert levels of performance;
* manipulate robotic devices to perform some useful sensory-motor tasks; and
* answer questions posed in restricted dialects of English (French, Japanese, etc.).
Useful
AI
programs
will
play
an
important
part
in
the
evolution
of
the
role
of
computers
in
our
lives, a
role
that
has
changed,
in
our
lifetimes,
from
remote
to
commonplace
and
that,
if
current
expectations
about
computing
cost
and
power
are
correct,
is
likely
to
evolve
further
from
useful
to
essential.
The
Origins
of
Artificial
Intelligence
Scientific
fields
emerge
as
the
concerns
of
scientists congeal
around
various
phenomena.
Sciences
are
not
defined,
they
are
recognized.
(Newell,
1973a,
p. 1)
The
intellectual
currents
of
the
times
help
direct
scientists
to
the
study
of
certain
phenomena.
For
the
evolution
of
AI,
the
two
most
important
forces
in
the
intellectual environment
of
the
1930s
and
1940s
were
mathematical
logic, which
had
been
under rapid
development
since
the
end
of
the
19th
century,
and
new
ideas
about
computation.
The logical
systems
of
Frege,
Whitehead
and
Russell,
Tarski,
and
others
showed
that
some
aspects
of
reasoning
could
be
formalized
in
a
relatively
simple framework:
The
fundamental
contribution
was
to
demonstrate by
example
that
the
manipulation
of
symbols
(at
least
some
manipulation
of
some
symbols)
could
be
described
in
terms
of
specific,
concrete
processes
quite
as
readily
as
could
the
manipulation
of
pine
boards
in
a
carpenter
shop....
Formal
logic,
if
it
showed
nothing
else,
showed
that
ideas-at
least
some
ideas-could
be
represented
by
symbols, and
that
these
symbols
could
be
altered
in
meaningful
ways
by precisely
defined
processes.
(Newell
and
Simon,
1972,
p.
877)
Mathematical logic continues
to
be
an
active
area
of
investigation
in
AI,
in
part
because
general-purpose,
logico-deductive
systems
have
been
successfully
implemented
on
computers.
But
even
before
the
advent
of
computers,
the
mathematical
formalization
of
logical
reasoning
shaped
people's
conception
of
the
relation
between
computation
and
intelligence.
Ideas
about
the
nature
of
computation,
due
to
Church,
Turing,
and others,
provided
the
link
between
the
notion
of
formalization
of
reasoning and
the
computing
machines
about
to
be
invented. What
was
essential
in
this
work
was
the
abstract
conception
of
computation
as
symbol
processing.
The
first
computers
were
numerical
calculators
that
did
not
appear
to
embody
much
intelligence
at
all.
But
before
these
machines
were
even
designed,
Church
and
Turing
had
seen
that
numbers
were
an
inessential
aspect
of
computation; they
were
just
one
way
of
interpreting
the
internal
states
of
the
machine:
In
their
striving
to
handle
symbols
rigorously
and
objectively-as
objects-logicians
became
more
and
more
explicit
in
describing
the
processing
system
that
was
supposed
to
manipulate
the
symbols.
In
1936,
Alan
Turing,
an
English
logician,
described
the
processor,
now
known
as
the
Turing
machine,
that
is
regarded
as
the
culmination
of
this
drive
toward formalization.
(Newell
and Simon,
1972,
p.
878)
The
model
of
a
Turing
machine
contains
within
it
the
notions
both
of
what
can
be
computed
and
of
universal
machines-computers
that
can
do
anything
that
can
be
done
by
any
machine.
(Newell
and
Simon,
1976,
p.
117)
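Turing's abstraction is concrete enough to sketch in a few lines of present-day code. The following toy simulator is my illustration, not anything from the report: the rule table (here, binary increment) is an invented example, but the simulator itself is fixed and will run any rule table, which is the sense in which such a machine is universal.

```python
# A minimal Turing-machine sketch (illustrative, assuming an invented
# rule table): rules map (state, symbol) -> (symbol to write, head move,
# next state) over an unbounded tape.

def run_turing_machine(rules, tape, state="start"):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, "_")            # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, "_") for i in span).strip("_")

# Rules for incrementing a binary number: scan right, then carry leftward.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus carry: write 0, carry on
    ("carry", "0"): ("1", "L", "done"),    # absorb the carry
    ("carry", "_"): ("1", "L", "done"),    # overflow: new leading 1
    ("done",  "0"): ("0", "L", "done"),
    ("done",  "1"): ("1", "L", "done"),
    ("done",  "_"): ("_", "R", "halt"),
}

print(run_turing_machine(increment, "1011"))  # prints "1100"
```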
Turing,
who
has been
called
the
father
of
AI,
not
only
invented
a
simple,
universal,
and
nonnumerical
model
of
computation
but
also
argued
directly
for
the
possibility
that
computational
mechanisms
could
behave
in
a
way
that
would
be
perceived
as
intelligent:
Thought
was
still
wholly
intangible
and
ineffable
until
modern
formal logic interpreted
it
as
the
manipulation
of
formal
tokens.
And
it
seemed
still
to
inhabit
mainly
the
heaven
of
Platonic
ideals,
or
the
equally
obscure
spaces
of
the
human
mind,
until
computers
taught
us
how
symbols
could
be
processed
by machines.
A.
M.
Turing
...
made
his
great
contributions
at
the
mid-century
crossroads
of
these
developments that
led
from modern
logic
to
the
computer.
(Newell
and
Simon,
1976,
p.
125)
As
Allen
Newell
and
Herbert
Simon
point
out
in
the
"'Ilistorical
Epilogue"
to
their
classic
work
Human
Problem
Solving
(1972),
there
were
other
strong
intellectual
currents
from
several
directions
that converged
in
the
middle
of
this
century
in
the
people
who
founded
the science
of
artificial
intelligence.
The
concepts
of
cybernetics
and
self-organizing
systems
of
Wiener,
McCulloch,
and
others
dealt
with
the
macroscopic
behavior
of
"locally
simple"
systems.
The
cyberneticians
influenced
many
fields
because
their
thinking
spanned
many
fields,
linking
ideas
about
the
workings
of
the
nervous
system
with
information
theory
and
control
theory,
as
well
as
with
logic
and
computation.
Their
ideas
were
part
of
the
zeitgeist,
but
in
many
cases
the
cyberneticians influenced
early workers
in
AI
more
directly-as
their
teachers.
What eventually
connected
these
diverse
ideas
was,
of
course,
the
development
of
the
computing
machines
themselves,
conceived by
Babbage
and
guided
in
this
century
by
Turing,
von
Neumann,
and
others.
It
was
not
long after
the
machines
became
available
that
people
began
to
try
to
write
programs
to
solve
puzzles,
play
chess,
and
translate
texts
from
one
language
to
another-the
first
AI
programs.
What
was
it
about
computers that triggered
the
development
of
Al?
Many
ideas
about
computing
relevant
to
AI
emerged
in
the
early
designs-ideas
about
memories
and
processors,
about
systems
and
control,
and
about
levels
of
languages
and
programs. But the single
attribute
of
the
new
machines
that
brought
about
the
emergence
of
the
new science
was
their
inherent
potential
for
complexity, encouraging
(in
several
fields)
the
development
of
new
and
more
direct
ways
of
describing complex
processes-in
terms
of
complicated
data
structures
and
procedures
with
hundreds
of
different
steps:
Problem
solving
behaviors,
even
in
the
relatively well-structured
task
environments
that
we
have
used
in
our
research,
have
generally
been
regarded
as
highly
complex
forms
of
human
behavior-so
complex
that
for
a
whole
generation
they
were
usually
avoided
in
the
psychological
laboratory
in favor
of
behaviors
that
seemed
to
be
simple
....
The
appearance
of
the
modern
computer
at
the
end
of
World
War
II
gave
us
and
other
researchers
the
courage
to
return
to
complex
cognitive
performances
as
our
source
of
data
... a
device
capable
of
symbol-
manipulating
behavior
at
levels
of
complexity
and
generality
unprecedented
for
man-made
mechanisms
....
This
was
part
of
the
general
insight
of
cybernetics,
delayed
by
ten
years
and
applied
to
discrete
symbolic behavior
rather
than
to
continuous
feedback
systems.
(Newell
and
Simon,
1972,
pp.
869-870)
Computers,
Complexity, and
Intelligence
As Pamela
McCorduck
notes
in
her
entertaining
historical
study
of
AI, Machines
Who
Think
(1979),
there
has
been
a
longstanding
connection
between
the
idea
of
complex
mechanical
devices and
intelligence.
Starting
with
the
fabulously
intricate
clocks
and
mechanical automata
of
past
centuries,
people
have
made
an
intuitive
link
between
the
complexity
of
a
machine's
operation
and
some
aspects
of
their
own
mental
life.
Over
the last
few
centuries,
new
technologies
have
resulted
in
a
dramatic
increase
in
the
complexity
we
can
achieve
in
the
things
we
build.
Modern
computer
systems
are
more
complex
by
several
orders
of
magnitude
than
anything
humans
have
built
before.
The first
work
on
computers
in
this
century
focused on
the
numerical
computations
that
had
previously
been
performed
collaboratively
by
teams
of
hundreds
of
clerks,
organized
so
that
each
did
one
small
subcalculation and passed
the
results
on
to
the
clerk
at
the
next
desk.
Not
long
after
the
dramatic
success
of
the
first digital
computers
with
these
elaborate
calculations,
people
began
to
explore
the
possibility
of
more
generally
intelligent
mechanical
behavior-could
machines
play
chess,
prove
theorems,
or
translate
languages?
They
could,
but
not
very
well.
The
computer
performs
its
calculations
following
the
step-by-step
instructions
it
is
given; the
method
must
be
specified
in complete
detail.
Most
computer
scientists
are
concerned
with
designing
new
algorithms,
new languages,
and
new
machines
for
performing
tasks
like
solving
equations
and
alphabetizing
lists-tasks
that
people
perform
using methods
they
can
explicate. However,
people
cannot
specify how
they
decide
which
move
to
make
in
a
game
of
chess
or
how
they
determine
that
two
sentences
"mean
the
same
thing."
The
realization
that
the
detailed
steps
of
almost
all
intelligent
human
activity
were
unknown marked
the
beginning
of
artificial
intelligence
as
a
separate
part
of
computer
science.
AI
researchers
investigate
different
kinds
of
computation,
and
different
ways
of
describing
computation,
in
an
attempt
not
just
to
create
intelligent
artifacts
but
also
to
understand
what intelligence
is. A
basic
tenet
of
AI
is
that
human
intellectual
capacity
will
best
be
described
in
the same
terms
as
the
ones
researchers
invent
to
describe
their
programs.
However,
they are
just
beginning
to
learn
enough
about
those
programs
to
know
how
to
describe
them
scientifically-in
terms
of
concepts
that
illuminate
their
nature
and
differentiate
among
fundamental
categories. These
ideas
about computation
have
been
developed
in
programs
that
perform
many
different
tasks,
sometimes
at
the
level
of
human
performance,
often at
a
much
lower
level.
Most
of
these
methods
are
obviously
not
the
same
as
the
ones
that
people
use
to
perform
the
tasks-some
of
them might
be.
The
Status
of
Artificial
Intelligence
Many
intelligent
activities
besides
numerical
calculation
and
information
retrieval
have been
carried
on
by
programs.
Many
key
aspects
of
thought-like
recognizing
people's
faces
and
reasoning
by
analogy-are
still
puzzles;
they
are
performed
so
unconsciously
by
people
that
adequate computational
mechanisms
have
not
been
postulated.
Some
of
the
successes,
as
well
as
some
of
the
failures, have
come
as
surprises.
We
will
list
here
some
of
the
aspects
of
intelligence investigated
in
AI
research
and
try
to
give
an
indication
of
the
stage
of
progress.
There
is
an
important
philosophical
point
here
that
will
be
sidestepped.
Doing arithmetic
or
learning
the
capitals
of
all
the
countries
of
the
world,
for
example,
are
certainly
activities
that
indicate
intelligence
in humans.
The
issue
here
is
whether
a
computer
system
that
can
perform these
tasks
can
be said
to
know
or
understand
anything.
This point
has been
discussed at
length
(see,
e.g.,
Searle,
1980,
and
appended
commentary)
and
will
be
avoided here
by
describing
the
behaviors
themselves
as
intelligent,
without
commitment
as
to how to
describe
the
machines
that
produce
them.
Problem
solving.
The
first
big "successes"
in
AI
were
programs that
could
solve
puzzles
and
play
games.
Techniques
such
as
looking
ahead
several
moves
and
dividing
difficult
problems
into
easier
subproblems
"I
6
evolved,
respectively,
into
the
fundamental
AI
techniques
of
search
and
problem
reduction.
Today's
programs
play
championship-level
checkers
and backgammon,
as
well
as
very
good
chess.
Another
problem-solving
program,
the
one
that
does
symbolic
evaluation
of
mathematical
functions,
performs
very
well
and
is
being
used
widely
by
scientists and
engineers.
Some
programs
can even
improve
their
own
performance
with
experience.
As discussed
below,
the open
questions
in
this
area
involve
abilities
that human
players
exhibit
but
cannot
articulate,
such
as
the
chess
master's
ability
to
see
the
board
configuration
in
terms
of
meaningful
patterns.
Another
basic
open
question
involves
the
original
conceptualization
of
a
problem,
called
in
AI
the
choice
of
problem
representation.
Humans
often
solve
a
problem
by
finding
a
way
of
thinking
about
it
that
makes
the
solution
easy. AI programs,
so
far,
must
be
told
how
to
think
about
the
problems
they solve
(i.e.,
the
space
in
which
to
search
for
the
solution).
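The flavor of these techniques is easy to convey with a toy game. The sketch below is my own illustration, far simpler than the championship-level programs mentioned above: minimax look-ahead applied to a subtraction game (take one to three sticks; whoever takes the last stick wins). Real game programs must cut the look-ahead off at some depth and apply a static evaluation function instead, since exhaustive search is impossible in games like chess.

```python
# Minimax look-ahead for a toy subtraction game (an invented example,
# not any program cited in the text). Scoring a position reduces the
# problem to the smaller games beneath it.

def minimax(sticks, maximizing):
    if sticks == 0:
        # The previous player took the last stick and won.
        return -1 if maximizing else +1
    scores = [minimax(sticks - take, not maximizing)
              for take in (1, 2, 3) if take <= sticks]
    return max(scores) if maximizing else min(scores)

def best_move(sticks):
    return max((take for take in (1, 2, 3) if take <= sticks),
               key=lambda take: minimax(sticks - take, maximizing=False))

print(best_move(10))  # prints 2, leaving 8 sticks (a losing position)
```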
Logical
reasoning.
Closely
related
to
problem
and
puzzle
solving
was
early
work
on
logical
deduction.
Programs
were
developed
that
could
"prove"
assertions
by
manipulating
a
data
base
of
facts,
each
represented
by
discrete
data structures
just
as
they
are
represented
by
formulas
in
mathematical
logic.
These
methods,
unlike
many
other
AI
techniques,
could
be
shown
to
be
complete
and
consistent.
That is,
given
a
set
of
facts,
the
programs
theoretically
could
prove
all
theorems
that
followed
from
the
facts,
and
only
those
theorems.
Logical
reasoning
has
been
one
of
the
most
persistently investigated
subareas
of
AI
research.
Of
particular
interest
are
the
problems
of
finding
ways
of
focusing
on
only
the
relevant
facts
from
a
large data
base
and
of
keeping
track
of
the
justifications
for
beliefs
and
updating
them
when
new
information
arrives.
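A minimal sketch of this style of deduction, with invented facts and rules standing in for a real theorem prover's data base, is forward chaining: apply every rule whose premises are already present until no new assertions appear.

```python
# Forward chaining over a data base of facts (a toy illustration with
# invented assertions, not a complete or consistent logic system).

facts = {"bird(tweety)", "small(tweety)"}
rules = [
    ({"bird(tweety)"}, "has_wings(tweety)"),
    ({"has_wings(tweety)", "small(tweety)"}, "can_fly(tweety)"),
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:                        # repeat until a fixed point
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print("can_fly(tweety)" in forward_chain(facts, rules))  # prints True
```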
Programming.
Although
perhaps
not
an
obviously
important
aspect
of
human
cognition,
programming
itself
is
an
important
area
of
research
in
AI.
Work
in
this
area,
called
automatic
programming,
has
investigated
systems
that
can
write computer
programs
from
a
variety
of
descriptions
of
their
purpose,
such
as
examples
of
input/output
pairs,
high-level
language
descriptions,
and
even
English-language
descriptions
of
algorithms.
Progress
has
been
limited
to
a
few,
fully
worked-out
examples.
Automatic-programming
research
may
result
not
only
in semiautomatcd
systems
for
software
development
but
also
in
AI
programs that learn
(i.e.,
modify
their
behavior)
by
modifying
their
own
code.
Related work in
the
theory
of
programs
is
fundamental
to
all
AI
research.
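One of those description styles, input/output pairs, can be suggested with a toy synthesizer that enumerates straight-line programs over a small set of operations until one fits every example. The operations and examples are invented for illustration; real automatic-programming systems are of course far more sophisticated.

```python
# Program synthesis from input/output examples by brute-force
# enumeration (a toy sketch under invented assumptions).

import itertools

OPS = {
    "add1":   lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def synthesize(examples, max_len=3):
    for length in range(1, max_len + 1):
        for program in itertools.product(OPS, repeat=length):
            def run(x, program=program):
                for op in program:
                    x = OPS[op](x)
                return x
            if all(run(x) == y for x, y in examples):
                return program            # first program fitting all pairs
    return None

# "Write the program that maps 2 -> 25 and 3 -> 49."
print(synthesize([(2, 25), (3, 49)]))     # ('double', 'add1', 'square')
```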
Language.
The
domain
of
language
understanding
was
also
investigated
by
early
AI
researchers
and
has
consistently
attracted
interest.
Programs
have
been
written
that
retrieve
information
from
a
data
base
in
response
to
questions
posed
in
English,
that
translate
sentences
from
one language to
another,
that
follow
instructions
or
paraphrase
statements
given
in
English,
and
that
acquire
knowledge
by
reading textual
material
and
building
an
internal
data
base.
Some
programs
have
even
achieved
limited
success
in
interpreting
instructions
that
are
spoken
into
a
microphone
rather
than
typed
into
the
computer.
Although
these
language
systems
are
not
nearly
so
good
as
people
are
at
any
of
these
tasks,
they
are
adequate
for
some
applications.
Early
successes
with
programs
that
answered
simple
queries
and
followed
simple
directions,
and
early
failures
at
machine-translation
attempts,
have
resulted
in
a
sweeping
change
in
the
whole
Al approach
to
language.
The
principal
themes
of
current
language-understanding
research
are
the
importance
of
large amounts
of
knowledge
about
the
subject
being
discussed
and
the
role
of
expectations,
based
on
the
subject matter
and the
conversational
situation,
in
interpreting
sentences.
The
state
of
the
art
of
practical language programs
is
represented
by
useful
"front
ends" to
a
variety
of
software
systems.
These
programs
accept English only in some restricted form;
they
cannot handle
some
of
the
nuances
of
English grammar
and
are
useful in
interpreting
sentences
only
within
a
relatively
limited
domain
of
discourse.
Although
there
has
been
very
limited
success
at
translating
AI
results in language
and
speech-understanding
programs
into
ideas
about the
nature
of
human
language processing,
the
realization
of
the
importance
in
language
understanding
of
extensive
background
knowledge,
and
of
the
contextual
setting
and
intentions
of
the
speakers,
has
changed
our notion
of
what
language
or
a
theory
of
language
might
be.
Learning.
Certainly
one
of
the
most
significant
aspects
of
human
intelligence
is
our
ability
to learn.
However, this
is
an
example
of
cognitive behavior
that
is
so
poorly
understood that
very
little
progress
has
been
made
in
accomplishing
it
in
Al
systems.
Although
there
have
been
several
interesting
attempts
at
this,
including
programs
that learn
from
examples,
from
their
own
performance,
or
from advice
from
others, AI
systems
do
not
exhibit
noticeable
learning.
Robotics
and
vision.
One
area
of
AI
research
that
is
receiving
increasing
attention
involves
programs
that
manipulate robot
devices.
Research
in this
field
has
looked
at
everything
from
the
optimal
movement
of
robot
arms to
methods
of
planning
a
sequence
of
actions to
achieve
a
robot's
goals.
Some
robots
"see"
through
a
TV
camera
that transmits
an
array
of
information
back
to
the
computer.
The
processing
of
visual
information
is
another
very
active,
and
very
difficult,
area
of
AI
research.
Programs
have
been
developed
that
can
recognize
objects
and
shadows
in
visual
scenes,
and
even
identify
small
changes
from
one
picture
to
the
next,
for
example,
for
aerial
reconnaissance.
The true potential
of
this
research,
however,
is
that
it
deals
with
artificial
intelligences
in
perceived
and
manipulable
environments
similar
to
our
own.
Systems
and
languages.
In
addition
to
work
directly
aimed
at
achieving
intelligence,
the
development
of
new
tools
has
always
been
an
important
aspect
of
AI research.
Some
of
the
most
important
contributions
of
AI
to
the
world
of
computing
have
been
in
the
form
of
spin-offs.
Computer-system ideas like
time-sharing,
list
processing,
and
interactive
debugging
were
developed
in
the
AI
research
environment.
Specialized programming languages and
systems,
with
features designed
to
facilitate deduction,
robot
manipulation,
cognitive
modeling,
and
so
on,
have
often
been
rich
sources
of
new
ideas.
Most
recent
among
these
has
been
the
new knowledge-representation languages.
These
are
computer
languages
for encoding knowledge as data structures
and
reasoning
methods
as
procedures,
developed
over the last five years
to
explore
a
variety
of
ideas
about
how
to
build
reasoning
programs.
Terry
Winograd's
1979
article
"'Ieond
l'rogarnmiig
Languages"
discusses
some
of
his
ideas
about
the
future
of
computing,
inspired
in
part
by
his
research
on AI.
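The dual encoding described above, knowledge as data structures and reasoning methods as procedures, can be suggested with a frame-like sketch. This is my own illustration rather than any particular representation language; the "reasoning method" here is just slot lookup with inheritance.

```python
# Frames with slots and inheritance (a toy sketch; the frames are
# invented examples, not a real knowledge-representation language).

FRAMES = {
    "bird":   {"is_a": None,   "locomotion": "flies", "covering": "feathers"},
    "canary": {"is_a": "bird", "color": "yellow"},
    "tweety": {"is_a": "canary"},
}

def get(frame, slot):
    """Look up a slot, inheriting from more general frames when absent."""
    while frame is not None:
        if slot in FRAMES[frame]:
            return FRAMES[frame][slot]
        frame = FRAMES[frame].get("is_a")
    return None

print(get("tweety", "color"))       # yellow (from canary)
print(get("tweety", "locomotion"))  # flies  (inherited from bird)
```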
Expert
systems.
Finally,
the
area
of
"expert,"
or
"knowledge-based,"
systems
has
recently
emerged
as
a
likely
area
for
useful
applications
of
AI
techniques (Feigenbaum,
1977).
Typically,
the
user interacts with
an
expert
system
in a
form
of
consultation
dialogue,
just
as
he
(or
she) would
interact
with
a
human
expert
in a
particular
area:
explaining
his
problem,
performing
suggested
tests,
and
asking
questions
about
proposed
solutions.
Current
experimental
systems have
performed
very
well
in
consultation
tasks
like
chemical
and
geological
data
analysis,
computer-system
configuration,
completion
of
income
tax
forms,
and
even
medical
diagnosis.
Fxpert
systems
can
be viewed
as
intermediaries
between
human
experts,
who
interact
with
the
systems
in
knowledge-acquisition
mode,
and
human
users,
who
interact
with
the
systems
in
consultation
mode.
Furthermore,
much
research
in
this
area
of
AI
has
focused
on
providing
these
systems
with
the
ability
to
explain
their
reasoning,
both
to
make
the
consultation
more
acceptable
to
the
user
and
to
help
the
human
expert
locate the
cause
of
errors
in
the
system's reasoning
when
they
occur.
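A drastically simplified sketch of such a consultation, with two invented rules standing in for the hundreds in a real system, shows the two ingredients just described: rules driving the dialogue, and a record of which rule fired so that the system can "explain" its conclusion.

```python
# A toy rule-based consultation with a WHY explanation (invented rules
# and findings; vastly simpler than the systems cited in the text).

RULES = [
    ("rule-1", {"fever": "yes", "site": "throat"}, "strep infection suspected"),
    ("rule-2", {"fever": "no"}, "bacterial infection unlikely"),
]

def consult(ask):
    answers = {}
    def finding(key):
        if key not in answers:
            answers[key] = ask(key)       # ask the user only once per finding
        return answers[key]
    for name, conditions, conclusion in RULES:
        if all(finding(k) == v for k, v in conditions.items()):
            return conclusion, name, conditions
    return "no conclusion reached", None, None

replies = {"fever": "yes", "site": "throat"}   # a canned "user"
conclusion, rule, conditions = consult(lambda q: replies[q])
print(conclusion)                              # strep infection suspected
if rule:
    print("WHY:", rule, "fired because", conditions)
```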
Because
its
imminent
commercial
applications
are
indicative
of
important
changes
in the
field,
much
of
the
ensuing
discussion
of
the
role
of
AI
in
the
study
of
mind
will
refer
to
the
expert-systems
research.
That these systems

* "represent" vast amounts of knowledge obtained from human experts,
* are used as tools to solve difficult problems using this knowledge,
* can be viewed as intermediaries between human problem solvers,
* must explain their "thought processes" in terms that people can understand, and
* are worth a lot of money to people with real problems
are
the
essential
points
that
will
be
true
of
all
of
AI
someday,
in
fact,
of
computers
in
general, and
will
change
the
role that
AI
research
plays
in
the
scientific
study
of
thought.
Open
problems.
Although
there
has been
much
activity
and
progress
in
the
25-year
history
of
AI, some very
central
aspects
of
cognition
have
not
yet
been
achieved
by
computer
programs.
Our
abilities
to
reason
about
others'
beliefs,
to
know
the
limits
of
our
knowledge,
to visualize things,
to
be
"remnindcd"
of
relevant
events,
to
learn,
to
reason
by
analogy,
and
to
make
plausible
inferences,
realize
when
they
are
wrong,
and
know
how
to
recover
from
them
are
not
at
all
understood.
It
is a
fact
that
these
and
many
other
fundamental
cognitive capabilities
may
remain
problematic
for
some
time.
But
it
is
also
a
fact
that
computer
programs
have
successfully
achieved
a
level
of
performance
on
a
range
of
"intelligent"
behaviors
unmatched
by
anything
other
than
the
human
brain.
AI's failure to provide
some
seemingly
simple
cognitive
capabilities
in
computer
programs
becomes,
in
the
view of AI
to
be
presented
in
this
paper,
part
of
the
set
of
phenomena
to
be
explained
by
the
new
science.
AI
and
the
Study
of
Mind
AI
research
in
problem
solving,
language processing, and
so
forth
has
produced
some
impressive
and
useful
computer
systems.
It
has
also
influenced,
and
been
influenced
by,
research
in
many
other fields.
What,
then,
is
the
relation
between
AI
and
the
other
disciplines
that
study
the various
aspects
of
mind,
for
example,
psychology,
linguistics,
philosophy,
and
sociology?
AI
certainly
has
a
unique
method-designing
and
testing
computer
programs-and
a
unique
goal-making
those
programs
seem
intelligent.
It
has
been
argued
from
time
to
time
that
these
attributes
make
AI
independent
of
the
other disciplines:
Artificial
Intelligence
was an
attempt
to
build
intelligent
machines
without
any
prejudice
toward
making
the system
simple,
biological, or
humanoid,
(Minsky,
1968,
p.
7)
But
one
does
not start from
scratch
in
building
the
first
program to accomplish
some
intelligent
behavior:
the
ideas
about
how
that
program
is
to
work
must
come
from
somewhere.
Furthermore,
most
AI
researchers
are
interested
in
understanding
the
human
mind
and
actively
seek
hints
about
its
nature in
their
experiments
with
their
programs.
The
interest
within
AI
in
the
results
and
open
problems
of
other
disciplines
has been
fully reciprocated
by
interest
in
and
application
of
AI
research
activity
among
researchers
in
other
fields.
Many experimental
and
theoretical
insights
in
psychology
and
linguistics,
at
least, ha~e
been
sparked
by
Al
techniique%
dnd
results.
Fu1rtheniore.
this
flow
is
likely
to
increase
dramiatically
in
thle
future,
its
Source
is
thc
ariety
of
new
phenomena
displia
ed
by
Al systemls-the
number,
quality.
utility,
-nd
level
of
acti%
iy
of
%kihich
%%ill
soonl
dramatically
increase.
But
first
let
uts
examine
whiat
kind
of
interaction-;
have taken
place
bet%%
een
Al
I.11d
thle
other
disciplines.
The Language of Computation
As we defined it at the outset, AI is a branch of computer science. Its practitioners are trained in the various subfields of computer science: formal computing theory, algorithm design, hardware and systems architecture, programming languages, and programming. The study of each of these subareas has produced a language of its own, indicating our understanding of the important implications of computing.
The underlying assumption of our research is that this language (which involves concepts like process, procedure, interpreter, bottom-up and top-down processing, object-oriented programming, and so on) and the experience with computation that it embodies will, in turn, assist us in understanding the various phenomena of mind.
Before we go on to discuss the utility of these computational concepts, it should be stated that, in fact, our understanding of computation itself is quite limited.
Von Neumann (1958) dreamed of an "information theory" of the nature of thinking:
The body of experience which has grown up around the planning, evaluating, and coding of complicated logical and mathematical automata will be the basis of much of this information theory.... It would be very satisfactory if one could talk about a theory of such automata. Regrettably, what at this moment exists, and to what I must appeal, can as yet be described only as an imperfectly articulated and hardly formalized "body of experience." (p. 2)
And ten years later, in their superb treatise on perceptronlike automata, Minsky and Papert (1969) lament:
We know shamefully little about our computers and their computations.... We know very little, for instance, about how much computation a job should require. The immaturity shown by our inability to answer questions of this kind is exhibited even in the language used to formulate the questions. Word pairs such as "parallel" vs. "serial," "local" vs. "global," and "digital" vs. "analog" are used as if they referred to well-defined technical concepts. Even when this is true, the technical meaning varies from user to user and context to context. But usually they are treated so loosely that the species of computing machine defined by them belongs to mythology rather than science. (pp. 1-2)
There is still no adequate theory of computation for understanding the nature and scope of symbolic processes, but there is rapidly accumulating experience with computation of all sorts; useful new concepts emerge continually.
The
Computational
Metaphor
The
discipline
most
closely
related
to
AI
is cognitive
psychology.
These
two
disciplines
deal
primarily
with
the
same
kinds
of
behaviors-perception,
memory,
problem solving.
And
they
are
siblings:
Modern
cognitive
psychology
emerged
from
its
behavior-oriented
precursors
in
conjunction
with
the
rise
of
AI.
That
there
might
be
a
relation
between
the
new
field
of
AI
and
the
traditional interests
of
psychologists
was
evident
from
the
beginning:
Our
fundamental
concern
was
to
discover whether
the
cybernetic
ideas
have
any
relevance
for
psychology.
The
men
who
have
pioneered
in
this
area
have
been
remarkably
innocent
of
psychology....
There
Must
be
some
way
to
phrase
the
new
ideas
so
that
they can contribute
to
and
profit
from
the
science
of
behavior
that
psychologists
have
created.
(Miller,
Galanter,
and
Pribram,
1960,
p.
3)
What
in
fact
happened
was
that
the existence
of
computing
served
as
an
inspiration
to
traditional
psychologists
to
begin
to
theorize
in
terms
of
internal,
cognitive
mechanisms.
Use
of
the
concepts
of
computation
as
metaphors
for
the processes
of
the
mind
strongly
influenced
the
form
of
modern
theories
of
cognitive
psychology-for
example,
theories
expressed
in
terms
of
memories
and
retrieval
processes:
Computers
accept
information,
manipulate
symbols,
store
items in "memory"
and
retrieve
them
again,
classify
inputs,
recognize
patterns, and
so
on.
Whether
they
do
these
things
just
like
people
was
less
important
than
that they
do
them
at
all.
The
coming
of
the
computer
provided
a
much-needed
reassurance
that
cognitive
processes
were
real.
(Neisser,
1976,
p.
5)
The
metaphorical
use
of
the
language
of
computation
in
describing
mental
processes
was
found
to
be,
at
least
for
a
time,
quite
fertile
ground
for sprouting
psychological
theories.
During
a
period
of
concept
formation,
we
must
be
well
aware
of
the
metaphorical
nature
of
our
concepts.
However,
during
a period
in
which
the
concepts
can
accommodate most
of
our
questions
about
a
given subject
matter,
we
can
afford
to
ignore
their
metaphorical
origins
and
confuse
our description
of
reality
with
that
reality.
(Arbib,
1972,
p.
11)
When
pioneering
work
by
Newell,
Shaw,
and
Simon
and by
other
research
groups
showed
that
"programming
Lip"
their
intuitions
about
how humans
solve
puzzles,
find
theorems,
and
so
on
was
adequate
to
get
impressive
results,
the
link
between
the
study
of
human
problem-solving
and
AI research
was
firmly
established.
Consider,
for
example,
computer
programs
that
play
chess.
Current
programs
are
quite
proficient-the
best
experimental
systems
play
at
the
human "expert" level,
but
not
as
well
as
human
chess
"masters,"
lhe
programs
work
by
searching
through
a
space
of
possible
moves,
that
is,
considering
the
alternative
moves
and
their
consequences
several
steps
ahead
in
the
game,
just
as
human
players
do.
These
programs,
even
some
of
the
earliest
versions,
could
search
through
thousands
of
moves
in the
time
it
takes
human
players
to
consider
only
a
do/en or
so
alternatives.
The
theory
of
optimal
search,
developed
as
a
mathematical
formalism
(paralleling,
as
a
matter
of
fact,
much
of
the
work
on
optimal
decision
theory
in
operations
research)
constitutes
some
of
the
core
ideas
of
AI.
The
reason
that
computers cannot
beat
the
best
human
players
is
that
looking
ahead
is
not
all
there
is
to
chess.
Since
there
are
too
many
possible
moves
to
search
exhaustively,
even
on
the
fastest
imaginable
computers,
alternative
moves
(board
positions)
must
be
evaluated
without
knowing
for
sure
which
move
will
lead
to
a
"inning
game,
and
this
is
one
of
those
skills
that
human
chess
experts
cannot make explicit.
Psychological
studies
have
shown
that
chess
masters
have
learned
to
see
thousands
of
meaningful
configurations
of
pieces
"hen
thyc
look
at chess
positions,
which
presumably
helps
them
decide
on the
best
move.
but
no
one
has
yet
suggested
how
to design
a
computer
program
that
can
identify these configurations.
For the lack of theory or intuitions about human perception and learning, AI
progress
on
computer
chess
has
virtually
stopped,
but
it
is
quite
possible
that
new
insights
into
a
very
general
problem
were
gained.
The
computer
programs
had
pointed
up,
more
clearly
than ever,
what
would
be
useful
for
a
cognitive
system
to
learn
to
see.
It
takes
many
years
for
chess
experts
to
develop
their
expertise-their
ability
to
"understand"
the
game
in
terms
of
such
concepts
and
patterns
that
they
cannot
explain
easily,
if
at
all.
The
general
problem
is, of course,
to
determine
what
it
is
about
our
experience
that
we
apply to
future
problem
solving:
What
kind
of
knowledge
do
we
glean
from
our
experience?
The
work
on
chess
indicated
some
of
the
demands
that
would
be
placed
on
this knowledge.
Language
Translation
and
Linguistics
Ideas
about
getting
computers
to
deal
in
some
useful
way
with
the
human
languages,
called "natural"
languages
by
computer
scientists,
were
conceived
before
any
machines
were
ever
built.
The
first
line
of
attack
was
to
try to
use
large,
bilingual
dictionaries
stored
in
the
computers
to
translate
sentences
from
one
language
to
another
(Barr
and
Feigenbaum,
1981,
pp.
233-238).
The
machine
would look
up
the
translation
of
the
words in the
original
sentence,
figure
out
the
"meaning"
of
the
sentence
(perhaps
expressed
in
some
interlingua),
and
produce
a
syntactically
correct
version
in
the
target language.
It
did
not work.
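The failure is easy to reproduce. A dictionary-lookup translator along the lines just described, with a tiny invented word list, produces output that is word-for-word plausible yet not a translation in any useful sense: one sense per word, no agreement, and no reordering.

```python
# Word-for-word dictionary translation (a toy reconstruction of the
# early approach; the dictionary entries are invented and incomplete).

en_to_fr = {
    "the": "le", "spirit": "esprit", "is": "est", "willing": "dispose",
    "but": "mais", "flesh": "chair", "weak": "faible",
}

def translate(sentence):
    # Substitute each word independently; mark words the dictionary lacks.
    return " ".join(en_to_fr.get(w, "<" + w + "?>")
                    for w in sentence.lower().split())

print(translate("The spirit is willing but the flesh is weak"))
# -> "le esprit est dispose mais le chair est faible": every word is
# "translated," yet gender, idiom, and meaning are all mishandled.
```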
It
became
apparent
early
on
that
processing
language
in
any
useful
way
involved
understanding.
which
in
turn
involved
a
great
deal
of
knowledge
about
the
world-in
fact,
it
could
be
argued
that
the
more
one
"knows,"
the
more
one
"understands"
each
sentence
one
reads.
And
the
level
of
world
knowledge
needed
for
any
useful
language-processing
is
much
higher
than
our
original
intuitions
led
us
to
expect.
There
has
been
a
serious
debate
about
whether
AI
work
in
computational
linguistics
has
enlightened
us
at
all
about
the
nature
of
language
(see
Dresher
and
Hornstein,
1976,
and
the
replies
by
Winograd,
1977,
and
Schank and
Wilensky.
1977).
The
position
taken
by
AI
researchers
is
that
if
our
goal
in
linguistics
is to
include
understanding
sentences
like
Do
you
have
the
time? and We'll have dinner after the kids wash their hands,
which
involve
the
total
relationship
between
the
speakers,
then
there
is
much
more
to it than
the
syntactic arrangement
of
words
with
well-defined
meanings-that
although
the
study in linguistics of the
systematic
regularities
within
and
between
natural
languages
is
an
important
key
to
the
nature
of
language
and
the
workings
of
the
mind,
it
is
only
a
small
part
of
the
problem
of
building
a
useful
language
processor
and,
therefore,
only
a
small
part
of
an
adequate
understanding
of
language (Schank and
Abelson,
1977):
For both
people and
machines,
each
in
their
own
way,
there
is a
serious
problem
in
common
of
making
sense
out
of
what
they hear,
see,
or
are
told
about
the
world.
The
conceptual
apparatus
necessary
to
perform
even
a
partial
feat
of
understanding
is
formidable
and
fascinating.
(p.
2)
Linguists
have
almost
totally
ignored
the
question
of
how
human
understanding
works....
It
has
nevertheless
been
consistently
regarded
as
important
that computers
deal
well
with
natural
language
....
None
of
these
high-sounding
things
are
possible,
of
course,
unless
the
computer
really
'understands'
the
input.
And
that
is
the
theoretical significance
of
these
practical
questions-to
solve
them
requires
no
less
than
articulating
the
detailed
nature
of "understanding."
If
we
understood
how
a
human understands,
then
we
might
know
how
to make
a
computer
understand,
and
vice
versa.
(p. 8)
This idea
that
building
Al
systems
requires
the
articulation
of
the
detailed
nature
of
understanding,
that
is,
that
implementing
a
theory in
a
computer
program requires
one to
"work
out"
one's
fuzzy
ideas
and
concepts,
has
been
suggested
as
a
major
contribution
of
Al
research
(Schank
and
Abelson,
1977):
Whenever
an
Al
researcher
feels he
understands
the
process
he
is
theorizing
about
in
enough
detail,
he
then
begins
to
program
it
to
find
out
where
he
was
incomplete
or
wrong
....
The
time
between
the
completion
of
the
theory
and
the
completion
of
the program that embodies the
theory
is
usually
extremely long.
(p.
20)
And
Newell
(1970),
in
a
thorough
discussion
of
eight
possible
ways
one
might
view
the
relation
of
Al
to
psychology,
suggests
that
building
programs
"forces
psychologists to
become
operational,
that
is,
to
avoid
the
fuzziness
of
using
mentalistic
terms"
(p.
365).
Certainly
the
original
conception
of
the
machine-translation
effort,
although it
was
intuitively
sensible,
fell far
short
of
what
would
be
required
to
enable
a
machine
to
handle
language,
indicating a limited conception of what language is.
It
is
in
the
broadening
of
this
conception
that
AI
has
contributed
most
to
the
study
of
language (Schank
and
Abelson,
1977,
p.
9).
Thus, AI
can
show,
as
in
the
examples
of
chess
and
language
understanding,
that
intuitive
notions
and
assumptions
about
mental
processes
just
do
not
work.
Furthermore,
analyzing
the
behavior
of
AI
programs
implemented
on
the
basis
of
existing,
inadequate
concepts
can
offer
hints
on
how
the
concepts
of
the
theory
affect
the
success
of
its
application.
Scientific
Languages
and
Theory
Formation
Lawrence Miller,
in
a
1978
article
that
reviews
the
dialogue
between
psychologists
and
AI
researchers
about
Al's contribution
to
the
understanding
of
mind,
concludes
that
the
critics
of
AI
believe that
it
is
easy
to
construct plausible
psychological
theories;
the
difficult
task
is
demonstrating
that
these
theories
are
true.
The
advocates
of
AI
believe
that
it
is
difficult
to
construct
adequate
psychological
theories;
but
once
such
a
theory
has
been
constructed,
it
may
be
relatively
simple
to
demonstrate that
it
is
true.
(p.
113)
And
Schank
and
Abelson
(1977) agree:
We
are
not
oriented
toward
finding
out
which
pieces
of
our
theory
are
quantifiable
and
testable
in
isolation.
We
feel
that
such
questions
can
wait.
First
we
need
to
know
if
we
have
a
viable
theory.
(p.
21)
Just
as
AI
must
consider
the
same issues
that
psychology
and
linguistics
address,
other
aspects
of
knowledge
dealt
with
by
other
traditional
disciplines
must
also
be
considered.
For
example,
current
ideas
in
AI
about
linking
computing
machines
into
coherent
systems
or
cooperative
problem-solvers
force
us
to
consider
the
sociological
aspects
of
knowing.
A
fundamental problem
in
AI
is
communication
among
many
individual
units,
each
of
which
"knows"
some
things
relevant
to
some
problems
as
well
as
something
about
the
other
units.
The form
of
the
communication
between
units,
the
organizational
structure
of
the
complex,
and
the
nature
of
the
individuals'
knowledge
of
each
other
are
all
questions
that
must
find
some
engineering
solution
if
the
apparent power
of"distributed
processing"
is
to he
realized.
These
issues
have
been
studied
in
other disciplines, albeit
from
very
different
perspectives
and
with
different
goals
and
methods.
We
can
view
the
different
control
schemes
proposed
for
interprocess
communication,
for
example,
as
attempts
to
design
social systems
of
knowledgeable
entities.
Our
intuitions,
once again,
form
the
specifications
for
the
first
systems.
Reid G.
Smith
(1978)
has
proposed
a
contract
net
where
the
individual
entities
negotiate
their
roles
in
attacking
the
problem,
via requests
for
assistance
from
other
processors,
proposals
for
help
in
reply,
and contracts
indicating
agreement
to
delegate
part
of
the
problem
to
another
processor;
and
Kornfeld
and
Hewitt
(1981)
have
developed
a
model
explicitly
based
on
problem
solving
in the
scientific community.
Only
after
we
have been
able
to
build
many
systems based
on
such
models
will
we
be
able
to
identify
the
key
factors
in
the
design
of
such
systems.
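The message cycle at the heart of the contract-net idea, as summarized here, can be caricatured in a few lines; the nodes, skills, and scoring below are invented for illustration.

```python
# Announce-bid-award, loosely after the contract-net idea as described
# in the text (a toy sketch with invented nodes and skills).

class Node:
    def __init__(self, name, skills):
        self.name, self.skills = name, skills   # skill -> self-rated fitness

    def bid(self, task):
        # Propose help only for tasks this node knows something about.
        return self.skills.get(task)

def announce(task, nodes):
    proposals = [(node.bid(task), node) for node in nodes
                 if node.bid(task) is not None]
    if not proposals:
        return "no bids for " + task
    _, winner = max(proposals, key=lambda p: p[0])
    return "contract: %s delegated to %s" % (task, winner.name)

net = [Node("sensor-node", {"track": 0.9}),
       Node("planner-node", {"plan": 0.8, "track": 0.2})]
print(announce("track", net))   # contract: track delegated to sensor-node
```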
There
is
another
kind
of
study
of
the
mind,
conducted
by
scientists who
seek
to
understand
the
workings
of
the
brain.
The
brain
as
a
mechanism
has
been
associated
with
computing
machines
since
their
invention
and
has
puzzled
computer
scientists
greatly:
We
know
the
basic
active
organs
of
the nervous
system
(the
nerve
cells).
There
is e
er.
reason
to
believe that
a
very
large-capacity memory
is
associated
with
this
system.
We
do
most
emphatically
not
know
what type
of
physical
entities
are the
basic
components
for
the
memory
in
question.
(von
Neumann,
1958,
p.
68)
If
research
on
AI
produces
a
language
for
describing
what
a
computational
system
is
doing,
in
terms
of
processes,
memories,
messages,
and
so
forth,
then that
language
may
very
well
be
the
one
in which
the
function
of
the
neural
mechanisms
should
be
described
(Lenat,
1981;
Torda,
1982).
And,
as
Herbert
Simon
(1980)
points
out,
this
functionality
may
be
shared
by
nature's
other brand
of
computing
device, DNA:
It
might
have
been
necessary
a
decade
ago
to
argue
for
the
commonality
of
the
information
processes
that
are
employed
by
such
disparate
systems
as
computers
and
human
nervous
systems.
The
evidence
for
that
commonality
is
now
overwhelming,
and
the
remaining
questions
about
the
boundaries
of
cognitive
science have
more
to
do
with
whether
there
also
exist
nontrivial
commonalities
with
information
processing in
genetic
systems than
with whether
men
and
machines
both
think.
(p.
45)
One
more example
of
the
overlap
of
concerns
between
AI
and the related
disciplines
is
the
following.
Making
it
possible
for
an
individual
to know
something about
what
another
knows,
without
actually
knowing
it,
involves
defining
the
nature
of
what
is
known
elsewhere:
who
the
experts
are
on
what
kinds
of
problems
and what
they
might
know
that
could
be
useful.
This
relates
directly
to
the
categorization
at
knowledge
that
is
the
essence
of
library
science.
Instead
of
dealing
with
categories
according
to
which
static
books
will
be
filed,
however,
AI
must
consider
the
dynamic
aspects
of
systems
that know
and
learn.
The relation, then, between AI
and
disciplines
like
psychology,
linguistics,
sociology,
brain
science,
and
library
science
is a
complex
one.
Certainly
our
current
understanding
of
the
phenomena
dealt
with
by
these
disciplines-cognition,
perception,
memory,
language,
social
systems,
and
categories
of
knowledge-has
provided
the
intuitions
and
models
on
which
the
first
AI
programs
were
built.
And,
as
has
happened
in
psychology
and
linguistics,
these
first
systems
may,
in
turn,
show
us
new
aspects
of
the
phenomena
that
we
have
not
considered
in
studying
their
natural
occurrence.
But,
most
important,
the
development
of
AI
systems,
of
useful
computer
tools
for knowledge-oriented
tasks,
will
expose
us
to
many
new
phenomena
and
variations
that
will
force
us
to
increase
our
understanding.
The Practice of AI

AI,
and computer
science in
general,
employs
a
unique
method
among
the
disciplines
involved
in
advancing
our
understanding
of
cognition-building
computers
and
programs, and
observing
and
trying
to
explain
patterns
in
the
behavior
of
these
systems.
The
programs
are
the
phenomena
to
be
studied
(Newell,
1981):
Conceptual
advances
occur
by
(scientifically)
uncontrolled
experiments
in
our
own style
of
computing....
The
solution
lies in
more practice
and
more
attention
to
what
emerges
there
as
pragmatically
successful.
(p. 4)
Observing
our
own
practice-that
is,
seeing
what
the
computer
implicitly
tells
us
about
the
nature
of
intelligence
as
we
struggle
to
synthesize
intelligent
systems-is
a
fundamental
source
of
scientific
knowledge
for
us.
(p.
19)
Thus, AI
is
one
of
the
"sciences
of
the
artificial,"
as
Herbert
Simon
(1969)
has
defined
them
in
an
influential
paper.
Half
of
the
job
is
designing
systems
so
that
their
performance
will
be
interesting.
There
is a
valuable
heuristic
in
generating
these
designs: The
systems
that
we are
naturally
inclined
to
want
to
build
are
those
that
will
be
useful
in
our
environment.
Our
environment
will
shape
them,
as
it
shaped
us.
As
Simon
described
the
development
of
time-sharing
systems:
Most
actual
designs have
turned out
initially
to
exhibit
serious
deficiencies,
and most
predictions
of
performance
have
been
startlingly
inaccurate.
Under
these
circumstances,
the
main
route
open
to
the
development
and
improvement
of
time-sharing
systems
is
to
build
them
and
see
how
they
behave.
(p.
21)
The Genus
of
Symbol
Manipulators
Newell
and
Simon's
psychologically
phrased idea
of
"observing
the
behavior
of
programs"
follows
from
their
pioneering
research
program
in
what they
have
called
information
processing
psychology.
Newell
and
Simon developed,
in
the
early
years
of
this
enterprise,
some
of
the
first
computer
programs that
showed
reasoning
capabilities.
This
research
on
chess-playing,
theorem-proving,
and
problem-solving
programs
was
undertaken
as
an
explicit
attempt
to model
the
corresponding
human
behaviors.
But
Newell
and Simon
took
the
strong
position
that
these
programs
were
not to
serve
simply
as
metaphors
for
human
thought
but
were
themselves
theories.
In
fact,
they argued
that
programs
were
the
natural
vehicle
for
expressing
theories
in
psychology:
An
abstract
concept
of
an
information
processing
system
has
emerged
with
the
development
of
the
digital
computers. In
fact,
a
whole
array
of
different
abstract
concepts
has
developed,
as
scientists
have
sought to
capture
the
essence
of
the
new
technology
in
different
ways....
With
a
model
of
an
information
processing
system,
it
becomes
meaningful
to
try
to
represent
in
some
detail
a
particular
man
at
work
on
a
particular
task.
Such
a
representation
is
not metaphor,
but
a
precise
symbolic
model
on
the
basis
of
which
pertinent
specific
aspects
of
the
man's
problem
solving
behavior
can
be
calculated.
(Newell
and Simon, 1972,
p.
5)
Taking
the
view
that
artificial
intelligence
is
theoretical
psychology,
simulation
(the
running
of
a
program
purporting
to represent
some
human
behavior)
is
simply
the
calculation
of
the
consequences
of
a
psychological
theory.
(Newell,
1973a,
p.
47)
A
framework
comprehensive enough
to
encourage
and
permit
thinking
is
offered.
so
that
not
only
answers,
but
questions,
criteria
of
evidence,
and
relevance
all
become
affected.
(Newell, 1973a,
p.
59)
Newell
and
Simon,
in
their
view
that
computer
programs
are
a vehicle for expressing psychological
theories
rather
than
just
serving
as
a
metaphor
for
mental
processes,
were
already
taking
a
strong
position
relative
to
even the
new
breed
of
cognitive
psychologists
who
were
talking
in
terms
of
computerlike mental
mechanisms.
As
Paul
R.
Cohen
(1982)
puts
it,
in
his
review
of
AI
work
on
models
of
cognition:
We
should
note
that
we
have
presented
the
strongest
version
of
the
information-processing
approach,
that advocated by
Newell
and
Simon.
Their
position
is so
strong that
it
defines
information-processing
psychology
almost
by
exclusion:
It
is
the
field
that
uses
methods
alien
to
cognitive
psychology
to
explore
questions alien
to
AI.
This
is
an
exaggeration,
but
it serves
to
illustrate
why there
are
thousands
of
cognitive
psychologists,
and
hundreds
of
AI
researchers,
and
very
few
information-processing
psychologists. (p.
7)
However,
Newell
and
Simon
did
not
stop
there.
A
further
development
in
their
thinking
identified
brains
and
computers
as
two
species
of
the
genus
of
physical
symbol
systems-the kind of system
that,
they
argue,
must
underlie
any
intelligent
behavior.
At
the
root
of
intelligence
are
symbols,
with
their
denotative
power
and
their
susceptibility
to
manipulation.
And
symbols
can
be
manufactured
of
almost
anything
that
can
be
arranged
and
patterned
and
combined.
Intelligence
is
mind
implemented
by
any
patternable
kind
of
matter.
(Simon,
1980,
p.
35)
A
physical symbol
system
has
the
necessary
and
sufficient
means
for
general
intelligent
action.
(Newell
and
Simon,
1976,
p.
116)
Information
processing
psychology
is
concerned
essentially
with
whether
a
successful
theory
of
human
behavior
can
be
found
within
the
domain
of
symbolic
systems.
(Newell,
1970,
p.
372)
The
basic
point
of
view
inhabiting
our
work
has
been
that the programmed
computer
and
human
problem
solver
are
both
species
belonging
to
the genus
IPS.
(Newell
and
Simon,
1972,
p.
869)
It
is
this view
of
computers-as
systems
that
share
a
common,
underlying
structure
with
the
human
intelligence
system-that
promotes
the
behavioral
view
of
AI computer research.
Although
these
machines
are
not
limited
by
the
rules
of
development
of
their
natural
counterpart,
they
will
be
shaped
in
their
development
by
the
same
natural
constraints
responsible
for
the
form
of
intelligence
in
nature.
The
Flight
Metaphor
The question
of
whether
machines
could
think
was
certainly
an
issue
in
the
early
days
of
AI
research,
although
dismissed
rather
summarily
by
those
who shaped
the
emerging
science:
To
ask
whether
these
computers
can
think
is
ambiguous.
In
the
naive
realistic
sense
of
the
term,
it
is
people
who
think,
and
not
either
brains
or
machines.
If,
however,
we
permit
ourselves
the
ellipsis
of
referring
to
the
operation
of
the brain
as
"'thinking,"
then,
of
course,
our
computers
"think."
(McCulloch,
1964,
p.
368)
Addressing
fundamental
issues
like
this
one
in
their
early
writing,
several
researchers suggested
a
parallel
with
the
study
of
flight,
considering
cognition
as
another
natural
phenomenon
that
could
eventually
be
achieved
by
machines:
Today,
despite
our
ignorance,
we
can
point
to that
biological
milestone,
the
thinking
brain,
in
the
same
spirit
as
the scientists
many
hundreds
of
years
ago
pointed
to
the bird
as
a
demonstration
in
nature
that mechanisms
heavier
than
air could
fly.
(Feigenbaum
and
Feldman,
1963,
p.
8)
It
is
instructive
to
pursue
this
analogy
a bit
farther.
Flight,
as
a
way
of
dealing
with
the
contingencies
of
the
environment,
takes
many
forms-from
soaring
eagles
to
hovering
hummingbirds.
If
we
start
to study
flight
by
examining
its
forms
in
nature,
our
initial
understanding
of
what
we
are
studying
might
involve
terms
like
feathers,
wings,
weight-to-wing-size
ratios,
and
probably
wing-flapping,
too.
This is the language we begin to develop, identifying regularities and making distinctions among the phenomena. But when we start to build flying artifacts, our understanding changes immediately:
Consider how people came to understand how birds fly. Certainly we observed birds. But mainly to recognize certain phenomena. Real understanding of bird flight came from understanding flight, not birds. (Papert, 1972, pp. 1-2)
Even if we fail a hundred times at building a machine that flies by flapping its wings, we learn from every attempt. And eventually we abandon some of the assumptions implicit in our definition of the phenomena under study and realize that flight does not require wing movement or even wings:
Intelligent behavior on the part of a machine no more implies complete functional equivalence between machine and brain than flying by an airplane implies complete functional equivalence between plane and bird. (Armer, 1963, p. 392)
Every new design brings new data about what works and what does not, and clues as to why. Every new contraption tries some different design alternative in the space defined by our theory language. And every attempt clarifies our understanding of what it means to fly.
But there is more to the sciences of the artificial than defining the "true nature" of natural phenomena. The exploration of the artifacts themselves, the stiff-winged flying machines, because they are useful to society, will naturally extend the exploration of the various points of interface between the technology and society. While nature's exploration of the possibilities is limited by its mutation mechanism, human inventors will vary every parameter they can think of to produce effects that might be useful, exploring the constraints on the design of their machines from every angle. The space of "flight" phenomena will be populated by examples that nature has not had a chance to try.
Exploring the Space of Cognitive Phenomena
This argument, that the utility of intelligent machines will drive the exploration of their capabilities, suggests that the development of AI technology has begun an exploration of cognitive phenomena that will involve aspects of cognition that are not easy to study in nature. In fact, as with the study of flight, AI will allow us to see natural intelligence as a limited capability, in terms of the design trade-offs made in the evolution of biological cognition:
Computer science is an empirical discipline.... Each new machine that is built is an experiment.... Each new program that is built is an experiment. It poses a question to nature, and its behavior offers clues to an answer.... We build computers and programs for many reasons. We build them to serve society and as tools for carrying out the economic tasks of society. But as basic scientists we build machines and programs as a way of discovering new phenomena and analyzing phenomena we already know about.... The phenomena surrounding computers are deep and obscure, requiring much experimentation to assess their nature. (Newell and Simon, 1976, p. 114)
For what will AI systems be useful? How will they be involved in the economic tasks of society? It has certainly been argued that this point is one that distinguishes biological systems from machines (Norman, 1980):
The human is a physical symbol system, yes, with a component of pure cognition describable by mechanisms.... But the human is more: The human is an animate organism, with a biological basis and an evolutionary and cultural history. Moreover, the human is a social animal, interacting with others, with the environment, and with itself. The core disciplines of cognitive science have tended to ignore these aspects of behavior. (pp. 2-4)
The difference between natural and artificial devices is not simply that they are constructed of different stuff; their basic functions differ. Humans survive. (p. 10)
Tools evolve and survive according to their utility to the people who use them. Either the users find better tools or their competitors find them. This process will certainly continue with the development of cognitive tools and will dramatically change the way we think about AI:
We measure the intelligence of a system by its ability to achieve stated ends in the face of variations, difficulties and complexities posed by the task environment. This general investment of computer science in attaining intelligence becomes more obvious as we extend computers to more global, complex and knowledge-intensive tasks, as we attempt to make them our agents, capable of handling on their own the full contingencies of the natural world. (Newell and Simon, 1976, pp. 114-115)
In fact, this change has already begun in AI laboratories, but the place where the changing perception of AI systems is most dramatic and accelerated is, not surprisingly in our society, the marketplace.
AI, Inc.
To date, three of the emerging AI technologies have attracted interest as commercial possibilities: robots for manufacturing, natural-language front-ends for information-retrieval systems, and expert systems.
The reason that a company like General Motors invests millions of dollars in robots for the assembly line is not scientific curiosity or propaganda about "retooling" its industry. GM believes these robots are essential to its economic survival. AI technology will surely change many aspects of American industry, but its application to real problems will just as surely change the emerging technology, changing our perception of its nature and of its implications about knowledge. The remaining discussion will focus on this issue in the context of expert systems.
Expert Systems
With work on the DENDRAL system in the mid-1960s, AI researchers began pushing work on problem-solving systems beyond constrained domains like chess, robot planning, blocks-world manipulations, and puzzles: They started to consider symbolically expressed problems that were known to be difficult for the best human researchers to solve (see Lindsay, Buchanan, Feigenbaum, and Lederberg, 1980).
One needs to move toward task environments of greater complexity and openness: to everyday reasoning, to scientific discovery, and so on. The tasks we tackled, though highly complex by prior psychological standards, still are simple in many respects. (Newell and Simon, 1972, p. 872)
Humans have difficulty keeping track of all of the knowledge that might be relevant to a problem, exploring all of the alternative solution-paths, and making sure none of the valid solutions is overlooked in the process. Work on DENDRAL showed that when human experts could explain exactly what they were doing in solving their problems, the machine could achieve expert-level performance.
Continued research at Stanford's Heuristic Programming Project next produced the MYCIN system, an experiment in modeling medical diagnostic reasoning (Shortliffe, 1976).
In production rules of the form If (condition) then (action), Shortliffe encoded the kind of information about the reasoning processes of physicians that they were most able to give: advice about what to do in certain situations. In other words, the if part of the rules contains clauses that attempt to differentiate a certain situation, and the then part describes what to do if one finds oneself in that situation.
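To make the flavor of this representation concrete, here is a minimal sketch in modern Python (MYCIN itself was written in Lisp) of how such a rule might be written down and matched against a situation. The attribute and value names below are illustrative assumptions in the style of published MYCIN examples, not Shortliffe's actual encoding:

    # A MYCIN-style production rule: if every condition clause holds in
    # the current situation, the action (here, a conclusion) applies.
    # Attribute names are illustrative, not MYCIN's own; real MYCIN
    # rules also carried certainty factors, omitted in this sketch.
    RULES = [
        {"if": [("stain", "gram-negative"),
                ("morphology", "rod"),
                ("aerobicity", "anaerobic")],
         "then": ("identity", "bacteroides")},
    ]

    def applies(rule, situation):
        # A rule fires when each of its if-clauses matches a known fact.
        return all(situation.get(attr) == value
                   for attr, value in rule["if"])

    situation = {"stain": "gram-negative", "morphology": "rod",
                 "aerobicity": "anaerobic"}
    for rule in RULES:
        if applies(rule, situation):
            attr, value = rule["then"]
            print(f"conclude: {attr} = {value}")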
This production-rule knowledge representation worked surprisingly well: MYCIN was able to perform its task in a specific area of infectious-disease diagnosis as well as the best experts in the country.
Furthermore, the MYCIN structure was seen to be, at least to some extent, independent of the domain of medicine. So long as experts could describe their knowledge in terms of If... then... rules, the reasoning mechanism that MYCIN used to make inferences from a large set of rules would come up with the right questions and, eventually, a satisfactory analysis.
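The control regime being described is what is now called backward chaining: the interpreter starts from a goal attribute, looks for rules that conclude it, tries to establish their if-clauses in turn, and asks the user a question whenever no rule can supply a needed fact. A compressed sketch of that loop, using the same illustrative rule format as above (a paraphrase of the idea, not MYCIN's actual interpreter):

    def backchain(goal, facts, rules, ask):
        # Establish a value for `goal`: use a stored fact if one exists,
        # otherwise try rules that conclude the goal, and finally fall
        # back to asking the user (the source of "the right questions").
        if goal in facts:
            return facts[goal]
        for rule in rules:
            attr, value = rule["then"]
            if attr != goal:
                continue
            # Recursively establish every clause in the if part.
            if all(backchain(a, facts, rules, ask) == v
                   for a, v in rule["if"]):
                facts[goal] = value
                return value
        facts[goal] = ask(goal)
        return facts[goal]

Run against the one-rule knowledge base sketched earlier, backchain("identity", {}, RULES, input) would ask for the stain, the morphology, and the aerobicity, and then conclude the organism's identity.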
MYCIN-like systems have been successfully built in research laboratories for applications as diverse as mineral exploration, diagnosis of computer-equipment failure, and even advising users about how to use complex systems.
Transfer of Expertise
There is an important shift in the view of expert systems just described that illustrates the changing perspective on AI that is likely to take place as it becomes an applied science. The early work on expert systems, building on AI research in problem solving, focused on representing and manipulating the facts in order to get answers.
But through MYCIN, whose reasoning mechanism is actually quite shallow, it became clear that the way that these systems interacted with the people who had the knowledge and with those who needed it was an important, deep constraint on the system's architecture, on its knowledge representations and reasoning mechanisms:
A key idea in our current approach to building expert systems is that these programs should not only be able to apply the corpus of expert knowledge to specific problems, but they should also be able to interact with the users and experts just as humans do when they learn, explain, and teach what they know.... These transfer of expertise (TOE) capabilities were originally necessitated by "human engineering" considerations: the people who build and use our systems needed a variety of "assistance" and "explanation" facilities.

However, there is more to the idea of TOE than the implementation of needed user features:
These social interactions, learning from experts, explaining one's reasoning, and teaching what one knows, are essential dimensions of human knowledge. They are as fundamental to the nature of intelligence as expert-level problem-solving, and they have changed our ideas about representation and about knowledge. (Barr, Bennett, and Clancey, 1979, p. 1)
Randall Davis's (1976) TEIRESIAS system, built within the MYCIN framework, was the first to focus on the transfer-of-expertise aspects of expert systems. TEIRESIAS offered aids for the experts who were entering knowledge into the system and for the system's users.
For example, in order for an expert to figure out why a system has come up with the wrong diagnosis or is asking an inappropriate question, he (or she) has to understand its behavior in his own terms:
The system must explain its reasoning in terms of concepts and procedures with which the expert is familiar. The same sort of explanation facility is necessary for the eventual user of an expert system, who will want to be assured that the system's answers are well founded.
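MYCIN-family systems answered questions such as WHY (why are you asking me this?) and HOW (how was that conclusion established?) essentially by reading the rule chain back to the user. Under the toy interpreter sketched earlier, the core of a HOW explanation falls out of recording, for each concluded fact, the rule that produced it; the bookkeeping and the wording below are illustrative, not TEIRESIAS's actual mechanism:

    def explain_how(goal, support):
        # `support` maps each concluded attribute to the rule that fired
        # for it (a record an interpreter like backchain could keep as it
        # stores facts). Facts with no supporting rule came from the user.
        rule = support.get(goal)
        if rule is None:
            print(f"{goal} was supplied by the user.")
            return
        attr, value = rule["then"]
        clauses = ", ".join(f"{a} = {v}" for a, v in rule["if"])
        print(f"{attr} = {value} was concluded because {clauses}.")
        for a, _ in rule["if"]:
            explain_how(a, support)  # walk down the chain of support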
Expert-systems technology had to be extended to facilitate such interactions, and, in the process, our conception of what an expert system was had changed. No longer did the systems simply solve problems; they now transferred expertise from people who had it to people who could use it:
We are building systems that take part in the human activity of transfer of expertise among experts, practitioners, and students in different kinds of domains. Our problems remain the same as they were before: We must find good ways to represent knowledge and meta-knowledge, to carry on a dialogue, and to solve problems in the domain.
But the guiding principle of our approach and the underlying constraints on our solutions have subtly shifted: Our systems are no longer being designed solely to be expert problem solvers, using vast amounts of encoded knowledge.
There are aspects of "knowing" that have so far remained unexplored in AI research:
By participating in human transfer of expertise, these systems will involve more of the fabric of behavior that is the reason we ascribe knowledge and intelligence to people. (Barr, Bennett, and Clancey, 1979, p. 5)
The Technological Niche
It is the goal of those who are involved in the commercial development of expert-systems technology to incorporate that technology into some device that can be sold. But the environment in which expert systems operate is our own cognitive environment; it is within this sphere of activity, people solving their problems, that the eventual expert-system products must be found useful. They will have to be engineered to fit our minds.
With these systems, it will at last become economical to match human beings in real time with really large machines. This means that we can work toward programming what will be, in effect, "thinking aids." In the years to come we expect that these man-machine systems will share, and perhaps for a time be dominant, in our advance toward the development of "artificial intelligence." (Minsky, 1963, p. 450)
It is a long way from the expert systems developed in the research laboratories to any products that fit into people's lives; in fact, it is difficult even to envision what such products will be.
Egon Loebner of Hewlett-Packard Laboratories tells of a conversation he had many years ago with Vladimir Zworykin, the inventor of television technology. Loebner asked Zworykin what he had in mind for his invention when he was developing the technology in the 1920s, what kind of product he thought his efforts would produce.
The inventor said that he had had a very clear idea of the eventual use of TV: He envisioned medical students in the gallery of an operating room getting a clear picture on their TV screens of the details of the operation being conducted below them.
One cannot, at the outset, understand the application of a new technology, because it will find its way into realms of application that do not yet exist.
Loebner has described this process in terms of the technological niche, paralleling modern evolution theory (Loebner, 1976; Loebner and Borden, 1969). Like the species and their environment, inventions and their applications continually evolve together, with niches representing periods of relative stability:
Moreover, the niches themselves are ... defined in considerable measure by the whole constellation of organisms themselves. There can be no lice without hairy heads for them to inhabit, nor animals without plants. (Simon, 1980, p. 44)
Thus, technological inventions change as they are applied to people's needs, and the activities that people undertake change with the availability of new technologies.
And as people in industry try to push the new technology toward some profitable niche, they will also explore the nature of the underlying phenomena.
Of course, it is not just the scientists and engineers who developed the new technology who are involved in this exploration: Half the job involves finding out what the new capabilities can do for people.
Recognition of the commercial application of TV technology was accomplished by David Sarnoff, after the model he had used for the radio broadcasting industry.
It is important to note that the commercial product that resulted from TV technology, the TV-set receiver, was only part of a gigantic system that had to be developed for its support (actually imported from radio, with modifications and extensions), involving broadcast technology, the networks, regulation of the airwaves, advertising, and so forth.
Loebner refers to this need for systemwide concern with product development as the Edisonian model of technological innovation:
Edison's achievement of the invention of the long-life, commercially feasible light bulb was conducted in parallel with his successful development of the first dynamo for commercially producing electric power and with his design and implementation of the first electric-power distribution network.
The Knowledge Industry
Among the scientific disciplines that study knowledge, the potential for commercial applications of artificial intelligence presents unique opportunities. To identify and fill the niches in which intelligent machines will survive, we must ask questions about "knowledge" from a rather different perspective. We must identify the role that the various aspects of intelligence play, or could play, in the affairs of men, in such a way that we can identify correctable shortcomings in how things are done.
There is no question that the current best design of an intelligent system, the human brain, has its limitations. Computers have already helped people deal with such shortcomings as memory failures and confusions, overloading in busy situations, their tendency to boredom, and their need for sleep. These extended capabilities (total recall, rapid processing, and uninterrupted attention) are cognitive capabilities that we have been willing to concede to the new species in the genus of physical symbol systems. They have helped us do the things we did before, and have made some entirely new capabilities possible, for example, airline reservation systems, 24-hour banking, and Pac-Man (although the truly challenging computer "games" are yet to come!).
Intelligence is also going to be present in this new species, as envisioned 20 years ago by Marvin Minsky (1963):
I believe ... that we are on the threshold of an era that will be strongly influenced, and quite possibly dominated, by intelligent problem-solving machines. (p. 406)
Finding a way to apply this new intellectual capability, for effectively applying relevant experience to new situations, is the task ahead for AI, Inc. We have hardly begun to understand what this abundant and cheap intellectual power will do to our lives.
It has already started to change physically the research laboratories and the manufacturing plants. It is difficult for the mind to grasp the ultimate consequences for man and society. (Riboud, 1979)
It may be a while in coming, and it may involve a rethinking of the way we go about some cognitive activities. But it is extremely important that the development of intelligent machines be pursued, for the human mind not only is limited in its storage and processing capacity but also has known bugs: It is easily misled, stubborn, and even blind to the truth, especially when pushed to its limits. And, as is nature's way, everything gets pushed to the limit, including humans.
We must find a way of organizing ourselves more effectively, of bringing together the energies of larger groups of people toward a common goal. Intelligent systems, built from computer and communications technology, will someday know more than any individual human about what is going on in complex enterprises involving millions of people, such as a multinational corporation or a city. And they will be able to explain each person's part of the task. We will build more productive factories this way, and maybe someday a more peaceful world. We must keep in mind, following our analogy of flight, that the capabilities of intelligence as it exists in nature are not necessarily its natural limits:
There are other facets to this analogy with flight; it, too, is a continuum, and some once thought that the speed of sound represented a boundary beyond which flight was impossible. (Armer, 1963, p. 398)
Bibliography

Arbib, M. A. 1972. The metaphorical brain. New York: Wiley-Interscience.

Armer, P. 1963. Attitudes toward intelligent machines. In E. A. Feigenbaum and J. Feldman (Eds.), Computers and thought. New York: McGraw-Hill, 389-405.

Barr, A., Bennett, J. S., and Clancey, W. J. 1979. Transfer of expertise: A theme for AI research (Working Paper No. HPP-79-11). Stanford University, Heuristic Programming Project.

Barr, A., and Feigenbaum, E. A. (Eds.). 1981. The handbook of artificial intelligence (Vol. 1). Los Altos, Calif.: Kaufmann.

Becker, J. D. 1975. Reflections on the formal description of behavior. In D. G. Bobrow and A. Collins (Eds.), Representation and understanding: Studies in cognitive science. New York: Academic Press, 83-102.

Bernstein, J. 1981. Profiles: Marvin Minsky. New Yorker, December 14, pp. 50-126.

Cohen, P. R. 1982. Models of cognition: Overview. In P. R. Cohen and E. A. Feigenbaum (Eds.), The handbook of artificial intelligence (Vol. 3). Los Altos, Calif.: Kaufmann, 1-10.

Davis, R. 1976. Applications of meta-level knowledge to the construction, maintenance, and use of large knowledge bases (Tech. Rep. STAN-CS-76-564). Stanford University, Computer Science Department. (Reprinted in R. Davis and D. Lenat (Eds.), Knowledge-based systems in artificial intelligence. New York: McGraw-Hill, 1982, 229-490.)

Dresher, B. E., and Hornstein, N. 1976. On some supposed contributions of artificial intelligence to the scientific study of language. Cognition 4(4):321-398. (See also their replies to Schank and Wilensky, Cognition 5:147-150, and to Winograd, Cognition 5:379-392.)

Feigenbaum, E. A. 1977. The art of artificial intelligence, I: Themes and case studies of knowledge engineering. Proceedings of the Fifth International Joint Conference on Artificial Intelligence, 1014-1029.

Feigenbaum, E. A., and Feldman, J. (Eds.). 1963. Computers and thought. New York: McGraw-Hill.

Kornfeld, W. A., and Hewitt, C. 1981. The scientific community metaphor (Tech. Rep. AIM-641). Massachusetts Institute of Technology, AI Laboratory.

Lenat, D. B. 1981. The heuristics of nature (Working Paper No. HPP-81-22). Stanford University, Heuristic Programming Project.

Lindsay, R., Buchanan, B. G., Feigenbaum, E. A., and Lederberg, J. 1980. DENDRAL. New York: McGraw-Hill.

Loebner, E. E. 1976. Subhistories of the light emitting diode. IEEE Transactions on Electron Devices 23(7):675-699.

Loebner, E. E., and Borden, H. 1969. Ecological niches for optoelectronic devices. WESCON, Vol. 13, Session 20, 18.

Marr, D. 1977. Artificial intelligence: A personal view. Artificial Intelligence 9(1):1-13.

Maturana, H. 1976. Biology of language: The epistemology of reality. In Psychology and biology of language and thought. Ithaca, N.Y.: Cornell University Press.

McCorduck, P. 1979. Machines who think. San Francisco: Freeman.

McCulloch, W. 1964. The postulational foundations of experimental epistemology. In Embodiments of mind. Cambridge, Mass.: MIT Press, 359-372.

Miller, G. A., Galanter, E., and Pribram, K. H. 1960. Plans and the structure of behavior. New York: Holt, Rinehart and Winston.

Miller, L. 1978. Has artificial intelligence contributed to an understanding of the human mind? A critique of arguments for and against. Cognitive Science 2(2):111-128.

Minsky, M. 1963. Steps toward artificial intelligence. In E. A. Feigenbaum and J. Feldman (Eds.), Computers and thought. New York: McGraw-Hill, 406-450.

Minsky, M. (Ed.). 1968. Semantic information processing. Cambridge, Mass.: MIT Press.

Minsky, M., and Papert, S. 1969. Perceptrons: An introduction to computational geometry. Cambridge, Mass.: MIT Press.

Neisser, U. 1976. Cognition and reality. San Francisco: Freeman.

Newell, A. 1970. Remarks on the relationship between artificial intelligence and cognitive psychology. In R. Banerji and M. D. Mesarovic (Eds.), Theoretical approaches to non-numerical problem solving. New York: Springer-Verlag, 363-400.

Newell, A. 1973a. Artificial intelligence and the concept of mind. In R. Schank and K. Colby (Eds.), Computer models of thought and language. San Francisco: Freeman, 1-60.

Newell, A. 1973b. You can't play 20 questions with nature and win. In W. G. Chase (Ed.), Visual information processing. New York: Academic Press, 283-308.

Newell, A. 1980. Physical symbol systems. Cognitive Science 4(2):135-183.

Newell, A. 1981. The knowledge level. AI Magazine 2(2):1-20.

Newell, A., and Simon, H. A. 1972. Human problem solving. Englewood Cliffs, N.J.: Prentice-Hall.

Newell, A., and Simon, H. A. 1976. Computer science as empirical inquiry: Symbols and search (Turing Award Lecture, Association for Computing Machinery). Communications of the ACM 19(3):113-126.

Nilsson, N. 1974. Artificial intelligence. In J. L. Rosenfeld (Ed.), Proceedings of the IFIP Congress (Vol. 4). New York: American Elsevier, 778-801.

Nilsson, N. 1980. Principles of artificial intelligence. Palo Alto, Calif.: Tioga Press.

Norman, D. A. 1980. Twelve issues for cognitive science. Cognitive Science 4(1):1-32.

Papert, S. 1972. Paper given at the NUFFIC summer course on process models in psychology. The Hague: NUFFIC.

Riboud, J. 1979. Address to the meeting of shareholders, Schlumberger Limited.

Schank, R., and Abelson, R. 1977. Scripts, plans, goals, and understanding. Hillsdale, N.J.: Erlbaum.

Schank, R., and Wilensky, R. 1977. Response to Dresher and Hornstein. Cognition 5:133-146.

Searle, J. R. 1980. Minds, brains, and programs. Behavioral and Brain Sciences 3(3):417-457.

Shortliffe, E. H. 1976. Computer-based medical consultations: MYCIN. New York: American Elsevier.

Simon, H. A. 1969. The sciences of the artificial. Cambridge, Mass.: MIT Press.

Simon, H. A. 1980. Cognitive science: The newest science of the artificial. Cognitive Science 4(1):33-46.

Smith, R. G. 1978. A framework for problem solving in a distributed processing environment (Tech. Rep. STAN-CS-78-700). Stanford University, Department of Computer Science. (Doctoral dissertation.)

Torda, C. 1982. Information processing by the central nervous system and the computer (a comparison). Berkeley, Calif.: Walters.

Turing, A. M. 1950. Computing machinery and intelligence. Mind 59:433-460.

von Neumann, J. 1958. The computer and the brain. New Haven, Conn.: Yale University Press.

Winograd, T. 1977. On some contested suppositions of generative linguistics about the scientific study of language. Cognition 5:151-179.

Winograd, T. 1979. Beyond programming languages. Communications of the ACM 22(7):391-401.