On Looking Into the Black Box: Prospects and Limits in the Search for Mental Models

American Psychological Association, Psychological Bulletin
Abstract

To place the arguments advanced in this paper in perspective, alternative points of view with regard to mental models are reviewed. Use of the construct in areas such as neural information processing, manual control, decision making, problem solving, and cognitive science is discussed. Also reviewed are several taxonomies of mental models. The available empirical evidence for answering questions concerning the nature and usage of mental models is then discussed. A variety of studies are reviewed where the type and form of humans' knowledge have been manipulated. Also considered are numerous transfer of training studies whose results provide indirect evidence of the nature of mental models. The alternative perspectives considered and the spectrum of empirical evidence are combined to suggest a framework within which research on mental models can be viewed. By considering interactions of dimensions of this framework, the most salient unanswered questions can be identified.
Center for Man-Machine Systems Research

ON LOOKING INTO THE BLACK BOX: PROSPECTS AND LIMITS IN THE SEARCH FOR MENTAL MODELS

AD-A159 080

William B. Rouse
Nancy M. Morris

Report No. 85-2
May 1985

School of Industrial and Systems Engineering
Georgia Institute of Technology
A Unit of the University System of Georgia
Atlanta, Georgia 30332

Approved for public release; distribution unlimited.
REPORT DOCUMENTATION PAGE

1a. Report Security Classification: Unclassified
3. Distribution/Availability of Report: Approved for public release; distribution unlimited
4. Performing Organization Report Number(s): Technical Report 85-2
6a. Name of Performing Organization: Center for Man-Machine Systems Research
6c. Address: School of Industrial & Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332
7a. Name of Monitoring Organization: Personnel and Training Research Programs, Office of Naval Research (Code 442PT)
7b. Address: 800 North Quincy St., Arlington, VA 22217-5000
8a. Name of Funding/Sponsoring Organization: Office of Naval Research
8c. Address: 800 North Quincy St., Arlington, VA 22217-5000
9. Procurement Instrument Identification Number: N00014-82-K-0487
10. Source of Funding Numbers: NR 154-491
11. Title (Include Security Classification): On Looking into the Black Box: Prospects and Limits in the Search for Mental Models
12. Personal Author(s): William B. Rouse, Nancy M. Morris
13a. Type of Report: Technical
13b. Time Covered: From 6/1/82 to 5/31/86
14. Date of Report: May 1985
15. Page Count: 61
18. Subject Terms: mental models; expertise; instruction; knowledge representation
19. Abstract: The notion that humans have "mental models" of the systems with which they interact is a ubiquitous construct in many domains of study. This paper reviews the ways in which different domains define mental models, characterize the purposes of such models, and attempt to identify the forms, structures, and parameters of models. The resulting distinctions among domains are described in terms of two dimensions: 1) nature of model manipulation, and 2) level of behavioral discretion. A variety of salient issues emerge, including accessibility of mental models, forms and content of representation, nature of expertise, cue utilization, and, of most importance, instructional issues. Prospects for dealing with these issues are considered, as well as fundamental limits to identifying or capturing humans' "true" mental models.
21. Abstract Security Classification: Unclassified
22a. Name of Responsible Individual: Michael G. Shafto
22b. Telephone (Include Area Code): 202-696-4596

DD FORM 1473, 84 MAR. 83 APR edition may be used until exhausted; all other editions are obsolete.
ON LOOKING INTO THE BLACK BOX: PROSPECTS AND LIMITS IN THE SEARCH FOR MENTAL MODELS

William B. Rouse
Center for Man-Machine Systems Research
Georgia Institute of Technology
Atlanta, Georgia 30332

Nancy M. Morris
Search Technology, Inc.
25B Technology Park/Atlanta
Norcross, Georgia 30092
ABSTRACT
The notion that humans have "mental models" of the systems with which they interact is a ubiquitous construct in many domains of study. This paper reviews the ways in which different domains define mental models, characterize the purposes of such models, and attempt to identify the forms, structures, and parameters of models. The resulting distinctions among domains are described in terms of two dimensions: 1) nature of model manipulation, and 2) level of behavioral discretion. A variety of salient issues emerge, including accessibility of mental models, forms and content of representation, nature of expertise, cue utilization, and, of most importance, instructional issues. Prospects for dealing with these issues are considered, as well as fundamental limits to identifying or capturing humans' "true" mental models.

Approved for public release; distribution unlimited.
INTRODUCTION

It is a common assertion that humans have "mental models" of the systems with which they interact. In fact, it is difficult to explain many aspects of human behavior without resorting to a construct such as mental models [Conant and Ashby, 1970]. However, acceptance of the logical necessity of mental models does not eliminate conceptual and practical difficulties; it simply raises a whole new set of finer-grained issues. For example, what forms do mental models take? How does the form affect the usage of the models? Is guidance in the use of models as important as their form? How can and should designers and trainers attempt to affect humans' mental models?

These really are not new questions. However, as is discussed later, once they are expressed in terms of the concept of mental models, they tend to be dealt with somewhat differently. Further, despite many sweeping claims in the contemporary literature, available answers to the above questions are rather inadequate. There are prospects for improving this situation. However, there also are limits; the "black box" of human mental models will never be completely transparent. This paper considers these prospects and limits.

To place the arguments advanced in this paper in perspective, several points of view with regard to mental models are first reviewed. Alternative definitions, purposes, and taxonomies are discussed in the context of a variety of behavioral domains. This leads to a discussion of differences among domains, particularly in terms of methods for identifying the form, structure, parameters, etc. of mental models. From this discussion emerges a key set of issues, which initially are discussed in general. Discussion then focuses on issues specifically associated with instruction (i.e., fostering the creation of mental models). Finally, fundamental limits in the search for mental models are considered.
DEFINITIONS
While the phrase "mental models" is ubiquitous in the literature, there are surprisingly few explicit definitions provided. This most likely reflects the extent to which the concept has come to be completely acceptable on an almost intuitive basis. Nevertheless, it is interesting to consider the few formal definitions that have been espoused.
The manual control community has traditionally focused on skilled, psychomotor performance. More recently, the term "manual" is giving way to "supervisory" in recognition of the fact that the human's role is increasingly becoming one of monitoring automatically-controlled systems for the purpose of detecting, diagnosing, and compensating for system failures [Sheridan and Johannsen, 1976; Rasmussen and Rouse, 1981]. In reviewing the use of the concept of mental models in this domain, Veldhuyzen and Stassen [1977] conclude that a human's mental model includes knowledge about the system to be controlled, knowledge about the properties of disturbances likely to act on the system, and knowledge about the criteria, strategies, etc. associated with the control task. In a recent, and more circumspect, discussion of research in this area, Wickens [1984] refers to the concept of a mental model as a "hypothetical construct" to account for human behavior in sampling, scanning, planning, etc. Jagacinski and Miller [1978], also working in manual control, define mental models as special cases of "schema," a fairly well-accepted psychological notion of how skilled performance is organized (see Wickens [1984]).
While the manual control community has been blithely using the mental models concept for at least twenty years, cognitive psychology has only recently embraced this notion. This acceptance is clearest in the area of "cognitive science," which is basically the result of a liaison between cognitive psychology and computer science or artificial intelligence. This relatively new community of researchers has recently produced an edited book on mental models [Gentner and Stevens, 1983].
In contrast to manual and supervisory control, where mental models serve as assumptions which allow calculations of expected control performance, research in cognitive science tends to focus directly on mental models, particularly in terms of the ways in which humans understand systems. Norman [1983] characterizes this understanding as messy, sloppy, incomplete, and indistinct knowledge structures. Lehner and his colleagues [1984] have asserted that humans' mental models of a particular class of computer programs (i.e., expert systems) include understanding that: 1) the program's knowledge is encoded in rules, 2) rules are organized in the program in terms of a network of relationships, and 3) explanatory traces of program behavior involve chaining along this network. Definitions that emphasize somewhat narrower behavioral domains include topologies of device models [Brown and deKleer, 1981; deKleer and Brown, 1983] and collections of autonomous objects [Williams, et al., 1983]. Thus, it can be seen that definitions within the cognitive science community range from broad and intentionally amorphous generalizations to specific and somewhat esoteric constructs.
A very significant difficulty with the phrase "mental models" involves how one should differentiate this concept from that of "knowledge" in general. Does this phrase reflect the common tendencies of young sciences to re-label everyday phenomena? Certainly cognitive science and especially artificial intelligence appear to have penchants for coining terminology. Nevertheless, in this case, it appears to be reasonable to employ the concept of mental models to connote special types of knowledge. This becomes clear when one considers the purposes that mental models are supposed to serve.
PURPOSES
The above discussion tended to emphasize the differences in perspectives of researchers in manual/supervisory control and cognitive science. These differences in definitions and terminology are considerably lessened once one considers purposes.
Veldhuyzen and Stassen [1977], in their review of the use of the mental model concept in manual control, conclude that mental models provide the basis for estimating the "state" of the system (i.e., estimating state variables that are not directly displayed), developing and adopting control strategies, selecting proper control actions, determining whether or not actions led to desired results, and understanding unexpected phenomena that occur as the task progresses. This conclusion, in effect, asserts that mental models are the basis for all aspects of manual control. Such a sweeping assertion can lead one to surmise that "mental models" are synonymous with "knowledge" in general. In fact, Veldhuyzen and Stassen appear to be correct in the sense that the phrase mental models is used in this way, especially in the manual/supervisory control community. However, this is not the way the phrase should be used. More precision is needed; otherwise, there is a great risk that the result of research in this area will simply be that, "humans have to know something in order to perform their tasks." Clearly, this result will not be a great stride for science.
Rasmussen [1979, 1983], also working within the domain of supervisory control, limits the range of purposes of mental models. He asserts that mental models are for predicting future events, finding causes of observed events, and determining appropriate actions to cause changes [Rasmussen, 1979]. He also includes the use of mental models for performing "internal" experiments [Rasmussen, 1983], or what physicists refer to as "thought" or "Gedanken" experiments [Zukav, 1979].
Alexander [1964] discusses the "mental pictures" employed by engineering and architectural designers. These pictures are defined quite broadly in terms of contexts (problem definitions) and forms (alternative solutions). Hence, the purposes of designers' mental pictures or models are viewed as much more encompassing than the models discussed in the supervisory control arena. This difference in scope most likely reflects inherent differences between open-ended tasks such as design and well-defined tasks like supervisory control.
Within the cognitive science domain, Williams and his colleagues [1983] claim the purposes of mental models to be predicting and explaining system behavior and serving as mnemonic devices for remembering relationships and events. Evidencing a more traditional psychological point of view, Wickens [1984] reports that mental models are constructs used by researchers to explain display sampling and scanning, formulating of plans, and translating of goals into actions. He also suggests that mental models are sources of humans' expectations.
The intersection of the various points of view outlined in this section leads to a fairly clear set of purposes for mental models. The common themes are describing, explaining, and predicting, regardless of whether the human is performing internal experiments, scanning displays, or executing control actions. These three terms can be combined with a modification of Rasmussen's taxonomy of mental models [Rasmussen, 1979] to yield the integrated view of the purposes of mental models shown in Figure 1.
[Figure 1. Purposes of mental models: describing, explaining, and predicting applied to a system's purpose (why a system exists), function (how a system operates), state (what a system is doing), and form (what a system looks like).]

Based on this figure, a functional definition of mental models can be proposed: mental models are the mechanisms whereby humans are able to generate descriptions of system purpose and form, explanations of system functioning and observed system states, and predictions of future system states. It is important to emphasize that this definition does not differentiate between
knowledge that is simply retrieved and knowledge that involves some type of calculation. Thus, humans' mental models are not necessarily computational models.

It was noted earlier that a models = knowledge definition should be avoided if the mental models construct is to have any real utility. The above definition does not eliminate this problem, which serves to underscore the possibly marginal value of the construct. Nevertheless, the proposed definition does specify particular types of knowledge and the purposes for which this knowledge is used. This level of specificity is sufficient to enable a meaningful inquiry into the nature of mental models.
IDENTIFICATION
Given the above functional definition of mental models, one can then reasonably consider how these mechanisms might be identified. In other words, what forms, structures, parameters, etc. are associated with mental models of particular individuals for specific task situations? There are a variety of approaches to these types of question.
Inferring Characteristics Via Empirical Study

Perhaps the most traditional approach to the study of mental models is the use of experimental methods to infer the
Empirical Modeling

In situations where perception and response execution are unlikely to interact with model manipulation, empirical modeling may be possible. This approach involves algorithmically identifying the relationship between humans' observations and subsequent actions. If it can be assumed that humans actually perceive what is displayed and response execution is very simple, then techniques such as regression can be used to identify input-output relationships. From these relationships, the structure and parameters of mental models can be inferred.
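This regression approach can be sketched concretely. The example below is a hypothetical simulation, not data or a model from any of the studies cited: a simulated "operator" predicts a future system state as a noisy linear function of two displayed state variables, and least-squares regression recovers the weights of the hypothesized mental model for comparison with the true system dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task: an operator watches two displayed state variables
# (x, y) and predicts the next value of x. The "true" system is
#   x[t+1] = 0.9 * x[t] + 0.3 * y[t],
# but this simulated operator's internal model under-weights y.
X = rng.normal(size=(200, 2))                # displayed states on each trial
operator_weights = np.array([0.85, 0.10])    # unknown to the analyst
predictions = X @ operator_weights + rng.normal(scale=0.05, size=200)

# Identify the input-output relationship between observations and
# responses; the fitted weights are the inferred mental-model parameters.
w_hat, *_ = np.linalg.lstsq(X, predictions, rcond=None)

true_system = np.array([0.9, 0.3])
print("inferred weights:", w_hat.round(2))
print("deviation from true system:", (w_hat - true_system).round(2))
```

By construction, the inferred weights here deviate systematically from the true system; with real subjects, of course, the assumed linear form is itself part of the hypothesis being tested.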
Jagacinski and Miller [1978] employed this approach for a "bang-bang" time-optimal manual control task where regression on subjects' "switching curves" allowed inferences about mental models. Several investigators have studied the relationships between humans' explicit predictions of future system states and currently displayed states, using regression or time-series models to identify input-output relationships [Rouse, 1977; van Bussel, 1980; van Heusden, 1980]. All four of the above studies resulted in hypothesized mental models that differed systematically from the "true" model of the system involved.
It is worth noting that related approaches have been employed in a variety of studies of human judgement. Anderson's "cognitive algebra" and Hammond's "policy capturing" are two notable examples; a thorough review of these and other efforts is provided by Hammond and his colleagues [1980]. These studies of the combining of cues to form judgements are rather different than the types of task discussed thus far in this paper, in that the combination rules that are identified do not necessarily directly relate to any explicit model of the system. Nevertheless, the whole issue of cue utilization is very important and is discussed further later in this paper.
Analytical Modeling

There are very few tasks where empirical modeling is appropriate. For most tasks, the input-output relationships identified would be very likely to be confounded with characteristics of displays and controls, as well as subjects' interpretations of performance criteria. Analytical modeling is a common approach to these types of task, particularly in the manual/supervisory control community.

Analytical modeling involves using available theory and data to formulate assumptions about the form, structure, and perhaps parameters of mental models for particular tasks. Based on these assumptions, human performance (e.g., RMS tracking error) is calculated or computed analytically and compared to empirical performance data. A common practice is to adjust the parameters of the assumed mental model in order to minimize differences between the analytical and empirical performance metrics. If the resulting differences are fairly small, one can conclude that the assumed mental model is a reasonable approximation for the purpose of predicting the performance metric of interest. In contrast, one cannot safely conclude that one has identified the "real" mental model. Unfortunately, this leap, perhaps of faith, occurs not infrequently.
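This parameter-adjustment practice can be illustrated with a deliberately simple sketch. Everything below is hypothetical: a first-order compensatory tracking loop, a mental model whose single free parameter is the operator's assumed system gain, and a made-up "empirical" RMS error to be matched.

```python
import numpy as np

def predicted_rms_error(model_gain, true_gain=1.0, n_steps=500):
    """Predict tracking performance for an operator whose corrective
    actions are based on an *assumed* system gain (the free parameter
    of the hypothesized mental model)."""
    rng = np.random.default_rng(1)
    error, errors = 0.0, []
    for _ in range(n_steps):
        disturbance = rng.normal(scale=0.1)
        correction = -error / model_gain       # cancel error per the mental model
        error = error + true_gain * correction + disturbance
        errors.append(error)
    return float(np.sqrt(np.mean(np.square(errors))))

# Pretend this metric came from an experiment with real subjects.
empirical_rms = 0.12

# Adjust the mental-model parameter to minimize the difference between
# the analytical and empirical performance metrics (grid search).
candidate_gains = np.linspace(0.5, 2.0, 31)
mismatch = [abs(predicted_rms_error(k) - empirical_rms) for k in candidate_gains]
best_gain = float(candidate_gains[int(np.argmin(mismatch))])
print("best-fitting mental-model gain:", round(best_gain, 2))
```

In runs of this kind, quite different parameter values can produce nearly the same overall metric, which is precisely why matching such a metric does not establish that one has identified the "real" mental model.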
The nature of some domains virtually dictates the use of analytical modeling. Neural information processing is a good example, where basic knowledge of neuron behavior is used to synthesize network models. The overall behaviors of these network models are analytically determined and compared to empirical results of basic psychological studies [Anderson, 1983]. The complexity of the neural system is such that a purely empirical approach is simply not feasible.
As noted earlier, analytical modeling is quite common in the manual/supervisory control domain. Because of the very constrained nature of many manual control environments (i.e., the human must adapt to the task in order to perform acceptably), a common assumption is that humans' mental models are perfect relative to the real system (e.g., [Kleinman, et al., 1971]). However, for tasks involving only monitoring [Smallwood, 1967; Sheridan, 1970], especially when apparent discontinuities occur in the state trajectory [Cagalayan and Baron, 1981], imperfect models are often assumed. Imperfect mental models are also assumed for tasks that involve slowly-responding systems such as ships and process plants [Veldhuyzen and Stassen, 1977], where the human has much greater discretion in terms of the timing and magnitude of control actions.
The assumption of an imperfect mental model can be problematic from an analytical point of view. If a perfect mental model can be assumed, one need only perform an engineering analysis of the system of interest to identify the model. In a sense, there is only one choice. In contrast, there is an infinity of alternative imperfect models, and justifying the choice of any particular alternative can be difficult. Of course, if one's objective is solely the prediction of some overall performance metric, this difficulty may be minor. However, the fact that one is able to "match" such an overall metric does not imply that one can reasonably conclude that the imperfections assumed in the analytical model are identical to the actual imperfections inherent in the human's mental model.
Direct Inquiry

Perhaps an obvious alternative to the somewhat indirect methods of identification discussed above is simply to ask people about their mental models. Introspection, in a variety of forms, was a common approach to psychological research in the 19th century, particularly in Europe. However, the behaviorist movement of Watson [1914] and later Skinner [1938] almost completely divested this approach of any credibility it may have had within experimental psychology. Fortunately, the last two decades have produced a substantial softening of the strict behaviorist perspective. Nevertheless, psychologists' yearning to be like physicists still persists to some extent, despite fundamental and irreducible differences between the two domains of study [Rouse, 1982].
An approach to introspection that has gained substantial currency of late is the verbal protocol, which is simply a transcript of a human "thinking aloud" as he or she performs a task. Insightful analyses of verbal protocols have been performed for troubleshooting [Rasmussen and Jensen, 1974], process control [Bainbridge, 1979], device understanding [Williams, et al., 1983], problem solving in elementary physics [Gentner and Gentner, 1983], and various game-like tasks [Newell and Simon, 1972]. In the cognitive science domain, there are many examples of verbal protocols serving as the "data" from experiments; see [Gentner and Stevens, 1983]. While there are strong advocates of this approach in the manual/supervisory control community [Bainbridge, 1979; Rasmussen, 1979, 1983] as well as the cognitive science community [Newell and Simon, 1972; Ericsson and Simon, 1980, 1984], there are also more circumspect views [Nisbett and Wilson, 1977].
Certainly, what humans say they are thinking about or intend to do is interesting and of value. However, verbalization of a non-verbal (e.g., spatial or pictorial) image may result in severe distortions and biases. Further, verbal protocols provide, at best, information about what humans are thinking about, but little direct information about how they are thinking (i.e., about the underlying information processing). Therefore, it seems prudent to view verbal protocols as quite useful, but far from conclusive. As a result, such data may be more useful for generating hypotheses for subsequent experimentation rather than as a primary means for testing hypotheses (unless, of course, the hypotheses only address the "what" of thinking).
Another approach to direct identification of mental models is interviews and/or questionnaires. In general, this approach is quite different from verbal protocols. However, in some cases, the only difference between this approach and verbal protocols is the fact that the inquiry does not occur as the task is performed. Studies of air traffic control by Falzon [1981] and Whitfield and Jackson [1982], and of marine navigation by Hutchins [1983], are of this type. In contrast, interviews and/or questionnaires concerning preferences or judgements are not necessarily task-oriented. In such cases, there is really no reason to make inquiries during task performance. An excellent example of this type of situation is the study of "mental maps" by Gould and White [1974], where the concern was with geographical perceptions and preferences. (Wickens [1984, pp. 189-192] reviews a variety of studies of how humans' mental representations of imagined maps tend to be distorted.)
As an interesting aside, the above observations on direct inquiry have important implications for the design of "expert systems." Succinctly, experts may not be able to verbalize their expertise. Perhaps worse, their verbalizations may reflect what they expect is wanted by the inquirer rather than how they actually perform. An example of evidence of this phenomenon is a recent study of process control operators whose explanations of what they thought they would (or perhaps should) do were at variance with their actual behaviors [Morris and Rouse, 1985; Knaeuper and Rouse, 1985].
Summary

Reconsidering all of the approaches to identification discussed in this section, it is clear that each type of approach has substantial advantages for some types of task, but also important weaknesses. Further, while employing multiple approaches can compensate for these weaknesses to an extent, the possibility of totally "capturing" the mental model is rather remote. This is, in part, due to the great likelihood that a mental model does not exist as a static entity having only a single form.
TAXONOMIES

It is fairly easy to accept the assertion that any particular phenomenon can be thought of in a variety of ways. For example, one can think of an automobile as a collection of electromechanical elements that convert chemical energy of fuel to mechanical energy in terms of motion. In contrast, one can view an automobile as a sleek, sculptured, and powerful extension of one's persona. Both of these "mental models" involve the same physical entity. However, the verbal protocols produced for these two models of an automobile would differ in rather dramatic ways. This would be the case even if the two protocols were produced by the same individual.
As noted earlier, Rasmussen [1979] has developed a taxonomy of alternative mental models of systems. His taxonomy moves from concrete to abstract perspectives in terms of five types of model: 1) physical form, 2) physical function, 3) functional structure, 4) abstract function, and 5) functional meaning or purpose. Thus, roughly speaking, a system can be viewed as what it looks like, how it functions, or why it exists. All of these views are "correct" and of value for answering a variety of questions about a system.
Norman [1983] uses the word "conceptualization" to characterize researchers' models of humans' mental models. This characterization serves to emphasize the difficulty of studying mental models, in that one is basically searching for approximations of approximations of reality [Cohen and Murphy, 1984], a process that can be viewed as akin to estimating the variance of the variance in statistical modeling.
The conceptualizations chosen by researchers tend to reflect their methodological backgrounds and the way in which they assume humans are likely to view the systems of interest. Assumptions about how people view systems are, of course, also likely to be affected by researchers' backgrounds (e.g., engineers may think that operators and maintainers view systems from an engineering perspective). Thus, researchers' mental models affect their conceptualization of other humans' mental models; to avoid getting sidetracked by this issue, it is not pursued further until a later section of this paper.
A practical implication of this phenomenon is that it is quite natural to taxonomize mental models in terms of conceptualizations. In reviewing how researchers have approached human detection and diagnosis of system failures, Rasmussen and Rouse [1981] contrast conceptualizations involving differential equations, functional block diagrams, and "snapshots" of physical form as examples of different ways that various researchers view similar problems.
Beyond differences in conceptualizations dictated by researchers' natural inclinations, there are important, and hopefully more substantial, effects of differences in how mental models are used. Young [1983] has suggested a range of uses of mental models.
For example, a mental model might be used as a way of describing a device independent of its usage. Another use of a mental model of a device might be to represent the input-output relationships associated with typical uses of the device. Yet another use of a mental model of a device is as a means of understanding an analogous device (e.g., a VDU is like a typewriter).
The clear implication of such usage-oriented perspectives is that humans' mental models of a system (e.g., within Rasmussen's taxonomy), and the most appropriate conceptualizations of these models, depend upon the tasks to be performed. If the system is used in multiple ways (e.g., the automobile example noted earlier), then multiple mental models are likely to be developed. Therefore, a taxonomy that is purely system oriented (i.e., task independent) will be, at best, inadequate; a behavior-oriented framework is also needed.
Of course, approaching mental models, or cognition in general, from a behavior or performance point of view is the norm in experimental psychology. Taxonomic efforts in this discipline tend to produce attribute-oriented characterizations for particular tasks. For example, Wickens [1984] discusses specificity and code of representation as attributes of mental models in process control.
From the foregoing discussion, it is clear that efforts to develop taxonomies of mental models are heavily influenced by the domain being investigated (e.g., word processing vs. vehicle control), as well as the backgrounds of the investigators (e.g., psychology vs. engineering vs. computer science). Research in a wide variety of domains can be characterized as dealing with mental models.
Thus, the literature cited in this paper includes several domains: 1) neural information processing, 2) manual control, 3) supervisory control, 4) understanding of devices (e.g., for maintenance purposes), 5) problem solving in physics, and 6) making value judgements.
While all of the research cited in these domains explicitly deals with mental models (or equivalent concepts), these efforts differ substantially in terms of conceptualizations chosen and identification methods employed. It appears that these differences can be explained by distinctions among domains along two dimensions: 1) nature of model manipulation, and 2) level of behavioral discretion.
The distinctions among the various domains listed above are illustrated in terms of these two dimensions in Figure 2. (Note that "understanding of devices" appears as "system maintenance" and "using assembly instructions.")

FIGURE 2. DISTINCTIONS AMONG DOMAINS
[Figure: nature of model manipulation (implicit to explicit) on the horizontal axis vs. level of behavioral discretion (none to full) on the vertical axis. Making value judgements and problem solving in physics appear toward full discretion, supervisory control in the middle, manual control and system maintenance lower, and neural information processing (implicit) and using assembly instructions (explicit) at the no-discretion extreme.]
The nature of model manipulation can range from implicit to explicit, where these terms refer to whether or not a human is aware of his or her manipulation of a mental model. As an example, one is likely to be totally unaware of manipulating neural network representations in associative memory. In contrast, assembling devices or solving physics problems is likely to involve explicit manipulation of models.
An alternative point of view relative to this dimension is to consider the terms "implicit" and "explicit" as indicative of a dichotomy rather than end points on a continuum. The result is an analogy of the compiled vs. interpreted processes of Newell and Simon [1972]. One can also express this difference in terms of systems vs. applications software. The basic idea is that the "source code" for compiled processes or systems software is no longer available to the human who, therefore, cannot report on how it operates.
The level of behavioral discretion can range from none to full, where, as above, these terms refer to the extent that a human's behavior is a matter of choice, as opposed to being dictated by the task. At one extreme, phenomena such as neural information processing are unlikely to be discretionary. However, as tasks are more oriented toward decision making and problem solving, opportunities for discretion are more likely.
Interestingly, humans' roles in many engineering systems are tending toward tasks that involve greater discretion; the more task-dominated aspects of system operations are being increasingly automated.
While the relative placement of domains in Figure 2 is far from exact, the distinctions emphasized in this figure provide a basis for explaining methodological differences among domains.
Considering identification methods, two generalizations seem reasonable. First, inferential methods (i.e., empirical assessment, empirical modeling, and analytical modeling) tend to yield more accurate descriptions when there is little discretion. This is because the nature of the conceptualization of a mental model can be based on external environmental and organizational constraints. Since the human has little discretion, he or she can be assumed to adapt to these constraints and the resulting mental model will reflect this adaptation.
The second generalization is that verbalization methods (i.e., verbal protocols, interviews, and questionnaires) are likely to provide more appropriate descriptions when there is explicit manipulation. This is simply due to the fact that the need for explicit manipulation may result in verbalization being a "natural" part of a task.
Of course, it is also quite possible that manipulation may be explicit, but the mental model is, for example, spatial rather than verbal, or perhaps in terms of subjective images rather than objective constructs.
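The two generalizations can be read as a simple decision rule over the dimensions of Figure 2. The following sketch is illustrative only: the domain coordinates and the 0.5 thresholds are assumptions introduced here, not values taken from the figure.

```python
# Hypothetical coordinates on the two dimensions of Figure 2
# (0 = implicit/none, 1 = explicit/full); values are illustrative only.
DOMAINS = {
    "neural information processing": (0.0, 0.0),
    "manual control":                (0.2, 0.3),
    "system maintenance":            (0.6, 0.3),
    "supervisory control":           (0.5, 0.6),
    "problem solving in physics":    (0.8, 0.8),
    "making value judgements":       (0.3, 0.9),
    "using assembly instructions":   (0.9, 0.1),
}

def suggested_methods(manipulation: float, discretion: float) -> list[str]:
    """Apply the two generalizations: inferential methods when discretion
    is low; verbalization methods when manipulation is explicit."""
    methods = []
    if discretion < 0.5:
        methods.append("inferential (empirical assessment/modeling, analytical modeling)")
    if manipulation >= 0.5:
        methods.append("verbalization (protocols, interviews, questionnaires)")
    if not methods:
        # Upper left of Figure 2: implicit manipulation, high discretion.
        methods.append("methodologically difficult (upper left of Figure 2)")
    return methods

for name, (m, d) in DOMAINS.items():
    print(f"{name}: {suggested_methods(m, d)}")
```

Note that under these assumed coordinates, only the judgment-related domains fall outside both rules, consistent with the methodological difficulties discussed below.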
If accepted, these two generalizations have important implications. Most obvious is the conclusion that domains toward the upper left of Figure 2 are likely to present methodological difficulties, at least in the sense that mental models will be elusive.
An example is the aforementioned research on human judgement (e.g., [Hammond, 1980]), which attempts to "capture" relationships between features observed and decisions made. The results of such analyses indicate, at most, what is taken into account in the process of social decision making, but not how this information is processed in the context of one or more mental models. The types of situation addressed are too laden with implicit values and too open to discretion to allow mental models to be "captured" to the extent that they can be, for example, for device understanding.
Studies of human judgment in the area of personal relations [Hammond, 1980] and personal geographical preferences [Gould and White, 1974] are good examples of this limitation.
Expanding upon the above notion, an overall implication of the generalizations drawn from Figure 2 is that the possible level of specificity of conceptualizations of mental models, and perhaps even the form of conceptualizations, are limited by the location of a task domain along the nature of manipulation/level of discretion dimensions. In fact, it seems reasonable to conjecture that these limits may be fundamental. Elaboration of this conjecture is, however, delayed until a later section of this paper.
SALIENT ISSUES
From the discussion thus far, it is clear that there is a plethora of issues surrounding the topic of mental models. Many of these are relatively minor, involving terminology and inherent differences among domains. A few issues, however, appear repeatedly in the literature and are dominant in many of the domains discussed in this paper. This section, as well as the following section, explores the nature of these issues. The discussion proceeds in the following sequence:
1. Accessibility - To what extent is it possible to "capture" individuals' mental models?

2. Forms of representation - What do mental models look like (e.g., spatial vs. verbal)?

3. Context of representation - To what extent can mental models be general rather than totally context-dependent?

4. Nature of expertise - How do the mental models of novices and experts differ?

5. Cue utilization - How are mental models affected by the cues one employs, either by choice or due to availability?

6. Instruction - How can and should training affect individuals' mental models?
The rationale underlying the ordering of these topics is to consider first the inherent nature of mental models, particularly as affected by context, expertise, and available cues, and then to focus on approaches to fostering the development of appropriate mental models.
Accessibility
As might be surmised from the foregoing discussion, the accessibility of mental models is a recurrent and important issue. While the considerations outlined earlier need not be repeated, it is of value to note a few examples where accessibility appears limited in the sense that researchers' abilities to "capture" mental models are constrained by humans' lack of abilities to verbalize their models.
Van Heusden [1980] found that subjects had difficulty verbalizing how they predicted future states of time series. Whitfield and Jackson [1982] reported that air traffic controllers had difficulty verbalizing their "picture" of the state of the system.
Wickens [1984] notes that models for control are less verbalizable than models for detection and diagnosis. As noted earlier, Morris and Rouse [1985] and Knaeuper and Rouse [1985] found that subjects' answers regarding what they would (or perhaps should) do were different from what they actually did. Therefore, while the intent is not to belabor the point, an important issue concerns when verbalization is possible, reliable, and valid. (The previous discussion surrounding Figure 2 suggests how this issue might be viewed.)
Forms of Representation
The accessibility of mental models, as well as their use in general, depends on their forms of representation. This issue concerns how mental models are encoded and perhaps evolve. While neural information processing approaches to this issue are emerging [Anderson, 1983], the potential of such fine-grained descriptions appears, at least at this point in time, to be limited to providing explanations of very elementary psychological phenomena rather than behavior in realistically complex tasks.
One important distinction relative to form is spatial vs. verbal. Considering humans' exquisite pattern recognition abilities, it is likely that the human information processing system is particularly adept at processing spatially-oriented information and, hence, may tend to store information in that manner. Therefore, it seems reasonable to suggest that mental models are frequently pictorial or image-like rather than symbolic in a list-processing sense. This obviously presents difficulties when humans are asked to verbalize their models (e.g., the air traffic controllers of Whitfield and Jackson [1982]).
Even when verbal representations are likely (or at least useful), the vocabulary or "ontology" of such descriptions can be an important factor in the effectiveness of these representations for problem solving [Greeno, 1983]. An excellent example is that reported by Falzon [1982] where air traffic controllers thought of their task in terms of aircraft "separations" rather than "positions."
Another important distinction relative to form is representational vs. abstract. Rasmussen's taxonomy of mental models illustrates how any particular system can be described at various points along this dimension [Rasmussen, 1979]. Larkin [1983] distinguishes expert from novice solvers of physics problems in terms of abstract vs. representational mental models.
Context of Representation
A related issue concerns the context of representation, rather than the form, and whether it is general or specific (e.g., general principles of physics or specific heuristics for troubleshooting a particular device). In reviewing the available evidence for process control, Wickens [1984] concludes that mental models tend to be specific. However, if specific representations are predominant, it is difficult to account for the richness of human problem solving behavior (i.e., abilities to solve novel problems).
Explanations of this richness have included learning via metaphors [Carroll and Thomas, 1982], analogical problem solving [Sternberg, 1977; Gentner and Gentner, 1983; Silverman, 1983], and use of multiple models [Rasmussen, 1983]. While the issue of general vs. specific knowledge is certainly not new (e.g., [Peirce, 1877]), it is far from resolved.
Part of the difficulty is inherent in the topic. Tasks and behavior are always specific. Hence, "general" phenomena are not observable. Yet, such constructs seem to be necessary to explain, for example, human behavior in unfamiliar situations [Glaser, 1984].
Given the fact that much of what is routine is increasingly being automated, leaving humans to deal with the non-routine, a recurring theme is training humans to have general skills for dealing with a wider variety of less familiar tasks. As might be expected, therefore, the general vs. specific issue is likely to continue to receive attention.
Nature of Expertise
At least a portion of the general vs. specific debate has focused on the nature of expertise. The question of concern, within the context of this paper, is how experts' mental models differ from those of novices.
Intuitively, one might think that experts simply know more than novices (i.e., have more elaborate and accurate mental models). However, experts' mental models are not just more elaborate or accurate; evidence suggests that they are fundamentally different from novices' models [Chi and Glaser, 1984; Glaser, 1984; Greeno and Simon, 1984].
Wiser and Carey [1983] have concluded that the "novice-expert shift" involves a conceptual change, rather than just refinement of the novice's perspective. As noted earlier, Larkin [1983] discusses this shift as a movement from representational to abstract models. Chase and Simon [1973], as well as Dreyfus and Dreyfus [1979], describe expertise in terms of highly-developed repertoires of pattern-oriented representations.
If one accepts the conclusion that experts tend to have conceptually abstract, pattern-oriented mental models, then one must simultaneously question the accessibility of these models via verbalization methods. This has, of course, important implications for developers of "expert systems."
An interesting phenomenon related to expertise is the fact that the shift away from novice does not necessarily imply that all naive notions are discarded. DiSessa [1982] and McCloskey [1983] found that naive, "pre-Newtonian" theories of motion were retained by students even after instruction in "correct" theories. Similarly, Clement [1983] found that the naive idea of "motion implies force" was retained even after instruction that indicated otherwise. Thus, individuals who know what is "correct" may also retain ideas that are "wrong," perhaps because their real-world (as opposed to instructional) experiences tend to be such that inconsistencies do not occur.
In other words, mental models may include a bit of "baggage" remaining from earlier experiences that humans find no need to question or discard, even though this baggage may create difficulties when novel situations are encountered.
An alternative interpretation of the above results is that the subjects studied were not "experts" in the full sense of the word; otherwise, their naive notions would have been dispelled. While this position is reasonable, it runs the risk of investing in experts the non-human characteristic of always being correct. Alternatively, one can define expertise in relative terms. From this perspective, the results cited above are perhaps suggestive of the inherent limitations of expert opinion.
Cue Utilization
An issue that is often overlooked in discussion of mental models is cue utilization. In order to predict future system states or explain the current state, two things are needed: 1) one has to know what the current state is, and 2) one has to have some mechanism that emulates the process whereby the state evolves.
The human's internalization of this mechanism is usually thought of as the mental model; however, the development and use of this mechanism cannot be divorced from the human's abilities to extract from the environment the cues necessary to form the state estimates upon which this mechanism operates.
An excellent example of possible confounding of cue utilization and mental models can be found in various studies of humans' abilities to predict future system states. Independent studies by Rouse [1977], van Bussel [1980], and van Heusden [1980] have concluded, via empirical modeling methods, that humans' models reflect inappropriate weightings of past system states. All three of these efforts assumed that past states were accurately observed, or at most were subject to zero-mean Gaussian observation noise. However, despite these researchers' serious efforts to avoid it, subjects may have produced consistently biased or distorted state estimates which led them to develop what appeared to be inappropriate mental models.
For example, subjects may have looked for spatial patterns such as number of reversals or repeated subpatterns in the displayed time series rather than using the "state" as the investigators had intended. If this was the case, it may have been that the mental models developed by subjects were "optimal" (i.e., the best fit) for those cues. In other words, it may have been that their cue utilization dictated the limits to the accuracy of their models.
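This confound can be illustrated numerically. The sketch below is not a model from any of the cited studies; the AR(2) weights and the cue-distortion vector are arbitrary assumptions. A simulated "subject" applies the correct dynamics to systematically distorted state estimates, yet an experimenter who regresses the subject's predictions on the true past states recovers distorted weights, which would be read as an "inappropriate mental model."

```python
import numpy as np

rng = np.random.default_rng(0)

# True dynamics: a stable AR(2) process the subject is asked to extrapolate.
true_w = np.array([1.2, -0.5])
n = 500
x = np.zeros(n)
for t in range(2, n):
    x[t] = true_w @ x[t - 2:t][::-1] + 0.1 * rng.standard_normal()

# Hypothetical "subject": applies the *correct* weights, but to biased
# state estimates (the most recent cue is systematically under-weighted).
bias = np.array([0.7, 1.0])
preds = np.array([true_w @ (bias * x[t - 2:t][::-1]) for t in range(2, n)])

# Experimenter's empirical model: regress the subject's predictions on the
# true past states, as if observation were unbiased.
X = np.column_stack([x[1:n - 1], x[0:n - 2]])
w_hat, *_ = np.linalg.lstsq(X, preds, rcond=None)

print("true weights:     ", true_w)
print("recovered weights:", w_hat)  # bias folded into the first weight
```

The recovered first weight is 1.2 × 0.7 = 0.84 rather than 1.2: the distortion in cue utilization is indistinguishable, in this analysis, from an inappropriate weighting of past states.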
This phenomenon has implications for explaining the impact of predictor displays. A predictor display explicitly depicts, via a model of the system, the future states of the system and has been shown to result in improved system performance [Sheridan and Ferrell, 1974, pp. 268-273].
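The kind of model-based extrapolation such a display performs can be sketched simply: roll a system model forward from the current state estimate and plot the resulting states. The linear plant and damping value below are assumptions for illustration, not a model from the cited work.

```python
import numpy as np

def predict_trajectory(A: np.ndarray, x0: np.ndarray, steps: int) -> np.ndarray:
    """Roll a discrete linear system model x[k+1] = A @ x[k] forward to
    produce the future states a predictor display would show the operator."""
    xs = [x0]
    for _ in range(steps):
        xs.append(A @ xs[-1])
    return np.stack(xs)

# Hypothetical second-order plant: position/velocity with slight damping.
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 0.95]])
traj = predict_trajectory(A, np.array([0.0, 1.0]), steps=5)
print(traj[:, 0])  # predicted positions displayed to the operator
```

Note that the display relieves the operator of exactly the higher-order extrapolation discussed next, which is why its benefit can be attributed either to an imperfect mental model or to limited cue utilization.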
One explanation for this improvement is that humans' mental models of the systems involved were other than perfect. Alternatively, as argued above, it could be that they simply tended to have difficulty estimating the higher-order state variables (e.g., acceleration and its derivatives). A study by Johannsen and Govindaraj [1980] supports the latter hypothesis.
They used a manual control model to assess the effects of a predictor display, which they represented solely in terms of improved cue utilization. Experimental data supported their formulation, although their study was designed for purposes other than providing a definitive test of the cue utilization vs. imperfect mental model issue.
Increasing levels of automation in engineering systems have led to a variety of studies of the impact on human performance of manually controlling vs. monitoring of automatic controls in tasks such as failure detection. Kessel and Wickens [1982] found that subjects trained in failure detection while manually controlling subsequently produced better failure detection performance when monitoring an automatically controlled system. They concluded that training that included manual control leads to improved cue utilization.
Ephrath and Young [1981] reach what at first glance appears to be almost the opposite conclusion but which, upon closer inspection, mainly serves to illustrate the subtleties of the issue. (For example, the value of information is related to the human information processing resources required to utilize the information.)
In a rather different study, but still within the manual control domain, Cohen and Ferrell [1967] found that subjects' abilities to estimate "readiness" of the driver to perform difficult maneuvers with an automobile were no different if they were to perform the maneuver themselves or they were simply observing another driver (i.e., manual involvement did not enhance performance).
The above studies on prediction, predictor displays, and manual control mainly serve to emphasize the importance of cue utilization in development and use of mental models. Succinctly, one's conceptualization of how something works is highly influenced by what observations one chooses to make.
Therefore, when attempting to identify the cause of suboptimal performance by humans, one should try to avoid confounding information processing limits (e.g., memory) and inappropriate or inadequate cue utilization. In some situations, these two types of limitation seem to have demonstrably different effects [Baron and Berliner, 1977]. However, in general it appears that insufficient attention has been devoted to this issue.
An interesting aspect of cue utilization is the extent to which it differs for novices and experts. In general, experts are not found to be unduly influenced by superficial cues [Chi and Glaser, 1984]. For example, in a study of the use of research literature, Morehead and Rouse [1985] found that faculty members were much more definitive than Ph.D. students in specifying attributes of information that they did not want retrieved. However, there are situations where novices perform relatively better because they utilize more concrete, detailed representations [Adelson, 1984]. Nevertheless, available evidence indicates that an important attribute of expertise is the ability to select the most useful features of problems.
A Central Issue
To the extent that it is reasonable to characterize any single issue as the central issue, that issue has to be instruction or training. For any particular task, job, or profession, what mental models should humans have and how should these models be imparted? This question is of sufficient theoretical and practical importance to warrant a much more detailed treatment than accorded to the other salient issues considered in this section.
INSTRUCTIONAL ISSUES
The purpose of instruction is to provide the learner with necessary knowledge and skills, as well as improve confidence, attitude, etc. For instruction related to any given system, a subset of the necessary knowledge and skills relates to the ability to describe purpose and form, explain functions and observed states, and predict future states. Therefore, one of the purposes of instruction is to provide necessary mental models.
While this may seem, at least initially, straightforward, it is a very difficult issue. The basic questions are: For a given system, what do the humans involved with that system need to be able to do, and what knowledge is necessary for them to develop and maintain this repertoire of skills? An important related question is: What is the most appropriate form for this knowledge?
Within this section, these questions are considered in terms of the types of knowledge included within the proposed definition of mental models. For the most part, this discussion emphasizes the impacts of particular types of knowledge rather than the more global concepts of mental models. This level of specificity serves to emphasize the potential utility of many of the results cited.*
Knowledge of Theories and Principles
When considering the questions noted above, a fairly common assertion is that humans (particularly operators and maintainers) need to understand thoroughly the fundamental principles upon which the design and operation of the system of interest is based. The "principles" of concern usually include fundamentals of thermodynamics, heat transfer, fluid mechanics, solid mechanics, dynamics, electricity, and perhaps mathematics.
Many technical training programs place heavy emphasis on these types of principle. Unfortunately, there is little if any evidence that this emphasis results in better and more useful mental models. In the domain of process control, a variety of independent studies have shown that explicit training in knowledge of theories, fundamentals, or principles did not enhance performance, and sometimes actually degraded performance [Crossman and Cooke, 1962; Kragt and Landeweerd, 1974; Brigham and Laios, 1975; Shepherd, et al., 1977; Morris and Rouse, 1985].

*The need for this level of specificity also serves to highlight the fact that expressing results solely in terms of global and somewhat vague concepts tends to dissipate any impact these results might potentially have.
It has also been found that scores on tests of fundamental understanding did not correlate significantly with process control performance [Surgenor and McGeachy, 1983; Morris and Rouse, 1985].
Similar results have been found in the domain of electronics troubleshooting. Schorgmayer and Swanson [1975] determined that an account of system functioning did not enhance performance relative to procedural assistance.
Williams and Whitmore [1959] found that knowledge of theory was greatest and troubleshooting performance poorest immediately following training; the opposite conclusions were reached when the same subjects were tested three years later. Foley [1977] reviewed seven studies of troubleshooting, including that of Williams and Whitmore, and concluded that performance on tests of theory and job knowledge did not correlate with actual job performance.
Results in the domain of mathematical problem solving are also similar. Two studies compared training that emphasized general understanding of mathematical principles to training that stressed calculational techniques [Mayer and Greeno, 1972; Mayer, et al., 1977]. For both studies, it was found that general understanding was better for answering questions about mathematics, while knowledge of calculational techniques was better for actually solving problems.
A very consistent picture emerges from the above studies of process control, electronics troubleshooting, and mathematical problem solving. While the theories, fundamentals, and principles were certainly relevant to the systems and tasks investigated, this knowledge did not have observable effects on the performance of the operators, maintainers, and problem solvers studied. It seems reasonable to assert that theoretically-oriented training increased knowledge about the system and task, but the form and/or guidance in use of this knowledge were not sufficient to improve performance and, in some instances, were such that performance was degraded.
Related to this issue is the research of Eylon and Reif [1984] who studied the effects of forms of knowledge organization on college-level physics problem solving. They found that hierarchical organizations had positive effects, particularly for the better students. They conclude that the organization of knowledge for instruction is as important as the content of instruction.
Guidance and Cueing
Guidance in the use of knowledge can occur in several ways. Many of the studies noted above provided trainees with explicit procedures for performing their tasks. In some cases, the comparison was procedures vs. principles; in other cases, training via procedures served as more of a control group. In general, procedures tended to be at least as useful as principles, and at least as useful as having both procedures and principles.
Procedures represent an extreme form of converting general principles into operationally-useful guidance. A less extreme form of guidance involves simply informing trainees of how and when the knowledge gained during training should be used, without telling them exactly what they should do.
A variety of studies in problem solving [Reed, et al., 1974; Weisberg, et al., 1978], word puzzles [Perfetto, et al., 1983], and mathematics [Mayer, et al., 1977] have considered the effect of this type of "cueing" and found it to be necessary if clues, analogies, and general principles are to be transferred successfully to task performance subsequent to training.
It is not always possible for guidance to be explicit. If systems are very complex and/or completely unanticipated situations may arise, it is likely to be impossible to synthesize procedures that can be validated in the sense of assuring success.
Similarly, it may be impossible to inform trainees of how and when knowledge will be applicable (i.e., "cueing" may not be viable). Nevertheless, one hopes that the knowledge gained during training will be called upon when unusual situations arise.
One approach to enhancing this possibility is to provide training in a variety of contexts (e.g., for more than one system, one or more of which may be unfamiliar). The use of unfamiliar contexts can "force" trainees to utilize general principles such as analogies because that may be the only way in which they can succeed.
Rouse and Hunt [1984] have investigated various aspects of this concept as applied to troubleshooting training. While they found that the use of unfamiliar contexts is somewhat more subtle and complicated than originally anticipated, the concept was sufficiently viable and useful to become an important element in training programs in the aviation and marine domains [Rouse, 1982-83].
Brooke and his colleagues [1980] have also investigated a variation of this concept and found that training in multiple contexts improved transfer of problem solving skills to new contexts.
These results serve to emphasize the possibility that human performance within a particular system context may be significantly affected by knowledge of other contexts.
Thus, not only are tasks within a particular system likely to be addressed via multiple mental models of that system, but task performance may also be influenced by mental models of other systems and classes of systems. This leads to the issue of prior knowledge.
Effects of Prior Knowledge
With the possible exception of very young children, instruction never involves the filling of a "tabula rasa." Trainees always approach an instructional experience with prior knowledge and skills. In particular, trainees always have a variety of a priori mental models which provide both opportunities and difficulties from an instructional point of view.
The availability of prior knowledge presents an opportunity in that it can serve as a basis for gaining new knowledge. In fact, it can be argued that prior knowledge will almost certainly affect learning [Glaser, 1984].
For example, in the domain of human-computer interaction, Carroll and Thomas [1982] argue that new "cognitive structures" are developed by using metaphors to existing cognitive structures. Norman and his colleagues [1976] offer a similar assertion with regard to the design of instructional programs. Rasmussen [1979, 1983] discusses implications of alternative mental models for display design and suggests that analogies offer an important mechanism for matching displays to humans' models.
With regard to analogies, Gentner and Gentner [1983] found that the usefulness of analogies in solving electricity problems was greatest when people used their own a priori analogies rather than those that they had only recently learned as part of the instructions associated with the equipment.
While existing "cognitive structures" offer a foundation on which to build, they can also be an impediment. Prior knowledge that is incorrect will not necessarily be discarded once the correct knowledge is provided. Instead, an amalgam of the correct and incorrect may be retained, especially if the incorrect aspects are such that everyday life experiences are unlikely to yield any inconsistencies. This phenomenon has emerged several times in studies of physics problem solving.
As discussed earlier, DiSessa [1982] and McCloskey [1983] both found that students' naive, "pre-Newtonian" views of motion persisted even after college-level instruction had provided them with more appropriate formulations. Similarly, Clement [1983] found that the "motion implies force" misconception was retained after college-level instruction had provided the appropriate conceptualization. The implication of these findings is that instruction must remediate a priori misconceptions as well as provide correct knowledge.
Summary

Summarizing the evidence presented in this section on instructional issues, the following assertions seem reasonable*:

1. Knowledge of theories, fundamentals, and principles does not necessarily enhance task performance; measures of the extent of such knowledge are not good predictors of task performance.

2. The operational utility of this type of knowledge is highly dependent on the form in which it is presented and the guidance in its use that is provided.

3. Guidance in the use of knowledge can be explicit, in terms of procedures and cueing, or implicit, by providing a range of training experiences that foster or require the use of knowledge.

4. A priori knowledge can serve as a powerful basis for gaining new knowledge or, if incorrect, an impediment to gaining correct knowledge; both cases argue for consideration of a priori knowledge in designing instructional programs.
From the perspective of mental models, the above assertions imply that the form of knowledge, guidance in use of knowledge, and prior knowledge all interact to affect the development and use of mental models.
*Morris and Rouse [1985], in a recent comprehensive review of empirical research on human performance in troubleshooting tasks, present considerable evidence for a similar set of assertions relative to training for troubleshooting tasks.
FUNDAMENTAL LIMITS

At many points throughout the discussions in this paper, various considerations have arisen that appear to pose limits to understanding the "true" nature of mental models, particularly for any specific individual and situation. In this section, the apparent characteristics of these limits are formalized and explored. The purpose of this discussion is to outline clearly what appear to be fundamental limits in the search for mental models.
One of these limits is fundamental to science in general. Scientists' conceptualizations of phenomena are almost totally dependent on their own mental models. These models dictate what observations are made and how the resulting data are organized. The ultimate subjectivity and arbitrariness of this process has long been recognized [James, 1909; Whitehead, 1925]. However, only recently has it come to be viewed as a predominant aspect of the social and psychological processes within science [Kuhn, 1962; Zukav, 1979]. This subjectivity and arbitrariness is particularly problematic in the behavioral sciences.
As Ziman [1968] has emphasized, controversy and uncertainty seem to be endemic in psychology, where many of the basic phenomena are familiar to both researchers and laymen. These problems are aggravated in the study of mental models because, in effect, such studies amount to one or more humans developing models of other humans' models of the external world. This dilemma is fundamental in that it cannot be resolved.
However, the effects of this problem can perhaps be lessened if researchers are aware of the biases that they bring to a study, and that these biases may not be indicative of the tendencies of the population of subjects being studied. Therefore, for example, it is important for scientists and engineers to avoid the presumption that operators, maintainers, and managers approach their systems from a scientific or engineering perspective.
Beyond the limits imposed by investigators' biases, there are difficulties that preclude uncovering the "truth." Several of these difficulties are discussed, or at least alluded to, in earlier sections of this paper. The discussion of identification methods considered several important limitations.
It was noted that empirical approaches are limited by the fact that behavioral effects of access and manipulation of mental models may possibly be confounded with perception and response execution. Analytical approaches that consider the possibility of other than perfect mental models must choose among an infinity of alternative imperfect models. In an attempt to generalize across domains, it was suggested that the specificity and perhaps the form of conceptualizations of mental models are limited by the location of a domain along two dimensions: 1) nature of model manipulation, ranging from implicit to explicit, and 2) level of behavioral discretion, ranging from none to full.
This two-dimensional characterization of differences among domains appears to have clear implications for the potential usefulness of alternative identification methods. Namely, inferential methods seem to work best when there is little behavioral discretion, while verbalization methods appear to be most successful when explicit model manipulation is inherent to the task of interest.
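As a purely illustrative paraphrase of this rule of thumb (not a procedure proposed in the paper), the two dimensions can be treated as coordinates locating a domain, with a suggested identification method read off from its position; the function name, the 0-to-1 scaling, and the 0.5 cutoffs below are all hypothetical:

```python
# Hypothetical sketch of the two-dimensional framework described above.
# manipulation: 0.0 = fully implicit model manipulation, 1.0 = fully explicit.
# discretion:   0.0 = no behavioral discretion, 1.0 = full discretion.
def suggest_identification_method(manipulation: float, discretion: float) -> str:
    if discretion < 0.5:
        # Little discretion: behavior is tightly constrained, so inferring
        # the model from performance data is most promising.
        return "inferential"
    if manipulation > 0.5:
        # Explicit manipulation: people can plausibly verbalize the model.
        return "verbalization"
    return "no clearly preferred method"

# Example: a manual control task (implicit manipulation, little discretion).
print(suggest_identification_method(manipulation=0.2, discretion=0.1))
```

The thresholds are arbitrary placeholders; the point is only that the choice of method depends on where a domain sits along the two dimensions, not on the domain's subject matter.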
If the above limitations are, in fact, fundamental, then the search for mental models will never completely eliminate uncertainty; the black box will never be completely transparent.
This type of problem has been addressed by particle physicists, who ultimately accepted this inherent limitation in terms of Heisenberg's uncertainty principle [Heisenberg, 1958; Zukav, 1979].
The basic idea is that one cannot measure perfectly both the position and momentum (the product of mass and velocity) of a particle, because the process of measuring position produces uncertainty in momentum and vice versa. Heisenberg [1958] generalizes this notion by stating, "What we observe is not nature itself but nature exposed to our method of questioning."
The general perspective provided by this statement, as well as the specifics of the uncertainty principle, appear to be quite relevant to research on mental models.
Much of the literature implies that mental models are static, unitary entities that can be identified if appropriate methods are employed. However, as Norman [1983] notes, this view is much too simplistic. Available evidence suggests that mental models are more likely to be dynamic entities that can have a multiplicity of forms.
If, at least for the sake of argument, one asserts that mental models are analogous to physicists' elementary particles, which are dynamic entities that can be in multiple states, then it is quite straightforward to map the physicists' uncertainty principle to an analogous principle for mental models.
The position of a particle is analogous to the current state of a mental model (i.e., what it is now), and the velocity (or momentum) of a particle is analogous to the changes occurring in a mental model (i.e., what it is becoming). Uncertainty is fundamental in the following ways.
In order to measure perfectly what a mental model is now, one inevitably intrudes on what the model is becoming. Less intrusive measurement methods reduce the effects on future model states, but increase the uncertainty about the current state. Similarly, if one attempts to measure perfectly what a model is becoming, then in attempting to measure these changes, one introduces uncertainty about the instantaneous state of the model (i.e., what it is now) relative to which these changes are being measured.
Heisenberg's principle specifies that the product of the uncertainties in position and momentum is constant (i.e., Heisenberg's constant!). The psychological analog of this constant is not apparent. In fact, it seems reasonable to conjecture that the magnitude of this constant might be domain dependent, in the sense that the dimensions in Figure 2 may affect the level of inherent uncertainty. Despite the intuitive appeal of such a formulation, it must be remembered, however, that it is totally a conjecture.
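The parallel can be written out explicitly. The first relation is the standard modern statement of Heisenberg's principle (a lower bound rather than a strict constant); the second is the conjectured analog, where k is a hypothetical, possibly domain-dependent constant that is not known to exist:

```latex
% Heisenberg's uncertainty relation for position x and momentum p:
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}

% Conjectured analog for mental models: S is the current state of the
% model ("what it is now") and \dot{S} its rate of change ("what it is
% becoming"); k is a hypothetical, possibly domain-dependent constant.
\Delta S \,\Delta \dot{S} \;\ge\; k
```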
This raises the question of how this line of reasoning might move beyond pure conjecture. Certainly, more thought is needed, and a mathematical/logical formulation might be possible. While progress might be made in this way, it is also possible that a limit such as that of Godel may be reached, where "truth" cannot be proven and must simply be accepted [Godel, 1962; Guillen, 1983]. Obviously, the possibility of such "meta" limits is yet another conjecture at this point in time.
This section has outlined several fundamental limits in the search for mental models, as well as several conjectures regarding limits to "knowing what can be known." The intent of this discussion was to illustrate why pursuit of "truth" may be inherently elusive, particularly when studying mental models. Given these limits, dogged pursuit of "truth" is unreasonable. Instead, the emphasis should be on the utility of research on mental models for system design, instruction, etc. This pragmatic view of science is hardly new [Peirce, 1878; James, 1907]; however, it often seems to be forgotten.
CONCLUSIONS

This paper has explored a wide range of issues associated with research on mental models. At this point in time, this area of study is rife with terminological inconsistencies and a preponderance of conjectures rather than data. This situation is, to a great extent, due to the fact that a variety of subdisciplines have adopted the concept of mental models and proceeded to develop their own terminology and methodology, independent of past or current work in this area in other subdisciplines. Nowhere is this situation more evident than in the important matter of definitions.
In many cases, the phrase "mental models" appears to be simply a substitute for "knowledge" in general. Such a substitution is not particularly useful. This paper has suggested a more concise working definition, based on a functional perspective: mental models are the mechanisms whereby humans generate descriptions of system purpose and form, explanations of system functioning and observed system states, and predictions of future system states. Much of the discussion in this paper is premised on this definition.
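To make the functional definition concrete, its three functions can be sketched as an interface; this is a minimal illustration only, and every name in it (MentalModel, describe, explain, predict) is hypothetical rather than taken from the report:

```python
from dataclasses import dataclass

@dataclass
class MentalModel:
    """A mechanism generating descriptions, explanations, and predictions."""
    purpose: str   # what the system is for
    form: str      # how the system is structured

    def describe(self) -> str:
        # Description of system purpose and form.
        return f"purpose: {self.purpose}; form: {self.form}"

    def explain(self, observed_state: str) -> str:
        # Explanation of system functioning and an observed state.
        return f"state '{observed_state}' is produced by the system's {self.form}"

    def predict(self, current_state: str) -> str:
        # Prediction of a future system state (trivial placeholder logic).
        return f"from '{current_state}', expect a similar next state"

model = MentalModel(purpose="regulate temperature", form="feedback loop")
print(model.describe())
```

The point of the sketch is the interface, not the trivial bodies: any candidate "mental model" in this functional sense must support all three operations.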
A portion of this discussion has focused on limits in identifying or capturing mental models. Some of the difficulties in this area are due to the likelihood that mental models are dynamic entities that can have a multiplicity of forms, even for a particular individual in a specific situation.
Beyond this issue, other types of limits may be more fundamental. The biases imposed by scientists' own mental models and the possibility of an uncertainty principle have been suggested as fundamental in nature.
All of the limits outlined in this paper have practical implications. For example, the design of "expert systems" is premised on humans' abilities to verbalize their models; in light of the above discussion, this ability would appear to be more limited than is commonly assumed.
Despite the fundamental nature of some of the limits outlined in this paper, the issues underlying the mental models construct are important and deserve substantial attention. What is needed, however, is to move away from the perception that "truth" is being sought and, instead, emphasize the utility of researching these issues to advance the state of understanding of learning, problem solving, etc. This shift should help to eliminate many minor issues, most of which appear to emanate from a rather zealous tendency to coin new terminology.
By purging the debate of these minor issues, research should be able to focus on the major, substantive issues, including accessibility, form and content of representation, nature of expertise, cue utilization, and, of most importance, instructional issues.
The literature is replete with insightful thinking on these issues, and a variety of interesting and potentially important hypotheses have been suggested. Unfortunately, however, there is a paucity of solid empirical data available to support or refute these hypotheses. At the moment, the research community's ability to generate conjectures and publish them seems to be much greater than its ability to test them empirically. What is needed are innovative (and validated) empirical approaches to employing the mental models construct usefully, most likely involving a mix of several traditional experimental methods with newer methods such as computational modeling and linguistic analysis.
To conclude, the search for mental models is potentially of great importance: any success that is achieved is likely to have substantial impacts on system design, training, etc. However, there are fundamental limits on what can be clearly seen on looking into the black box. It appears that these limits will have to be accepted as precluding the uncovering of "truth." Fortunately, truth may not be necessary. If a pragmatic perspective is adopted, research on mental models can avoid the ephemeral issues and concentrate on providing rigorously tested answers to a variety of far-reaching and important questions.
ACKNOWLEDGEMENTS

An abbreviated version of this paper was presented at the Twentieth Annual Conference on Manual Control, NASA Ames Research Center, June 1984. This research was partially supported by the Office of Naval Research under Work Unit NR 154-491 (Contract N00014-82-K-0487). Drs. Henry M. Halff and Susan K. Chipman have served as Contract Monitors.
REFERENCES

1. Adelson, B. When novices surpass experts: The difficulty of a task may increase with expertise. Journal of Experimental Psychology: Learning, Memory, and Cognition, 1984, 10.

2. Alexander, C. Notes on the synthesis of form. Cambridge, MA: Harvard University Press, 1964.

3. Anderson, J.A. Cognitive and psychological computation with neural models. IEEE Transactions on Systems, Man, and Cybernetics, 1983, SMC-13, 799-815.

4. Bainbridge, L. Verbal reports as evidence of the process operator's knowledge. International Journal of Man-Machine Studies, 1979, 11, 411-436.

5. Baron, S. and Berliner, J.E. The effects of deviate internal representations in the optimal model of the human operator. Proceedings of the thirteenth annual conference on manual control, MIT, June 1977.

6. Brigham, F. and Laios, L. Operator performance in the control of a laboratory process plant. Ergonomics, 1975, 18, 53-66.

7. Brooke, J.B., Duncan, K.D., and Cooper, C. Interactive instruction in solving fault-finding problems: An experimental study. International Journal of Man-Machine Studies, 1980, 12, 217-227.
8. Brown, J.S. and de Kleer, J. Towards a theory of qualitative reasoning about mechanisms and its role in troubleshooting. In J. Rasmussen and W.B. Rouse (Eds.), Human detection and diagnosis of system failures. New York: Plenum Press, 1981.

9. Caglayan, A.K. and Baron, S. On the internal target model in a tracking task. Proceedings of the seventeenth annual conference on manual control, University of California, Los Angeles, June 1981.
10. Carroll, J.M. and Thomas, J.C. Metaphor and the cognitive representation of computing systems. IEEE Transactions on Systems, Man, and Cybernetics, 1982, SMC-12, 107-116.

11. Chase, W.G. and Simon, H.A. The mind's eye in chess. In W.G. Chase (Ed.), Visual information processing. New York: Academic Press, 1973.

12. Chi, M.T.H. and Glaser, R. Problem solving abilities. In R. Sternberg (Ed.), Human abilities: An information processing approach. San Francisco: W.H. Freeman, 1985.

13. Cohen, B. and Murphy, G.L. Models of concepts. Cognitive Science, 1984, 8, 27-58.

14. Cohen, H.S. and Ferrell, W.R. Human operator decision-making in manual control. IEEE Transactions on Man-Machine Systems, 1969, MMS-10, 41-47.

15. Conant, R.C. and Ashby, W.R. Every good regulator of a system must be a model of that system. International Journal of Systems Science, 1970, 1, 89-97.
16. Clement, J. A conceptual model discussed by Galileo and used intuitively by physics students. In D. Gentner and A.L. Stevens (Eds.), Mental models. Hillsdale, NJ: Erlbaum, 1983, 325-340.

17. Crossman, E.R.F.W. and Cooke, J.E. Manual control of slow-response systems. Paper presented at the International Congress on Human Factors in Electronics, Long Beach, California, 1962. Reprinted in E. Edwards and F.P. Lees (Eds.), The human operator in process control. London: Taylor and Francis, 1974.

18. de Kleer, J. and Brown, J.S. Assumptions and ambiguities in mechanistic mental models. In D. Gentner and A.L. Stevens (Eds.), Mental models. Hillsdale, NJ: Erlbaum, 1983, 155-190.

19. Dreyfus, H.L. and Dreyfus, S.E. The scope, limits, and training implications of three models of aircraft pilot emergency response behavior (Rept. ORC 79-2). Berkeley, CA: University of California, Operations Research Center, 1979.
20. Ephrath, A.R. and Young, L.R. Monitoring vs. man-in-the-loop detection of aircraft control failures. In J. Rasmussen and W.B. Rouse (Eds.), Human detection and diagnosis of system failures. New York: Plenum Press, 1981.

21. Eylon, B-S. and Reif, F. Effects of knowledge organization on task performance. Cognition and Instruction, 1984, 1, 5-44.

22. Ericsson, K.A. and Simon, H.A. Verbal reports as data. Psychological Review, 1980, 87, 215-251.

23. Ericsson, K.A. and Simon, H.A. Protocol analysis: Verbal reports as data. Cambridge, MA: MIT Press, 1984.

24. Falzon, P. Display structures: Compatibility with the operators' mental representation and reasoning processes. Proceedings of the second european annual conference on human decision making and manual control, University of Bonn, F.R. Germany, June 1982.

25. Foley, J.P., Jr. Performance measurement of maintenance (Tech. Rept. AFHRL-TR-77-76). Wright-Patterson Air Force Base, OH: Air Force Human Resources Laboratory, December 1977.

26. Gentner, D. and Gentner, D.R. Flowing waters or teeming crowds: Mental models of electricity. In D. Gentner and A.L. Stevens (Eds.), Mental models. Hillsdale, NJ: Erlbaum, 1983, 99-129.

27. Gentner, D. and Stevens, A.L. (Eds.) Mental models. Hillsdale, NJ: Erlbaum, 1983.

28. Glaser, R. Education and thinking: The role of knowledge. American Psychologist, 1984, 39, 93-104.

29. Godel, K. On formally undecidable propositions. New York: Basic Books, 1962.

30. Gould, P. and White, R. Mental maps. Middlesex, England: Penguin Books, 1974.
31. Greeno, J.G. Conceptual entities. In D. Gentner and A.L. Stevens (Eds.), Mental models. Hillsdale, NJ: Erlbaum, 1983, 227-252.

32. Greeno, J.G. and Simon, H.A. Problem solving and reasoning. In R.C. Atkinson, R. Herrnstein, G. Lindzey, and R.D. Luce (Eds.), Stevens' Handbook of Experimental Psychology (Revised Edition). New York: John Wiley, 1984.

33. Guillen, M. Bridges to infinity: The human side of mathematics. Los Angeles: Tarcher, 1983.

34. Hammond, K.R., McClelland, G.H., and Mumpower, J. Human judgment and decision making. New York: Hemisphere Publishing Corp., 1980.

35. Harvard Business Review. On human relations. New York: Harper and Row, 1980.

36. Heisenberg, W. Physics and philosophy. New York: Harper and Row, 1958.

37. Hutchins, E. Understanding Micronesian navigation. In D. Gentner and A.L. Stevens (Eds.), Mental models. Hillsdale, NJ: Erlbaum, 1983, 191-225.

38. Jagacinski, R.J. and Miller, R.A. Describing the human operator's internal model of a dynamic system. Human Factors, 1978, 20, 425-433.

39. James, W. Pragmatism: A new name for some old ways of thinking. New York: Longmans, Green, 1907.

40. James, W. The meaning of truth: A sequel to pragmatism. New York: Longmans, Green, 1909.

41. Johannsen, G. and Govindaraj, T. Optimal control model predictions of system performance and attention allocation and their experimental validation in a display design study. IEEE Transactions on Systems, Man, and Cybernetics, 1980, SMC-10, 249-261.
42. Kessel, C.J. and Wickens, C.D. The transfer of failure-detection skills between monitoring and controlling dynamic systems. Human Factors, 1982, 24, 49-60.

43. Kleinman, D.L., Baron, S., and Levison, W.H. A control theoretic approach to manned-vehicle systems analysis. IEEE Transactions on Automatic Control, 1971, AC-16, 824-832.

44. Knaeuper, A. and Rouse, W.B. A rule-based model of human problem solving behavior in dynamic environments. IEEE Transactions on Systems, Man, and Cybernetics, 1985, SMC-15.

45. Kragt, H. and Landeweerd, J.A. Mental skills in process control. In E. Edwards and F.P. Lees (Eds.), The human operator in process control. London: Taylor and Francis, 1974.

46. Kuhn, T.S. The structure of scientific revolutions. Chicago: University of Chicago Press, 1962.

47. Larkin, J.H. The role of problem representation in physics. In D. Gentner and A.L. Stevens (Eds.), Mental models. Hillsdale, NJ: Erlbaum, 1983, 75-98.

48. Lehner, P.E., Rook, F.W., and Adelman, L. Mental models and cooperative problem solving with expert systems (Tech. Rept. 84-16). McLean, VA: PAR Technology Corporation, September 1984.

49. McCloskey, M. Naive theories of motion. In D. Gentner and A.L. Stevens (Eds.), Mental models. Hillsdale, NJ: Erlbaum, 1983, 299-324.

50. Mayer, R.E. and Greeno, J.G. Structural differences between learning outcomes produced by different instructional methods. Journal of Educational Psychology, 1972, 63, 165-173.

51. Mayer, R.E., Stiehl, C., and Greeno, J.G. Acquisition of understanding and skill in relation to subjects' preparation and meaningfulness of instruction. Journal of Educational Psychology, 1975, 67, 331-350.
52. Morehead, D.R. and Rouse, W.B. Online assessment of the value of information for searchers of a bibliographic data base. Information Processing and Management, 1985, 21.

53. Morris, N.M. and Rouse, W.B. The effects of type of knowledge upon human problem solving in a process control task. IEEE Transactions on Systems, Man, and Cybernetics, 1985, SMC-15.

54. Morris, N.M. and Rouse, W.B. Review and evaluation of empirical research in troubleshooting. Human Factors, 1985, 27.

55. Newell, A. and Simon, H.A. Human problem solving. Englewood Cliffs, NJ: Prentice-Hall, 1972.

56. Nisbett, R.E. and Wilson, T.D. Telling more than we can know: Verbal reports on mental processes. Psychological Review, 1977, 84, 231-259.

57. Norman, D.A. Some observations on mental models. In D. Gentner and A.L. Stevens (Eds.), Mental models. Hillsdale, NJ: Erlbaum, 1983, 7-14.

58. Norman, D.A., Gentner, D.R., and Stevens, A.L. Comments on learning schemata and memory representation. In D. Klahr (Ed.), Cognition and instruction. Hillsdale, NJ: Erlbaum, 1976.

59. Peirce, C.S. The fixation of belief. Popular Science Monthly, 1877, 12, 1-15.

60. Peirce, C.S. How to make our ideas clear. Popular Science Monthly, 1878, 12, 286-302.

61. Perfetto, G.A., Bransford, J.D., and Franks, J.J. Constraints on access in a problem solving context. Memory and Cognition, 1983, 11, 24-31.

62. Rasmussen, J. On the structure of knowledge: A morphology of mental models in a man-machine system context (Tech. Rept. Riso-M-2192). Roskilde, Denmark: Riso National Laboratory, November 1979.
63. Rasmussen, J. Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Transactions on Systems, Man, and Cybernetics, 1983, SMC-13, 257-266.

64. Rasmussen, J. and Jensen, A. Mental procedures in real life tasks: A case study of electronic troubleshooting. Ergonomics, 1974, 17, 293-307.

65. Rasmussen, J. and Rouse, W.B. (Eds.) Human detection and diagnosis of system failures. New York: Plenum Press, 1981.

66. Reed, S.K., Ernst, G.W., and Banerji, R. The role of analogy in transfer between similar problem states. Cognitive Psychology, 1974, 6, 436-450.

67. Rouse, W.B. A theory of human decision making in stochastic estimation tasks. IEEE Transactions on Systems, Man, and Cybernetics, 1977, SMC-7, 274-283.

68. Rouse, W.B. On models and modelers: N cultures. IEEE Transactions on Systems, Man, and Cybernetics, 1982, SMC-12, 605-610.

69. Rouse, W.B. A mixed-fidelity approach to technical training. Journal of Educational Technology Systems, 1982-83, 11.

70. Rouse, W.B. and Hunt, R.M. Human problem solving in fault diagnosis tasks. In W.B. Rouse (Ed.), Advances in man-machine systems research (Vol. 1). Greenwich, CT: JAI Press, 1984, 195-222.

71. Schorgmayer, H. and Swanson, R.A. The effect of alternative training methods on the troubleshooting performance of maintenance technicians. Bowling Green, KY: Bowling Green State University, August 1975.

72. Shepherd, A., Marshall, E.C., Turner, A., and Duncan, K.D. Diagnosis of plant failures from a control panel: A comparison of three training methods. Ergonomics, 1977, 20, 347-361.
73. Sheridan, T.B. On how often the supervisor should sample. IEEE Transactions on Systems Science and Cybernetics, 1970, SSC-6, 140-145.

74. Sheridan, T.B. and Ferrell, W.R. Man-machine systems. Cambridge, MA: MIT Press, 1974.

75. Sheridan, T.B. and Johannsen, G. (Eds.) Monitoring behavior and supervisory control. New York: Plenum Press, 1976.

76. Silverman, B.G. Analogy in systems management: A theoretical inquiry. IEEE Transactions on Systems, Man, and Cybernetics, 1983, SMC-13, 1049-1075.

77. Skinner, B.F. The behavior of organisms. New York: Appleton-Century-Crofts, 1938.

78. Smallwood, R.D. Internal models and the human instrument monitor. IEEE Transactions on Human Factors in Electronics, 1967, HFE-8, 181-187.

79. Sternberg, R.J. Component processes in analogical reasoning. Psychological Review, 1977, 84, 353-378.

80. Surgenor, B.W. and McGeachy, J.D. Validation for performance measurement in the task of fault management. Proceedings of the 27th Annual Meeting of the Human Factors Society, Norfolk, Virginia, October 1983.

81. van Bussel, F.J.J. Human prediction of time series. IEEE Transactions on Systems, Man, and Cybernetics, 1980, SMC-10, 410-414.

82. van Heusden, A.R. Human prediction of third-order autoregressive time series. IEEE Transactions on Systems, Man, and Cybernetics, 1980, SMC-10, 38.

83. Veldhuyzen, W. and Stassen, H.G. The internal model concept: An application to modeling human control of large ships. Human Factors, 1977, 19, 367-380.

84. Watson, J.B. Behavior. New York: Henry Holt, 1914.
85. Weisberg, R., DiCamillo, M., and Phillips, D. Transferring old associations to new situations: A nonautomatic process. Journal of Verbal Learning and Verbal Behavior, 1978, 17, 219-228.

86. Whitehead, A.N. Science and the modern world. London: Macmillan, 1925.

87. Whitfield, D. and Jackson, A. The air traffic controller's "picture" as an example of a mental model. In G. Johannsen and J.E. Rijnsdorp (Eds.), Analysis, design, and evaluation of man-machine systems. London: Pergamon Press, 1983.

88. Wickens, C.D. Engineering psychology and human performance. Columbus, OH: C.E. Merrill Publishing Co., 1984.

89. Williams, M.D., Hollan, J.D., and Stevens, A.L. Human reasoning about a simple physical system. In D. Gentner and A.L. Stevens (Eds.), Mental models. Hillsdale, NJ: Erlbaum, 1983, 131-153.

90. Williams, W.L., Jr. and Whitmore, P.G., Jr. The development and use of a performance test as a basis for comparing technicians with and without field experience: The Nike Ajax maintenance technician (Tech. Rept. 52). Washington, DC: George Washington University, Human Resources Research Office, 1959.

91. Wiser, M. and Carey, S. When heat and temperature were one. In D. Gentner and A.L. Stevens (Eds.), Mental models. Hillsdale, NJ: Erlbaum, 1983, 267-297.

92. Young, R.M. Surrogates and mappings: Two kinds of conceptual models for interactive devices. In D. Gentner and A.L. Stevens (Eds.), Mental models. Hillsdale, NJ: Erlbaum, 1983, 35-52.

93. Ziman, J. Public knowledge: The social dimension of science. Cambridge, England: Cambridge University Press, 1968.

94. Zukav, G. The dancing wu li masters: An overview of the new physics. New York: William Morrow, 1979.
Tuesday,
June
11,
1985
5:39
am
Rouse
&
Morris,
"Limits
in
the
Search
for
Mental
Models"
Personnel
Analysis
Division
Special
Assistant
for Projects
AF/MPXA
OASN(M&RA)
5C360,
The
Pentagon
5D800,
The
Pentagon
Washington,
DC
20330
Washington,
DC
20350
Air
Force
Human
Resources
Lab
Dr.
Alan
Baddeley
AFHRL/MPD
Medical
Research
Council
Brooks
AFB,
TX
78235
Applied Psychology
Unit
15
Chaucer
Road
Air
Force
Office
Cambridge
CB2
2EF
of
Scientific
Research
ENGLAND
Life
Sciences
Directorate
Bolling
Air
Force
Base
Dr.
Patricia
Baggett
Washington,
DC
20332
University
of
Colorado
Department
of
Psychology
Dr.
Robert
Ahlers
Box
345
Code
N711
Boulder,
CO
80309
Human
Factors
Laboratory
NAVTRAEQUIPCEN
Dr.
Eva
L.
Baker,
Director
Orlando,
FL
32813
UCLA
Center
for
the
Study
of
Evaluation
Dr.
Ed
Aiken
145
Moore
Hall
Navy
Personnel
R&D
Center
University
of
California
San
Diego,
CA
92152
Los
Angeles,
CA
90024
Dr.
William
E.
Alley
Dr.
Isaac
Bejar
AFHRL/MOT
Educational
Testing
Service
Brooks
AFB,
TX
78235
Princeton,
NJ
08450
Dr.
Earl
A.
Alluisi
Dr.
John
Black
HQ,
AFHRL
(AFSC)
Yale
University
Brooks
AFB,
TX
78235
Box
11A,
Yale
Station
New
Haven,
CT
06520
Dr.
John
R.
Anderson
Department
of
Psychology
Code
N711
Carnegie-Mellon
University
Attn:
Arthur
S.
Blaiwes
Pittsburgh,
PA
15213
Naval
Training
Equipment
Center
Orlando,
FL
32813
Dr.
Phipps
Arabie
University
of
Illinois
Dr.
R.
Darrell
Bock
Department
of
Psychology
University
of
Chicago
603
E.
Daniel
St.
Department
of
Education
Champaign,
IL
61820
Chicago,
IL
60637
Technical
Director
Dr.
Jeff
Bonar
Army
Research
Institute
for
the
Learning
R&D
Center
Beha'ioral
and
Social
Sciences
University
of
Pittsburgh
5001
Eisenhower
Avenue
Pittsburgh,
PA
15260
Alexandria,
VA
22333
Dr.
Nick
Bond
Office
of
Naval
Research
Liaison
Office,
Far
East
APO
San
Francisco,
CA
96503
Dr.
Lyle
Bourne
Dr.
Susan
Carey
Department
of
Psychology
Harvard
Graduate
School
of
University
of
Colorado
Education
Boulder,
CO
80309
337
Gutman
Library
Appian
Way
Dr.
Gordon
H.
Bower
Cambridge,
MA 02138
Department
of
Psychology
Stanford
University
Dr.
Pat
Carpenter
Stanford,
CA
94306
Carnegie-Mellon
University
Department
of
Psychology
Dr.
Richard
Braby
Pittsburgh,
PA
15213
NTEC
Code
10
Orlando,
FL
32751
Dr.
Robert
Carroll
NAVOP
01B7
Dr.
Robert
Breaux
Washington,
DC
20370
Code
N-095R
NAVTRAEQUIPCEN
Dr.
Fred
Chang
Orlando,
FL
32813
Navy
Personnel
R&D
Center
Code
51
Dr.
Ann
Brown
San
Diego,
CA
92152
Center
for
the
Study
of
Reading
University
of
Illinois
Dr.
Davida Charney
51
Gerty
Drive
Department
of
Psychology
Champaign,
IL
61280
Carnegie-Mellon
University
Schenley
Park
Dr.
John
S.
Brown
Pittsburgh,
PA
15213
XEROX
Palo Alto
Research
Center
Dr.
Eugene
Charniak
3333
Coyote
Road
Brown
University
Palo
Alto,
CA
94304
Computer
Science
Department
Providence,
RI
02912
Dr.
Bruce
Buchanan
Computer
Science
Department
Dr.
Michelene
Chi
Stanford
University
Learning
R & D
Center
Stanford,
CA
94305
University
of
Pittsburgh
3939
O'Hara
Street
Dr.
Patricia
A.
Butler
Pittsburgh,
PA
15213
NIE
Mail
Stop
1806
1200
19th
St.,
NW
Dr.
Susan
Chipman
Washington,
DC
20208
Code
442PT
Office
of
Naval
Research
Dr.
Jaime
Carbonell
800
N.
Quincy
St.
Carnegie-Mellon
University
Arlington,
VA
22217-5000
Department
of
Psychology
Pittsburgh,
PA
15213
Dr.
Yee-Yeen
Chu
Perceptronics,
Inc.
Mr.
James
W.
Carey
21111
Erwin
Street
Commandant
(G-PTE)
Woodland
Hills,
CA
91367-3713
U.S.
Coast
Guard
2100
Second
Street,
S.W.
Dr.
William
Clancey
Washington,
DC
20593
Computer
Science
Department
Stanford
University
Stanford,
CA
94306
Director
Dr.
Lynn
A.
Cooper
Manpower
Support
and
Learning
R&D
Center
Readiness
Program
University
of
Pittsburgh
Center
for
Naval
Analysis
3939
O'Hara
Street
2000
North
Beauregard
Street
Pittsburgh,
PA
15213
Alexandria,
VA
22311
Dr.
Lee
Cronbach
Scientific
Advisor
16
Laburnum
Road
to
the
DCNO (MPT)
Atherton,
CA
94205
Center
for
Naval
Analysis
2000
North Beauregard
Street
Dr.
Mary
Cross
Alexandria, VA 22311
Department
of
Education
Adult
Literacy
Initiative
Chief
of
Naval
Education
Room
4145
and
Training
400
Maryland
Avenue, SW
Liaison
Office
Washington,
DC
20202
Air
Force
Human
Resource
Laboratory
Operations
Training Division
CTB/McGraw-Hill
Library
Williams
AFB,
AZ
85224
2500
Garden
Road
Monterey,
CA
93940
Assistant
Chief
of
Staff
Research,
Development,
CDR
Mike
Curran
Test,
and
Evaluation
Office
of
Naval
Research
Naval
Education
and
800 N.
Quincy
St.
Training
Command
(N-5)
Code
270
NAS
Pensacola,
FL
32508
Arlington,
VA
22217-5000
Dr.
Michael
Cole
Bryan
Dallman
University
of
California
AFHRL/LRT
at
San
Diego
Lowry
AFB,
CO
80230
Laboratory
of
Comparative
Human
Cognition
DOO3A
Dr.
Charles
E.
Davis
La Jolla, CA
92093
Personnel
and
Training
Research
Office
of
Naval
Research
Dr.
Allan
M.
Collins
Code
442PT
Bolt
Beranek
&
Newman,
Inc.
800
North
Quincy
Street
50
Moulton
Street
Arlington,
VA
22217-5000
Cambridge,
MA
02138
Defense
Technical
Dr.
Stanley
Collyer
Information
Center
Office
of
Naval
Technology
Cameron
Station,
Bldg
5
800
N.
Quincy
Street
Alexandria,
VA
22314
Arlington,
VA
22217
Attn:
TC
(12
Copies)
Dr.
Leon
Cooper
Brown
University
Dr.
Thomas
M.
Duffy
Center
for
Neural
Science
Communications
Design
Center
Providence,
RI
02912
Carnegie-Mellon
University
Schenley
Park
Pittsburgh, PA 15213
Edward
E.
Eddowes
Mr.
Wallace
Feurzeig
CNATRA
N301
Educational
Technology
Naval
Air
Station
Bolt
Beranek
&
Newman
Corpus
Christi,
TX
78419
10
Moulton
St.
Cambridge,
MA
02238
Dr.
John
Ellis
Navy
Personnel
R&D
Center
Dr.
Craig
I.
Fields
San
Diego,
CA
92152
ARPA
1400
Wilson
Blvd.
Dr.
Jeffrey
Elman
Arlington,
VA
22209
University
of
California,
San
Diego
Dr.
Gerhard
Fischer
Department
of
Linguistics,
C-008
Liebiggasse
5/3
La
Jolla,
CA
92093
A-1010
Vienna
AUSTRIA
Dr.
Richard
Elster
Deputy
Assistant
Secretary
Dr.
Linda
Flower
of
the
Navy
(Manpower)
Carnegie-Mellon
University
Washington,
DC
20350
Department
of
English
Pittsburgh,
PA
15213
Dr.
Susan
Embretson
University
of
Kansas
Dr.
Ken
Forbus
Psychology
Department Department
of
Computer
Science
Lawrence,
KS
66045
University of
Illinois
Champaign,
IL
61820
ERIC
Facility-Acquisitions
4833
Rugby
Avenue
Dr.
Carl
H.
Frederiksen
Bethesda,
MD
20014
McGill
University
3700
McTavish
Street
Dr.
K.
Anders
Ericsson
Montreal,
Quebec
H3A
1Y2
University
of
Colorado
CANADA
Department
of
Psychology
Boulder,
CO
80309
Dr.
John
R.
Frederiksen
Bolt
Beranek
&
Newman
Edward
Esty
50
Moulton
Street
Department
of
Education,
OERI
Cambridge,
MA
02138
MS
40
1200
19th
St.,
NW
Dr.
Norman
Frederiksen
Washington,
DC
20208
Educational
Testing
Service
Princeton,
NJ
08541
Dr.
Beatrice
J.
Farr
Army
Research
Institute
Dr.
Michael
Genesereth
5001
Eisenhower
Avenue
Stanford
University
Alexandria,
VA
22333
Computer
Science
Department
Stanford,
CA
94305
Dr.
Marshall
J.
Farr
2520
North
Vernon
Street
Dr.
Dedre
Gentner
Arlington,
VA
22207
University of
Illinois
Dopartment
of
Psychology
Dr.
Pat
Federico
603 E.
Daniel
St.
Code
511
Champaign,
IL
61820
NPRDC
San
Diego,
CA
92152
Dr.
Don
Gentner
Dr.
James
G.
Greeno
Center
for
Human
University
of
California
Information
Processing
Berkeley,
CA
94720
University
of
California
La
Jolla,
CA
92093
Dr.
Henry
M.
Halff
Halff
Resources,
Inc.
Dr.
Robert
Glaser
4918
33rd
Road,
North
Learning
Research
Arlington,
VA
22207
&
Development
Center
University
of
Pittsburgh
Dr.
Ronald
K.
Hambleton
3939
O'Hara
Street
Laboratory
of
Psychometric
and
Pittsburgh,
PA
15260
Evaluative
Research
University
of
Massachusetts
Dr.
Arthur
M.
Glenberg
Amherst,
MA
01003
University
of
Wisconsin
W.
J.
Brogden
Psychology
Bldg.
Mr.
William
Hartung
1202
W.
Johnson
Street
PEAM
Product
Manager
Madison,
WI
53706
Army
Research
Institute
5001
Eisenhower
Avenue
Dr.
Marvin
D.
Glock
Alexandria,
VA
22333
13
Stone
Hall
Cornell
University
Dr.
Wayne
Harvey
Ithaca,
NY
14853
SRI
International
333
Ravenswood
Ave.
Dr.
Joseph
Goguen
Room
B-S324
Computer
Science
Laboratory
Menlo
Park,
CA
94025
SRI
International
333
Ravenswood
Avenue
Prof.
John
R.
Hayes
Menlo
Park,
CA
94025
Carnegie-Mellon
University
Department
of
Psychology
Dr.
Daniel
Gopher
Schenley
Park
Industrial
Engineering
Pittsburgh,
PA
15213
&
Management
TECHNION
Dr.
Barbara
Hayes-Roth
Haifa
32000
Department
of
Computer
Science
ISRAEL
Stanford
University
Stanford,
CA
94305
Dr.
Sherrie
Gott
AFHRL/MODJ
Dr.
Frederick
Hayes-Roth
Brooks
AFB, TX
78235
Teknowledge
525
University
Ave.
Dr.
Richard
H.
Granger
Palo
Alto,
CA
94301
Department
of
Computer
Science
University
of
California,
Irvine
Dr.
Joan
I.
Heller
Irvine,
CA
92717
Graduate
Group
in
Science
and
Mathematics
Education
Dr.
Bert
Green
c/o
School
of
Education
Johns
Hopkins
University University
of
California
Department
of
Psychology
Berkeley,
CA
94720
Charles
&
34th
Street
Baltimore,
MD
21218
Dr.
Jim
Hollan
Dr.
Douglas
H.
Jones
Code
51
Advanced
Statistical
Navy
Personnel
R & D
Center
Technologies
Corporation
San
Diego,
CA
92152
10
Trafalgar
Court
Lawrenceville, NJ 08648
Dr.
John
Holland
University
of
Michigan
Dr.
Marcel
Just
2313
East
Engineering
Carnegie-Mellon
University
Ann
Arbor,
MI
48109
Department
of
Psychology
Schenley
Park
Dr.
Melisa
Holland
Pittsburgh,
PA
15213
Army
Research
Institute
for the
Behavioral
and
Social
Sciences
Dr.
Milton
S.
Katz
5001
Eisenhower
Avenue
Army
Research
Institute
Alexandria,
VA
22333
5001
Eisenhower
Avenue
Alexandria,
VA
22333
Dr.
Keith
Holyoak
University
of
Michigan
Dr.
Steven
W.
Keele
Human
Performance
Center
Department
of
Psychology
330
Packard
Road
University
of
Oregon
Ann
Arbor,
MI
48109
Eugene,
OR
97403
Dr.
Lloyd
Humphreys
Maj.
John
Keene
University
of
Illinois
ADP
Systems
Branch
Department
of
Psychology
C3
Development
Center
(D104)
603
East Daniel
Street
MCDEC
Champaign, IL
61820
Quantico,
VA
22134
Dr.
Earl
Hunt
Dr.
Scott
Kelso
Department
of
Psychology
Haskins
Laboratories,
University
of
Washington
270
Crown
Street
Seattle,
WA
98105
New
Haven,
CT
06510
Dr.
Ed
Hutchins
Dr.
Norman
J.
Kerr
Navy
Personnel
R&D
Center
Chief
of
Naval
Education
San
Diego,
CA
92152
and
Training
Code
00A2
Dr.
Dillon
Inouye
Naval
Air
Station
WICAT
Education
Institute
Pensacola,
FL
32508
Provo,
UT
84057
Dr.
Dennis
Kibler
Dr.
Zachary
Jacobson
University
of
California
Bureau
of
Management
Consulting
Department
of
Information
365
Laurier
Avenue
West
and
Computer
Science
Ottawa,
Ontario
KIA
0S5
Irvine,
CA
92717
CANADA
Dr.
David
Kieras
Dr.
Joseph
E.
Johnson
University
of
Michigan
Assistant
Dean
for
Technical
Communication
Graduate
Studies
College
of
Engineering
College
of
Science
and
Mathematics
1223
E.
Engineering
Building
University
of
South
Carolina
Ann
Arbor,
MI
48109
Columbia,
SC
29208
Dr.
Peter
Kincaid
Dr.
Pat
Langley
Training
Analysis
University
of
California
&
Evaluation
Group
Department
of Information
Department
of
the
Navy
and
Computer
Science
Orlando,
FL
32813
Irvine,
CA
92717
Dr.
Walter
Kintsch
Dr.
Marcy
Lansman
Department
of
Psychology
University of North Carolina
University
of
Colorado
The
L. L.
Thurstone Lab.
Campus
Box
345
Davie
Hall
013A
Boulder,
CO
80302
Chapel Hill, NC 27514
Dr. David Klahr
Carnegie-Mellon University
Department of Psychology
Schenley Park
Pittsburgh, PA 15213

Dr. Kathleen
Naval Health Sciences
Education and Training Command
Naval Medical Command,
National Capital Region
Bethesda, MD 20814
Dr.
Mazie
Knerr
Program
Manager
Dr.
Jill
Larkin
Training
Research
Division
Carnegie-Mellon University
HumRRO
Department
of
Psychology
1100
S.
Washington
Pittsburgh,
PA 15213
Alexandria,
VA
22314
Dr.
Alan
M.
Lesgold
Dr.
Janet
L.
Kolodner
Learning
R&D
Center
Georgia
Institute
of
Technology
University
of
Pittsburgh
School
of
Information
Pittsburgh,
PA
15260
&
Computer
Science
Atlanta,
GA
30332
Dr.
Jim
Levin
University
of
California
Dr.
Stephen
Kosslyn
Laboratory
for
Comparative
Harvard
University
Human
Cognition
1236
William
James
Hall
DO03A
33
Kirkland
St.
La
Jolla,
CA
92093
Cambridge,
MA
02138
Dr.
Michael
Levine
Dr.
Kenneth
Kotovsky
Educational
Psychology
Department
of
Psychology
210
Education
Bldg.
Community
College
of
University
of
Illinois
Allegheny
County
Champaign,
IL
61801
800
Allegheny
Avenue
Pittsburgh,
PA
15233
Dr.
Clayton
Lewis
University
of
Colorado
Dr.
Benjamin
Kuipers
Department
of
Computer
Science
MIT
Laboratory
for
Computer
Science
Campus
Box
430
545
Technology
Square
Boulder,
CO
80309
Cambridge,
MA
02139
Science
and
Technology
Division
Dr.
Patrick
Kyllonen
Library
of
Congress
AFHRL/MOE
Washington,
DC
20540
Brooks
AFB,
TX
78235
Dr.
Charlotte
Linde
Dr.
James
McMichael
SRI
International
Navy
Personnel
R&D
Center
333
Ravenswood
Avenue
San
Diego,
CA
92152
Menlo
Park,
CA
94025
Dr.
Barbara
Means
Dr.
Robert
Linn
Human
Resources
College
of
Education
Research
Organization
University
of
Illinois
1100
South
Washington
Urbana,
IL
61801
Alexandria,
VA
22314
Dr.
Frederic
M.
Lord
Dr.
Arthur
Melmed
Educational
Testing Service
U. S.
Department
of
Education
Princeton,
NJ
08541
724
Brown
Washington,
DC
20208
Dr.
Don
Lyon
P.
0.
Box
44
Dr.
Al
Meyrowitz
Higley,
AZ
85236
Office
of
Naval
Research
Code
433
Dr.
William
L.
Maloy
(02)
800
N.
Quincy
Chief
of
Naval
Education
Arlington,
VA
22217-5000
and
Training
Naval
Air
Station
Dr.
George
A.
Miller
Pensacola,
FL
32508
Department
of
Psychology
Green
Hall
Dr.
Sandra
P.
Marshall
Princeton
University
Department
of
Psychology
Princeton,
NJ
08540
University
of
California
Santa
Barbara,
CA
93106
Dr.
Robert
Mislevy
Educational
Testing
Service
Dr.
Richard
E.
Mayer
Princeton,
NJ
08541
Department
of
Psychology
University
of
California
Dr.
Andrew
R.
Molnar
Santa Barbara,
CA
93106
Scientific
and
Engineering
Personnel
and
Education
Dr.
Jay
McClelland
National
Science Foundation
Department
of
Psychology
Washington,
DC
20550
Carnegie-Mellon
University
Pittsburgh,
PA
15213
Dr.
William
Montague
NPRDC
Code
13
Dr.
James
L.
McGaugh
San
Diego,
CA
92152
Center
for
the
Neurobiology
of
Learning
and
Memory
Headquarters,
Marine
Corps
University
of
California,
Irvine
Code
MPI-20
Irvine,
CA
92717
Washington,
DC
20380
Dr.
Kathleen
McKeown
Dr.
Allen
Munro
Columbia
University
Behavioral
Technology
Department
of
Computer
Science
Laboratories
-
USC
New
York,
NY
10027
1845
S.
Elena
Ave.,
4th
Floor
Redondo
Beach,
CA
90277
Dr.
Joe
McLachlan
Navy
Personnel
R&D
Center
San
Diego,
CA
92152
Director
Leadership
Management
Education
Distribution
Department
and
Training
Project
Officer
Naval
Military
Personnel
Command
Naval
Medical
Command
(Code
05C)
N-4
Washington,
DC
20372
Washington,
DC
20370
Mr.
Bill
Neale
Head,
HRM
Operations
Branch
HQ
ATC/TTA
Naval
Military
Personnel
Command
Randolph
AFB,
TX
78148
N-62F
Washington,
DC
20370
Technical
Director
Navy
Health
Research
Center
Assistant
for
Evaluation,
P.O.
Box
85122
Analysis,
and
MIS
San
Diego,
CA
92138
Naval
Military
Personnel
Command
N-6C
Dr.
Richard
E.
Nisbett
Washington,
DC
20370
University
of
Michigan
Institute
for
Social
Research
Spec. Asst. for Research,
Experimental & Academic Programs
Room 5261
Ann Arbor, MI 48109
Naval
Technical
Training
Command
(Code
016)
Dr.
Donald
A.
Norman
NAS
Memphis (75)
Institute
for
Cognitive
Science
Millington,
TN
38054
University
of
California
La
Jolla,
CA
92093
Program
Manager
for
Manpower,
Personnel,
and
Training
Dr.
Melvin
R.
Novick
NAVMAT
0722
356
Lindquist
Center
Arlington,
VA
22217-5000
for
Measurement
University
of
Iowa
Dr.
David
Navon
Iowa
City,
IA
52242
Institute
for
Cognitive
Science
University
of
California
Director,
Training
Laboratory
La
Jolla,
CA
92093
NPRDC
(Code
05)
San
Diego,
CA
92152
Assistant
for
Planning
MANTRAPERS
NAVOP
01B6
Director,
Manpower
and
Personnel
Washington,
DC
20370
Laboratory
NPRDC
(Code
06)
Head
San
Diego,
CA
92152
Workforce
Information
Section
NAVOP
140F
Director
Washington,
DC
20370
Human
Factors
&
Organizational
Systems
Lab.
Head
NPRDC
(Code
07)
Manpower, Personnel,
Training
San
Diego,
CA
92152
and
Reserve
Team
NAVOP
914D
Fleet
Support
Office
5A578,
The
Pentagon
NPRDC
(Code
301)
Washington,
DC
20350
San
Diego,
CA
92152
Library Office
of
Naval
Research
Code
P201L
Code
442EP
Navy
Personnel
R&D
Center
800
N.
Quincy
Street
San
Diego,
CA
92152
Arlington,
VA
22217-5000
Technical
Director
Group
Psychology
Program
Navy
Personnel
R&D
Center
Code
442GP
San
Diego,
CA
92152
Office
of
Naval
Research
800
N.
Quincy
St.
Commanding
Officer
Arlington,
VA
22217-5000
Naval
Research
Laboratory
Code
2627
Office
of
Naval
Research
Washington,
DC
20390
Code
442PT
800
N.
Quincy
Street
Dr.
Harry
F.
O'Neil,
Jr.
Arlington,
VA
22217-5000
Training
Research
Lab
(6
Copies)
Army
Research
Institute
5001
Eisenhower
Avenue
Special
Assistant
for
Marine
Alexandria,
VA
22333
Corps
Matters
Code
100M
Dr.
Stellan
Ohlsson
Office
of
Naval
Research
Learning
R & D
Center
800 N.
Quincy
St.
University
of
Pittsburgh
Arlington,
VA
22217-5000
3939
O'Hara
Street
Pittsburgh,
PA
15213
Psychologist
ONR
Branch
Office
Director
Technology
Programs
1030
East
Green
Street
Office
of
Naval
Research
Pasadena,
CA
91101
Code
200
800
North
Quincy
Street
Commanding
Officer
Arlington,
VA
22217-5000
Army
Research
Institute
ATTN:
PERI-BR
(Dr.
J.
Orasanu)
Director
Research
Programs
5001
Eisenhower
Avenue
Office
of
Naval
Research
Alexandria,
VA
22333
800
North
Quincy
Street
Arlington,
VA
22217-5000
Prof.
Seymour
Papert
20C-109
Mathematics
Group
Massachusetts
Institute
Office
of
Naval
Research
of
Technology
Code
411MA
Cambridge,
MA
02139
800
North
Quincy
Street
Arlington,
VA
22217-5000
Dr.
James
Paulson
Department
of
Psychology
Office
of
Naval
Research
Portland
State
University
Code
433
P.O.
Box
751
800 N.
Quincy
Street
Portland,
OR
97207
Arlington,
VA
22217-5000
Dr.
Douglas
Pearse
Office of
Naval
Research
DCIEM
Code
442
Box
2000
800
N.
Quincy
St.
Downsview,
Ontario
Arlington,
VA
22217-5000
CANADA
Dr.
James
W.
Pellegrino
Dr.
Mike
Posner
University
of
California,
University
of
Oregon
Santa Barbara
Department
of
Psychology
Department
of
Psychology
Eugene,
OR
97403
Santa
Barbara,
CA
93106
Dr.
Joseph
Psotka
Dr.
Nancy
Pennington
ATTN:
PERI-IC
University
of
Chicago
Army
Research
Institute
Graduate
School
of
Business
5001
Eisenhower
Ave.
1101
E.
58th
St.
Alexandria,
VA
22333
Chicago,
IL
60637
Dr.
Mark
D.
Reckase
Military
Assistant
for
Training
and
ACT
Personnel
Technology
P. O.
Box
168
OUSD
(R & E)
Iowa
City,
IA
52243
Room
3D129,
The
Pentagon
Washington,
DC
20301
Dr.
Lynne
Reder
Department
of
Psychology
LCDR
Frank
C.
Petho,
MSC,
USN
Carnegie-Mellon
University
CNATRA
Code
N36,
Bldg.
1
Schenley
Park
NAS
Pittsburgh,
PA
15213
Corpus
Christi,
TX
78419
Dr.
James
A.
Reggia
Dr.
Tjeerd
Plomp
University
of
Maryland
Twente
University
of
Technology
School
of
Medicine
Department
of
Education
Department
of
Neurology
P.O.
Box
217
22
South
Greene
Street
7500
AE
ENSCHEDE
Baltimore,
MD
21201
THE
NETHERLANDS
Dr.
Fred
Reif
Dr.
Martha
Polson
Physics
Department
Department
of
Psychology
University
of
California
Campus
Box
346
Berkeley,
CA
94720
University
of
Colorado
Boulder,
CO
80309
Dr.
Lauren
Resnick
Learning
R & D
Center
Dr.
Peter
Polson
University of
Pittsburgh
University
of
Colorado
3939
O'Hara
Street
Department
of
Psychology
Pittsburgh,
PA
15213
Boulder,
CO
80309
Dr.
Gil
Ricard
Dr.
Steven
E.
Poltrock
Code
N711
MCC
NAVTRAEQUIPCEN
9430
Research
Blvd.
Orlando,
FL
32813
Echelon
Bldg
#1
Austin,
TX
78759-6509
Dr.
Mary
S.
Riley
Program
in
Cognitive
Science
Dr.
Harry
E.
Pople
Center
for
Human
Information
University
of Pittsburgh
Processing
Decision
Systems
Laboratory
University of
California
1360
Scaife
Hall
La
Jolla,
CA
92093
Pittsburgh,
PA
15261
William
Rizzo
Dr.
Robert
Sasmor
Code
712
NAVTRAEQUIPCEN
Army
Research
Institute
Orlando,
FL
32813
5001
Eisenhower
Avenue
Alexandria,
VA
22333
Dr.
Andrew
M.
Rose
American
Institutes
Dr.
Roger
Schank
for
Research
Yale
University
1055
Thomas Jefferson
St.,
NW
Computer
Science Department
Washington,
DC
20007
P.O.
Box
2158
New
Haven,
CT
06520
Dr.
Ernst
Z.
Rothkopf
AT&T
Bell
Laboratories
Dr.
Walter Schneider
Room
2D-456
University
of
Illinois
600
Mountain
Avenue
Psychology
Department
Murray
Hill,
NJ
07974
603
E.
Daniel
Champaign,
IL
61820
Dr.
William
B.
Rouse
Georgia
Institute
of
Technology
Dr.
Alan
H.
Schoenfeld
School
of
Industrial
&
Systems
University
of
California
Engineering
Department
of
Education
Atlanta,
GA
30332
Berkeley,
CA
94720
Dr.
Donald
Rubin
Dr.
Janet
Schofield
Statistics
Department
Learning
R&D
Center
Science
Center,
Room
608
University
of
Pittsburgh
1
Oxford
Street
Pittsburgh,
PA
15260
Harvard
University
Cambridge,
MA
02138
Dr.
Judah
L.
Schwartz
MIT
Dr.
David
Rumelhart
20C-120
Center
for
Human
Cambridge,
MA
02139
Information
Processing
Univ.
of
California
Dr.
Judith
Segal
La
Jolla,
CA
92093
Room
819F
NIE
Dr.
E. L.
Saltzman
1200
19th
Street
N.W.
Haskins
Laboratories
Washington,
DC
20208
270
Crown
Street
New
Haven,
CT
06510
DR.
ROBERT
J.
SEIDEL
US
Army
Research
Institute
Dr.
Fumiko
Samejima
5001
Eisenhower
Ave.
Department
of
Psychology
Alexandria,
VA
22333
University
of
Tennessee
Knoxville,
TN
37916
Dr.
Ramsay
W.
Selden
NIE
Dr.
Michael
J.
Samet
Mail
Stop
1241
Perceptronics,
Inc
1200
19th
St.,
NW
6271
Variel
Avenue
Washington,
DC
20208
Woodland
Hills,
CA
91364
Dr.
Michael
G.
Shafto
ONR
Code 442PT
800
N.
Quincy
Street
Arlington,
VA
22217-5000
Dr.
Sylvia
A.
S.
Shafto
Dr.
Richard
Snow
National
Institute
of
Education
Liaison
Scientist
1200
19th
Street
Office
of
Naval
Research
Mail
Stop
1806
Branch
Office,
London
Washington,
DC
20208
Box
39
FPO
New
York,
NY
09510
Dr.
Ted
Shortliffe
Computer
Science
Department
Dr.
Elliot
Soloway
Stanford
University
Yale
University
Stanford,
CA
94305
Computer
Science
Department
P.O.
Box
2158
Dr.
Lee
Shulman
New
Haven,
CT
06520
Stanford
University
1040
Cathcart
Way
Dr.
Richard
Sorensen
Stanford,
CA
94305
Navy
Personnel
R&D
Center
San
Diego,
CA
92152
Dr.
Robert
S.
Siegler
Carnegie-Mellon
University
Dr.
Kathryn
T.
Spoehr
Department
of
Psychology
Brown
University
Schenley
Park
Department
of
Psychology
Pittsburgh,
PA
15213
Providence,
RI 02912
Dr.
Zita
M
Simutis,
Chief
James
J.
Staszewski
Instructional
Technology
Research
Associate
Systems
Area
Carnegie-Mellon
University
ARI
Department
of
Psychology
5001
Eisenhower
Avenue
Schenley
Park
Alexandria,
VA
22333
Pittsburgh,
PA
15213
Dr.
H.
Wallace
Sinaiko
Dr.
Marian
Stearns
Manpower
Research
SRI
International
and
Advisory
Services
333
Ravenswood
Ave.
Smithsonian
Institution
Room
B-S324
801
North
Pitt
Street
Menlo
Park,
CA
94025
Alexandria,
VA
22314
Dr.
Frederick
Steinheiser
Dr.
Derek Sleeman
CIA-ORD
Stanford
University
612
Ames
School
of
Education
Washington,
DC
20505
Stanford,
CA
94305
Dr.
Robert
Sternberg
Dr.
Edward
E.
Smith
Department
of
Psychology
Bolt
Beranek
&
Newman,
Inc.
Yale
University
50
Moulton
Street
Box
11A,
Yale
Station
Cambridge,
MA
02138
New
Haven,
CT
06520
Dr.
Alfred
F.
Smode
Dr.
Saul
Sternberg
Senior
Scientist
University
of
Pennsylvania
Code
7B
Department
of
Psychology
Naval
Training
Equipment
Center
3815
Walnut
Street
Orlando,
FL
32813
Philadelphia,
PA
19104
Dr.
Albert
Stevens
Dr.
Martin
M.
Taylor
Bolt
Beranek
&
Newman,
Inc.
DCIEM
10
Moulton
St.
Box
2000
-
Cambridge,
MA
02238
Downsview,
Ontario
CANADA
Dr.
Paul
J.
Sticha
Senior
Staff
Scientist
Dr.
Perry
W.
Thorndyke
Training
Research
Division
FMC
Corporation
HumRRO
Central
Engineering
Labs
1100
S.
Washington
1185
Coleman Avenue,
Box
580
Alexandria,
VA
22314
Santa Clara,
CA
95052
Dr.
Thomas
Sticht
Major
Jack
Thorpe
Navy
Personnel
R&D
Center
DARPA
San
Diego,
CA
92152
1400
Wilson
Blvd.
Arlington,
VA
22209
Dr.
David
Stone
KAJ
Software,
Inc.
Dr.
Martin
A.
Tolcott
3420
East
Shea
Blvd.
3001
Veazey
Terr.,
N.W.
Suite
161
Apt.
1617
Phoenix,
AZ
85028
Washington,
DC
20008
Cdr
Michael
Suman,
PD
303
Dr.
Douglas
Towne
Naval
Training
Equipment
Center
Behavioral Technology Laboratories
Code
N51,
Comptroller
1845
S. Elena Ave.
Orlando,
FL
32813
Redondo Beach,
CA
90277
Dr.
Hariharan
Swaminathan
Dr.
Robert
Tsutakawa
Laboratory
of
Psychometric
and
Department
of
Statistics
Evaluation
Research
University of
Missouri
School
of
Education
Columbia,
MO
65201
University of
Massachusetts
Amherst,
MA
01003
Dr.
David Vale
Assessment
Systems
Corp.
Mr.
Brad
Sympson
2233
University
Avenue
Navy
Personnel
R&D
Center
Suite
310
San
Diego,
CA
92152
St.
Paul,
MN
55114
Dr.
John Tangney
Dr.
Kurt
Van
Lehn
AFOSR/NL
Xerox
PARC
Bolling
AFB,
DC
20332
3333
Coyote
Hill
Road
Palo
Alto,
CA
94304
Dr.
Kikumi
Tatsuoka
CERL
Dr.
Beth
Warren
252
Engineering
Research
Bolt
Beranek
&
Newman,
Inc.
Laboratory
50
Moulton
Street
Urbana,
IL
61801
Cambridge,
MA
02138
Dr.
Maurice
Tatsuoka
Roger
Weissinger-Baylon
220
Education
Bldg
Department
of
Administrative
1310
S.
Sixth
St.
Sciences
Champaign,
IL
61820
Naval
Postgraduate
School
Monterey,
CA
93940
Dr.
Donald
Weitzman
Dr.
George
Wong
MITRE
Biostatistics
Laboratory
1820
Dolley
Madison
Blvd.
Memorial
Sloan-Kettering
MacLean,
VA
22102
Cancer
Center
1275
York
Avenue
Dr.
Shih-Sung
Wen
New
York,
NY
10021
Jackson
State
University
1325
J.
R.
Lynch
Street
Dr.
Wallace
Wulfeck,
III
Jackson,
MS
39217
Navy
Personnel
R&D
Center
San
Diego,
CA
92152
Dr.
Keith
T.
Wescourt
FMC
Corporation
Dr.
Joe
Yasutake
Central
Engineering
Labs
AFHRL/LRT
1185
Coleman Ave.,
Box
580
Lowry
AFB,
CO
80230
Santa
Clara,
CA
95052
Major
Frank
Yohannan,
USMC
Dr.
Douglas Wetzel
Headquarters,
Marine
Corps
Code
12
(Code
MPI-20)
Navy
Personnel
R&D
Center
Washington,
DC
20380
San
Diego,
CA
92152
Mr.
Carl
York
Dr.
Mike
Williams
System
Development
Foundation
IntelliGenetics
181
Lytton
Avenue
124
University
Avenue
Suite
210
Palo
Alto,
CA
94301
Palo Alto,
CA
94301
Dr.
Hilda
Wing
Dr.
Joseph
L.
Young
Army
Research
Institute
Memory
&
Cognitive
5001
Eisenhower
Ave.
Processes
Alexandria,
VA
22333
National
Science
Foundation
Washington,
DC
20550
Dr.
Robert
A.
Wisher
U.S.
Army
Institute
for
the
Cmdr.
Joe
Young
Behavioral
and
Social
Sciences
HQ,
MEPCOM
5001
Eisenhower
Avenue
ATTN:
MEPCT-P
Alexandria,
VA
22333
2500
Green
Bay
Road
North Chicago,
IL
60064
Dr.
Martin
F.
Wiskoff
Navy
Personnel
R & D
Center
Dr.
Steven
Zornetzer
San
Diego,
CA
92152
Office of
Naval
Research
Code
440
Dr.
Frank
Withrow
800
N.
Quincy
St.
U. S.
Office
of
Education
Arlington,
VA
22217-5000
400
Maryland
Ave.
SW
Washington,
DC
20202
Mr.
John
H.
Wolfe
Navy
Personnel
R&D
Center
San
Diego,
CA
92152
... This is accomplished by conveying words, symbols or gestures that the receiver decodes to establish or enrich a mental representation of the present or future state of the 'world'. This process is reinforced by pre-existing knowledge already stored in memory (Johnson-Laird, 1983;Jones et al., 2011;Rouse & Morris, 1986). Thus, these mental representation encompass models of appropriate actions based on the current state of the 'world' such as the team sports contexts (Eccles, 2010). ...
... In applications to football, these shared mental representations are assumed to form the foundation of effective coordination among the players, particularly vital in situations where the time for deliberate intra-team communication is limited. By interacting with each other, the players engage in a form of collective mental simulation, which automatically creates continuous feedback loops and updating of internalized shared mental representations or models (Blickensderfer et al., 2010;Cannon-Bowers et al., 1993;Fiore & Salas, 2006;Lausic et al., 2014Lausic et al., , 2015Reimer et al., 2006;Salas et al., 2008;Silva et al., 2013). From our perspective, there are three critical points for discussions. ...
Thesis
Full-text available
This doctoral thesis investigates the perceptual-motor skills essential for elite soccer performance, with a focus on creativity, variability in actions, visual exploratory activity (VEA), and intra-team communication. Using video notational analysis, a method that bridges the gap between controlled laboratory studies and the complex and dynamic conditions of competitive sport, this thesis examines how these skills manifest and influence performance. The studies are grounded in the ecological psychology framework, which emphasizes the continuous interaction between perception and action within the sporting environment. The thesis comprises five studies. The first study explores how small-sided games encourage creative actions, proposing that increased action variability fosters creativity. The second and third studies examine the VEA rates (considering the number of VEA in the time frame prior to ball possession) of elite soccer players, comparing different field positions and distinguishing between super-elite (award-winning) players and their elite teammates. The findings highlight how VEA, defined as purposeful body or head movements to gather environmental information, correlates with successful passing outcomes. The fourth and fifth studies focus on intra-team communication. A case study assesses the effectiveness of traditional video notational analysis compared to methods incorporating audio recordings, allowing for a more comprehensive examination of verbal and nonverbal communication. The final study challenges static, cognitive models of communication in sport, instead proposing an ecological perspective in which communication dynamically facilitates collective attention and shared affordances among teammates. This thesis contributes to a deeper understanding of how perceptual-motor skills can be analyzed, trained, and applied to enhance elite soccer performance. 
It advocates for methodologies that preserve ecological validity, ensuring that research findings remain relevant to real-world sporting contexts.
... Az összesítés során egyrészt nehézséget okozott a cetlik nagy mennyiségének kezelése, másrészt a csoportok kialakítása is többkörös folyamatot takart: a tematikus blokkokon belül a cetlik csoportosítását egy szerző végezte el, a végleges összevonásokat azonban legalább két fő döntése eredményezte. Erre azért volt szükség, mert a koncepciók interpretálása valamelyest személyfüggő lehet (Rouse-Morris 1986), így egymás munkájának validálásával küszöböltük ki az egyéni értelmezésekből fakadó esetleges inkonzisztenciákat. Ez kifejezetten sok időt és befektetett munkát igényelt, mivel három kutató szemszögéből nézve is többféle értelmezés és csoportosítási javaslat született, amelyek közül a legpontosabb meglátásokat igyekeztünk beépíteni a végső modellbe. ...
Article
Diszciplínákon átívelő konszenzus támasztja alá, hogy a jelenlegi élelmiszerrendszerek fenntarthatatlan módon működnek és jelentős mértékben hozzájárulnak a globális éghajlatváltozáshoz, a biológiai sokféleség csökkenéséhez és a természetes környezet pusztulásához. Az utóbbi években tapasztalt extrém események ráirányították a közfigyelmet arra, hogy az élelmiszer-ellátás és -fogyasztás megszokott módjai súlyos szocioökológiai kihívásokkal néznek szembe. Tanulmányunk célja, hogy egy valós példán keresztül mutassa be a modellezés technikáját. A rendszertérképezés módszertani családjába tartozó mentális modellezés alkalmas eszköz az olyan complex rendszerek, mint például az élelmiszerrendszerek holisztikus értelmezésére és ábrázolására, ami lehetőséget nyújt a problémák és beavatkozási pontok rendszerszintű összefüggéseinek feltérképezésére. A kutatásba bevont közösség a Gazda-Molnár-Pék hálózat, amely környezetbarát és etikus termesztésből származó kenyérgabonából készült sütőipari termékek előállításával foglalkozó szereplőket foglal magába. A tanulmány empirikus hátterét tíz hálózattaggal készült, direkt elicitációra támaszkodó mentális modell adja. Ezek a modellek a közös alkotás keretében, félig strukturált interjú protokollba ágyazva feltárták azokat a kognitív konstrukciókat, amelyekkel a résztvevők a konvencionális sütőipari értéklánc fenntarthatatlan természetét leírják és magyarázzák, valamint azonosították azokat a kitörési pontokat, amelyek az értéklánc fenntarthatósági átmenetének mozgatórugói lehetnek. Tanulmányunk elsősorban módszertani fókuszú, ennek megfelelően az empíria gyűjtésének és feldolgozásának folyamatát kívánja végigjárni, bemutatva a módszer előnyeit és korlátait, valamint alkalmazásának további lehetőségeit.
... The primary construct representing cognitive learning processes is the mental model. A mental model is defined as a reflection of an operator's knowledge regarding a system's purpose, form, function, and observed and anticipated states (e.g., Johnson-Laird, 1983; Rouse & Morris, 1986). These models are based on a user's beliefs and perceptions, which can originate from various sources, such as the owner's manual, representing explicit knowledge. ...
Article
Full-text available
The appropriate usage of automated driving systems must be learned, with the learning process supposedly tied to the continuous engagement of the driver with the system. Despite extensive research on the short-term effects of automated driving systems, there is limited understanding of the long-term usage and the adaptation processes involved. This work addresses this gap through a systematic literature review focused on the effects of repeated or prolonged exposure to automated driving systems. In particular, the review aims to clarify definitions and theoretical constructs related to the long-term usage of driving systems, to provide recommendations for the methodological design of studies investigating learning and adaptation processes, and to synthesize existing findings on the effects of repeated exposure. A comprehensive literature search adhering to the PRISMA guidelines resulted in the review of 96 articles. The review emphasizes that other influencing factors, such as the sequence and quality of experienced events, play a crucial role in driver adaptation rather than the mere duration of exposure. Finally, a conceptual model of driver adaptation is developed, distinguishing between learning processes and temporary state adjustments and serving as a basis for exploring and comprehending the mechanisms and dynamics of such processes. This model highlights the need for future studies to adopt a more nuanced approach to "long-term" exposure, considering intermission periods and the influence of system capabilities, limits, and failures on the development of mental models. The paper concludes by offering recommendations for future research, stressing the importance of both behavioral and attitudinal adaptation.
... The psychologist Kenneth Craik (Craik, 1943) originally proposed the notion of mental models as small-scale models in people's minds of how the world works. These models support predicting events, drawing inferences, and interpreting the world (Rouse & Morris, 1986). Some scholars have defined the mental model as a knowledge structure (Kraiger et al., 1993) or knowledge representation (Greene & Azevedo, 2009). ...
Article
Full-text available
In science education, the abstraction and complexity of scientific concepts are usually stumbling blocks that prevent students from learning science. Recently, augmented reality (AR) has offered transformative potential to support scientific concept learning by visualizing scientific phenomena and enhancing students' experiences. However, the lack of appropriate pedagogical scaffolds might not ensure effective learning in the AR learning environment (ARLE). In this study, we developed an AR-based learning tool (PeachBlossom) to support students' scientific concept learning and integrated the concept map strategy into AR learning activities. We conducted a quasi-experiment to examine the educational effectiveness of the concept map strategy on students' mental models and cognitive load in an ARLE. Eighty-five seventh graders (aged 12–14) from Central China were assigned into two groups (AR and AR with a concept map [ARCM]). The results showed that when considering students' prior mental models, the positive effect of the concept map strategy was found only in students with low and medium levels of prior mental models. In addition, the concept map strategy reduced students' mental effort but did not significantly affect students' mental load. This study emphasises the importance of considering students' prior mental models when implementing the concept map strategy in ARLEs.
Article
The rapid development of driving automation systems (DAS) in the automotive industry aims to support drivers by automating longitudinal and lateral vehicle control. As vehicle complexity increases, it is crucial that drivers comprehend their responsibilities and the limitations of these systems. This work investigates the role of the driver’s perception in the understanding of DAS by cross-analysing four empirical studies. Study I investigated DAS usage across different driving contexts via an online survey conducted in Germany, Spain, China, and the United States. Study II explored contextual DAS usage and the factors influencing drivers’ understanding through a Naturalistic Driving Study (NDS), followed by in-depth interviews. Study III employed a Wizard-of-Oz on-road driving study to simulate a vehicle offering Level 2 and Level 4 DAS, paired with pre- and post-driving interviews. Study IV followed up with a Wizard-of-Oz on-road driving study simulating Level 2 and Level 3 DAS, with subsequent in-depth interviews. The findings from these studies allowed the identification of aspects constituting a driver’s understanding and factors influencing their perception of DAS. The identified aspects and factors were consolidated into a unified conceptual model, describing the process of how perception shapes the driver’s mental model of a driving automation system.
Article
Managerial mental representations (MMRs) are mental constructs that structure cognitive content to guide perception and interpretation. MMRs have been examined across a broad spectrum of management research contexts, leading to the use of numerous related terms such as “mental representation,” “schema,” “mental model,” “cognitive frame,” “cognitive map,” and “mindset.” This proliferation of terms has caused considerable definitional overlap and ambiguity. To foster definitional clarity, this review systematically analyzes 206 articles employing any of 33 MMR terms used during the past 30 years. We identify the conceptual and functional definition facets of MMRs and use them to analyze commonalities and differences among the most prominent MMR terms. We further examine both established and emerging discussions surrounding the characteristics of MMRs. Established discussions focus on MMR content and levels of analysis, while emerging discussions explore MMR permanence and implicitness. We propose suggestions to advance each conversation. Based on this comprehensive analysis, we create a guiding framework aiding future research to conceptualize MMRs and navigate terminology choices. Finally, we propose two future research directions: integrating the content and process perspectives on MMRs and applying an MMR lens to examine the emergence of artificial intelligence in organizations.
Article
Review of Systems (ROS) forms are a common tool for clinical assessment and billing. However, terms on ROS forms vary widely. Understanding the variations in ROS terms and perceived definitions (or misperceptions) has implications for patient care and effective implementation of electronic health record (EHR) documentation practices. To define a representative list of ROS terms and to assess the range of perceived definitions (and misperceptions) of ROS terms among clinicians and lay volunteers. Qualitative review of ROS forms and qualitative interviews with clinicians and lay volunteers. Eleven clinicians in general internal medicine and internal medicine sub-specialties; 30 lay volunteers. We employed a mental models framework approach to understand patient perceptions, accurate and inaccurate, of commonly used ROS terms. To do this, we first abstracted common ROS terminology used in medicine practices. Then, we developed consensus definitions of ROS terms with a sample of expert clinicians. Lastly, we conducted qualitative interviews with lay volunteers to assess their interpretations of these terms. By consensus, clinicians generally agreed on general principles of operational definitions. Yet, misinterpretations among clinicians were common, particularly with respect to timeframe, duration, and nuances of symptoms. Laypersons had varied interpretations which often differed from the clinicians. ROS forms are inconsistently operationalized and defined, with misunderstandings across both clinicians and the public. Simultaneously, the forms create additional burden across individuals involved in care delivery, including patients and families. Broad terms and discussion of concerns with patients may provide an efficient alternative to meeting both billing requirements and patient needs.
Article
Much of the research on silence as a response to group-level abuse suggests that group members, despite collectively recognizing the leader's abuse, choose to remain silent and accept the abuse due to fear of retaliation. However, there is also growing evidence that groups can be vulnerable to the development of blind spots that distort or entirely preclude any recognition of the leader's misdeeds. In this paper, we draw upon existing theory and research to present a conceptual framework that outlines the group characteristics and dynamics leading to such blind spots: impediments that prevent the accurate recognition or appraisal of group psychological abuse committed by a leader. The aim is to explain why groups may unwittingly tolerate abusive leaders, and to provide insights into the complexities of group dynamics and leader influence.
Article
Full-text available
Following an analysis of task requirements for successful troubleshooting, this paper considers human abilities, limitations, and inclinations with respect to troubleshooting. Research on the effects of various approaches to the training of troubleshooting is reviewed. The extent to which troubleshooting performance is influenced by instruction is highly related to the level of explicitness of action-related information provided. An approach that forces people to use their system knowledge explicitly is a promising alternative to explicit instruction in algorithms or diagnostic heuristics, but such an approach is not supported by data from transfer studies. A combination of the two approaches may be the most effective means of teaching troubleshooting, and research evaluating the soundness of this idea should be conducted.
Article
Evidence is reviewed which suggests that there may be little or no direct introspective access to higher order cognitive processes. Subjects are sometimes (a) unaware of the existence of a stimulus that importantly influenced a response, (b) unaware of the existence of the response, and (c) unaware that the stimulus has affected the response. It is proposed that when people attempt to report on their cognitive processes, that is, on the processes mediating the effects of a stimulus on a response, they do not do so on the basis of any true introspection. Instead, their reports are based on a priori, implicit causal theories, or judgments about the extent to which a particular stimulus is a plausible cause of a given response. This suggests that though people may not be able to observe directly their cognitive processes, they will sometimes be able to report accurately about them. Accurate reports will occur when influential stimuli are salient and are plausible causes of the responses they produce, and will not occur when stimuli are not salient or are not plausible causes.
Article
Eighteen subjects either controlled or monitored the system dynamics of a two-dimensional pursuit display. Detection of changes in system dynamics was faster and more accurate when subjects controlled them than when they monitored. The skill acquired by controlling transferred positively to the monitoring mode, producing enhanced detection performance. There was no transfer from the monitoring mode to the controlling mode. Monitors of automatic systems who have had prior manual experience rely upon different perceptual cues in making their detection response than do those who have had no experience. The training implications of these findings are discussed.
Article
An analysis of phase-plane switching loci was used to infer the human operator's internal model of a dynamic system he was attempting to control. Striking differences were found between the internal model and the actual dynamic system, and the internal model exhibited orderly changes with practice. The difficulties involved in incorporating a non-veridical internal model into optimal control models of the human operator are discussed.
Article
Many human operator studies have used successfully the concept that the human operator performs his task on the basis of certain knowledge about the system to be controlled, called the internal model. In this paper, the literature on manual control will be reviewed briefly with attention focused on the use of the internal model concept. To illustrate the applicability of the internal model concept in the field of man-machine systems, an application is given in the human control of large ships. A model to describe the helmsman's behavior in steering a supertanker, and the influence on his behavior of additional displays such as a rate of turn indicator will be described.
Article
Three experiments examined the influence of previously established associations on production of the box solution to the candle problem. In Experiment I, experimental subjects learned candle—box as one of a list of verbal paired associates before attempting to solve the candle problem; control subjects learned the pair candle—paper. The candle—box association was effective in cuing box solutions only when the experimental subjects were informed that one pair from the just-learned list would help them solve the problem. In Experiments II and III these findings were replicated using physical objects in the paired associate task: learning to place a candle in a box as a paired associate had no effect on later production of the box solution to the candle problem, unless subjects were informed that one of the paired associates was relevant. The results were interpreted on the basis of a neo-Selzian model of problem solving.
Article
Approaches to training based on theory, proceduralization, and simulation are reviewed. The strengths of each approach are combined to produce a new method involving a mix of low, moderate, and high fidelity simulators. Applications of the mixed-fidelity approach to aircraft maintenance and marine engineering are discussed.
Article
Subjects were required to control a relatively complex and slow-response laboratory process plant for a total of three hours. Output error and control movement scores were measured, and power spectral density and cross correlation functions were computed. The subjects' strategies were assessed by observation and from the responses to a questionnaire given after they had completed the experiment. The results show that when intermediate information from the plant is available, subjects tend to develop an anticipatory model, which enables them to control the plant more effectively than with output error feedback alone. Special instructions about the structure and dynamics of the plant tended to facilitate the development of the anticipatory model. Under the particular conditions of the experiment, a time-history record of the control settings and output error were of little value. It was found that observation of the output of a three-term automatic controller was of limited value as a training aid. In general, the human operator's strategies and information requirements were very different from those of the automatic controller.