A Connectionist Model of Motivation
Thomas E. Portegys, Lucent Technologies, portegys@lucent.com
Abstract
This paper presents Mona, a connectionist model of moti-
vation implemented by a network of neuron-like compo-
nents whose interactions drive behavior toward goals
which reduce homeostatic needs. The network incorporates
a short term memory capability allowing it to model a
state-space with relatively few neurons.
Introduction
This paper presents Mona, a connectionist model of moti-
vation. In this context, a motive, instead of being a type of
goal state [4], is a function which drives behavior toward
goals which reduce homeostatic needs. In living organisms
these are basic needs such as thirst, hunger, sex, etc. The
earliest neurological mechanisms evolved to ensure sur-
vival and reproduction by satisfying these needs. Moreover,
given nature’s penchant for creating new capabilities by
extending and adapting old ones (the reptilian, mammalian,
and neocortical layers of the human brain being a case in point), it
is plausible that these fundamental mechanisms underpin
more intelligent behavior.
Mona’s neural network incorporates a short term memory
capability which allows it to model a state-space with rela-
tively few neurons, thus addressing the problem of manag-
ing the intractable size of real-world state-spaces.
Mona uses the same “hardware” as a predecessor, GIL (a
Goal-directed Inductive Learner) [6], interacting with its
environment as shown in Figure 1. All knowledge of the
state of the environment is absorbed through “senses”;
there are no special modalities or channels by which
instructions or meta-information are given. Responses are
expressed to the environment with the goal of eliciting sen-
sory inputs which are internally associated with the reduc-
tion of needs.
Human and animal intelligence arises from a network of
neurons which operate by mutual excitation and inhibi-
tion [2,8]. Mona is an abstraction of a natural neurological
system consisting of a network of computational units, each
of which is capable of receiving and expressing mutually
mediating influences. To elucidate by example, consider a
functional view of the nervous system of a simple organism
which controls feeding behavior, shown in Figure 2. Feed-
ing consists of the sequence of catching, killing and eating
prey.
“Feed”, “Catch”, “Kill”, and “Eat” are neurons which fire
when their namesake events occur. The solid arrows are
enabling (or disabling) signals directed from one neuron to
another; these signals are analogous to the excitatory and
inhibitory influences of living neurons. The dotted arrows
are drive, or motivating, signals derived from the organ-
ism’s need for food. The above is read as follows. When the
organism becomes hungry, the goal associated with the
need of hunger-reduction, “Eat”, becomes a source of drive
signals propagating to antecedent neurons in a kind of
“bucket brigade” [1] from primary to secondary goals,
causing them to become motivated, or capable of respond-
ing. Once the prey is caught, the killing neuron is enabled,
and once that is done, the eating neuron is enabled. The
enabling of a neuron means that it is sensitized, or ready to
fire. Thus a neuron’s state consists of the 3-tuple (drive,
enablement, firing). The events with which neurons are
associated can be drawn from sensors, responses, or, as in
the case of the mediating “Feed” neuron, the states of com-
ponent neurons.
Neurons maintain a base level of enablement, analogous to
long term memory, which is dynamically modified, as part
of short term memory, to accomplish a concerted operation.
For example, the “Feed” and “Catch” neurons might be
enabled by default, while the “Kill” and “Eat” neurons rest
in a disabled state awaiting the catching of prey. This would
prevent an attempt to kill an object being caught by the
organism for the purpose of mating or building a nest.
The ability of neurons to disable other neurons allows fur-
ther opportunity for context-dependent cooperation. For
example, it would be sensible for the “Catch” neuron to dis-
able itself upon firing to prevent the seizing of prey while a
catch is being eaten.
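The 3-tuple state and the enabling interactions described above can be made concrete with a minimal sketch; the types and names below are illustrative, not Mona's actual code:

```cpp
#include <map>
#include <string>
#include <vector>

// Illustrative neuron state: the 3-tuple (drive, enablement, firing).
struct ToyNeuron {
    double drive = 0.0;   // motivating signal derived from need
    bool enabled = false; // sensitized, ready to fire
    bool firing = false;
};

// A network in which firing one neuron sets the enablement of others.
struct ToyNet {
    std::map<std::string, ToyNeuron> neurons;
    struct Link { std::string from, to; bool value; }; // enable (true) or disable (false)
    std::vector<Link> links;

    void fire(const std::string& name) {
        neurons[name].firing = true;
        for (const Link& l : links)
            if (l.from == name) neurons[l.to].enabled = l.value;
        neurons[name].firing = false; // firing is momentary
    }
};

// The feeding example: catching enables killing, killing enables eating,
// and catching disables itself upon firing.
inline bool feedingDemo() {
    ToyNet net;
    net.neurons["Catch"].enabled = true; // "Kill" and "Eat" start disabled
    net.links = {{"Catch", "Kill", true}, {"Kill", "Eat", true},
                 {"Catch", "Catch", false}};
    net.fire("Catch");
    if (!net.neurons["Kill"].enabled || net.neurons["Catch"].enabled) return false;
    net.fire("Kill");
    return net.neurons["Eat"].enabled;
}
```

The drive field is carried but unused here; drive propagation is described in the Description section.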
[Figure 1 - The Mona/environment interaction. Mona emits responses to the environment and receives sensory data from it.]
[Figure 2 - Feeding control. The “Feed” mediator links the “Catch”, “Kill”, and “Eat” neurons; solid arrows carry enable signals and dotted arrows carry drive signals.]
Description
The events which neurons represent can be drawn from
sensors, responses, or the states of “component” neurons,
calling for three types of neurons. Neurons attuned to sen-
sors are “receptors”, those associated with responses are
“motors”, and those mediating other neurons are “media-
tors”. A mediator neuron controls the transmission of drive
and enablement through the sequence of its component
neurons.
Here is an example task: Mona must get into her home
from somewhere out in the world, a locked door barring the
way inside, thus necessitating the use of a key to unlock the
door. She needs to know several things, such as how to get
to the door, how to unlock the door, and how to enter her
home through the unlocked door. Mona must produce a
sequence of responses to proceed from an initial keyless
condition in the world to her home.
Figure 3 depicts the portion of Mona’s neural network
which manages the entering of home through an unlocked
door. The house-shaped objects are receptor neurons, such
as the one marked “Door”; the inverted houses are motor
neurons, such as “Move”; and the diamonds are mediator
neurons, such as “Enter home”. The numbers in parenthe-
ses indicate drive levels, which will be discussed presently;
suffice it to say for now that the “Home” receptor has been
associated with the reduction of a need, and is thus a goal
for Mona. The numbered arrows proceeding from a media-
tor indicate a sequence of neurons mediated by it, known to
the mediator as its “events”. In this case, “Enter home”
mediates a sequence of events associated with the receptor
“Door”, the motor “Move”, and the receptor “Home”. This
mediator thus governs the process of entering home by
moving through a door. The type of mediation exerted by
“Enter home” is an enabling one, meaning that it allows fir-
ing events to propagate enabling influences. Although not
depicted in this example, a disabling mediator has dotted
arrows instead of solid.
Initially the door is locked, meaning that the “Enter home”
mediator is disabled, or not expected to function, and this is
represented by the dotted outline of the mediator. In order
to enable “Enter home”, another mediator must come into
play: “Enable enter home”. This mediator will enable the
“Enter home” neuron when the “Unlock door” neuron fires.
However, the “Unlock door” neuron is also in a disabled
state, requiring “Get key”, shown in Figure 4, to fire as a
precondition - the door cannot be unlocked without the key.
The final two pieces are supplied in Figure 5: how to get a
key (“Get key”), and how to get to the door (“Go to door”).
Since these diagrams show the initial state of the network,
the “World” and “No key” receptors are firing, denoted by
the double outlines on their graphical symbols.
[Figure 3 - Enable enter home/Enter home. “Enter home” (disabled) mediates the sequence Door, Move, Home; “Enable enter home” mediates Unlock door, Enter home. The goal receptor “Home” has drive level 1; all other neurons 0.]
[Figure 4 - Enable unlock door/Unlock door. “Unlock door” (drive 0.97) mediates the sequence Door, Use key, Door; “Enable unlock door” mediates Get key, Unlock door.]
When this example is run on a software implementation,
the following trace of firing neurons is obtained:
Receptor firing: No key
Receptor firing: World
Motor firing: Take key
Receptor firing: Key
Mediator firing: Get key
Receptor firing: World
Motor firing: Move
Receptor firing: Key
Receptor firing: Door
Mediator firing: Go to door
Motor firing: Use key
Receptor firing: Key
Receptor firing: Door
Mediator firing: Unlock door
Mediator firing: Enable unlock door
Motor firing: Move
Receptor firing: Key
Receptor firing: Home
Mediator firing: Enter home
Mediator firing: Enable enter home
The following textual notation can also be used to more
concisely describe a network.
Receptor neuron:
Receptor{“<name>”: (<sensors>)}
where <sensors> describes the sensory condition under
which the receptor fires.
Motor neuron:
Motor{“<name>”: (<responders>)}
where <responders> describes the responder output when
the neuron fires.
Mediator neuron:
Mediator{“<name>”: <enabled value>/<enabling
value>(<events>)}
where <enabled value> is the mediator’s initial state of
enablement, either “enabled” or “disabled”; <enabling
value> is the type of enabling influence on its event neu-
rons, either “enabling” or “disabling”; and <events> is a
comma-separated sequence of mediated neuron names.
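This notation maps directly onto a simple record type. The following sketch (field and function names are assumptions, not Mona's source) builds a spec and renders it back into the notation:

```cpp
#include <string>
#include <vector>

// Illustrative representation of Mediator{"<name>": <enabled>/<enabling> (<events>)}.
struct MediatorSpec {
    std::string name;
    bool enabled;                    // <enabled value>
    bool enabling;                   // <enabling value>
    std::vector<std::string> events; // ordered component neuron names
};

// Render a spec in the paper's textual notation.
inline std::string toNotation(const MediatorSpec& m) {
    std::string s = "Mediator{\"" + m.name + "\": ";
    s += m.enabled ? "enabled" : "disabled";
    s += "/";
    s += m.enabling ? "enabling" : "disabling";
    s += " (";
    for (size_t i = 0; i < m.events.size(); i++) {
        if (i > 0) s += ",";
        s += "\"" + m.events[i] + "\"";
    }
    s += ")}";
    return s;
}
```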
Appendix 1 contains pseudo-code details of the following
functions. Neurons use a simple firing threshold function.
Receptor and motor neurons fire when their associated sen-
sory/response events occur. A mediator neuron contains an
eventFiring() function which fires the mediator when
each event in its sequence fires within the maximum delay
imposed by the mediator’s maxEventDelay value. If the
mediator is enabled, the eventFiring() function also
allows enablement/disablement to be propagated from a fir-
ing component neuron to the next.
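The sequencing behavior of eventFiring() can be sketched in isolation; the class below is a simplified, illustrative model in which a sequence that misses its maxEventDelay window resets:

```cpp
// Illustrative mediator sequencing: component events 0..numEvents-1 must
// fire in order, each within maxEventDelay time steps; the mediator fires
// on the final event. (Enablement propagation is omitted for brevity.)
class ToyMediator {
public:
    ToyMediator(int numEvents, int maxEventDelay)
        : numEvents_(numEvents), maxEventDelay_(maxEventDelay) {}

    // Call once per time step with the firing event number (-1 if none).
    // Returns true when the mediator itself fires.
    bool step(int firingEvent) {
        if (expected_ > 0 && --timer_ < 0) expected_ = 0; // timed out: reset
        if (firingEvent != expected_) return false;       // not the expected event
        if (expected_ == numEvents_ - 1) {                // final event: fire
            expected_ = 0;
            return true;
        }
        expected_++;             // expect the next event...
        timer_ = maxEventDelay_; // ...within a fresh delay window
        return false;
    }

private:
    int numEvents_;
    int maxEventDelay_;
    int expected_ = 0;
    int timer_ = 0;
};
```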
Mona’s raison d’être is need-reduction. For this purpose,
some receptors are associated with the reduction of needs
and are thereby defined to be goals. For example, a warmth
receptor would be associated with a reduction of feeling
cold. The drive() function allows need to propagate from
goal sources to other neurons in the network, attenuating
so as to preferentially drive “closer” neurons and prevent
endless propagation. Need causes a mediator neuron to perform a
check: if it is enabled, it will pass the need into its expected
event neuron in order to motivate it to occur; otherwise, it
passes the need to its super-mediating neurons to motivate
them to enable it. The mediator-specific eventDrive()
function provides a way for a mediator to pass need from
its final event to its expected event. Upon completion of
propagation, the need resident in motor neurons is trans-
lated into the response potentials associated with those neu-
rons. The system response is that associated with the
maximum potential value.
[Figure 5 - Get key/Go to door. “Get key” (drive 0.95) mediates the sequence No key, Take key (drive 0.94), Key; “Go to door” mediates the sequence World, Move, Door.]
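The drive() attenuation and the selection of the maximum-potential response can be sketched as follows; the linear-chain topology and the ATTENUATION value are illustrative assumptions:

```cpp
#include <map>
#include <string>
#include <vector>

const double ATTENUATION = 0.01; // illustrative per-hop attenuation

// Propagate need backward from a goal along a chain of neuron names,
// attenuating at each hop so "closer" neurons receive more drive.
inline std::map<std::string, double>
propagate(const std::vector<std::string>& chainFromGoal, double need) {
    std::map<std::string, double> arrived;
    for (const std::string& name : chainFromGoal) {
        if (need <= 0) break; // endless propagation prevented
        arrived[name] = need;
        need -= ATTENUATION;
    }
    return arrived;
}

// The system response is the motor with the maximum potential.
inline std::string pickResponse(const std::map<std::string, double>& potentials) {
    std::string best;
    double max = -1.0;
    for (const auto& p : potentials)
        if (p.second > max) { max = p.second; best = p.first; }
    return best;
}
```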
The network as a state-space model
The network must embody a model of the environment.
However, instead of directly implementing a state-space of
possibly intractable proportions, the enabling and disabling
operations allow the “topology” of the network to be modi-
fied by the act of operating it. The network may be consid-
ered to be a hybridization of a logic engine and a state-
space search engine having two key properties: (1) more
than one state (neuron) can be current (firing) at a particular
time, and (2) current states can “prove” (enable) or “dis-
prove” (disable) the reachability of states. These properties
allow a network to assume a large number of states relative
to the number of neurons which comprise it.
As an illustration, consider the personal financial state-
space shown in Figure 6, in which money is earned at a job
(pocket), spent at a store (broke), and deposited/withdrawn
at a bank (saved). For the sake of simplicity, let there be a
single quantum of money in the economy, e.g., if the
money is in the bank, more money cannot be earned. The
space contains (3 places) × (3 money situations) = 9
states.
Figure 7 shows a network representation of the problem in
abbreviated graphical form, for clarity. The ‘+’/’-’ indicate
enabling/disabling influences. It can be seen that moving
between places (the top portion of the figure) is indepen-
dent of the transactions which transpire at those places
since the enablement states (for earn, spend, deposit, and
withdraw) store the transaction possibilities. The addition
of places not involved in monetary dealings, a reasonable
real-world supposition, alters only the place transition por-
tion of the network, avoiding the combinatorial expansion
of the state-space model. For example, if “home”, “park”,
and “museum” are added as places, the state-space expands
by 9 states, while the network expands by 3 neurons.
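The scaling argument amounts to a product-versus-sum comparison, which the following one-liner (counts taken from the example above) makes explicit:

```cpp
// The state-space is the product of places and money situations, while the
// network adds only one neuron per new place.
inline int stateSpaceSize(int places, int moneySituations) {
    return places * moneySituations;
}
```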
A comparison with other ANNs
In contrast with Hopfield and backpropagating ANNs [3,5],
which are primarily stateless pattern classifiers, Mona
employs a short term memory capability to navigate the
environment toward goals. Short term memory is imple-
mented by the retention of neural firing sequences and by
the enabling and disabling operations, which modify the
state of the network based on sensory events and responses
motivated by needs. The incorporation of memory into
ANNs is a topic of current interest [7].
Demonstration
This problem demonstrates how the neural network can be
used to simulate a foraging ant. Natural ants are known to
follow trails of chemical markers to get about. In this prob-
lem, the artificial ant must follow a meandering trail of
marks from its nest to a piece of food, which it must then
carry back to the nest. The problem illustrates the interplay
of mediator neurons, using mutual enablement and disable-
ment, to guide the ant to a goal in an unpredictable environ-
ment.
A sample trail is shown in Figure 8. The ant starts at its nest
and follows the trail marks to the cake. The trail is ran-
domly generated in such a way that it never crosses itself.
The ant senses what is at the current location, and has the
ability to move forward and backward, grab and drop the
food, and orient itself in the direction of the trail. Generally
a trail leads straight on, so the most efficient strategy is to
plunge ahead and orient upon leaving the trail. An initial
positive need is associated with the receptor which detects
the presence of food at the nest.
[Figure 6 - Financial state-space. Nine states pair each place (job, store, bank) with a money situation (broke, pocket, saved).]
[Figure 7 - Financial network. Place neurons (job, store, bank) and transaction neurons (earn, spend, deposit, withdraw), connected by enabling (+) and disabling (-) influences.]
The following mediators were defined for this problem:
# Top-level control: “Forage” is to “Get food” then
# enable “Bring food” to bring it back to the nest.
Mediator{“Forage”: enabled/enabling (“Get
food”,”Bring food”)}
Mediator{“Get food”: enabled/enabling (“Grab
food”,”Orient”,”Mark”)}
Mediator{“Grab food”: enabled/enabling
(“Food”,”Grab”,”No food”)}
Mediator{“Bring food”: disabled/enabling
(“Nest”,”Drop”,”Food at nest”)}
# After food obtained, reset “Bring food” to disabled
# state for next forage.
Mediator{“Disable bring food”: enabled/disabling
(“Food at nest”,”Bring food”)}
# “Travel trail” is the normal way to move.
# “To food” and “To nest” associate “Travel trail” with
# getting somewhere.
Mediator{“Travel trail”: enabled/enabling
(“Mark”,”Forward”,”Mark”)}
Mediator{“To food”: enabled/enabling (“Travel
trail”,”Food”)}
Mediator{“To nest”: enabled/enabling (“Travel
trail”,”Nest”)}
# If the ant steps off the trail (“No mark”), these
# control getting it back on:
# 1) Disable normal traveling (“Travel trail”).
# 2) Initiate “Trail search” to back-up and orient.
# 3) Once oriented, enable normal traveling.
Mediator{“Disable travel trail”: enabled/disabling
(“No mark”,”Travel trail”)}
Mediator{“Trail search”: enabled/enabling (“No
mark”,”Backward”,”Mark”,”Orient”,”Mark”)}
Mediator{“Enable travel trail”: enabled/enabling
(“Trail search”,”Travel trail”)}
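The control these mediators encode (move forward on a mark; back up and orient when off the trail; grab at the food; drop at the nest) can be condensed into a toy stimulus-response table. This is not Mona itself, which arrives at the same responses through drive propagation rather than explicit rules:

```cpp
#include <string>
#include <vector>

// Illustrative stimulus-response summary of the foraging mediators; the
// response pattern mirrors the Forward/Backward/Orient firing trace.
inline std::vector<std::string>
antResponses(const std::vector<std::string>& percepts) {
    std::vector<std::string> out;
    for (const std::string& p : percepts) {
        if (p == "Mark") {
            out.push_back("Forward");  // "Travel trail"
        } else if (p == "No mark") {
            out.push_back("Backward"); // "Trail search": back up...
            out.push_back("Orient");   // ...and re-orient to the trail
        } else if (p == "Food") {
            out.push_back("Grab");     // "Grab food"
        } else if (p == "Nest") {
            out.push_back("Drop");     // "Bring food"
        }
    }
    return out;
}
```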
Running the program on the sample results in the following
responses to fetch the cake to the nest:
Forward X 5
Backward
Orient
Forward X 2
Backward
Orient
Forward X 8
Backward
Orient
Forward X 6
Backward
Orient
Forward X 3
Grab
Orient
Forward X 4
Backward
Orient
Forward X 6
Backward
Orient
Forward X 8
Backward
Orient
Forward X 2
Backward
Orient
Forward X 4
Drop
Conclusion
When a cat wants to catch a mouse, why does it wait at one
mouse hole and not another? Perhaps because the context
surrounding a particular mouse hole - how to get there, a
window nearby - mediates the expectation of success. This
paper describes a model of this type of motivated behavior:
a network wherein needs drive events along enabled paths
toward goals. To do this, the network incorporates a short
term memory of the firing sequences and the enablement
states of neurons.
In order to focus on the existing capabilities of Mona, of
which there remains much to explore, a learning capacity
was deferred; learning is thus a major area for investigation.
It also seems possible that the neural model may be cast
into parallel/specialized hardware to improve its perfor-
mance.
The complete C++ source code for Mona and several prob-
lems may be downloaded from http://www.megsinet.net/
portegys/.
Figure 8 - Meandering ant trail
References
[1] Forsyth, R. and Rada, R. (1986) Machine Learning: Applica-
tions in expert systems and information retrieval, Ellis Hor-
wood Limited: Chichester, West Sussex.
[2] Holmes, W. and Rall, W. (1992) Electrotonic Models of Neu-
ronal Dendrites and Single Neuron Computation. Thomas
McKenna, Joel Davis, and Steven F. Zornetzer (Eds.) Single
Neuron Computation, Academic Press: San Diego.
[3] Hopfield, J. and Tank, D. (1986) Computing with neural cir-
cuits: A model. Science, 233, 625-633.
[4] McClelland, D. (1987) Human Motivation, Cambridge Uni-
versity Press: Cambridge.
[5] Munakata, T. (1998) Fundamentals of the new artificial intel-
ligence: beyond traditional paradigms, Springer-Verlag Inc.:
New York.
[6] Portegys, T. (1986) GIL - an experiment in goal-directed
inductive learning. Ph.D. dissertation, Northwestern Univer-
sity, 109 pp. (available from UMI at http://www.umi.com/).
[7] Roy, A. (1997) Panel Discussion at ICNN97 on Connection-
ist Learning, Levine, D. (Ed.) Neural Networks, Vol. II, No.
2.
[8] Thompson, R., Berger, T., and Berry S. (1980) Brain Anat-
omy and Function. Wittrock, M. (Ed.) The Brain and Psy-
chology, Academic Press.
Appendix 1 - C++ pseudo-code
// Detect firing of event
Mediator::eventFiring(eventNumber)
{
if (eventNumber != expectedEvent) return;
if (eventNumber == finalEvent)
{
// Mediator firing - notify super-mediators.
firing = TRUE;
for (notify = eventNotify.first(); notify != NULL;
notify = eventNotify.next())
{
mediator = notify->mediator;
event = notify->eventNumber;
mediator->eventFiring(event);
}
expectedEvent = 0; // Reset event counter.
return;
}
// Expect next event.
expectedEvent++;
eventTimer = maxEventDelay;
// If enabled, propagate enablement.
if (enabled == TRUE)
{
neuron = components[expectedEvent];
neuron->enabled = eventEnablement;
}
}
// Neuron drive
Neuron::drive(need)
{
if (need <= currentNeed) return;
currentNeed = need;
if ((need -= ATTENUATION) <= 0) return;
// If enabled mediator, drive expected event.
if (type == MEDIATOR && enabled == TRUE)
{
neuron = components[expectedEvent];
neuron->drive(need);
return;
}
// Drive super-mediators.
for (notify = eventNotify.first(); notify != NULL;
notify = eventNotify.next())
{
mediator = notify->mediator;
event = notify->eventNumber;
mediator->eventDrive(event, need);
}
}
// Mediator event drive
Mediator::eventDrive(eventNumber, need)
{
if (eventNumber != finalEvent) return;
if ((need -= ATTENUATION) <= 0) return;
if (enabled == FALSE)
{
// Drive super-mediators to enable this.
for (notify = eventNotify.first(); notify != NULL;
notify = eventNotify.next())
{
mediator = notify->mediator;
event = notify->eventNumber;
mediator->eventDrive(event, need);
}
} else { // Drive expected event.
if (expectedEvent != finalEvent)
{
neuron = components[expectedEvent];
neuron->drive(need);
}
}
}