Nature Human Behaviour | Volume 7 | March 2023 | 442–463
Article
https://doi.org/10.1038/s41562-023-01543-7
Human generalization of internal
representations through prototype learning
with goal-directed attention
Warren Woodrich Pettine1, Dhruva Venkita Raman2, A. David Redish3,4 & John D. Murray1,4
The world is overabundant with feature-rich information obscuring the latent causes of experience. How do people approximate the complexities of the external world with simplified internal representations that generalize to novel examples or situations? Theories suggest that internal representations could be determined by decision boundaries that discriminate between alternatives, or by distance measurements against prototypes and individual exemplars. Each provides advantages and drawbacks for generalization. We therefore developed theoretical models that leverage both discriminative and distance components to form internal representations via action-reward feedback. We then developed three latent-state learning tasks to test how humans use goal-oriented discriminative attention and prototype/exemplar representations. The majority of participants attended to both goal-relevant discriminative features and the covariance of features within a prototype. A minority of participants relied only on the discriminative feature. Behaviour of all participants could be captured by parameterizing a model combining prototype representations with goal-oriented discriminative attention.
The high-dimensional sensory environment we experience is structured by underlying latent states1,2. Internal representations of these latent states must generalize to new observations or situations. For example, people must not only recognize nutritious and poisonous fruits, but must also generalize to all cases of discriminating between them. Previous models of latent-state learning focused on conditions where latent states were defined by the underlying reward probability, with environmental features being irrelevant2–6 (Supplementary Table 1). However, causal latent states in the world are often signalled by substantially fewer features than are available in the vast space of feature dimensions we experience (for example, a poisonous fruit is defined by its colour and shape, but the position of the sun is irrelevant). Moreover, latent states can exist in recursive hierarchical relationships (for example, the forest is a place where many poisonous fruits can grow, and a poisonous fruit grows in the forest). How do people use experiences caused by latent states to learn generalizable internal representations? Furthermore, what role does goal-directed attention play in generalizing internal representations to new individual latent-state examples (observations) or new latent-state contexts (situations)? (See the Glossary of terms in Supplementary Table 2 (refs. 7–19)).
The field of category learning has proposed several models of how features can be organized into internal representations of latent states20 (Supplementary Table 1). With ‘prototype’ or ‘exemplar’ models, new observations are categorized by their distance from either an idealized internal-state prototype21,22, or individual past state examples (exemplars)23–26. Both of these models assign a new observation’s internal-state membership to the state with the shortest distance. Although a prototype model may use the covariance
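To make the distance rule described above concrete, the following is a minimal sketch (our illustration, not code from the article), assuming Euclidean distance in feature space and a prototype defined as the mean of a state's stored exemplars; the function names and the toy fruit data are hypothetical.

import numpy as np

def prototype_assign(observation, exemplars_by_state):
    # Prototype model: compare the observation to each state's feature mean.
    prototypes = {s: np.mean(ex, axis=0) for s, ex in exemplars_by_state.items()}
    return min(prototypes, key=lambda s: np.linalg.norm(observation - prototypes[s]))

def exemplar_assign(observation, exemplars_by_state):
    # Exemplar model: compare the observation to every stored example individually.
    nearest = {s: min(np.linalg.norm(observation - e) for e in ex)
               for s, ex in exemplars_by_state.items()}
    return min(nearest, key=nearest.get)

# Hypothetical two-state example with two feature dimensions (e.g. colour, shape).
states = {'nutritious': np.array([[1.0, 0.9], [0.8, 1.1]]),
          'poisonous': np.array([[-1.0, -0.8], [-0.9, -1.2]])}
new_obs = np.array([0.7, 1.0])
print(prototype_assign(new_obs, states))  # 'nutritious'
print(exemplar_assign(new_obs, states))   # 'nutritious'

Both functions assign the observation to the state with the shortest distance, as in the text; the two models differ only in what that distance is measured against.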
Received: 5 December 2021
Accepted: 31 January 2023
Published online: 9 March 2023
1Department of Psychiatry, Yale School of Medicine, New Haven, CT, USA. 2Department of Informatics, University of Sussex, Brighton, UK. 3Department of
Neuroscience, University of Minnesota, Minneapolis, MN, USA. 4These authors jointly supervised this work: A. David Redish and John D. Murray.
e-mail: john.murray@yale.edu