Journal of Experimental Psychology:
Learning, Memory, and Cognition
Visual Memory Benefits From Prolonged Encoding Time
Regardless of Stimulus Type
Xinyu Li, Zijun Xiong, Jan Theeuwes, and Benchi Wang
Online First Publication, May 21, 2020. http://dx.doi.org/10.1037/xlm0000847
CITATION
Li, X., Xiong, Z., Theeuwes, J., & Wang, B. (2020, May 21). Visual Memory Benefits From Prolonged
Encoding Time Regardless of Stimulus Type. Journal of Experimental Psychology: Learning,
Memory, and Cognition. Advance online publication. http://dx.doi.org/10.1037/xlm0000847
Visual Memory Benefits From Prolonged Encoding Time Regardless of
Stimulus Type
Xinyu Li and Zijun Xiong
Zhejiang Normal University
Jan Theeuwes
Zhejiang Normal University and Free University, Amsterdam,
the Netherlands
Benchi Wang
South China Normal University and Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, Guangzhou
It is generally assumed that the storage capacity of visual working memory (VWM) is limited, holding
about 3–4 items. Recent work with real-world objects, however, has challenged this view by providing
evidence that the VWM capacity for real-world objects is not fixed but instead increases with prolonged
encoding time (Brady, Störmer, & Alvarez, 2016). Critically, in this study, no increase with prolonged
encoding time was observed for storing simple colors. Brady et al. (2016) argued that the larger capacity
for real-world objects relative to colors is due to the additional conceptual information of real-world
objects. With basically the same methods of Brady et al., in Experiments 1–3, we were unable to replicate
their basic findings. Instead, we found that visual memory for simple colors also benefited from
prolonged encoding time. Experiment 4 showed that the scale of the encoding time benefit was the same
for familiar and unfamiliar objects, suggesting that the added conceptual information does not contribute
to this benefit. We conclude that visual memory benefits from prolonged encoding time regardless of
stimulus type.
Keywords: visual working memory, long-term memory, encoding time benefits, real-world objects
Human visual memory systems, especially visual working mem-
ory (VWM) and visual long-term memory (VLTM), are funda-
mental to human cognition. Their capacities are of great impor-
tance as they are strongly related to overall cognitive ability. The
standard view is that the storage capacity of VLTM is large and is
assumed to hold thousands of objects with numerous
details (Brady, Konkle, Alvarez, & Oliva, 2008; Konkle, Brady,
Alvarez, & Oliva, 2010), while the storage capacity of VWM is
limited (e.g., Cowan, 2001) and is assumed to hold about three to
four items after hundreds of milliseconds of presentation time
(Bays, Gorgoraptis, Wee, Marshall, & Husain, 2011; Luck &
Vogel, 1997). Recent work with real-world objects, however, has
challenged the latter view by providing evidence that the VWM
capacity for real-world objects is not fixed but instead increases
with prolonged encoding time (Brady, Störmer, & Alvarez, 2016).
Brady et al. (2016) showed that people were capable of memorizing more real-world objects when encoding time was prolonged,
while no such benefit was found for encoding simple stimuli like
colors. They argued that compared to simple stimuli, real-world
objects have additional conceptual information, which might be
related to this encoding time benefit in VWM. However, it is also
possible that this benefit is solely due to the involvement of the VLTM
system, which has a very large capacity and is assumed to play an
important role for encoding real-world objects (Brady, Konkle,
Oliva, & Alvarez, 2009).
To examine the involvement of VLTM, Brady et al. (2016)
employed electrophysiological recordings and measured the con-
tralateral delay activity (CDA). Because the CDA amplitude in-
creases with the number of stored objects and correlates with
individual memory capacity (Vogel & Machizawa, 2004), it is
generally believed that the CDA provides a neural signature of
active storage in VWM. Critically, it was shown that the CDA
disappears when the stored information had been entered into
VLTM (Carlisle, Arita, Pardo, & Woodman, 2011).
Xinyu Li and Zijun Xiong, Department of Psychology, Zhejiang Normal University; Jan Theeuwes, Department of Psychology, Zhejiang Normal University, and Department of Experimental and Applied Psychology, Free University, Amsterdam, the Netherlands; Benchi Wang, Institute for Brain Research and Rehabilitation, Center for Studies of Psychological Application, Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, and Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education, Guangzhou.
Xinyu Li and Zijun Xiong contributed equally to the current research. Xinyu Li, Zijun Xiong, and Benchi Wang designed the experiment. Zijun Xiong collected and analyzed the data. All authors wrote the article together and approved the final version of the manuscript for submission.
This research was supported by the Natural Science Foundation of Zhejiang Province (LY18C090007) to Xinyu Li, and by a China Scholarship Council (CSC) scholarship (201508330313) and the Guangdong Regional Joint Foundation (2019A1515110581) to Benchi Wang.
Correspondence concerning this article should be addressed to Benchi Wang, Institute for Brain Research and Rehabilitation, South China Normal University, Zhongshan Road West 55, Guangzhou 510000, China. E-mail: wangbenchi.swift@gmail.com
Brady et al. (2016) showed an increased CDA amplitude for real-world objects when observers had to memorize five instead of three real-world objects, while no such effect was found when storing five instead of
three simple colors. These findings indicate that more than three
real-world objects could be stored in VWM, while the VWM
capacity of simple colors was limited to three.
Even though Brady et al. (2016) argued that real-world objects
are stored in an active VWM system, they did not rule out the
possibility that these objects are also stored in VLTM. That is,
participants might employ two different memory systems at the
same time to store more items in visual memory. For example,
Brady et al. (2016) make the claim that “the current data do not
rule out the idea that real-world objects also lead to better episodic
long-term memory representations than simple stimuli do (in ad-
dition to being better represented in active working memory sys-
tems)” (p. 7462). In addition, it should be noted that Quirk and
Vogel (2017)¹ failed to replicate the critical CDA results of Brady
et al. (with more participants and more trials to increase statistical
power). They showed no reliable differences between three and
five real-world objects (with the same stimuli set employed in
Brady et al., 2016). Also, they failed to replicate the critical
behavioral findings but instead showed that with prolonged encod-
ing time, memory performance was improved for both simple
colors and real-world objects. These failures in replicating the
basic effects make the argument regarding more capacity for
real-world objects in VWM less convincing.
Brady et al. (2016) argued that the benefits from prolonged
encoding time are due to the fact that compared to simple colors,
real-world objects contain additional conceptual information. Yet,
compared to simple colors, real-world objects do not only have
additional conceptual information, but they are also perceived as
being more complex. That is, their perceptual complexity might
play a role, especially in discrimination tasks. For simple colors,
participants make a judgment based on only one dimension, while
for real-world objects, they could complete the comparison based
on multiple dimensions (e.g., shape, color, texture, material). Al-
though previous studies have shown that perceptual complexity
per se only leads to impoverished memory performance (Alvarez
& Cavanagh, 2004; Awh, Barton, & Vogel, 2007), this may not be
the case when encoding time is extended. That is, with more
perceptual complexity, observers might exploit multiple dimen-
sions as cues to encode and retrieve real-world objects, resulting in an encoding time benefit.
The goal of the present study was to investigate whether the
encoding time benefit in visual memory depends on the stimulus
type employed. As alluded to above, compared to simple colors,
real-world objects have additional conceptual information and
additional perceptual complexity that both may contribute to im-
proving memory performance. Therefore, it is important to exam-
ine whether perceptual complexity and/or conceptual information
of the to-be-memorized stimuli contributed to these benefits. First,
we set out to replicate basic findings of Brady et al. (2016) with
basically the same paradigm, in which participants were required
to memorize either real-world objects or simple colors that were
presented with different encoding times (0.2 s, 1 s, or 2 s). In
Experiments 2 and 3, we further examined which factors modulate the encoding time benefit in visual memory and replicated the critical color condition of Experiment 1. In Experiment
4, we presented familiar or unfamiliar real-world objects that had
the same perceptual complexity allowing us to investigate the role
of conceptual information in the encoding time benefit while
controlling for perceptual complexity.
Experiment 1
In the present experiment, we replicated the main experiment of
Brady et al. (2016) to examine whether the benefit from prolonged
encoding time only exists for real-world objects or whether it also
exists for simple stimuli (colors).
Method
Participants. Eighteen female participants (mean age: 19.7
years) with normal or corrected-to-normal vision were recruited
from Zhejiang Normal University for monetary compensation.
Sample size was predetermined based on Brady et al. (2016). With
a relatively small sample size (12), the critical p value for the
significant slope of real-world objects was already smaller than
.001. Thus, we chose larger sample sizes in the present and
subsequent experiments and in the meantime ensured that the
conditions could be counterbalanced between subjects. The pro-
cedure complied with a generic protocol approved by the Scientific
and Ethical Review Committee of the Department of Psychology
of Zhejiang Normal University.
Apparatus and stimuli. During the testing, participants were
required to keep their chin on a chinrest positioned at a viewing
distance of 65 cm from a 21-in. color monitor in a dimly lit
laboratory. Stimulus presentation and response collection were
controlled by custom scripts written in Python.
Colored squares and real-world objects were chosen as experimental materials, which were presented against a white background (122 cd/m²). The colors of the squares (subtended by 1°) were randomly selected from nine equal-brightness colors, evenly distributed along a circle in the CIE L*a*b* (CIELAB) color space (centered at L* = 70, a* = 5, b* = 0, with a radius of 40). Each image of a real-world object (subtended by 2° × 2°) was downloaded from the online image database (https://bradylab.ucsd.edu/stimuli.html) created by Brady and colleagues. The stimuli could appear at six locations, evenly distributed along the circumference of an invisible circle centered at the display center with a radius of 4°.
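Because stimulus presentation was controlled with custom Python scripts, a minimal sketch of how the nine equal-brightness colors described above could be sampled is given below. It is not the authors' code: the colormath package and the 8-bit clamping are illustrative assumptions; only the CIELAB circle parameters (L* = 70, a* = 5, b* = 0, radius 40) come from the text.

```python
# Minimal sketch (not the authors' code): sample nine equally spaced,
# equal-luminance colors on a circle in CIELAB space, as described above
# (center L* = 70, a* = 5, b* = 0, radius = 40), and convert them to sRGB.
import math
from colormath.color_objects import LabColor, sRGBColor
from colormath.color_conversions import convert_color

CENTER_L, CENTER_A, CENTER_B, RADIUS = 70.0, 5.0, 0.0, 40.0

def lab_color_at(angle_deg):
    """Return the CIELAB coordinates at a given angle on the stimulus circle."""
    theta = math.radians(angle_deg)
    return LabColor(CENTER_L,
                    CENTER_A + RADIUS * math.cos(theta),
                    CENTER_B + RADIUS * math.sin(theta))

def to_rgb255(lab):
    """Convert a CIELAB color to an 8-bit sRGB triple for display."""
    rgb = convert_color(lab, sRGBColor)
    return tuple(int(round(255 * min(max(c, 0.0), 1.0)))
                 for c in (rgb.rgb_r, rgb.rgb_g, rgb.rgb_b))

# Nine equal-brightness colors, evenly spaced (every 40 deg) around the circle.
palette = [to_rgb255(lab_color_at(i * 360 / 9)) for i in range(9)]
print(palette)
```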
Procedure and design. The procedure was exactly the same
as in Experiment 1 of Brady et al. (2016). Participants were
required to memorize a set of either real-world objects or simple
colors while performing a simultaneous verbal task (rehearsing
two digits) to ensure visual memory, rather than verbal memory,
was activated (see Figure 1). First, two digits were presented for 1
s, followed by a 1-s interval. Then, six to-be-memorized materials
(i.e., color squares or real-world objects) were presented for 0.2 s,
1 s, or 2 s. After another 0.8-s interval, a retro-cue (i.e., a thick dot)
was presented for 0.5 s to indicate the to-be-probed item. In the
probe display, two items were presented slightly above or below
the probed item location, and participants were required to indicate
which one was shown in the sample display by pressing the "up arrow" or "down arrow" key (i.e., made a two-alternative forced-choice judgment).
¹ This is an unpublished study that was presented as a poster at the Vision Science Society meeting, May 2017.
Finally, they were required to input the sum of the two digits shown at the beginning by pressing the corresponding number key on the keyboard.
When color was tested, we chose a novel probe color, a color
that was categorically distinct from the original color (defined as
the distance on the CIELAB color space between these two colors
of 180°; see Figure 2, left panel, for an example) and did not
appear on the original display. For real-world objects, two condi-
tions were tested: (a) An object was tested against another cate-
gorically distinct object (e.g., an apple you had seen vs. a back-
pack; see Figure 2, middle panel, for an example), and (b) an
object was tested against another exemplar from the same category
(e.g., a bread vs. another bread; see Figure 2, right panel, for an
example). Thus, a 3 (encoding time: 0.2 s, 1 s, and 2 s) × 3 (probe type: color, objects, and objects with detail) within-subject design
was adopted. For each encoding time and each probe type, partic-
ipants were tested in different blocks, with each including 7
practice trials and 33 testing trials. The order of nine blocks was
counterbalanced across participants.
Analysis. Memory capacity (K) was calculated for N to-be-memorized items according to the method adopted in Brady et al. (2016). It was assumed that participants correctly answered the two-alternative forced-choice test on a proportion p (percent correct) of trials. They would definitely remember the tested item in K/N of trials and would correctly guess the answer in 50% of the remaining (N − K)/N trials (chance = 50%) when they could not remember the tested item. Thus, by simplifying the equation for percent correct, p = K/N + [(N − K)/N] × 0.5, the formula for capacity is as follows: K = N(2p − 1).
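As a concrete illustration of the capacity estimate just described, the short Python sketch below computes K from two-alternative forced-choice accuracy; the function name and example numbers are illustrative, not the authors' analysis code.

```python
# Minimal sketch of the capacity estimate described above: with N memoranda
# and 2AFC accuracy p, assume K/N trials are remembered and the remaining
# (N - K)/N trials are guessed at 50%, so p = K/N + ((N - K)/N) * 0.5,
# which rearranges to K = N * (2p - 1). Names and numbers are illustrative.

def capacity_k(percent_correct: float, set_size: int = 6) -> float:
    """Estimate memory capacity K from 2AFC percent correct."""
    return set_size * (2 * percent_correct - 1)

# Example: 33 test trials with 26 correct responses at set size 6.
p = 26 / 33
print(round(capacity_k(p, set_size=6), 2))  # ~3.45 items
```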
Results
There were no significant effects on the mean accuracies of the digits report task in any of the experiments. The results of this task for all experiments are provided in the Appendix.
The mean VWM capacities for each probe type as a function of
encoding time are shown in Figure 2. A repeated-measures anal-
ysis of variance (ANOVA) on the mean capacities with variables
probe type (color, objects, and objects with detail) and encoding
time (0.2 s, 1 s, and 2 s) showed that there were significant main
effects for encoding time, F(2, 34) = 30.96, p < .001, partial η² = .65, and probe type, F(2, 34) = 6.76, p = .003, partial η² = .29. Also, a significant interaction was observed, F(4, 68) = 3.96, p = .006, partial η² = .19. Planned follow-up comparisons revealed that the mean VWM capacities increased as the encoding time was prolonged for colors, F(2, 34) = 24.66, p < .001, partial η² = .59; for objects, F(2, 34) = 25.42, p < .001, partial η² = .60; and for objects with detail, F(2, 34) = 4.55, p = .018, partial η² = .21.
We also used the linear slope to quantify the encoding time benefit. Subsequent planned comparisons showed that the slope for colors (0.95) and that for objects (1.05) were not statistically different, t(17) = 0.57, p = .565, d = 0.16, but the slope for objects with detail (0.45) was statistically smaller than that for colors, t(17) = 3.56, p = .002, d = 0.81, and that for objects, t(17) = 3.0, p = .008, d = 0.89.
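The linear-slope measure used here can be computed per participant as the least-squares slope of estimated capacity on encoding time. The sketch below is an assumed illustration of that computation (numpy and the example capacities are illustrative, not the authors' analysis code).

```python
# Minimal sketch (assumed analysis, not the authors' code): quantify the
# encoding time benefit as the per-participant least-squares slope of
# estimated capacity (K) regressed on encoding time (in seconds).
import numpy as np

encoding_times = np.array([0.2, 1.0, 2.0])  # seconds

def encoding_time_slope(capacities):
    """Return the slope (items per second) of K as a function of encoding time."""
    slope, _intercept = np.polyfit(encoding_times, np.asarray(capacities), deg=1)
    return slope

# Example: one participant's hypothetical capacity estimates in the color condition.
print(round(encoding_time_slope([1.8, 2.6, 3.5]), 2))  # ~0.94 items/s
```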
Discussion
Our findings regarding real-world objects and objects with
detail perfectly replicate the findings of Experiment 1 in Brady et
al. (2016). However, while Brady et al. did not observe the
encoding time benefit for simple colors, our results unequivocally
show such an effect (see also Quirk & Vogel, 2017). In fact, there
was no reliable difference in encoding time benefit between real-
world objects and simple colors, suggesting visual memory bene-
fits from prolonged encoding time regardless of stimulus type.
To ensure that the current failure to replicate is not accidental,
we sought to replicate the results in Experiments 2 and 3. In
addition, we observed another interesting finding: the encoding time benefit was reduced in the condition of objects that contained details.
Figure 1. The procedure adopted in the present study. See the online article for the color version of this figure.
Figure 2. Experiment 1: Estimated capacity as a function of encoding time for colors, objects, and objects with details. Error bars denote within-subjects 95% confidence intervals. See the online article for the color version of this figure.
One might argue that this reduction is due to the fact that
more details need to be encoded into visual memory. That is, due
to the prolonged encoding time, in each condition, a high-fidelity
memory representation has been formed, yet in the condition of
objects that contained details, the memory representation is of a
lower fidelity than in the condition without details, because with
details, more has to be encoded. However, it should be noted that
the details of objects for encoding were the same in both condi-
tions. The only difference between the two conditions is that a
higher fidelity memory representation was required to make a
judgment during retrieval for the objects with detail condition than
the objects condition. Thus, we argue that the scale of the original
encoding time benefit is the same between those two conditions,
but the benefit is reduced when the task requires a high-fidelity
representation during retrieval in the condition of objects that
contain details. In Experiments 2 and 3, we examined whether the
task requirements during retrieval play a role in the reduction in
encoding time benefit.
Experiment 2
For replication purposes, in one condition, we adopted the same
difference between two color probes (i.e., 180° difference in CIE-
LAB color space) as used in Experiment 1; in the other condition,
we reduced the difference between two color probes (i.e., 20°),
forcing participants to use a high-fidelity representation to make a
judgment in the discrimination task. Yet, we used the same color
set as in the “replication” condition to ensure that participants
encoded the same colors. Therefore, in this experiment, what
needed to be encoded was the same; the only difference was that
in one condition, the task required only a relatively low fidelity,
while in the other condition, a high fidelity was required.
Method
Eighteen new participants (two males; mean age: 20.4 years)
participated in Experiment 2. The task procedure was identical to
that of Experiment 1, except that participants were only required to memorize a set of simple colors with different probe differences (180° vs. 20°). Probe difference was defined as the angular distance on the CIELAB color wheel between the two probe colors (180° or 20°). Thus, a 3 (encoding time: 0.2 s, 1 s, and 2 s) × 2 (probe difference: 180° vs. 20°) within-subject design was adopted.
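To make the probe-difference manipulation concrete, the short sketch below (illustrative, not the authors' code) derives the angle of a probe foil that sits a fixed angular distance from the studied color on the CIELAB stimulus circle.

```python
# Minimal sketch (illustrative, not the authors' code): given the angle of a
# studied color on the CIELAB stimulus circle, compute the angle of a probe
# foil rotated by a fixed probe difference (180, 60, or 20 deg) around the
# same circle, as in the probe-difference manipulation described above.

def probe_foil_angle(original_angle_deg: float, probe_difference_deg: float) -> float:
    """Angle (deg) of the foil color, a fixed angular distance from the original."""
    return (original_angle_deg + probe_difference_deg) % 360

# Example: a color studied at 80 deg on the circle.
print(probe_foil_angle(80, 180))  # 260 -> easy, categorically distinct foil
print(probe_foil_angle(80, 20))   # 100 -> hard foil requiring a high-fidelity representation
```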
Results
The mean VWM capacities for each probe difference as a
function of encoding time are shown in Figure 3A. A repeated-
measures ANOVA on the mean VWM capacities with variables
probe difference (180° vs. 20°) and encoding time (0.2 s, 1 s, and
2 s) showed that there were significant main effects for encoding time, F(2, 34) = 16.1, p < .001, partial η² = .49, and probe difference, F(1, 17) = 247.91, p < .001, partial η² = .94. Also, there was a significant interaction, F(2, 34) = 9.45, p < .001, partial η² = .36. Planned follow-up comparisons revealed that for the 180° probe difference, the mean VWM capacities increased as the encoding time was prolonged (slope: 0.95), F(2, 34) = 25.37, p < .001, partial η² = .69; however, for the 20° probe difference, there were no such benefits from prolonged encoding time (slope: 0.28), F(2, 34) = 1.62, p = .212, partial η² = .08.
Discussion
With large probe difference (180°), we replicated the findings
from our Experiment 1, showing that the mean VWM capacities
for simple colors increased as the encoding time was prolonged.
However, with small probe difference (20°), such benefits were no
longer found, suggesting that the encoding time benefit is also compromised for simple colors when a higher fidelity of the memory representation is required in the discrimination task.
Experiment 3
To further confirm our previous findings, in Experiment 3, we
added a medium probe difference (60°) to Experiment 2. If the
previous findings were really due to the task requirement for representation fidelity, we expected the following.
Figure 3. Estimated capacity as a function of encoding time for different probe differences in Experiment 2 (A) and in Experiment 3 (B). Error bars denote within-subjects 95% confidence intervals. See the online article for the color version of this figure.
The scale of the encoding time benefit in this medium probe condition (60°) should fall in between that in the 20° and 180° conditions.
Method
Eighteen new participants (one male; mean age: 20.2 years) took
part in the experiment. The procedure was identical to that of
Experiment 2, except for adding a medium probe difference (60°).
Thus, a 3 (encoding time: 0.2 s, 1 s, and 2 s) × 3 (probe difference: 180°, 60°, and 20°) within-subject design was adopted.
Results
The mean VWM capacities for each probe difference as a
function of encoding time are shown in Figure 3B. A repeated-
measures ANOVA on the mean VWM capacities with variables
probe difference (180°, 60°, and 20°) and encoding time (0.2 s, 1
s, and 2 s) showed that there were significant main effects for
encoding time, F(2, 34) = 12.47, p < .001, partial η² = .42, and probe difference, F(2, 34) = 87.91, p < .001, partial η² = .84. The interaction was marginally significant, F(4, 68) = 2.27, p = .071, partial η² = .12. Planned follow-up comparisons revealed that, for the 180° and 60° probe differences, the mean VWM capacities increased as the encoding time was prolonged, with slopes of 0.86, F(2, 34) = 19.07, p < .001, partial η² = .53, and 0.64, F(2, 34) = 5.45, p = .009, partial η² = .24, respectively. Again, however, for the 20° probe difference, there was no such benefit from prolonged encoding time (slope: 0.16), F(2, 34) = 0.45, p = .644, partial η² = .03.
Discussion
Not surprisingly, we again replicated the critical findings
from our Experiment 1, showing that the mean VWM capacities
for simple colors increased as the encoding time was prolonged
with large probe difference (180°). With three independent repli-
cations, we provide compelling evidence that memory perfor-
mance for simple colors is also improved when the encoding time
is extended. That is, visual memory benefits from prolonged
encoding time regardless of stimulus type.
Consistent with Experiment 2, we found that when adopting the
same color sets, the encoding time benefit was eliminated when the
difference between two probes was substantially reduced from
180° on the color wheel to 20°. Once the probe difference in-
creased to 60°, the size of the encoding time benefit recovered and
fell in between the 20° and 180° conditions. This suggests that the scale of the encoding time benefit depends on the fidelity of the memory representation required by the task.
Experiment 4
As mentioned above, compared to simple colors, real-world
objects not only have additional conceptual information but are
also perceptually more complex. In previous experiments, we
found the same effects for simple colors as for real-world objects,
indicating that neither the conceptual information nor the percep-
tual complexity is linked to the encoding time benefit. However,
one might still argue that there are different mechanisms underly-
ing the benefits for real-world objects and for simple colors. As
outlined earlier, the involvement of VLTM is one factor that may
play a crucial role in driving the encoding time benefit. Previous
studies have shown that real-world objects with different concep-
tual information barely interfered with each other in VLTM, sug-
gesting that conceptual information is one important factor to help
memorize real-world objects into VLTM (Konkle et al., 2010).
Thus, it is quite feasible that the conceptual information associated
with real-world objects still plays a critical role in obtaining the
benefits from prolonged encoding time.
The current experiment examined this possibility. In this study,
participants had to memorize real-world objects that were either
familiar (generating additional conceptual information) or unfa-
miliar (not generating additional conceptual information), with
more or less the same perceptual complexity. To determine which
objects were familiar or unfamiliar, we first conducted Experiment
4a, in which participants had to indicate whether they recognized
the object that was shown and to indicate on a 5-point scale how
confident they were in their answer. On the basis of these data, we
selected real-world objects that were indicated as being very
familiar and real-world objects that were considered very unfamil-
iar. In Experiment 4b, we used these stimuli to create two condi-
tions: Participants had to memorize familiar real-world objects in
one condition and unfamiliar real-world objects in the other con-
dition. We used the same procedure as in Experiment 1. If the
encoding time benefit depends on stimulus familiarity (which is
assumed to generate additional conceptual information), we expect
to see an increase in memory performance for familiar objects but
not for unfamiliar objects when the encoding time is prolonged. If
this effect is not found, we have to conclude that the encoding time
benefit has nothing to do with stimulus familiarity.
Experiment 4a
Method. Twelve new participants (two males; mean age: 20.4
years) took part in the experiment. A new set of 210 images of real-world objects (subtended by 2° × 2°) was selected from the same database as in Experiment 1. In each trial, an image of a real-world object was presented for 5 s, and participants had to indicate whether or not they could recognize what it was. If yes, they pressed the "left arrow" key; otherwise, they pressed the "right arrow" key on the keyboard. Following this response, participants indicated their confidence in their answer on a 1–5 scale (1 = very sure, 5 = very unsure). There was no time limit for these responses.
Results. As illustrated in Figure 4A, we randomly selected 15 familiar objects from the pool of objects with which all participants were completely familiar (a "yes" answer with "very sure" confidence) and 15 unfamiliar objects from the pool of objects with which all participants were completely unfamiliar (a "no" answer with "very sure" confidence).
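For illustration, the Python sketch below shows one way the selection rule just described could be implemented; the table layout, column names, and toy data are assumptions rather than the authors' materials.

```python
# Minimal sketch (assumed, not the authors' code) of the Experiment 4a selection
# rule: an object counts as fully familiar if every participant recognized it
# with the highest confidence, and fully unfamiliar if no participant recognized
# it, again with the highest confidence. Column names and data are illustrative.
import pandas as pd

ratings = pd.DataFrame({
    "object":      ["obj01", "obj01", "obj02", "obj02"],
    "participant": [1, 2, 1, 2],
    "recognized":  [True, True, False, False],
    "confidence":  [1, 1, 1, 1],   # 1 = very sure ... 5 = very unsure
})

grouped = ratings.groupby("object")
familiar_pool   = [o for o, g in grouped if (g["recognized"] & (g["confidence"] == 1)).all()]
unfamiliar_pool = [o for o, g in grouped if (~g["recognized"] & (g["confidence"] == 1)).all()]
print(familiar_pool, unfamiliar_pool)  # ['obj01'] ['obj02']
# Fifteen objects were then drawn at random from each pool for Experiment 4b.
```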
Experiment 4b
Method. Twenty-four new participants (one male; mean age:
20.1 years) took part in the experiment. The procedure was iden-
tical to that of Experiment 1, except that participants were only
required to memorize real-world objects with different familiarity.
Thus, a 3 (encoding time: 0.2 s, 1 s, and 2 s) × 2 (familiarity: familiar vs. unfamiliar) within-subject design was adopted.
Results. The mean VWM capacities for different familiarities
as a function of encoding time are shown in Figure 4B. A repeated-
measures ANOVA on the mean VWM capacities with variables
familiarity (familiar vs. unfamiliar) and encoding time (0.2 s, 1 s,
and 2 s) showed significant main effects for encoding time, F(2, 46) = 40.36, p < .001, partial η² = .64, and familiarity, F(1, 23) = 10.07, p = .004, partial η² = .31. No interaction was observed, F(2, 46) = 0.28, p = .755, partial η² = .01, Bayes factor [BF] = 0.14. Planned follow-up comparisons revealed that the mean VWM capacities increased as the encoding time was prolonged for both familiar objects, F(2, 34) = 19.07, p < .001, partial η² = .53, and unfamiliar objects, F(2, 34) = 5.45, p = .009, partial η² = .24. Statistically, the slope for familiar objects (0.82) and that for unfamiliar objects (0.86) were also the same, t(23) = 0.23, p = .823, d = 0.06, BF = 0.22.
Discussion
The current results showed that the overall memory perfor-
mance for familiar objects was better than that for unfamiliar
objects, but the encoding time benefit was the same regardless of
whether participants were familiar or unfamiliar with these objects.
This indicates that, similar to what has been shown regarding the
perceptual complexity, the additional conceptual information re-
lated to the real-world objects is not critical for obtaining benefits
from prolonged encoding time.
General Discussion
Recently, Brady et al. (2016) found that the memory perfor-
mance for real-world objects was improved when the encoding
time was prolonged while this effect was not found for simple
stimuli such as colors. On the basis of this, Brady et al. argued that
the richer conceptual information of real-world objects enabled
observers to exploit the extended encoding time to store a larger
number of items in working memory. This important result chal-
lenged the standard view that the storage capacity of VWM is
limited (Adam, Vogel, & Awh, 2017; Awh et al., 2007; Cowan,
2001; Luck & Vogel, 1997; Zhang & Luck, 2008). However,
inconsistent with Brady et al., the current study unequivocally
shows that the encoding time benefit does not depend on the type
of stimuli employed: There were encoding time benefits for both
real-world objects (Experiments 1 and 4) and for simple colors
(Experiments 1, 2, and 3; see also Quirk & Vogel, 2017, for similar
results). We also show that this benefit is not the result of percep-
tual complexity or the presence of conceptual information (Exper-
iment 4). As shown in Experiments 2 and 3, one of the factors that
impacted the encoding time benefit was the task requirement: If a
higher fidelity memory representation was required, the encoding
time benefit was reduced.
Brady et al. (2016) found an increased CDA amplitude for
real-world objects when observers had to memorize five objects
instead of three objects, suggesting that more than three real-world
objects could be stored in VWM. This might indicate that VLTM
does not play a crucial role. However, Quirk and Vogel (2017),
who used more participants and more trials to increase statistical
power, were not able to replicate this effect on the CDA and
reported no reliable difference in CDAs between three and five
real-world objects. Therefore, it is difficult to decide whether or
not real objects are exclusively stored in VWM (as argued by
Brady et al., 2016) and whether VLTM plays a role in the encoding
time benefit. Regardless of this specific discussion, our conclusion
is the same: Visual memory benefits from prolonged encoding
time regardless of stimulus type.
So, one question that needs to be answered is why visual
memory benefits from prolonged encoding time. There are several
possibilities.
Figure 4. (A) Familiar and unfamiliar objects used as memory materials in Experiment 4b were selected based
on the results of Experiment 4a. (B) Upper panel shows estimated capacity as a function of encoding time in
Experiment 4b, and bottom panel shows the estimated slopes for familiar and unfamiliar objects. Error bars
denote within-subjects 95% confidence intervals. See the online article for the color version of this figure.
For example: (a) The items are stored in VWM, but with the involvement of (or the interaction with) VLTM storage, the memory capacity is temporarily increased (see also Endress & Potter, 2014; Hollingworth, 2004, 2005; Shoval, Luria, & Makovski, 2019); (b) the items are not
stored in VWM but instead in VLTM, which basically has a very
large storage space (see also Brady et al., 2008); and (c) the items
are stored in both VWM and VLTM by chunking information from
different memory types, resulting in larger memory capacity (see
also Ngiam, Khaw, Holcombe, & Goodbourn, 2019). Therefore, it
seems that, if anything, the involvement of VLTM might be a
critical factor in driving the benefits from prolonged encoding
time. However, it should be noted that none of the explanations
support the notion that VWM capacity is extended.
It has been argued that relative to simple colors, real-world
objects have additional conceptual information (which is related to
VLTM), which in turn may be the reason for encoding time
benefits of real-world objects (Brady et al., 2016; see also Curby,
Glazek, & Gauthier, 2009; Olsson & Poom, 2005). However,
compared to simple colors, real-world objects not only have ad-
ditional conceptual information but also have additional perceptual
complexity. Here, we found that the encoding time benefit existed
both for simple colors and for real-world objects, which indicates
that neither the conceptual information nor the perceptual com-
plexity is necessarily linked to the encoding time benefit. Criti-
cally, while controlling for the perceptual complexity, our Exper-
iment 4 further showed that the encoding time benefit for
real-world objects did not depend on whether participants had any
conceptual knowledge about real-world objects as the benefits
were the same for familiar and unfamiliar objects. Therefore, it is
unlikely that conceptual information has any potential impact on
obtaining the encoding time benefit.
In Experiment 4, we found that the overall memory performance
for familiar objects was better than that for unfamiliar objects.
There is some controversy in the literature regarding this effect. On
the one hand, using Pokémon figures as memory materials, Xie
and Zhang (2017a, 2017b, 2018) found no increase in memory
capacity due to stimulus familiarity but only found an effect on
memory consolidation. On the other hand, using alphabet letters
as memory materials, Ngiam et al. (2019) reported an increase in
memory capacity due to stimulus familiarity. Ngiam et al. argued
that the difference in familiarity between first-generation (familiar) and recent-generation (unfamiliar) Pokémon is too small to generate an effect on memory capacity. Indeed, Ngiam et al. (2019)
used English letters as familiar stimuli and novel characters as
unfamiliar stimuli, which generates a large difference in familiar-
ity. Under those conditions, a difference in memory capacity was
found. This is consistent with our Experiment 4, in which there
was a large difference between familiar and unfamiliar objects
because the unfamiliar real-world objects we chose were com-
pletely unknown to observers while the familiar ones were well
known to the observers.
In summary, we conclude that visual memory benefits from
prolonged encoding time regardless of stimulus type employed.
Our study shows that neither the presence of conceptual informa-
tion nor the perceptual complexity of the stimuli plays a critical
role in obtaining the encoding time benefit. This implies that
factors other than those related to conceptual information and
perceptual complexity associated with VLTM might be critical in
obtaining the benefits from prolonged encoding time.
References
Adam, K. C. S., Vogel, E. K., & Awh, E. (2017). Clear evidence for item limits in visual working memory. Cognitive Psychology, 97, 79–97. http://dx.doi.org/10.1016/j.cogpsych.2017.07.001
Alvarez, G. A., & Cavanagh, P. (2004). The capacity of visual short-term memory is set both by visual information load and by number of objects. Psychological Science, 15, 106–111. http://dx.doi.org/10.1111/j.0963-7214.2004.01502006.x
Awh, E., Barton, B., & Vogel, E. K. (2007). Visual working memory represents a fixed number of items regardless of complexity. Psychological Science, 18, 622–628. http://dx.doi.org/10.1111/j.1467-9280.2007.01949.x
Bays, P. M., Gorgoraptis, N., Wee, N., Marshall, L., & Husain, M. (2011). Temporal dynamics of encoding, storage, and reallocation of visual working memory. Journal of Vision, 11(10), 6. http://dx.doi.org/10.1167/11.10.6
Brady, T. F., Konkle, T., Alvarez, G. A., & Oliva, A. (2008). Visual long-term memory has a massive storage capacity for object details. Proceedings of the National Academy of Sciences of the United States of America, 105, 14325–14329. http://dx.doi.org/10.1073/pnas.0803390105
Brady, T. F., Konkle, T., Oliva, A., & Alvarez, G. A. (2009). Detecting changes in real-world objects: The relationship between visual long-term memory and change blindness. Communicative & Integrative Biology, 2, 1–3. http://dx.doi.org/10.4161/cib.2.1.7297
Brady, T. F., Störmer, V. S., & Alvarez, G. A. (2016). Working memory is not fixed-capacity: More active storage capacity for real-world objects than for simple stimuli. Proceedings of the National Academy of Sciences of the United States of America, 113, 7459–7464. http://dx.doi.org/10.1073/pnas.1520027113
Carlisle, N. B., Arita, J. T., Pardo, D., & Woodman, G. F. (2011). Attentional templates in visual working memory. Journal of Neuroscience, 31, 9315–9322. http://dx.doi.org/10.1523/JNEUROSCI.1097-11.2011
Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24, 87–114. http://dx.doi.org/10.1017/S0140525X01003922
Curby, K. M., Glazek, K., & Gauthier, I. (2009). A visual short-term memory advantage for objects of expertise. Journal of Experimental Psychology: Human Perception and Performance, 35, 94–107. http://dx.doi.org/10.1037/0096-1523.35.1.94
Endress, A. D., & Potter, M. C. (2014). Large capacity temporary visual memory. Journal of Experimental Psychology: General, 143, 548–565. http://dx.doi.org/10.1037/a0033934
Hollingworth, A. (2004). Constructing visual representations of natural scenes: The roles of short- and long-term visual memory. Journal of Experimental Psychology: Human Perception and Performance, 30, 519–537. http://dx.doi.org/10.1037/0096-1523.30.3.519
Hollingworth, A. (2005). The relationship between online visual representation of a scene and long-term scene memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 396–411. http://dx.doi.org/10.1037/0278-7393.31.3.396
Konkle, T., Brady, T. F., Alvarez, G. A., & Oliva, A. (2010). Conceptual distinctiveness supports detailed visual long-term memory for real-world objects. Journal of Experimental Psychology: General, 139, 558–578. http://dx.doi.org/10.1037/a0019165
Luck, S. J., & Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390, 279–281. http://dx.doi.org/10.1038/36846
Ngiam, W. X., Khaw, K. L., Holcombe, A. O., & Goodbourn, P. T. (2019). Visual working memory for letters varies with familiarity but not complexity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45, 1761–1775. http://dx.doi.org/10.1037/xlm0000682
Olsson, H., & Poom, L. (2005). Visual memory needs categories. Proceedings of the National Academy of Sciences of the United States of America, 102, 8776–8780. http://dx.doi.org/10.1073/pnas.0500810102
Quirk, C., & Vogel, E. (2017). No evidence for an object working memory capacity benefit with extended viewing time. Journal of Vision, 17(10), 112. http://dx.doi.org/10.1167/17.10.112
Shoval, R., Luria, R., & Makovski, T. (2019). Bridging the gap between visual temporary memory and working memory: The role of stimuli distinctiveness. Journal of Experimental Psychology: Learning, Memory, and Cognition. Advance online publication. http://dx.doi.org/10.1037/xlm0000778
Vogel, E. K., & Machizawa, M. G. (2004). Neural activity predicts individual differences in visual working memory capacity. Nature, 428, 748–751. http://dx.doi.org/10.1038/nature02447
Xie, W., & Zhang, W. (2017a). Familiarity increases the number of remembered Pokémon in visual short-term memory. Memory & Cognition, 45, 677–689. http://dx.doi.org/10.3758/s13421-016-0679-7
Xie, W., & Zhang, W. (2017b). Familiarity speeds up visual short-term memory consolidation. Journal of Experimental Psychology: Human Perception and Performance, 43, 1207–1221. http://dx.doi.org/10.1037/xhp0000355
Xie, W., & Zhang, W. (2018). Familiarity speeds up visual short-term memory consolidation: Electrophysiological evidence from contralateral delay activities. Journal of Cognitive Neuroscience, 30, 1–13. http://dx.doi.org/10.1162/jocn_a_01188
Zhang, W., & Luck, S. J. (2008). Discrete fixed-resolution representations in visual working memory. Nature, 453, 233–235. http://dx.doi.org/10.1038/nature06860
Appendix
Analysis for the Digits Report Task
The mean accuracies for digits report task in all experiments are
shown in Table A1. In Experiment 1, a repeated-measures
ANOVA on the mean accuracies with variables probe type (color,
objects, and objects with detail) and encoding time (0.2 s, 1 s, and
2 s) showed that there were no significant main effects for encoding time, F(2, 34) = 2.17, p = .129, partial η² = .11, and probe type, F(2, 34) = 1.86, p = .171, partial η² = .10. Also, no significant interaction was observed, F(4, 68) = 0.13, p = .97, partial η² = .01.
In Experiment 2, a repeated-measures ANOVA on the mean
accuracies of digits report task with variables probe difference
(180° vs. 20°) and encoding time (0.2 s, 1 s, and 2 s) showed that
there were no significant main effects for encoding time, F(2, 34) = 0.33, p = .719, partial η² = .02, and probe difference, F(1, 17) = 1.03, p = .324, partial η² = .06. Also, no significant interaction was observed, F(2, 34) = 1.4, p = .26, partial η² = .08.
In Experiment 3, a repeated-measures ANOVA on the mean
accuracies of digits report task with variables probe difference
(180°, 60°, and 20°) and encoding time (0.2 s, 1 s, and 2 s) showed
that there were no significant main effects for encoding time, F(2, 34) = 0.1, p = .909, partial η² = .01, and probe difference, F(2, 34) = 0.25, p = .781, partial η² = .01. Also, no significant interaction was observed, F(4, 68) = 0.79, p = .536, partial η² = .04.
In Experiment 4b, a repeated-measures ANOVA on the mean
accuracies of digits report task with variables familiarity (familiar
vs. unfamiliar) and encoding time (0.2 s, 1 s, and 2 s) showed that
there were no significant main effects for encoding time, F(2, 46) = 1.07, p = .352, partial η² = .04, and familiarity, F(1, 23) = 0.06, p = .809, partial η² = .01. Also, no interaction was observed, F(2, 46) = 0.62, p = .543, partial η² = .03.
Overall, participants performed the digits report task quite well
in all conditions in all experiments.
Received August 27, 2019
Revision received March 6, 2020
Accepted March 25, 2020
Table A1
Mean Accuracies for Digits Report Task in the Present Study
Variable    0.2 s, M (SD)    1 s, M (SD)    2 s, M (SD)
Experiment 1
Colors .98 (.03) .98 (.03) .99 (.01)
Objects .98 (.04) .97 (.05) .99 (.03)
Objects with detail .97 (.04) .97 (.06) .98 (.03)
Experiment 2
Colors 20° .97 (.03) .97 (.04) .99 (.06)
Colors 180° .99 (.03) .98 (.03) .97 (.06)
Experiment 3
Colors 20° .97 (.05) .95 (.08) .96 (.06)
Colors 60° .96 (.08) .97 (.04) .97 (.05)
Colors 180° .96 (.07) .96 (.07) .96 (.06)
Experiment 4b
Familiar objects .98 (.04) .97 (.04) .98 (.03)
Unfamiliar objects .98 (.03) .97 (.03) .98 (.04)
... en ideas, words and concepts are associated with images (Sims et. al., 2002;Ainsworth & Loizou, 2003;Raiyn, 2016) while Philominaj et. al. (2017) postulated that visual learning increases student's interest and sustains the student's memory for a longer term. More recent studies further reaffirmed that visual representations are better remembered. Li et. al. (2020) concluded that visual memory benefits from prolonged encoding time ;Conci et. al. (2021) proposed that visual memory could be enhanced by using familiar objects that are associated with knowledge stored in longterm memory while Rustamova & Umarova (2022) further demonstrated that visual representations have a significant impact on the b ...
... Another assessment was conducted in the following session after the semester break and the experimental group demonstrated remarkable higher mean (as in Table 4), attesting that the VRM Technique stores information in longterm memory. The results are in line with the previous studies (Mayer, 2001;Bitter & Legacy, 2008;Sadoski & Paivio, 2013;Li et. al., 2020;Rustamova & Umarova, 2022) that the students are capable to store information in their long-term memory when the concept learned is paired with meaningful image which acts as a specific reference point or an impetus. This is because the visual representations help students engage with the content and direct attention and subsequently inc ...
Article
Full-text available
The finance educators nowadays face challenges in providing effective pedagogy to students who are the digital natives: whose life are engaged with visual stimulus and technologies. Linguistic resources and text are no more adequate to accomplish the learning objectives, especially the concept in finance which is difficult to comprehend. Visual Risk Measurement (VRM) is an innovative visual representation in teaching the measurement of risk. It illustrates how standard deviation can be employed to measure level of uncertainty (risk) of the returns on investment. Action research was conducted on 50 students who enrolled in Business Finance course. One class of 26 students was randomly selected as the experimental group while another class of 24 students was assigned as the control group. A formative test was administered to compare the students' learning outcomes, followed by a survey to elicit the experimental group's perceptions towards adopting the VRM Technique and a learning recall assessment after the semester break to compare the students' learning recall (information store). The results indicated that the experimental group performed better and that the VRM technique allured the students' interest and enhanced their learning as they could see how standard deviation works in measuring risk. As for the learning recall assessment, the experimental group demonstrated remarkable higher mean, attesting that the VRM technique stores information in long-term memory. This study contributes to the literature by adding empirical findings on the significance of visual representation in finance education which is relatively scarce. The pedagogy on risk measurement tends to emphasize teaching a formula and intensive practice with performing risk calculations. Students could answer the question correctly but how valid is their understanding pertaining to the risk measurement? Hence, a study was carried out to investigate whether the students who had passed the Business Finance course could explain how risk is measured. Based on the students' performance in their final exam, 57 out of 108 scored full marks for the risk measurement question. However, it was disheartening to note that out of the 108 students, only 5 students could explain how could standard deviation be used to measure risk and only 11 students could still remember the formula to calculate the risk. This indicates that the students did not truly understand what had been learned but merely memorized the formulas in order to achieve good grades at the expense of understanding the rationale of doing so. As a result, students are unable to apply what they had learned and tend to forget it after the exam. These findings highlight that it is no longer possible to assume that the learning could be accomplished solely by linguistic resources and slides of texts to the students. Past literature proposed that pictureable material is generally easier to learn and remember than less pictureable material (Kepes, 1995). Sadoski and Paivio (2013) showed that the visual representation facilitated the acquisition of concepts due to its capability to demonstrate information in ways that are easy to understand the relationships and patterns. One strand of study focused on the proposition of long-term memory via visual learning. Mayer (2001) proposed that information needs to be organized and visualized in order to be moved from the short-term memory to the long-term memory. 
This is supported by subsequent studies which revealed that learners retain information better when ideas, words and concepts are associated with images (Sims et. al., 2002; Ainsworth & Loizou, 2003; Raiyn, 2016) while Philominaj et. al. (2017) postulated that visual learning increases student's interest and sustains the student's memory for a longer term. More recent studies further reaffirmed that visual representations are better remembered. Li et. al. (2020) concluded that visual memory benefits from prolonged encoding time; Conci et. al. (2021) proposed that visual memory could be enhanced by using familiar objects that are associated with knowledge stored in long-term memory while Rustamova & Umarova (2022) further demonstrated that visual representations have a significant impact on the brain as it can capture the image faster and help to retain information for a longer period of time. Another strand of study suggested that visual learning could be the learning enhancer when it is used with other forms of learning like auditory (Riad, 2015). When a concept is explained to students verbally, the effect is less because students could not "see" the concept. With the visual aids, students will understand better and remember what they have learned. It is increasingly important that pedagogy incorporates multi learning styles especially the utilization of visuals in today's massive information age where students are brought up by visuals technologies. Indeed, recent study by Samuel et. al. (2022) showed that visual objects are more effective than auditory objects and in an associative
... Previous studies in visual working memory using real-world objects have shown that working memory for meaningful stimuli in particular can benefit from longer encoding time that can facilitate deeper processing (Brady, Störmer, & Alvarez, 2016;Brady & Störmer, 2022; but see Li et al., 2020 andQuirk et al., 2020 for encoding time benefits for both real-world objects and simple stimuli). To test whether encoding time would modulate incidental working memory of the meaningful identities, we used a much shorter encoding time in Experiment 2. ...
Preprint
Full-text available
Prior research has shown that visual working memory capacity is enhanced for meaningful stimuli (i.e., real-world objects) compared to abstract shapes (i.e., colored circles). Furthermore, a simple feature that is part of a real-world object is better remembered than the same feature presented on an unrecognizable shape, suggesting that meaningful objects can serve as an effective scaffold in memory. Here, we hypothesized that the shape of meaningful objects would be better remembered incidentally than the shape of non-meaningful objects in a color memory task where identity itself is task-irrelevant. We used a surprise-trial paradigm in which participants performed a color memory task for several trials before being probed with a surprise trial that asked them about the shape of the last object they saw. Across three experiments, we found a memory advantage for recognizable shapes relative to scrambled and unrecognizable versions of these shapes (Exp. 1) that was robust across different encoding times (Exp. 2), and the addition of a verbal suppression task (Exp. 3). In contrast, when we asked about the location of objects in a surprise trial, we did not observe any difference between the two stimulus types (Exp. 4). These results show that identifying information about a meaningful object is encoded into working memory despite being task-irrelevant. This privilege for meaningful shape information does not exhibit a trade-off with location memory, suggesting that meaningful identity influences representations of visual working memory in higher-level visual regions without altering the use of spatial reference frames at the lower level.
... This discrepancy may be explained by the use of a short encoding duration (500ms vs. 3,000ms), which likely made it harder for participants to fully utilize the meaning of the stimuli (e.g., identify or label them). Interestingly, this may potentially imply that the effect of meaning on VLTM might be even larger than we report because it was recently found that meaningful items are more likely to benefit from longer and deeper encoding than scrambled objects or simple features Brady et al., 2016; but see Li et al., 2020). ...
Article
Full-text available
Previous research demonstrated a massive capacity of visual long-term memory (VLTM) for meaningful images. However, the capacity and limits of a "pure" VLTM that is independent of conceptual information still need to be determined. In the encoding phase of three experiments, participants viewed hundreds of images depicting real-world objects, along with visually similar images that were stripped of their semantic meaning. VLTM was evaluated using a four-alternative-forced-choice test including old and new images and their counterpart mirror transformations. The results revealed superior memory for meaningful than for meaningless stimuli and importantly, there was no hint of a massive VLTM for the meaningless items. Furthermore, when examining memory recognition of visual properties per-se (i.e., original/mirror state), memory was overall poor, and practically negligible for the meaningless items. Taken together, our findings suggest that meaning is critical for massive VLTM and for the ability to store visual properties.
... Parametric manipulations of the rate of presentation of memoranda during encoding can be informative for assessing the time course over which underlying memory representations are formed. Indeed, visual short- and long-term memories benefit from prolonged presentation of memoranda at encoding (e.g., Brady et al., 2016; Li et al., 2020), and shorter presentation times result in poorer recognition (Hirshman & Hostetter, 2000; Potter & Levy, 1969; Shepherd et al., 1991) and recall (McDermott & Watson, 2001; Roberts, 1972; Waugh, 1967) performance. However, most inferences about the effects of presentation time (i.e., short vs. long) are made at the level of task performance, in terms of accuracy deficits in recall or recognition under speeded presentation rates. ...
Article
Assessing the time course over which underlying memory representations are formed is an important question for understanding memory. Several studies assessing item memory have shown that gist representations of items are laid down more rapidly than verbatim representations. However, for associations among items/components, which form the core of episodic memory, it is unclear whether gist representations form more quickly than, or at least in parallel with, verbatim representations, as fuzzy-trace theory predicts, or whether gist is extracted more slowly by inferring the meaning of verbatim representations, as in gist macroprocessor theories. To test these contrasting possibilities, we used a novel associative recognition task in which participants studied face-scene pairs for .75, 1.5, or 4 seconds each, and were later tested on their ability to discriminate intact pairs from foils that varied in how similar they were to the originally studied pairs. Across two experiments, we found that verbatim memory for associations, measured using a multinomial-processing-tree model, improved from .75 to 1.5 to 4 seconds of presentation time. Paralleling these effects of encoding time on verbatim memory, gist memory improved from .75 to 1.5 seconds in both Experiments 1 and 2, while improvements from 1.5 to 4 seconds were only evident when the retention interval between study and test was increased (Experiment 2). These results provide strong support for the parallel processing framework of fuzzy-trace theory over the slow gist extraction framework of an alternative gist macroprocessor theory. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
... In most studies of real-world memory, it is difficult to precisely control the amount of time that people spend viewing events. People may spontaneously spend more time viewing or interacting with stimuli that they find more interesting, and these differences in encoding time are known to improve memory (Li et al., 2020). Here we used virtual reality (VR) to examine how people's natural viewing time during ecologically valid experiences impacts their memories. ...
Article
A variety of evidence demonstrates that memory is a reconstructive process prone to errors and distortions. However, the complex relationship between memory encoding, strength of memory reactivation, and the likelihood of reporting true or false memories has yet to be ascertained. We address this issue in a setting that mimics a real-life experience: We asked participants to take a virtual museum tour in which they freely explored artworks included in the exhibit, while we measured the participants’ spontaneous viewing time of each explored artwork. In a following memory reactivation phase, participants were presented again with explored artworks (reactivated targets), followed by novel artworks not belonging to the same exhibit (activated lures). For each of these objects, participants provided a reliving rating that indexed the strength of memory reactivation. In the final memory recognition phase, participants underwent an old/new memory task, involving reactivated vs. baseline (i.e., non-reactivated) targets, and activated and baseline lures. The results showed that those targets that were spontaneously viewed for a longer amount of time were more frequently correctly recognized. This pattern was particularly true for reactivated targets associated with greater memory strength (a higher reliving rating). Paradoxically, however, lures that were presented after targets associated with higher reliving ratings in the reactivation phase were more often erroneously recognized as artworks encountered during the tour. This latter finding indicates that memory intrusions, irrespective of the viewing time, are more likely to take place and be incorporated into true memories when the strength of target memory is higher.
... For example, Brady et al. (2016) showed a boost in performance for real-world objects that was attributable to more active storage in visual working memory, consistent with a theory where additional high-level information about such objects, perhaps in the ventral stream, is maintained in working memory in addition to low-level information. Some recent studies (Li, Xiong, Theeuwes, & Wang, 2020; Quirk, Adam, & Vogel, 2020) instead found no difference between storing simple features and real-world objects in visual working memory, but these results were likely due to a lack of control for similarity between targets and foils in the color versus real-world object tasks (Brady & Störmer, 2020; Brady & Störmer, in press). With better control for target-foil similarity (Brady & Störmer, 2020), real-world objects result in significantly better performance compared with simple features (Brady & Störmer, 2020; Brady & Störmer, in press). ...
Article
When storing multiple objects in visual working memory, observers sometimes misattribute perceived features to incorrect locations or objects. These misattributions are called binding errors (or swaps) and have previously been demonstrated mostly for simple objects whose features are arbitrarily chosen and easy to encode independently, like colors and orientations. Here, we tested whether similar swaps can occur with real-world objects, where the connection between features is meaningful rather than arbitrary. In Experiments 1 and 2, observers were simultaneously shown four items from two object categories. Within a category, the two exemplars could be presented in either the same or different states (e.g., open/closed; full/empty). After a delay, both exemplars from one of the categories were probed, and participants had to recognize which exemplar went with which state. We found good memory for state information and exemplar information on their own, but a significant memory decrement for exemplar-state combinations, suggesting that binding was difficult for observers and that swap errors occurred even for meaningful real-world objects. In Experiment 3, we used the same task, but in one-half of the trials, the locations of the exemplars were swapped at test. We found that there were more errors in general when the locations of exemplars were swapped. We concluded that the internal features of real-world objects are not perfectly bound in working memory, and that location updates impair object and feature representations. Overall, we provide evidence that even real-world objects are not stored in an entirely unitized format in working memory.
Article
Elaboration enriches newly encoded information by connecting it to prior knowledge. Here, we tested if prior knowledge about object-color associations improves visual working memory (VWM) for colors. A sequence of four colored objects was presented in four screen locations for a continuous color reproduction test. Object-color associations were either congruent with prior knowledge (e.g., red tomato) or incongruent (e.g., blue tomato). In Experiments 1 and 2, congruency had no effect on memory irrespective of memoranda format (images or words), encoding time (1,500 vs. 4,500 ms), and an instruction to elaborate. In Experiment 3, the object was also tested with a three-alternative forced-choice before or after probing color memory. We also included neutral objects (no color association) and abstract shapes and tested VWM and episodic memory. Congruent items were remembered better than in all other conditions, which did not systematically differ. In Experiment 4, we assessed the congruency effect when only color or both color and object were tested. Congruent objects were remembered better only when both features were tested. Hence, prior knowledge boosts VWM only when this knowledge is relevant at test. Our results suggest that retrieval manipulations can be critical for promoting the use of long-term memory knowledge. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
Article
The paper considers the practical experience of creating educational visual materials on Ukrainian as a foreign language using innovative technologies, namely, the role and place of colour and colour symbols as a code sign during the assimilation of new educational information. The study employed theoretical methods (analysis, systematisation, and generalisation of modern research), empirical methods (experimental work), and statistical methods (qualitative and quantitative processing of the results of the empirical research, and systematisation and correlation of those results in accordance with the case paradigm of the Ukrainian language). It was established that colour can affect not only a person's emotions and behaviour but also cognitive processes, thinking, and memory. Upon recollecting information, a person reproduces events and focuses on everything associated with them. Within the framework of this study, the authors identified the role of colour in activating and mobilising students' attention and memory, and found ways to teach grammatical categories of the Ukrainian language using a generalising colour table. The authors showed that studying grammatical categories of the Ukrainian language with the colour coding method improves awareness and reproduction of educational information. Context and implications: Rationale for this study: The process of European integration in Ukraine has an impact on all spheres of life, in particular on the educational system, where significant changes are taking place related to optimising technologies, forms, and means of teaching and improving ways of motivating the assimilation of the necessary information. The quality and effectiveness of teaching international students need to be improved. The solution to the problem of the effectiveness of educational materials lies in the harmonious combination of structured content and cognitively oriented design. This paper considers the practical experience of creating educational materials on Ukrainian as a foreign language using innovative technologies, namely, the role and place of colour and colour symbols as a code sign in the assimilation of new educational information. Why the new findings matter: The study found that colour can affect a person's cognitive processes, thinking, and memory, and revealed the role of colour in activating and mobilising students' attention and memory. The results show that studying grammatical categories of the Ukrainian language with the colour coding method improves comprehension and reproduction of teaching information. Implications for educational researchers and policymakers: The grammatical table created by the authors contains the grammatical material that is most difficult to assimilate; it takes into account basic psychological regularities of the visual perception of information and the influence of colour on the human subconscious, and it is based on the principle of material accessibility, manifested in the use of symbols and words that are understandable to speakers of different languages. Using the developed table, the declension of nouns and adjectives and the endings of pronouns and ordinal numbers can be studied. Educators should take into account that professional selection of colours significantly increases cognitive and motivational characteristics and decreases the level of negative psychoemotional states.
Article
Are all real-world objects created equal? Visual search difficulty increases with the number of targets and as target-related visual working memory (VWM) load increases. Our goal was to investigate the load imposed by individual real-world objects held in VWM in the context of search. Measures of visual clutter attempt to quantify real-world set-size in the context of scenes. We applied one of these measures, the number of proto-objects, to individual real-world objects and used contralateral delay activity (CDA) to measure the resulting VWM load. The current study presented a real-world object as a target cue, followed by a delay where CDA was measured. This was followed by a four-object search array. We compared CDA and later search performance from target cues containing a high or low number of proto-objects. High proto-object target cues resulted in greater CDA, longer search RTs, target dwell times, and reduced search guidance, relative to low proto-object targets. These findings demonstrate that targets with more proto-objects result in a higher VWM load and reduced search performance. This shows that the number of proto-objects contained within individual objects produce set-size like effects in VWM and suggests proto-objects may be a viable unit of measure of real-world VWM load. Importantly, this demonstrates that not all real-world objects are created equal.
Article
Visual working memory (VWM) is traditionally assumed to be immune to proactive interference (PI). However, in a recent study (Endress & Potter, 2014), performance in a visual memory task was superior when all items were unique and hence interference from previous trials was impossible, compared to a standard condition in which a limited set of repeating items was used and stimuli from previous trials could interfere with the current trial. Furthermore, when all the items were unique, the estimated memory capacity far exceeded typical capacity estimates. Consequently, the researchers suggested the existence of a separate memory buffer, the "temporary memory," which has an unbounded capacity for meaningful items. However, before accepting this conclusion, methodological differences between the repeated-unique procedure and typical estimates of VWM should be considered. Here, we tested the extent to which the exceptional set of heterogeneous, complex, meaningful real-world objects contributed to the large PI in the repeated-unique procedure. Thus, the same paradigm was employed with a set of real-world objects and with homogenous sets (e.g., houses, faces) in which the items were meaningful, yet less visually distinct, and participants had to rely on subtle visual details to perform the task. The results revealed a large PI effect for real-world heterogeneous objects, but substantially smaller effects for the homogenous sets. These findings suggest that there is no need to postulate a new memory buffer. Instead, we suggest that VWM capacity and vulnerability to PI are highly influenced by task characteristics, and specifically, by the stimuli distinctiveness. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
Article
Visual working memory (WM) capacity is thought to be limited to 3 or 4 items. However, many cognitive activities seem to require larger temporary memory stores. Here, we provide evidence for a temporary memory store with much larger capacity than past WM capacity estimates. Further, based on previous WM research, we show that a single factor, proactive interference, is sufficient to bring capacity estimates down to the range of previous WM capacity estimates. Participants saw a rapid serial visual presentation of 5-21 pictures of familiar objects or words presented at rates of 4/s or 8/s, respectively, and thus too fast for strategies such as rehearsal. Recognition memory was tested with a single probe item. When new items were used on all trials, no fixed memory capacities were observed, with estimates of up to 9.1 retained pictures for 21-item lists, and up to 30.0 retained pictures for 100-item lists, and no clear upper bound to how many items could be retained. Further, memory items were not stored in a temporally stable form of memory but decayed almost completely after a few minutes. In contrast, when, as in most WM experiments, a small set of items was reused across all trials, thus creating proactive interference among items, capacity remained in the range reported in previous WM experiments. These results show that humans have a large-capacity temporary memory store in the absence of proactive interference, and raise the question of whether temporary memory in everyday cognitive processing is severely limited, as in WM experiments, or has the much larger capacity found in the present experiments. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Article
Most theories of attention propose that we maintain attentional templates in visual working memory to control what information is selected. In the present study, we directly tested this proposal by measuring the contralateral-delay activity (CDA) of human event-related potentials during visual search tasks in which the target is cued on each trial. Here we show that the CDA can be used to measure the maintenance of attentional templates in visual working memory while processing complex visual scenes. In addition, this method allowed us to directly observe the shift from working memory to long-term memory representations controlling attention as learning occurred and experience accrued searching for the same target object. Our findings provide definitive support for several critical proposals made in theories of attention, learning, and automaticity.
Article
Visual working memory (VWM) is limited in both the capacity of information it can retain and the rate at which it encodes that information. We examined the influence of stimulus complexity on these two limitations of VWM. Observers performed a change-detection task with English letters of various fonts or letters from unfamiliar alphabets. Average perimetric complexity (κ), an objective correlate of the number of features comprising each letter, differed among the fonts and alphabets. Varying the time between the memory array and mask, we used change-detection performance to estimate the number of items held in VWM (K) as a function of encoding time. For all alphabets, K increased over 270 ms (indicating the rate of encoding) before reaching an asymptote (indicating capacity). We found that rate and capacity for each alphabet were unrelated to complexity: Performance was best modeled by assuming that both were limited by the number of items (K), rather than by the number of features (K × κ). We also found a higher encoding rate and capacity for familiar alphabets (∼45 items/s; ∼4 items) than for unfamiliar alphabets (∼12 items/s; ∼1.5 items). We then compared the familiar English alphabet to an unfamiliar artificial character set matched in complexity. Again, rate and capacity were higher for the familiar than for the unfamiliar stimuli. We conclude that the rate and capacity of encoding into visual working memory are determined by the number of familiar, feature-integrated object representations.
Article
To test how pre-existing long-term memory (LTM) influences visual short-term memory (STM), the present study takes advantage of individual differences in participants’ prior familiarity with Pokémon characters and uses an event-related potential component, the Contralateral Delay Activity (CDA), to assess whether observers’ prior stimulus familiarity affects STM consolidation and storage capacity. In two change detection experiments, consolidation speed, as indexed by CDA fractional area latency and/or early-window (500 to 800 ms) amplitude, was significantly associated with individual differences in Pokémon familiarity. In contrast, the number of remembered Pokémon stimuli, as indexed by Cowan’s K and late-window (1,500 to 2,000 ms) CDA amplitude, was significantly associated with individual differences in Pokémon familiarity when STM consolidation was incomplete due to a short presentation of Pokémon stimuli (500 ms, Experiment 2), but not when STM consolidation was allowed to complete given sufficient encoding time (1,000 ms, Experiment 1). Similar findings were obtained in between-group analyses when participants were separated into high-familiarity and low-familiarity groups based on their Pokémon familiarity ratings. Together, these results suggest that stimulus familiarity, as a proxy for the strength of pre-existing LTM, primarily speeds up STM consolidation, which may subsequently lead to an increase in the number of remembered stimuli if consolidation is incomplete. These findings thus highlight the importance of research assessing how effects on representations (e.g., STM capacity) are in general related to (or even caused by) effects on processes (e.g., STM consolidation) in cognition.
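For reference, the capacity index K reported in the change-detection studies above is conventionally computed with Cowan's (2001) formula for single-probe designs, which corrects the hit rate for guessing:
K = N × (H − F),
where N is the set size, H the hit rate, and F the false-alarm rate. For example, with N = 6, H = .80, and F = .20, K = 6 × .60 = 3.6 items.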
Article
Long-term memory (LTM) can influence many aspects of short-term memory (STM), including increased STM span. However, it is unclear whether LTM enhances the quantitative or qualitative aspect of STM. That is, do we retain a larger number of representations, or more precise representations, in STM for familiar stimuli than for unfamiliar stimuli? The current study took advantage of participants' prior rich multimedia experience with Pokémon, without investing in laboratory training, to examine how prior LTM influenced visual STM. In a Pokémon visual STM change detection task, participants remembered more first-generation Pokémon characters, with which they were more familiar, than recent-generation Pokémon characters, with which they were less familiar. No significant difference in memory quality was found when the quantitative and qualitative effects of LTM were isolated using receiver operating characteristic (ROC) analyses. Critically, these effects were absent in participants who were unfamiliar with first-generation Pokémon. Furthermore, several alternative interpretations were ruled out, including general video gaming experience, subjective Pokémon preference, and verbal encoding. Together, these results demonstrated a strong link between prior stimulus familiarity in LTM and visual STM storage capacity.
Article
Existing long-term memory (LTM) can boost the number of retained representations over a short delay in visual short-term memory (VSTM). However, it is unclear whether and how prior LTM affects the initial process of transforming fragile sensory inputs into durable VSTM representations (i.e., VSTM consolidation). The consolidation speed hypothesis predicts faster consolidation for familiar relative to unfamiliar stimuli. Alternatively, the perceptual boost hypothesis predicts that the advantage in perceptual processing of familiar stimuli should add a constant boost for familiar stimuli during VSTM consolidation. To test these competing hypotheses, the present study examined how the large variance in participants’ prior multimedia experience with Pokémon affected VSTM for Pokémon. In Experiment 1, the amount of time allowed for VSTM consolidation was manipulated by presenting consolidation masks at different intervals after the onset of to-be-remembered Pokémon characters. First-generation Pokémon characters that participants were more familiar with were consolidated faster into VSTM as compared with recent-generation Pokémon characters that participants were less familiar with. These effects were absent in participants who were unfamiliar with both generations of Pokémon. Although familiarity also increased the number of retained Pokémon characters when consolidation was uninterrupted but still incomplete due to insufficient encoding time in Experiment 1, this capacity effect was absent in Experiment 2 when consolidation was allowed to complete with sufficient encoding time. Together, these results support the consolidation speed hypothesis over the perceptual boost hypothesis and highlight the importance of assessing experimental effects on both processing and representation aspects of VSTM.
Article
Significance: Visual working memory is the cognitive system that holds visual information in an active state, making it available for cognitive processing and protecting it against interference. Here, we demonstrate that visual working memory has a greater capacity than previously measured. In particular, we use EEG to show that, contrary to existing theories, enhanced performance with real-world objects relative to simple stimuli in short-term memory tasks is reflected in active storage in working memory and is not entirely due to the independent usage of episodic long-term memory systems. These data demonstrate that working memory and its capacity limitations are dependent upon our knowledge. Thus, working memory is not fixed-capacity; instead, its capacity is dependent on exactly what is being remembered.