Journal of Experimental Psychology:
Learning, Memory, and Cognition
Visual Memory Benefits From Prolonged Encoding Time
Regardless of Stimulus Type
Xinyu Li, Zijun Xiong, Jan Theeuwes, and Benchi Wang
Online First Publication, May 21, 2020. http://dx.doi.org/10.1037/xlm0000847
CITATION
Li, X., Xiong, Z., Theeuwes, J., & Wang, B. (2020, May 21). Visual Memory Benefits From Prolonged
Encoding Time Regardless of Stimulus Type. Journal of Experimental Psychology: Learning,
Memory, and Cognition. Advance online publication. http://dx.doi.org/10.1037/xlm0000847
Visual Memory Benefits From Prolonged Encoding Time Regardless of
Stimulus Type
Xinyu Li and Zijun Xiong
Zhejiang Normal University
Jan Theeuwes
Zhejiang Normal University and Free University, Amsterdam,
the Netherlands
Benchi Wang
South China Normal University and Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, Guangzhou
It is generally assumed that the storage capacity of visual working memory (VWM) is limited, holding
about 3–4 items. Recent work with real-world objects, however, has challenged this view by providing
evidence that the VWM capacity for real-world objects is not fixed but instead increases with prolonged
encoding time (Brady, Störmer, & Alvarez, 2016). Critically, in this study, no increase with prolonged
encoding time was observed for storing simple colors. Brady et al. (2016) argued that the larger capacity
for real-world objects relative to colors is due to the additional conceptual information of real-world
objects. Using essentially the same method as Brady et al., we were unable to replicate their basic findings in Experiments 1–3. Instead, we found that visual memory for simple colors also benefited from
prolonged encoding time. Experiment 4 showed that the scale of the encoding time benefit was the same
for familiar and unfamiliar objects, suggesting that the added conceptual information does not contribute
to this benefit. We conclude that visual memory benefits from prolonged encoding time regardless of
stimulus type.
Keywords: visual working memory, long-term memory, encoding time benefits, real-world objects

Author Note
Xinyu Li and Zijun Xiong, Department of Psychology, Zhejiang Normal University; Jan Theeuwes, Department of Psychology, Zhejiang Normal University, and Department of Experimental and Applied Psychology, Free University, Amsterdam, the Netherlands; Benchi Wang, Institute for Brain Research and Rehabilitation, Center for Studies of Psychological Application, Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, and Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education, Guangzhou.
Xinyu Li and Zijun Xiong contributed equally to the current research. Xinyu Li, Zijun Xiong, and Benchi Wang designed the experiments. Zijun Xiong collected and analyzed the data. All authors wrote the article together and approved the final version of the manuscript for submission.
This research was supported by the Natural Science Foundation of Zhejiang Province (LY18C090007) to Xinyu Li, and by a China Scholarship Council (CSC) scholarship (201508330313) and the Guangdong Regional Joint Foundation (2019A1515110581) to Benchi Wang.
Correspondence concerning this article should be addressed to Benchi Wang, Institute for Brain Research and Rehabilitation, South China Normal University, Zhongshan Road West 55, Guangzhou 510000, China. E-mail: wangbenchi.swift@gmail.com
Human visual memory systems, especially visual working mem-
ory (VWM) and visual long-term memory (VLTM), are funda-
mental to human cognition. Their capacities are of great impor-
tance as they are strongly related to overall cognitive ability. The
standard view is that the storage capacity of VLTM is large, holding thousands of objects with numerous
details (Brady, Konkle, Alvarez, & Oliva, 2008; Konkle, Brady,
Alvarez, & Oliva, 2010), while the storage capacity of VWM is
limited (e.g., Cowan, 2001) and is assumed to hold about three to
four items after hundreds of milliseconds of presentation time
(Bays, Gorgoraptis, Wee, Marshall, & Husain, 2011; Luck &
Vogel, 1997). Recent work with real-world objects, however, has
challenged the latter view by providing evidence that the VWM
capacity for real-world objects is not fixed but instead increases
with prolonged encoding time (Brady, Störmer, & Alvarez, 2016).
Brady et al. (2016) showed that people were capable of memorizing more real-world objects when encoding time was prolonged, while no such benefit was found for encoding simple stimuli like
colors. They argued that compared to simple stimuli, real-world
objects have additional conceptual information, which might be
related to this encoding time benefit in VWM. However, it is also
possible that this benefit is solely due to the involvement of the VLTM
system, which has a very large capacity and is assumed to play an
important role for encoding real-world objects (Brady, Konkle,
Oliva, & Alvarez, 2009).
To examine the involvement of VLTM, Brady et al. (2016) employed electrophysiological recordings and measured the contralateral delay activity (CDA). Because the CDA amplitude increases with the number of stored objects and correlates with individual memory capacity (Vogel & Machizawa, 2004), it is generally believed that the CDA provides a neural signature of active storage in VWM. Critically, it has been shown that the CDA disappears once the stored information has been transferred into VLTM (Carlisle, Arita, Pardo, & Woodman, 2011).
Brady et al. (2016) showed an increased CDA amplitude when observers had to memorize five instead of three real-world objects, while no such effect was found when storing five instead of three simple colors. These findings indicate that more than three real-world objects could be stored in VWM, while the VWM capacity for simple colors was limited to three.
Even though Brady et al. (2016) argued that real-world objects
are stored in an active VWM system, they did not rule out the
possibility that these objects are also stored in VLTM. That is,
participants might employ two different memory systems at the
same time to store more items in visual memory. For example,
Brady et al. (2016) make the claim that “the current data do not
rule out the idea that real-world objects also lead to better episodic
long-term memory representations than simple stimuli do (in ad-
dition to being better represented in active working memory sys-
tems)” (p. 7462). In addition, it should be noted that Quirk and Vogel (2017), in an unpublished study presented as a poster at the Vision Sciences Society meeting in May 2017, failed to replicate the critical CDA results of Brady et al. (with more participants and more trials to increase statistical
power). They showed no reliable differences between three and
five real-world objects (with the same stimulus set employed in
Brady et al., 2016). Also, they failed to replicate the critical
behavioral findings but instead showed that with prolonged encod-
ing time, memory performance was improved for both simple
colors and real-world objects. These failures in replicating the
basic effects make the argument regarding more capacity for
real-world objects in VWM less convincing.
Brady et al. (2016) argued that the benefits from prolonged
encoding time are due to the fact that compared to simple colors,
real-world objects contain additional conceptual information. Yet,
compared to simple colors, real-world objects not only have additional conceptual information but are also perceived as
being more complex. That is, their perceptual complexity might
play a role, especially in discrimination tasks. For simple colors,
participants make a judgment based on only one dimension, while
for real-world objects, they could complete the comparison based
on multiple dimensions (e.g., shape, color, texture, material). Al-
though previous studies have shown that perceptual complexity
per se only leads to impoverished memory performance (Alvarez
& Cavanagh, 2004; Awh, Barton, & Vogel, 2007), this may not be
the case when encoding time is extended. That is, with more
perceptual complexity, observers might exploit multiple dimensions as cues to encode and retrieve real-world objects, resulting in an encoding time benefit.
The goal of the present study was to investigate whether the
encoding time benefit in visual memory depends on the stimulus
type employed. As alluded to above, compared to simple colors,
real-world objects have additional conceptual information and
additional perceptual complexity that both may contribute to im-
proving memory performance. Therefore, it is important to exam-
ine whether perceptual complexity and/or conceptual information
of the to-be-memorized stimuli contributed to these benefits. First,
we set out to replicate basic findings of Brady et al. (2016) with
basically the same paradigm, in which participants were required
to memorize either real-world objects or simple colors that were
presented with different encoding times (0.2 s, 1 s, or 2 s). In
Experiments 2 and 3, we further examined a factor that may modulate the encoding time benefit in visual memory and replicated the critical color condition of Experiment 1. In Experiment
4, we presented familiar or unfamiliar real-world objects that had
the same perceptual complexity, allowing us to investigate the role
of conceptual information in the encoding time benefit while
controlling for perceptual complexity.
Experiment 1
In the present experiment, we replicated the main experiment of
Brady et al. (2016) to examine whether the benefit from prolonged
encoding time only exists for real-world objects or whether it also
exists for simple stimuli (colors).
Method
Participants. Eighteen female participants (mean age: 19.7
years) with normal or corrected-to-normal vision were recruited
from Zhejiang Normal University for monetary compensation.
Sample size was predetermined based on Brady et al. (2016): with a relatively small sample size (12), the critical p value for the significant slope of real-world objects was already smaller than .001. We therefore chose larger sample sizes in the present and subsequent experiments while ensuring that the conditions could be counterbalanced across participants. The procedure complied with a generic protocol approved by the Scientific and Ethical Review Committee of the Department of Psychology of Zhejiang Normal University.
Apparatus and stimuli. During the testing, participants were required to keep their chin on a chinrest positioned at a viewing distance of ~65 cm from a 21-in. color monitor in a dimly lit laboratory. Stimulus presentation and response collection were controlled by custom scripts written in Python.
Colored squares and real-world objects were chosen as experimental materials, which were presented against a white background (~122 cd/m²). The colors of the squares (subtended by 1° × 1°) were randomly selected from nine equal-brightness colors, evenly distributed along a circle in the CIE L*a*b* (CIELAB) color space (centered at L = 70, a = 5, b = 0, with a radius of 40). Each image for real-world objects (subtended by 2° × 2°) was downloaded from the online image database (https://bradylab.ucsd.edu/stimuli.html) created by Brady and colleagues. The stimuli could appear at six locations, evenly distributed along the circumference of one invisible circle centered at the display center with the radius of 4°.
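For illustration, a color set like the one described above can be generated as follows. This is a minimal sketch rather than the authors' actual stimulus code; in particular, the conversion from CIELAB to sRGB via scikit-image is our own assumption.

```python
# A minimal sketch of sampling nine equal-luminance colors from a circle in
# CIELAB space (centered at L* = 70, a* = 5, b* = 0, radius 40), as described
# above. Not the authors' script; the sRGB conversion is an assumption.
import numpy as np
from skimage.color import lab2rgb

def make_color_wheel(n_colors=9, L=70.0, a0=5.0, b0=0.0, radius=40.0):
    """Return n_colors sRGB triplets evenly spaced on a CIELAB circle."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_colors, endpoint=False)
    lab = np.stack([np.full(n_colors, L),
                    a0 + radius * np.cos(angles),
                    b0 + radius * np.sin(angles)], axis=-1)
    # lab2rgb expects an (..., 3) array; clip because some Lab points may
    # fall slightly outside the sRGB gamut.
    rgb = np.clip(lab2rgb(lab[np.newaxis, :, :])[0], 0.0, 1.0)
    return rgb

colors = make_color_wheel()
print(np.round(colors, 3))  # nine rows of (R, G, B) values in [0, 1]
```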
Procedure and design. The procedure was exactly the same as in Experiment 1 of Brady et al. (2016). Participants were required to memorize a set of either real-world objects or simple colors while performing a simultaneous verbal task (rehearsing two digits) to ensure that visual memory, rather than verbal memory, was used (see Figure 1). First, two digits were presented for 1 s, followed by a 1-s interval. Then, six to-be-memorized items (i.e., colored squares or real-world objects) were presented for 0.2 s, 1 s, or 2 s. After another 0.8-s interval, a retro-cue (i.e., a thick dot) was presented for 0.5 s to indicate the to-be-probed item. In the probe display, two items were presented slightly above and below the probed item location, and participants were required to indicate which one had been shown in the sample display by pressing the "up arrow" or "down arrow" key (i.e., a two-alternative forced-choice [2AFC] judgment). Finally, they were required to report the sum of the two digits shown at the beginning by pressing the corresponding number key on the keyboard.
Figure 1. The procedure adopted in the present study. See the online article for the color version of this figure.
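For concreteness, the trial sequence described above can be sketched as follows. This is a schematic illustration only; the display and response routines are hypothetical placeholders, not the authors' actual experiment script.

```python
# Schematic sketch of one trial: digit preload, memory display for
# 0.2/1/2 s, retro-cue, 2AFC probe, and digit-sum report.
import random, time

ENCODING_TIMES = [0.2, 1.0, 2.0]  # seconds

def show(event, duration):
    """Placeholder for drawing a display and holding it on screen."""
    print(f"{event} ({duration} s)")
    time.sleep(duration)

def run_trial(encoding_time, n_items=6):
    digits = random.sample(range(1, 10), 2)
    show(f"digits {digits}", 1.0)                # verbal-load digits
    show("blank interval", 1.0)
    show(f"{n_items} memory items", encoding_time)
    show("blank interval", 0.8)
    show("retro-cue at probed location", 0.5)
    response = random.choice(["up", "down"])     # placeholder for key press
    reported_sum = random.randint(2, 18)         # placeholder for typed sum
    return response, reported_sum == sum(digits)

run_trial(random.choice(ENCODING_TIMES))
```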
When color was tested, the probe was a novel color that was categorically distinct from the original color (180° away from it in CIELAB color space; see Figure 2, left panel, for an example) and did not appear in the original display. For real-world objects, two conditions were tested: (a) an object was tested against a categorically distinct object (e.g., an apple you had seen vs. a backpack; see Figure 2, middle panel, for an example), and (b) an object was tested against another exemplar from the same category (e.g., one loaf of bread vs. another; see Figure 2, right panel, for an example). Thus, a 3 (encoding time: 0.2 s, 1 s, and 2 s) × 3 (probe type: colors, objects, and objects with detail) within-subject design was adopted. For each encoding time and each probe type, participants were tested in separate blocks, each including 7 practice trials and 33 test trials. The order of the nine blocks was counterbalanced across participants.
Analysis. Memory capacity (K) was calculated for N to-be-memorized items according to the method adopted in Brady et al. (2016). Participants correctly answered the 2AFC test on a proportion p of trials (percent correct). They were assumed to remember the tested item on K/N of trials and to guess correctly on 50% of the remaining (N − K)/N of trials in which they could not remember the tested item (chance = 50%). Thus, percent correct is p = K/N + ((N − K)/N) × 0.5, which simplifies to the capacity formula K = N × (2p − 1).
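A minimal sketch of this capacity estimate (our own illustration of the formula above, not the authors' analysis code):

```python
# With N items and 2AFC accuracy p, K/N of trials are answered from memory
# and the rest are guessed at 50%, so p = K/N + ((N - K)/N) * 0.5, which
# rearranges to K = N * (2p - 1).
def estimate_capacity(p_correct: float, n_items: int = 6) -> float:
    """Return the estimated number of stored items K from 2AFC accuracy."""
    return n_items * (2.0 * p_correct - 1.0)

# Example: 75% correct with six memory items yields K = 3.
print(estimate_capacity(0.75))  # -> 3.0
```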
Results
There were no significant effects on the mean accuracies of the digits report task in any of the experiments. The results of this task for all experiments are provided in the Appendix.
The mean VWM capacities for each probe type as a function of encoding time are shown in Figure 2.
Figure 2. Experiment 1: Estimated capacity as a function of encoding time for colors, objects, and objects with details. Error bars denote within-subjects 95% confidence intervals. See the online article for the color version of this figure.
A repeated-measures analysis of variance (ANOVA) on the mean capacities with the variables probe type (colors, objects, and objects with detail) and encoding time (0.2 s, 1 s, and 2 s) showed significant main effects of encoding time, F(2, 34) = 30.96, p < .001, partial η² = .65, and probe type, F(2, 34) = 6.76, p = .003, partial η² = .29. Also, a significant interaction was observed, F(4, 68) = 3.96, p = .006, partial η² = .19. Planned follow-up comparisons revealed that the mean VWM capacities increased as the encoding time was prolonged for colors, F(2, 34) = 24.66, p < .001, partial η² = .59; for objects, F(2, 34) = 25.42, p < .001, partial η² = .60; and for objects with detail, F(2, 34) = 4.55, p = .018, partial η² = .21.
We also used the linear slope of capacity over encoding time to quantify the encoding time benefit. Planned comparisons showed that the slopes for colors (0.95) and for objects (1.05) did not differ statistically, t(17) = 0.57, p = .565, d = 0.16, but the slope for objects with detail (0.45) was statistically smaller than that for colors, t(17) = 3.56, p = .002, d = 0.81, and that for objects, t(17) = 3.00, p = .008, d = 0.89.
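The slope measure above can be illustrated with a short sketch (our own, not the authors' analysis script): for each participant and condition, capacity estimates at the three encoding times are regressed on time, and the linear coefficient is taken as the encoding time benefit.

```python
# Per-participant linear slope of estimated capacity over encoding time.
import numpy as np

encoding_times = np.array([0.2, 1.0, 2.0])  # seconds

def encoding_time_slope(capacities):
    """capacities: K estimates at 0.2, 1, and 2 s for one condition."""
    slope, _intercept = np.polyfit(encoding_times, np.asarray(capacities), 1)
    return slope

# Hypothetical participant: capacity grows from 1.5 to 3.4 items.
print(round(encoding_time_slope([1.5, 2.6, 3.4]), 2))  # ~1.05 items per second
```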
Discussion
Our findings regarding real-world objects and objects with
detail perfectly replicate the findings of Experiment 1 in Brady et
al. (2016). However, while Brady et al. did not observe the
encoding time benefit for simple colors, our results unequivocally
show such an effect (see also Quirk & Vogel, 2017). In fact, there
was no reliable difference in encoding time benefit between real-
world objects and simple colors, suggesting visual memory bene-
fits from prolonged encoding time regardless of stimulus type.
To ensure that the current failure to replicate is not accidental,
we sought to replicate the results in Experiments 2 and 3. In
addition, we observed another interesting finding: the encoding time benefit was reduced in the condition of objects that contained details. One might argue that this reduction occurs because more details need to be encoded into visual memory; that is, with prolonged encoding time a high-fidelity memory representation is formed in each condition, but in the objects-with-detail condition the representation is of lower fidelity than in the condition without details because more has to be encoded. However, it should be noted that the objects to be encoded were the same in both conditions. The only difference between the two conditions is that a higher fidelity memory representation was required to make a judgment at retrieval in the objects-with-detail condition than in the objects condition. Thus, we argue that the original encoding time benefit is of the same magnitude in the two conditions, but the benefit is reduced when the task requires a high-fidelity representation at retrieval, as in the objects-with-detail condition. In Experiments 2 and 3, we examined whether the task requirements at retrieval play a role in this reduction of the encoding time benefit.
Experiment 2
For replication purposes, in one condition, we adopted the same
difference between two color probes (i.e., 180° difference in CIE-
LAB color space) as used in Experiment 1; in the other condition,
we reduced the difference between two color probes (i.e., 20°),
forcing participants to use a high-fidelity representation to make a
judgment in the discrimination task. Yet, we used the same color
set as in the “replication” condition to ensure that participants
encoded the same colors. Therefore, in this experiment, what
needed to be encoded was the same; the only difference was that
in one condition, the task required only a relatively low fidelity,
while in the other condition, a high fidelity was required.
Method
Eighteen new participants (two males; mean age: 20.4 years) participated in Experiment 2. The task and procedure were identical to those of Experiment 1, except that participants were only required to memorize sets of simple colors, tested with two probe differences (180° vs. 20°), where probe difference refers to the angular distance between the two probe colors in CIELAB color space. Thus, a 3 (encoding time: 0.2 s, 1 s, and 2 s) × 2 (probe difference: 180° vs. 20°) within-subject design was adopted.
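For illustration, a probe color at a given angular offset from a studied color can be derived on the same CIELAB circle as sketched below. This is our own illustration under the stimulus parameters given in Experiment 1, not the authors' code.

```python
# Sketch of generating a probe color offset by 20°, 60°, or 180° from the
# studied color on the CIELAB circle (L* = 70, center a* = 5, b* = 0, radius 40).
import numpy as np
from skimage.color import lab2rgb

def probe_color(studied_angle_deg, offset_deg, L=70.0, a0=5.0, b0=0.0, radius=40.0):
    """Return the sRGB probe color offset_deg away from the studied color."""
    theta = np.deg2rad(studied_angle_deg + offset_deg)
    lab = np.array([[[L, a0 + radius * np.cos(theta), b0 + radius * np.sin(theta)]]])
    return np.clip(lab2rgb(lab)[0, 0], 0.0, 1.0)

print(probe_color(0.0, 180.0))  # probe on the opposite side of the color wheel
print(probe_color(0.0, 20.0))   # near-neighbor probe requiring high fidelity
```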
Results
The mean VWM capacities for each probe difference as a function of encoding time are shown in Figure 3A.
Figure 3. Estimated capacity as a function of encoding time for different probe differences in Experiment 2 (A) and in Experiment 3 (B). Error bars denote within-subjects 95% confidence intervals. See the online article for the color version of this figure.
A repeated-measures ANOVA on the mean VWM capacities with the variables probe difference (180° vs. 20°) and encoding time (0.2 s, 1 s, and 2 s) showed significant main effects of encoding time, F(2, 34) = 16.10, p < .001, partial η² = .49, and probe difference, F(1, 17) = 247.91, p < .001, partial η² = .94. Also, there was a significant interaction, F(2, 34) = 9.45, p = .001, partial η² = .36. Planned follow-up comparisons revealed that for the 180° probe difference, the mean VWM capacities increased as the encoding time was prolonged (slope: 0.95), F(2, 34) = 25.37, p < .001, partial η² = .69; however, for the 20° probe difference, there was no such benefit from prolonged encoding time (slope: 0.28), F(2, 34) = 1.62, p = .212, partial η² = .08.
Discussion
With large probe difference (180°), we replicated the findings
from our Experiment 1, showing that the mean VWM capacities
for simple colors increased as the encoding time was prolonged.
However, with a small probe difference (20°), such a benefit was no longer found, suggesting that the encoding time benefit for simple colors is also compromised when the discrimination task requires a higher fidelity representation.
Experiment 3
To further confirm our previous findings, in Experiment 3 we added a medium probe difference (60°) to the design of Experiment 2. If the previous findings were indeed due to the fidelity of the representation required by the task, the encoding time benefit in this medium probe condition (60°) should fall between those in the 20° and 180° conditions.
Method
Eighteen new participants (one male; mean age: 20.2 years) took part in the experiment. The procedure was identical to that of Experiment 2, except for adding a medium probe difference (60°). Thus, a 3 (encoding time: 0.2 s, 1 s, and 2 s) × 3 (probe difference: 180°, 60°, and 20°) within-subject design was adopted.
Results
The mean VWM capacities for each probe difference as a function of encoding time are shown in Figure 3B. A repeated-measures ANOVA on the mean VWM capacities with the variables probe difference (180°, 60°, and 20°) and encoding time (0.2 s, 1 s, and 2 s) showed significant main effects of encoding time, F(2, 34) = 12.47, p < .001, partial η² = .42, and probe difference, F(2, 34) = 87.91, p < .001, partial η² = .84. The interaction was marginally significant, F(4, 68) = 2.27, p = .071, partial η² = .12. Planned follow-up comparisons revealed that, for the 180° and 60° probe differences, the mean VWM capacities increased as the encoding time was prolonged, with slopes of 0.86, F(2, 34) = 19.07, p < .001, partial η² = .53, and 0.64, F(2, 34) = 5.45, p = .009, partial η² = .24, respectively. Again, however, for the 20° probe difference, there was no such benefit from prolonged encoding time (slope: 0.16), F(2, 34) = 0.45, p = .644, partial η² = .03.
Discussion
Unsurprisingly, we again replicated the critical finding from our Experiment 1: the mean VWM capacities for simple colors increased as the encoding time was prolonged with a large probe difference (180°). With three independent replications, we provide compelling evidence that memory performance for simple colors is also improved when the encoding time is extended. That is, visual memory benefits from prolonged encoding time regardless of stimulus type.
Consistent with Experiment 2, we found that when the same color sets were used for encoding, the encoding time benefit was eliminated when the difference between the two probes was substantially reduced from 180° on the color wheel to 20°. Once the probe difference increased to 60°, the encoding time benefit recovered and fell in between the 20° and 180° conditions. This suggests that the size of the encoding time benefit depends on the fidelity of the memory representation required by the task.
Experiment 4
As mentioned above, compared to simple colors, real-world
objects not only have additional conceptual information but are
also perceptually more complex. In previous experiments, we
found the same effects for simple colors as for real-world objects,
indicating that neither the conceptual information nor the percep-
tual complexity is linked to the encoding time benefit. However,
one might still argue that there are different mechanisms underly-
ing the benefits for real-world objects and for simple colors. As
outlined earlier, the involvement of VLTM is one factor that may
play a crucial role in driving the encoding time benefit. Previous
studies have shown that real-world objects with different conceptual information barely interfere with each other in VLTM, suggesting that conceptual information is an important factor in storing real-world objects in VLTM (Konkle et al., 2010). Thus, it is quite possible that the conceptual information associated with real-world objects plays a critical role in obtaining the benefits from prolonged encoding time.
The current experiment examined this possibility. In this study,
participants had to memorize real-world objects that were either
familiar (generating additional conceptual information) or unfa-
miliar (not generating additional conceptual information), with
more or less the same perceptual complexity. To determine which
objects were familiar or unfamiliar, we first conducted Experiment
4a, in which participants had to indicate whether they recognized the object that was shown and to rate, on a 5-point scale, how confident they were in their answer. On the basis of these data, we
selected real-world objects that were indicated as being very
familiar and real-world objects that were considered very unfamil-
iar. In Experiment 4b, we used these stimuli to create two condi-
tions: Participants had to memorize familiar real-world objects in
one condition and unfamiliar real-world objects in the other con-
dition. We used the same procedure as in Experiment 1. If the
encoding time benefit depends on stimulus familiarity (which is
assumed to generate additional conceptual information), we expect
to see an increase in memory performance for familiar objects but
not for unfamiliar objects when the encoding time is prolonged. If
this effect is not found, we have to conclude that the encoding time
benefit has nothing to do with stimulus familiarity.
Experiment 4a
Method. Twelve new participants (two males; mean age: 20.4 years) took part in the experiment. A new set of 210 images of real-world objects (subtended by 2° × 2°) was selected from the same database as in Experiment 1. On each trial, an image of a real-world object was presented for 5 s, and participants had to indicate whether or not they recognized what it was. If yes, they pressed the "left arrow" key; otherwise, they pressed the "right arrow" key. Following this response, participants rated how confident they were in their answer on a 1–5 confidence scale (1 = very sure, 5 = very unsure). There was no time limit for these responses.
Results. As illustrated in Figure 4A, we randomly selected 15 familiar objects from the set of objects that all participants recognized (a "yes" answer with "very sure" confidence) and 15 unfamiliar objects from the set of objects that no participant recognized (a "no" answer with "very sure" confidence).
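The selection rule described above can be sketched as follows; the file name and column names are hypothetical, and this is not the authors' analysis script.

```python
# Keep objects that every participant recognized with the highest confidence
# ("familiar") and objects that no participant recognized, again with the
# highest confidence ("unfamiliar"), then sample 15 of each.
import pandas as pd

# Hypothetical ratings file: one row per participant x image, with columns
# "subject", "image", "recognized" (1 = yes, 0 = no), and "confidence"
# (1 = very sure ... 5 = very unsure).
ratings = pd.read_csv("exp4a_ratings.csv")

groups = ratings.groupby("image")
familiar_pool = [img for img, g in groups
                 if (g["recognized"] == 1).all() and (g["confidence"] == 1).all()]
unfamiliar_pool = [img for img, g in groups
                   if (g["recognized"] == 0).all() and (g["confidence"] == 1).all()]

familiar_set = pd.Series(familiar_pool).sample(15, random_state=1).tolist()
unfamiliar_set = pd.Series(unfamiliar_pool).sample(15, random_state=1).tolist()
```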
Experiment 4b
Method. Twenty-four new participants (one male; mean age: 20.1 years) took part in the experiment. The procedure was identical to that of Experiment 1, except that participants were only required to memorize real-world objects of different familiarity. Thus, a 3 (encoding time: 0.2 s, 1 s, and 2 s) × 2 (familiarity: familiar vs. unfamiliar) within-subject design was adopted.
Results. The mean VWM capacities for the different familiarities as a function of encoding time are shown in Figure 4B. A repeated-measures ANOVA on the mean VWM capacities with the variables familiarity (familiar vs. unfamiliar) and encoding time (0.2 s, 1 s, and 2 s) showed significant main effects of encoding time, F(2, 46) = 40.36, p < .001, partial η² = .64, and familiarity, F(1, 23) = 10.07, p = .004, partial η² = .31. No interaction was observed, F(2, 46) = 0.28, p = .755, partial η² = .01, Bayes factor (BF) = 0.14. Planned follow-up comparisons revealed that the mean VWM capacities increased as the encoding time was prolonged for both familiar objects, F(2, 34) = 19.07, p < .001, partial η² = .53, and unfamiliar objects, F(2, 34) = 5.45, p = .009, partial η² = .24. Statistically, the slope for familiar objects (0.82) and that for unfamiliar objects (0.86) did not differ, t(23) = 0.23, p = .823, d = 0.06, BF = 0.22.
Figure 4. (A) Familiar and unfamiliar objects used as memory materials in Experiment 4b were selected based on the results of Experiment 4a. (B) The upper panel shows estimated capacity as a function of encoding time in Experiment 4b, and the bottom panel shows the estimated slopes for familiar and unfamiliar objects. Error bars denote within-subjects 95% confidence intervals. See the online article for the color version of this figure.
Discussion
The current results showed that the overall memory perfor-
mance for familiar objects was better than that for unfamiliar
objects, but the encoding time benefit was the same regardless of
whether participants were familiar or unfamiliar with these objects.
This indicates that, similar to what has been shown regarding the
perceptual complexity, the additional conceptual information re-
lated to the real-world objects is not critical for obtaining benefits
from prolonged encoding time.
General Discussion
Recently, Brady et al. (2016) found that memory performance for real-world objects improved when the encoding time was prolonged, whereas this effect was not found for simple stimuli such as colors. On the basis of this finding, Brady et al. argued that
the richer conceptual information of real-world objects enabled
observers to exploit the extended encoding time to store a larger
number of items in working memory. This important result chal-
lenged the standard view that the storage capacity of VWM is
limited (Adam, Vogel, & Awh, 2017; Awh et al., 2007; Cowan,
2001; Luck & Vogel, 1997; Zhang & Luck, 2008). However,
inconsistent with Brady et al., the current study unequivocally
shows that the encoding time benefit does not depend on the type
of stimuli employed: There were encoding time benefits for both
real-world objects (Experiments 1 and 4) and for simple colors
(Experiments 1, 2, and 3; see also Quirk & Vogel, 2017, for similar
results). We also show that this benefit is not the result of percep-
tual complexity or the presence of conceptual information (Exper-
iment 4). As shown in Experiments 2 and 3, one of the factors that
impacted the encoding time benefit was the task requirement: If a
higher fidelity memory representation was required, the encoding
time benefit was reduced.
Brady et al. (2016) found an increased CDA amplitude for
real-world objects when observers had to memorize five objects
instead of three objects, suggesting that more than three real-world
objects could be stored in VWM. This might indicate that VLTM
does not play a crucial role. However, Quirk and Vogel (2017),
who used more participants and more trials to increase statistical
power, were not able to replicate this effect on the CDA and
reported no reliable difference in CDAs between three and five
real-world objects. Therefore, it is difficult to decide whether or
not real-world objects are exclusively stored in VWM (as argued by
Brady et al., 2016) and whether VLTM plays a role in the encoding
time benefit. Regardless of this specific discussion, our conclusion
is the same: Visual memory benefits from prolonged encoding
time regardless of stimulus type.
So, one question that needs to be answered is why visual
memory benefits from prolonged encoding time. There are several
possibilities, for example: (a) the items are stored in VWM, but with the involvement of (or the interaction with) VLTM storage, the memory capacity is temporarily increased (see also Endress & Potter, 2014; Hollingworth, 2005; Hollingworth & Hollingworth, 2004; Shoval, Luria, & Makovski, 2019); (b) the items are not stored in VWM but instead in VLTM, which has a very large storage capacity (see also Brady et al., 2008); and (c) the items are stored in both VWM and VLTM, with information from the two memory systems chunked together, resulting in a larger memory capacity (see also Ngiam, Khaw, Holcombe, & Goodbourn, 2019). Therefore, it seems that, if anything, the involvement of VLTM might be a critical factor in driving the benefits from prolonged encoding time. However, it should be noted that none of these explanations supports the notion that VWM capacity itself is extended.
It has been argued that relative to simple colors, real-world
objects have additional conceptual information (which is related to
VLTM), which in turn may be the reason for encoding time
benefits of real-world objects (Brady et al., 2016; see also Curby,
Glazek, & Gauthier, 2009; Olsson & Poom, 2005). However,
compared to simple colors, real-world objects not only have ad-
ditional conceptual information but also have additional perceptual
complexity. Here, we found that the encoding time benefit existed
both for simple colors and for real-world objects, which indicates
that neither the conceptual information nor the perceptual com-
plexity is necessarily linked to the encoding time benefit. Criti-
cally, while controlling for the perceptual complexity, our Exper-
iment 4 further showed that the encoding time benefit for
real-world objects did not depend on whether participants had any
conceptual knowledge about them, as the benefits
were the same for familiar and unfamiliar objects. Therefore, it is
unlikely that conceptual information has any potential impact on
obtaining the encoding time benefit.
In Experiment 4, we found that the overall memory performance
for familiar objects was better than that for unfamiliar objects.
There is some controversy in the literature regarding this effect. On
the one hand, using Pokémon figures as memory materials, Xie
and Zhang (2017a, 2017b, 2018) found no increase in memory
capacity due to stimulus familiarity but only found an effect on
memory consolidation. On the other hand, using alphabet letters
as memory materials, Ngiam et al. (2019) reported an increase in
memory capacity due to stimulus familiarity. Ngiam et al. argued
that the difference in familiarity between first-generation (familiar) and recent-generation (unfamiliar) Pokémon is too small to generate an effect on memory capacity. Indeed, Ngiam et al. (2019) used English letters as familiar stimuli and novel characters as unfamiliar stimuli, which produced a large difference in familiarity. Under those conditions, a difference in memory capacity was
found. This is consistent with our Experiment 4, in which there
was a large difference between familiar and unfamiliar objects
because the unfamiliar real-world objects we chose were com-
pletely unknown to observers while the familiar ones were well
known to the observers.
In summary, we conclude that visual memory benefits from
prolonged encoding time regardless of stimulus type employed.
Our study shows that neither the presence of conceptual informa-
tion nor the perceptual complexity of the stimuli plays a critical
role in obtaining the encoding time benefit. This implies that
factors other than those related to conceptual information and
perceptual complexity associated with VLTM might be critical in
obtaining the benefits from prolonged encoding time.
References
Adam, K. C. S., Vogel, E. K., & Awh, E. (2017). Clear evidence for item limits in visual working memory. Cognitive Psychology, 97, 79–97. http://dx.doi.org/10.1016/j.cogpsych.2017.07.001
Alvarez, G. A., & Cavanagh, P. (2004). The capacity of visual short-term memory is set both by visual information load and by number of objects. Psychological Science, 15, 106–111. http://dx.doi.org/10.1111/j.0963-7214.2004.01502006.x
Awh, E., Barton, B., & Vogel, E. K. (2007). Visual working memory represents a fixed number of items regardless of complexity. Psychological Science, 18, 622–628. http://dx.doi.org/10.1111/j.1467-9280.2007.01949.x
Bays, P. M., Gorgoraptis, N., Wee, N., Marshall, L., & Husain, M. (2011). Temporal dynamics of encoding, storage, and reallocation of visual working memory. Journal of Vision, 11(10), 6. http://dx.doi.org/10.1167/11.10.6
Brady, T. F., Konkle, T., Alvarez, G. A., & Oliva, A. (2008). Visual long-term memory has a massive storage capacity for object details. Proceedings of the National Academy of Sciences of the United States of America, 105, 14325–14329. http://dx.doi.org/10.1073/pnas.0803390105
Brady, T. F., Konkle, T., Oliva, A., & Alvarez, G. A. (2009). Detecting changes in real-world objects: The relationship between visual long-term memory and change blindness. Communicative & Integrative Biology, 2, 1–3. http://dx.doi.org/10.4161/cib.2.1.7297
Brady, T. F., Störmer, V. S., & Alvarez, G. A. (2016). Working memory is not fixed-capacity: More active storage capacity for real-world objects than for simple stimuli. Proceedings of the National Academy of Sciences of the United States of America, 113, 7459–7464. http://dx.doi.org/10.1073/pnas.1520027113
Carlisle, N. B., Arita, J. T., Pardo, D., & Woodman, G. F. (2011). Attentional templates in visual working memory. Journal of Neuroscience, 31, 9315–9322. http://dx.doi.org/10.1523/JNEUROSCI.1097-11.2011
Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24, 87–114. http://dx.doi.org/10.1017/S0140525X01003922
Curby, K. M., Glazek, K., & Gauthier, I. (2009). A visual short-term memory advantage for objects of expertise. Journal of Experimental Psychology: Human Perception and Performance, 35, 94–107. http://dx.doi.org/10.1037/0096-1523.35.1.94
Endress, A. D., & Potter, M. C. (2014). Large capacity temporary visual memory. Journal of Experimental Psychology: General, 143, 548–565. http://dx.doi.org/10.1037/a0033934
Hollingworth, A. (2005). The relationship between online visual representation of a scene and long-term scene memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 396–411. http://dx.doi.org/10.1037/0278-7393.31.3.396
Hollingworth, A., & Hollingworth, A. (2004). Constructing visual representations of natural scenes: The roles of short- and long-term visual memory. Journal of Experimental Psychology: Human Perception and Performance, 30, 519–537. http://dx.doi.org/10.1037/0096-1523.30.3.519
Konkle, T., Brady, T. F., Alvarez, G. A., & Oliva, A. (2010). Conceptual distinctiveness supports detailed visual long-term memory for real-world objects. Journal of Experimental Psychology: General, 139, 558–578. http://dx.doi.org/10.1037/a0019165
Luck, S. J., & Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390, 279–281. http://dx.doi.org/10.1038/36846
Ngiam, W. X., Khaw, K. L., Holcombe, A. O., & Goodbourn, P. T. (2019). Visual working memory for letters varies with familiarity but not complexity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45, 1761–1775. http://dx.doi.org/10.1037/xlm0000682
Olsson, H., & Poom, L. (2005). Visual memory needs categories. Proceedings of the National Academy of Sciences of the United States of America, 102, 8776–8780. http://dx.doi.org/10.1073/pnas.0500810102
Quirk, C., & Vogel, E. (2017). No evidence for an object working memory capacity benefit with extended viewing time. Journal of Vision, 17(10), 112. http://dx.doi.org/10.1167/17.10.112
Shoval, R., Luria, R., & Makovski, T. (2019). Bridging the gap between visual temporary memory and working memory: The role of stimuli distinctiveness. Journal of Experimental Psychology: Learning, Memory, and Cognition. Advance online publication. http://dx.doi.org/10.1037/xlm0000778
Vogel, E. K., & Machizawa, M. G. (2004). Neural activity predicts individual differences in visual working memory capacity. Nature, 428, 748–751. http://dx.doi.org/10.1038/nature02447
Xie, W., & Zhang, W. (2017a). Familiarity increases the number of remembered Pokémon in visual short-term memory. Memory & Cognition, 45, 677–689. http://dx.doi.org/10.3758/s13421-016-0679-7
Xie, W., & Zhang, W. (2017b). Familiarity speeds up visual short-term memory consolidation. Journal of Experimental Psychology: Human Perception and Performance, 43, 1207–1221. http://dx.doi.org/10.1037/xhp0000355
Xie, W., & Zhang, W. (2018). Familiarity speeds up visual short-term memory consolidation: Electrophysiological evidence from contralateral delay activities. Journal of Cognitive Neuroscience, 30, 1–13. http://dx.doi.org/10.1162/jocn_a_01188
Zhang, W., & Luck, S. J. (2008). Discrete fixed-resolution representations in visual working memory. Nature, 453, 233–235. http://dx.doi.org/10.1038/nature06860
Appendix
Analysis for the Digits Report Task
The mean accuracies for the digits report task in all experiments are shown in Table A1. In Experiment 1, a repeated-measures ANOVA on the mean accuracies with the variables probe type (colors, objects, and objects with detail) and encoding time (0.2 s, 1 s, and 2 s) showed no significant main effects of encoding time, F(2, 34) = 2.17, p = .129, partial η² = .11, or probe type, F(2, 34) = 1.86, p = .171, partial η² = .10. Also, no significant interaction was observed, F(4, 68) = 0.13, p = .97, partial η² = .01.
In Experiment 2, a repeated-measures ANOVA on the mean accuracies of the digits report task with the variables probe difference (180° vs. 20°) and encoding time (0.2 s, 1 s, and 2 s) showed no significant main effects of encoding time, F(2, 34) = 0.33, p = .719, partial η² = .02, or probe difference, F(1, 17) = 1.03, p = .324, partial η² = .06. Also, no significant interaction was observed, F(2, 34) = 1.40, p = .26, partial η² = .08.
In Experiment 3, a repeated-measures ANOVA on the mean accuracies of the digits report task with the variables probe difference (180°, 60°, and 20°) and encoding time (0.2 s, 1 s, and 2 s) showed no significant main effects of encoding time, F(2, 34) = 0.10, p = .909, partial η² = .01, or probe difference, F(2, 34) = 0.25, p = .781, partial η² = .01. Also, no significant interaction was observed, F(4, 68) = 0.79, p = .536, partial η² = .04.
In Experiment 4b, a repeated-measures ANOVA on the mean accuracies of the digits report task with the variables familiarity (familiar vs. unfamiliar) and encoding time (0.2 s, 1 s, and 2 s) showed no significant main effects of encoding time, F(2, 46) = 1.07, p = .352, partial η² = .04, or familiarity, F(1, 23) = 0.06, p = .809, partial η² < .01. Also, no interaction was observed, F(2, 46) = 0.62, p = .543, partial η² = .03.
Overall, participants performed the digits report task quite well
in all conditions in all experiments.
Table A1
Mean Accuracies for the Digits Report Task in the Present Study

Variable                0.2 s, M (SD)   1 s, M (SD)    2 s, M (SD)
Experiment 1
  Colors                .98 (.03)       .98 (.03)      .99 (.01)
  Objects               .98 (.04)       .97 (.05)      .99 (.03)
  Objects with detail   .97 (.04)       .97 (.06)      .98 (.03)
Experiment 2
  Colors 20°            .97 (.03)       .97 (.04)      .99 (.06)
  Colors 180°           .99 (.03)       .98 (.03)      .97 (.06)
Experiment 3
  Colors 20°            .97 (.05)       .95 (.08)      .96 (.06)
  Colors 60°            .96 (.08)       .97 (.04)      .97 (.05)
  Colors 180°           .96 (.07)       .96 (.07)      .96 (.06)
Experiment 4b
  Familiar objects      .98 (.04)       .97 (.04)      .98 (.03)
  Unfamiliar objects    .98 (.03)       .97 (.03)      .98 (.04)

Received August 27, 2019
Revision received March 6, 2020
Accepted March 25, 2020