Journal of Research on English and Language Learning
is licensed under a Creative Commons Attribution 4.0 International License
eISSN 2721-5016 | pISSN 2721-5024
Journal of Research on English and Language Learning
Vol. 1 | No. 2 | August 2020 | pp. 69-72
The capacity of human memory: Is there any limit to human memory?
Ehsan Namaziandost1, Meisam Ziafar2
1,2 Department of English, Islamic Azad University, Iran
There exist some estimates of the capacity of human memory. Recent studies have shown that long-term memory is subject to constant reconfiguration, mostly at lower levels of neural clusters. There is no consensus on a single definition of memory capacity. As long as retrieval of items present in memory is not the concern, it is reasonable to refrain from putting limits on the capacity of human memory; otherwise, one must accept a number game that yields no fixed, definite, final estimate. More recently, this capacity has been defined as the amount of interference created by the item that must remain active in memory.
Keywords: human memory; long term memory; interference; capacity
The human brain consists of about 100 billion neurons. Each neuron forms about 1,000 connections to other neurons, amounting to some 100 trillion connections. "If each neuron could only help store a single memory, running out of space would be a problem. You might have only a few gigabytes of storage space, similar to the space in an iPod or a USB flash drive" (Chen, Hsieh, & Kinshuk, 2008, p. 8). Yet neurons combine so that each one helps with many memories at a time, greatly increasing the brain's memory storage capacity to something closer to around 2.5 petabytes, or about 2.5 million gigabytes (Wright & Fergadiotos, 2012). For comparison, if your brain worked like a digital video recorder in a television, 2.5 petabytes would be enough to hold three million hours of TV shows; you would have to leave the TV running continuously for more than 300 years to use up all that storage. The brain's exact storage capacity for memories is nevertheless difficult to calculate (Murray, 2012). First, we do not know how to measure the size of a memory. Second, certain memories involve more details and thus take up more space, while other memories are forgotten and thus free up space. Additionally, some information is simply not worth remembering in the first place. This is good news, because it means our brains can keep up as we seek new experiences over a lifetime. If the human life span were significantly extended, could we fill our brains? No one knows for certain (Ivanova & Hallowell, 2012).
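The television comparison above is easy to check with back-of-the-envelope arithmetic. The sketch below is only illustrative: the assumed recording bitrate of about 0.9 GB per hour (roughly DVR quality) is our assumption, not a figure from the cited sources.

```python
# Rough check of the 2.5-petabyte storage comparison.
capacity_pb = 2.5
capacity_gb = capacity_pb * 1_000_000      # 1 PB = 1,000,000 GB

gb_per_hour = 0.9                          # assumed DVR-quality bitrate (~2 Mbit/s)
hours_of_tv = capacity_gb / gb_per_hour    # hours of video that would fit

hours_per_year = 24 * 365.25
years_of_tv = hours_of_tv / hours_per_year # years of continuous playback

print(f"{hours_of_tv:,.0f} hours, about {years_of_tv:,.0f} years of continuous TV")
```

With these assumptions the result is just under three million hours, i.e. more than 300 years of continuous playback, consistent with the figures quoted above.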
The human brain is a fascinating machine. In our minds, complex interactions form our emotions, perceptions, feelings, and desires, and ultimately make us who we are. Is there any end to what this incredible machine can achieve? Is the human intellect capped at a certain level? If we project ourselves, say, a thousand years ahead, could we learn and understand far more than we do today? Is there an inherent limit to what our brains can understand? To get a sense of how powerful the brain is, consider some rough arithmetic. There are around 100 billion neurons in the human brain. Many popular sources say that, on average, each neuron fires about 200 times per second, and this is the first number a Google search will return, but the figure is most likely wrong. Scientists are not sure exactly what the number is, because different parts of the brain pulse at different rates, but one report proposes a rate of 0.29 firings per second based on rough calculations (Namaziandost, Hafezian, & Shafiee, 2018). Every neuron is believed to be connected to some 7,000 other neurons, so that 7,000 other neurons receive information every time a single neuron fires a signal. If you
multiply these three numbers (100 billion neurons × 0.29 firings per second × 7,000 connections), you get roughly 200,000,000,000,000 bits of information transmitted every second inside your brain. That is 200 trillion, a number too big to visualize. The point is: the brain is a powerful machine.
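The multiplication can be reproduced directly from the three figures given in the text:

```python
# Reproduce the in-text estimate: neurons x firing rate x fan-out.
neurons = 100e9        # ~100 billion neurons (figure used in the text)
firing_rate = 0.29     # average firings per second (rough estimate cited above)
fan_out = 7000         # neurons receiving each fired signal

bits_per_second = neurons * firing_rate * fan_out
print(f"{bits_per_second:.2e} bits per second")  # ~2.0e14, i.e. ~200 trillion
```

Note that using the popular (and likely wrong) 200 firings-per-second figure instead would inflate the result by almost three orders of magnitude.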
Despite general agreement that the storage capacity of human long-term memory has no well-defined upper bound (Cherniak, 1983; Galton, 1879, cited in Dudai, 2011), some studies have tried to find the capacity limitations of human memory. Dudai (2011) maintains that if by memory capacity we mean what can be stored, regardless of whether it is normally retrieved, then it can be asserted that everything we learn is permanently stored in the mind. Dudai (2011) cites the French philosopher Helvétius, who contends that:
“The capacity of memory far exceeds the practical needs of a thoughtful human being. There is
no one whose memory cannot contain not only all the words of a language, but also an infinity of
dates, facts, names, places, and finally a number of objects considerably more than six or seven
thousand”. (p. 342)
Another premodern attempt to determine the capacity of human memory was made by the Swiss physiologist Albrecht von Haller, who estimated that within 50 years a person may accumulate 1,577,880,000 memory traces (Burnham, 1889, cited in Dudai, 2011). After presenting several such attempts to measure human memory capacity, Dudai (2011) concludes that it cannot yet be done, since we still do not know how representations are encoded in the spatial and temporal patterns of neuronal activity; as a result, current formal estimates of the maximal information in our brain remain tentative.
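Haller's oddly precise figure has a simple origin: it is exactly one memory trace per second, accumulated over 50 Julian years. A quick check:

```python
# Verify Haller's 1,577,880,000-trace estimate as one trace per second.
seconds_per_year = 365.25 * 24 * 3600   # 31,557,600 seconds in a Julian year
traces = 50 * seconds_per_year          # one trace per second for 50 years

print(int(traces))  # 1577880000, matching Burnham's report of Haller's estimate
```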
According to Wang, Liu, and Wang (2003), the brain of an adult contains up to 100 billion neurons, where each neuron is linked to a vast number of other neurons through thousands of synapses. They maintain that, despite this knowledge, the capacity of the human brain remains a mystery to be discovered, since any estimation depends heavily on proper cognitive and mathematical models of the brain. They further maintain that, despite the belief that Long-Term Memory (LTM) is fixed and static (a belief based on observations that the adult brain has completed its growth and reached a steady state), recent discoveries in neuroscience and informatics show that LTM is constantly being reconfigured, mainly at lower levels of the neural clusters. Working through mathematical and computational models, they propose 10^8432 bits as a rough estimate of human memory capacity. The authors further maintain that the brain does not create new neurons to represent new information; rather, it sets new synapses between existing neurons so that new information can be represented.
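The idea that information is stored by rewiring synapses among existing neurons can be illustrated combinatorially. The sketch below is only an illustration of how fast wiring-pattern counts grow; it is not Wang, Liu, and Wang's actual derivation of the 10^8432 figure, and the neuron and synapse counts are simply the rough values quoted above.

```python
from math import lgamma, log

def log10_comb(n: float, k: float) -> float:
    """log10 of the binomial coefficient C(n, k), computed via log-gamma
    so that astronomically large counts stay representable."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)) / log(10)

# Ways a single neuron could choose ~7,000 synaptic partners
# out of ~10^11 candidate neurons:
patterns = log10_comb(1e11, 7000)
print(f"~10^{patterns:,.0f} possible wiring patterns for one neuron")
```

Even one neuron's choice of partners admits on the order of 10^53,000 configurations, which is why capacity estimates based on synaptic configuration dwarf naive neuron counts.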
After all, the important fact to bear in mind is that, as Dudai (2011) rightly mentions, memory capacity means different things to different people:
Before embarking on the quest for pertinent data, it is useful to note that "memory capacity"
means different things to different people. Theoreticians may construe it as referring to the overall
storage capacity of an information-processing machine with the properties of brain. Those who are
more biologically oriented may wish to add that the capacity of brains is constrained by deterioration
with age and the finite life time of mortals. Experimental psychologists and neuropsychologists may
have in mind distinct memory systems and may also ask how much of the capacity of each system is
actually used. It is evident that the theoretical limit is larger than the real-life overall capacity, which
is larger than the capacity of specific systems, etc. (p. 343)
According to Dudai (2011), other attempts to determine the capacity of human memory were geared toward estimating the maximal amount of information that the human brain can be expected to perceive throughout life.
A pioneer in modern attempts to equate the brain with a machine, John von Neumann, estimated that the input per nerve cell per second in the human brain is 14 bits, and concluded that the information accumulated over a life span of 60 years is 2.8 × 10^20 bits (von Neumann, 1958). In the past 20 years both life expectancy and the estimate of the number of neurons in the brain
increased; therefore, using the same influx estimate one now gets 3.3 × 10^21 bits per lifetime. (p.
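Von Neumann's figures can be approximately reproduced. The neuron counts (10^10, then 10^11) and the assumed lifespans below are our assumptions chosen to match the quoted totals; they are not stated explicitly in the passage.

```python
# Reconstruct von Neumann's lifetime-information estimates.
bits_per_neuron_per_second = 14
seconds_per_year = 365.25 * 24 * 3600

# Original estimate: assumed ~10^10 neurons over a 60-year lifespan
original = bits_per_neuron_per_second * 1e10 * 60 * seconds_per_year
print(f"{original:.1e} bits")   # ~2.7e20, close to the quoted 2.8e20

# Updated estimate: assumed ~10^11 neurons over a ~75-year lifespan
updated = bits_per_neuron_per_second * 1e11 * 75 * seconds_per_year
print(f"{updated:.1e} bits")    # ~3.3e21, matching the quoted figure
```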
The conclusions drawn by Dudai (2011, pp. 347-348) are as follows:
1- Although the theoretical constructs of mathematicians, physicists, and other practitioners of artificial neural networks vary in their assumptions and rules, the bottom line is that the human brain (or, more accurately, its artificial simplified counterparts) is capable of a storage capacity that can account for the higher-end estimates obtained by either folk or experimental psychology. That is, we would not be able to astonish the current generation of modelers if we were to confront them with the views of St. Augustine or Haller; they surely would be able to come up with a model that reconciles with the data. The appreciation for this truly impressive capacity is clearly detected in various treatments of the subject in computer sciences and information theory.
2- Highly simplified number games suggest that, in theory, the human brain may be capable of storing all of the sensory information that it encounters throughout a lifetime.
3- Last, but not at all least: There is a link missing between theory and biology, without which
the calculation would indeed remain number games. Even if we were to know how many bits the
brain can store, this would not tell us how many memories we have because we do not know, first,
how much of the system is engaged in mnemonics and second, most importantly, how specific
pieces of information are encoded.
Short-Term Memory (STM) Capacity and Working Memory Capacity (WMC)
There are various estimates of the storage capacity of short-term memory. Cherniak (1983) considers it to be six meaningful units or chunks, such as randomly chosen words. Miller (1956, cited in Dudai, 2011) put it at seven plus or minus two items, and more recently the estimate has been revised to an average capacity of about four items (Cowan, 2001, cited in Morey, 2011).
According to Baddeley (1996), the concept of working memory differs from that of short-term memory in two respects. First, working memory is composed of a number of subsystems rather than being a unitary module; second, it emphasizes the functional role of memory in other cognitive tasks such as learning, reasoning, and comprehension, to the extent that the term functional capacity of working memory (Ricks & Wiley, 2009) is sometimes preferred. The storage component of working memory, according to Morey (2011), is considered a buffer that permits the manipulation of information coming from various sources, such as sensory stores or long-term memory. Regarding working memory capacity (WMC), there is general agreement that there is a limit on the amount of processable input, although different scholars propose different estimates. For example, Conway, Kane, and Engle (2003) correlate it with Spearman's g (general intelligence), Kyllonen and Christal (1990) equate working memory capacity with reasoning ability, and Baddeley and Hitch (1974, cited in Baddeley, 2003) propose two separate working memory capacities, one predicting verbal abilities and the other predicting visuospatial ones.
Gordon et al. (2002, cited in Fedorenko, Gibson, & Rohde, 2006) argued that working memory capacity in language processing should not be defined in terms of the number of items that can be kept active in memory during comprehension, but rather in terms of the amount of interference created by the item that must remain active in memory. Regarding the capacity of long-term memory, Brady, Konkle, Alvarez, and Oliva (2008) likewise reject the idea that the number of stored items can serve as an estimate of memory capacity, and propose that a proper estimate take into account the number of remembered items multiplied by the amount of information present within each item.
Those interested in knowing how the capacity of human memory is limited should not expect to be presented with more than a number game. Different numbers are reached by different scholars, owing to their particular orientations toward the issue. When by capacity of memory we mean
the items that are accumulated, and retrieval of the items already present in memory is not taken as a criterion for setting limits on memory capacity, it is much easier to join the camp of those who see no point in limiting memory capacity. Despite this disagreement, it can be concluded, as Dudai (2011) maintains, that current formal estimates of the maximal information in our brain remain tentative. By making peace between theory and biology, one is more likely to arrive at closer estimates of the capacity of human memory and thus to put an end to this number game. The very presence of such a number game implies that there is near agreement on setting limits on the capacity of human memory. The capacity of working memory, as an indispensable part of human memory, correlates highly with higher-order manipulations of items such as reasoning and comprehension. Recent views offer a more comprehensive definition of working memory capacity, defining it as the amount of interference created by the item that must remain active in memory.
Baddeley, A. D. (1996). The fractionation of working memory. Proc. Natl. Acad. Sci., 93, 13468-
Baddeley, A. D. (2003). Working memory: Humans. In J. H. Byrne (Ed.), Learning and memory (pp.
672-676). New York: MacMillan.
Brady, T. F., Konkle, T., Alvarez, G. A., & Oliva, A. (2008). Visual long-term memory has a massive
storage capacity for object details. PNAS, 105(38), 14325-14329.
Chen, N. S., Hsieh, S. W., & Kinshuk. (2008). Effects of short-term memory and content representation type on mobile language learning. Language Learning & Technology, 12(3).
Cherniak, C. (1983). Rationality and the structure of human memory. Synthese, 57, 163-186.
Conway, A. R. A., Kane, M. J., Engle, R. W. (2003). Working memory capacity and its relation to
general intelligence. Trends in Cognitive Sciences, 7(12), 547-552.
Dudai, Y. (2011). How big is human memory. Learning & Memory, 3, 341-365.
Fedorenko, E., Gibson, E., & Rohde, D. (2006). The nature of working memory capacity in
sentence comprehension: Evidence against domain-specific working memory resources.
Journal of Memory and Language, 54, 541-553.
Ivanova, M.V., & Hallowell, B. (2012). Validity of an eye-tracking method to index working
memory in people with and without aphasia. Aphasiology, 26, 556-578.
Kyllonen, P. C., & Christal, R. E. (1990). Reasoning ability is (little more than) working memory capacity. Intelligence, 14, 389-433.
Morey, R. D. (2011). A Bayesian hierarchical model for the measurement of working memory
capacity. Journal of Mathematical Psychology, 55, 8-24.
Murray, L. L. (2012). Attention and other cognitive deficits in aphasia: presence and relation to
language and communication measures. American Journal of Speech-Language
Pathology, 21, S51-S64.
Namaziandost, E., Hafezian, M., & Shafiee, S. (2018). Exploring the association among working
memory, anxiety and Iranian EFL learners’ listening comprehension. Asian-Pacific Journal of
Second and Foreign Language Education, 3(20), 1-17.
Ricks, T. R., & Wiley, J. (2009). The influence of domain-knowledge on the functional capacity of
working memory. Journal of Memory and Language, 10, 1-19.
Wang, Y., Liu, D., & Wang, Y. (2003). Discovering the capacity of human memory. Brain and Mind,
4, 189-198.
Wright, H. H., & Fergadiotos, G. (2012). Conceptualizing and measuring working memory and its relationship to aphasia. Aphasiology, 26, 258-278.