In patients with closed head injury, impaired sustained attention has been invoked to explain poor performance on complex tasks. However, this basic capacity has never been adequately investigated. We investigated sustained attention in an auditory vigilance task and found no evidence of an impairment. Eight patients, tested within the first six months after closed head injury, and eight healthy control subjects performed a low-event-rate vigilance task twice, with an interval of about three months between sessions. The task yielded measures of signal detection and response latency. In addition, the amplitude of the 0.10 Hz component of heart rate variability, a power spectral measure, was calculated as an index of sustained effort. The hypotheses that patients would show stronger time-on-task effects on performance and effort were not supported. Independently of sustained attention, patients differed from controls in response latencies and in the sensitivity with which they discriminated small differences in loudness, especially on the first occasion.
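The 0.10 Hz heart rate variability measure referred to above is a band-limited power estimate taken from the spectrum of the interbeat-interval series. A rough illustration only: the band edges, sampling rate, and the naive DFT below are assumptions for the sketch, not the study's actual analysis pipeline.

```python
import cmath
import math

def band_power(signal, fs, lo=0.07, hi=0.14):
    """Power of a signal in a frequency band, via a naive DFT.

    signal: equally spaced samples (e.g., a resampled interbeat-interval
    series); fs: sampling rate in Hz. The default band around 0.10 Hz is a
    hypothetical choice for illustration.
    """
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]  # remove the DC component
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f <= hi:
            coef = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                       for t in range(n))
            power += (abs(coef) ** 2) / n
    return power

# A pure 0.10 Hz oscillation falls inside the band; 0.30 Hz does not.
slow = [math.sin(2 * math.pi * 0.10 * t) for t in range(100)]
fast = [math.sin(2 * math.pi * 0.30 * t) for t in range(100)]
print(band_power(slow, 1.0) > band_power(fast, 1.0))
```

In practice a windowed estimator such as Welch's method would be preferred over this raw DFT, but the band-integration idea is the same.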
Schizophrenia is considered a brain disease with a quite heterogeneous clinical presentation. Studies in schizophrenia have yielded a wide array of correlations between structural and functional brain changes and clinical and cognitive symptoms. Reductions of grey matter volume (GMV) in the prefrontal and temporal cortex have been described that are thought to be crucial for the development of positive and negative symptoms and impaired working memory (WM). Associations between GMV reduction and positive and negative symptoms as well as WM impairment were assessed in schizophrenia patients (symptomatology in 34, WM in 26) and compared to healthy controls (36 in total, WM in 26). GMV was determined by voxel-based morphometry, and its relation to positive and negative symptoms as well as WM performance was assessed. In schizophrenia patients, reductions of GMV were evident in the anterior cingulate cortex, ventrolateral prefrontal cortex (VLPFC), superior temporal cortex, and insula. GMV reductions in the superior temporal gyrus (STG) were associated with positive symptom severity as well as WM impairment. Furthermore, the absolute GMV of the VLPFC was strongly related to negative symptoms, which in turn predicted WM performance as well as processing speed. The present results support the assumption of two distinct pathomechanisms responsible for impaired WM in schizophrenia: (1) GMV reductions in the VLPFC predict the severity of negative symptoms; increased negative symptoms in turn are associated with slowed processing speed and predict impaired WM. (2) GMV reductions in the temporal and mediofrontal cortex are involved in the development of positive symptoms and likewise impair WM performance.
There is an academic dispute regarding the role of the right hemisphere in language processing. Transcranial Magnetic Stimulation (TMS) was used to test the hypothesis that Wernicke's area processes dominant meanings ("teller") whereas its right homologue processes subordinate meanings ("river") of ambiguous words ("bank"; Jung-Beeman, 2005).
Participants were asked to make a semantic decision on ambiguous words that were followed either by unrelated words or by words associated with their dominant or subordinate meanings. A 10 Hz TMS train was applied on each trial over CP5 (left Wernicke), CP6 (right Wernicke) or Cz (vertex) scalp positions, and was synchronized with the word presentation.
Accuracy and d' analyses revealed a TMS LOCATION by MEANING interaction. TMS over Wernicke's area resulted in more accurate responses and higher sensitivity in dominant-meaning blocks than stimulation of the right Wernicke's area or the vertex. In contrast, TMS over the right Wernicke's area resulted in more accurate responses and higher sensitivity in subordinate-meaning blocks than stimulation of the left Wernicke's area or the vertex.
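The sensitivity measure d' used above is standardly computed as the difference between the z-transformed hit and false-alarm rates. A minimal sketch in Python; the log-linear correction and the example trial counts are illustrative assumptions, not values from this study.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    # Log-linear correction keeps the rates away from exactly 0 or 1,
    # where the inverse normal CDF diverges.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical block: 40 hits / 10 misses, 10 false alarms / 40 correct rejections.
print(round(d_prime(40, 10, 10, 40), 2))
```

Higher d' means better discrimination of signal from noise independently of response bias, which is why it complements raw accuracy in the analysis described above.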
The left and right Wernicke's areas function as processors of dominant and subordinate meanings of ambiguous words, respectively. While previous research methods have yielded inconclusive results, TMS proved to be a useful tool for demonstrating a causal role of the two brain regions in a double dissociation design with healthy subjects.
Although there are many opportunities to study memory in patients with Alzheimer's disease (AD) in the laboratory, there are few opportunities to study memory for real world events in these patients. The September 11, 2001 terrorist attacks provided one such opportunity. Patients with AD, patients with mild cognitive impairment (MCI), and healthy older adults were given a telephone questionnaire in the initial weeks after the event, again three to four months later, and finally one year afterwards to evaluate their memory for the September 11, 2001 terrorist attacks. We were particularly interested in using the attacks as an opportunity to examine the decline of episodic memory in patients with AD, patients with MCI, and older adult controls over a period of months. We found that compared to healthy older adults, patients with AD and MCI showed impaired memory at the initial time point, more rapid forgetting from the initial to the three-month time point, and very similar changes in memory from the three-month to the one-year time point. We speculated that these findings were consistent with patients with AD and MCI showing initial impaired encoding and a more rapid rate of forgetting compared with healthy older adults, but that once the memories had been consolidated, their decay rate became similar to that of healthy older adults. Lastly, although memory distortions were common among all groups, they were greatest in the patients with AD.
In animal studies, brain-derived neurotrophic factor (BDNF) is an important regulator of central nervous system development and synaptic plasticity. WAGR (Wilms tumour, Aniridia, Genitourinary anomalies, and mental Retardation) syndrome is caused by 11p13 deletions of variable size near the BDNF locus and can serve as a model for studying human BDNF haploinsufficiency (+/-). We hypothesized that BDNF+/- would be associated with more severe cognitive impairment in subjects with WAGR syndrome. Twenty-eight subjects with WAGR syndrome (6-28 years), 12 subjects with isolated aniridia due to PAX6 mutations/microdeletions (7-54 years), and 20 healthy controls (4-32 years) received neurocognitive assessments. Deletion boundaries for the subjects in the WAGR group were determined by high-resolution oligonucleotide array comparative genomic hybridization. Within the WAGR group, BDNF+/- subjects (n = 15), compared with BDNF intact (+/+) subjects (n = 13), had lower adaptive behaviour (p = .02), reduced cognitive functioning (p = .04), higher levels of reported historical (p = .02) and current (p = .02) social impairment, and a higher percentage meeting the cut-off score for autism (p = .047) on the Autism Diagnostic Interview-Revised. These differences remained nominally significant after adjusting for visual acuity. Using diagnostic measures and clinical judgement, 3 subjects (2 BDNF+/- and 1 BDNF+/+) in the WAGR group (10.7%) were classified with autism spectrum disorder. A comparison group of visually impaired subjects with isolated aniridia had cognitive functioning comparable to that of healthy controls. In summary, among subjects with WAGR syndrome, BDNF+/- subjects had a mean Vineland Adaptive Behaviour Composite score that was 14 points lower and a mean intelligence quotient (IQ) that was 20 points lower than those of BDNF+/+ subjects. Our findings support the hypothesis that BDNF plays an important role in human neurocognitive development.
We report on a patient, LM, with a Korsakoff's syndrome who showed the unusual tendency to consistently provide a confabulatory answer to episodic memory questions for which the predicted and most frequently observed response in normal subjects and in confabulators is "I don't know". LM's pattern of confabulation, which we refer to as confabulatory hypermnesia, cannot be traced back to any more basic and specific cognitive deficit and is not associated with any particularly unusual pattern of brain damage. Making reference to the Memory, Consciousness and Temporality Theory - MCTT (Dalla Barba, 2002), we propose that LM shows an expanded Temporal Consciousness - TC, which overflows the limits of time ("Do you remember what you did on March 13, 1985?") and of details ("Do you remember what you were wearing on the first day of summer in 1979?") that are usually respected in normal subjects and in confabulating patients.
We have shown that dichotically presented nonsense syllables, with strict acoustic and phonetic control of the stimuli, generate a right ear advantage that is essentially fixed by 5 years of age. What seems to change with age in our sample is total accuracy, the phonetic content and nature of the errors, and something that, for want of a better term, we might call “capacity” of the hypothesized left-hemisphere speech processor.
Preoperative measures of intelligence are compared with retests 8 and 14 years after frontal lobe surgery. All Ss were chronic psychotics at the time of the first tests. The long-term retests are reported for 33 nonoperated controls, for 16 patients with lower frontal lobe surgery (Orbital Topectomy) and for 18 patients with upper frontal lobe surgery (Superior Topectomy). Previous research showed that both groups of operated patients scored lower on intelligence tests shortly after surgery than they did before. The objectives of the long-term retests concerned: (1) the permanence of changes, and (2) the differential effects of lower and upper forebrain surgery. The lower forebrain patients obtain long-term scores remarkably comparable to those of the nonoperated controls. The Superior Topectomy patients show significant loss 8 years after surgery, and this loss persists 14 years after surgery. The loss is both permanent and appreciable, equivalent to some 10 points in I.Q. Verbal and numerical tests reflect the permanent loss more clearly than perceptual and construction tests. The loss associated with upper frontal lobe surgery involves sustained attention, problem solving, and other intellectual functions.
Hemisphere specialization for mental rotation was investigated using Shepard's (1971) paradigm. In each of two experiments, the procedure involved presenting pairs of novel non-verbal stimuli at various angles of disparity. Subjects were instructed to construct a mental image of one stimulus, rotate this image, and judge whether or not the image was a congruent match with its mate. Both response time and accuracy were measured. In Experiment 1, the testing of right-handed normals revealed a significant left visual field advantage for accuracy (p < .0001) and response time (p < .05). In Experiment 2, a comparison of right parietal lesioned patients with both left parietal lesioned patients and matched normal controls likewise revealed significant right lesion effects for accuracy (p < .0001) and response time (p < .01). Right hemisphere specialization for mental rotation was documented for both normals and brain-damaged subjects.
180 normal children, ages five through ten years, were given a set of nine naming tests, each involving 50 timed responses to five randomly recurring pictured objects, colors, letters or numbers. “Automatization” of naming, measured by speed, accuracy, and consistency on these tasks, did not parallel the developmental order of acquisition of the various categories of names; letters and numbers were named relatively faster than were colors and objects as early as age six years. Children's facility in “automatization” of naming different semantic categories is considered in terms of the contributions of overlearning, stimulus discriminability, “operativity”, word frequency and response competition; only the last two appear to be explanatory factors.
In his short paper of 1886, the neogrammarian linguist Delbrück sketches his views on normal language processing and their relevance for the interpretation of some of the symptoms of progressive anomic aphasia. In particular, he discusses proper name impairments, verb and abstract noun superiority, and the predominance of semantically related errors. Furthermore, he suggests that part of speech, morphology and word order may be preserved in this condition. This historical document has fallen into oblivion, but the original ideas and their relevance for contemporary discussions merit a revival.
The early history of developmental language impairment in late 19th century Britain is considered through the critical examination of three papers appearing in 1891 by Hadden, Golding-Bird and Hale White, and Taylor. They represent innovative investigations of child language disorders whose themes and concerns are resonant today. The term 'idioglossia' was coined to identify this new impairment and reflected the belief by some that these children spoke an invented language. Rather than viewing these children as having some constitutional deficiency, these 19th century physicians were novel in insisting that children with language impairments merited extensive clinical investigation and treatment. Their case descriptions and the subsequent debates regarding classification and prognosis are reviewed. Further consideration is given to how these cases led to questioning the relation between language and speech and other aspects of child development and disorder. Reflection on the early sources of clinical categories provides a new perspective on our current formulations for variation in developmental language trajectories.
According to the 'Kennard Principle', there is a negative linear relation between age at brain injury and functional outcome. Other things being equal, the younger the lesioned organism, the better the outcome. But the 'Kennard Principle' is neither Kennard's nor a principle. In her work, Kennard sought to explain the factors that predicted functional outcome (age, to be sure, but also staging, laterality, location, and number of brain lesions, and outcome domain) and the neural mechanisms that altered the lesioned brain's functionality. This paper discusses Kennard's life and years at Yale (1931-1943); considers the genesis and scope of her work on early-onset brain lesions, which represents an empirical and theoretical foundation for current developmental neuropsychology; offers an historical explanation of why the 'Kennard Principle' emerged in the context of early 1970s work on brain plasticity; shows why uncritical belief in the 'Kennard Principle' continues to shape current research and practice; and reviews the continuing importance of her work.
We report the case of a 48-year-old woman who, after a severe closed head injury, developed a severe and persistent disruption of retrograde memory, associated with a mild impairment of learning abilities. The patient's dense amnesia spared only the childhood period and included both explicit memory (autobiographical and semantic) and procedural skills. Because of her partially spared learning ability and intact language, intensive training by family members resulted in the reacquisition and retention of many autobiographical events and of some skills she had lost after the accident. Brain CT scan and MRI were normal; a PET study with (18F)FDG revealed a significant bilateral reduction of metabolism in the hippocampus and anterior cingulate cortex, suggesting a role for these structures in memory for past events.
Ebbinghaus' seminal work suggested that forgetting occurred as a function of time. However, it raised a number of fundamental theoretical issues that still have not been resolved in the literature. Müller and Pilzecker (1900) addressed some of these issues in a remarkable manner, but their observations have been mostly ignored in recent years. They showed that the materials and the task that intervene between presentation and recall may interfere with the to-be-remembered items, and they named this phenomenon "retroactive interference" (RI). They further asked whether there is a type of RI that is based only on distraction, and not on the similarity between the memoranda and the interfering stimuli. Their findings, and our follow-up research in healthy volunteers and amnesiacs, confirm that forgetting can be induced by any subsequent mentally effortful interpolated task, irrespective of its content; the interpolated "interfering" material does not have to be similar to the to-be-remembered stimuli.
This review examined Dr. Harvey Cushing's cases in the surgical records of Johns Hopkins Hospital from 1896 to 1912. Forty-one patients who underwent cortical stimulation for intra-operative motor mapping were selected for further analysis. We demonstrate that Cushing used cortical stimulation to define primary motor and sensory cortices in the treatment of tumors, trauma, and epilepsy, within adult and pediatric populations. In addition, he performed stimulation of sub-cortical white matter during 4 of these surgeries, setting the stage for contemporary use of this technique in improving post-operative outcomes. This review of Cushing's early intra-operative motor mapping illuminates his contributions, and clarifies his influence on the evolution of cortical mapping from an experimental technique to a staple of contemporary neurosurgery.
This article focuses on a series of six studies that address functional localization in the frontal lobe; they were published in Argentina between 1906 and 1909 by Christfried Jakob (1866-1956), one of the great thinkers in early 20th century neuropathology and neurophilosophy. At that time, the localization-holism controversy was at a peak, having been triggered by the historic Marie-Déjerine aphasiology debate. Jakob held the view that constitutive physiological elements of cognition are localized. Nonetheless, he cast doubt on phrenological approaches that considered the frontal lobe as 'superior' to the other cortical regions. Jakob studied the human frontal lobe from fetal life through senility, in normality and pathology, including tumors, injuries, softening, general paralysis and dementia. Based on those findings, he considered strict localization theories a dead-end. Taking a critical look at Flechsig's ideas on the parallel ontogenies of frontal association centers and intellect, Jakob argued that the frontal lobe does not carry any selective advantage over the remaining human cerebral lobes or even over the frontal lobe in non-human primates. Regarding lesion experiments in laboratory animals, he pointed to methodological caveats, such as insufficient recovery time, that may lead to misleading conclusions, and rejected élite brain research, calling it superficial and inexact. Jakob was convinced that the verification of the anatomical connections of the frontal lobe would elucidate its functions. Thus, he viewed the frontal lobe as a central station receiving input via olfactory pathways and thalamic radiations, pertinent to muscular and cutaneous senses, and attributed a perceptive character to a brain region traditionally associated with productive functions. Modern neuroscience seems to support Jakob's rejection of distinguishable motor and sensory regions and to adopt a cautious stance concerning oversimplified localization views.
Dichhaptic matching of 3-D nonsense shapes was used to assess sex-specific differences in haptics. In an initial object exploration/description phase, strategy was manipulated with instructions to encode each object using either a visual image (which was drawn) or a verbal description (tape recorded). These drawings or tape recordings were subsequently used on their own to identify each object. Attempts were made to maintain similarity between the verbal and spatial procedures, to avoid methodological biasing of hand superiorities (e.g., the same number of alternatives and the same presentation times were used). However, no differences in hand superiority emerged. To obtain a broader perspective, a critical review table was compiled of the entire somatosensory asymmetry literature. Clear patterns emerged for all task types when results potentially stemming from methodological biases and those based on only certain levels of samples were excluded. Somatosensory perceptual asymmetries are not robust, although hand superiorities are in the predicted direction when they do occur; nevertheless, we find little support for sex-specific asymmetries in these studies. Dichhaptic presentations lack the efficacy of other somatosensory (as well as tachistoscopic) tasks overall, possibly because of the time scale necessary for free haptic exploration.
This paper reports the results of a revised replication of the von Stockert/Bader constituent ordering study (1976) with German agrammatics. Part 1 describes why such a replication was necessary. In Part 2, the original study is summarized and the weaknesses that prompted the revision are identified. Part 3 reports the replication study with 10 German agrammatics. The findings of the original study with respect to Broca's aphasics could not be replicated: instead of an almost total loss of morphosyntactic sensitivity, there was almost total preservation. Part 4 eliminates the possibility of a population artefact by reporting on the performance of 3 patients with global aphasia. Part 5 integrates the results of this study within a larger framework of syntax and inflectional morphology in (German) agrammatism.
Millar (1987) argues that when braille readers scan a passage of text with the two hands side by side, the reading fingers actually alternate contacts with the letters. Reanalysis of her data shows that they provide little support for that claim. On the other hand, unlike a majority of the subjects studied by Bertelson et al. (1985), Millar's subjects apparently do not engage in simultaneous exploration of the end of one line and the beginning of the next by different fingers. The suggestion offered by Millar that our findings result from an inadequate analysis of the data is rejected. The difference between the two sets of data should, for the time being, be considered as reflecting genuine differences between the populations sampled in the two studies.
Buck and Duffy (1980) and Borod et al. (1985) found evidence of deficits in spontaneous expressiveness in right brain-damaged (RBD) patients relative to left brain-damaged (LBD) patients and controls. Using the Facial Action Coding System (FACS), Mammucari et al. (1988) failed to replicate this result and questioned our methods and findings. This paper replies (a) that Mammucari et al.'s (1988) review of our work is selective and misleading; (b) that there are aspects of their study that can account for their null results, including the insufficient sensitivity of FACS for the measurement of spontaneous expressiveness; and (c) that the results of Mammucari et al. (1988) regarding "aversive eye movements" to a negative film in LBD and control, but not RBD, patients are in fact compatible with our findings. This paper also suggests a general strategy for the objective and comprehensive analysis of spontaneous emotional expressiveness.
A review of recent studies of prosopagnosia suggests that the weight of evidence has shifted in favor of regarding it as a disability that can be produced by a right hemisphere lesion alone, even though bilateral disease remains the more frequent anatomical basis. It is possible that prosopagnosia resulting from a right hemisphere lesion occurs only within the context of some atypical condition of the left hemisphere. "Types" of prosopagnosia continue to be postulated and the "identification of individuality" hypothesis continues to be advanced. Autonomic and covert recognition studies of prosopagnosic patients have described a new dimension in facial identification. Right hemisphere dominance for the discrimination of unfamiliar faces in non-aphasic patients has been confirmed, but the performances of left-hemisphere damaged aphasic patients have still not been fully investigated. New developments include the study of developmental prosopagnosia and novel applications of tests of facial discrimination.
The stated mission of Cortex is "the study of the inter-relations of the nervous system and behavior, particularly as these are reflected in the effects of brain lesions on cognitive functions." The purpose of this paper is to explore the relationship between the stated mission and the executed mission as reflected by the characteristics of papers published in Cortex. In addition, we examine whether the results and conclusions of an analysis of this kind are affected by the level of description of the published papers.
A) Identify characteristics of contributors to Cortex; B) Identify characteristics of those who cite Cortex; C) Identify recurring themes; D) Identify the relationships among the recurring themes; E) Compare recurring themes and determine their relationships to the mission of Cortex; F) Identify the sensitivity of these results to the level of description of the Cortex papers used as the source database; G) Compare Cortex characteristics with those of Neuropsychologia, another Europe-based international neuropsychology journal.
Text mining (extraction of useful information from text) was used to generate the characteristics of the journal Cortex. Bibliometrics provided the Cortex contributor infrastructure (author/organization/country/citation distributions), and computational linguistics identified the recurring technical themes and their inter-relationships. Citation mining (the integration of citation bibliometrics and text mining) was used to profile the research user community. Four levels of published article description were compared for the analysis: Full Text, Abstract, Title, Keywords.
Results and conclusions:
Highly cited documents were compared among Cortex, Neuropsychologia, and Brain, and a number of interesting parametric trends were observed. The characteristics of the papers that cite Cortex papers were examined, and some interesting insights were generated. Finally, the document clustering taxonomy showed that papers in Cortex can be reasonably divided into four categories (number of papers in each category in parentheses): Semantic Memory (151); Handedness (145); Amnesia (119); and Neglect (66). It is concluded that Cortex needs to take steps to attract a more diverse group of contributors outside its continental Western European base if it wishes to capture a greater share of seminal neuropsychology papers. Further investigation of the critical citation differences reported in the paper is recommended.
Dehaene et al. (1993, Experiment 6) presented evidence that the mental number line is left-to-right oriented with respect to representational associations and not with respect to left and right hands. Here we tried to replicate the study of Dehaene et al. (1993) in a larger sample (n = 32) using four different stimulus notations (Arabic numbers, number words, auditory number words, and dice patterns). As in the study by Dehaene et al. (1993), the spatial numerical association of response codes (SNARC) effect was examined with an incongruent hand assignment to left/right response keys (crossed hands). In contrast to Dehaene et al. (1993), we did not observe a SNARC effect in any condition. Power analyses revealed that n = 32 should have been large enough to detect SNARC effects of usual size. Furthermore, time-course analyses revealed no SNARC slope in faster and slower responses, so that the null effect could not be due to relatively slow responses with crossed hands. Joint analyses with previous data (Nuerk et al., 2005b) revealed significantly steeper SNARC slopes with congruent hand assignment, and no interaction between hand assignment and notation. Altogether, these findings suggest that the results of Dehaene et al. (1993) only hold under specific conditions. Differences between studies are discussed. We suggest that spatial context has an influence on the SNARC effect and that hand-based associations (and not only representational associations) are relevant for the SNARC effect.
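The SNARC slope analyses described above conventionally quantify the effect per participant as the regression slope of dRT (right-hand minus left-hand response time) on number magnitude, with negative slopes indicating the small-left/large-right association. A minimal least-squares sketch; the participant data below are invented for illustration, not taken from the study.

```python
def snarc_slope(magnitudes, drt):
    """Ordinary least-squares slope of dRT (right-hand RT minus left-hand RT,
    in ms) on number magnitude; a negative slope is the classic SNARC effect."""
    n = len(magnitudes)
    mx = sum(magnitudes) / n
    my = sum(drt) / n
    num = sum((x - mx) * (y - my) for x, y in zip(magnitudes, drt))
    den = sum((x - mx) ** 2 for x in magnitudes)
    return num / den

# Hypothetical participant: dRT decreases with magnitude, giving a negative slope.
print(snarc_slope([1, 2, 3, 4, 6, 7, 8, 9],
                  [28, 21, 12, 6, -7, -13, -19, -27]))
```

Group-level tests (e.g., whether mean slopes differ from zero, or between hand assignments) then treat each participant's slope as one observation, which is how the joint analyses with the Nuerk et al. (2005b) data compare slope steepness across conditions.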