ABSTRACT: In this study we present a kernel-based convolution model to characterize
neural responses to natural sounds by decoding their time-varying acoustic
features. The model makes it possible to decode natural sounds from high-dimensional
neural recordings, such as magnetoencephalography (MEG), that track timing and
location of human cortical signalling noninvasively across multiple channels.
We used the MEG responses recorded from subjects listening to acoustically
different environmental sounds. By decoding the stimulus frequencies from the
responses, our model distinguished between two different
sounds that it had never encountered before with 70% accuracy. Convolution
models typically decode frequencies that appear at a certain time point in the
sound signal by using neural responses from that time point until a certain
fixed duration of the response. Using our model, we evaluated several fixed
durations (time-lags) of the neural responses and observed auditory MEG
responses to be most sensitive to spectral content of the sounds at time-lags
of 250 ms to 500 ms. The proposed model should be useful for determining what
aspects of natural sounds are represented by high-dimensional neural responses
and may reveal novel properties of neural signals.
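The convolution-model idea above (decoding the stimulus frequencies at time t from the neural responses in a fixed window following t) can be sketched on synthetic data. Everything here, including the dimensions, the ridge decoder, and the helper functions, is an illustrative assumption, not the study's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
T, n_freq, n_chan, n_lags = 500, 4, 16, 5  # hypothetical sizes

# Synthetic stimulus spectrogram (time x frequency bins).
S = rng.standard_normal((T, n_freq))

def delayed(X, n):
    """Stack copies of X delayed by 0..n-1 samples along the feature axis."""
    out = np.concatenate([np.roll(X, k, axis=0) for k in range(n)], axis=1)
    out[:n] = 0  # zero out wrapped-around samples
    return out

# Forward model: each "MEG channel" mixes the spectrogram at several lags.
W = rng.standard_normal((n_lags * n_freq, n_chan))
R = delayed(S, n_lags) @ W + 0.1 * rng.standard_normal((T, n_chan))

def window(X, n):
    """Responses from time t up to t+n-1, used as features for decoding time t."""
    out = np.concatenate([np.roll(X, -k, axis=0) for k in range(n)], axis=1)
    out[-n:] = 0
    return out

# Train the decoder on the first part of the recording, test on a held-out part.
X = window(R, n_lags)
decoder = Ridge(alpha=1.0).fit(X[:400], S[:400])
pred = decoder.predict(X[400:495])

# The predicted spectrogram should correlate with the true held-out stimulus.
true_corr = np.corrcoef(pred.ravel(), S[400:495].ravel())[0, 1]
```

Evaluating several window lengths `n_lags` would correspond to the time-lag analysis described in the abstract.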
ABSTRACT: Temporal and frontal activations have been implicated in learning of novel word forms, but
their specific roles remain poorly understood. The present magnetoencephalography (MEG)
study examines the roles of these areas in processing newly-established word form representations. The cortical effects related to acquiring new phonological word forms during incidental learning were localized. Participants listened to and repeated back new word form stimuli that adhered to native phonology (Finnish pseudowords) or were foreign (Korean words), with a subset of the stimuli recurring four times. Subsequently, a modified 1-back task and a recognition task addressed whether the activations modulated by learning were related to planning for overt articulation, while parametrically added noise probed reliance on developing memory representations during effortful perception. Learning resulted in decreased left superior temporal and increased bilateral frontal premotor activation for familiar compared to new items. The left temporal learning effect persisted in all tasks and was strongest when stimuli were embedded in intermediate noise. In the noisy conditions, native phonotactics evoked overall enhanced left temporal activation. In contrast, the frontal learning effects were present only in
conditions requiring overt repetition and were more pronounced for the foreign language. The
results indicate a functional dissociation between temporal and frontal activations in learning
new phonological word forms: the left superior temporal responses reflect activation of newly established word-form representations, also during degraded sensory input, whereas the frontal
premotor effects are related to planning for articulation and are not preserved in noise.
PLoS ONE 05/2015; 10(5):e0126652. DOI:10.1371/journal.pone.0126652
ABSTRACT: Specific language impairment (SLI) is associated with enduring problems in language-related functions. We followed the spatiotemporal course of cortical activation in SLI using magnetoencephalography. In the experiment, children with normal and impaired language development heard spoken real words and pseudowords presented only once or twice in a row. In typically developing children, the activation in the bilateral superior temporal cortices was attenuated to the second presentation of the same word. In SLI children, this repetition effect was nearly nonexistent in the left hemisphere. Furthermore, the activation was equally strong to words and pseudowords in SLI children, whereas in the typically developing children the left-hemisphere activation persisted longer for pseudowords than words. Our results indicate that the short-term maintenance of linguistic activation that underlies spoken word recognition is defective in SLI, particularly in the left, language-dominant hemisphere. The unusually rapid decay of speech-evoked activation can contribute to impaired vocabulary growth.
Brain and Language 03/2014; 130:11–18. DOI:10.1016/j.bandl.2014.01.005
ABSTRACT: Animal and human studies have frequently shown that in primary sensory and motor regions the BOLD signal correlates positively with high-frequency and negatively with low-frequency neuronal activity. However, recent evidence suggests that this relationship may also vary across cortical areas. Detailed knowledge of the possible spectral diversity between electrophysiological and hemodynamic responses across the human cortex would be essential for neural-level interpretation of fMRI data and for informative multimodal combination of electromagnetic and hemodynamic imaging data, especially in cognitive tasks. We applied multivariate partial least squares correlation analysis to MEG-fMRI data recorded in a reading paradigm to determine the correlation patterns between the data types, at once, across the cortex. Our results revealed heterogeneous patterns of high-frequency correlation between MEG and fMRI responses, with marked dissociation between lower and higher order cortical regions. The low-frequency range showed substantial variance, with negative and positive correlations manifesting at different frequencies across cortical regions. These findings demonstrate the complexity of the neurophysiological counterparts of hemodynamic fluctuations in cognitive processing.
ABSTRACT: Ten participants learned a miniature language (Anigram), which they later employed to verbally describe a pictured event. Using magnetoencephalography, the cortical dynamics of sentence production in Anigram were compared with those in the native tongue from the preparation phase up to the production of the final word. At the preparation phase, a cartoon image with two animals prompted the participants to plan either the corresponding simple sentence (e.g., "the bear hits the lion") or a grammar-free list of the two nouns ("the bear, the lion"). For the newly learned language, this stage induced stronger left angular and adjacent inferior parietal activations than for the native language, likely reflecting a higher load on lexical retrieval and short-term memory storage. The preparation phase was followed by a cloze task where the participants were prompted to produce the last word of the sentence or word sequence. Production of the sentence-final word required retrieval of rule-based inflectional morphology and was accompanied by increased activation of the left middle superior temporal cortex that did not differ between the two languages. Activation of the right temporal cortex during the cloze task suggested that this area plays a role in integrating word meanings into the sentence frame. The present results indicate that, after just a few days of exposure, the newly learned language harnesses the neural resources for multiword production much the same way as the native tongue and that the left and right temporal cortices seem to have functionally different roles in this processing.
ABSTRACT: Over the past decade, various techniques have been proposed for localization of cerebral sources of oscillatory activity on the basis of magnetoencephalography (MEG) or electroencephalography recordings. Beamformers in the frequency domain, in particular, have proved useful in this endeavor. However, the localization accuracy and efficacy of such spatial filters can be markedly limited by bias from correlation between cerebral sources and short duration of source activity, both essential issues in the localization of brain data. Here, we evaluate a method for frequency-domain localization of oscillatory neural activity based on the relevance vector machine (RVM). RVM is a Bayesian algorithm for learning sparse models from possibly overcomplete data sets. The performance of our frequency-domain RVM method (fdRVM) was compared with that of dynamic imaging of coherent sources (DICS), a frequency-domain spatial filter that employs a minimum variance adaptive beamformer (MVAB) approach. The methods were tested on both simulated and real data. Two types of simulated MEG data sets were generated, one with continuous source activity and the other with transiently active sources. The real data sets were from slow finger movements and resting state. Results from simulations show comparable performance for DICS and fdRVM at high signal-to-noise ratios and low correlation. At low SNR or in conditions of high correlation between sources, fdRVM performs markedly better. fdRVM was successful on real data as well, indicating salient focal activations in the sensorimotor area. The resulting high spatial resolution of fdRVM and its sensitivity to low-SNR transient signals could be particularly beneficial when mapping event-related changes of oscillatory activity.
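The sparse Bayesian learning at the heart of the RVM can be illustrated with scikit-learn's `ARDRegression`, a closely related automatic-relevance-determination scheme; the toy "leadfield" and all sizes below are assumptions for illustration, not the fdRVM implementation:

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(2)
n_sensors, n_sources = 60, 200  # overcomplete: far more sources than sensors

# Toy forward model (leadfield) and a 3-sparse source configuration.
G = rng.standard_normal((n_sensors, n_sources))
x_true = np.zeros(n_sources)
x_true[[10, 50, 120]] = [3.0, -2.0, 4.0]
y = G @ x_true + 0.1 * rng.standard_normal(n_sensors)

# ARD prunes irrelevant sources, yielding a sparse, focal estimate.
ard = ARDRegression().fit(G, y)
top = set(np.argsort(np.abs(ard.coef_))[-3:])
```

Sparsity is what lets the method remain focal even when the sensor data could be explained by many diffuse source patterns.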
ABSTRACT: Neural processes are explored through macroscopic neuroimaging and microscopic molecular measures, but the two levels remain primarily detached. The identification of direct links between the levels would facilitate use of imaging signals as probes of genetic function and, vice versa, access to molecular correlates of imaging measures. Neuroimaging patterns have been mapped for a few isolated genes, chosen based on their connection with a clinical disorder. Here we propose an approach that allows an unrestricted discovery of the genetic basis of a neuroimaging phenotype in the normal human brain. The essential components are a subject population that is composed of relatives and selection of a neuroimaging phenotype that is reproducible within an individual and similar between relatives but markedly variable across a population. Our present combined magnetoencephalography and genome-wide linkage study in 212 healthy siblings demonstrates that auditory cortical activation strength is highly heritable and, specifically in the right hemisphere, regulated oligogenically with linkages to chromosomes 2q37, 3p12, and 8q24. The identified regions delimit as candidate genes TRAPPC9, operating in neuronal differentiation, and ROBO1, regulating projections of thalamocortical axons. Identification of normal genetic variation underlying neurophysiological phenotypes offers a non-invasive platform for an in-depth, concerted capitalization of molecular and neuroimaging levels in exploring neural function.
The Journal of Neuroscience : The Official Journal of the Society for Neuroscience 10/2012; 32(42):14511-8. DOI:10.1523/JNEUROSCI.1483-12.2012
ABSTRACT: Incidental learning of phonological structures through repeated exposure is an important component of native and foreign-language vocabulary acquisition that is not well understood at the neurophysiological level. It is also not settled when this type of learning occurs at the level of word forms as opposed to phoneme sequences. Here, participants listened to and repeated back foreign phonological forms (Korean words) and new native-language word forms (Finnish pseudowords) on two days. Recognition performance was improved, repetition latency became shorter and repetition accuracy increased when phonological forms were encountered multiple times. Cortical magnetoencephalography responses occurred bilaterally, but the experimental effects emerged only in the left hemisphere. Superior temporal activity at 300-600 ms, probably reflecting acoustic-phonetic processing, lasted longer for foreign phonology than for native phonology. Formation of longer-term auditory-motor representations was evidenced by a decrease of a spatiotemporally separate left temporal response and a correlated increase of left frontal activity at 600-1200 ms on both days. The results point to item-level learning of novel whole-word representations.
ABSTRACT: We present a methodological approach employing magnetoencephalography (MEG) and machine learning techniques to investigate the flow of perceptual and semantic information decodable from neural activity in the half second during which the brain comprehends the meaning of a concrete noun. Important information about the cortical location of neural activity related to the representation of nouns in the human brain has been revealed by past studies using fMRI. However, the temporal sequence of processing from sensory input to concept comprehension remains unclear, in part because of the poor time resolution provided by fMRI. In this study, subjects answered 20 questions (e.g. is it alive?) about the properties of 60 different nouns prompted by simultaneous presentation of a pictured item and its written name. Our results show that the neural activity observed with MEG encodes a variety of perceptual and semantic features of stimuli at different times relative to stimulus onset, and in different cortical locations. By decoding these features, our MEG-based classifier was able to reliably distinguish between two different concrete nouns that it had never seen before. The results demonstrate that there are clear differences between the time course of the magnitude of MEG activity and that of decodable semantic information. Perceptual features were decoded from MEG activity earlier in time than semantic features, and features related to animacy, size, and manipulability were decoded consistently across subjects. We also observed that regions commonly associated with semantic processing in the fMRI literature may not show high decoding results in MEG. We believe that this type of approach and the accompanying machine learning methods can form the basis for further modeling of the flow of neural information during language processing and a variety of other cognitive processes.
ABSTRACT: Human speech features rhythmicity that frames distinctive, fine-grained speech patterns. Speech can thus be counted among rhythmic motor behaviors that generally manifest characteristic spontaneous rates. However, the critical neural evidence for tuning of articulatory control to a spontaneous rate of speech has not been uncovered. The present study examined the spontaneous rhythmicity in speech production and its relationship to cortex-muscle neurocommunication, which is essential for speech control. Our MEG results show that, during articulation, coherent oscillatory coupling between the mouth sensorimotor cortex and the mouth muscles is strongest at the frequency of spontaneous rhythmicity of speech at 2-3 Hz, which is also the typical rate of word production. Corticomuscular coherence, a measure of efficient cortex-muscle neurocommunication, thus reveals behaviorally relevant oscillatory tuning for spoken language.
The Journal of Neuroscience : The Official Journal of the Society for Neuroscience 03/2012; 32(11):3786-90. DOI:10.1523/JNEUROSCI.3191-11.2012
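Coherence of the kind reported above can be computed with `scipy.signal.coherence`. In this minimal sketch, a simulated 2.5 Hz shared drive stands in for the speech rhythm coupling cortex and muscle; the sampling rate, noise levels, and signal model are all illustrative assumptions:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(3)
fs = 200.0                       # sampling rate in Hz (hypothetical)
t = np.arange(0, 60.0, 1 / fs)   # one minute of data

# A shared 2.5 Hz drive stands in for the spontaneous speech rhythm coupling
# the mouth sensorimotor cortex (MEG) and the mouth muscles (EMG).
drive = np.sin(2 * np.pi * 2.5 * t)
meg = drive + rng.standard_normal(t.size)
emg = drive + rng.standard_normal(t.size)

# Magnitude-squared coherence peaks at the frequency of the shared drive.
f, coh = coherence(meg, emg, fs=fs, nperseg=512)
peak_freq = f[np.argmax(coh)]
```

Welch-style segment averaging (`nperseg`) trades frequency resolution for a less noisy coherence estimate, which matters when the coupling of interest sits at low frequencies such as 2-3 Hz.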
ABSTRACT: Phase-locked evoked responses and event-related modulations of spontaneous rhythmic activity are the two main approaches used to quantify stimulus- or task-related changes in electrophysiological measures. The relationship between the two has been widely theorized upon but empirical research has been limited to the primary visual and sensorimotor cortex. However, both evoked responses and rhythms have been used as markers of neural activity in paradigms ranging from simple sensory to complex cognitive tasks. While some spatial agreement between the two phenomena has been observed, typically only one of the measures has been used in any given study, thus disallowing a direct evaluation of their exact spatiotemporal relationship. In this study, we sought to systematically clarify the connection between evoked responses and rhythmic activity. Using both measures, we identified the spatiotemporal patterns of task effects in three magnetoencephalography (MEG) data sets, all variants of a picture naming task. Evoked responses and rhythmic modulation yielded largely separate networks, with spatial overlap mainly in the sensorimotor and primary visual areas. Moreover, in the cortical regions that were identified with both measures the experimental effects they conveyed differed in terms of timing and function. Our results suggest that the two phenomena are largely detached and that both measures are needed for an accurate portrayal of brain activity.
ABSTRACT: Magnetoencephalography (MEG), with its direct view to the cortex through the magnetically transparent skull, has developed from its conception in physics laboratories to a powerful tool of basic and clinical neuroscience. MEG provides millisecond time resolution and allows real-time tracking of brain activation sequences during sensory processing, motor planning and action, cognition, language perception and production, social interaction, and various brain disorders. Current-day neuromagnetometers house hundreds of SQUIDs, superconducting quantum interference devices, to pick up signals generated by concerted action of cortical neurons. Complementary MEG measures of neuronal involvement include evoked responses, modulation of cortical rhythms, properties of the on-going neural activity, and interareal connectivity. Future MEG breakthroughs in understanding brain dynamics are expected through advanced signal analysis and combined use of MEG with hemodynamic imaging (fMRI). Methodological development progresses most efficiently when linked with insightful neuroscientific questions.
ABSTRACT: Speech processing skills go through intensive development during mid-childhood, providing a basis also for literacy acquisition. The sequence of auditory cortical processing of speech has been characterized in adults, but very little is known about the neural representation of speech sound perception in the developing brain. We used whole-head magnetoencephalography (MEG) to record neural responses to speech and nonspeech sounds in first-graders (7-8 years old) and compared the activation sequence to that in adults. In children, the general location of neural activity in the superior temporal cortex was similar to that in adults, but in the time domain the sequence of activation was strikingly different. Cortical differentiation between sound types emerged in a prolonged response pattern at about 250 ms after sound onset, in both hemispheres, clearly later than the corresponding effect at about 100 ms in adults that was detected specifically in the left hemisphere. Better reading skills were linked with shorter-lasting neural activation, pointing to an interdependence of the maturing neural processes of auditory perception and developing linguistic skills. This study uniquely utilized the potential of MEG in comparing both spatial and temporal characteristics of neural activation between adults and children. Besides depicting the group-typical features in cortical auditory processing, the results revealed marked interindividual variability in children.
Human Brain Mapping 12/2011; 32(12):2193-206. DOI:10.1002/hbm.21181
ABSTRACT: Word processing is often probed with experiments where a target word is primed by preceding semantically or phonologically related words. Behaviorally, priming results in faster reaction times, interpreted as increased efficiency of cognitive processing. At the neural level, priming reduces the level of neural activation, but the actual neural mechanisms that could account for the increased efficiency have remained unclear. We examined whether enhanced information transfer among functionally relevant brain areas could provide such a mechanism. Neural activity was tracked with magnetoencephalography while subjects read lists of semantically or phonologically related words. Increased priming resulted in reduced cortical activation. In contrast, coherence between brain regions was simultaneously enhanced. Furthermore, while the reduced level of activation was detected in the same area and time window (superior temporal cortex [STC] at 250-650 ms) for both phonological and semantic priming, the spatiospectral connectivity patterns appeared distinct for the two processes. Causal interactions further indicated a driving role for the left STC in phonological processing. Our results highlight coherence as a neural mechanism of priming and dissociate semantic and phonological processing via their distinct connectivity profiles.
ABSTRACT: There is increasing interest in integrating electrophysiological and hemodynamic measures for characterizing spatial and temporal aspects of cortical processing. However, an informative combination of responses that have markedly different sensitivities to the underlying neural activity is not straightforward, especially in complex cognitive tasks. Here, we used parametric stimulus manipulation in magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) recordings on the same subjects to study the effects of noise on processing of spoken words and environmental sounds. The added noise influenced MEG response strengths in the bilateral supratemporal auditory cortex, at different times for the different stimulus types. Specifically for spoken words, the effect of noise on the electrophysiological response was remarkably nonlinear. Therefore, we used the single-subject MEG responses to construct parametrization for fMRI data analysis and obtained notably higher sensitivity than with conventional stimulus-based parametrization. fMRI results showed that partly different temporal areas were involved in noise-sensitive processing of words and environmental sounds. These results indicate that cortical processing of sounds in background noise is stimulus specific in both timing and location and provide a new functionally meaningful platform for combining information obtained with electrophysiological and hemodynamic measures of brain function.
ABSTRACT: It is often implicitly assumed that the neural activation patterns revealed by hemodynamic methods, such as functional magnetic resonance imaging (fMRI), and electrophysiological methods, such as magnetoencephalography (MEG) and electroencephalography (EEG), are comparable. In early sensory processing that seems to be the case, but the assumption may not be correct in high-level cognitive tasks. For example, MEG and fMRI literature of single-word reading suggests differences in cortical activation, but direct comparisons are lacking. Here, while the same human participants performed the same reading task, analysis of MEG evoked responses and fMRI blood oxygenation level-dependent (BOLD) signals revealed marked functional and spatial differences in several cortical areas outside the visual cortex. Divergent patterns of activation were observed in the frontal and temporal cortex, in accordance with previous separate MEG and fMRI studies of reading. Furthermore, opposite stimulus effects in the MEG and fMRI measures were detected in the left occipitotemporal cortex: MEG evoked responses were stronger to letter than symbol strings, whereas the fMRI BOLD signal was stronger to symbol than letter strings. The EEG recorded simultaneously during MEG and fMRI did not indicate neurophysiological differences that could explain the observed functional discrepancies between the MEG and fMRI results. Acknowledgment of the complementary nature of hemodynamic and electrophysiological measures, as reported here in a cognitive task using evoked response analysis in MEG and BOLD signal analysis in fMRI, represents an essential step toward an informed use of multimodal imaging that reaches beyond mere combination of location and timing of neural activation.
The Journal of Neuroscience : The Official Journal of the Society for Neuroscience 01/2011; 31(3):1048-58. DOI:10.1523/JNEUROSCI.3113-10.2011