Articulatory mediation of speech perception: a causal analysis of multi-modal imaging data.
ABSTRACT: The inherent confound between the organization of articulation and the acoustic-phonetic structure of the speech signal makes it exceptionally difficult to evaluate the competing claims of motor and acoustic-phonetic accounts of how listeners recognize coarticulated speech. Here we use Granger causality analyses of high spatiotemporal resolution neural activation data, derived from the integration of magnetic resonance imaging, magnetoencephalography, and electroencephalography, to examine the role of lexical and articulatory mediation in listeners' ability to use phonetic context to compensate for place assimilation. Listeners heard two-word phrases such as pen pad and then saw two pictures, from which they had to select the one that depicted the phrase. Assimilation, lexical competitor environment, and the phonological validity of the assimilation context were all manipulated. Behavioral data showed an effect of context on the interpretation of assimilated segments. Analysis of 40 Hz gamma phase-locking patterns identified a large distributed neural network comprising 16 distinct regions of interest (ROIs) spanning portions of both hemispheres in the first 200 ms of post-assimilation context. Granger analyses of individual conditions showed differing patterns of causal interaction between ROIs during this interval, with hypothesized lexical and articulatory structures and pathways driving phonetic activation in the posterior superior temporal gyrus in assimilation conditions, but not in phonetically unambiguous conditions. These results lend strong support to the motor theory of speech perception, and clarify the role of lexical mediation in the phonetic processing of assimilated speech.
Available from: Marc Sato
ABSTRACT: The cortical dorsal auditory stream has been proposed to mediate mapping between auditory and articulatory-motor representations in speech processing. Whether this sensorimotor integration contributes to speech perception remains an open question. Here, magnetoencephalography was used to examine connectivity between auditory and motor areas while subjects were performing a sensorimotor task involving speech sound identification and overt repetition. Functional connectivity was estimated with inter-areal phase synchrony of electromagnetic oscillations. Structural equation modeling was applied to determine the direction of information flow. Compared to passive listening, engagement in the sensorimotor task enhanced connectivity within 200 ms after sound onset bilaterally between the temporoparietal junction (TPJ) and ventral premotor cortex (vPMC), with the left-hemisphere connection showing directionality from vPMC to TPJ. Passive listening to noisy speech elicited stronger connectivity than clear speech between left auditory cortex (AC) and vPMC at ~100 ms, and between left TPJ and dorsal premotor cortex (dPMC) at ~200 ms. Information flow was estimated from AC to vPMC and from dPMC to TPJ. Connectivity strength among the left AC, vPMC, and TPJ correlated positively with the identification of speech sounds within 150 ms after sound onset, with information flowing from AC to TPJ, from AC to vPMC, and from vPMC to TPJ. Taken together, these findings suggest that sensorimotor integration mediates the categorization of incoming speech sounds through reciprocal auditory-to-motor and motor-to-auditory projections.
Frontiers in Psychology 05/2014; 5:394. DOI: 10.3389/fpsyg.2014.00394
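The inter-areal phase synchrony measure used in this study can be illustrated with a phase-locking value (PLV) computation across trials. The following is a minimal sketch, not the authors' actual pipeline; the signal parameters and the `phase_locking_value` helper are hypothetical:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV across trials: x, y are (n_trials, n_samples) real arrays.
    Returns a (n_samples,) array in [0, 1]; 1 = perfect phase locking."""
    phase_x = np.angle(hilbert(x, axis=1))
    phase_y = np.angle(hilbert(y, axis=1))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y)), axis=0))

# Two channels sharing a 40 Hz rhythm with a fixed phase lag,
# plus a third channel with independent phase on every trial
rng = np.random.default_rng(0)
fs, n_trials, n_samples = 500, 50, 500
t = np.arange(n_samples) / fs
jitter = rng.uniform(0, 2 * np.pi, size=(n_trials, 1))   # random per-trial phase
x = np.sin(2 * np.pi * 40 * t + jitter)
y = np.sin(2 * np.pi * 40 * t + jitter + 0.5)            # locked: constant lag
z = np.sin(2 * np.pi * 40 * t + rng.uniform(0, 2 * np.pi, size=(n_trials, 1)))

print(phase_locking_value(x, y).mean())  # near 1: phase-locked
print(phase_locking_value(x, z).mean())  # near 0: no consistent phase relation
```

PLV is insensitive to amplitude and captures only the consistency of the phase difference across trials, which is why it is a common choice for estimating oscillatory coupling between areas.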
ABSTRACT: Models of spoken-word recognition differ on whether compensation for assimilation is language-specific or depends on general auditory processing. English and French participants were taught words that began or ended with the sibilants /s/ and /∫/. Both languages exhibit some assimilation in sibilant sequences (e.g., /s/ becomes like [∫] in dress shop and classe chargée), but they differ in the strength and predominance of anticipatory versus carryover assimilation. After training, participants were presented with novel words embedded in sentences, some of which contained an assimilatory context either preceding or following. A continuum of target sounds ranging from [s] to [∫] was spliced into the novel words, representing a range of possible assimilation strengths. Listeners' perceptions were examined using a visual-world eyetracking paradigm in which the listener clicked on pictures matching the novel words. We found two distinct language-general context effects: a contrastive effect when the assimilating context preceded the target, and flattening of the sibilant categorization function (increased ambiguity) when the assimilating context followed. Furthermore, we found that English but not French listeners were able to resolve the ambiguity created by the following assimilatory context, consistent with their greater experience with assimilation in this context. The combination of these mechanisms allows listeners to deal flexibly with variability in speech forms.
Attention Perception & Psychophysics 09/2014; 77(1). DOI: 10.3758/s13414-014-0750-z
ABSTRACT: A vast repertoire of methods is currently available to study effective brain connectivity based on neuroimaging data, among which lag-based measures can be distinguished. Although several studies have previously assessed the performance of such measures, their validity in different conditions remains unclear. In the current study, several lag-based effective connectivity measures are tested and benchmarked using simulated fMRI data, conceived to reflect a broad range of different situations with practical interest. The main goal is two-fold: 1) to provide a thorough overview of lag-based effective connectivity measures, and 2) to assess their performance in specific experimental conditions, thereby providing guidance for future effective connectivity studies involving fMRI. We focus on well-known lag-based measures, covering existing improvements and, in some cases, alternative formulations: Granger causality (GC), Geweke's Granger causality (GGC), directed transfer function (DTF), partial directed coherence (PDC), phase slope index (PSI), and transfer entropy (TE). Benchmarking consists of identifying causal relations in local field potential (LFP) networks that have their output convolved with a canonical hemodynamic response function (HRF), with varying node number, topology, coupling strength, neuronal delay, repetition time (TR), signal-to-noise ratio (SNR), and HRF variability. In a first set of simulations, we cover all possible combinations of discretized values of the previous variables for networks with 2 and 3 nodes, and find that the best-performing measure (time-domain Granger causality) is able to detect neuronal delays of a few hundred milliseconds with TRs between 0.25 and 2 seconds, and neuronal delays below 100 milliseconds for TRs that are also below 100 milliseconds, with more than 80% accuracy in realistic conditions.
For networks with more than 3 nodes, we find that the number of nodes and the density of causal links degrade sensitivity, especially if the number of observations does not compensate for the increase in nodes, and that clustered networks can be more easily identified. In conclusion, this study argues in favor of the applicability of lag-based measures in the context of fMRI, provided that a stringent set of experimental specifications is met and that the chosen measure is applied with full knowledge of its limitations and specific constraints.
NeuroImage 10/2013; DOI: 10.1016/j.neuroimage.2013.10.029
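The best-performing measure in this benchmark, time-domain Granger causality, can be sketched for the bivariate case as a comparison of two autoregressive fits: does adding the past of x reduce the prediction error of y? The simulation parameters and the `granger_causality` helper below are illustrative assumptions, not the study's implementation:

```python
import numpy as np

def granger_causality(x, y, order=2):
    """Time-domain Granger causality from x to y.
    Fits y on its own past (restricted) and on its own past plus x's
    past (full); GC = ln(var_restricted / var_full). Values near 0
    mean x's past adds no predictive information about y."""
    n = len(y)
    Y = y[order:]
    lags_y = np.column_stack([y[order - k:n - k] for k in range(1, order + 1)])
    lags_x = np.column_stack([x[order - k:n - k] for k in range(1, order + 1)])
    ones = np.ones((n - order, 1))
    X_r = np.hstack([ones, lags_y])           # restricted: y's past only
    X_f = np.hstack([ones, lags_y, lags_x])   # full: y's and x's past
    res_r = Y - X_r @ np.linalg.lstsq(X_r, Y, rcond=None)[0]
    res_f = Y - X_f @ np.linalg.lstsq(X_f, Y, rcond=None)[0]
    return np.log(res_r.var() / res_f.var())

# Simulate a unidirectional influence x -> y with a one-sample delay
rng = np.random.default_rng(1)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

print(granger_causality(x, y))  # clearly positive: x Granger-causes y
print(granger_causality(y, x))  # near zero: no influence in reverse
```

Note that this operates directly on the time series; in the fMRI setting benchmarked above, the same statistic is computed on signals that have additionally been smoothed by the HRF, which is precisely what limits the detectable neuronal delays.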