Separating Processes within a Trial in Event-Related Functional MRI

Department of Radiology, Washington University, St. Louis, Missouri, 63110, USA.
NeuroImage (Impact Factor: 6.36). 02/2001; 13(1):210-7. DOI: 10.1006/nimg.2000.0710
Source: PubMed


Many behavioral paradigms involve temporally overlapping sensory, cognitive, and motor components within a single trial. The complex interplay among these factors makes it desirable to separate the components of the total response without assumptions about the shape of the underlying hemodynamic response. We present a method that accomplishes this separation. Four conditions were studied in four subjects to validate the method. Two conditions involved rapid event-related studies, one with a low-contrast (5%) flickering checkerboard and another with a high-contrast (95%) checkerboard. In the third condition, the same high-contrast checkerboard was presented with widely spaced trials. Finally, multicomponent trials were formed from temporally adjacent low-contrast and high-contrast stimuli. These trials were presented as a rapid event-related study. Low-contrast stimuli presented in isolation (partial trials) made it possible to uniquely estimate both the low-contrast and high-contrast responses. These estimated responses matched those measured in the first three conditions, thereby validating the method. Nonlinear interactions between adjacent low-contrast and high-contrast responses were shown to be significant but weak in two of the four subjects.
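
The separation described above amounts to deconvolution with a finite impulse response (FIR) model: each timepoint after trial onset gets its own regressor, and partial trials (the low-contrast stimuli presented alone) make the overlapping regressor sets separable. Below is a minimal sketch of that idea; the run length, onsets, window length, and response shapes are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the paper's code): FIR-style deconvolution of two
# temporally overlapping responses, with "partial trials" making the two
# regressor sets separable. All numbers and shapes here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_bins = 200, 8        # hypothetical run length (frames) and FIR window

# Every trial contains a "low" component; only half also contain a "high"
# component one frame later. The low-only trials are the partial trials.
low_onsets = np.arange(5, 180, 12)
high_onsets = low_onsets[::2] + 1

def fir_design(onsets, n_scans, n_bins):
    """One delta regressor per post-onset timepoint; no assumed HRF shape."""
    X = np.zeros((n_scans, n_bins))
    for onset in onsets:
        for b in range(n_bins):
            if onset + b < n_scans:
                X[onset + b, b] = 1.0
    return X

X = np.hstack([fir_design(low_onsets, n_scans, n_bins),
               fir_design(high_onsets, n_scans, n_bins),
               np.ones((n_scans, 1))])       # constant term

# Simulate overlapping responses with arbitrary shapes, then recover them by
# ordinary least squares; without the partial trials this toy design would be
# rank deficient and the two responses could not be estimated uniquely.
true_low = 0.5 * np.sin(np.linspace(0, np.pi, n_bins))
true_high = 1.5 * np.sin(np.linspace(0, np.pi, n_bins))
y = X[:, :n_bins] @ true_low + X[:, n_bins:2 * n_bins] @ true_high
y += rng.normal(0, 0.2, n_scans)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("low estimate: ", np.round(beta[:n_bins], 2))
print("high estimate:", np.round(beta[n_bins:2 * n_bins], 2))
```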

  • Source
    • "The event-related design runs were analyzed using a separate GLM that independently modeled cue-only trials and cue + stimulus trials, as shown in Fig. 1B. Each trial type was modeled with 12 regressors in a finite impulse response model (Ollinger et al., 2001). The peak magnitude timepoint for all effects were modeled using the FIR model was defined as the timepoint with the greatest mean absolute difference from timethat -point zero (effect onset) across the four different task conditions as previously described (Elkhetali et al., 2015). "
    ABSTRACT: Task sets are task-specific configurations of cognitive processes that facilitate task-appropriate reactions to stimuli. While it is established that the trial-by-trial deployment of visual attention to expected stimuli influences neural responses in primary visual cortex (V1) in a retinotopically specific manner, it is not clear whether the mechanisms that help maintain a task set over many trials also operate with similar retinotopic specificity. Here, we address this question by using BOLD fMRI to characterize how portions of V1 that are specialized for different eccentricities respond during distinct components of an attention-demanding discrimination task: cue-driven preparation for a trial, trial-driven processing, task-initiation at the beginning of a block of trials, and task-maintenance throughout a block of trials. Tasks required either unimodal attention to an auditory or a visual stimulus or selective intermodal attention to the visual or auditory component of simultaneously presented visual and auditory stimuli. We found that while the retinotopic patterns of trial-driven and cue-driven activity depended on the attended stimulus, the retinotopic patterns of task-initiation and task-maintenance activity did not. Further, only the retinotopic patterns of trial-driven activity were found to depend on the presence of intermodal distraction. Participants who performed well on the intermodal selective attention tasks showed strong task-specific modulations of both trial-driven and task-maintenance activity. Importantly, task-related modulations of trial-driven and task-maintenance activity were in opposite directions. Together, these results confirm that there are (at least) two different processes for top-down control of V1: One, working trial-by-trial, differently modulates activity across different eccentricity sectors (portions of V1 corresponding to different visual eccentricities). The second process works across longer epochs of task performance, and does not differ among eccentricity sectors. These results are discussed in the context of previous literature examining top-down control of visual cortical areas.
    NeuroImage 10/2015; 120:285-297. DOI:10.1016/j.neuroimage.2015.07.005 · 6.36 Impact Factor
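    The excerpt above defines the FIR peak as the timepoint with the greatest mean absolute difference from the onset timepoint, averaged across conditions. A minimal sketch of that selection rule, with hypothetical FIR estimates standing in for the cited study's data:

    ```python
    # Minimal sketch: choose the FIR peak timepoint as the point with the
    # greatest mean absolute difference from timepoint zero (effect onset),
    # averaged across conditions. `betas` is a hypothetical 4 x 12 array.
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(12)
    betas = np.vstack([amp * np.exp(-(t - 5) ** 2 / 4) + rng.normal(0, 0.05, t.size)
                       for amp in (0.8, 1.0, 1.2, 0.9)])   # 4 conditions x 12 timepoints

    diff_from_onset = np.abs(betas - betas[:, [0]])   # |beta_t - beta_0| per condition
    peak_timepoint = int(diff_from_onset.mean(axis=0).argmax())
    print("peak timepoint:", peak_timepoint)
    ```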
  • Source
    • "Compound trials required a yes-no response. In addition, participants were presented with 20 partial trials (Ollinger et al., 2001a,b) and 44 randomized short periods of rest, each equal in length to one trial. Partial trials presented only the classifier followed by a blank screen in the noun time window, and did not require a grammaticality judgment on part of the participants. "
    ABSTRACT: Embodied cognitive theories predict that linguistic conceptual representations are grounded and continually represented in real-world, sensorimotor experiences. However, there is an ongoing debate about whether this also holds for abstract concepts. Grammar is the archetype of abstract knowledge, and therefore constitutes a test case against embodied theories of language representation. Previous studies have largely focussed on lexical-level embodied representations. In the present study we take the grounding-by-modality idea a step further by using reaction time (RT) data from the linguistic processing of nominal classifiers in Chinese. We take advantage of an independent body of research, which shows that attention in hand space is biased. Specifically, objects near the hand consistently yield shorter RTs as a function of readiness for action on graspable objects within reaching space, and the same biased attention inhibits attentional disengagement. We predicted that this attention bias would equally apply to the graspable object classifier but not to the big object classifier. Chinese speakers (N = 22) judged grammatical congruency of classifier-noun combinations in two conditions: graspable object classifier and big object classifier. We found that RTs for the graspable object classifier were significantly faster in congruent combinations, and significantly slower in incongruent combinations, than for the big object classifier. There was no main effect of grammatical violation, but rather an interaction with classifier type. Thus, we demonstrate here grammatical category-specific effects pertaining to the semantic content and, by extension, the visual and tactile modalities underlying the acquisition of these categories. We conclude that abstract grammatical categories are subject to the same mechanisms as general cognitive and neurophysiological processes and may therefore be grounded.
    Frontiers in Psychology 08/2015; 6. DOI:10.3389/fpsyg.2015.01299 · 2.80 Impact Factor
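    A minimal sketch of the trial mix described in the excerpt above: compound trials, 20 partial trials (classifier only), and 44 trial-length rest periods, randomly ordered. Only the 20/44 counts come from the excerpt; the compound-trial count and labels are illustrative assumptions.

    ```python
    # Minimal sketch: randomized event-related list mixing compound trials,
    # partial trials (classifier only, no grammaticality judgment), and
    # trial-length rests. The compound-trial count is an assumed placeholder.
    import random

    random.seed(0)
    n_compound = 80                               # hypothetical count
    trials = (["compound"] * n_compound           # classifier + noun, yes/no response
              + ["partial"] * 20                  # classifier + blank noun window
              + ["rest"] * 44)                    # blank period, one trial long
    random.shuffle(trials)
    print(trials[:10])
    ```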
  • Source
    • "Of interest was whether the shape of the BOLD response to these relatively long video clips differed from the canonical HRF typically implemented in SPM. The shape of the BOLD response was estimated for each participant by modeling a finite impulse response function (Ollinger et al., 2001). Each trial was represented by a sequence of 12 consecutive TRs, beginning at the onset of each video clip. "
    ABSTRACT: This fMRI study investigated neural systems that interpret body language—the meaningful emotive expressions conveyed by body movement. Participants watched videos of performers engaged in modern dance or pantomime that conveyed specific themes such as hope, agony, lust, or exhaustion. We tested whether the meaning of an affectively laden performance was decoded in localized brain substrates as a distinct property of action separable from other superficial features, such as choreography, kinematics, performer, and low-level visual stimuli. A repetition suppression (RS) procedure was used to identify brain regions that decoded the meaningful affective state of a performer, as evidenced by decreased activity when emotive themes were repeated in successive performances. Because the theme was the only feature repeated across video clips that were otherwise entirely different, the occurrence of RS identified brain substrates that differentially coded the specific meaning of expressive performances. RS was observed bilaterally, extending anteriorly along middle and superior temporal gyri into temporal pole, medially into insula, rostrally into inferior orbitofrontal cortex, and caudally into hippocampus and amygdala. Behavioral data on a separate task indicated that interpreting themes from modern dance was more difficult than interpreting pantomime, a result that was also reflected in the fMRI data. There was greater RS in the left hemisphere, suggesting that the more abstract metaphors used to express themes in dance compared to pantomime posed a greater challenge to brain substrates directly involved in decoding those themes. We propose that the meaning-sensitive temporal-orbitofrontal regions observed here comprise a superordinate functional module of a known hierarchical action observation network (AON), which is critical to the construction of meaning from expressive movement. The findings are discussed with respect to a predictive coding model of action understanding.
    Frontiers in Human Neuroscience 08/2015; xx(xx):xx. DOI:10.3389/fnhum.2015.00450 · 2.99 Impact Factor
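    The excerpt above asks whether a 12-TR FIR estimate departs from the canonical HRF. A minimal sketch of that comparison, using a double-gamma with SPM-like parameters; the TR and the FIR values are illustrative assumptions, not data from the cited study.

    ```python
    # Minimal sketch: compare a 12-TR FIR estimate of the response to a video
    # clip against a double-gamma HRF with SPM-like parameters (response shape 6,
    # undershoot shape 16, undershoot ratio 1/6). TR and FIR values are invented.
    import numpy as np
    from scipy.stats import gamma

    TR = 2.0
    t = np.arange(12) * TR                        # 12 TRs from clip onset

    canonical = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6
    canonical /= canonical.max()

    fir_estimate = np.array([0.0, 0.1, 0.5, 0.9, 1.0, 0.8,
                             0.5, 0.3, 0.1, -0.1, -0.05, 0.0])   # hypothetical betas

    r = np.corrcoef(fir_estimate, canonical)[0, 1]
    print(f"correlation with canonical HRF: {r:.2f}")
    ```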