Shivakumar Viswanathan

University of California, Santa Barbara, Santa Barbara, California, United States


Publications (8) · 21.39 total impact

  • Scott T Grafton · Shivakumar Viswanathan

    Advances in Experimental Medicine and Biology 10/2014; 826:69-90. DOI:10.1007/978-1-4939-1338-1_6 · 1.96 Impact Factor
  •
    ABSTRACT: Neurophysiology and neuroimaging evidence shows that the brain represents multiple environmental and body-related features to compute transformations from sensory input to motor output. However, it is unclear how these features interact during goal-directed movement. To investigate this issue, we examined the representations of sensory and motor features of human hand movements within the left-hemisphere motor network. In a rapid event-related fMRI design, we measured cortical activity as participants performed right-handed movements at the wrist, with either of two postures and two amplitudes, to move a cursor to targets at different locations. Using a multivoxel analysis technique with rigorous generalization tests, we reliably distinguished representations of task-related features (primarily target location, movement direction, and posture) in multiple regions. In particular, we identified an interaction between target location and movement direction in the superior parietal lobule, which may underlie a transformation from the location of the target in space to a movement vector. In addition, we found an influence of posture on primary motor, premotor, and parietal regions. Together, these results reveal the complex interactions between different sensory and motor features that drive the computation of sensorimotor transformations.
    The Journal of Neuroscience : The Official Journal of the Society for Neuroscience 05/2014; 34(20):6860-73. DOI:10.1523/JNEUROSCI.5173-13.2014 · 6.34 Impact Factor
  • Shivakumar Viswanathan · Matthew Cieslak · Scott T. Grafton
    ABSTRACT: Information mapping is a popular application of Multivoxel Pattern Analysis (MVPA) to fMRI. Information maps are constructed using the so-called searchlight method, where the spherical multivoxel neighborhood of every voxel (i.e., a searchlight) in the brain is evaluated for the presence of task-relevant response patterns. Despite their widespread use, information maps present several challenges for interpretation. One such challenge is inferring the size and shape of a multivoxel pattern from its signature on the information map. To address this issue, we formally examined the geometric basis of this mapping relationship. Based on geometric considerations, we show how and why small patterns (i.e., having smaller spatial extents) can produce a larger signature on the information map as compared to large patterns, independent of the size of the searchlight radius. Furthermore, we show that the number of informative searchlights over the brain increases as a function of searchlight radius, even in the complete absence of any multivariate response patterns. These properties are unrelated to the statistical capabilities of the pattern-analysis algorithms used but are obligatory geometric properties arising from using the searchlight procedure.
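    The geometric point above can be illustrated with a minimal sketch (this is not code from the paper; the grid size, pattern, and function name are illustrative): a voxel is flagged as informative whenever its spherical searchlight overlaps the informative pattern, so even a single informative voxel produces a ball of flagged searchlight centers whose size grows with the radius.

```python
import itertools

def informative_searchlight_centers(pattern, radius, grid=20):
    """Count voxels on a grid x grid x grid lattice whose spherical
    searchlight of the given radius contains at least one pattern voxel."""
    pattern = set(pattern)
    r2 = radius * radius
    count = 0
    for center in itertools.product(range(grid), repeat=3):
        # The searchlight at `center` is informative if any pattern
        # voxel lies within `radius` (Euclidean distance) of it.
        for p in pattern:
            if sum((c - q) ** 2 for c, q in zip(center, p)) <= r2:
                count += 1
                break
    return count

# A "pattern" consisting of a single informative voxel near the grid center:
single_voxel = [(10, 10, 10)]

for r in (1, 2, 3):
    print(r, informative_searchlight_centers(single_voxel, r))
# The flagged region grows with radius: 7, 33, 123 centers,
# even though the underlying pattern is always one voxel.
```

    The map signature is the pattern dilated by the searchlight ball, which is why signature extent on the information map cannot be read directly as pattern extent.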
  • Shivakumar Viswanathan · Courtney Fritz · Scott T Grafton
    ABSTRACT: Judging the laterality of a hand seen at unanticipated orientations evokes a robust feeling of bodily movement, even though no movement is produced. In two experiments, we tested a novel hypothesis to explain this phenomenon: A hand's laterality is determined via a multisensory binding of the visual representation of the seen hand and a proprioceptive representation of the observer's felt hand, and the felt "movement" is an obligatory aftereffect of intersensory recalibration. Consistent with the predictions implied by such a cross-modal mechanism, our results in Experiment 1 showed that manipulating observers' selective attention can evoke illusory feelings of movement in the "wrong" hand (i.e., the hand whose laterality does not match that of the stimulus). In Experiment 2, these illusions were readily extinguished in conditions in which binding was predicted to fail, a result indicating that cross-modal binding was necessary to produce them. These results are not explained by imagery, a mechanism widely assumed to account for how hand laterality is identified.
    Psychological Science 05/2012; 23(6):598-607. DOI:10.1177/0956797611429802 · 4.43 Impact Factor
  • Rodolphe Nenert · Shivakumar Viswanathan · Darcy M Dubuc · Kristina M Visscher
    ABSTRACT: Alpha-frequency band oscillations have been shown to be one of the most prominent aspects of neuronal ongoing oscillatory activity, as reflected by electroencephalography (EEG) recordings. First thought to reflect an idling state, a recent framework indicates that alpha power reflects cortical inhibition. In the present study, the role of oscillations in the upper alpha band (12 Hz) was investigated during a change-detection test of short-term visual memory. If alpha oscillations arise from a purely inhibitory process, higher alpha power before sample stimulus presentation would be expected to correlate with poorer performance. Instead, participants with faster reaction times showed stronger alpha power before the sample stimulus in frontal and posterior regions. Additionally, faster participants showed stronger alpha desynchronization after the stimulus in a group of right frontal and left posterior electrodes. The same pattern of electrodes showed stronger alpha with higher working-memory load, so that when more items were processed, alpha power desynchronized faster after the stimulus. During memory maintenance, alpha power was greater when more items were held in memory, likely due to a faster resynchronization. These data are consistent with the hypothesis that the level of suppression of alpha power by stimulus presentation is an important factor for successfully encoding visual stimuli. The data are also consistent with a role for alpha as actively participating in attentional processes.
    Frontiers in Human Neuroscience 05/2012; 6:127. DOI:10.3389/fnhum.2012.00127 · 3.63 Impact Factor
  • M van Elk · S Viswanathan · H T van Schie · H Bekkering · S T Grafton
    ABSTRACT: This fMRI study investigates the neural mechanisms supporting the retrieval of action semantics. A novel motor imagery task was used in which participants were required to imagine planning actions with a familiar object (e.g. a toothbrush) or with an unfamiliar object (e.g. a pair of pliers) based on either goal-related information (i.e. where to move the object) or grip-related information (i.e. how to grasp the object). Planning actions with unfamiliar compared to familiar objects was slower and was associated with increased activation in the bilateral superior parietal lobe, the right inferior parietal lobe and the right insula. The stronger activation in parietal areas for unfamiliar objects fits well with the idea that parietal areas are involved in motor imagery and suggests that this process takes more effort in the case of novel or unfamiliar actions. In contrast, the planning of familiar actions resulted in increased activation in the anterior prefrontal cortex, suggesting that subjects maintained a stronger goal-representation when planning actions with familiar compared to unfamiliar objects. These findings provide further insight into the neural structures that support action semantic knowledge for the functional use of real-world objects and suggest that action semantic knowledge is activated most readily when actions are planned in a goal-directed manner.
    Experimental Brain Research 02/2012; 218(2):189-200. DOI:10.1007/s00221-012-3016-9 · 2.04 Impact Factor
  • Shivakumar Viswanathan · Daniel R Perl · Kristina M Visscher · Michael J Kahana · Robert Sekuler
    ABSTRACT: Visual short-term recognition memory for multiple stimuli is strongly influenced by the study items' similarity to one another, that is, by their homogeneity. However, the mechanism responsible for this homogeneity effect has remained unclear. We evaluated competing explanations of this effect, using controlled sets of Gabor patches as study items and probe stimuli. Our results, based on recognition memory for spatial frequency, rule out the possibility that the homogeneity effect arises because similar study items are encoded and/or maintained with higher fidelity in memory than dissimilar study items are. Instead, our results support the hypothesis that the homogeneity effect reflects trial-by-trial comparisons of study items, which generate a homogeneity signal. This homogeneity signal modulates recognition performance through an adjustment of the subject's decision criterion. Additionally, it seems the homogeneity signal is computed prior to the presentation of the probe stimulus, by evaluating the familiarity of each new stimulus with respect to the items already in memory. This suggests that recognition-like processes operate not only on the probe stimulus, but on study items as well.
    Psychonomic Bulletin & Review 02/2010; 17(1):59-65. DOI:10.3758/PBR.17.1.59 · 2.99 Impact Factor
  • Dylan Shell · Shivakumar Viswanathan · Jing Huang · Rumi Ghosh · Jie Huang · Maja Mataric · Kristina Lerman · Robert Sekuler
    ABSTRACT: In order to better understand human navigation, way-finding, and general spatial behavior, we are conducting research into the effects of social interactions among individuals within a shared space. This paper describes work in the integration of instrumentation of active public areas, macroscopic modeling, microscopic simulation, and experimentation with human subjects. The aim is to produce empirically grounded models of individual and collective spatial behavior. We describe the challenges and lessons learned from our experience with data collection, construction of tractable models, and pilot experiments.