James W Lewis

West Virginia University, Morgantown, WV, United States

Publications (21) · 81.41 Total Impact

    ABSTRACT: Humans and several non-human primates possess cortical regions that are most sensitive to vocalizations produced by their own kind (conspecifics). However, the use of speech and other broadly defined categories of behaviorally relevant natural sounds has led to many discrepancies regarding where voice-sensitivity occurs, and more generally regarding the identification of cortical networks, "proto-networks" or protolanguage networks, and pathways that may be sensitive or selective for certain aspects of vocalization processing. In this prospective review, we examine different approaches for exploring vocal communication processing, including pathways that may be, or become, specialized for conspecific utterances. In particular, we address the use of naturally produced non-stereotypical vocalizations (mimicry of other animal calls) as another category of vocalization for use with human and non-human primate auditory systems. We focus this review on two main themes: progress and future ideas for studying vocalization processing in great apes (chimpanzees), and in the very early stages of human development, including infants and fetuses. Advancing our understanding of the fundamental principles that govern the evolution and early development of cortical pathways for processing non-verbal communication utterances is expected to lead to better diagnoses and early intervention strategies for children with communication disorders, to improved rehabilitation of communication disorders resulting from brain injury, and to new strategies for intelligent hearing-aid and implant design that can better enhance speech signals in noisy environments.
    Hearing Research 08/2013 · 2.18 Impact Factor
    ABSTRACT: Early neuroimaging studies using Cyberball suggested that social rejection activated the pain matrix, as identified in studies of physical pain. However, these early studies were characterized by small sample sizes. Our statistical multi-level kernel density analysis (MKDA) of Cyberball neuroimaging studies with 244 participants fails to support the claim that social rejection operates on the same pain matrix as nociceptive stimuli, questioning whether social pain is more figurative or literal. We also performed an MKDA of the neuroimaging studies of reliving a romantic rejection to test whether the pain matrix was activated if the rejection were more meaningful. Results again failed to support the notion that rejection activates the neural matrix identified in studies of physical pain. Reliving an unwanted rejection by a romantic partner was significantly characterized by activation within and beyond the "Cyberball" brain network, suggesting that the neural correlates of social pain are more complex than previously thought.
    Scientific Reports 06/2013; 3:2027. · 5.08 Impact Factor
    ABSTRACT: Although significant advances have been made in our understanding of the neural basis of action observation and intention understanding over the last few decades by studies demonstrating the involvement of a specific brain network (the action observation network; AON), these advances have been largely based on experimental studies in which people were considered as strictly isolated entities. However, we, as a social species, spend much of our time performing actions while interacting with others. Research shows that a person's position along the continuum of perceived social isolation/bonding to others is associated with a variety of physical and mental health effects. Thus, there is a crucial need to better understand the neural basis of intention understanding in interpersonal and emotional contexts. To address this issue, we performed a meta-analysis of functional magnetic resonance imaging (fMRI) studies over the past decade that examined brain and cortical network processing associated with understanding the intention of others' actions vs. that associated with passionate love for others. Both overlapping and distinct cortical and subcortical regions were identified for intention and love, respectively. These findings provide scientists and clinicians with a set of brain regions that can be targeted in future neuroscientific studies on intention understanding, and help develop neurocognitive models of pair-bonding.
    Frontiers in Human Neuroscience 01/2013; 7:99. · 2.91 Impact Factor
    ABSTRACT: Neuroimaging studies have found a correlation between activation in the anterior insula and love, and a correlation between activation in the posterior insula and lust. The present control-case study describes a neurological male patient with a rare, circumscribed lesion in the anterior insula, whom we tested using a decision task that required him to judge whether each of a series of attractive individuals could be the object of his love or lust. The patient, in contrast with neurologically typical participants matched on age, gender, and ethnicity, performed normally when making decisions about lust but showed a selective deficit when making decisions about love. These results provide the first clinical evidence indicating that the anterior insula may play an instrumental role in love, but not lust, more generally. These data support the notion of a posterior-to-anterior insular gradient, from sensorimotor to abstract representations, in the evaluation of anticipatory rewards in interpersonal relationships.
    Current Trends in Neurology 01/2013; 7:15-19.
    ABSTRACT: Numerous species possess cortical regions that are most sensitive to vocalizations produced by their own kind (conspecifics). In humans, the superior temporal sulci (STSs) putatively represent homologous voice-sensitive areas of cortex. However, superior temporal sulcus (STS) regions have recently been reported to represent auditory experience or "expertise" in general rather than showing exclusive sensitivity to human vocalizations per se. Using functional magnetic resonance imaging and a unique non-stereotypical category of complex human non-verbal vocalizations-human-mimicked versions of animal vocalizations-we found a cortical hierarchy in humans optimized for processing meaningful conspecific utterances. This left-lateralized hierarchy originated near primary auditory cortices and progressed into traditional speech-sensitive areas. Our results suggest that the cortical regions supporting vocalization perception are initially organized by sensitivity to the human vocal tract in stages before the STS. Additionally, these findings have implications for the developmental time course of conspecific vocalization processing in humans as well as its evolutionary origins.
    Journal of Neuroscience 06/2012; 32(23):8084-93. · 6.91 Impact Factor
    ABSTRACT: One of the most difficult dilemmas in relationship science and couple therapy concerns the interaction between sexual desire and love. As two mental states of intense longing for union with others, sexual desire and love are, in fact, often difficult to disentangle from one another. The present review aims to help understand the differences and similarities between these two mental states using a comprehensive statistical meta-analysis of all functional magnetic resonance imaging (fMRI) studies on sexual desire and love. Systematic retrospective review of pertinent neuroimaging literature. Review of published literature on fMRI studies illustrating brain regions associated with love and sexual desire to date. Sexual desire and love not only show differences but also recruit a striking common set of brain areas that mediate somatosensory integration, reward expectation, and social cognition. More precisely, a significant posterior-to-anterior insular pattern appears to track sexual desire and love progressively. This specific pattern of activation suggests that love builds upon a neural circuit for emotions and pleasure, adding regions associated with reward expectancy, habit formation, and feature detection. In particular, the shared activation within the insula, with a posterior-to-anterior pattern from desire to love, suggests that love grows out of and is a more abstract representation of the pleasant sensorimotor experiences that characterize desire. From these results, one may consider desire and love on a spectrum that evolves from integrative representations of affective visceral sensations to an ultimate representation of feelings incorporating mechanisms of reward expectancy and habit learning.
    Journal of Sexual Medicine 02/2012; 9(4):1048-54. · 3.51 Impact Factor
    ABSTRACT: Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and "auditory objects" can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remains poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more "object-like," independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds (a quantitative measure of the change in entropy of the acoustic signals over time), and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds.
    Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as being object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of everyday, real-world action sounds.
    Frontiers in Systems Neuroscience 01/2012; 6:27.
    ABSTRACT: In contrast to visual object processing, relatively little is known about how the human brain processes everyday real-world sounds, transforming highly complex acoustic signals into representations of meaningful events or auditory objects. We recently reported a fourfold cortical dissociation for representing action (nonvocalization) sounds correctly categorized as having been produced by human, animal, mechanical, or environmental sources. However, it was unclear how consistent those network representations were across individuals, given potential differences between each participant's degree of familiarity with the studied sounds. Moreover, it was unclear what, if any, auditory perceptual attributes might further distinguish the four conceptual sound-source categories, potentially revealing what might drive the cortical network organization for representing acoustic knowledge. Here, we used functional magnetic resonance imaging to test participants before and after extensive listening experience with action sounds, and tested for cortices that might be sensitive to each of three different high-level perceptual attributes relating to how a listener associates or interacts with the sound source. These included the sound's perceived concreteness, effectuality (ability to be affected by the listener), and spatial scale. Despite some variation of networks for environmental sounds, our results verified the stability of a fourfold dissociation of category-specific networks for real-world action sounds both before and after familiarity training. Additionally, we identified cortical regions parametrically modulated by each of the three high-level perceptual sound attributes. We propose that these attributes contribute to the network-level encoding of category-specific acoustic knowledge representations.
    Journal of Cognitive Neuroscience 08/2011; 23(8):2079-101. · 4.49 Impact Factor
    ABSTRACT: Both sighted and blind individuals can readily interpret meaning behind everyday real-world sounds. In sighted listeners, we previously reported that regions along the bilateral posterior superior temporal sulci (pSTS) and middle temporal gyri (pMTG) are preferentially activated when presented with recognizable action sounds. These regions have generally been hypothesized to represent primary loci for complex motion processing, including visual biological motion processing and audio-visual integration. However, it remained unclear whether, or to what degree, life-long visual experience might impact functions related to hearing perception or memory of sound-source actions. Using functional magnetic resonance imaging (fMRI), we compared brain regions activated in congenitally blind versus sighted listeners in response to hearing a wide range of recognizable human-produced action sounds (excluding vocalizations) versus unrecognized, backward-played versions of those sounds. Here, we show that recognized human action sounds commonly evoked activity in both groups along most of the left pSTS/pMTG complex, though with relatively greater activity in the right pSTS/pMTG by the blind group. These results indicate that portions of the postero-lateral temporal cortices contain domain-specific hubs for biological and/or complex motion processing independent of sensory-modality experience. Contrasting the two groups, the sighted listeners preferentially activated bilateral parietal plus medial and lateral frontal networks, whereas the blind listeners preferentially activated left anterior insula plus bilateral anterior calcarine and medial occipital regions, including what would otherwise have been visual-related cortex. These global-level network differences suggest that blind and sighted listeners may preferentially use different memory retrieval strategies when hearing and attempting to recognize action sounds.
    Human Brain Mapping 02/2011; 32(12):2241-55. · 6.88 Impact Factor
    ABSTRACT: Brain imaging is becoming a powerful tool in the study of human cerebral functions related to close personal relationships. Outside of subcortical structures traditionally thought to be involved in reward-related systems, a wide range of neuroimaging studies in relationship science indicate a prominent role for different cortical networks and cognitive factors. Thus, the field needs a better anatomical/network/whole-brain model to help translate scientific knowledge from lab bench to clinical models and ultimately to the patients suffering from disorders associated with love and couple relationships. The aim of the present review is to survey a wide range of functional magnetic resonance imaging (fMRI) studies to critically identify the cortical networks associated with passionate love, and to compare and contrast them with those for other types of love (such as maternal love and unconditional love for persons with intellectual disabilities). Retrospective review of pertinent neuroimaging literature. Review of published literature on fMRI studies of love illustrating brain regions associated with different forms of love. Although all fMRI studies of love point to the subcortical dopaminergic reward-related brain systems (involving dopamine and oxytocin receptors) for motivating individuals in pair-bonding, the present meta-analysis newly demonstrated that different types of love involve distinct cerebral networks, including those for higher cognitive functions such as social cognition and bodily self-representation. These meta-results provide the first stages of a global neuroanatomical model of cortical networks involved in emotions related to different aspects of love. Developing this model in future studies should help advance clinical approaches in sexual medicine and couple therapy.
    Journal of Sexual Medicine 11/2010; 7(11):3541-52. · 3.51 Impact Factor
    James W. Lewis
    ABSTRACT: Our ability to perceive and recognize objects, people, and meaningful action events is a cognitive function of prime importance, which is characterized by an interplay of visual, auditory, and sensory-motor processing. One goal of sensory neuroscience is to better understand multisensory perception, including how information from auditory and visual systems may merge to create stable, unified representations of objects and actions in our environment. This chapter summarizes and compares results from 49 paradigms published over the past decade that have explicitly examined human brain regions associated with audio-visual interactions. A series of meta-analyses compare and contrast distinct cortical networks preferentially activated under five major types of audio-visual interactions: (1) matching spatial and/or temporal features of nonnatural objects, (2–3) matching crossmodal features characteristic of natural objects (moving versus static images), (4) associating artificial audio-visual pairings (e.g., written/spoken language), and (5) an examination of networks activated when auditory and visual stimuli are incongruent. These meta-analysis results are discussed in the context of cognitive theories regarding how object knowledge representations may mesh with the multiple parallel pathways that appear to mediate audio-visual perception.
    12/2009; pages 155-190.
    ABSTRACT: With regard to hearing perception, it remains unclear whether, or to what extent, different conceptual categories of real-world sounds and related categorical knowledge are differentially represented in the brain. Semantic knowledge representations are reported to include the major divisions of living versus non-living things, plus more specific categories including animals, tools, biological motion, faces, and places; these categories are typically defined by their characteristic visual features. Here, we used functional magnetic resonance imaging (fMRI) to identify brain regions showing preferential activity to four categories of action sounds: non-vocal human and animal actions (living), plus mechanical and environmental sound-producing actions (non-living). The results showed a striking antero-posterior division in cortical representations for sounds produced by living versus non-living sources. Additionally, there were several significant differences by category, depending on whether the task was category-specific (e.g. human or not) versus non-specific (detect end-of-sound). In general, (1) human-produced sounds yielded robust activation in the bilateral posterior superior temporal sulci independent of task. Task demands modulated activation of left-lateralized fronto-parietal regions, bilateral insular cortices, and sub-cortical regions previously implicated in observation-execution matching, consistent with "embodied" and mirror-neuron network representations subserving recognition. (2) Animal action sounds preferentially activated the bilateral posterior insulae. (3) Mechanical sounds activated the anterior superior temporal gyri and parahippocampal cortices. (4) Environmental sounds preferentially activated dorsal occipital and medial parietal cortices.
    Overall, this multi-level dissociation of networks for preferentially representing distinct sound-source categories provides novel support for grounded cognition models that may underlie organizational principles for hearing perception.
    NeuroImage 06/2009; 47(4):1778-91. · 6.25 Impact Factor
    ABSTRACT: The ability to detect and rapidly process harmonic sounds, which in nature are typical of animal vocalizations and speech, can be critical for communication among conspecifics and for survival. Single-unit studies have reported neurons in auditory cortex sensitive to specific combinations of frequencies (e.g., harmonics), theorized to rapidly abstract or filter for specific structures of incoming sounds, where large ensembles of such neurons may constitute spectral templates. We studied the contribution of harmonic structure to activation of putative spectral templates in human auditory cortex by using a wide variety of animal vocalizations, as well as artificially constructed iterated rippled noises (IRNs). Both the IRNs and vocalization sounds were quantitatively characterized by calculating a global harmonics-to-noise ratio (HNR). Using functional MRI, we identified HNR-sensitive regions when presenting artificial IRNs and/or recordings of natural animal vocalizations. This activation included regions situated between functionally defined primary auditory cortices and regions preferential for processing human non-verbal vocalizations or speech sounds. These results demonstrate that the HNR of a sound reflects an important second-order acoustic signal attribute that parametrically activates distinct pathways of human auditory cortex. Thus, these results provide novel support for the presence of spectral templates, which may subserve a major role in the hierarchical processing of vocalizations as a distinct category of behaviorally relevant sound.
    Journal of Neuroscience 03/2009; 29(7):2283-96. · 6.91 Impact Factor
    ABSTRACT: A perception of coherent motion can be obtained in an otherwise ambiguous or illusory visual display by directing one's attention to a feature and tracking it. We demonstrate an analogous auditory effect in two separate sets of experiments. The temporal dynamics associated with the attention-dependent auditory motion closely matched those previously reported for attention-based visual motion. Since attention-based motion mechanisms appear to exist in both modalities, we also tested for multimodal (audiovisual) attention-based motion, using stimuli composed of interleaved visual and auditory cues. Although subjects were able to track a trajectory using cues from both modalities, no one spontaneously perceived "multimodal motion" across both visual and auditory cues. Rather, they reported motion perception only within each modality, thereby revealing a spatiotemporal limit on putative cross-modal motion integration. Together, results from these experiments demonstrate the existence of attention-based motion in audition, extending current theories of attention-based mechanisms from visual to auditory systems.
    Perception & Psychophysics 11/2008; 70(7):1207-16. · 1.37 Impact Factor
    ABSTRACT: Previously, we and others have shown that attention can enhance visual processing in a spatially specific manner that is retinotopically mapped in the occipital cortex. However, it is difficult to appreciate the functional significance of the spatial pattern of cortical activation just by examining the brain maps. In this study, we visualize the neural representation of the "spotlight" of attention using a back-projection of attention-related brain activation onto a diagram of the visual field. In the two main experiments, we examine the topography of attentional activation in the occipital and parietal cortices. In retinotopic areas, attentional enhancement is strongest at the locations of the attended target, but also spreads to nearby locations and even weakly to restricted locations in the opposite visual field. The dispersion of attentional effects around an attended site increases with the eccentricity of the target in a manner that roughly corresponds to a constant area of spread within the cortex. When averaged across multiple observers, these patterns appear consistent with a gradient model of spatial attention. However, individual observers exhibit complex variations that are unique but reproducible. Overall, these results suggest that the topography of visual attention for each individual is composed of a common theme plus a personal variation that may reflect their own unique "attentional style."
    Journal of Cognitive Neuroscience 09/2008; 21(7):1447-60. · 4.49 Impact Factor
    ABSTRACT: Our ability to manipulate and understand the use of a wide range of tools is a feature that sets humans apart from other animals. In right-handers, we previously reported that hearing hand-manipulated tool sounds preferentially activates a left hemisphere network of motor-related brain regions hypothesized to be related to handedness. Using functional magnetic resonance imaging, we compared cortical activation in strongly right-handed versus left-handed listeners categorizing tool sounds relative to animal vocalizations. Here we show that tool sounds preferentially evoke activity predominantly in the hemisphere "opposite" the dominant hand, in specific high-level motor-related and multisensory cortical regions, as determined by a separate task involving pantomiming tool-use gestures. This organization presumably reflects the idea that we typically learn the "meaning" of tool sounds in the context of using them with our dominant hand, such that the networks underlying motor imagery or action schemas may be recruited to facilitate recognition.
    Journal of Cognitive Neuroscience 09/2006; 18(8):1314-30. · 4.49 Impact Factor
    James W Lewis
    ABSTRACT: Greater manual dexterity and greater conceptual knowledge of tool use represent two main features that distinguish humans from other primates. Studies of human brain lesions suggest that the left hemisphere (at least in right-handed people) includes a system for processing manual skills specialized for tool use, which interacts with another system involved more with conceptualizing, planning, and accessing knowledge associated with tool use. Growing evidence from recent neuroimaging studies supports this organization, and studies have begun to highlight specific brain regions and pathways that may be necessary for tool use. This review compares and summarizes results from 64 paradigms published over the past decade that have examined cortical regions associated with tool-use skills and tool knowledge. A meta-analysis revealed cortical networks in both hemispheres, though with a clear left-hemisphere bias, which may be organized to optimally represent action knowledge. Portions of this network appear to form part of a system that is tightly linked with language systems, which is discussed together with the effects that handedness may have on the cortical organization for tool use.
    The Neuroscientist 07/2006; 12(3):211-31. · 5.63 Impact Factor
    ABSTRACT: Human listeners can effortlessly categorize a wide range of environmental sounds. Whereas categorizing visual object classes (e.g., faces, tools, houses, etc.) preferentially activates different regions of visually sensitive cortex, it is not known whether the auditory system exhibits a similar organization for different types or categories of complex sounds outside of human speech. Using functional magnetic resonance imaging, we show that hearing and correctly or incorrectly categorizing animal vocalizations (as opposed to hand-manipulated tool sounds) preferentially activated middle portions of the left and right superior temporal gyri (mSTG). On average, the vocalization sounds had much greater harmonic and phase-coupling content (acoustically similar to human speech sounds), which may represent some of the signal attributes that preferentially activate the mSTG regions. In contrast, correctly categorized tool sounds (and even animal sounds that were miscategorized as being tool-related sounds) preferentially activated a widespread, predominantly left hemisphere cortical "mirror network." This network directly overlapped substantial portions of motor-related cortices that were independently activated when participants pantomimed tool manipulations with their right (dominant) hand. These data suggest that the recognition processing for some sounds involves a causal reasoning mechanism (a high-level auditory "how" pathway), automatically evoked when attending to hand-manipulated tool sounds, that effectively associates the dynamic motor actions likely to have produced the sound(s).
    Journal of Neuroscience 06/2005; 25(21):5148-58. · 6.91 Impact Factor
    ABSTRACT: To identify the brain regions preferentially involved in environmental sound recognition (comprising portions of a putative auditory 'what' pathway), we collected functional imaging data while listeners attended to a wide range of sounds, including those produced by tools, animals, liquids and dropped objects. These recognizable sounds, in contrast to unrecognizable, temporally reversed control sounds, evoked activity in a distributed network of brain regions previously associated with semantic processing, located predominantly in the left hemisphere, but also included strong bilateral activity in posterior portions of the middle temporal gyri (pMTG). Comparisons with earlier studies suggest that these bilateral pMTG foci partially overlap cortex implicated in high-level visual processing of complex biological motion and recognition of tools and other artifacts. We propose that the pMTG foci process multimodal (or supramodal) information about objects and object-associated motion, and that this may represent 'action' knowledge that can be recruited for purposes of recognition of familiar environmental sound-sources. These data also provide a functional and anatomical explanation for the symptoms of pure auditory agnosia for environmental sounds reported in human lesion studies.
    Cerebral Cortex 10/2004; 14(9):1008-21. · 8.31 Impact Factor
    ABSTRACT: We present a case of a 64-year-old, right-handed female with a metastatic breast cancer lesion involving the left posterior inferior temporal lobe, causing a complete loss of the ability to visually recognize common objects and words. After her symptoms resolved on corticosteroid therapy, functional magnetic resonance imaging (fMRI) mapping demonstrated strong left-hemispheric dominance for word recognition and right-hemispheric dominance for object recognition. The case illustrates the relationships among ventral occipito-temporal cortical activation, lesion localization, and lesion-induced deficits of higher visual function. The relationship between hemispheric dominance determined by fMRI and the risk of postoperative deficit depends on the specific visual function of interest.
    Journal of Computer Assisted Tomography 01/2004; 28(1):63-7. · 1.58 Impact Factor

Publication Stats

594 Citations
81.41 Total Impact Points

Institutions

  • 2005–2013
    • West Virginia University
      • Center for Advanced Imaging
      • Center for Neuroscience
      • Department of Physiology & Pharmacology
      Morgantown, WV, United States
  • 2012
    • University of Geneva
      • School of Psychology
      Genève, GE, Switzerland
  • 2006–2011
    • University of Virginia
      Charlottesville, Virginia, United States
  • 2010
    • Syracuse University
      • Department of Psychology
      Syracuse, NY, United States
  • 2008
    • University of Wisconsin - Milwaukee
      Milwaukee, Wisconsin, United States
  • 2004–2008
    • Medical College of Wisconsin
      • Department of Radiology
      Milwaukee, Wisconsin, United States