Jack M Loomis

University of Maine, Orono, ME, United States


Publications (155) · 303.66 Total Impact

  • Multisensory Imagery: Theory & Applications, Edited by S. Lacey, R. Lawson, 01/2013: pages 131-156; Springer.
  •
    ABSTRACT: Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate.
    Experimental Brain Research 10/2012; · 2.17 Impact Factor
  •
    ABSTRACT: Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.
    Attention Perception & Psychophysics 05/2012; 74(6):1260-7. · 1.97 Impact Factor
  •
    ABSTRACT: This research uses a novel integration paradigm to investigate whether target locations read in from long-term memory (LTM) differ from perceptually encoded inputs in spatial working-memory (SWM) with respect to systematic spatial error and/or noise, and whether SWM can simultaneously encompass both of these sources. Our results provide evidence for a composite representation of space in SWM derived from both perception and LTM, albeit with a loss in spatial precision of locations retrieved from LTM. More generally, the data support the concept of a spatial image in working memory and extend its potential sources to representations retrieved from LTM.
    Spatial Cognition and Computation 01/2012; · 1.22 Impact Factor
  • Assistive Technology for Blindness and Low Vision, Edited by R. Manduchi, S. Kurniawan, 01/2012: pages 162-191; CRC Press.
  •
    ABSTRACT: When humans use vision to gauge the travel distance of an extended forward movement, they often underestimate the movement's extent. This underestimation can be explained by leaky path integration, an integration of the movement to obtain distance. Distance underestimation occurs because this integration is imperfect and contains a leak that increases with distance traveled. We asked human observers to estimate the distance from a starting location for visually simulated movements in a virtual environment. The movements occurred along curved paths that veered left and right around a central forward direction. In this case, the distance that has to be integrated (i.e., the beeline distance between origin and endpoint) and the distance that is traversed (the path length along the curve) are distinct. We then tested whether the leak accumulated with distance from the origin or with traversed distance along the curved path. Leaky integration along the path makes the seemingly counterintuitive prediction that the estimated origin-to-endpoint distance should decrease with increasing veering, because the length of the path over which the integration occurs increases, leading to a larger leak effect. The results matched the prediction: movements of identical origin-to-endpoint distance were judged as shorter when the path became longer. We conclude that leaky path integration from visual motion is performed along the traversed path even when a straight beeline distance is calculated.
    Experimental Brain Research 07/2011; 212(1):81-9. · 2.17 Impact Factor
  •
    ABSTRACT: In many nonhuman species, neural computations of navigational information such as position and orientation are not tied to a specific sensory modality [1, 2]. Rather, spatial signals are integrated from multiple input sources, likely leading to abstract representations of space. In contrast, the potential for abstract spatial representations in humans is not known, because most neuroscientific experiments on human navigation have focused exclusively on visual cues. Here, we tested the modality independence hypothesis with two functional magnetic resonance imaging (fMRI) experiments that characterized computations in regions implicated in processing spatial layout [3]. According to the hypothesis, such regions should be recruited for spatial computation of 3D geometric configuration, independent of a specific sensory modality. In support of this view, sighted participants showed strong activation of the parahippocampal place area (PPA) and the retrosplenial cortex (RSC) for visual and haptic exploration of information-matched scenes but not objects. Functional connectivity analyses suggested that these effects were not related to visual recoding, which was further supported by a similar preference for haptic scenes found with blind participants. Taken together, these findings establish the PPA/RSC network as critical in modality-independent spatial computations and provide important evidence for a theory of high-level abstract spatial information processing in the human brain.
    Current biology: CB 06/2011; 21(11):984-9. · 10.99 Impact Factor
  •
    ABSTRACT: This research examined whether visual and haptic map learning yield functionally equivalent spatial images in working memory, as evidenced by similar encoding bias and updating performance. In 3 experiments, participants learned 4-point routes either by seeing or feeling the maps. At test, blindfolded participants made spatial judgments about the maps from imagined perspectives that were either aligned or misaligned with the maps as represented in working memory. Results from Experiments 1 and 2 revealed a highly similar pattern of latencies and errors between visual and haptic conditions. These findings extend the well-known alignment biases for visual map learning to haptic map learning, provide further evidence of haptic updating, and most important, show that learning from the 2 modalities yields very similar performance across all conditions. Experiment 3 found the same encoding biases and updating performance with blind individuals, demonstrating that functional equivalence cannot be due to visual recoding and is consistent with an amodal hypothesis of spatial images.
    Journal of Experimental Psychology Learning Memory and Cognition 02/2011; 37(3):621-34. · 3.10 Impact Factor
  • J. W. Kelly, A. C. Beall, J. M. Loomis
    Journal of Vision 11/2010; 2(7):718-718. · 2.73 Impact Factor
J. W. Kelly, A. C. Beall, J. M. Loomis
    Journal of Vision 10/2010; 3(9):213-213. · 2.73 Impact Factor
K. L. Macuga, J. M. Loomis, A. C. Beall
    Journal of Vision 10/2010; 3(9):552-552. · 2.73 Impact Factor
J. M. Loomis, A. C. Beall
    Journal of Vision 10/2010; 3(9):132-132. · 2.73 Impact Factor
  • Journal of Vision 09/2010; 5(8):314-314. · 2.73 Impact Factor
  • N. A. Giudice, J. M. Loomis
    Journal of Vision 06/2010; 6(6):178-178. · 2.73 Impact Factor
  • Journal of Vision 05/2010; 8(6):1147-1147. · 2.73 Impact Factor
  • Journal of Vision 05/2010; 8(6):1043-1043. · 2.73 Impact Factor
  • J. W. Kelly, J. M. Loomis, A. C. Beall
    Journal of Vision 01/2010; 4(8):815-815. · 2.73 Impact Factor
  • Journal of Vision 01/2010; 4(8):912-912. · 2.73 Impact Factor
  • K. L. Macuga, J. M. Loomis, A. C. Beall
    Journal of Vision 01/2010; 4(8):2-2. · 2.73 Impact Factor

Publication Stats

5k Citations
303.66 Total Impact Points

Institutions

  • 2011–2012
    • University of Maine
      • School of Computing and Information Science
      • Department of Spatial Information Sciences and Engineering
      Orono, ME, United States
    • University of Münster
      • Institute for Psychology
      Münster, North Rhine-Westphalia, Germany
    • The University of Edinburgh
      • Centre for Cognitive and Neural Systems
      Edinburgh, Scotland, United Kingdom
  • 1975–2012
    • University of California, Santa Barbara
      • Department of Psychological and Brain Sciences
      • Department of Computer Science
      • Department of Geography
      Santa Barbara, CA, United States
  • 2009
    • Università degli Studi di Milano-Bicocca
      • Department of Psychology
      Milano, Lombardy, Italy
  • 2002–2009
    • Carnegie Mellon University
      • Department of Psychology
      Pittsburgh, PA, United States
    • George Washington University
      • Department of Psychology
      Washington, DC, United States
  • 2008
    • Vanderbilt University
      • Department of Psychology
      Nashville, TN, United States
  • 1998
    • Collège de France
      Paris, Île-de-France, France