Jack M. Loomis

University of California, Santa Barbara, Santa Barbara, California, United States

Publications (167) · 327.57 Total Impact

  •
    ABSTRACT: Many developers wish to capitalize on touch-screen technology for developing aids for the blind, particularly by incorporating vibrotactile stimulation to convey patterns on their surfaces, which otherwise are featureless. Our belief is that they will need to take into account basic research on haptic perception in designing these graphics interfaces. We point out constraints and limitations in haptic processing that affect the use of these devices. We also suggest ways to use sound to augment basic information from touch, and we include evaluation data from users of a touch-screen device with vibrotactile and auditory feedback that we have been developing, called a vibro-audio interface.
    Multisensory research 11/2014; 27(5-6):359-78. DOI:10.1163/22134808-00002447 · 0.78 Impact Factor
  • Jack M. Loomis · Roberta L. Klatzky · Nicholas A. Giudice
    ABSTRACT: The chapter deals with a form of transient spatial representation referred to as a spatial image. Like a percept, it is externalized, scaled to the environment, and can appear in any direction about the observer. It transcends the concept of modality, as it can be based on inputs from the three spatial senses, from language, and from long-term memory. Evidence is presented that supports each of the claimed properties of the spatial image, showing that it is quite different from a visual image. Much of the evidence presented is based on spatial updating. A major concern is whether spatial images from different input modalities are functionally equivalent: that once instantiated in working memory, the spatial images from different modalities have the same functional characteristics with respect to subsequent processing, such as that involved in spatial updating. Going further, the research provides some evidence that spatial images are amodal (i.e., do not retain modality-specific features).
  • Jack M. Loomis
    ABSTRACT: The focus here is on the paradoxical finding that whereas visually perceived egocentric distance is proportional to physical distance out to at least 20 m under full-cue viewing, there are large distortions of shape within the same range, reflecting a large anisotropy of depth and frontal extents on the ground plane. Three theories of visual space perception are presented, theories that are relevant to understanding this paradoxical result. The theory by Foley, Ribeiro-Filho, and Da Silva is based on the idea that when the visual system computes the length of a visible extent, the effective visual angle is a non-linear increasing function of the actual visual angle. The theory of Durgin and Li is based on the idea that two angular measures, optical slant and angular declination, are over-perceived. The theory of Ooi and He is based on both a default perceptual representation of the ground surface in the absence of visual cues and the “sequential surface integration process” whereby an internal representation of the visible ground surface is constructed starting from beneath the observer’s feet and extending outward.
    Psychology and Neuroscience 01/2014; 7(3):245-251. DOI:10.3922/j.psns.2014.034
  • Jack M. Loomis · Roberta L. Klatzky · Nicholas A. Giudice

    Multisensory Imagery: Theory & Applications, Edited by S. Lacey, R. Lawson, 01/2013: pages 131-156; Springer.
  •
    ABSTRACT: Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate.
    Experimental Brain Research 10/2012; 224(1). DOI:10.1007/s00221-012-3295-1 · 2.04 Impact Factor
  •
    ABSTRACT: Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.
    Attention Perception & Psychophysics 05/2012; 74(6):1260-7. DOI:10.3758/s13414-012-0311-2 · 2.17 Impact Factor
  • Nicholas A. Giudice · J. M. Loomis · R. L. Klatzky

    Assistive Technology for Blindness and Low Vision, Edited by R. Manduchi, S. Kurniawan, 01/2012: pages 162-191; CRC Press.
  •
    ABSTRACT: This research uses a novel integration paradigm to investigate whether target locations read in from long-term memory (LTM) differ from perceptually encoded inputs in spatial working memory (SWM) with respect to systematic spatial error and/or noise, and whether SWM can simultaneously encompass both of these sources. Our results provide evidence for a composite representation of space in SWM derived from both perception and LTM, albeit with a loss in spatial precision of locations retrieved from LTM. More generally, the data support the concept of a spatial image in working memory and extend its potential sources to representations retrieved from LTM.
    Spatial Cognition and Computation 01/2012; 13(2). DOI:10.1080/13875868.2012.678522 · 1.22 Impact Factor
  • M. Lappe · M. Stiels · H. Frenz · J. Loomis

    Journal of Vision 09/2011; 11(11):930-930. DOI:10.1167/11.11.930 · 2.39 Impact Factor
  • Markus Lappe · Maren Stiels · Harald Frenz · Jack M. Loomis
    ABSTRACT: When humans use vision to gauge the travel distance of an extended forward movement, they often underestimate the movement's extent. This underestimation can be explained by leaky path integration, an integration of the movement to obtain distance. Distance underestimation occurs because this integration is imperfect and contains a leak that increases with distance traveled. We asked human observers to estimate the distance from a starting location for visually simulated movements in a virtual environment. The movements occurred along curved paths that veered left and right around a central forward direction. In this case, the distance that has to be integrated (i.e., the beeline distance between origin and endpoint) and the distance that is traversed (the path length along the curve) are distinct. We then tested whether the leak accumulated with distance from the origin or with traversed distance along the curved path. Leaky integration along the path makes the seemingly counterintuitive prediction that the estimated origin-to-endpoint distance should decrease with increasing veering, because the length of the path over which the integration occurs increases, leading to a larger leak effect. The results matched the prediction: movements of identical origin-to-endpoint distance were judged as shorter when the path became longer. We conclude that leaky path integration from visual motion is performed along the traversed path even when a straight beeline distance is calculated.
    Experimental Brain Research 07/2011; 212(1):81-9. DOI:10.1007/s00221-011-2696-x · 2.04 Impact Factor
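    The leaky-integration account in the abstract above can be sketched as a toy simulation. This is only an illustration of the model's logic, not the authors' implementation: the leak constant `k`, the path shapes, and the function name are assumptions made up for the example.

    ```python
    import math

    def leaky_beeline_estimate(path, k=0.1):
        """Estimate origin-to-endpoint distance by leaky path integration:
        the internal position estimate decays in proportion to the distance
        traveled along the path (hypothetical leak constant k)."""
        x = y = 0.0                      # internal (leaky) position estimate
        px, py = path[0]
        for qx, qy in path[1:]:
            dx, dy = qx - px, qy - py
            step = math.hypot(dx, dy)    # traversed path length for this step
            x += dx - k * step * x       # integrate displacement, with a leak
            y += dy - k * step * y       # that grows with traversed distance
            px, py = qx, qy
        return math.hypot(x, y)          # perceived beeline distance

    # Two simulated 10-m forward movements with identical endpoints:
    n = 1000
    straight = [(10.0 * i / n, 0.0) for i in range(n + 1)]
    curved = [(10.0 * i / n, 2.0 * math.sin(4 * math.pi * i / n))
              for i in range(n + 1)]

    # The curved path is longer, so the leak accumulates more and the same
    # beeline is judged shorter -- the study's central prediction.
    assert leaky_beeline_estimate(curved) < leaky_beeline_estimate(straight) < 10.0
    ```

    Under this model, underestimation of the straight path and the extra shortening of the veering path both fall out of a single leak term applied per unit of traversed path, matching the result reported in the abstract.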
  •
    ABSTRACT: In many nonhuman species, neural computations of navigational information such as position and orientation are not tied to a specific sensory modality [1, 2]. Rather, spatial signals are integrated from multiple input sources, likely leading to abstract representations of space. In contrast, the potential for abstract spatial representations in humans is not known, because most neuroscientific experiments on human navigation have focused exclusively on visual cues. Here, we tested the modality independence hypothesis with two functional magnetic resonance imaging (fMRI) experiments that characterized computations in regions implicated in processing spatial layout [3]. According to the hypothesis, such regions should be recruited for spatial computation of 3D geometric configuration, independent of a specific sensory modality. In support of this view, sighted participants showed strong activation of the parahippocampal place area (PPA) and the retrosplenial cortex (RSC) for visual and haptic exploration of information-matched scenes but not objects. Functional connectivity analyses suggested that these effects were not related to visual recoding, which was further supported by a similar preference for haptic scenes found with blind participants. Taken together, these findings establish the PPA/RSC network as critical in modality-independent spatial computations and provide important evidence for a theory of high-level abstract spatial information processing in the human brain.
    Current biology: CB 06/2011; 21(11):984-9. DOI:10.1016/j.cub.2011.04.038 · 9.57 Impact Factor
  • Nicholas A. Giudice · Maryann R. Betty · Jack M. Loomis
    ABSTRACT: This research examined whether visual and haptic map learning yield functionally equivalent spatial images in working memory, as evidenced by similar encoding bias and updating performance. In 3 experiments, participants learned 4-point routes either by seeing or feeling the maps. At test, blindfolded participants made spatial judgments about the maps from imagined perspectives that were either aligned or misaligned with the maps as represented in working memory. Results from Experiments 1 and 2 revealed a highly similar pattern of latencies and errors between visual and haptic conditions. These findings extend the well-known alignment biases for visual map learning to haptic map learning, provide further evidence of haptic updating, and most important, show that learning from the 2 modalities yields very similar performance across all conditions. Experiment 3 found the same encoding biases and updating performance with blind individuals, demonstrating that functional equivalence cannot be due to visual recoding and is consistent with an amodal hypothesis of spatial images.
    Journal of Experimental Psychology Learning Memory and Cognition 02/2011; 37(3):621-34. DOI:10.1037/a0022331 · 2.86 Impact Factor
  • J. W. Kelly · A. C. Beall · J. M. Loomis

    Journal of Vision 11/2010; 2(7):718-718. DOI:10.1167/2.7.718 · 2.39 Impact Factor
  • J. W. Kelly · A. C. Beall · J. M. Loomis

    Journal of Vision 10/2010; 3(9):213-213. DOI:10.1167/3.9.213 · 2.39 Impact Factor
  • J. M. Loomis · A. C. Beall

    Journal of Vision 10/2010; 3(9):132-132. DOI:10.1167/3.9.132 · 2.39 Impact Factor
  • K. L. Macuga · J. M. Loomis · A. C. Beall

    Journal of Vision 10/2010; 3(9):552-552. DOI:10.1167/3.9.552 · 2.39 Impact Factor
  • K. L. Macuga · A. C. Beall · J. M. Loomis · R. S. Smith · J. W. Kelly

    Journal of Vision 09/2010; 5(8):314-314. DOI:10.1167/5.8.314 · 2.39 Impact Factor
  • N. A. Giudice · J. M. Loomis

    Journal of Vision 06/2010; 6(6):178-178. DOI:10.1167/6.6.178 · 2.39 Impact Factor
  • J. Campos · J. Siegle · B. Mohler · H. Bülthoff · J. Loomis

    Journal of Vision 05/2010; 8(6):1147-1147. DOI:10.1167/8.6.1147 · 2.39 Impact Factor
  • J. Siegle · J. Campos · B. Mohler · J. Loomis · H. Bülthoff

    Journal of Vision 05/2010; 8(6):1043-1043. DOI:10.1167/8.6.1043 · 2.39 Impact Factor

Publication Stats

8k Citations
327.57 Total Impact Points


  • 1975-2014
    • University of California, Santa Barbara
      • Department of Psychological and Brain Sciences
      • Department of Geography
      • Department of Computer Science
      Santa Barbara, California, United States
  • 1996
    • University of São Paulo
      • Department of Psychobiology
      São Paulo, São Paulo, Brazil