Jack M. Loomis

University of California, Santa Barbara, Santa Barbara, California, United States

Publications (171)

  • Source
    ABSTRACT: Many developers wish to capitalize on touch-screen technology for developing aids for the blind, particularly by incorporating vibrotactile stimulation to convey patterns on their surfaces, which otherwise are featureless. Our belief is that they will need to take into account basic research on haptic perception in designing these graphics interfaces. We point out constraints and limitations in haptic processing that affect the use of these devices. We also suggest ways to use sound to augment basic information from touch, and we include evaluation data from users of a touch-screen device with vibrotactile and auditory feedback that we have been developing, called a vibro-audio interface.
    Full-text · Article · Nov 2014 · Multisensory Research
  • ABSTRACT: The chapter deals with a form of transient spatial representation referred to as a spatial image. Like a percept, it is externalized, scaled to the environment, and can appear in any direction about the observer. It transcends the concept of modality, as it can be based on inputs from the three spatial senses, from language, and from long-term memory. Evidence is presented that supports each of the claimed properties of the spatial image, showing that it is quite different from a visual image. Much of the evidence presented is based on spatial updating. A major concern is whether spatial images from different input modalities are functionally equivalent: that once instantiated in working memory, the spatial images from different modalities have the same functional characteristics with respect to subsequent processing, such as that involved in spatial updating. Going further, the research provides some evidence that spatial images are amodal (i.e., do not retain modality-specific features).
    No preview · Article · Mar 2014
  • Source
    Jack M. Loomis
    ABSTRACT: The focus here is on the paradoxical finding that whereas visually perceived egocentric distance is proportional to physical distance out to at least 20 m under full-cue viewing, there are large distortions of shape within the same range, reflecting a large anisotropy of depth and frontal extents on the ground plane. Three theories of visual space perception are presented, theories that are relevant to understanding this paradoxical result. The theory by Foley, Ribeiro-Filho, and Da Silva is based on the idea that when the visual system computes the length of a visible extent, the effective visual angle is a non-linear increasing function of the actual visual angle. The theory of Durgin and Li is based on the idea that two angular measures, optical slant and angular declination, are over-perceived. The theory of Ooi and He is based on both a default perceptual representation of the ground surface in the absence of visual cues and the “sequential surface integration process” whereby an internal representation of the visible ground surface is constructed starting from beneath the observer’s feet and extending outward.
    Preview · Article · Jan 2014 · Psychology and Neuroscience
  • Source

    Full-text · Chapter · Jan 2013
  • Source
    ABSTRACT: Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate.
    Full-text · Article · Oct 2012 · Experimental Brain Research
  • Source
    ABSTRACT: Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.
    Full-text · Article · May 2012 · Attention Perception & Psychophysics
  • Source
    Nicholas A. Giudice · J. M. Loomis · R. L. Klatzky

    Full-text · Chapter · Jan 2012
  • Source
    ABSTRACT: This research uses a novel integration paradigm to investigate whether target locations read in from long-term memory (LTM) differ from perceptually encoded inputs in spatial working memory (SWM) with respect to systematic spatial error and/or noise, and whether SWM can simultaneously encompass both of these sources. Our results provide evidence for a composite representation of space in SWM derived from both perception and LTM, albeit with a loss in spatial precision of locations retrieved from LTM. More generally, the data support the concept of a spatial image in working memory and extend its potential sources to representations retrieved from LTM.
    Full-text · Article · Jan 2012 · Spatial Cognition and Computation
  • M. Lappe · M. Stiels · H. Frenz · J. Loomis

    No preview · Article · Sep 2011 · Journal of Vision
  • Source
    Markus Lappe · Maren Stiels · Harald Frenz · Jack M. Loomis
    ABSTRACT: When humans use vision to gauge the travel distance of an extended forward movement, they often underestimate the movement's extent. This underestimation can be explained by leaky path integration, an integration of the movement to obtain distance. Distance underestimation occurs because this integration is imperfect and contains a leak that increases with distance traveled. We asked human observers to estimate the distance from a starting location for visually simulated movements in a virtual environment. The movements occurred along curved paths that veered left and right around a central forward direction. In this case, the distance that has to be integrated (i.e., the beeline distance between origin and endpoint) and the distance that is traversed (the path length along the curve) are distinct. We then tested whether the leak accumulated with distance from the origin or with traversed distance along the curved path. Leaky integration along the path makes the seemingly counterintuitive prediction that the estimated origin-to-endpoint distance should decrease with increasing veering, because the length of the path over which the integration occurs increases, leading to a larger leak effect. The results matched the prediction: movements of identical origin-to-endpoint distance were judged as shorter when the path became longer. We conclude that leaky path integration from visual motion is performed along the traversed path even when a straight beeline distance is calculated. [An illustrative simulation of this prediction appears after the publication list.]
    Full-text · Article · Jul 2011 · Experimental Brain Research
  • Source
    ABSTRACT: In many nonhuman species, neural computations of navigational information such as position and orientation are not tied to a specific sensory modality [1, 2]. Rather, spatial signals are integrated from multiple input sources, likely leading to abstract representations of space. In contrast, the potential for abstract spatial representations in humans is not known, because most neuroscientific experiments on human navigation have focused exclusively on visual cues. Here, we tested the modality independence hypothesis with two functional magnetic resonance imaging (fMRI) experiments that characterized computations in regions implicated in processing spatial layout [3]. According to the hypothesis, such regions should be recruited for spatial computation of 3D geometric configuration, independent of a specific sensory modality. In support of this view, sighted participants showed strong activation of the parahippocampal place area (PPA) and the retrosplenial cortex (RSC) for visual and haptic exploration of information-matched scenes but not objects. Functional connectivity analyses suggested that these effects were not related to visual recoding, which was further supported by a similar preference for haptic scenes found with blind participants. Taken together, these findings establish the PPA/RSC network as critical in modality-independent spatial computations and provide important evidence for a theory of high-level abstract spatial information processing in the human brain.
    Full-text · Article · Jun 2011 · Current Biology: CB
  • Source
    Nicholas A. Giudice · Maryann R. Betty · Jack M. Loomis
    ABSTRACT: This research examined whether visual and haptic map learning yield functionally equivalent spatial images in working memory, as evidenced by similar encoding bias and updating performance. In 3 experiments, participants learned 4-point routes either by seeing or feeling the maps. At test, blindfolded participants made spatial judgments about the maps from imagined perspectives that were either aligned or misaligned with the maps as represented in working memory. Results from Experiments 1 and 2 revealed a highly similar pattern of latencies and errors between visual and haptic conditions. These findings extend the well-known alignment biases for visual map learning to haptic map learning, provide further evidence of haptic updating, and most important, show that learning from the 2 modalities yields very similar performance across all conditions. Experiment 3 found the same encoding biases and updating performance with blind individuals, demonstrating that functional equivalence cannot be due to visual recoding and is consistent with an amodal hypothesis of spatial images.
    Full-text · Article · Feb 2011 · Journal of Experimental Psychology Learning Memory and Cognition
  • J. W. Kelly · A. C. Beall · J. M. Loomis
    ABSTRACT: Judgments of exocentric direction are quite common, especially in judging where others are looking. In this experiment, subjects were shown two targets in a large field and were asked to judge the direction specified by the targets. They did so by noting which point on a distant fence appeared collinear with the two targets. Thus, subjects had to imagine a line connecting the targets and then extrapolate this imagined line to the fence. The targets ranged in egocentric distance from 5 to 20 m, with target-to-target angular separations of 45, 90, and 135 deg. Subjects estimated the point of collinearity by marking a hand-held 360-deg panoramic cylinder representing their vistas. The two targets and the judged point of collinearity were often in quite different directions; in some conditions, only one or two of these three points were within the subjects' momentary field of view. Overall, performance was quite accurate: the mean estimates in the 60 different configurations exhibited absolute errors averaging only 5 deg, and signed errors were even smaller. To perform with such accuracy, subjects must have perceived the relative locations of the two targets quite accurately and must have exhibited little systematic error in the sensorimotor integration process involved in extrapolating the imagined line in space.
    No preview · Article · Nov 2010 · Journal of Vision
  • J. W. Kelly · A. C. Beall · J. M. Loomis
    ABSTRACT: Discrete visual displacement of a room has been shown to cause body sway in adults and cause children to stagger (Lee & Aronson, 1974; Lee & Lishman, 1975). Sinusoidal displacement has been shown to cause body sway at the frequency of the room motion (Dijkstra et al., 1994). Presumably this effect is due to an attempt to stabilize posture with respect to the external environment. Our experiments tested the idea that visually controlled postural stabilization may occur in the absence of optic flow. To investigate this, we measured postural response to a moving room presented in virtual reality. Subjects viewed either a luminance-defined dioptic environment containing smooth optic flow or a rapidly scintillating random dot cinematogram (SRDC) environment (Julesz, 1971), devoid of any optic flow relating to the task. The SRDC environments consisted of a sequence of random-dot stereograms with single-frame lifetimes, which to each eye appeared as a scintillating dot display of uniform density. Thus, the SRDC stimulus contained no optic flow related to the motion of the room. In both conditions, subjects stood 30 cm from the wall of a room that oscillated longitudinally at 0.2 Hz with an amplitude of 10 cm. An optical tracking system measured body sway during exposure to both a moving and a stationary (control) room. Spectral analysis of the body motion data revealed significant body sway at the frequency of the visual stimulus (0.2 Hz) for both dioptic and SRDC conditions. During dioptic exposure, room motion produced body sway amplitude that was 5.6 times greater than in the control condition. Similarly, SRDC exposure resulted in sway amplitude that was 2.4 times greater than the control. These data suggest that optic flow is not necessary for visual control of posture. Rather, perceived motion, regardless of how it is evoked, drives postural control. [A sketch of this spectral analysis appears after the publication list.]
    No preview · Article · Oct 2010 · Journal of Vision
  • J. M. Loomis · A. C. Beall
    ABSTRACT: Using computer graphics techniques, we are able to place subjects in immersive virtual environments displayed only as scintillating random dot cinematograms (SRDCs) with 1-frame lifetimes (Julesz, 1971). Thus, although each eye sees only a scintillating pattern of random dots of uniform density, the subjects experience moving about within room-sized virtual environments. Without training, subjects are immediately able to perform a wide range of complex spatial behaviors, including aiming toward targets, steering along curving paths, and intercepting moving objects, even though there is no optic flow correlated with the environments or actions. Formal experiments done so far deal with 2 forms of vehicle steering: steering a curving path and steering a straight path in the presence of lateral perturbations. Two viewing conditions have been compared for each task: dioptic stimuli with smooth optic flow produced by high-contrast environmental features and SRDCs with the same environmental features raised above the ground plane. The features were approximately matched for visibility in the two conditions. For the straight paths, the rms error of steering performance is 25% greater for SRDCs, and for the curving paths, rms error is about 80% greater. The ease and accuracy with which subjects can perform complex spatial behaviors with SRDCs signify that optic flow is not necessary for visually controlled locomotion. The suggestion is that optic flow normally acts through perceived flow in the control of spatial behavior. Furthermore, to the extent that subjects are utilizing aspects of optic flow to control behavior (e.g., splay rate), these appear to be aspects of the perceived flow instead.
    No preview · Article · Oct 2010 · Journal of Vision
  • Source
    J. M. Loomis · D. R. Montello · R. L. Klatzky

    Full-text · Article · Oct 2010 · Progress in Human Geography
  • K. L. Macuga · J. M. Loomis · A. C. Beall
    ABSTRACT: Observers can perceive heading over a ground plane with an accuracy of 1.2° using radial patterns of optic flow (Warren et al., 1988). Can observers successfully use information other than optic flow to extract heading? An alternative idea is that the perceived flow of visible elements can be used even when optic flow is absent. To investigate this question, we determined the accuracy of heading perception using two stimuli: a luminance-defined dioptic stimulus containing smooth optic flow and a scintillating random dot cinematogram (SRDC) stimulus (Julesz, 1971), devoid of any optic flow relating to the task. The SRDC stimulus consists of a sequence of random-dot stereograms with single-frame lifetimes, which to each eye appears as a scintillating display of uniform dot density. Thus, there is no relevant optic flow signal in the SRDC stimulus. We then employed a discrimination task to assess translational heading thresholds for each condition using the method of constant stimuli. Subjects viewed simulated self-motion parallel to a ground plane covered with randomly placed objects through a head-mounted display. In the last frame of each trial, motion ceased and a vertical target line appeared at the horizon, remaining visible until a response was made. Observers were required to judge whether they were moving to the left or to the right of the target. The bearing angle between the heading direction and the target varied randomly between ±0.5° and 6.0°. Environmental features were approximately matched for visibility in the two conditions. Data were collapsed across heading direction and positive-negative bearing angles. Mean 75%-correct thresholds for 4 observers were less than ∼1° for the dioptic condition and less than ∼2° for the SRDC condition. Thus, observers can perceive heading only slightly less accurately using the SRDC stimuli than using a stimulus with optic flow. [A sketch of the threshold computation appears after the publication list.]
    No preview · Article · Oct 2010 · Journal of Vision
  • K. L. Macuga · A. C. Beall · J. M. Loomis · R. S. Smith · J. W. Kelly

    No preview · Article · Sep 2010 · Journal of Vision
  • N. A. Giudice · J. M. Loomis

    No preview · Article · Jun 2010 · Journal of Vision
  • Source
    J. Campos · J. Siegle · B. Mohler · H. Bulthoff · J. Loomis

    Full-text · Article · May 2010 · Journal of Vision
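
A note on the leaky path integration result above (Lappe, Stiels, Frenz & Loomis, Jul 2011): the counterintuitive prediction can be made concrete with a small simulation. The sketch below is illustrative and not the authors' model code; it assumes a simple leaky accumulator in which the running estimate of distance from the origin grows with each increment of beeline distance and decays in proportion to the path length traversed, with an arbitrary leak rate k = 0.05 per meter. Because a veering path has a longer arc length for the same origin-to-endpoint distance, the leak acts over more path and the final estimate comes out smaller, matching the reported judgments.

    import numpy as np

    def leaky_estimate(path_xy, k=0.05):
        """Leaky accumulation of distance-from-origin along a traversed path.

        path_xy: (N, 2) array of positions along the path, starting at the origin.
        k: leak rate per meter of traversed path (arbitrary illustrative value).
        """
        D = 0.0
        for i in range(1, len(path_xy)):
            ds = np.linalg.norm(path_xy[i] - path_xy[i - 1])  # step of traversed path length
            db = np.linalg.norm(path_xy[i]) - np.linalg.norm(path_xy[i - 1])  # change in beeline distance
            D += db - k * D * ds  # gain from beeline growth, leak proportional to path traversed
        return D

    x = np.linspace(0.0, 10.0, 2001)
    straight = np.column_stack([x, np.zeros_like(x)])
    # Veering path: same 10 m origin-to-endpoint beeline, but longer traversed path.
    veering = np.column_stack([x, 1.5 * np.sin(2.0 * np.pi * x / 5.0)])

    print(f"straight-path estimate: {leaky_estimate(straight):.2f} m")
    print(f"veering-path estimate:  {leaky_estimate(veering):.2f} m")  # smaller, despite equal beeline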
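
The moving-room study (Kelly, Beall & Loomis, Oct 2010) reports spectral analysis of body sway at the 0.2 Hz stimulus frequency. The sketch below shows one standard way to carry out such an analysis with an FFT; the sway signals are synthetic, and the 60 Hz sampling rate, 120 s duration, and amplitudes are assumptions for the example, not values from the study.

    import numpy as np

    fs = 60.0                      # assumed tracker sampling rate (Hz)
    t = np.arange(0, 120, 1 / fs)  # two minutes of sway data (assumed duration)
    rng = np.random.default_rng(0)

    # Synthetic sway: a 0.2 Hz component driven by the room plus broadband noise.
    sway_moving = 0.8 * np.sin(2 * np.pi * 0.2 * t) + rng.normal(0, 0.5, t.size)
    sway_static = rng.normal(0, 0.5, t.size)

    def amplitude_at(signal, f, fs):
        """Single-sided FFT amplitude of `signal` at frequency f."""
        spec = np.fft.rfft(signal) / signal.size * 2
        freqs = np.fft.rfftfreq(signal.size, 1 / fs)
        return np.abs(spec[np.argmin(np.abs(freqs - f))])

    ratio = amplitude_at(sway_moving, 0.2, fs) / amplitude_at(sway_static, 0.2, fs)
    print(f"sway amplitude ratio at 0.2 Hz (moving/static): {ratio:.1f}")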
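
Finally, the heading-discrimination study (Macuga, Loomis & Beall, Oct 2010) extracts 75%-correct thresholds from data collected with the method of constant stimuli. One common way to do this is to fit a cumulative Gaussian psychometric function to the response proportions and read off the 75% point; the sketch below illustrates the computation with invented response proportions, not the study's data.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # Invented example data: proportion of "heading right of target" responses per bearing (deg).
    bearings = np.array([-6.0, -3.0, -1.5, -0.5, 0.5, 1.5, 3.0, 6.0])
    p_right = np.array([0.02, 0.10, 0.25, 0.42, 0.60, 0.78, 0.92, 0.99])

    def cum_gauss(x, mu, sigma):
        """Cumulative Gaussian psychometric function."""
        return norm.cdf(x, loc=mu, scale=sigma)

    (mu, sigma), _ = curve_fit(cum_gauss, bearings, p_right, p0=[0.0, 2.0])

    # The 75%-correct discrimination threshold is the bearing at which the fit reaches 0.75.
    threshold = mu + sigma * norm.ppf(0.75)
    print(f"estimated 75% threshold: {threshold:.2f} deg")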

Publication Stats

8k Citations
328.26 Total Impact Points

Institutions

  • 1975-2014
    • University of California, Santa Barbara
      • Department of Psychological and Brain Sciences
      • Department of Geography
      • Department of Computer Science
      Santa Barbara, California, United States
  • 1996
    • University of São Paulo
      • Department of Psychobiology
      São Paulo, São Paulo, Brazil