Jack M Loomis

University of Maine, Orono, ME, United States

Publications (147) · 258.11 Total Impact

  • Source
    Multisensory Imagery: Theory & Applications, Edited by S. Lacey, R. Lawson, 01/2013: pages 131-156; Springer.
  • Source
    ABSTRACT: Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate.
    Experimental Brain Research 10/2012; · 2.22 Impact Factor
  • Source
    ABSTRACT: Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.
    Attention, Perception, & Psychophysics 05/2012; 74(6):1260-7. · 1.97 Impact Factor
  • Source
    ABSTRACT: This research uses a novel integration paradigm to investigate whether target locations read in from long-term memory (LTM) differ from perceptually encoded inputs in spatial working-memory (SWM) with respect to systematic spatial error and/or noise, and whether SWM can simultaneously encompass both of these sources. Our results provide evidence for a composite representation of space in SWM derived from both perception and LTM, albeit with a loss in spatial precision of locations retrieved from LTM. More generally, the data support the concept of a spatial image in working memory and extend its potential sources to representations retrieved from LTM.
    Spatial Cognition and Computation 01/2012; · 1.22 Impact Factor
  • Assistive Technology for Blindness and Low Vision, Edited by R. Manduchi, S. Kurniawan, 01/2012: pages 162-191; CRC Press.
  • Source
    ABSTRACT: When humans use vision to gauge the travel distance of an extended forward movement, they often underestimate the movement's extent. This underestimation can be explained by leaky path integration, an integration of the movement to obtain distance. Distance underestimation occurs because this integration is imperfect and contains a leak that increases with distance traveled. We asked human observers to estimate the distance from a starting location for visually simulated movements in a virtual environment. The movements occurred along curved paths that veered left and right around a central forward direction. In this case, the distance that has to be integrated (i.e., the beeline distance between origin and endpoint) and the distance that is traversed (the path length along the curve) are distinct. We then tested whether the leak accumulated with distance from the origin or with traversed distance along the curved path. Leaky integration along the path makes the seemingly counterintuitive prediction that the estimated origin-to-endpoint distance should decrease with increasing veering, because the length of the path over which the integration occurs increases, leading to a larger leak effect. The results matched the prediction: movements of identical origin-to-endpoint distance were judged as shorter when the path became longer. We conclude that leaky path integration from visual motion is performed along the traversed path even when a straight beeline distance is calculated.
    Experimental Brain Research 07/2011; 212(1):81-9. · 2.22 Impact Factor (a numerical sketch of the leaky-integration prediction follows this publication list)
  • Source
    ABSTRACT: In many nonhuman species, neural computations of navigational information such as position and orientation are not tied to a specific sensory modality [1, 2]. Rather, spatial signals are integrated from multiple input sources, likely leading to abstract representations of space. In contrast, the potential for abstract spatial representations in humans is not known, because most neuroscientific experiments on human navigation have focused exclusively on visual cues. Here, we tested the modality independence hypothesis with two functional magnetic resonance imaging (fMRI) experiments that characterized computations in regions implicated in processing spatial layout [3]. According to the hypothesis, such regions should be recruited for spatial computation of 3D geometric configuration, independent of a specific sensory modality. In support of this view, sighted participants showed strong activation of the parahippocampal place area (PPA) and the retrosplenial cortex (RSC) for visual and haptic exploration of information-matched scenes but not objects. Functional connectivity analyses suggested that these effects were not related to visual recoding, which was further supported by a similar preference for haptic scenes found with blind participants. Taken together, these findings establish the PPA/RSC network as critical in modality-independent spatial computations and provide important evidence for a theory of high-level abstract spatial information processing in the human brain.
    Current Biology 06/2011; 21(11):984-9. · 10.99 Impact Factor
  • Source
    ABSTRACT: This research examined whether visual and haptic map learning yield functionally equivalent spatial images in working memory, as evidenced by similar encoding bias and updating performance. In 3 experiments, participants learned 4-point routes either by seeing or feeling the maps. At test, blindfolded participants made spatial judgments about the maps from imagined perspectives that were either aligned or misaligned with the maps as represented in working memory. Results from Experiments 1 and 2 revealed a highly similar pattern of latencies and errors between visual and haptic conditions. These findings extend the well-known alignment biases for visual map learning to haptic map learning, provide further evidence of haptic updating, and most important, show that learning from the 2 modalities yields very similar performance across all conditions. Experiment 3 found the same encoding biases and updating performance with blind individuals, demonstrating that functional equivalence cannot be due to visual recoding and is consistent with an amodal hypothesis of spatial images.
    Journal of Experimental Psychology: Learning, Memory, and Cognition 02/2011; 37(3):621-34. · 3.10 Impact Factor
  • Journal of Vision 01/2010; 8(6):1043-1043.
  • N. A. Giudice, J. M. Loomis
    Journal of Vision 01/2010; 6(6):178-178.
  • Journal of Vision 01/2010; 8(6):1147-1147.
  • Source
    ABSTRACT: Participants learned circular layouts of six objects presented haptically or visually, then indicated the direction from a start target to an end target of the same or different modality (intramodal versus intermodal). When objects from the two modalities were learned separately, superior performance for intramodal trials indicated a cost of switching between modalities. When a bimodal layout intermixing modalities was learned, intra- and intermodal trials did not differ reliably. These findings indicate that a spatial image, independent of input modality, can be formed when inputs are spatially and temporally congruent, but not when modalities are temporally segregated in learning.
    Spatial Cognition and Computation 11/2009; 9(4):287-304. · 1.22 Impact Factor
  • Source
    ABSTRACT: In order to optimally characterize full-body self-motion perception during passive translations, changes in perceived location, velocity, and acceleration must be quantified in real time and with high spatial resolution. Past methods have failed to effectively measure these critical variables. Here, we introduce continuous pointing as a novel method with several advantages over previous methods. Participants point continuously to the mentally updated location of a previously viewed target during passive, full-body movement. High-precision motion-capture data of arm angle provide a measure of a participant's perceived location and, in turn, perceived velocity at every moment during a motion trajectory. In two experiments, linear movements were presented in the absence of vision by passively translating participants with a robotic wheelchair or an anthropomorphic robotic arm (MPI Motion Simulator). The movement profiles included constant-velocity trajectories, two successive movement intervals separated by a brief pause, and reversed-motion trajectories. Results indicate a steady decay in perceived velocity during constant-velocity travel and an attenuated response to mid-trial accelerations.
    Experimental Brain Research 05/2009; 195(3):429-44. · 2.22 Impact Factor
  • Source
    ABSTRACT: The extent to which actual movements and imagined movements maintain a shared internal representation has been a matter of much scientific debate. Of the studies examining such questions, few have directly compared actual full-body movements to imagined movements through space. Here we used a novel continuous pointing method to a) provide a more detailed characterization of self-motion perception during actual walking and b) compare the pattern of responding during actual walking to that which occurs during imagined walking. This continuous pointing method requires participants to view a target and continuously point towards it as they walk, or imagine walking past it along a straight, forward trajectory. By measuring changes in the pointing direction of the arm, we were able to determine participants' perceived/imagined location at each moment during the trajectory and, hence, perceived/imagined self-velocity during the entire movement. The specific pattern of pointing behaviour that was revealed during sighted walking was also observed during blind walking. Specifically, a peak in arm azimuth velocity was observed upon target passage and a strong correlation was observed between arm azimuth velocity and pointing elevation. Importantly, this characteristic pattern of pointing was not consistently observed during imagined self-motion. Overall, the spatial updating processes that occur during actual self-motion were not evidenced during imagined movement. Because of the rich description of self-motion perception afforded by continuous pointing, this method is expected to have significant implications for several research areas, including those related to motor imagery and spatial cognition and to applied fields for which mental practice techniques are common (e.g. rehabilitation and athletics).
    PLoS ONE 01/2009; 4(11):e7793. · 3.53 Impact Factor (a toy triangulation sketch of the continuous-pointing geometry follows this publication list)
  • Source
    ABSTRACT: As you move through an environment, the positions of surrounding objects relative to your body constantly change. Updating these locations is a central feature of situational awareness and readiness to act. Here, we used functional magnetic resonance imaging and a virtual environment to test how the human brain uses optic flow to monitor changing object coordinates. Only activation profiles in the precuneus and the dorsal premotor cortex (PMd) were indicative of an updating process operating on a memorized egocentric map of space. A subsequent eye movement study argued against the alternative explanation that activation in PMd could be driven by oculomotor signals. Finally, introducing a verbal response mode revealed a dissociation between the two regions, with the PMd only showing updating-related responses when participants responded by pointing. We conclude that visual spatial updating relies on the construction of updated representations in the precuneus and the context-dependent planning of motor actions in PMd.
    Nature Neuroscience 10/2008; 11(10):1223-30. · 15.25 Impact Factor
  • Source
    ABSTRACT: We report a vibrotactile version of the common n-back task used to study working memory. Subjects wore vibrotactile stimulators on three fingers of one hand, and they responded by pressing a button with the other hand whenever the current finger matched the one stimulated n items back. Experiment 1 showed a steep decline in performance as n increased from 1 to 3; each additional level of n decreased performance by 1.5 d' units on average. Experiment 2 supported a central capacity locus for the vibrotactile task by showing that it correlated strongly with an auditory analogue; both tasks were also related to standard digit span. The vibrotactile version of n-back may be particularly useful in dual-task contexts. It allows the assessment of cognitive capacity in sensory-impaired populations in which touch remains intact, and it may find use in brain-imaging studies in which vibrotactile stimuli impose a memory load.
    Behavior Research Methods 03/2008; 40(1):367-72. · 2.12 Impact Factor (a minimal n-back matching sketch follows this publication list)
  • Source
    ABSTRACT: In two experiments, we investigated the stabilizing influence of vision on human upright posture in real and virtual environments. Visual stabilization was assessed by comparing eyes-open with eyes-closed conditions while subjects attempted to maintain balance in the presence of a stable visual scene. Visual stabilization in the virtual display was reduced, as compared with real-world viewing. This difference was partially accounted for by the reduced field of view in the virtual display. When the retinal flow in the virtual display was removed by using dynamic random-dot stereograms with single-frame lifetimes (cyclopean stimuli), vision did not stabilize posture. There was also an overall larger stabilizing influence of vision when more unstable stances were adopted (e.g., one-foot, as compared with side-by-side, stance). Reducing the graphics latency of the virtual display by 63% did not increase visual stabilization in the virtual display. Other visual and psychological differences between real and virtual environments are discussed.
    Perception & Psychophysics 02/2008; 70(1):158-65. · 1.37 Impact Factor
  • Source
    ABSTRACT: Two psychophysical experiments are reported, one dealing with the visual perception of the head orientation of another person (the 'looker') and the other dealing with the perception of the looker's direction of eye gaze. The participant viewed the looker with different retinal eccentricities, ranging from foveal to far-peripheral viewing. On average, judgments of head orientation were reliable even out to the extremes of peripheral vision (90 degrees eccentricity), with better performance at the extremes when the participant was able to view the looker changing head orientation from one trial to the next. In sharp contrast, judgments of eye-gaze direction were reliable only out to 4 degrees eccentricity, signifying that the eye-gaze social signal is available to people only when they fixate near the looker's eyes. While not unexpected, this vast difference in availability of information about head direction and eye direction, both of which can serve as indicators of the looker's focus of attention, is important for understanding the dynamics of eye-gaze behavior.
    Perception 02/2008; 37(9):1443-57. · 1.31 Impact Factor
  • Source
    ABSTRACT: Four experiments investigated the conditions contributing to sensorimotor alignment effects (i.e., the advantage for spatial judgments from imagined perspectives aligned with the body). Through virtual reality technology, participants learned object locations around a room (learning room) and made spatial judgments from imagined perspectives aligned or misaligned with their actual facing direction. Sensorimotor alignment effects were found when testing occurred in the learning room but not after walking 3 m into a neighboring (novel) room. Sensorimotor alignment effects returned after returning to the learning room or after providing participants with egocentric imagery instructions in the novel room. Additionally, visual and spatial similarities between the test and learning environments were independently sufficient to cause sensorimotor alignment effects. Memory alignment effects, independent from sensorimotor alignment effects, occurred in all testing conditions. Results are interpreted in the context of two-system spatial memory theories positing separate representations to account for sensorimotor and memory alignment effects.
    Journal of Experimental Psychology: Learning, Memory, and Cognition 12/2007; 33(6):1092-107. · 3.10 Impact Factor
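
A numerical sketch of the leaky path integration prediction from the Experimental Brain Research (2011) abstract above. This is a minimal illustration, not the authors' fitted model: it assumes that each increment of beeline (origin-to-endpoint) distance is accumulated while a leak, proportional to the current estimate and applied per meter of traversed path, drains it; the leak rate is a made-up value. A veering path covering the same beeline distance is longer, so the leak acts over more path and the final estimate comes out smaller, reproducing the paper's counterintuitive result.

```python
import numpy as np

def leaky_estimate(path_xy, leak=0.05):
    """Leaky integration of origin-to-endpoint distance along a traversed path.

    Each step adds the change in beeline distance from the origin, minus a
    leak proportional to the current estimate times the step's path length.
    `leak` (per meter) is a hypothetical value, not a fitted parameter.
    """
    path_xy = np.asarray(path_xy, dtype=float)
    d_est = 0.0
    for i in range(1, len(path_xy)):
        ds_path = np.linalg.norm(path_xy[i] - path_xy[i - 1])   # traversed step length
        gain = (np.linalg.norm(path_xy[i] - path_xy[0])
                - np.linalg.norm(path_xy[i - 1] - path_xy[0]))  # beeline increment
        d_est += gain - leak * d_est * ds_path                  # leak acts per path meter
    return d_est

# Straight path vs. a veering path with the same 10 m beeline distance.
x = np.linspace(0.0, 10.0, 2000)
straight = np.column_stack([x, np.zeros_like(x)])
veering = np.column_stack([x, 1.5 * np.sin(2 * np.pi * x / 5.0)])

print(leaky_estimate(straight))  # larger estimate
print(leaky_estimate(veering))   # smaller: same beeline, longer path, bigger leak
```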
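A toy version of the triangulation behind the continuous pointing method in the two 2009 abstracts above (Experimental Brain Research; PLoS ONE). The conventions below are assumed for illustration, not taken from the papers' motion-capture pipeline: the walker moves along the +x axis, the target sits at a known offset to one side, and inverting the pointing azimuth at each sample yields the implied perceived position along the track, from which a perceived-velocity profile follows.

```python
import numpy as np

def position_from_azimuth(azimuth_rad, target_xy):
    """Invert a pointing azimuth to a perceived position on a straight track.

    Assumed conventions (illustrative): the walker moves along the +x axis
    at y = 0, `target_xy` is the target location, and `azimuth_rad` is the
    pointing direction measured from straight ahead (+x), positive toward
    the target's side of the track.
    """
    tx, ty = target_xy
    return tx - ty / np.tan(azimuth_rad)

# Target 1 m to the side of the track and 2 m down it; five sampled azimuths.
# The azimuth sweeps through 90 degrees at the moment of target passage,
# which is where azimuth velocity peaks in the papers' data.
target = (2.0, 1.0)
azimuths = np.radians([20.0, 45.0, 90.0, 135.0, 160.0])
xs = position_from_azimuth(azimuths, target)  # implied positions along the track
print(xs)               # increases monotonically; reaches x = 2 m at the 90-degree sample
print(np.gradient(xs))  # per-sample displacement, a proxy for perceived velocity
```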
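Finally, the response rule of the vibrotactile n-back task (Behavior Research Methods, 2008) is easy to state in code. A minimal sketch with the three stimulated fingers coded 0-2; the random sequence and matching rule are generic n-back, not the authors' stimulus software.

```python
import random

def nback_targets(sequence, n):
    """Indices at which the current item matches the item n positions back."""
    return [i for i in range(n, len(sequence)) if sequence[i] == sequence[i - n]]

random.seed(1)
seq = [random.randrange(3) for _ in range(20)]  # vibrated finger on each trial: 0, 1, or 2
print(seq)
print(nback_targets(seq, n=2))  # trials requiring a button press in the 2-back condition
```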

Publication Stats

5k Citations
258.11 Total Impact Points

Institutions

  • 2011–2012
    • University of Maine
      • School of Computing and Information Science
      • Department of Spatial Information Sciences and Engineering
      Orono, ME, United States
    • University of Münster
      • Institute for Psychology
      Münster, North Rhine-Westphalia, Germany
    • The University of Edinburgh
      • Centre for Cognitive and Neural Systems
      Edinburgh, Scotland, United Kingdom
  • 1981–2012
    • University of California, Santa Barbara
      • Department of Psychological and Brain Sciences
      • Department of Computer Science
      • Department of Geography
      Santa Barbara, CA, United States
  • 2002–2009
    • Carnegie Mellon University
      • Department of Psychology
      Pittsburgh, PA, United States
    • George Washington University
      • Department of Psychology
      Washington, DC, United States
  • 2008
    • Vanderbilt University
      • Department of Psychology
      Nashville, TN, United States