Journal of Experimental Psychology: Human Perception & Performance (J EXP PSYCHOL HUMAN)

Publisher: American Psychological Association

Description

The Journal of Experimental Psychology: Human Perception and Performance publishes studies on perception, control of action, and related cognitive processes. All sensory modalities and motor systems are within its purview. The focus of the journal is on empirical studies that increase theoretical understanding of human perception and performance, but machine and animal studies that reflect on human capabilities may also be published. Occasional nonempirical reports, called Observations, may also be included. These are theoretical notes, commentary, or criticism on topics pertinent to the Journal's concerns.

  • Impact factor
    3.11
  • 5-year impact
    3.26
  • Cited half-life
    0.00
  • Immediacy index
    0.52
  • Eigenfactor
    0.02
  • Article influence
    1.40
  • Website
    Journal of Experimental Psychology: Human Perception and Performance website
  • Other titles
    Journal of experimental psychology. Human perception and performance, Human perception and performance
  • ISSN
    0096-1523
  • OCLC
    2441505
  • Material type
    Periodical, Internet resource
  • Document type
    Journal / Magazine / Newspaper, Internet Resource

Publisher details

American Psychological Association

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Pre-print on a website
    • Pre-print must be labeled with date and accompanied by a statement that the paper has not (yet) been published
    • Copy of author's final peer-reviewed manuscript as accepted for publication
    • Post-print on author's website or employer's server only, after acceptance
    • Publisher copyright and source must be acknowledged
    • Must link to APA journal home page or article DOI
    • Article must include the following statement: 'This article may not exactly replicate the final version published in the APA journal. It is not the copy of record.'
    • Publisher's version/PDF cannot be used
    • APA will submit NIH author articles to PubMed Central, after author completion of form
  • Classification
    green

Publications in this journal

  • ABSTRACT: Representing the locations of tactile stimulation can involve somatotopic reference frames, in which locations are defined relative to a position on the skin surface, and also external reference frames that take into account stimulus position in external space. Locations in somatotopic and external reference frames can conflict in terms of left/right assignment when the hands are crossed or positioned outside of their typical hemispace. To investigate the spatial coding of both tactile stimuli and responses to touch, a Simon effect task, often used in the visual modality to examine questions of spatial reference frames, was deployed in the tactile modality. Participants performed the task with stimuli delivered to the hands, with the arms in crossed or uncrossed postures, and responses were produced with foot pedals. Across all 4 experiments, participants were faster on somatotopically congruent trials (e.g., left hand stimulus, left foot response) than on somatotopically incongruent trials (left hand stimulus, right foot response), regardless of arm or leg position. However, some evidence of an externally based Simon effect also appeared in 1 experiment in which arm (stimulus) and leg (response) position were both manipulated. Overall, the results demonstrate that tactile stimulus and response codes are primarily generated based on their somatotopic identity. However, stimulus and response coding based on an external reference frame can become more salient when both hands and feet can be crossed, creating a situation in which somatotopic and external representations can differ for both stimulus and response codes. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
    Journal of Experimental Psychology Human Perception & Performance 01/2015;
  • ABSTRACT: The question of what makes a good melody has interested composers, music theorists, and psychologists alike. Many of the observed principles of good "melodic continuation" involve melodic contour: the pattern of rising and falling pitch within a sequence. Previous work has shown that contour perception can extend beyond pitch to other auditory dimensions, such as brightness and loudness. Here, we show that the generalization of contour perception to nontraditional dimensions also extends to melodic expectations. In the first experiment, subjective ratings for 3-tone sequences that varied in brightness or loudness conformed to the same general contour-based expectations as pitch sequences. In the second experiment, we modified the order of melody presentation such that melodies with the same beginning were blocked together. This change produced substantively different results, but the patterns of ratings remained similar across the 3 auditory dimensions. Taken together, these results suggest that (a) certain well-known principles of melodic expectation (such as the expectation for a reversal following a skip) depend on long-term context, and (b) these expectations are not unique to the dimension of pitch and may instead reflect more general principles of perceptual organization. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
    Journal of Experimental Psychology Human Perception & Performance 11/2014;
  • ABSTRACT: Behavior is typically organized with respect to a goal to be achieved rather than the anatomical components used in doing so. Similarly, perception is typically organized with respect to a property to be perceived rather than the anatomical components used in doing so. Such task specificity and anatomical independence are manifest in perception of properties of a wielded object. In 6 experiments, we investigated whether this task specificity and anatomical independence are also manifest in perception of properties by means of a wielded object. In particular, we investigated perception of whether a surface could be stood on when the object used to explore that surface was wielded by the preferred and nonpreferred hands (Experiment 1), by 1 or both hands (Experiment 2), by different 2-handed grips (Experiment 3), and by entirely different limbs (i.e., the hand and the foot; Experiments 4-6). In general, the results show that perception reflected the action capabilities of the perceiver but was largely unaffected by the (configurations of) anatomical components used to wield the object. The results highlight the haptic system as a smart perceptual device and as a multifractal biotensegrity structure. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
    Journal of Experimental Psychology Human Perception & Performance 11/2014;
  • ABSTRACT: Continuous tasks such as baggage screening often involve selective gating of sensory information when "targets" are detected. Previous research has shown that temporal selection of behaviorally relevant information triggers changes in perception, learning, and memory. However, it is unclear whether temporal selection has broad effects on concurrent tasks. To address this question, we asked participants to view a stream of faces and encode faces of a particular gender for a later memory test. At the same time, they listened to a sequence of tones, pressing a button for tones of a specific pitch. We manipulated the timing of temporal selection such that target faces and target tones could be unrelated, perfectly correlated, or anticorrelated. Temporal selection was successful when the temporally coinciding stimuli were congruent (e.g., both were targets), but not when they were incongruent (i.e., only 1 was a target). This pattern suggests that attentional selection for separate tasks is yoked in time: when the attentional gate opens for 1 task, it also opens for the other. Temporal yoking is a unique form of dual-task interaction. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
    Journal of Experimental Psychology Human Perception & Performance 11/2014;
  • ABSTRACT: There has been much recent interest in the loss of visual short-term memories over the passage of time. According to decay theory, visual representations are gradually forgotten as time passes, reflecting a slow and steady distortion of the memory trace. However, this view is controversial, and decay effects can be explained in other ways. The present study aimed to reexamine the maintenance and loss of visual information over the short term. Decay and temporal distinctiveness models were tested using a delayed discrimination task, in which participants compared complex and novel objects over unfilled retention intervals of variable length. Experiment 1 found no significant change in the accuracy of visual memory from 2 to 6 s, but the gap separating trials reliably influenced task performance. Experiment 2 found evidence for information loss at a 10-s retention interval, but temporally separating trials restored the fidelity of visual memory, possibly because temporally isolated representations are distinct from older memory traces. In conclusion, visual representations lose accuracy at some point after 6 s, but only within temporally crowded contexts. These findings highlight the importance of temporal distinctiveness within visual short-term memory. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
    Journal of Experimental Psychology Human Perception & Performance 10/2014;
  • ABSTRACT: Motor-performance-enhancing effects of long final fixations before movement initiation, a phenomenon called quiet eye (QE), have repeatedly been demonstrated. Drawing on the information-processing framework, it is assumed that the QE supports information processing, as revealed by the close link between QE duration and task demands concerning, in particular, response selection and movement parameterization. However, the question remains whether the suggested mechanism also holds for processes related to stimulus identification. Thus, in a series of 2 experiments, performance in a targeting task was tested as a function of experimentally manipulated visual processing demands as well as experimentally manipulated QE durations. The results support the suggested link, because a performance-enhancing QE effect was found only under increased visual processing demands: Whereas QE duration did not affect performance as long as positional information was preserved (Experiment 1), in the comparison of full versus no target visibility, QE efficiency turned out to depend on information-processing time once the interval fell below a certain threshold (Experiment 2). Thus, the results contradict alternative explanations of QE effects, such as posture-based accounts, and support the assumption that the crucial mechanism behind the QE phenomenon is rooted in the cognitive domain. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
    Journal of Experimental Psychology Human Perception & Performance 10/2014;
  • ABSTRACT: The modulating effect of emotional expression on the rewarding nature of attractive and nonattractive female faces in heterosexual men was explored in a motivated viewing paradigm. This paradigm, which is an indicator of neural reward, requires the viewer to expend effort to maintain or reduce image-viewing times. Males worked to extend the viewing time for happy and neutral attractive faces but to reduce the viewing time for attractive angry faces. Attractive angry faces were rated as more aesthetically pleasing than the nonattractive faces; however, the males worked to reduce their viewing time to a level comparable with that for the nonattractive neutral and happy faces. Therefore, the addition of an angry expression to an otherwise attractive face renders it unrewarding and aversive to potential mates. Mildly happy expressions on the nonattractive faces did little to improve their attractiveness or reward potential, with males working to reduce viewing time for all nonattractive faces. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
    Journal of Experimental Psychology Human Perception & Performance 10/2014;
  • ABSTRACT: Emerging evidence has revealed that visual processing of objects near the hands is altered. The present study shows that the visuomotor Simon effect is greater when the hands are proximal to the stimuli than when the hands are far from the stimuli, indicating stronger spatial stimulus-response mapping near the hands. The visuomotor Simon effect is robustly enhanced near the hands even when hand visibility and stimulus-response axis similarity are controlled. However, the semantic Simon effect with location words is not modulated by hand-stimulus proximity. Thus, consistent with the dimensional overlap model and the known features of bimodal visuotactile neurons, hand-stimulus proximity enhances spatial stimulus-response mapping but has no effect on the semantic processing of location words. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
    Journal of Experimental Psychology Human Perception & Performance 10/2014;
  • ABSTRACT: Many perceptual and cognitive tasks permit or require the integrated cooperation of specialized sensory channels, detectors, or other functionally separate units. In compound detection or discrimination tasks, 1 prominent general mechanism for modeling the combination of the outputs of different processing channels is probability summation. The classical example is the binocular summation model of Pirenne (1943), according to which a weak visual stimulus is detected if at least 1 of the 2 eyes detects it; as we review briefly, exactly the same reasoning is applied in numerous other fields. It is generally accepted that this mechanism necessarily predicts performance based on 2 (or more) channels to be superior to single-channel performance, because 2 separate channels provide "2 chances" to succeed at the task. We argue that this reasoning is misleading because it neglects the increased opportunity with 2 channels not just for hits but also for false alarms, and that there may well be no redundancy gain at all when performance is measured in terms of receiver operating characteristic curves. We illustrate and support these arguments with a visual detection experiment involving different spatial uncertainty conditions. Our arguments and findings have important implications for all models that, in one way or another, rest on, or incorporate, the notion of probability summation for the analysis of detection tasks, 2-alternative forced-choice tasks, and psychometric functions. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
    Journal of Experimental Psychology Human Perception & Performance 10/2014; 40(5):2091-2100.
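    A minimal numerical sketch (in Python, not the authors' model) of the probability-summation argument above: with 2 independent channels and an "at least one channel responds" rule, hits and false alarms both become more likely, so a higher hit rate alone does not establish a redundancy gain. The channel rates below are illustrative values, not data from the study.

      def prob_summation(p1, p2):
          """Probability that at least one of two independent channels responds."""
          return 1.0 - (1.0 - p1) * (1.0 - p2)

      single_hit, single_fa = 0.60, 0.10                 # one channel: hit and false-alarm rate (illustrative)
      dual_hit = prob_summation(single_hit, single_hit)  # ~0.84
      dual_fa = prob_summation(single_fa, single_fa)     # ~0.19

      print(f"one channel : hit={single_hit:.2f}  false alarm={single_fa:.2f}")
      print(f"two channels: hit={dual_hit:.2f}  false alarm={dual_fa:.2f}")
      # Both rates rise together; whether performance truly improves has to be judged
      # on the full ROC curve (hit rate against false-alarm rate), as the authors argue.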
  • ABSTRACT: Research suggests that visual short-term memory (VSTM) has both an item capacity, of around 4 items, and an information capacity. We characterize the information capacity limits of VSTM using a task in which observers discriminated the orientation of a single probed item in displays consisting of 1, 2, 3, or 4 orthogonally oriented Gabor patch stimuli that were presented in noise for 50 ms, 100 ms, 150 ms, or 200 ms. The observed capacity limitations are well described by a sample-size model, which predicts invariance of Σ_i (d'_i)² across displays of different sizes and linearity of (d'_i)² across displays of different durations. Performance was the same for simultaneously and sequentially presented displays, which implicates VSTM as the locus of the observed invariance and rules out explanations that ascribe it to divided attention or stimulus encoding. The invariance of Σ_i (d'_i)² is predicted by the competitive interaction theory of Smith and Sewell (2013), which attributes it to the normalization of VSTM trace strengths arising from competition among stimuli entering VSTM. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
    Journal of Experimental Psychology Human Perception & Performance 09/2014;
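    An illustrative sketch (assumed parameterization, not the authors' fitted model) of the sample-size prediction above: if a fixed pool of samples is shared equally among the items in a display, each item's squared sensitivity is proportional to its share, so the sum of (d'_i)² stays constant across display sizes while scaling linearly with exposure duration.

      def squared_dprime(total_samples, set_size, k=0.01):
          """(d'_i)^2 for one item, assumed proportional to the samples allocated to it."""
          return k * total_samples / set_size

      # total_samples values are hypothetical, chosen to grow linearly with duration
      for duration_ms, total_samples in [(50, 100), (100, 200), (150, 300), (200, 400)]:
          for set_size in (1, 2, 3, 4):
              per_item = squared_dprime(total_samples, set_size)
              print(f"{duration_ms:3d} ms, {set_size} items: "
                    f"(d'_i)^2 = {per_item:.2f}, sum = {set_size * per_item:.2f}")
      # The summed value depends only on duration (linearly), not on set size.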
  • ABSTRACT: An enduring question in visual attention research is whether unattended objects are subject to perceptual processing. The traditional view suggests that, whereas focal attention is required for processing complex features or for individuating objects, it is not required for detecting basic features. However, other models suggest that detecting basic features may be no different from object identification and may also require focal attention. In the present study, we approach this problem by measuring the effect of attentional capture in simple and compound visual search tasks. To ensure that measurements did not reflect strategic components of the tasks, we measured accuracy with brief displays. Results show that attentional capture influenced compound but not basic feature searches, suggesting a distinction between the attentional requirements of the 2 tasks. We discuss our findings, together with recent results on top-down word-cue effects and dimension-specific intertrial effects, in terms of the dual-route account of visual search, which holds that the task being completed determines whether search is based on attentive or preattentive mechanisms. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
    Journal of Experimental Psychology Human Perception & Performance 09/2014;
  • ABSTRACT: Studies of the perception of motion in three-dimensional scenes have provided extensive information about the effects of changes in the size, speed, and disparity of an object's image on the perception of the object's trajectory. The present study demonstrates that this perception is not determined primarily by the object's motion but by the shape of the background against which this motion is displayed. The effect of a scene background on judgments of the trajectory of a moving object was examined in 2 experiments with 33 observers. In the first experiment, observers judged whether the trajectory was concave or convex. In the second experiment, observers judged which of 2 displays, differing in curvature of the motion path and curvature of the background, depicted the more curved motion path. Judgments of sign of curvature and judgments of relative magnitude of curvature were determined almost entirely by the background curvature. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
    Journal of Experimental Psychology Human Perception & Performance 09/2014;
  • ABSTRACT: Two eye-tracking experiments were conducted in which the manual response mode typically used in lexical decision tasks (LDTs) was replaced with an eye-movement response through a sequence of 3 words. This ocular LDT combines the explicit control of task goals found in LDTs with the highly practiced ocular response used in reading text. In Experiment 1, forward saccades indicated an affirmative lexical decision (LD) on each word in the triplet. In Experiment 2, LD responses were delayed until all 3 letter strings had been read. The goal of the study was to evaluate the contribution of task goals and response mode to semantic priming. Semantic priming is very robust in tasks that involve recognition of words in isolation, such as the LDT, but limited during text reading, as measured using eye movements. Gaze durations in both experiments showed robust semantic priming even though ocular response times were much shorter than manual LDs for the same words in the English Lexicon Project. Ex-Gaussian distribution fits revealed that the priming effect was concentrated in estimates of tau (τ), meaning that priming was most pronounced in the slow tail of the distribution. This pattern reflects differential use of the prime information, which may be recruited more heavily when the LD is difficult, as indicated by longer response times. Compared with manual LD responses, ocular LDs thus provide a more sensitive measure of this task-related influence on word recognition. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
    Journal of Experimental Psychology Human Perception & Performance 09/2014;
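    A minimal sketch (not the authors' analysis code) of the kind of ex-Gaussian fit mentioned above: an ex-Gaussian is a normal (mu, sigma) convolved with an exponential (tau), and a priming effect confined to tau shows up in the slow tail rather than as a shift of the whole distribution. The parameter values are hypothetical; scipy.stats.exponnorm parameterizes the shape as K = tau / sigma.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      mu, sigma, tau = 0.500, 0.060, 0.150   # hypothetical RT parameters, in seconds
      rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)  # simulated response times

      K_hat, loc_hat, scale_hat = stats.exponnorm.fit(rts)
      tau_hat = K_hat * scale_hat            # recover tau from the fitted shape parameter
      print(f"mu ~ {loc_hat:.3f}   sigma ~ {scale_hat:.3f}   tau ~ {tau_hat:.3f}")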