Visual Cognition (VIS COGN)
This journal publishes high-quality research concerned with all aspects of visual cognition. This includes, for example, studies of visual object and face recognition (from edge extraction to accessing stored knowledge representations), texture and surface perception, perceptual organization, dynamic aspects of vision, visual attention, long-term and short-term visual memory, visual imagery, visual word recognition, eye movement control in reading and scene perception, and context effects in reading. Papers tackling these topics from a range of perspectives are accepted, including experimental studies of normal visual cognition and developmental, neuropsychological, psychophysiological, neuroanatomical, and computational studies. By bringing together work within the field, Visual Cognition provides researchers with a unique venue for papers cutting across traditional research boundaries. Papers that make a novel theoretical contribution to the literature are especially welcome. The journal is interdisciplinary, drawing on the research of developmental psychologists, experimental psychologists, cognitive scientists, and neuropsychologists, as well as cognitive psychologists. In addition to full-length papers making theoretical contributions to the literature, short manuscripts (for instance, those containing a single experiment or a commentary on other papers) can be considered for publication, provided they make a useful empirical or theoretical impact. Similarly, papers presenting only a theoretical analysis or a review are welcome, provided they are original and of a high standard of scholarship.
- Impact factor: 2.05
- Website: Visual Cognition website
Other titles: Visual cognition (Online)
Material type: Document, Periodical, Internet resource
Document type: Internet Resource, Computer File, Journal / Magazine / Newspaper
- Author can archive a pre-print version
- Author cannot archive a post-print version
- 12 month embargo for STM, Behavioural Science and Public Health Journals
- 18 month embargo for SSH journals
- Some individual journals may have policies prohibiting pre-print archiving
- Pre-print on author's own website, Institutional or Subject Repository
- Post-print on author's own website, Institutional or Subject Repository
- Publisher's version/PDF cannot be used
- On a non-profit server
- Published source must be acknowledged
- Must link to publisher version
- Set statements to accompany deposits (see policy)
- Publisher will deposit to PMC on behalf of NIH authors.
- STM: Science, Technology and Medicine
- SSH: Social Science and Humanities
- 'Taylor & Francis (Psychology Press)' is an imprint of 'Taylor & Francis'
Publications in this journal
Article: No difference in flanker effects for sad and happy schematic faces: A parametric study of temporal parameters
ABSTRACT: Flanker effects with schematic faces have been reported to be larger for happy than for sad faces, allegedly because sad faces restrict the focus of spatial attention. We report a parametric study that fails to replicate this effect. Participants performed speeded identifications of happy or sad faces accompanied by compatible or incompatible flanker faces. We varied the temporal interval between presentation of central target and flanker faces because differential attentional effects of happy and sad faces should critically depend on this variable. In contradiction to the literature, we found large compatibility effects that were modulated by temporal parameters, but not by the emotional valence of the faces, and not in a way consistent with differential attentional modulation. We conclude that previously reported asymmetries in flanker tasks with schematic faces are not due to changes in attentional scope (mediated by emotion or otherwise), but rather to perceptual low-level differences. Visual Cognition, 04/2013.
ABSTRACT: Does the same basic-level advantage commonly observed in the categorization literature also hold for targets in a search task? We answered this question by first conducting a category verification task to define a set of categories showing a standard basic-level advantage, which we then used as stimuli in a search experiment. Participants were cued with a picture preview of the target or its category name at either superordinate, basic, or subordinate levels, then shown a target-present/absent search display. Although search guidance and target verification were best using pictorial cues, the effectiveness of the categorical cues depended on the hierarchical level. Search guidance was best for the specific subordinate-level cues, while target verification showed a standard basic-level advantage. These findings demonstrate different hierarchical advantages for guidance and verification in categorical search. We interpret these results as evidence for a common target representation underlying categorical search guidance and verification. Visual Cognition, 12/2012; 20(10):1153-1163.
Article: When less is more: Line-drawings lead to greater boundary extension than color photographs.
ABSTRACT: Is boundary extension (false memory beyond the edges of the view; Intraub & Richardson, 1989) determined solely by the schematic structure of the view, or does the quality of the pictorial information impact this error? To examine this, color photograph or line-drawing versions of 12 multi-object scenes (Experiment 1: N=64) and 16 single-object scenes (Experiment 2: N=64) were presented for 14 s each. At test, the same pictures were each rated as being the "same", "closer-up", or "farther away" (5-pt scale). Although the layout, the scope of the view, the distance of the main objects to the edges, the background space, and the gist of the scenes were held constant, line-drawings yielded greater boundary extension than did their photographic counterparts for multi-object (Experiment 1) and single-object (Experiment 2) scenes. Results are discussed in the context of the multisource model and its implications for the study of scene perception and memory. Visual Cognition, 08/2012; 20(7):815-824.
Article: Direct control of fixation times in scene viewing: Evidence from analysis of the distribution of first fixation duration
ABSTRACT: Participants' eye movements were monitored in two scene viewing experiments that manipulated the task-relevance of scene stimuli and their availability for extrafoveal processing. In both experiments, participants viewed arrays containing eight scenes drawn from two categories. The arrays of scenes were either viewed freely (Free Viewing) or in a gaze-contingent viewing mode where extrafoveal preview of the scenes was restricted (No Preview). In Experiment 1a, participants memorized the scenes from one category that was designated as relevant, and in Experiment 1b, participants chose their preferred scene from within the relevant category. We examined first fixations on scenes from the relevant category compared to the irrelevant category (Experiments 1a and 1b), and those on the chosen scene compared to other scenes not chosen within the relevant category (Experiment 1b). A survival analysis was used to estimate the first discernible influence of task-relevance on the distribution of first-fixation durations. In the Free Viewing condition in Experiment 1a, the influence of task relevance occurred as early as 81 ms from the start of fixation. In contrast, the corresponding value in the No Preview condition was 254 ms, demonstrating the crucial role of extrafoveal processing in enabling direct control of fixation durations in scene viewing. First fixation durations were also influenced by whether or not the scene was eventually chosen (Experiment 1b), but this effect occurred later and affected fewer fixations than the effect of scene category, indicating that the time course of scene processing is an important variable mediating direct control of fixation durations. Visual Cognition, 06/2012; 20(6):605-626.
Article: The utility of modeling word identification from visual input within models of eye movements in reading.
ABSTRACT: Decades of empirical work have shown that a range of eye movement phenomena in reading are sensitive to the details of the process of word identification. Despite this, major models of eye movement control in reading do not explicitly model word identification from visual input. This paper presents an argument for developing models of eye movements that do include detailed models of word identification. Specifically, we argue that insights into eye movement behavior can be gained by understanding which phenomena naturally arise from an account in which the eyes move for efficient word identification, and that one important use of such models is to test which eye movement phenomena can be understood this way. As an extended case study, we present evidence from an extension of a previous model of eye movement control in reading that does explicitly model word identification from visual input, Mr. Chips (Legge, Klitz, & Tjan, 1997), to test two proposals for the effect of using linguistic context on reading efficiency. Visual Cognition, 04/2012; 20(4-5):422-456.
ABSTRACT: The visual system rapidly represents the mean size of sets of objects (Ariely, 2001). Here, we investigated whether mean size is explicitly encoded by the visual system, along a single dimension like texture, numerosity, and other visual dimensions susceptible to adaptation. Observers adapted to two sets of dots with different mean sizes, presented simultaneously in opposite visual fields. After adaptation, two test patches replaced the adapting dot sets, and participants judged which test appeared to have the larger average dot diameter. They generally perceived the test that replaced the smaller mean size adapting set as being larger than the test that replaced the larger adapting set. This differential aftereffect held for single test dots (Experiment 2) and high-pass filtered displays (Experiment 3), and changed systematically as a function of the variance of the adapting dot sets (Experiment 4), providing additional support that mean size is adaptable, and therefore an explicitly encoded dimension of visual scenes. Visual Cognition, 02/2012; 20(2):211-231.
Article: Heuristics and Criterion Setting during Selective Encoding in Visual Decision-Making: Evidence from Eye Movements.
ABSTRACT: When making a decision, people spend longer looking at the option they ultimately choose compared to other options, termed the gaze bias effect, even during their first encounter with the options (Glaholt & Reingold, 2009a, 2009b; Schotter, Berry, McKenzie, & Rayner, 2010). Schotter et al. (2010) suggested that this is because people selectively encode decision-relevant information about the options, on-line during the first encounter with them. To extend their findings and test this claim, we recorded subjects' eye movements as they made judgments about pairs of images (i.e., which one was taken more recently or which one was taken longer ago). We manipulated whether both images were presented in the same color content (e.g., both in color or both in black-and-white) or whether they differed in color content, and the extent to which color content was a reliable cue to the relative recentness of the images. We found that the magnitude of the gaze bias effect decreased when the color content cue was not reliable during the first encounter with the images, but found no modulation of the gaze bias effect in the remaining time on the trial. These data suggest people do selectively encode decision-relevant information on-line. Visual Cognition, 01/2012; 20(9):1110-1129.
ABSTRACT: Here we study the predictability of eye movements when viewing high-resolution natural videos. We use three recently published gaze data sets that contain a wide range of footage, from scenes of almost still-life character to professionally made, fast-paced advertisements and movie trailers. Inter-subject gaze variability differs significantly between data sets, with variability being lowest for the professional movies. We then evaluate three state-of-the-art saliency models on these data sets. A model that is based on the invariants of the structure tensor and that combines very generic, sparse video representations with machine learning techniques outperforms the two reference models; performance is further improved for two data sets when the model is extended to a perceptually inspired colour space. Finally, a combined analysis of gaze variability and predictability shows that eye movements on the professionally made movies are the most coherent (due to implicit gaze-guidance strategies of the movie directors), yet the least predictable (presumably due to the frequent cuts). Our results highlight the need for standardized benchmarks to comparatively evaluate eye movement prediction algorithms. Visual Cognition, 01/2012; 20(4-5):495-514.
Article: TAM: Explaining off-object fixations and central fixation tendencies as effects of population averaging during search.
ABSTRACT: Understanding how patterns are selected for both recognition and action, in the form of an eye movement, is essential to understanding the mechanisms of visual search. It is argued that selecting a pattern for fixation is time consuming, requiring the pruning of a population of possible saccade vectors to isolate the specific movement to the potential target. To support this position, two experiments are reported showing evidence for off-object fixations, where fixations land between objects rather than directly on objects, and central fixations, where initial saccades land near the center of scenes. Both behaviors were modeled successfully using TAM (Target Acquisition Model; Zelinsky, 2008). TAM interprets these behaviors as expressions of population averaging occurring at different times during saccade target selection. A large population early during search results in the averaging of the entire scene and a central fixation; a smaller population later during search results in averaging between groups of objects and off-object fixations. Visual Cognition, 01/2012; 20(4-5):515-545.
Article: Eye movements in reading versus nonreading tasks: Using E-Z Reader to understand the role of word/stimulus familiarity.
ABSTRACT: In this article, we extend our previous work (Reichle, Pollatsek, & Rayner, 2012) using the principles of the E-Z Reader model to examine the factors that determine when and where the eyes move in both reading and non-reading tasks, and in particular the role that word/stimulus familiarity plays in determining when the eyes move from one word/stimulus to the next. In doing this, we first provide a brief overview of E-Z Reader, including its assumption that word familiarity is the "engine" driving eye movements during reading. We then review the theoretical considerations that motivated this assumption, as well as recent empirical evidence supporting its validity. We also report the results of three new simulations that were intended to demonstrate the utility of the familiarity check in three tasks: (1) reading; (2) searching for a target word embedded in text; and (3) searching for the letter O in linear arrays of Landolt Cs. The results of these simulations suggest that the familiarity check always improves task efficiency by speeding its rate of performance. We provide several arguments as to why this conclusion is not likely to be true for the two non-reading tasks, and in the final section of the paper, we provide a fourth simulation to test the hypothesis that problems associated with the misidentification of words may also curtail the overly liberal use of word familiarity. Visual Cognition, 01/2012; 20(4-5):360-390.
ABSTRACT: We used the change blindness paradigm of Landman, Spekreijse, and Lamme (2003) to measure the effect of cues on the ability to detect changes between two presentations of an array of eight rectangles separated by an interstimulus interval (ISI). Next, we measured the ability to detect sameness as the target rectangle remained the same when all of the others changed orientation. We were surprised to find no difference between change-detection and same-detection performance. These results (1) are consistent with the notion that some kind of internal representation was cued during the ISI, (2) militate against the recruitment of grouping strategies and/or forming a Gestalt, and (3) imply that under conditions facilitated by cues, same-detection performance can be as good as change-detection performance. Visual Cognition, 09/2011; 19(8):973-982.
Data provided are for informational purposes only. Although carefully collected, accuracy cannot be guaranteed. The impact factor shown is a rough estimate and does not reflect the actual current impact factor. Publisher conditions are provided by RoMEO. Differing provisions from the publisher's actual policy or licence agreement may be applicable.