Article

Search superiority: Goal-directed attentional allocation creates more reliable incidental identity and location memory than explicit encoding in naturalistic virtual environments

Authors:
Jason Helbing, Dejan Draschkow, and Melissa Le-Hoa Võ

Abstract

We use representations and expectations formed during life-long learning to support attentional allocation and perception. In comparison to traditional laboratory investigations, real-world memory formation is usually achieved without explicit instruction, on the fly, as a by-product of natural interactions with our environment. Understanding this process and the quality of naturally formed representations is critical to understanding how memory is used to guide attention and perception. Utilizing immersive, navigable, and realistic virtual environments, we investigated incidentally generated memory representations by comparing them to memories for items that were explicitly memorized. Participants either searched for objects embedded in realistic indoor environments or explicitly memorized them for follow-up identity and location memory tests. We show for the first time that memory for the identity of naturalistic objects and their location in 3D space is higher after incidental encoding than after explicit memorization, even though the subsequent memory tests came as a surprise to participants. Relating gaze behavior to memory performance revealed that encoding time was more predictive of subsequent memory when participants explicitly memorized an item, compared to incidentally encoding it. Our results suggest that the active nature of guiding attentional allocation during proactive behavior allows for behaviorally optimal formation and utilization of representations. This highlights the importance of investigating cognition under ecologically valid conditions and shows that understanding the most natural processes for encoding and maintaining information is critical for understanding adaptive behavior.


... In real life, we more commonly complete goal-directed behavior, such as setting the table, during which the location and identity representations of the surrounding objects are generated "on the fly." Recent work has shown that the representations generated through natural behavior are more reliable than those generated through explicit memorization (Draschkow, Wolfe, & Võ, 2014; Helbing, Draschkow, & Võ, 2020). ...
... In parallel with the previous sections, we emphasize the importance of studying active natural behavior (Foulsham et al., 2011; Malcolm et al., 2016; Tatler, 2014) and how VLTMs are generated as a natural by-product of interactions with the environment (Draschkow & Võ, 2017; Helbing et al., 2020), as these representations support seamless everyday activities. In comparison to memory investigations in which memorization is the explicit task, during ecological behavior it is not necessary to constantly instruct ourselves to remember everything in our surroundings. ...
... Search within naturalistic images created more robust memories for the identity of target objects than representations formed as a result of explicit memorization (Draschkow et al., 2014; Josephs, Draschkow, Wolfe, & Võ, 2016). During immersive searches in virtual reality, this search superiority even leads to more reliable incidentally generated spatial representations when compared to memories formed under explicit instruction to memorize (Helbing et al., 2020). Critically, incidental encoding seems to strongly rely on the availability of meaningful scene semantics in the stimulus materials used (Draschkow et al., 2014; Võ et al., 2019). ...
Article
Full-text available
Research within visual cognition has made tremendous strides in uncovering the basic operating characteristics of the visual system by reducing the complexity of natural vision to artificial but well-controlled experimental tasks and stimuli. This reductionist approach has, for example, been used to assess the basic limitations of visual attention, visual working memory (VWM) capacity, and the fidelity of visual long-term memory (VLTM). The assessment of these limits is usually made in a pure sense, irrespective of goals, actions, and priors. While it is important to map out the bottlenecks our visual system faces, we focus here on selected examples of how such limitations can be overcome. Recent findings suggest that during more natural tasks, capacity may be higher than reductionist research suggests and that separable systems subserve different actions, such as reaching and looking, which might provide important insights about how pure attentional or memory limitations could be circumvented. We also review evidence suggesting that the closer we get to naturalistic behavior, the more we encounter implicit learning mechanisms that operate "for free" and "on the fly." These mechanisms provide a surprisingly rich visual experience, which can support capacity-limited systems. We speculate about whether natural tasks may yield different estimates of the limitations of VWM, VLTM, and attention, and propose that capacity measurements should also pass the real-world test within naturalistic frameworks. Our review highlights various approaches for this and suggests that our understanding of visual cognition will benefit from incorporating the complexities of real-world cognition in experimental approaches.
... Visual Information. During attention allocation in a dynamic and complex scene, relevant anchor objects (those with a spatial relationship to the target object) can guide attention, leading to faster reaction times, less scene coverage, and less time between fixating the anchor and the target object [54, 24, 2]. Therefore, we need to encode each frame of a given video to extract the target and non-target features an agent needs in order to effectively select the next fixation locations. Next, we describe in detail how this encoding is done (see Figure. ...
... We recruited 20 participants (5 female and 15 male, ages 22–39) with at least three years of driving experience (mean = 9.7 years, SD = 5.8). ...
Preprint
Full-text available
Inspired by human visual attention, we propose a novel inverse reinforcement learning formulation using Maximum Entropy Deep Inverse Reinforcement Learning (MEDIRL) for predicting the visual attention of drivers in accident-prone situations. MEDIRL predicts fixation locations that lead to maximal rewards by learning a task-sensitive reward function from eye fixation patterns recorded from attentive drivers. Additionally, we introduce EyeCar, a new driver-attention dataset for accident-prone situations. We conduct comprehensive experiments to evaluate our proposed model on three common benchmarks (DR(eye)VE, BDD-A, and DADA-2000) and our EyeCar dataset. Results indicate that MEDIRL outperforms existing models for predicting attention and achieves state-of-the-art performance. We present extensive ablation studies to provide more insights into different features of our proposed model.
... The proportions of the rooms' content were similar to real-world scenes; each room was populated with 36 unique objects: 8 anchor objects and 28 local objects, of which 6 were selected as target objects. For a description of the scenes, please refer to Helbing et al. (2020). Out of the 16 rooms, one living room was set aside for participants to train in and did not appear after the training phase, leaving 15 rooms for the actual task. ...
... The authors thank Jason Helbing for the construction of the complex, indoor scenes (Helbing et al., 2020) that were used in this study. ...
Article
Full-text available
Visual search in natural scenes is a complex task relying on peripheral vision to detect potential targets and central vision to verify them. This segregation of the visual fields has been established primarily by on-screen experiments. We conducted a gaze-contingent experiment in virtual reality in order to test how the perceived roles of central and peripheral vision translate to more natural settings. The use of everyday scenes in virtual reality allowed us to study visual attention by implementing a fairly ecological protocol that cannot be implemented in the real world. Central or peripheral vision was masked during visual search, with target objects selected according to scene semantic rules. Analyzing the resulting search behavior, we found that target objects that were not spatially constrained to a probable location within the scene impacted search measures negatively. Our results diverge from on-screen studies in that search performance was only slightly affected by central vision loss. In particular, a central mask did not impact verification times when the target was grammatically constrained to an anchor object. Our findings demonstrate that the role of central vision (up to 6 degrees of eccentricity) in identifying objects in natural scenes seems to be minor, while the role of peripheral preprocessing of targets in immersive real-world searches may have been underestimated by on-screen experiments.
... Behavioral data were analyzed in R (R Core Team, 2018) using the approach described in Helbing et al. (2020) and Draschkow and Võ (2017). Differences in analysis procedures between experiments are highlighted in the corresponding sections. ...
Article
Full-text available
Visual search is a fundamental element of human behavior and is predominantly studied in a laboratory setting using static displays. However, real-life search is often an extended process taking place in dynamic environments. We have designed a dynamic-search task in order to incorporate the temporal dimension into visual search. Using this task, we tested how participants learn and utilize spatiotemporal regularities embedded within the environment to guide performance. Participants searched for eight instances of a target that faded in and out of a display containing similarly transient distractors. In each trial, four of the eight targets appeared in a temporally predictable fashion with one target appearing in each of four spatially separated quadrants. The other four targets were spatially and temporally unpredictable. Participants' performance was significantly better for spatiotemporally predictable compared to unpredictable targets (Experiments 1-4). The effects were reliable over different patterns of spatiotemporal predictability (Experiment 2) and primarily reflected long-term learning over trials (Experiments 3, 4), although single-trial priming effects also contributed (Experiment 4). Eye-movement recordings (Experiment 1) revealed that spatiotemporal regularities guide attention proactively and dynamically. Taken together, our results show that regularities across both space and time can guide visual search, and this guidance can primarily be attributed to robust long-term representations of these regularities.
... However, thanks to ever-improving display technology, decreasing costs, and ease of use, VR systems have in recent years become a research tool in many fields. Besides the entertainment market, this includes highly regulated fields like medicine [e.g., dentistry (Huang et al., 2018), education and training (Bernardo, 2017; Izard et al., 2018), simulation, diagnosis and rehabilitation of visual impairments (Baheux et al., 2005; Jones et al., 2020)] and psychotherapy (e.g., autism therapy: Georgescu et al., 2014; Lahiri et al., 2015; fear and anxiety disorders: Hong et al., 2017; Kim et al., 2018; Matthis et al., 2018), as well as areas directly relevant to psychophysical research, such as attentional allocation (Helbing et al., 2020). As fears of long-term negative effects of VR use have so far not been confirmed (e.g., Turnbull and Phillips, 2017), and recent VR goggles approach photorealistic capabilities while becoming more and more comfortable to wear, we are now in a position to ask to what extent an HMD can be used as a proxy for a real-world setting in the context of gaze tracking, a question that has previously been addressed only in a limited scope. ...
Article
Full-text available
How vision guides gaze in realistic settings has been researched for decades. Human gaze behavior is typically measured in laboratory settings that are well controlled but feature-reduced and movement-constrained, in sharp contrast to real-life gaze control that combines eye, head, and body movements. Previous real-world research has shown environmental factors such as terrain difficulty to affect gaze; however, real-world settings are difficult to control or replicate. Virtual reality (VR) offers the experimental control of a laboratory, yet approximates the freedom and visual complexity of the real world (RW). We measured gaze data in 8 healthy young adults during walking in the RW and simulated locomotion in VR. Participants walked along a pre-defined path inside an office building, which included different terrains such as long corridors and flights of stairs. In VR, participants followed the same path in a detailed virtual reconstruction of the building. We devised a novel hybrid control strategy for movement in VR: participants did not actually translate; forward movements were controlled by a hand-held device, while rotational movements were executed physically and transferred to the VR. We found significant effects of terrain type (flat corridor, staircase up, and staircase down) on gaze direction, on the spatial spread of gaze direction, and on the angular distribution of gaze-direction changes. The factor world (RW and VR) affected the angular distribution of gaze-direction changes, saccade frequency, and head-centered vertical gaze direction. The latter effect vanished when referencing gaze to a world-fixed coordinate system, and was likely due to specifics of headset placement, which cannot confound any other analyzed measure. Importantly, we did not observe a significant interaction between the factors world and terrain for any of the tested measures. This indicates that differences between terrain types are not modulated by the world. The overall dwell time on navigational markers did not differ between worlds. The similar dependence of gaze behavior on terrain in the RW and in VR indicates that our VR captures real-world constraints remarkably well. High-fidelity VR combined with naturalistic movement control therefore has the potential to narrow the gap between the experimental control of a lab and ecologically valid settings.
... As immersive environments aim to enhance user experience in gallery and museum settings, many exploratory studies have started to investigate visitors' experience and the feasibility of these VR applications (Hoang and Cox 2017; Petrelli 2019; Parker and Saker 2020; Gulhan et al. 2021). Experiments have so far mostly focused on the general cognitive implications of using these environments, for example, the effect of crowd movement on navigation decisions in VR (Zhao et al. 2020), mental imagery and eye movements in VR (Chiquet et al. 2020), visual search in 3D scenes (Helbing et al. 2020), replication of findings from a lab-based inattentional blindness paradigm in VR (Schöne et al. 2021), or episodic memory in virtual museum rooms (van Helvoort et al. 2020). Experimental aesthetics research in VR remains to be explored. ...
Article
Full-text available
Empirical aesthetics is beginning to branch off from conventional laboratory-based studies, leading to in-situ, immersive, often more accessible experiments. Here, we explored different types of aesthetic judgments of three-dimensional artworks in two contexts: virtual reality (VR), aiming for an immersive experience, and an online setting, aiming for an accessible setup for a remote audience. Following a pilot experiment conducted to select a set of 3D artworks, in the first experiment participants freely engaged with virtual artworks via an eye-tracking-enabled VR headset and provided evaluations based on subjective measures of aesthetic experience, such as ratings of liking, novelty, complexity, and perceived viewing duration; objective viewing duration was also recorded. Results showed positive, linear, and mostly moderate correlations between liking and the other perceived judgment attributes. Supplementary eye-tracking data showed a range of viewing strategies and variation in viewing durations between participants and artworks. Results of the second experiment, adapted as a short online follow-up, showed converging evidence on correlations between the different aspects contributing to aesthetic judgments and suggested similarity of judgment strategies across contexts. In both settings, participants provided further insights via exit questionnaires. We speculate that both VR and online settings offer ecologically valid experimental contexts, create immersive visual arts experiences, and enhance accessibility to cultural heritage.
... Objects undeniably define the scene (Bar, 2004; Võ, Boettcher, & Draschkow, 2019) and guide our actions and memories (Torralba, Oliva, Castelhano, & Henderson, 2006; Draschkow & Võ, 2017; Helbing, Draschkow, & Võ, 2020). For this reason, there are numerous scientific questions for which an object-based measurement is preferable. ...
Preprint
Full-text available
Psychophysical paradigms measure visual attention via localized test items to which observers must react or whose features have to be discriminated. These items, however, potentially interfere with the intended measurement as they bias observers' spatial and temporal attention to their location and presentation time. Furthermore, visual sensitivity for conventional test items naturally decreases with retinal eccentricity, which prevents direct comparison of central and peripheral attention assessments. We developed a stimulus that overcomes these limitations. A brief oriented discrimination signal is seamlessly embedded into a continuously changing 1/f noise field, such that observers cannot anticipate potential test locations or times. Using our new paradigm, we demonstrate that local orientation discrimination accuracy for 1/f filtered signals is largely independent of retinal eccentricity. Moreover, we show that items present in the visual field indeed shape the distribution of visual attention, suggesting that classical studies investigating the spatiotemporal dynamics of visual attention via localized test items may have obtained a biased measure. We recommend our paradigm as an efficient method to evaluate the spatial and temporal spread of visual attention.
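The stimulus construction described above lends itself to a compact illustration. Below is a minimal R sketch of one frame of such a display: white noise is given a 1/f amplitude spectrum in the Fourier domain, and a small oriented Gabor signal is added at a random location. Image size, spatial frequency, and signal contrast are illustrative assumptions, not the authors' actual parameter values.

```r
# One frame of a 1/f noise field with an embedded oriented signal
# (illustrative sketch; all parameter values are assumptions).
set.seed(1)
n <- 256                                   # image size in pixels

# 1/f noise: scale the spectrum of white noise by 1/frequency.
freqs <- c(0:(n / 2), -((n / 2 - 1):1)) / n
fx <- matrix(rep(freqs, n), n, n)          # frequency grid (rows)
fy <- t(fx)                                # frequency grid (columns)
f  <- sqrt(fx^2 + fy^2)
f[1, 1] <- 1                               # avoid dividing DC by zero
spec  <- fft(matrix(rnorm(n * n), n, n)) / f
noise <- Re(fft(spec, inverse = TRUE)) / (n * n)
noise <- (noise - mean(noise)) / sd(noise) # standardize contrast

# Oriented test signal: a small Gabor patch tilted 45 degrees.
xy    <- seq(-3, 3, length.out = 64)
gx    <- outer(xy, rep(1, 64)); gy <- t(gx)
theta <- pi / 4
gabor <- cos(2 * pi * 1.5 * (gx * cos(theta) + gy * sin(theta))) *
  exp(-(gx^2 + gy^2) / 2)

# Embed the signal at an unpredictable location.
row <- sample(n - 63, 1); col <- sample(n - 63, 1)
frame <- noise
frame[row:(row + 63), col:(col + 63)] <-
  frame[row:(row + 63), col:(col + 63)] + 0.8 * gabor
image(frame, col = gray.colors(256), axes = FALSE)
```

In the actual paradigm the noise field changes continuously over time, so a frame like this would be one sample in an ongoing stream; the sketch only shows the spatial embedding.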
... In recent years, VR [47] has become a go-to tool for investigating long-term memory and spatial navigation [21, 48–53]. By contrast, the use of VR for investigating working memory remains rare [5]. ...
Article
Full-text available
As we move around, relevant information that disappears from sight can still be held in working memory to serve upcoming behaviour. How we maintain and select visual information as we move through the environment remains poorly understood because most laboratory tasks of working memory rely on removing visual material while participants remain still. We used virtual reality to study visual working memory following self-movement in immersive environments. Directional biases in gaze revealed the recruitment of more than one spatial frame for maintaining and selecting memoranda following self-movement. The findings bring the important realization that multiple spatial frames support working memory in natural behaviour. The results also illustrate how virtual reality can be a critical experimental tool to characterize this core memory system.
... Because the critical manipulation (of block type) concerned the type of lures that could occur, we restricted our analyses to lure trials. Generalized linear mixed-effects models (GLMMs) with a binomial distribution were used to analyze the percentage of false alarms (FAs), and linear mixed-effects models (LMMs) to analyze reaction times (RTs; correct-rejection trials only), following a procedure similar to the approach described in Helbing, Draschkow, and Võ (2020) and Draschkow and Võ (2017). Trials in which response times were faster than 200 ms or more than 3 standard deviations from the participant's mean were discarded from the analysis. ...
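As a concrete illustration of this kind of pipeline, the following R sketch fits the two models described above on simulated data; the column names (subject, block_type, false_alarm, rt) are hypothetical, and this is not the authors' actual analysis code.

```r
# Sketch of the described mixed-model pipeline (simulated data).
library(lme4)
set.seed(1)

# Simulated lure trials: 20 subjects x 100 trials, two block types.
dat <- expand.grid(subject = factor(1:20), trial = 1:100)
dat$block_type  <- sample(c("common_lure", "rare_lure"), nrow(dat), TRUE)
dat$false_alarm <- rbinom(nrow(dat), 1,
                          ifelse(dat$block_type == "rare_lure", 0.25, 0.15))
dat$rt <- rlnorm(nrow(dat), meanlog = 6.3, sdlog = 0.3)  # ~600 ms RTs

# Discard RTs < 200 ms or > 3 SDs from each participant's mean.
keep <- with(dat, ave(rt, subject, FUN = function(x)
  x >= 200 & abs(x - mean(x)) <= 3 * sd(x)))
dat <- dat[keep == 1, ]

# Binomial GLMM for false alarms, random intercepts by subject.
fa_fit <- glmer(false_alarm ~ block_type + (1 | subject),
                data = dat, family = binomial)

# LMM for RTs on correct-rejection trials (no false alarm) only.
rt_fit <- lmer(rt ~ block_type + (1 | subject),
               data = subset(dat, false_alarm == 0))

summary(fa_fit)
summary(rt_fit)
```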
Article
Full-text available
In everyday life, attentional templates, which facilitate the perception of task-relevant sensory inputs, are often based on associations in long-term memory. We ask whether templates retrieved from memory are necessarily faithful reproductions of the encoded information or if associative-memory templates can be functionally adapted after retrieval in service of current task demands. Participants learned associations between four shapes and four colored gratings, each with a characteristic combination of color (green or pink) and orientation (left or right tilt). On each trial, observers saw one shape followed by a grating and indicated whether the pair matched the learned shape-grating association. Across experimental blocks, we manipulated the types of nonmatch (lure) gratings most often presented. In some blocks the lures were most likely to differ in color but not tilt, whereas in other blocks this was reversed. If participants functionally adapt the retrieved template such that the distinguishing information between lures and targets is prioritized, then they should overemphasize the most commonly diagnostic feature dimension within the template. We found evidence for this in the behavioral responses to the lures: participants were more accurate and faster when responding to common versus rare lures, as predicted by the functional, but not the strictly veridical, template hypothesis. This shows that templates retrieved from memory can be functionally biased to optimize task performance in a flexible, context-dependent manner.
... Moreover, with the help of eye-tracking, it is possible to identify higher nervous functions such as emotions, regret, and disappointment (Nakhaeizadeh et al., 2020), as well as memorization processes (Bault et al., 2016). It has also been shown that eye-tracking technologies can be used to control decision-making (Helbing et al., 2020), for example by increasing its value (Fridman et al., 2018). A recent study (Smith and Krajbich, 2019) showed the importance of the multiplicative model, which states that greater attention to an option when making a choice has a stronger influence on what is chosen. ...
Article
Full-text available
The concept of using eye-tracking in virtual reality for education has been researched in various fields over the past years. With this review, we aim to discuss the recent advancements and applications in this area, explain the technological aspects, highlight the advantages of this approach, and inspire interest in the field. Eye-tracking has already been used in science for many decades and has now been substantially reinforced by the addition of virtual and augmented reality technologies. The first part of the review is a general overview of eye-tracking concepts, technical components, and their applications. In the second part, the focus shifts toward the application of eye-tracking in virtual reality. In the third part, a description of the recently emerged concept of eye-tracking in virtual reality is given first, followed by its current applications to education and studying, which have not been thoroughly described before. We describe the main findings, technological aspects, and advantages of this approach.
... Applications of combined cortical EEG recordings (Jiang et al., 2021) and specific gait parameters (Pieruccini-Faria et al., 2021) can independently predict the development of cognitive impairment in older and cognitively impaired persons. However, the integration of such MoBI applications into exergames is limited, but will be elaborated on here (Helbing et al., 2020). At the same time, XR-MoBI exergames with an appropriate unit architecture for telerehabilitation have rarely been published. ...
Article
Full-text available
A major concern of public health authorities is to also encourage adults to be exposed to enriched environments (sensory and cognitive-motor activity) during pandemic lockdowns, as was recently the case worldwide during the COVID-19 outbreak. Games for adults that require physical activity, known as exergames, offer opportunities here. In particular, the output of the gaming industry nowadays includes computer games with extended reality (XR), which combines real and virtual environments and refers to human-machine interactions generated by computers and wearable technologies. For example, playing the game in front of a computer screen while standing or walking on a force plate or treadmill allows the user to react to certain infrastructural changes and obstacles within the virtual environment. Recent developments, optimizations, and miniaturizations in wearable technology have produced wireless headsets and sensors that allow for unrestricted whole-body movement. This makes the virtual experience more immersive and provides the opportunity for greater engagement than traditional exercise. Currently, XR serves as an umbrella term for current immersive technologies as well as future realities that enhance the experience with features that produce new controllable environments. Overall, these technology-enhanced exergames challenge the adult user and modify the experience by increasing sensory stimulation and creating an environment where virtual and real elements interact. As a therapy, exergames can potentially create new environments and visualizations that may be more ecologically valid and thus simulate real activities of daily living that can be trained. Furthermore, by adding telemedicine features to the exergame, progress over time can be closely monitored and feedback provided, offering future opportunities for cognitive-motor assessment. To more optimally serve and challenge adults both physically and cognitively over time in future lockdowns, there is a need to provide long-term remote training and feedback, particularly related to activities of daily living, which create opportunities for effective and lasting rehabilitation for the elderly and sufferers of chronic non-communicable diseases (CNDs). The aim of the current review is to envision the remote training and monitoring of physical and cognitive aspects for adults with limited mobility (due to disability, disease, or age), through the implementation of concurrent telehealth and exergame features using XR and wireless sensor technologies.
... The analysis of the eye-gaze patterns showed that incidental fixations made on objects did not help in subsequent search, suggesting that without a specific task the fixated objects are not permanently encoded into memory. On the other hand, Helbing et al. [15] found that in VR environments, incidental viewing might result in better learning outcomes for the features present in the environment (memory for object identity and location) than directed memorization [16]. Understanding the relationship between viewing behavior, attention, and memory processes in VR is crucial for developing environments that optimally support learning. ...
Conference Paper
Full-text available
This paper presents the integration of eye-tracking into the MarSEVR (Maritime Safety Education with VR) technology to increase the precision of trainee focus, delivering the technology's learning episodes with enhanced impressiveness and user engagement. MarSEVR is part of the Safe Oceans concept, a green ocean technology that integrates several VR safety training applications to reduce maritime accidents that result in human casualties, sea pollution, and other environmental damage. The paper presents the research delivery architecture, driven by Hevner's design science in information systems research, for usability, user experience (UX), and effectiveness. Furthermore, this technology integration is approached from a game-design perspective for user engagement, but also from a cognitive and neuroscience perspective for pedagogical purposes. The paper addresses the impact of eye-tracking technology on maritime sector operations, the training market, and competitive research. Lastly, areas of further research are presented, including efforts to link and align finger-tracking and hand-recognition technologies with eye-tracking for a more complete VR training environment.
... In an attempt to resolve these two limitations, recent studies have used a new method based on virtual reality (VR) technology [27–30]. In these studies, participants were asked to view or search certain scenes presented in virtual environments while their eye movements were recorded. ...
Article
Full-text available
The effect of spatial contexts on attention is important for evaluating the risk of human errors and the accessibility of information in different situations. In traditional studies, this effect has been investigated using display-based and non-laboratory procedures. However, these two procedures are inadequate for measuring attention directed toward 360-degree environments and controlling exogenous stimuli. In order to resolve these limitations, we used a virtual-reality-based procedure and investigated how the spatial contexts of 360-degree environments influence attention. In the experiment, 20 students were asked to search for and report a target that was presented at any location in 360-degree virtual spaces as accurately and quickly as possible. Spatial contexts comprised a basic context (a grey and objectless space) and three specific contexts (a square grid floor, a cubic room, and an infinite floor). We found that response times for the task and eye movements were influenced by the spatial context of the 360-degree surrounding spaces. In particular, although total viewing times for the contexts did not match the saliency maps, the differences in total viewing times between the basic and specific contexts did resemble the maps. These results suggest that attention comprises basic and context-dependent characteristics, and the latter are influenced by the saliency of 360-degree contexts even when the contexts are irrelevant to a task.
... Finally, by using VR, we were able to measure the head-direction bias alongside the gaze bias, as participants' heads, eyes, and bodies were unconstrained. To date, the benefits of VR have been appreciated most prominently by researchers studying naturalistic human navigation, ethology, and long-term memory [59–64]. Our present findings further highlight the benefits of using VR (combined with eye- and head-tracking) to study bodily orienting behaviour [11, 31] related to internal cognitive processes, as showcased here for internal attentional focusing in working memory. ...
Preprint
Full-text available
We shift our gaze even when we orient attention internally to visual representations in working memory. Here, we show that the bodily orienting response associated with internal selective attention is widespread, as it also includes the head. In three virtual reality (VR) experiments, participants remembered two visual items. After a working-memory delay, a central colour cue indicated which item needed to be reproduced from memory. After the cue, head movements became biased in the direction of the memorised location of the cued memory item, despite there being no items to orient towards in the external environment. The head-direction bias had a distinct temporal profile from the gaze bias. Our findings reveal that directing attention within the spatial layout of visual working memory bears a strong relation to the overt head-orienting response we engage when directing attention to sensory information in the external environment. The head-direction bias further demonstrates that common neural circuitry is engaged during external and internal orienting of attention.
... Still, maintaining control over stimulus parameters in real-world studies can be difficult, and testing has often been limited to viewing static scenes. Despite these efforts, there remains a need to develop adaptable and controllable stimuli for investigating visual search performance using behavioral paradigms that can be considered more ecologically valid (Bennett, Bex, Bauer, & Merabet, 2019; Helbing, Draschkow, & Võ, 2020; Parsons, 2011; Parsons, 2015; Parsons & Duffield, 2019; for further discussion, see Holleman, Hooge, Kemner, & Hessels, 2020). In this direction, virtual reality (VR) has gained considerable interest as a way to approach issues related to task realism, immersion, adaptability, and experimental control, and it has even found growing application in clinical and behavioral neuroscience research (for reviews, see Bouchard, 2019; Parsons & Phillips, 2016; Tarr & Warren, 2002). ...
Article
Full-text available
Daily activities require the constant searching and tracking of visual targets in dynamic and complex scenes. Classic work assessing visual search performance has been dominated by the use of simple geometric shapes, patterns, and static backgrounds. Recently, there has been a shift toward investigating visual search in more naturalistic dynamic scenes using virtual reality (VR)-based paradigms. In this direction, we have developed a first-person perspective VR environment combined with eye tracking for the capture of a variety of objective measures. Participants were instructed to search for a preselected human target walking in a crowded hallway setting. Performance was quantified based on saccade and smooth pursuit ocular motor behavior. To assess the effect of task difficulty, we manipulated factors of the visual scene, including crowd density (i.e., number of surrounding distractors) and the presence of environmental clutter. In general, results showed a pattern of worsening performance with increasing crowd density. In contrast, the presence of visual clutter had no effect. These results demonstrate how visual search performance can be investigated using VR-based naturalistic dynamic scenes and with high behavioral relevance. This engaging platform may also have utility in assessing visual search in a variety of clinical populations of interest.
Article
Full-text available
Working memory (WM) enables temporary storage and manipulation of information [1], supporting tasks that require bridging between perception and subsequent behavior. Its properties, such as its capacity, have been thoroughly investigated in highly controlled laboratory tasks [1–8]. Much less is known about the utilization and properties of WM in natural behavior [9–11], when reliance on WM emerges as a natural consequence of interactions with the environment. We measured the trade-off between reliance on WM and gathering information externally during immersive behavior in an adapted object-copying task [12]. By manipulating the locomotive demands required for task completion, we could investigate whether and how WM utilization changed as gathering information from the environment became more effortful. Reliance on WM was lower than WM capacity measures in typical laboratory tasks. A clear trade-off also occurred. As sampling information from the environment required increasing locomotion and time investment, participants relied more on their WM representations. This reliance on WM increased in a shallow and linear fashion and was associated with longer encoding durations. Participants' avoidance of WM usage showcases a fundamental dependence on external information during ecological behavior, even if the potentially storable information is well within the capacity of the cognitive system. These foundational findings highlight the importance of using immersive tasks to understand how cognitive processes unfold within natural behavior. Our novel VR approach effectively combines the ecological validity, experimental rigor, and sensitive measures required to investigate the interplay between memory and perception in immersive behavior.
Article
Psychophysical paradigms measure visual attention via localized test items to which observers must react or whose features have to be discriminated. These items, however, potentially interfere with the intended measurement, as they bias observers’ spatial and temporal attention to their location and presentation time. Furthermore, visual sensitivity for conventional test items naturally decreases with retinal eccentricity, which prevents direct comparison of central and peripheral attention assessments. We developed a stimulus that overcomes these limitations. A brief oriented discrimination signal is seamlessly embedded into a continuously changing 1/ f noise field, such that observers cannot anticipate potential test locations or times. Using our new protocol, we demonstrate that local orientation discrimination accuracy for 1/ f filtered signals is largely independent of retinal eccentricity. Moreover, we show that items present in the visual field indeed shape the distribution of visual attention, suggesting that classical studies investigating the spatiotemporal dynamics of visual attention via localized test items may have obtained a biased measure. We recommend our protocol as an efficient method to evaluate the behavioral and neurophysiological correlates of attentional orienting across space and time.
Article
Full-text available
Repeated search studies are a hallmark in the investigation of the interplay between memory and attention. Because results are usually averaged, a substantial decrease in response times between the first and second search through the same search environment is rarely discussed. This search initiation effect is often the most dramatic decrease in search times in a series of sequential searches. The nature of this initial lack of search efficiency has thus far remained unexplored. We tested the hypothesis that the activation of spatial priors leads to this search efficiency profile. Before searching repeatedly through scenes in VR, participants either (1) previewed the scene, (2) saw an interrupted preview, or (3) started searching immediately. The search initiation effect was present in the latter condition but in neither of the preview conditions. Eye-movement metrics revealed that the locus of this effect lies in search guidance rather than search initiation or decision time, and extends beyond effects of object learning or incidental memory. Our study suggests that upon visual processing of an environment, a process of activating spatial priors to enable orientation is initiated; this takes a toll on search time at first, but once activated, the priors can be used to guide subsequent searches.
Article
We live in a rich, three-dimensional world with complex arrangements of meaningful objects. For decades, however, theories of visual attention and perception have been based on findings generated from lines and color patches. While these theories have been indispensable for our field, the time has come to move on from this rather impoverished view of the world and (at least try to) get closer to the real thing. After all, our visual environment consists of objects that we not only look at, but constantly interact with. Having incorporated the meaning and structure of scenes, i.e., their "grammar", then allows us to easily understand objects and scenes we have never encountered before. Studying this grammar provides us with the fascinating opportunity to gain new insights into the complex workings of attention, perception, and cognition. In this review, I will discuss how the meaning and the complex, yet predictive structure of real-world scenes influence attention allocation, search, and object identification.
Article
Previous research raised the counterintuitive hypothesis that searching for multiple potential targets leads to increased incidental encoding of distractors. Are these previously reported findings due to increased visual working memory (VWM) engagement, or to less precise target templates? In four experiments, we examined the effect of VWM load during visual search on incidental encoding of distractors. Consecutive target repetitions indirectly reduced template-related VWM demands but failed to reduce recognition of distractors relative to conditions where the targets were novel. Distractors that were subsequently recognized attracted longer cumulative dwell times, regardless of search condition. When placed in a dual-task situation where search was performed while holding a working memory load, recognition of distractors was marginally improved relative to a search task without additional VWM demands. We ruled out the possibility that the dual task was not sufficiently difficult to trigger the scrutiny of distractors required for significant encoding benefits by showing a decrement to encoding when search time was limited. This suggests that widening the attentional set is the crucial factor in improved incidental encoding, given that observers can assign differential status to various contents of VWM. Thus, utilizing VWM resources in general appears insufficient to meaningfully improve incidental memory.
Article
Virtual reality (VR) is a powerful tool for researchers due to its potential to study dynamic human behavior in highly naturalistic environments while retaining full control over the presented stimuli. Due to advancements in consumer hardware, VR devices are now very affordable and have also started to include technologies such as eye tracking, further extending potential research applications. Rendering engines such as Unity, Unreal, or Vizard now enable researchers to easily create complex VR environments. However, implementing the experimental design can still pose a challenge, and these packages do not provide out-of-the-box support for trial-based behavioral experiments. Here, we present a Python toolbox, designed to facilitate common tasks when developing experiments using the Vizard VR platform. It includes functionality for common tasks like creating, randomizing, and presenting trial-based experimental designs or saving results to standardized file formats. Moreover, the toolbox greatly simplifies continuous recording of eye and body movements using any hardware supported in Vizard. We further implement and describe a simple goal-directed reaching task in VR and show sample data recorded from five volunteers. The toolbox, example code, and data are all available on GitHub under an open-source license. We hope that our toolbox can simplify VR experiment development, reduce code duplication, and aid reproducibility and open-science efforts.
Article
Novelty is defined as the part of an experience that is not yet represented by memory systems. Novelty has been claimed to exert various memory-enhancing effects. A pioneering study by Wittmann et al. (2007) showed that memory formation may even benefit from the expectation of novelty. We aimed to replicate this assumed memory effect in four behavioral studies. However, our results do not support the idea that anticipated novel stimuli are more memorable than unexpected novelty. In our experiments, we systematically manipulated the novelty-predicting cues to ensure that expectations were correctly formed by the participants; however, the results showed no memory enhancement for expected novel pictures in any of the examined indices, and thus we could not replicate the main behavioral finding of Wittmann et al. (2007). These results call the original effect into question, and we argue that this fits better with current thinking on memory formation and brain function in general. Our results are more consistent with the view that unexpected stimuli are more likely to be retained by memory systems. Predictive coding theory suggests that unexpected stimuli are prioritized by the nervous system, and this may also benefit memory processes. Novel stimuli may be unexpected and thus recognized better in some experimental setups, yet novelty and unexpectedness do not always coincide. We hope that our work can bring more consistency to the literature on novelty, as educational methods in general could also benefit from this clarification.
Article
Humans are highly sensitive to the statistical relationships between features and objects within visual scenes. Inconsistent objects within scenes (e.g., a mailbox in a bedroom) instantly jump out to us and are known to catch our attention. However, it is debated whether such semantic inconsistencies boost memory for the scene, impair it, or have no influence on memory. Here, we examined the effect of scene–object consistency on memory representations measured through drawings made during recall. Participants (N = 30) were eye-tracked while studying 12 real-world scene images with an added object that was either semantically consistent or inconsistent. After a 6-minute distractor task, they drew the scenes from memory while pen movements were tracked electronically. Online scorers (N = 1,725) rated each drawing for diagnosticity, object detail, spatial detail, and memory errors. Inconsistent scenes were recalled more frequently, but contained less object detail. Further, inconsistent objects elicited more errors reflecting looser memory binding (e.g., migration across images). These results point to a dual effect in memory: boosted global (scene) but diminished local (object) information. Finally, we observed that participants fixated longest on inconsistent objects, but these fixations during study were not correlated with recall performance, time, or drawing order. In sum, these results show a nuanced effect of scene inconsistencies on memory detail during recall.
Article
Full-text available
Memories are about the past, but they serve the future. Memory research often emphasizes the former aspect: focusing on the functions that re-constitute (re-member) experience and elucidating the various types of memories and their interrelations, timescales, and neural bases. Here we highlight the prospective nature of memory in guiding selective attention, focusing on functions that use previous experience to anticipate the relevant events about to unfold: to "premember" experience. Memories of various types and timescales play a fundamental role in guiding perception and performance adaptively, proactively, and dynamically. Consonant with this perspective, memories are often recorded according to expected future demands. Using working memory as an example, we consider how mnemonic content is selected and represented for future use. This perspective moves away from the traditional representational account of memory toward a functional account in which forward-looking memory traces are informationally and computationally tuned for interacting with incoming sensory signals to guide adaptive behavior.
Article
Full-text available
Understanding the content of memory is essential to teasing apart its underlying mechanisms. While recognition tests have commonly been used to probe memory, it is difficult to establish what specific content is driving performance. Here, we instead focus on free recall of real-world scenes, and quantify the content of memory using a drawing task. Participants studied 30 scenes and, after a distractor task, drew as many images in as much detail as possible from memory. The resulting memory-based drawings were scored by thousands of online observers, revealing numerous objects, few memory intrusions, and precise spatial information. Further, we find that visual saliency and meaning maps can explain aspects of memory performance and observe no relationship between recall and recognition for individual images. Our findings show that not only is it possible to quantify the content of memory during free recall, but those memories contain detailed representations of our visual experiences.
Article
Full-text available
The arrangement of the contents of real-world scenes follows certain spatial rules that allow for extremely efficient visual exploration. What remains underexplored is the role different types of objects hold in a scene. In the current work, we seek to unveil an important building block of scenes: anchor objects. Anchors hold specific spatial predictions regarding the likely position of other objects in an environment. In a series of three eye-tracking experiments, we tested what role anchor objects occupy during visual search. In all of the experiments, participants searched through scenes for an object that was cued at the beginning of each trial. Critically, in half of the scenes a target-relevant anchor was swapped for an irrelevant, albeit semantically consistent, object. We found that relevant anchor objects can guide visual search, leading to faster reaction times, less scene coverage, and less time between fixating the anchor and the target. The choice of anchor objects was confirmed through an independent large image database, which allowed us to identify key attributes of anchors. Anchor objects seem to play a unique role in the spatial layout of scenes and need to be considered for understanding the efficiency of visual search in realistic stimuli.
Article
Full-text available
Visual long-term memory capacity appears massive and detailed when probed explicitly. In the real world, however, memories are usually built from chance encounters. Therefore, we investigated the capacity and detail of incidental memory in a novel encoding task, instructing participants to detect visually distorted objects among intact objects. In a subsequent surprise recognition memory test, lures of a novel category, another exemplar, the same object in a different state, or exactly the same object were presented. Lure recognition performance was above chance, suggesting that incidental encoding resulted in reliable memory formation. Critically, performance for state lures was worse than for exemplar lures, which was driven by state foils being more similar to the original objects than exemplar foils were. Our results indicate that incidentally generated visual long-term memory representations of isolated objects are more limited in detail than recently suggested.
Article
Full-text available
An important issue of psychological research is how experiments conducted in the laboratory or theories based on such experiments relate to human performance in daily life. Immersive virtual reality (VR) allows control over stimuli and conditions at increased ecological validity. The goal of the present study was to accomplish a transfer of traditional paradigms that assess attention and distraction to immersive VR. To further increase ecological validity we explored attentional effects with daily objects as stimuli instead of simple letters. Participants searched for a target among distractors on the countertop of a virtual kitchen. Target–distractor discriminability was varied and the displays were accompanied by a peripheral flanker that was congruent or incongruent to the target. Reaction time was slower when target–distractor discriminability was low and when flankers were incongruent. The results were replicated in a second experiment in which stimuli were presented on a computer screen in two dimensions. The study demonstrates the successful translation of traditional paradigms and manipulations into immersive VR and lays a foundation for future research on attention and distraction in VR. Further, we provide an outline for future studies that should use features of VR that are not available in traditional laboratory research.
Article
Full-text available
Search is a central visual function. Most of what is known about search derives from experiments where subjects view 2D displays on computer monitors. In the natural world, however, search involves movement of the body in large-scale spatial contexts, and it is unclear how this might affect search strategies. In this experiment, we explore the nature of memory representations developed when searching in an immersive virtual environment. By manipulating target location, we demonstrate that search depends on episodic spatial memory as well as learnt spatial priors. Subjects rapidly learned the large-scale structure of the space, with shorter paths and less head rotation to find targets. These results suggest that spatial memory of the global structure allows a search strategy that involves efficient attention allocation based on the relevance of scene regions. Thus spatial memory may allow less energetically costly search strategies.
Article
Full-text available
Predictions of environmental rules (here referred to as "scene grammar") can come in different forms: seeing a toilet in a living room would violate semantic predictions, while finding a toilet brush next to the toothpaste would violate syntactic predictions. The existence of such predictions has usually been investigated by showing observers images containing such grammatical violations. Conversely, the generative process of creating an environment according to one's scene grammar, and its effects on behavior and memory, has received little attention. In a virtual reality paradigm, we either instructed participants to arrange objects according to their scene grammar or against it. Subsequently, participants' memory for the arrangements was probed using a surprise recall (Exp1) or repeated search (Exp2) task. As a result, participants' construction behavior showed strategic use of larger, static objects to anchor the locations of smaller objects, which are generally the goals of everyday actions. Further analysis of this scene construction data revealed possible commonalities between the rules governing word usage in language and object usage in naturalistic environments. Taken together, we revealed some of the building blocks of scene grammar necessary for efficient behavior, which differentially influence how we interact with objects and what we remember about scenes.
Article
Full-text available
One of the frequent questions by users of the mixed-model function lmer of the lme4 package has been: how can I get p values for the F and t tests for objects returned by lmer? The lmerTest package extends the 'lmerMod' class of the lme4 package by overloading the anova and summary functions to provide p values for tests of fixed effects. We have implemented Satterthwaite's method for approximating degrees of freedom for the t and F tests. We have also implemented the construction of Type I–III ANOVA tables. Furthermore, one may also obtain the summary as well as the anova table using the Kenward-Roger approximation for denominator degrees of freedom (based on the KRmodcomp function from the pbkrtest package). Some other convenient mixed-model analysis tools, such as a step method that performs backward elimination of nonsignificant effects (both random and fixed), calculation of population means, and multiple comparison tests, together with plotting facilities, are provided by the package as well.
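For readers unfamiliar with the package, the following minimal R sketch (using the sleepstudy dataset that ships with lme4) shows the overloaded summary and anova methods and the step function described above:

```r
# Minimal lmerTest usage example on lme4's built-in sleepstudy data.
library(lmerTest)   # masks lme4::lmer so fits gain p-value methods
data("sleepstudy", package = "lme4")

fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

summary(fit)   # t tests with Satterthwaite degrees of freedom
anova(fit)     # Type III ANOVA table (Satterthwaite by default)
anova(fit, ddf = "Kenward-Roger")  # alternative df (needs pbkrtest)
step(fit)      # backward elimination of random and fixed effects
```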
Article
Full-text available
The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search in both 2D and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D.
Article
Full-text available
Looking for as well as actively manipulating objects that are relevant to ongoing behavioral goals are intricate parts of natural behavior. It is, however, not clear to what degree these two forms of interaction with our visual environment differ with regard to their memory representations. In a real-world paradigm, we investigated if physically engaging with objects as part of a search task influences identity and position memory differently for task-relevant versus irrelevant objects. Participants equipped with a mobile eye tracker either searched for cued objects without object interaction (Find condition) or actively collected the objects they found (Handle condition). In the following free-recall task, identity memory was assessed, demonstrating superior memory for relevant compared to irrelevant objects, but no difference between the Handle and Find conditions. Subsequently, location memory was inferred via times to first fixation in a final object search task. Active object manipulation and task-relevance interacted in that location memory for relevant objects was superior to irrelevant ones only in the Handle condition. Including previous object recall performance as a covariate in the linear mixed-model analysis of times to first fixation allowed us to explore the interaction between remembered/forgotten object identities and the execution of location memory. Identity memory performance predicted location memory in the Find but not the Handle condition, suggesting that active object handling leads to strong spatial representations independent of object identity memory. We argue that object handling facilitates the prioritization of relevant location information, but this might come at the cost of deprioritizing irrelevant information.
Article
Full-text available
The analysis of experimental data with mixed-effects models requires decisions about the specification of the appropriate random-effects structure. Recently, Barr et al. (2013) recommended fitting 'maximal' models with all possible random effect components included. Estimation of maximal models, however, may not converge. We show that failure to converge typically is not due to a suboptimal estimation algorithm, but is a consequence of attempting to fit a model that is too complex to be properly supported by the data, irrespective of whether estimation is based on maximum likelihood or on Bayesian hierarchical modeling with uninformative or weakly informative priors. Importantly, even under convergence, overparameterization may lead to uninterpretable models. We provide diagnostic tools for detecting overparameterization and guiding model simplification. Finally, we clarify that the simulations on which Barr et al. base their recommendations are atypical for real data. A detailed example is provided of how subject-related attentional fluctuation across trials may further qualify statistical inferences about fixed effects, and of how such nonlinear effects can be accommodated within the mixed-effects modeling framework.
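To make the recommended simplification concrete, the sketch below contrasts a maximal random-effects structure with two reduced alternatives, using lme4's double-bar syntax to drop the random-effect correlation. This is a generic illustration on toy data, not the paper's own diagnostic procedure:

```r
# Generic illustration of random-effects simplification (not the paper's
# exact diagnostic pipeline); sleepstudy is toy data shipped with lme4.
library(lme4)

m_max <- lmer(Reaction ~ Days + (Days | Subject),  sleepstudy)  # slopes + correlation
m_zcp <- lmer(Reaction ~ Days + (Days || Subject), sleepstudy)  # correlation removed
m_int <- lmer(Reaction ~ Days + (1 | Subject),     sleepstudy)  # intercepts only

anova(m_zcp, m_max)  # is the intercept-slope correlation supported by the data?
anova(m_int, m_zcp)  # are the random slopes supported by the data?
```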
Article
Full-text available
An essential first step in planning a confirmatory or a replication study is to determine the sample size necessary to draw statistically reliable inferences using power analysis. A key problem, however, is that what is available is the sample-size estimate of the effect size, and its use can lead to severely underpowered studies when the effect size is overestimated. As a potential remedy, we introduce safeguard power analysis, which uses the uncertainty in the estimate of the effect size to achieve a better likelihood of correctly identifying the population effect size. Using a lower-bound estimate of the effect size, in turn, allows researchers to calculate a sample size for a replication study that helps protect it from being underpowered. We show that in most common instances, compared with nominal power, safeguard power is higher whereas standard power is lower. We additionally recommend the use of safeguard power analysis to evaluate the strength of the evidence provided by the original study. © The Author(s) 2014.
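A rough sketch of the idea for a correlation effect size follows; the effect size, original sample size, and the width of the confidence interval are all illustrative assumptions (the paper derives the recommended interval), and the pwr package serves only as a convenient power calculator:

```r
# Sketch of safeguard power analysis for a correlation: plan the replication
# around the lower CI bound of the observed effect, not the point estimate.
# r_obs, n_obs, and the two-sided 60% interval are illustrative assumptions.
library(pwr)

r_obs <- .35                           # hypothetical effect from the original study
n_obs <- 40                            # hypothetical original sample size

z      <- atanh(r_obs)                 # Fisher z transform of r
se_z   <- 1 / sqrt(n_obs - 3)          # standard error on the z scale
r_safe <- tanh(z + qnorm(.20) * se_z)  # lower bound of a two-sided 60% CI

pwr.r.test(r = r_safe, power = .80, sig.level = .05)  # n needed for the replication
```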
Article
Full-text available
Memorizing critical objects and their locations is an essential part of everyday life. In the present study, incidental encoding of objects in naturalistic scenes during search was compared to explicit memorization of those scenes. To investigate if prior knowledge of scene structure influences these two types of encoding differently, we used meaningless arrays of objects as well as objects in real-world, semantically meaningful images. Surprisingly, when participants were asked to recall scenes, their memory performance was markedly better for searched objects than for objects they had explicitly tried to memorize, even though participants in the search condition were not explicitly asked to memorize objects. This finding held true even when objects were observed for an equal amount of time in both conditions. Critically, the recall benefit for searched over memorized objects in scenes was eliminated when objects were presented on uniform, non-scene backgrounds rather than in a full scene context. Thus, scene semantics not only help us search for objects in naturalistic scenes, but appear to produce a representation that supports our memory for those objects beyond intentional memorization.
Article
Full-text available
Maximum likelihood or restricted maximum likelihood (REML) estimates of the parameters in linear mixed-effects models can be determined using the lmer function in the lme4 package for R. As for most model-fitting functions in R, the model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms. The formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters. The appropriate criterion is optimized, using one of the constrained optimization functions in R, to provide the parameter estimates. We describe the structure of the model, the steps in evaluating the profiled deviance or REML criterion, and the structure of classes or types that represents such a model. Sufficient detail is included to allow specialization of these structures by users who wish to write functions to fit specialized linear mixed models, such as models incorporating pedigrees or smoothing splines, that are not easily expressible in the formula language used by lmer.
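For readers unfamiliar with the formula language the abstract refers to, a minimal example is sketched below (again using the package's built-in sleepstudy data): fixed-effects terms appear as in lm, while random-effects terms are written in parentheses with a grouping factor after the vertical bar.

```r
# Minimal sketch of lme4's formula interface; sleepstudy ships with the package.
library(lme4)

fit_reml <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)               # REML (default)
fit_ml   <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy, REML = FALSE) # maximum likelihood

REMLcrit(fit_reml)  # profiled REML criterion at the optimum
deviance(fit_ml)    # profiled deviance of the ML fit
fixef(fit_reml)     # fixed-effect estimates
VarCorr(fit_reml)   # random-effect variance/covariance estimates
```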
Article
Full-text available
Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, by contrast, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.
Article
Full-text available
In sentence processing, semantic and syntactic violations elicit differential brain responses observable in event-related potentials: An N400 signals semantic violations, whereas a P600 marks inconsistent syntactic structure. Does the brain register similar distinctions in scene perception? To address this question, we presented participants with semantic inconsistencies, in which an object was incongruent with a scene's meaning, and syntactic inconsistencies, in which an object violated structural rules. We found a clear dissociation between semantic and syntactic processing: Semantic inconsistencies produced negative deflections in the N300-N400 time window, whereas mild syntactic inconsistencies elicited a late positivity resembling the P600 found for syntactic inconsistencies in sentence processing. Extreme syntactic violations, such as a hovering beer bottle defying gravity, were associated with earlier perceptual processing difficulties reflected in the N300 response, but failed to produce a P600 effect. We therefore conclude that different neural populations are active during semantic and syntactic processing of scenes, and that syntactically impossible object placements are processed in a categorically different manner than are syntactically resolvable object misplacements.
Article
Full-text available
A popular model of visual perception states that coarse information (carried by low spatial frequencies) along the dorsal stream is rapidly transmitted to prefrontal and medial temporal areas, activating contextual information from memory, which can in turn constrain detailed input carried by high spatial frequencies arriving at a slower rate along the ventral visual stream, thus facilitating the processing of ambiguous visual stimuli. We were interested in testing whether this model contributes to memory-guided orienting of attention. In particular, we asked whether global, low-spatial frequency (LSF) inputs play a dominant role in triggering contextual memories in order to facilitate the processing of the upcoming target stimulus. We explored this question over four experiments. The first experiment replicated the LSF advantage reported in perceptual discrimination tasks by showing that participants were faster and more accurate at matching a low spatial frequency version of a scene, compared to a high spatial frequency version, to its original counterpart in a forced-choice task. The subsequent three experiments tested the relative contributions of low versus high spatial frequencies during memory-guided covert spatial attention orienting tasks. Replicating the effects of memory-guided attention, pre-exposure to scenes associated with specific spatial memories for target locations (memory cues) led to higher perceptual discrimination and faster response times to identify targets embedded in the scenes. However, either high or low spatial frequency cues were equally effective; LSF signals did not selectively or preferentially contribute to the memory-driven attention benefits to performance. Our results challenge a generalized model that LSFs activate contextual memories, which in turn bias attention and facilitate perception.
Article
Full-text available
A powerful way of improving one's memory for material is to be tested on that material. Tests enhance later retention more than additional study of the material, even when tests are given without feedback. This surprising phenomenon is called the testing effect, and although it has been studied by cognitive psychologists sporadically over the years, today there is a renewed effort to learn why testing is effective and to apply testing in educational settings. In this article, we selectively review laboratory studies that reveal the power of testing in improving retention and then turn to studies that demonstrate the basic effects in educational settings. We also consider the related concepts of dynamic testing and formative assessment as other means of using tests to improve learning. Finally, we consider some negative consequences of testing that may occur in certain circumstances, though these negative effects are often small and do not cancel out the large positive effects of testing. Frequent testing in the classroom may boost educational achievement at all levels of education. In contemporary educational circles, the concept of testing has a dubious reputation, and many educators believe that testing is overemphasized in today's schools. By "testing," most commentators mean using standardized tests to assess students. During the 20th century, the educational testing movement produced numerous assessment devices used throughout education systems in most countries, from prekindergarten through graduate school. However, in this review, we discuss primarily the kind of testing that occurs in classrooms or that students engage in while studying (self-testing).
Article
Full-text available
Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the standards that have been in place for many decades. Through theoretical arguments and Monte Carlo simulation, we show that LMEMs generalize best when they include the maximal random effects structure justified by the design. The generalization performance of LMEMs including data-driven random effects structures strongly depends upon modeling criteria and sample size, yielding reasonable results on moderately-sized samples when conservative criteria are used, but with little or no power advantage over maximal models. Finally, random-intercepts-only LMEMs used on within-subjects and/or within-items data from populations where subjects and/or items vary in their sensitivity to experimental manipulations always generalize worse than separate F1 and F2 tests, and in many cases, even worse than F1 alone. Maximal LMEMs should be the ‘gold standard’ for confirmatory hypothesis testing in psycholinguistics and beyond.
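As a sketch, a maximal structure for a design in which a condition manipulation varies within subjects and within items would include by-subject and by-item intercepts and slopes. All variable names and the simulated data below are hypothetical placeholders:

```r
# Hypothetical sketch of a 'maximal' random-effects structure for a
# within-subject, within-item design; the simulated data are placeholders
# (with such structureless toy data, singular fits are to be expected).
library(lme4)

set.seed(1)
dat <- expand.grid(subject = factor(1:20), item = factor(1:20),
                   condition = c(-0.5, 0.5))
dat$rt <- 500 + 30 * dat$condition + rnorm(nrow(dat), sd = 50)

m_maximal <- lmer(
  rt ~ condition +
    (1 + condition | subject) +  # by-subject intercepts and slopes
    (1 + condition | item),      # by-item intercepts and slopes
  data = dat
)
summary(m_maximal)
```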
Article
Full-text available
When looking at a scene, observers feel that they see its entire structure in great detail and can immediately notice any changes in it. However, when brief blank fields are placed between alternating displays of an original and a modified scene, a striking failure of perception is induced: identification of changes becomes extremely difficult, even when changes are large and made repeatedly. Identification is much faster when a verbal cue is provided, showing that poor visibility is not the cause of this difficulty. Identification is also faster for objects mentioned in brief verbal descriptions of the scene. These results support the idea that observers never form a complete, detailed representation of their surroundings. In addition, results also indicate that attention is required to perceive change, and that in the absence of localized motion signals it is guided on the basis of high-level interest.
Article
Full-text available
Recent results from Võ and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a preview task did not improve later search, but Võ and Wolfe used a relatively insensitive, between-subjects design. Here, we replicated the Võ and Wolfe study using a within-subject manipulation of scene preview. A preview session (focused either on object location memory or on the assessment of object semantics) reliably facilitated later search. In addition, information acquired from distractors in a scene facilitated search when the distractor later became the target. Instead of being strongly constrained by task, visual memory is applied flexibly to guide attention and gaze during visual search.
Article
Full-text available
Showed 2560 photographic stimuli for 10 sec. each to 21 undergraduates, and tested their recognition memory using a 2-alternative forced-choice task. Performance exceeded 90% retention, even when up to 3 days elapsed between learning and testing. Variants showed that the presentation time could be reduced to 1 sec/picture without seriously affecting performance, and that the stimuli could be reversed in orientation in the test situation without impairing recognition performance appreciably. The orientation of the stimuli could also be learned, although not as well as the identity of the pictures. Results indicate the vast memory for pictures possessed by human beings and emphasize the need to determine mechanisms by which this is accomplished.
Article
Full-text available
In everyday situations, we often rely on our memories to find what we are looking for in our cluttered environment. Recently, we developed a new experimental paradigm to investigate how long-term memory (LTM) can guide attention and showed how the pre-exposure to a complex scene in which a target location had been learned facilitated the detection of the transient appearance of the target at the remembered location [Summerfield, J. J., Rao, A., Garside, N., & Nobre, A. C. Biasing perception by spatial long-term memory. The Journal of Neuroscience, 31, 14952-14960, 2011; Summerfield, J. J., Lepsien, J., Gitelman, D. R., Mesulam, M. M., & Nobre, A. C. Orienting attention based on long-term memory experience. Neuron, 49, 905-916, 2006]. This study extends these findings by investigating whether and how LTM can enhance perceptual sensitivity to identify targets occurring within their complex scene context. Behavioral measures showed superior perceptual sensitivity (d') for targets located in remembered spatial contexts. We used the N2pc ERP to test whether LTM modulated the process of selecting the target from its scene context. Surprisingly, in contrast to effects of visual spatial cues or implicit contextual cueing, LTM for target locations significantly attenuated the N2pc potential. We propose that the mechanism by which these explicitly available LTMs facilitate perceptual identification of targets may differ from mechanisms triggered by other types of top-down sources of information.
Article
Full-text available
Three experiments were conducted to investigate the existence of incidentally acquired, long-term, detailed visual memory for objects embedded in previously viewed scenes. Participants performed intentional memorization and incidental visual search learning tasks while viewing photographs of real-world scenes. A visual memory test for previously viewed objects from these scenes then followed. Participants were not aware that they would be tested on the scenes following incidental learning in the visual search task. In two types of memory tests for visually specific object information (token discrimination and mirror-image discrimination), performance following both the memorization and visual search conditions was reliably above chance. These results indicate that recent demonstrations of good visual memory during scene viewing are not due to intentional scene memorization. Instead, long-term visual representations are incidentally generated as a natural product of scene perception.
Article
Full-text available
Past experience provides a rich source of predictive information about the world that could be used to guide and optimize ongoing perception. However, the neural mechanisms that integrate information coded in long-term memory (LTM) with ongoing perceptual processing remain unknown. Here, we explore how the contents of LTM optimize perception by modulating anticipatory brain states. By using a paradigm that integrates LTM and attentional orienting, we first demonstrate that the contents of LTM sharpen perceptual sensitivity for targets presented at memory-predicted spatial locations. Next, we examine oscillations in EEG to show that memory-guided attention is associated with spatially specific desynchronization of alpha-band activity over visual cortex. Additionally, we use functional MRI to confirm that target-predictive spatial information stored in LTM triggers spatiotopic modulation of preparatory activity in extrastriate visual cortex. Finally, functional MRI results also implicate an integrated cortical network, including the hippocampus and a dorsal frontoparietal circuit, as a likely candidate for organizing preparatory states in visual cortex according to the contents of LTM.
Article
Full-text available
Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures.
Article
Full-text available
One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in the same scene, search performance remained essentially unchanged over the course of searches despite increasing scene familiarity. Similarly, looking at target objects during previews, which included letter search, 30 seconds of free viewing, or even 30 seconds of memorizing a scene, also did not benefit search for the same objects later on. However, when the same object was searched for again, memory for the previous search was capable of producing very substantial speeding of search despite many different intervening searches. This was especially the case when the previous search engagement had been active rather than supported by a cue. While these search benefits speak to the strength of memory-guided search when the same search target is repeated, the lack of memory guidance during initial object searches, despite previous encounters with the target objects, demonstrates the dominance of guidance by generic scene knowledge in real-world search.
Article
Full-text available
How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4-6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the "functional set size" of items that could possibly be the target.
Article
Despite many recent technical advances, the human efficacy of naturalistic scene processing is still unparalleled. What guides attention in real world environments? How does scene context affect object search and classification? And how are the many predictions we have with regards to our visual environment structured? Here, we review the latest findings that speak to these questions, ranging from traditional psychophysics, eye movement, and neurocognitive studies in the lab to experiments in virtual reality (VR) and the real world. The growing interest and grasp of a scene's inner workings are enriching our understanding of the efficiency of scene perception and could inspire new technological and clinical advances.
Article
A long line of research has shown that vision and memory are closely linked, such that particular eye movement behaviour aids memory performance. In two experiments, we ask whether the positive influence of eye movements on memory is primarily a result of overt visual exploration during the encoding or the recognition phase. Experiment 1 allowed participants to free-view images of scenes, followed by a new-old recognition memory task. Exploratory analyses found that eye movements during study were predictive of subsequent memory performance. Importantly, intrinsic image memorability does not explain this finding. Eye movements during test were only predictive of memory within the first 600 ms of the trial. To examine whether this relationship between eye movements and memory is causal, Experiment 2 manipulated participants' ability to make eye movements during either study or test in a new-old recognition task. Participants were either encouraged to freely explore the scene in both the study and test phases, or had to refrain from making eye movements in either the test phase, the study phase, or both. We found that hit rate was significantly higher when participants moved their eyes during the study phase, regardless of what they did in the test phase. False alarm rate, on the other hand, was affected only by eye movements during the test phase: it decreased when participants were encouraged to explore the scene. Taken together, these results reveal a dissociation of the role of eye movements during the encoding and recognition of scenes. Eye movements during study are instrumental in forming memories, and eye movements during recognition support the judgment of memory veracity.
Conference Paper
People perform visual search tasks every day, ranging from trivial tasks to emergencies. Classical research on visual search used artificial stimuli to identify factors that affect search times and accuracy. Recent studies have explored visual search in real scenes by simulating them on two-dimensional displays. The scientific community continues to use new technology to formulate better methods and practices. Virtual reality is a new technology that offers its users immersion and elicits “real” responses. The purpose of this study is to compare search efficiencies in real scenes on 2-D displays and in virtual reality. A visual search experiment measuring reaction times and accuracy was conducted to evaluate both methods. Results suggest that visual search in real scenes is significantly faster and more accurate in virtual reality than on 2-D displays. These findings could open up new opportunities for visual search research on real scenes and real-life scenarios.
Article
There are ever more compelling tools available for neuroscience research, ranging from selective genetic targeting to optogenetic circuit control to mapping whole connectomes. These approaches are coupled with a deep-seated, often tacit, belief in the reductionist program for understanding the link between the brain and behavior. The aim of this program is causal explanation through neural manipulations that allow testing of necessity and sufficiency claims. We argue, however, that another equally important approach seeks an alternative form of understanding through careful theoretical and experimental decomposition of behavior. Specifically, the detailed analysis of tasks and of the behavior they elicit is best suited for discovering component processes and their underlying algorithms. In most cases, we argue that study of the neural implementation of behavior is best investigated after such behavioral work. Thus, we advocate a more pluralistic notion of neuroscience when it comes to the brain-behavior relationship: behavioral work provides understanding, whereas neural interventions test causality.
Article
To interact with the world, we have to make sense of the continuous sensory input conveying information about our environment. A recent surge of studies has investigated the processes enabling scene understanding, using increasingly complex stimuli and sophisticated analyses to highlight the visual features and brain regions involved. However, there are two major challenges to producing a comprehensive framework for scene understanding. First, scene perception is highly dynamic, subserving multiple behavioral goals. Second, a multitude of different visual properties co-occur across scenes and may be correlated or independent. We synthesize the recent literature and argue that for a complete view of scene understanding, it is necessary to account for both differing observer goals and the contribution of diverse scene properties.
Article
In the analysis of data it is often assumed that observations y1, y2, …, yn are independently normally distributed with constant variance and with expectations specified by a model linear in a set of parameters θ. In this paper we make the less restrictive assumption that such a normal, homoscedastic, linear model is appropriate after some suitable transformation has been applied to the y's. Inferences about the transformation and about the parameters of the linear model are made by computing the likelihood function and the relevant posterior distribution. The contributions of normality, homoscedasticity and additivity to the transformation are separated. The relation of the present methods to earlier procedures for finding transformations is discussed. The methods are illustrated with examples.
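The family of power transformations analyzed here is conventionally written as follows (a standard textbook statement, reproduced for orientation rather than quoted from the paper), with the parameter λ estimated by maximizing the likelihood under the assumed normal, homoscedastic linear model:

```latex
y^{(\lambda)} =
  \begin{cases}
    \dfrac{y^{\lambda} - 1}{\lambda}, & \lambda \neq 0,\\[6pt]
    \log y, & \lambda = 0,
  \end{cases}
\qquad y > 0.
```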
Article
Previous work has shown that recall of objects that are incidentally encountered as targets in visual search is better than recall of objects that have been intentionally memorized (Draschkow, Wolfe, & Võ, 2014). However, this counter-intuitive result is not seen when these tasks are performed with non-scene stimuli. The goal of the current paper is to determine what features of search in a scene contribute to higher recall rates when compared to a memorization task. In each of four experiments, we compare the free recall rate for target objects following a search to the rate following a memorization task. Across the experiments, the stimuli include progressively more scene-related information. Experiment 1 provides the spatial relations between objects. Experiment 2 adds relative size and depth of objects. Experiments 3 and 4 include scene layout and semantic information. We find that search leads to better recall than explicit memorization in cases where scene layout and semantic information are present, as long as the participant has ample time (2500ms) to integrate this information with knowledge about the target object (Exp. 4). These results suggest that the integration of scene and target information not only leads to more efficient search, but can also contribute to stronger memory representations than intentional memorization.
Article
Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes. © 2015 New York Academy of Sciences.
Conference Paper
Holding recently experienced information in mind can help us achieve our current goals. However, such immediate and direct forms of guidance from working memory are less helpful over extended delays or when other related information in long-term memory is useful for reaching these goals. Here we show that information that was encoded in the past but is no longer present or relevant to the task also guides attention. We examined this by associating multiple unique features with novel shapes in visual long-term memory (VLTM), and subsequently testing how memories for these objects biased the deployment of attention. In Experiment 1, VLTM for associated features guided visual search for the shapes, even when these features had never been task-relevant. In Experiment 2, associated features captured attention when presented in isolation during a secondary task that was completely unrelated to the shapes. These findings suggest that long-term memory enables a durable and automatic type of memory-based attentional control.
Article
Selecting and remembering visual information is an active and competitive process. In natural environments, representations are tightly coupled to task. Objects that are task-relevant are remembered better due to a combination of increased selection for fixation and strategic control of encoding and/or retaining viewed information. However, it is not understood how physically manipulating objects when performing a natural task influences priorities for selection and memory. In this study, we compare priorities for selection and memory when actively engaged in a natural task with first-person observation of the same object manipulations. Results suggest that active manipulation of a task-relevant object results in a specific prioritization for object position information compared with other properties and compared with action observation of the same manipulations. Experiment 2 confirms that this spatial prioritization is likely to arise from manipulation rather than differences in spatial representation in real environments and the movies used for action observation. Thus, our findings imply that physical manipulation of task relevant objects results in a specific prioritization of spatial information about task-relevant objects, possibly coupled with strategic de-prioritization of colour memory for irrelevant objects.
Article
Certain results obtained in an experiment 'On the Analysis of the Memory Function in Orthography' conducted in the psychological laboratory of the University of Illinois in the year 1907-08, led to the conclusion that the opportunity for recall, during or immediately after the learning process, was of great benefit to the individual. It has been the aim of the present experiment to determine more carefully the nature of the influence of this recall and the conditions under which it could be used most favorably. In other words, to determine, when a definite length of time is given in which to learn a given amount of material, whether it is of the greater advantage to spend all the time in actual perception of the material, or part of the time in perception and part in recall; and also whether the recall should be interspersed with the perception or should follow it immediately or after an interval. We have also attempted to make some analysis and explanation of the factors which are of influence in this recall period and particularly to determine the effect of localization. The work already done along the line of economy of learning has shown that the relative value of many of the so-called methods of learning depend in a large measure on the memory type of the individual who is to use the method. A condition which would be of great advantage to a visually-minded individual might prove positively distracting to one of the auditory-motor type and again might have no appreciable effect on the individual of a mixed type. In view of this, in the present experiment care was taken to determine from introspective analysis the type of imagery of the subjects and in general the mental processes which they went through in learning the material presented to them.
Article
This book encompasses and weaves together the common threads of the four major topics that comprise the core of false memory research: theories of false memory, adult experimental psychology of false memory, false memory in legal contexts, and false memory in psychotherapy. By integrating material on all four of these topics, the book provides a comprehensive picture of our current understanding of human false memory.
Article
The visual environment is extremely rich and complex, producing information overload for the visual system. But the environment also embodies structure in the form of redundancies and regularities that may serve to reduce complexity. How do perceivers internalize this complex informational structure? We present new evidence of visual learning that illustrates how observers learn how objects and events covary in the visual world. This information serves to guide visual processes such as object recognition and search. Our first experiment demonstrates that search and object recognition are facilitated by learned associations (covariation) between novel visual shapes. Our second experiment shows that regularities in dynamic visual environments can also be learned to guide search behavior. In both experiments, learning occurred incidentally and the memory representations were implicit. These experiments show how top-down visual knowledge, acquired through implicit learning, constrains what to expect and guides where to attend and look.
Article
The representations that are formed as we view real environments are still not well characterized. We studied the influence of task instructions on memory performance and fixation allocation in a real-world setting, in which participants were free to move around. Object memories were found to be task sensitive, as was the allocation of foveal vision. However, changes in the number of fixations directed at objects could not fully explain the changes in object memory performance that were found between task instruction conditions. Our data suggest that the manner in which information is extracted and retained from fixations varies with the instructions given to participants, with strategic prioritization of information retention from fixations made to task-relevant objects and strategic deprioritization of information retention from fixations directed to task-irrelevant objects.
Article
It seems intuitive to think that previous exposure or interaction with an environment should make it easier to search through it and, no doubt, this is true in many real-world situations. However, in a recent study, we demonstrated that previous exposure to a scene does not necessarily speed search within that scene. For instance, when observers performed as many as 15 searches for different objects in the same, unchanging scene, the speed of search did not decrease much over the course of these multiple searches (Võ & Wolfe, 2012). Only when observers were asked to search for the same object again did search become considerably faster. We argued that our naturalistic scenes provided such strong "semantic" guidance (e.g., knowing that a faucet is usually located near a sink) that guidance by incidental episodic memory (having seen that faucet previously) was rendered less useful. Here, we directly manipulated the availability of semantic information provided by a scene. By monitoring observers' eye movements, we found a tight coupling of semantic and episodic memory guidance: Decreasing the availability of semantic information increases the use of episodic memory to guide search. These findings have broad implications regarding the use of memory during search in general and particularly during search in naturalistic scenes.
Article
Attention is strongly influenced by both external stimuli and internal goals. However, this useful dichotomy does not readily capture the ubiquitous and often automatic contribution of past experience stored in memory. We review recent evidence about how multiple memory systems control attention, consider how such interactions are manifested in the brain, and highlight how this framework for 'memory-guided attention' might help systematize previous findings and guide future research.
Article
Prior research into the impact of encoding tasks on visual memory (Castelhano & Henderson, 2005) indicated that incidental and intentional encoding tasks led to similar memory performance. The current study investigated whether different encoding tasks impacted visual memories equally for all types of objects in a conjunction search (e.g., targets, colour distractors, object category distractors, or distractors unrelated to the target). In sequences of pictures, participants searched for prespecified targets (e.g., green apple; Experiment 1), memorized all objects (Experiment 2), searched for specified targets while memorizing all objects (Experiment 3), searched for postidentified targets (Experiment 4), or memorized all objects with one object prespecified (Experiment 5). Encoding task significantly improved visual memory for targets and led to worse memory for unrelated distractors, but did not influence visual memory of distractors that were related to the target's colour or object category. The differential influence of encoding task indicates that the relative importance of the object both positively and negatively influences the memory retained.
Article
Research on contextual cueing has demonstrated that with simple arrays of letters and shapes, search for a target increases in efficiency as associations between a search target and its surrounding visual context are learned. We investigated whether the visual context afforded by repeated exposure to real-world scenes can also guide attention when the relationship between the scene and a target position is arbitrary. Observers searched for and identified a target letter embedded in photographs of real-world scenes. Although search time within novel scenes was consistent across trials, search time within repeated scenes decreased across repetitions. Unlike previous demonstrations of contextual cueing, however, memory for scene-target covariation was explicit. In subsequent memory tests, observers recognized repeated contexts more often than those that were presented once and displayed superior recall of target position within the repeated scenes. In addition, repetition of inverted scenes, which made the scene more difficult to identify, produced a markedly reduced rate of learning, suggesting semantic information concerning object and scene identity are used to guide attention.
Article
This paper provides an introduction to mixed-effects models for the analysis of repeated measurement data with subjects and items as crossed random effects. A worked-out example of how to use recent software for mixed-effects modeling is provided. Simulation studies illustrate the advantages offered by mixed-effects analyses compared to traditional analyses based on quasi-F tests, by-subjects analyses, combined by-subjects and by-items analyses, and random regression. Applications and possibilities across a range of domains of inquiry are discussed.
Article
Human perception is highly flexible and adaptive. Selective processing is tuned dynamically according to current task goals and expectations to optimize behavior. Arguably, the major source of our expectations about events yet to unfold is our past experience; however, the ability of long-term memories to bias early perceptual analysis has remained untested. We used a noninvasive method with high temporal resolution to record neural activity while human participants detected visual targets that appeared at remembered versus novel locations within naturalistic visual scenes. Upon viewing a familiar scene, spatial memories changed oscillatory brain activity in anticipation of the target location. Memory also enhanced neural activity during early stages of visual analysis of the target and improved behavioral performance. Both measures correlated with subsequent target-detection performance. We therefore demonstrated that memory can directly enhance perceptual functions in the human brain.
Article
How do people distribute their visual attention in the natural environment? We and our colleagues have usually addressed this question by showing pictures, photographs or videos of natural scenes under controlled conditions and recording participants' eye movements as they view them. In the present study, we investigated whether people distribute their gaze in the same way when they are immersed and moving in the world compared to when they view video clips taken from the perspective of a walker. Participants wore a mobile eye tracker while walking to buy a coffee, a trip that required a short walk outdoors through the university campus. They subsequently watched first-person videos of the walk in the lab. Our results focused on where people directed their eyes and their head, what objects were gazed at and when attention-grabbing items were selected. Eye movements were more centralised in the real world, and locations around the horizon were selected with head movements. Other pedestrians, the path, and objects in the distance were looked at often in both the lab and the real world. However, there were some subtle differences in how and when these items were selected. For example, pedestrians close to the walker were fixated more often when viewed on video than in the real world. These results provide a crucial test of the relationship between real behaviour and eye movements measured in the lab.