
Remembering Cinematic Sequences: Boundaries Disrupt Memory in Fast-Paced Visual Events

American Psychological Association
Psychology of Aesthetics, Creativity, and the Arts
Authors: Ayşe Candan Şimşek and Elif Kurum
Leibniz-Institut für Wissensmedien (IWM)

Abstract

We engage with at least one type of visual media daily, and cognitive psychologists have taken a growing interest in how cinematic events are perceived. The current study investigated how event boundaries and pace affect recognition memory for movie scenes. Participants viewed brief clips composed of six shots that either did or did not include an event boundary and whose average shot length was either long or short. Slower-paced scenes were remembered better than faster-paced scenes. More interestingly, event boundary and pace interacted significantly: for fast-paced scenes, clips containing an event boundary yielded lower accuracy and longer reaction times than clips without one. Analysis of the serial position of individual shots further indicated that, only in fast-paced scenes, viewers remembered information from the new scene better than from the old scene. Event segmentation theory holds that we maintain an active model of the current event in working memory, which is updated when a significant change violates predictions. Our experiment adds to event segmentation theory by suggesting that the effect of event boundaries is conditional on exposure duration: when information is consolidated with sufficient exposure, experiencing an event boundary does not hinder memory. The current study provides new evidence that, in complex visual scenes, memory operates economically, relying on the current model when resources are limited.
Ayşe Candan Şimşek and Elif Kurum
Online First Publication, April 22, 2024. https://dx.doi.org/10.1037/aca0000661
CITATION
Candan Şimşek, A., & Kurum, E. (2024). Remembering cinematic sequences: Boundaries disrupt memory
in fast-paced visual events. Psychology of Aesthetics, Creativity, and the Arts. Advance online publication.
https://dx.doi.org/10.1037/aca0000661
Article
How do people perceive routine events, such as making a bed, as these events unfold in time? Research on knowledge structures suggests that people conceive of events as goal-directed partonomic hierarchies. Here, participants segmented videos of events into coarse and fine units on separate viewings; some described the activity of each unit as well. Both segmentation and descriptions support the hierarchical bias hypothesis in event perception: Observers spontaneously encoded the events in terms of partonomic hierarchies. Hierarchical organization was strengthened by simultaneous description and, to a weaker extent, by familiarity. Describing from memory rather than perception yielded fewer units but did not alter the qualitative nature of the descriptions. Although the descriptions were telegraphic and without communicative intent, their hierarchical structure was evident to naive readers. The data suggest that cognitive schemata mediate between perceptual and functional information about events and indicate that these knowledge structures may be organized around object/action units.
Article
In this study, we investigated the nature of long-term memory representations for naturalistic audio-visual scenes. Whereas previous research has shown that audio-visual scenes are recognized more accurately than their unimodal counterparts, it remains unclear whether this benefit stems from audio-visually integrated long-term memory representations or a summation of independent retrieval cues. We tested two predictions for audio-visually integrated memory representations. First, we used a modeling approach to test whether recognition performance for audio-visual scenes is more accurate than would be expected from independent retrieval cues. This analysis shows that audio-visual integration is not necessary to explain the benefit of audio-visual scenes relative to purely auditory or purely visual scenes. Second, we report a series of experiments investigating the occurrence of study-test congruency effects for unimodal and audio-visual scenes. Most importantly, visually encoded information was immune to additional auditory information presented during testing, whereas auditory encoded information was susceptible to additional visual information presented during testing. This renders a true integration of visual and auditory information in long-term memory representations unlikely. In sum, our results instead provide evidence for visual dominance in long-term memory. Whereas associative auditory information is capable of enhancing memory performance, the long-term memory representations appear to be primarily visual.
Book
Detection Theory: A User’s Guide is an introduction to one of the most important tools for analyzing data where choices must be made and performance is not perfect. In these cases, detection theory can transform judgments about subjective experiences, such as perceptions and memories, into quantitative data ready for analysis and modeling. For beginners, the first three chapters introduce measuring detection and discrimination, evaluating decision criteria, and the utility of receiver operating characteristics. Later chapters cover more advanced material and offer: complete tools for application, including flowcharts, tables, and software; student-friendly language; complete coverage of the content area, including both one-dimensional and multidimensional models; integrated treatment of threshold and nonparametric approaches; an organized, tutorial-level introduction to multidimensional detection theory; and popular discrimination paradigms presented as applications of multidimensional detection theory. This modern summary of signal detection theory is both a self-contained reference work for users and a readable text for graduate students and researchers learning the material in courses or on their own.
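The core sensitivity measure from detection theory, d′, can be sketched in a few lines: it is the difference between the z-transformed hit rate and false-alarm rate. The counts below are hypothetical, and the log-linear correction used to avoid infinite z-scores at rates of 0 or 1 is one common convention, not necessarily the one the book recommends.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' for a yes/no detection task.

    Applies a log-linear correction (add 0.5 to each count,
    1 to each total) so that rates of exactly 0 or 1 do not
    produce infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical recognition data: 40 hits / 10 misses on old items,
# 10 false alarms / 40 correct rejections on new items.
print(round(d_prime(40, 10, 10, 40), 2))  # ≈ 1.64
```

Chance performance (equal hit and false-alarm rates) gives d′ = 0; larger values indicate better discrimination between old and new items.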
Article
When people experience everyday activities, their comprehension can be shaped by expectations that derive from similar recent experiences, which can affect the encoding of a new experience into memory. When a new experience includes changes, such as a driving route being blocked by construction, this can lead to interference in subsequent memory. One potential mechanism of effective encoding of event changes is the retrieval of related features from previous events. Another such mechanism is the generation of a prediction error when a predicted feature is contradicted. In two experiments, we tested for effects of these two mechanisms on memory for changed features in movies of everyday activities. Participants viewed movies of an actor performing everyday activities across two fictitious days. Some event features changed across the days, and some features violated viewers' predictions. Retrieval of previous event features while viewing the second movie was associated with better subsequent memory, providing evidence for the retrieval mechanism. Contrary to our hypotheses, there was no support for the error mechanism: Prediction error was not associated with better memory when it was observed correlationally (Experiment 1) or directly manipulated (Experiment 2). These results support a key role for episodic retrieval in the encoding of new events. They also indicate boundary conditions on the role of prediction errors in driving new learning. Both findings have clear implications for theories of event memory.
Article
Memory is constructive, but that does not mean it is unreliable. When people remember the events of their lives they depend on knowledge, some of which is in the form of scripts or schemata. Schematic information encodes typical patterns in events, and for this reason schemata often contribute veridical features to memory reconstruction. This process can be thought of in Bayesian terms, as incorporating prior probabilities based on recurring patterns in experience. It also can be thought of in terms of statistical regression, such that information from knowledge is combined with information from episodic traces to reconstruct a best estimate of what happened.
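The Bayesian framing above can be illustrated with a toy reconstruction: a schematic prior (how often a feature occurs in events of this type) is combined with noisy episodic evidence to yield a best estimate of what happened. All numbers below are hypothetical values chosen for illustration.

```python
# Did the remembered event include feature F (e.g., "keys on the hook")?
# Schematic prior: F occurs in 80% of similar events.
prior = 0.8

# Episodic trace: likelihood of the retrieved evidence under each
# hypothesis (hypothetical values; the trace weakly favors not-F).
p_evidence_given_f = 0.4
p_evidence_given_not_f = 0.6

# Bayes' rule: combine knowledge (prior) with the episodic trace.
posterior = (p_evidence_given_f * prior) / (
    p_evidence_given_f * prior + p_evidence_given_not_f * (1 - prior)
)
print(round(posterior, 3))  # ≈ 0.727
```

Even though the trace slightly favors "no F", the strong schematic prior keeps the reconstructed estimate above 70%, which is the sense in which schemata often contribute veridical rather than distorting features.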
Article
Cognitive load theory was introduced in the 1980s as an instructional design theory based on several uncontroversial aspects of human cognitive architecture. Our knowledge of many of the characteristics of working memory, long-term memory and the relations between them had been well-established for many decades prior to the introduction of the theory. Curiously, this knowledge had had a limited impact on the field of instructional design, with most instructional design recommendations proceeding as though working memory and long-term memory did not exist. In contrast, cognitive load theory emphasised that all novel information is first processed by a capacity- and duration-limited working memory and then stored in an unlimited long-term memory for later use. Once information is stored in long-term memory, the capacity and duration limits of working memory disappear, transforming our ability to function. By the late 1990s, sufficient data had been collected using the theory to warrant an extended analysis, resulting in the publication of Sweller et al. (Educational Psychology Review, 10, 251–296, 1998). Extensive further theoretical and empirical work has been carried out since that time, and this paper attempts to summarise the last 20 years of cognitive load theory and to sketch directions for future research.
Article
Hollywood movies provide continuous audiovisual information. Yet the information conveyed by movies addresses different sensory systems. For a broad variety of media applications (such as multimedia learning environments), it is important to understand the underlying cognitive principles. This project addresses the interplay of auditory and visual information during movie perception. Because auditory information is known to change basic visual processes, it is possible that movie perception and comprehension depend on stimulus modality. Here, we report three experiments that studied how humans perceive and remember changes in visual and audiovisual movie clips. We observed basic processes of event perception (event segmentation, change detection, and memory) to be independent of stimulus modality. We thus conclude that event boundary perception is a general perceptual-cognitive mechanism and discuss these findings with respect to current cognitive psychological and media psychological theories.
Article
PsychoPy is an application for the creation of experiments in behavioral science (psychology, neuroscience, linguistics, etc.) with precise spatial control and timing of stimuli. It now provides a choice of interface; users can write scripts in Python if they choose, while those who prefer to construct experiments graphically can use the new Builder interface. Here we describe the features that have been added over the last 10 years of its development. The most notable addition has been that Builder interface, allowing users to create studies with minimal or no programming, while also allowing the insertion of Python code for maximal flexibility. We also present some of the other new features, including further stimulus options, asynchronous time-stamped hardware polling, and better support for open science and reproducibility. Tens of thousands of users now launch PsychoPy every month, and more than 90 people have contributed to the code. We discuss the current state of the project, as well as plans for the future.
Chapter
Social interaction requires social cognition—the ability to perceive, interpret, and explain the actions of others. This ability fundamentally relies on the concepts of intention and intentionality. For example, people distinguish sharply between intentional and unintentional behavior; identify the intentions underlying others' behavior; explain completed actions with reference to intentions, beliefs, and desires; and evaluate the social worth of actions using the concepts of intentionality and responsibility. Intentions and Intentionality highlights the roles these concepts play in social cognition. Taking an interdisciplinary approach, it offers cutting-edge work from researchers in cognitive, developmental, and social psychology and in philosophy, primatology, and law. It includes both conceptual and empirical contributions. Published under the Bradford Books imprint.
Article
Memory-guided predictions can improve event comprehension by guiding attention and the eyes to the location where an actor is about to perform an action. But when events change, viewers may experience predictive-looking errors and need to update their memories. In two experiments (Ns = 38 and 98), we examined the consequences of mnemonic predictive-looking errors for comprehending and remembering event changes. University students watched movies of everyday activities with actions that were repeated exactly and actions that were repeated with changed features—for example, an actor reached for a paper towel on one occasion and a dish towel on the next. Memory guidance led to predictive-looking errors that were associated with better memory for subsequently changed event features. These results indicate that retrieving recent event features can guide predictions during unfolding events and that error signals derived from mismatches between mnemonic predictions and actual events contribute to new learning.