Fig. 1
The central role of selective attention in perception, action, learning, and memory in two interrelated, concurrent attention-system feedback loops: from attention to perception to learning to memory (red arrows) and from attention to perception to action (light blue arrows). The arrows illustrate the primary direction of the flow of information, but each component process (and each system) is involved in continuous, bidirectional feedback loops with the other components (and systems). Stimulation available for exploration is generated through action/exploratory activities (e.g., eye movements, reaching, posture changes), which in turn produce more stimulation for exploration in continuous cycles. Selective attention to this stream of stimulation provides the basis for what is perceived, and thus what can be learned and remembered, and this affects what is attended to next and in subsequent encounters with similar forms of stimulation. Reprinted from “Intermodal Perception and Selective Attention to Intersensory Redundancy: Implications for Typical Social Development and Autism” (p. 123), by L. E. Bahrick, in G. Bremner and T. D. Wachs (Eds.), Blackwell Handbook of Infant Development (2nd ed., pp. 120–166), 2010, Oxford, England: Wiley/Blackwell. Copyright 2010 by Wiley/Blackwell. Reprinted with permission. 


Source publication
Article
Selective attention is the gateway to perceptual processing, learning, and memory, and is a skill honed through extensive experience. However, little research has focused on how selective attention develops. Here we synthesize established and new findings assessing the central role of redundancy across the senses in guiding and constraining this pr...

Context in source publication

Context
... environment provides a flux of changing, concurrent stimulation to all our senses, far more than can be attended to at any given moment in time. Consequently, we must selectively attend to some aspects of objects and events while ignoring others. Adults are highly skilled at directing selective attention to information that is relevant to their needs, goals, and interests while ignoring a vast array of irrelevant stimulation. For example, we can easily pick out a friend in a crowd, follow the flow of action in a ball game, and attend to the face and voice of a single speaker in the context of competing conversations. These attention skills, however, must be learned and honed through experience and practice. Much of this learning takes place in early development. Infants quickly learn to coordinate their patterns of looking and listening to determine which sights and sounds belong together and which do not. They learn to parse the visual array into coherent objects and speech into meaningful words by attending to invariant patterns across variation in input. Such selective attention is widely recognized as the gateway to successful information pickup and processing (Neisser, 1976). An obvious but important insight is that selective attention to stimulation generated from exploratory activity provides the basis for what is perceived, learned, and remembered. In turn, what is perceived, learned, and remembered influences what is attended to in subsequent bouts of exploration, in continuous cycles from attention, to perception, to learning, to memory, to attention, and so on. Figure 1 illustrates this dynamic system of influences and the often overlooked but fundamental role of selective attention in perception, learning, and memory. Moreover, action is tightly coupled with these processes, providing new stimulation for attention, perception, learning, and memory across continuous bidirectional feedback loops (Fig. 1; see also Adolph & Berger, 2005; E. J. Gibson & Pick, 2000). This system of dynamic, interactive influences evolves over time, with concurrent changes in neurodevelopment that go hand in hand with changes in perception and action. Simply put, we create our effective environment (Schneirla, 1966) by what we attend to. Infants face a particularly daunting challenge: They must learn to attend selectively to the vast array of changing multimodal stimulation with limited attentional resources and limited experience with objects and events to guide them. “Selective attention” here refers to a ...

Similar publications

Article
Effective leadership within early childhood settings is aligned with the perceived successful implementation of high quality care and education programmes (Thornton, Tamati, Clarkin-Philips, Aitken & Wansbrough, 2009). With growing attention on the role early childhood education (ECE) plays in preparing children to be successful in their lives, it...

Citations

... Nor has the role of SES in this potential relation been assessed. We expect to find links between intersensory processing and working memory, given that both rely on attention control (e.g., Bahrick & Lickliter, 2014; Buss et al., 2018; for a review, see Soto-Faraco et al., 2019). For example, intersensory processing and working memory both require selective attention: selectively focusing on target information at the expense of other information. ...
... Selective attention provides the basis for what is perceived, learned, and remembered. In turn, what is perceived, learned, and remembered influences what is selectively attended to at later points in time (Bahrick & Lickliter, 2014). Intersensory redundancy (the synchronous co-occurrence of stimulation across two or more senses) is provided by most naturalistic events and is highly salient to infants. ...
... Why might intersensory processing predict individual differences in working memory? Intersensory processing of sights and sounds requires selectively attending to properties of events that are common across visual and auditory stimulation, such as face-voice or object-sound synchrony, while at the same time ignoring irrelevant stimulation (see Bahrick et al., 1981; Bahrick & Lickliter, 2014; Stein, 2012; Talsma et al., 2010). Further, psychologists have long appreciated the role of selective attention in working memory (e.g., Baddeley & Hitch, 1974). ...
Article
Socioeconomic status (SES) is a well-established predictor of individual differences in childhood language and cognitive functioning, including executive functions such as working memory. In infancy, intersensory processing (selectively attending to properties of events that are redundantly specified across the senses at the expense of non-redundant, irrelevant properties) also predicts language development. Our recent research demonstrates that individual differences in intersensory processing in infancy predict a variety of language outcomes in childhood, even after controlling for SES. However, relations among intersensory processing and cognitive outcomes such as working memory have not yet been investigated. Thus, the present study examines relations between intersensory processing in infancy and working memory in early childhood, and the role of SES in this relation. Children (N = 101) received the Multisensory Attention Assessment Protocol at 12 months to assess intersensory processing (face-voice and object-sound matching) and received the WPPSI at 36 months to assess working memory. SES was indexed by maternal education, paternal education, and income. A variety of novel findings emerged: (1) individual differences in intersensory processing at 12 months predicted working memory at 36 months of age, even after controlling for SES; (2) individual differences in SES predicted intersensory processing at 12 months of age; (3) the well-established relation between SES and working memory was partially mediated by intersensory processing. Children from higher-SES families have better intersensory processing skills at 12 months, and this combination of factors predicts greater working memory two years later, at 36 months. Together, these findings reveal the role of intersensory processing in cognitive functioning.
... These novel findings indicate that at 6 months, given equal amounts of parent language input (quantity and quality) and comparable SES, the accuracy of intersensory processing of faces and voices can predict which children will benefit most from language learning opportunities provided by parent language input. Infants who are more efficient at detecting face-voice synchrony likely have greater attentional resources available, enabling them to further process audiovisual speech events (Bahrick & Lickliter, 2014), and attend to other behaviors that take place in the context of language learning opportunities. This may include following eye-gaze direction and detecting gesture, facial and vocal affect, and prosody signaling communicative intent, all skills that are built on intersensory processing (Bahrick & Lickliter, 2012; Gogate & Hollich, 2010). ...
Article
Intersensory processing of social events (e.g., matching sights and sounds of audiovisual speech) is a critical foundation for language development. Two recently developed protocols, the Multisensory Attention Assessment Protocol (MAAP) and the Intersensory Processing Efficiency Protocol (IPEP), assess individual differences in intersensory processing at a sufficiently fine-grained level for predicting developmental outcomes. Recent research using the MAAP demonstrates that 12-month intersensory processing of face-voice synchrony predicts language outcomes at 18 and 24 months, holding traditional predictors (parent language input, SES) constant. Here, we build on these findings by testing younger infants using the IPEP, a more comprehensive, fine-grained index of intersensory processing. Using a longitudinal sample of 103 infants, we tested whether intersensory processing (speed, accuracy) of faces and voices at 3 and 6 months predicts language outcomes at 12, 18, 24, and 36 months, holding traditional predictors constant. Results demonstrate that intersensory processing of faces and voices at 6 months (but not 3 months) accounted for significant unique variance in language outcomes at 18, 24, and 36 months, beyond that of traditional predictors. Findings highlight the importance of intersensory processing of face-voice synchrony as a foundation for language development as early as 6 months and reveal that individual differences assessed by the IPEP predict language outcomes even 2.5 years later.
... For example, viewing a moving mouth and hearing speech involves information from the audio and visual modalities that can be linked based on their temporal synchrony to create a combined auditory and visual signal, i.e., a speaking mouth producing sounds, that benefits word recognition (Hollich et al., 2005). Infants may use this synchronous information not only to improve word processing (the mouth-word relationship) but also to facilitate word-to-world mappings (using visual cues such as the speaker's gaze to a named object, the eyes-word relationship) (Gogate and Hollich, 2010; Bahrick and Lickliter, 2014). Therefore, how well infants can connect auditory information to synchronous visual information that they receive from faces may relate to their vocabulary outcomes. ...
... The theories posited that gaze to faces could improve awareness of communicative intent, boost auditory word processing, and guide word-object pairings, all of which are instrumental for the learning of words. Both the Social Pragmatic Account and the Intersensory Redundancy hypothesis argue that infants need to be able to flexibly utilize the multiple cues that they receive from social partners to learn words (Tomasello, 2000; Bahrick and Lickliter, 2014). But when in development do infants start utilizing these cues? ...
Article
Infants acquire their first words through interactions with social partners. In the first year of life, infants receive a high frequency of visual and auditory input from faces, making faces a potentially strong social cue for facilitating word-to-world mappings. In this position paper, we review how and when infant gaze to faces is likely to support their subsequent vocabulary outcomes. We assess the relevance of infant gaze to faces in three domains: infant gaze to different features within a face (that is, eyes and mouth); then to faces (compared to objects); and finally to more socially relevant types of faces. We argue that infant gaze to faces could scaffold vocabulary construction, but its relevance may be affected by the developmental level of the infant and the type of task with which they are presented. Gaze to faces is relevant to vocabulary: gaze to the eyes could inform the infant about the communicative nature of the situation or about the labeled object, while gaze to the mouth could improve word processing, all of which are key cues for highlighting word-to-world pairings. We also identify gaps in the literature regarding how infants' gaze to faces (versus objects) or to different types of faces relates to vocabulary outcomes. An important direction for future research will be to fill these gaps to better understand the social factors that influence infant vocabulary outcomes.
... Concerning the longer and more frequent fixations on the teammates' faces and the longer duration on the teammates' upper body under the high game performance pressure condition, it seems that the face in particular takes on a special role within the social interaction between rallies. Faces convey a great deal of emotional information about the other person (Bahrick and Lickliter, 2014; Caulfield et al., 2016; Crivelli et al., 2016). Recognizing the emotions of the teammate is important in order to provide support if the teammate needs it. ...
Article
Previous research has indicated that analyses of social interactions and gaze behavior in a group setting could be essential tools in accomplishing group objectives. However, only a few studies have examined the impact of social interactions on group dynamics in team sports and their influence on team performance. This study aimed to investigate the effects of game performance pressure on gaze behavior within social interactions between beach volleyball players during game-like situations. Eighteen expert beach volleyball players completed a high and a low game performance pressure condition while wearing an eye tracking system. The results indicate that higher game performance pressure leads to more and longer fixations on teammates' faces. A higher need for communication without misunderstandings could explain this adaptation. Longer and more frequent fixations on the face could improve the pickup of verbal and non-verbal information from the teammate's face. Further, players showed inter-individual strategies for coping with high game performance pressure in their gaze behavior, for example, increasing the number of fixations and the fixation duration on the teammate's face. This study thereby opens a new avenue for research on social interaction and how it is influenced in and through sport.
... As vision continues to improve over the first months of life, eye movements become more volitional, with increased fixations to both semantically salient features such as the eyes or mouth [12][13][14], and perceptually salient features such as high contrast object boundaries [15,16]. The development of attention also influences saccade and fixation dynamics, and previous research suggests that typical development proceeds from slow visual orienting and sparse visual scanning, to fast, efficient visual orienting with increased visual scanning [8][17][18][19]. ...
Article
Though previous work has examined infant attention across a variety of tasks, less is known about the individual saccades and fixations that make up each bout of attention, and how individual differences in saccade and fixation patterns (i.e., scanning efficiency) change with development, scene content, and perceptual load. To address this, infants between the ages of 5 and 11 months were assessed longitudinally (Experiment 1) and cross-sectionally (Experiment 2). Scanning efficiency (fixation duration, saccade rate, saccade amplitude, and saccade velocity) was assessed while infants viewed six quasi-naturalistic scenes that varied in content (social or non-social) and scene complexity (3, 6 or 9 people/objects). Results from Experiment 1 revealed moderate to strong stability of individual differences in saccade rate, mean fixation duration, and saccade amplitude, and both experiments revealed that 5-month-old infants make larger, faster, and more frequent saccades than older infants. Scanning efficiency was assessed as the relation between fixation duration and saccade amplitude, and results revealed that 11-month-olds have high scanning efficiency across all scenes. However, scanning efficiency also varied with scene content, such that all infants showed higher scanning efficiency when viewing social scenes and more complex scenes. These results suggest both developmental and stimulus-dependent changes in scanning efficiency, and further highlight the use of saccade and fixation metrics as a sensitive indicator of cognitive processing.
... Many such developments are shaped in critical ways by infants' experiences with objects (Johnson, Amso, & Slemmer, 2003; Shinskey & Munakata, 2005). Of particular relevance to the current work is evidence that experience with audio-visual synchrony influences how infants encode information available via vision alone (Bahrick & Lickliter, 2014; Flom & Bahrick, 2007). These findings led us to hypothesize that caregiver-generated synchrony can impact the nature of the object representations that infants form when hearing words, irrespective of whether a mapping between them is learned. ...
... We based this hypothesis on evidence that the presence of redundant information across sensory modalities, such as in audio-visual synchrony, has a strong influence on perceptual processing, especially in infants whose sensory systems are still developing (Bahrick & Lickliter, 2014). The presence of synchrony strongly attracts infants' attention (Bahrick, Walker, & Neisser, 1981), and foregrounds information that is redundant across modalities. ...
... Synchrony tends to attract infants' attention (Bahrick & Lickliter, 2014; Bahrick et al., 1981), and thus infants may have been differentially attentive during the Familiarization trials as a function of their familiarization condition. If infants in the Synchronous condition were more attentive than infants in the other groups, greater exposure to the familiarized object, rather than word-object synchrony per se, might drive differences in their object recognition. ...
Article
Caregivers often shake or loom an object towards an infant as they say its name. It is well-established that this synchrony helps infants form word-object associations. We hypothesized that word-object synchrony has an even more fundamental effect on infant development by influencing object perception. Here we tested whether word-object synchrony influences infants’ visual representations of objects. Infants were familiarized to words presented in or out of synchrony with an object’s motion, or with a static object. They were then tested in silence on their ability to discriminate the familiarized object from one that differed in shape, in color, or both shape and color. Although there were no global differences in performance across conditions, infants exposed to synchrony showed the clearest evidence of recognizing the familiarized object, and appeared to rely on shape in doing so. Thus, word-object synchrony may influence pre-lexical development by supporting the formation of object representations.
... In summary, the time spent looking at the mouth is generally associated with an infant's early expressive language skills, even after the end of the first year, sustaining the ongoing learning process, and this is in accord with all of the studies. Infant preference for mouth over eyes when presented with non-native speech could mean that babies recognise a novel linguistic pattern and deploy their attention to the mouth to maximally profit from the redundancy of the articulatory movements (Bahrick & Lickliter, 2014). Looking towards the mouth is not linked to receptive skills in this research, since the context in which these abilities were tested was limited and does not reflect real-life situations. ...
Article
Although the pattern of visual attention towards the region of the eyes is now well-established for infants at an early stage of development, less is known about the extent to which the mouth attracts an infant’s attention. Even less is known about the extent to which these specific looking behaviours towards different regions of the talking face (i.e., the eyes or the mouth) may impact on or account for aspects of language development. The aim of the present systematic review is to synthesize and analyse (i) which factors might determine different looking patterns in infants during audio-visual tasks using dynamic faces and (ii) how these patterns have been studied in relation to aspects of the baby’s development. Four bibliographic databases were explored, and the records were selected following specified inclusion criteria. The search led to the identification of 19 papers (October 2021). Some studies have tried to clarify the role played by audio-visual support in speech perception and early production based on directly related factors such as the age or language background of the participants, while others have tested the child’s competence in terms of linguistic or social skills. Several hypotheses have been advanced to explain the selective attention phenomenon. The results of the selected studies have led to different lines of interpretation. Some suggestions for future research are outlined.
... Adults' speech perception is known to benefit from the complementary information of both auditory and visual cues (Ross, Saint-Amour, Leavitt, Javitt, & Foxe, 2007). Captivating multimodal stimuli might allow for intersensory redundancy (Bahrick & Lickliter, 2014); for example, caregivers' facial cues in language play reinforce the hierarchical structure (e.g., Longhi, 2009), and contingent eye-gaze between infant and caregiver during language play has been suggested to allow for increased encoding of the stimulus in infant listeners (Leong, Byrne, et al., 2017). ...
... Perceiving and encoding multimodal information leads to stronger memories compared to information that was encoded by only one modality (Jahn and Engelkamp, 2003; Feyereisen, 2009). This phenomenon has also been shown in the context of the intersensory-redundancy hypothesis (Bahrick and Lickliter, 2014). This hypothesis states that if the same information is perceived by more than one modality (e.g., seeing a speaker's mouth while hearing the sound of his/her voice), amodal information like speech rhythm can be perceived more easily, and is more likely to be encoded (intersensory facilitation). ...
... This hypothesis states that if the same information is perceived by more than one modality (e.g., seeing a speaker's mouth while hearing the sound of his/her voice), amodal information like speech rhythm can be perceived more easily, and is more likely to be encoded (intersensory facilitation). Bahrick and Lickliter (2014) emphasize the role that intersensory redundancy plays in the development of selective attention in infancy and early childhood. ...
... In this context, previously experienced sensory information can work as a cue during retrieval, by aiding reinstatement of the same mental state as during encoding (McClelland and Rumelhart, 1985; Dijkstra and Zwaan, 2014). In addition, enhanced memory encoding can be expected when information is perceived by more than one modality during encoding (Bahrick and Lickliter, 2014). However, it is possible that the perception of one's own motor information interfered with the encoding and recall of the spatial positions. ...
Article
Studies examining the effect of embodied cognition have shown that linking one's body movements to a cognitive task can enhance performance. The current study investigated whether concurrent walking while encoding or recalling spatial information improves working memory performance, and whether 10-year-old children, young adults, or older adults (mean age = 72 years) are affected differently by embodiment. The goal of the Spatial Memory Task was to encode and recall sequences of increasing length by reproducing positions of target fields in the correct order. The nine targets were positioned in a random configuration on a large square carpet (2.5 m × 2.5 m). During encoding and recall, participants either did not move, or they walked into the target fields. In a within-subjects design, all possible combinations of encoding and recall conditions were tested in counterbalanced order. Contrary to our predictions, moving impaired encoding in particular, but also recall. These negative effects were present in all age groups, but older adults' memory was hampered even more strongly by walking during encoding and recall. Our results indicate that embodiment may not help people to memorize spatial information, but can create a dual-task situation instead.
... Most investigators agree that the ability to control attention improves with age. Reviews have focused on further questions: distinguishing the relatively late development of executive attention from earlier-developing, stimulus-driven forms of attention (Rueda, 2013); development of attention as part of a dynamic system rather than as a static gatekeeper (Ristic & Enns, 2015); similarities and differences between auditory and visual attention (Godwin et al., 2019); and the importance of both interference and redundancy between modalities (Bahrick & Lickliter, 2014). There has been recent research on the developmental trajectory of sustained attention (Betts et al., 2006) and of the ability to allocate and share attention (Irwin-Chase & Burns, 2000), as well as on the relation of practical and social functioning to working memory loads and attention (Doebel, 2020; Hilton et al., 2020). ...
Article
Accumulating evidence suggests that distinct aspects of successful navigation—path integration, spatial-knowledge acquisition, and navigation strategies—change with advanced age. Yet few studies have established whether navigation deficits emerge early in the aging process (prior to age 65) or whether early age-related deficits vary by sex. Here, we probed healthy young adults (ages 18–28) and midlife adults (ages 43–61) on three essential aspects of navigation. We found, first, that path-integration ability shows negligible effects of sex or age. Second, robust sex differences in spatial-knowledge acquisition are observed not only in young adulthood but also, although with diminished effect, at midlife. Third, by midlife, men and women show decreased ability to acquire spatial knowledge and increased reliance on taking habitual paths. Together, our findings indicate that age-related changes in navigation ability and strategy are evident as early as midlife and that path-integration ability is spared, to some extent, in the transition from youth to middle age.