Christopher G. Prince, George J. Hollich, Nathan A. Helder, Eric J. Mislivec, Anoop Reddy, Sampanna Salunke, Naveed Memon. 09/2002.
Article: Are you synching what I'm synching? Modeling infants' real-time detection of audiovisual contingencies between face and voice
ABSTRACT: Audio-visual synchrony is one of the earliest and most salient properties to which infants are sensitive [1]. Furthermore, detection of contingent relations within and across modalities is likely a critical starting point for autonomous mental development [2]. While there are numerous ecological examples of the need for contingency detection, one of the strongest is connecting face and voice. Dodd [3] demonstrated that infants look longer at a face that is synchronized with speech than at one that is asynchronous with speech (see also Pickens [4]). The goal of the research reported here is to explicitly contrast detailed empirical data capturing infants' real-time detection of speech/face synchrony with a formal model of audio-visual synchrony detection. The empirical data come from the Purdue University Infant Laboratory. Infants aged 4, 8, and 12 months were tested in a cross-sectional design using the split-screen preferential looking paradigm. In the procedure, two female faces (talking in infant-directed speech) were presented side by side on a large video screen, with the audio alternately matching one of the faces. By following the developmental trajectory of the preference for the synchronous face and by examining reaction times when the synchrony switches between faces, we gain a better understanding of the temporal resolution of infants' sensitivity to synchrony at different ages. We also obtain frame-by-frame coding of infant looking preferences that is directly comparable with the output of the formal model. In the model, we use an algorithm that directly computes moment-by-moment audio-visual synchrony relations between low-level audio-visual features (e.g., RMS audio and grayscale pixels) based on Gaussian mutual information across a time window of audio-visual information [5]. While the ability of the model to discover and localize sources of synchrony is still in its infancy, it already shows strikingly similar overall and moment-by-moment performance to the data from the infants. This suggests that infants and the model may be tapping similar aspects of the audio-visual contingencies in the video. It is our hope that the model will ultimately capture even more detailed aspects of infants' behaviour and scale to more general models of infant attention and autonomous development. Following this motivation, we are extending the model to use audio-visual synchrony to train within-visual-modality categorization and to bootstrap aspects of facial recognition.
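The core computation the abstract describes, per-pixel Gaussian mutual information between an RMS audio track and grayscale pixel time series, can be sketched as follows. This is a minimal illustration of the general technique, not the published model; the function name, the window handling (one fixed window), and the numerical-stability constant are assumptions of this sketch.

```python
import numpy as np

def gaussian_mi_synchrony(audio_rms, pixels):
    """Per-pixel audio-visual synchrony as Gaussian mutual information.

    Under a joint-Gaussian assumption, the MI between two scalar series
    reduces to -0.5 * log(1 - rho^2), where rho is their correlation.

    audio_rms : (T,) array, one RMS value per video frame
    pixels    : (T, H, W) array of grayscale frames
    Returns an (H, W) map of MI estimates in nats.
    """
    T = audio_rms.shape[0]
    a = audio_rms - audio_rms.mean()          # center the audio track
    p = pixels - pixels.mean(axis=0)          # center each pixel series
    cov = np.einsum('t,thw->hw', a, p) / T    # audio-pixel covariance map
    rho2 = cov**2 / (a.var() * p.var(axis=0) + 1e-12)  # squared correlation
    rho2 = np.clip(rho2, 0.0, 1.0 - 1e-12)    # keep log argument positive
    return -0.5 * np.log(1.0 - rho2)          # Gaussian MI in nats
```

Pixels whose intensity covaries with the audio envelope (e.g., a moving mouth) receive high MI, so the resulting map localizes the synchronous source in the frame.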
Christopher G. Prince, George J. Hollich, Nathan A. Helder, Eric J. Mislivec, Anoop Reddy, Sampanna Salunke, Naveed Memon
ABSTRACT: Synchrony detection between different sensory and/or motor channels appears critically important for young infant learning and cognitive development. For example, empirical studies demonstrate that audio-visual synchrony aids in language acquisition. In this paper we compare these infant studies with a model of synchrony detection based on the Hershey and Movellan (2000) algorithm, augmented with methods for quantitative synchrony estimation. Four infant-model comparisons are presented, using audio-visual stimuli of increasing complexity. While infants and the model showed learning or discrimination with each type of stimulus used, the model was most successful with stimuli comprised of one audio and one visual source, and also with two audio sources and a dynamic-face visual motion source. More difficult for the model were stimulus conditions with two motion sources, and more abstract visual dynamics (an oscilloscope instead of a face). Future research should model the developmental pathway of synchrony detection. Normal audio-visual synchrony detection in infants may be experience-dependent (e.g., Bergeson et al., 2004).
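The "quantitative synchrony estimation" used for infant-model comparison can be illustrated by reducing the per-region mutual information to a scalar and letting the model "prefer" whichever of two candidate regions carries more synchrony, analogous to a looking preference in the split-screen paradigm. This is a hypothetical stand-in for the augmented algorithm, under a joint-Gaussian assumption; all names here are illustrative.

```python
import numpy as np

def region_synchrony(audio_rms, pixels):
    """Scalar synchrony score for a pixel region: mean per-pixel
    Gaussian MI (-0.5 * log(1 - rho^2)) with the audio RMS track."""
    a = audio_rms - audio_rms.mean()
    p = pixels - pixels.mean(axis=0)
    cov = np.einsum('t,thw->hw', a, p) / len(a)
    rho2 = cov**2 / (a.var() * p.var(axis=0) + 1e-12)
    rho2 = np.clip(rho2, 0.0, 1.0 - 1e-12)
    return float(np.mean(-0.5 * np.log(1.0 - rho2)))

def preferred_side(audio_rms, left_pixels, right_pixels):
    """Pick the screen side with the higher synchrony score: a crude
    analogue of the model's 'looking preference' between two faces."""
    left = region_synchrony(audio_rms, left_pixels)
    right = region_synchrony(audio_rms, right_pixels)
    return 'left' if left >= right else 'right'
```

Averaging the MI map gives one number per region per time window, which is the kind of quantity that can be compared directly against frame-by-frame infant looking data.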
ABSTRACT: We propose ongoing emergence as a core concept in epigenetic robotics. Ongoing emergence refers to the continuous development and integration of new skills and is exhibited when six criteria are satisfied: (1) continuous skill acquisition, (2) incorporation of new skills with existing skills, (3) autonomous development of values and goals, (4) bootstrapping of initial skills, (5) stability of skills, and (6) reproducibility. In this paper we: (a) provide a conceptual synthesis of ongoing emergence based on previous theorizing, (b) review current research in epigenetic robotics in light of ongoing emergence, (c) provide prototypical examples of ongoing emergence from infant development, and (d) outline computational issues relevant to creating robots exhibiting ongoing emergence.