Article

Brief Report: Arrested Development of Audiovisual Speech Perception in Autism Spectrum Disorders


Abstract

Atypical communicative abilities are a core marker of Autism Spectrum Disorders (ASD). A number of studies have shown that, in addition to auditory comprehension differences, individuals with autism frequently show atypical responses to audiovisual speech, suggesting a multisensory contribution to these communicative differences from their typically developing (TD) peers. To shed light on possible differences in the maturation of audiovisual speech integration, we tested younger (ages 6-12) and older (ages 13-18) children with and without ASD on a task indexing such multisensory integration. To do this, we used the McGurk effect, in which the pairing of incongruent auditory and visual speech tokens typically results in the perception of a fused percept distinct from the auditory and visual signals, indicative of active integration of the two channels conveying speech information. Whereas little difference was seen in audiovisual speech processing (i.e., reports of McGurk fusion) between the younger ASD and TD groups, there was a significant difference at the older ages. While TD controls exhibited an increased rate of fusion (i.e., integration) with age, children with ASD failed to show this increase. These data suggest arrested development of audiovisual speech integration in ASD. The results are discussed in light of the extant literature and necessary next steps in research.


... As described in the previous section, in our network, reducing multisensory exposures leads to better agreement with the ASD data than the other two perturbations (Figure 7), providing support for an attentional account of impaired MSI in ASD. We follow this up by testing how fewer multisensory exposures would impact the McGurk illusion compared to "typical development," and whether network performance would align with the finding that children with ASD are less vulnerable to this illusion (Taylor et al., 2010; Irwin et al., 2011; Bebko et al., 2014; Stevenson et al., 2014b). To this end, the same simulations as in Figure 8A are repeated at different training epochs using synapses trained under reduced multisensory experience, and results are shown in Figure 8B (simulations of the McGurk effect for the other two perturbed developments are presented in the Supplementary Material). ...
... A greater McGurk effect appears only at the end of the training period (correct auditory phoneme recognition less than 45%, vs. ∼25% in the basal condition). This aligns well with experimental findings in ASD children, who generally show reduced susceptibility to the McGurk effect compared with their TD counterparts (Taylor et al., 2010; Irwin et al., 2011; Bebko et al., 2014; Stevenson et al., 2014b). At earlier training epochs (10 years of simulated age), the model already shows a strong McGurk effect (percentage of correct phoneme detection as low as 33%), although this is not as strong as in its final configuration (simulations after 8,500 epochs of training, corresponding to 17 years). ...
... As an additional consequence of weaker connectivity between the visual and multisensory area following reduced multisensory experiences, and ensuing reduced cross-sensory connectivity, the model showed fewer McGurk illusions, a result that has been consistently reported in previous studies on autism (Smith and Bennetto, 2007; Mongillo et al., 2008; Taylor et al., 2010; Irwin et al., 2011; Bebko et al., 2014; Stevenson et al., 2014b) and that finds indirect support from fMRI data from Nath and Beauchamp (2012). These authors found that the level of activity in pSTG/S was correlated with the likelihood of the McGurk effect. ...
Article
Full-text available
Failure to appropriately develop multisensory integration (MSI) of audiovisual speech may affect a child's ability to attain optimal communication. Studies have shown protracted development of MSI into late-childhood and identified deficits in MSI in children with an autism spectrum disorder (ASD). Currently, the neural basis of acquisition of this ability is not well understood. Here, we developed a computational model informed by neurophysiology to analyze possible mechanisms underlying MSI maturation, and its delayed development in ASD. The model posits that strengthening of feedforward and cross-sensory connections, responsible for the alignment of auditory and visual speech sound representations in posterior superior temporal gyrus/sulcus, can explain behavioral data on the acquisition of MSI. This was simulated by a training phase during which the network was exposed to unisensory and multisensory stimuli, and projections were crafted by Hebbian rules of potentiation and depression. In its mature architecture, the network also reproduced the well-known multisensory McGurk speech effect. Deficits in audiovisual speech perception in ASD were well accounted for by fewer multisensory exposures, compatible with a lack of attention, but not by reduced synaptic connectivity or synaptic plasticity.
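The Hebbian training scheme described in this abstract can be caricatured in a few lines. The sketch below is a toy illustration of Hebbian alignment between a visual layer and an auditory-driven multisensory layer, not the published network: the layer size, learning rates, and the decay-based depression term are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20                                # units per area (illustrative)
W = rng.normal(0, 0.01, (N, N))       # cross-sensory weights: visual -> multisensory
W = np.clip(W, 0.0, 1.0)

def hebbian_step(W, pre, post, lr_pot=0.05, lr_dep=0.01, w_max=1.0):
    """One Hebbian update: potentiate co-active pairs, depress all weights slightly."""
    coactive = np.outer(post, pre)          # Hebbian product term
    W = W + lr_pot * coactive - lr_dep * W  # potentiation + decay-based depression
    return np.clip(W, 0.0, w_max)           # keep weights bounded

# "Multisensory exposure": the same speech token drives both channels at once.
for epoch in range(500):
    token = rng.integers(N)
    pre = np.zeros(N); pre[token] = 1.0     # visual representation (one-hot)
    post = np.zeros(N); post[token] = 1.0   # auditory-driven multisensory activity
    W = hebbian_step(W, pre, post)

# After training, aligned (diagonal) weights dominate off-diagonal ones.
print(np.mean(np.diag(W)) > np.mean(W - np.diag(np.diag(W))))
```

Reducing the number of multisensory exposures in such a scheme (fewer training epochs with paired input) leaves the cross-sensory weights weaker, which is the intuition behind the perturbation the authors simulate.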
... More specifically, individuals with higher AQ scores and increased difficulties in the ability to switch attention had a stronger tendency to report auditory-leading stimulus presentations as occurring simultaneously. One interpretation of this shift toward an ecologically less valid point is that individuals in the general population with higher levels of autistic traits prioritize auditory information over visual information; which is in line with the presumed over-reliance on the auditory modality observed in ASD 19,20,23 . Another explanation for this finding is that individuals with more ASD traits have a decreased ability to infer the probabilistic structure of sensory events. ...
... perceptual binding and imagination. Reduced audiovisual perceptual binding − characterized by reduced fused responses to incongruent McGurk stimuli − has been widely reported in ASD [18][19][20][21][22][23] . Studies on the relationship between autistic traits and susceptibility to the McGurk illusion in the general population have yielded inconsistent results. ...
... In the current study, individuals reporting a more limited (autistic-like) capacity to imagine reported fewer fused responses, but more auditory responses to the incongruent McGurk stimuli. This reduced perceptual binding behaviour is also found in clinical ASD populations [18][19][20][21][22][23] , and suggests that audiovisual speech perception in individuals with diminished (autistic-like) imagination abilities may be less affected by visual input, and more reliant on the auditory modality. Another explanation for the observed relationship between reduced susceptibility to the McGurk illusion and autistic-like imagination is that individuals with reduced imagination abilities may have a more literal perception of the world that is less affected by prior experiences, but more reliant on the sensory input 41 . ...
Article
Full-text available
Recent studies suggest that sub-clinical levels of autistic symptoms may be related to reduced processing of artificial audiovisual stimuli. It is unclear whether these findings extend to more natural stimuli such as audiovisual speech. The current study examined the relationship between autistic traits measured by the Autism-Spectrum Quotient and audiovisual speech processing in a large non-clinical population using a battery of experimental tasks assessing audiovisual perceptual binding, visual enhancement of speech embedded in noise and audiovisual temporal processing. Several associations were found between autistic traits and audiovisual speech processing. Increased autistic-like imagination was related to reduced perceptual binding measured by the McGurk illusion. Increased overall autistic symptomatology was associated with reduced visual enhancement of speech intelligibility in noise. Participants reporting increased levels of rigid and restricted behaviour were more likely to bind audiovisual speech stimuli over longer temporal intervals, while an increased tendency to focus on local aspects of sensory inputs was related to a more narrow temporal binding window. These findings demonstrate that increased levels of autistic traits may be related to alterations in audiovisual speech processing, and are consistent with the notion of a spectrum of autistic traits that extends to the general population.
... Researchers have argued that the effects of lip-motion on speech perception follow a consistent trajectory in normal-hearing speakers from early development (Rosenblum et al. 1997; Wallace et al. 2006). Research examining differences in susceptibility to the McGurk effect among older versus younger listeners, in addition to listeners from different clinical populations, has indicated that factors such as aging, perceptual experience, and higher-level cognitive factors influence how visual speech cues affect auditory speech perception (e.g., Sekiyama et al. 2013; Setti et al. 2013; Stevenson et al. 2014). In light of the effects of visual influence on auditory perception, the question becomes: what cognitive mechanisms contribute to the perception of the McGurk effect? ...
... As we discuss within the context of the "visual dominant hypothesis", children with hearing impairment appear to rely more on the visual signal compared to their typically developing peers and therefore report hearing the visually articulated consonants (Dodd et al. 2008). On the other hand, listeners with ASD do not appear to respond to audiovisual stimuli in the same way; this is evidenced by lower susceptibility to the McGurk effect among ASD teenagers compared to their typically developing peers (Stevenson et al. 2014). This scenario is related to what we will refer to as the "auditory dominant hypothesis". ...
... While most normal-hearing adult listeners experience the McGurk effect, children and clinical populations can sometimes experience lower rates of these fusions compared to normally developing controls, likely for reasons related to their ability to use visual information. Stevenson et al. (2014) reported that teenage listeners with ASD integrate visual speech information differently compared to their typically developing peers. The authors presented four groups of listeners with McGurk stimuli: 1) children listeners with ASD (ages 6-12), 2) children controls, 3) teenage listeners with ASD (ages 13-18), and 4) teenage controls. ...
Article
Full-text available
One of the most common examples of audiovisual speech integration is the McGurk effect. As an example, an auditory syllable /ba/ recorded over incongruent lip movements that produce “ga” typically causes listeners to hear “da”. This report hypothesizes reasons why certain clinical populations and listeners who are hard of hearing might be more susceptible to visual influence. Conversely, we also examine why other listeners appear less susceptible to the McGurk effect (i.e., they report hearing just the auditory stimulus without being influenced by the visual). Such explanations are accompanied by a mechanistic explanation of integration phenomena, including visual inhibition of auditory information, or a slower rate of accumulation of inputs. First, simulations of a linear dynamic parallel interactive model were instantiated using inhibition and facilitation to examine potential mechanisms underlying integration. In a second set of simulations, we systematically manipulated the inhibition parameter values to model data obtained from listeners with autism spectrum disorder. In summary, we argue that cross-modal inhibition parameter values explain individual variability in McGurk perceptibility. Nonetheless, different mechanisms should continue to be explored in an effort to better understand current data patterns in the audiovisual integration literature.
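The inhibition/facilitation manipulation described in this abstract can be illustrated with a toy pair of linearly coupled accumulators. This is a sketch of the general idea, not the authors' published model: the leak, coupling values, inputs, and step counts below are invented for illustration.

```python
def simulate(av_coupling, steps=200, dt=0.05, input_a=1.0, input_v=1.0):
    """Two linearly coupled leaky accumulators (auditory, visual).

    A negative coupling models cross-modal inhibition; a positive one
    models facilitation. Returns the final activations of both channels.
    """
    a = v = 0.0
    leak = 0.1
    for _ in range(steps):
        da = dt * (input_a - leak * a + av_coupling * v)
        dv = dt * (input_v - leak * v + av_coupling * a)
        a, v = a + da, v + dv
    return a, v

a_inhib, _ = simulate(av_coupling=-0.05)   # cross-modal inhibition
a_facil, _ = simulate(av_coupling=+0.05)   # cross-modal facilitation

# Inhibition lowers the final activation relative to facilitation, one crude
# way to capture individual differences in visual influence on audition.
print(a_inhib < a_facil)
```

Varying a single coupling parameter across simulated "listeners" is the kind of manipulation that lets a model of this family fit individual variability in McGurk susceptibility.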
... Last but not least, the task scoring methods varied across studies. In the study conducted by Stevenson et al. (2014b), only "fusion" answers from participants were scored as McGurk responses when calculating the magnitude of the McGurk effect. To be specific, if the participant reported the perception of the fusion /da/ for auditory /ba/ dubbed onto visual /ga/, the response was considered a McGurk illusion. ...
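The strict scoring rule described in this excerpt is easy to state in code. In the hypothetical sketch below, only fused percepts (e.g., /da/ or /tha/ for auditory /ba/ paired with visual /ga/) count toward the McGurk magnitude; visual captures (/ga/) and correct auditory reports (/ba/) do not. The response sheet and the fusion set are illustrative assumptions.

```python
from collections import Counter

def mcgurk_magnitude(responses, fusion=("da", "tha")):
    """Proportion of trials scored as McGurk fusions under the strict
    scoring rule: only fused percepts count, not visual captures."""
    counts = Counter(r.lower() for r in responses)
    n_fusion = sum(counts[f] for f in fusion)
    return n_fusion / len(responses)

# Hypothetical response sheet for auditory /ba/ + visual /ga/ trials:
trials = ["da", "ba", "da", "ga", "tha", "ba", "da", "da"]
print(mcgurk_magnitude(trials))  # 5 of 8 trials are fusions -> 0.625
```

Looser scoring schemes (e.g., counting any non-auditory response as evidence of visual influence) would yield a different magnitude from the same data, which is one reason scoring method moderates cross-study comparisons.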
... For typically developing individuals, the capability to integrate audiovisual speech information is of crucial significance because it is the foundation for developing social communicative and language skills (Kreifelts et al. 2007). Visual cues can supplement the intelligibility of auditory information and provide affective information about other people in real-world settings (Stevenson et al. 2014b). For typically developing children, visual cues are initially less involved in audiovisual speech integration than auditory cues, and children show a bias toward auditory stimuli while their multisensory processing is immature (Hockley and Polka 1994). ...
... That may be why the gap between individuals with ASD and their typically developing counterparts increases with age. In fact, this idea is supported by Stevenson et al. (2014b), who found that the younger ASD and control groups performed equivalently on the McGurk task, while the older ASD group exhibited a weaker McGurk effect than their typically developing counterparts. ...
Article
Full-text available
By synthesizing existing behavioural studies through a meta-analytic approach, the current study compared the performance of autism spectrum disorder (ASD) and typically developing groups in audiovisual speech integration and investigated potential moderators that might contribute to the heterogeneity of the existing findings. In total, nine studies were included, and the pooled overall difference between the two groups was significant, g = −0.835 (p < 0.001; 95% CI −1.155 to −0.516). Age and task scoring method were found to be associated with the inconsistencies of the findings reported by previous studies. These findings indicate that individuals with ASD show a weaker McGurk effect than typically developing controls.
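For readers unfamiliar with the effect-size metric, a pooled group difference like the one above is built from per-study Hedges' g values. The sketch below computes g for a single hypothetical study; the means, SDs, and sample sizes are invented, and a full meta-analysis would additionally weight studies by inverse variance under a fixed- or random-effects model.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: Cohen's d with the small-sample bias correction J."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample correction factor
    return j * d

# Hypothetical ASD vs. TD fusion rates (all values made up for illustration):
g = hedges_g(m1=0.35, sd1=0.20, n1=18, m2=0.55, sd2=0.22, n2=18)
print(round(g, 3))  # negative g: the ASD group reports fewer fusions
```

A negative pooled g of the magnitude reported above (−0.835) is conventionally read as a large group difference.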
... Although the vast majority of this evidence comes from subjective reports (Baranek et al., 2006; Dawson and Watling, 2000; Kasari and Sigman, 1997; Kern et al., 2007; Kientz and Dunn, 1997; O'Neill and Jones, 1997; Rogers et al., 2003; Talay-Ongan and Wood, 2000; Watling et al., 2001; Wing and Potter, 2002), emerging recent empirical evidence supports the notion of atypical sensory processing in autism across all sensory modalities (for review, see Baum et al., 2015a). Most germane to this report, sensory disturbances have been empirically shown across vision and audition (Baum et al., 2015b; Bebko et al., 2014; De Boer-Schellekens et al., 2013; Iarocci et al., 2010; Kwakye et al., 2011; Stevenson et al., 2014b, 2014c, 2014d, 2014e, 2015b; Woynaroski et al., 2013). How atypical sensory processing fits within the broader behavioral profile of the disorder, however, has yet to be established. ...
... Mixed results have been reported in relation to autistic children's perception of the McGurk effect, with many studies showing decreased integration (e.g. Bebko et al. 2014; De Gelder et al., 1991; Irwin et al., 2011; Mongillo et al., 2008; Stevenson et al., 2014c, 2014d; Williams et al., 2004) and others showing intact integration (Iarocci et al., 2010; Woynaroski et al., 2013). In the McGurk effect, an individual hears a speaker say "ba" and sees the speaker articulate "ga" but perceives the syllable "da" (McGurk and MacDonald, 1976). ...
... In the McGurk effect, an individual hears a speaker say "ba" and sees the speaker articulate "ga" but perceives the syllable "da" (McGurk and MacDonald, 1976). Given that "da" was contained in neither the auditory nor the visual sensory inputs, the perception of "da" is indicative of integration (Calvert and Thesen, 2004;Stevenson et al., 2014d). Autistic children, on average, perceive the integrated "da" percept less often than their peers. ...
Article
It has recently been theorized that atypical sensory processing in autism relates to difficulties in social communication. Through a series of tasks concurrently assessing multisensory temporal processes, multisensory integration and speech perception in 76 children with and without autism, we provide the first behavioral evidence of such a link. Temporal processing abilities in children with autism contributed to impairments in speech perception. This relationship was significantly mediated by their abilities to integrate social information across auditory and visual modalities. These data describe the cascading impact of sensory abilities in autism, whereby temporal processing impacts multisensory integration of social information, which, in turn, contributes to deficits in speech perception. These relationships were found to be specific to autism, specific to multisensory but not unisensory integration, and specific to the processing of social information.
... A variety of studies has also evaluated the McGurk effect in clinical populations, including in those with ASD. Overall, studies have demonstrated that individuals with ASD tend to perceive the McGurk effect less frequently in comparison to TD individuals [Irwin, Tornatore, Brancazio, & Whalen, 2011; Mongillo et al., 2008; Stevenson, Siemann, et al., 2013; Taylor et al., 2010], with the majority of the studies describing this as a decrease in the strength or magnitude of multisensory integration. In addition, studies have evaluated and found differences in the perception of the McGurk effect across development in individuals with ASD [Stevenson, Siemann, et al., 2013; Taylor et al., 2010]. ...
... Overall, studies have demonstrated that individuals with ASD tend to perceive the McGurk effect less frequently in comparison to TD individuals [Irwin, Tornatore, Brancazio, & Whalen, 2011; Mongillo et al., 2008; Stevenson, Siemann, et al., 2013; Taylor et al., 2010], with the majority of the studies describing this as a decrease in the strength or magnitude of multisensory integration. In addition, studies have evaluated and found differences in the perception of the McGurk effect across development in individuals with ASD [Stevenson, Siemann, et al., 2013; Taylor et al., 2010]. Also, studies, utilizing various psychophysical behavioral measures, have determined that the development of multisensory integration might be delayed in these individuals [Beker et al., 2018; Brandwein et al., 2012; Foxe et al., 2015; Stevenson, Siemann, et al., 2013; Taylor et al., 2010], along with evidence of sex-dependent differences in audiovisual speech development in both typically developing and ASD populations. ...
... In addition, studies have evaluated and found differences in the perception of the McGurk effect across development in individuals with ASD [Stevenson, Siemann, et al., 2013; Taylor et al., 2010]. Also, studies, utilizing various psychophysical behavioral measures, have determined that the development of multisensory integration might be delayed in these individuals [Beker et al., 2018; Brandwein et al., 2012; Foxe et al., 2015; Stevenson, Siemann, et al., 2013; Taylor et al., 2010], along with evidence of sex-dependent differences in audiovisual speech development in both typically developing and ASD populations. Interestingly, neuroimaging studies have found both structural and functional differences in the STS of those with ASD, and this brain region, which has been shown to be important for the McGurk illusion, has also been established as a neural hub for multisensory integration and social processing [Beauchamp, Argall, Bodurka, Duyn, & Martin, 2004; Boddaert et al., 2004; Calvert, Campbell, & Brammer, 2000; Gervais et al., 2004; Redcay, 2008; Zilbovicius et al., 2006]. ...
Article
Abnormal sensory responses are a DSM‐5 symptom of autism spectrum disorder (ASD), and research findings demonstrate altered sensory processing in ASD. Beyond difficulties with processing information within single sensory domains, including both hypersensitivity and hyposensitivity, difficulties in multisensory processing are becoming a core issue of focus in ASD. These difficulties may be targeted by treatment approaches such as “sensory integration,” which is frequently applied in autism treatment but not yet based on clear evidence. Recently, psychophysical data have emerged to demonstrate multisensory deficits in some children with ASD. Unlike deficits in social communication, which are best understood in humans, sensory and multisensory changes offer a tractable marker of circuit dysfunction that is more easily translated into animal model systems to probe the underlying neurobiological mechanisms. Paralleling experimental paradigms that were previously applied in humans and larger mammals, we and others have demonstrated that multisensory function can also be examined behaviorally in rodents. Here, we review the sensory and multisensory difficulties commonly found in ASD, examining laboratory findings that relate these findings across species. Next, we discuss the known neurobiology of multisensory integration, drawing largely on experimental work in larger mammals, and extensions of these paradigms into rodents. Finally, we describe emerging investigations into multisensory processing in genetic mouse models related to autism risk. By detailing findings from humans to mice, we highlight the advantage of multisensory paradigms that can be easily translated across species, as well as the potential for rodent experimental systems to reveal opportunities for novel treatments.
Lay Summary: Sensory and multisensory deficits are commonly found in ASD and may result in cascading effects that impact social communication.
By using similar experiments to those in humans, we discuss how studies in animal models may allow an understanding of the brain mechanisms that underlie difficulties in multisensory integration, with the ultimate goal of developing new treatments.
... A functional magnetic resonance imaging (fMRI) study revealed that the magnitude of the McGurk effect was positively correlated with the amplitude of the response in the left superior temporal sulcus (STS), an area critical for the integration of auditory and visual speech information (Calvert et al. 2000; Nath and Beauchamp 2011). Previous studies have also demonstrated that information is processed across modalities even in infancy (Rosenblum et al. 1997), and that the influence of visual speech increases with chronological age in typically developing individuals (Sekiyama and Burnham 2008), but not in individuals with autism spectrum disorder (ASD) (Stevenson et al. 2014a; Taylor et al. 2010). ...
... In particular, some studies on audiovisual speech perception suggested that more severe ASD symptoms predict weaker visual effects on audiovisual speech integration. In fact, people with ASD showed weaker effects of visual speech on auditory perception than did their typically developing peers (de Gelder et al. 1991; Foxe et al. 2015; Iarocci et al. 2010; Saalasti et al. 2011, 2012; Smith and Bennetto 2007; Stevenson et al. 2014a; Williams et al. 2004). Bebko et al. (2014) suggested that children with autism might have unique audiovisual speech perception difficulties. ...
... However, some studies failed to support this hypothesis: For example, Stevenson et al. (2014b) found that children with ASD still exhibited a weaker McGurk effect relative to both younger and older control participants even when the groups did not differ in terms of lip-reading ability. In addition, other studies have demonstrated that children and adults with ASD exhibited a weaker visual influence on voice perception despite having intact lip-reading ability (de Gelder et al. 1991; Saalasti et al. 2011, 2012; Smith and Bennetto 2007; Stevenson et al. 2014a). Therefore, although lip-reading ability plays an important role in audiovisual speech integration, deficits in it seem to be an insufficient explanation for individual differences in the McGurk effect. ...
Article
Full-text available
The McGurk effect, which denotes the influence of visual information on audiovisual speech perception, is less frequently observed in individuals with autism spectrum disorder (ASD) compared to those without it; the reason for this remains unclear. Several studies have suggested that facial configuration context might play a role in this difference. More specifically, people with ASD show a local processing bias for faces—that is, they process global face information to a lesser extent. This study examined the role of facial configuration context in the McGurk effect in 46 healthy students. Adopting an analogue approach using the Autism-Spectrum Quotient (AQ), we sought to determine whether this facial configuration context is crucial to previously observed reductions in the McGurk effect in people with ASD. Lip-reading and audiovisual syllable identification tasks were assessed via presentation of upright normal, inverted normal, upright Thatcher-type, and inverted Thatcher-type faces. When the Thatcher-type face was presented, perceivers were found to be sensitive to the misoriented facial characteristics, causing them to perceive a weaker McGurk effect than when the normal face was presented (this is known as the McThatcher effect). Additionally, the McGurk effect was weaker in individuals with high AQ scores than in those with low AQ scores in the incongruent audiovisual condition, regardless of their ability to read lips or process facial configuration contexts. Our findings, therefore, do not support the assumption that individuals with ASD show a weaker McGurk effect due to a difficulty in processing facial configuration context.
... [6][7][8][9] On the basis of the growing evidence for disturbances across multiple sensory systems, there has been an increased focus on examining the integration of information across the different sensory modalities, with a number of studies now detailing impaired multisensory processing in ASD. [10][11][12][13][14][15][16][17] The relevance of these multisensory deficits for the autism phenotype is critical, given that multisensory integration has a central role in the construction of coherent perceptual representations, and has been shown to facilitate behavior and perception under a number of circumstances. [18][19][20][21] The serotonin system has long been implicated in ASD. ...
... 64,65 These human studies have demonstrated atypical multisensory processing in individuals with ASD on both the behavioral and neural levels. 3,[10][11][12]14,16,66,67 Most germane in the current context, a number of these human studies have detailed weaker multisensory integrative function. 13,15,68,69 A variety of mouse models associated with ASD have been generated to evaluate the neural underpinnings and associated behaviors. ...
Article
Full-text available
Altered sensory processing is observed in many children with autism spectrum disorder (ASD), with growing evidence that these impairments extend to the integration of information across the different senses (that is, multisensory function). The serotonin system has an important role in sensory development and function, and alterations of serotonergic signaling have been suggested to have a role in ASD. A gain-of-function coding variant in the serotonin transporter (SERT) associates with sensory aversion in humans, and when expressed in mice produces traits associated with ASD, including disruptions in social and communicative function and repetitive behaviors. The current study set out to test whether these mice also exhibit changes in multisensory function when compared with wild-type (WT) animals on the same genetic background. Mice were trained to respond to auditory and visual stimuli independently before being tested under visual, auditory and paired audiovisual (multisensory) conditions. WT mice exhibited significant gains in response accuracy under audiovisual conditions. In contrast, although the SERT mutant animals learned the auditory and visual tasks comparably to WT littermates, they failed to show behavioral gains under multisensory conditions. We believe these results provide the first behavioral evidence of multisensory deficits in a genetic mouse model related to ASD and implicate the serotonin system in multisensory processing and in the multisensory changes seen in ASD.
... One canonical measure of multisensory fusion, or "binding," that has been extensively studied is the McGurk effect (McGurk & MacDonald, 1976). This effect has been studied in infants, children, and adolescents (Burnham & Dodd, 2004; Hockley & Polka, 1994; Massaro, 1984; McGurk & MacDonald, 1976; Stevenson, Siemann, Schneider, et al., 2014; Stevenson et al., 2014a; Tremblay et al., 2007) and in adults (Baart & Vroomen, 2010; Beauchamp, Nath, & Pasalar, 2010; Nath & Beauchamp, 2012; Saint-Amour, De Sanctis, Molholm, Ritter, & Foxe, 2007; Soto-Faraco & Alsius, 2009; Stevenson, Zemtsov, et al., 2012), and preliminarily studied in older adults (Cienkowski & Carney, 2002; Sekiyama, Soshi, & Sakamoto, 2014). In the most common form of this illusion, an individual is shown a video in which a speaker is visually articulating the syllable "ga" while concurrently presenting the auditory syllable "ba." ...
... McGurk stimuli, which were used to measure multisensory integration, consisted of pairing temporally aligned visual "ga" with the auditory "ba," a pairing known to induce the illusory perception of "da" or "tha." This stimulus set was originally described in full detail in Quinto, Thompson, Russo, and Trehub (2010) and has been successfully used in multiple previous studies of audiovisual integration (Stevenson et al., in press; Stevenson, Siemann, Schneider, et al., 2014; Stevenson et al., 2014a; Stevenson, Zemtsov, et al., 2012). Participants reported what the speaker said in each condition by pressing one of four keys: B, G, D, or T. Twenty trials of each condition were presented in random order. ...
Article
The temporal relationship between individual pieces of information from the different sensory modalities is one of the stronger cues to integrate such information into a unified perceptual gestalt, conveying numerous perceptual and behavioral advantages. Temporal acuity, however, varies greatly over the life span. It has previously been hypothesized that changes in temporal acuity in both development and healthy aging may thus play a key role in integrative abilities. This study tested the temporal acuity of 138 individuals ranging in age from 5 to 80. Temporal acuity and multisensory integration abilities were tested both within and across modalities (audition and vision) with simultaneity judgment and temporal order judgment tasks. We observed that temporal acuity, both within and across modalities, improved throughout development into adulthood and subsequently declined with healthy aging, as did the ability to integrate multisensory speech information. Of importance, throughout development, temporal acuity of simple stimuli (i.e., flashes and beeps) predicted individuals' abilities to integrate more complex speech information. However, in the aging population, although temporal acuity declined with healthy aging and was accompanied by declines in integrative abilities, temporal acuity was not able to predict integration at the individual level. Together, these results suggest that the impact of temporal acuity on multisensory integration varies throughout the life span. Although the maturation of temporal acuity drives the rise of multisensory integrative abilities during development, it is unable to account for changes in integrative abilities in healthy aging. The differential relationships between age, temporal acuity, and multisensory integration suggest an important role for experience in these processes.
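Temporal acuity in simultaneity-judgment tasks of the kind described above is often summarized as a temporal binding window: the range of stimulus-onset asynchronies (SOAs) over which "simultaneous" reports stay near their peak. The sketch below estimates such a window with a coarse grid fit of a fixed-amplitude Gaussian; the 75% criterion, the grid, and the data points are all illustrative assumptions, not values from the study.

```python
import numpy as np

def fit_tbw(soas, p_simultaneous):
    """Fit a unit-amplitude Gaussian to 'simultaneous' report rates by
    least squares over a coarse grid, then report the SOA range where the
    fitted curve exceeds 75% of its peak (one common window definition)."""
    best = None
    for mu in np.linspace(-100, 100, 41):
        for sigma in np.linspace(20, 400, 39):
            pred = np.exp(-0.5 * ((soas - mu) / sigma) ** 2)
            err = np.sum((pred - p_simultaneous) ** 2)
            if best is None or err < best[0]:
                best = (err, mu, sigma)
    _, mu, sigma = best
    half_width = sigma * np.sqrt(-2 * np.log(0.75))
    return mu, 2 * half_width          # window centre and width, in ms

# Hypothetical audiovisual SJ data (negative SOA = auditory first):
soas = np.array([-300, -200, -100, 0, 100, 200, 300], float)
p_sim = np.array([0.05, 0.25, 0.80, 0.95, 0.75, 0.20, 0.05])
mu, width = fit_tbw(soas, p_sim)
print(round(float(mu)), round(float(width)))
```

A wider estimated window corresponds to coarser temporal acuity, the quantity the study relates to multisensory integrative ability across the life span.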
... When an error is made, however, the VWM colour-wheel task affords us the opportunity to determine the origin of such errors: forgetting the studied colour or misbinding a studied colour with the incorrect location. Autistic individuals have difficulties binding sensory information into a unified percept [20][21][22]: they are less susceptible to visual illusions that require integration of multiple component features 23, and show reduced benefits of binding a speaker's face with their voice [24][25][26][27]. In a colour-wheel VWM task, these binding errors would result in recalling a colour that was presented at a non-target location. ...
... However, the autistic group showed a higher proportion of binding errors relative to the total number of errors than the control group. This finding is in line with previous research, which has reliably demonstrated that autistic individuals exhibit atypical integration of individual pieces of information to form a coherent Gestalt percept [20][21][22]. This effect ranges from the perceptual processing of sensory stimuli such as a speaker's face and voice [24][25][26][27] to higher-level cognitive representations, such as integrating the content of a story into a global narrative 44. ...
Article
Full-text available
While atypical sensory processing is one of the more ubiquitous symptoms in autism spectrum disorder, the exact nature of these sensory issues remains unclear, with different studies showing either enhanced or deficient sensory processing. Using a well-established continuous cued-recall task that assesses visual working memory, the current study provides novel evidence reconciling these apparently discrepant findings. Autistic children exhibited perceptual advantages in both likelihood of recall and recall precision relative to their typically-developed peers. When autistic children did make errors, however, they showed a higher probability of erroneously binding a given colour with the incorrect spatial location. These data align with neural-architecture models for feature binding in visual working memory, suggesting that atypical population-level neural noise in the report dimension (colour) and cue dimension (spatial location) may drive both the increase in probability of recall and precision of colour recall as well as the increase in proportion of binding errors when making an error, respectively. These changes are likely to impact core symptomatology associated with autism, as perceptual binding and working memory play significant roles in higher-order tasks, such as communication.
... Several studies have observed diminished multisensory integration for individuals with ASD versus TD controls. For example, some past research has shown that, in comparison to TD controls, individuals with ASD demonstrate less multisensory facilitation in their reaction times (e.g., Brandwein et al., 2013), receive less benefit from audiovisual versus unisensory speech cues in noise, demonstrate less mature (i.e., wider) temporal binding windows (TBWs) in response to audiovisual stimuli (e.g., Foss-Feig et al., 2010; Stevenson et al., 2014b; Woynaroski et al., 2013a), and exhibit a reduced magnitude of multisensory integration as evidenced by reduced perception of audiovisual illusions (e.g., Bebko et al., 2014; Foss-Feig et al., 2010; Stevenson et al., 2014c). ...
... However, findings for group differences are inconsistent across the literature. For instance, in some cases individuals with ASD have been found to report fewer McGurk illusions than peers (Bebko et al., 2014; Feldman et al., 2018a; Stevenson et al., 2014c), but in other cases no significant between-group differences are observed (e.g., Keane, Rosenthal et al., 2010; Saalasti et al., 2012; Saalasti, Tiippana et al., 2011; Stevenson et al., 2018; Woynaroski et al., 2013a). Similar inconsistencies have been noted using other tasks, such as the flash-beep illusion (e.g., Bao, Doobay et al., 2017; Foss-Feig et al., 2010; Keane et al., 2010) and tasks measuring TBWs for audiovisual stimuli (e.g., Foss-Feig et al., 2010; Noel, De Niear et al., 2017). ...
Article
An ever-growing literature has aimed to determine how individuals with autism spectrum disorder (ASD) differ from their typically developing (TD) peers on measures of multisensory integration (MSI) and to ascertain the degree to which differences in MSI are associated with the broad range of symptoms associated with ASD. Findings, however, have been highly variable across the studies carried out to date. The present work systematically reviews and quantitatively synthesizes the large literature on audiovisual MSI in individuals with ASD to evaluate the cumulative evidence for (a) group differences between individuals with ASD and TD peers, (b) correlations between MSI and autism symptoms in individuals with ASD and (c) study level factors that may moderate findings (i.e., explain differential effects) observed across studies. To identify eligible studies, a comprehensive search strategy was employed using the ProQuest search engine, PubMed database, forwards and backwards citation searches, direct author contact, and hand-searching of select conference proceedings. A significant between-group difference in MSI was evident in the literature, with individuals with ASD demonstrating worse audiovisual integration on average across studies compared to TD controls. This effect was moderated by mean participant age, such that between-group differences were more pronounced in younger samples. The mean correlation between MSI and autism and related symptomatology was also significant, indicating that increased audiovisual integration in individuals with ASD is associated with better language/communication abilities and/or reduced autism symptom severity in the extant literature. This effect was moderated by whether the stimuli were linguistic versus non-linguistic in nature, such that correlation magnitudes tended to be significantly greater when linguistic stimuli were utilized in the measure of MSI. 
Limitations and future directions for primary and meta-analytic research are discussed.
... When an error is made, however, the VWM colour-wheel task affords us the opportunity to determine the origin of such errors: forgetting the studied colour or misbinding a studied colour with the incorrect location. Autistic individuals have difficulties binding sensory information into a unified percept [19][20][21]: they are less susceptible to visual illusions that require integration of multiple component features 22, and show reduced benefits of binding a speaker's face with their voice [23][24][25][26]. In a colour-wheel VWM task, these binding errors would result in recalling a colour that was presented at a non-target location. ...
... conjunction with the observed increase in precision, this suggests that the perceptual representation of colour is being maintained with high fidelity; however, the subsequent cognitive process of binding colour and location is impaired. This finding is in line with previous research, which has reliably demonstrated that autistic individuals have difficulty integrating individual pieces of information to form a coherent Gestalt percept [19][20][21]. This effect ranges from the perceptual processing of sensory stimuli such as a speaker's face and voice [23][24][25][26] to higher-level cognitive representations, such as integrating the content of a story into a global narrative 34. How might we understand the present pattern of results from a neurocomputational perspective? ...
Preprint
Atypical sensory processing is one of the more ubiquitous symptoms in autism spectrum disorder, yet the exact nature of these sensory issues remains unclear, with different studies showing either enhanced or deficient sensory processing. Using a well-established continuous free-recall task that assesses visual working memory, the current study provides novel evidence reconciling these apparently discrepant findings by showing both enhanced and impaired sensory processing in the same individuals on distinct aspects of the same task and stimuli. Autistic children exhibited perceptual advantages in both likelihood of recall and recall precision relative to their typically-developed peers. When autistic children did make errors, however, they showed a higher probability of erroneously binding a given colour with the incorrect spatial location. These data indicate that although the initial perceptual representations of sensory inputs were maintained with enhanced fidelity, the subsequent cognitive process of binding multiple features of sensory information into one percept was impaired. These data align with neural-architecture models for feature binding in visual working memory, suggesting that atypical population-level neural noise in the report dimension (colour) and cue dimension (spatial location) may drive both the increase in probability of recall and precision of colour recall as well as the increase in proportion of binding errors, respectively. These changes are likely to impact core symptomatology associated with autism, as perceptual binding and working memory play significant roles in higher-order tasks, such as communication.
... The result is a single, unified percept of the event -a process known as multisensory integration. Numerous recent reports suggest that this process may be compromised in autism, particularly as it pertains to the integration of auditory and visual social information [4][5][6][7][8][9][10][11][12][13][14][15] . Challenges related to multisensory integration in this population have also been theoretically 16 and experimentally 17 linked to higher-level cognitive processes that build on the processing of such bound sensory information. ...
... Diminished acuity in audiovisual temporal perception impairs the ability to detect temporal regularities between events and may then impact multisensory integration abilities in autism. Indeed, numerous studies have shown impaired multisensory temporal perception in autism 40 , and likewise many have shown impaired multisensory integration [4][5][6][7][8][9][10][11][12][13][14] . Two such studies reported an explicit link between multisensory temporal perception in ASD and this ability to integrate audiovisual speech signals, where individuals with less precise temporal perception showed concomitant decreases in integration 6,17 . ...
Article
Full-text available
Recent empirical evidence suggests that autistic individuals perceive the world differently than their typically-developed peers. One theoretical account, the predictive coding hypothesis, posits that autistic individuals show a decreased reliance on previous perceptual experiences, which may relate to autism symptomatology. We tested this through a well-characterized, audiovisual statistical-learning paradigm in which typically-developed participants were first adapted to consistent temporal relationships between audiovisual stimulus pairs (audio-leading, synchronous, visual-leading) and then performed a simultaneity judgement task with audiovisual stimulus pairs varying in temporal offset from auditory-leading to visual-leading. Following exposure to the visual-leading adaptation phase, participants' perception of synchrony was biased towards visual-leading presentations, reflecting the statistical regularities of their previously experienced environment. Importantly, the strength of adaptation was significantly related to the level of autistic traits that the participant exhibited, measured by the Autism Quotient (AQ). This was specific to the Attention to Detail subscale of the AQ that assesses the perceptual propensity to focus on fine-grain aspects of sensory input at the expense of more integrative perceptions. More severe Attention to Detail was related to weaker adaptation. These results support the predictive coding framework, and suggest that changes in sensory perception commonly reported in autism may contribute to autistic symptomatology.
... Changes in multisensory function have been demonstrated using simple auditory and visual stimuli (Brandwein et al., 2013) and the double-flash illusion (Foss-Feig et al., 2010). Subjects with ASD are less susceptible to the speech related McGurk illusion (Taylor et al., 2010;Bebko et al., 2014;Stevenson et al., 2014) and show lower sensitivity for temporal synchrony of AV speech events (Bebko et al., 2006). But the picture is not fully consistent (Zhang et al., 2018). ...
... Whereas children with AS showed a regular rate of McGurk illusions (Bebko et al., 2014), adolescent ASD subjects (Stevenson et al., 2014) showed reduced rates and adult AS subjects have been shown to have qualitative differences in perception of McGurk stimuli compared to control subjects (Saalasti et al., 2012) and to show further differences in AV speech perception (Saalasti et al., 2011). On the other hand children with AS also show deficits in speech perception (Saalasti et al., 2008) and these deficits seem to be amplified in acoustically noisy environments (Alcantara et al., 2004), i.e., situations in which visual information is most important (Ross et al., 2007a). ...
Article
Full-text available
Audiovisual (AV) integration deficits have been proposed to underlie difficulties in speech perception in Asperger’s syndrome (AS). It is not known whether the AV deficits are related to alterations in sensory processing at the level of unisensory processing or at levels of conjoint multisensory processing. Functional magnetic resonance imaging (fMRI) was performed in 16 adult subjects with AS and 16 healthy controls (HC) matched for age, gender, and verbal IQ as they were exposed to disyllabic AV congruent and AV incongruent nouns. A simple semantic categorization task was used to ensure subjects’ attention to the stimuli. The left auditory cortex (BA41) showed stronger activation in HC than in subjects with AS, with no interaction regarding AV congruency. This suggests that alterations in auditory processing in unimodal low-level areas underlie AV speech perception deficits in AS. Whether this signals a difficulty in the deployment of attention remains to be demonstrated.
... This shows a lot of overlap with the mono-information processing of individuals with ASD. Indeed, dysfunctions in temporal processing of multisensory stimuli have been shown in individuals with autism (Baum, 2015; Bebko, 2006; De Boer-Schellekens, 2013; Foss-Feig, 2010; Kwakye, 2011; Stevenson, 2014a; Wallace, 2014), but also in schizophrenia (Martin, 2013). The work of Cardella and Gangemi (Cardella, 2015) about reasoning in schizophrenia shines a more optimistic light on the outcome of patients with schizophrenia. ...
Book
Full-text available
ReAttach is a therapeutic method developed by Paula Weerkamp (Weerkamp-Bartholomeus 2015). She points out that, “‘individually’, there are many differences in psychological functioning. ReAttach is focusing on the similarity in cognitive processing of information, emotions and events. The underlying structure of ReAttach is based on ortho-paedagogical influencing, obstructing factors and facilitating optimal conditions for cognitive functioning and growth. ReAttach for autism is made up from the following components: arousal regulation, tactile stimuli and joint attention, multiple sensory integration processing, conceptualisation and cognitive bias modification”. E-Book Open Access https://www.fioritieditore.com/en/prodotto/autism-is-there-a-place-for-reattach-therapy-a-promotion-of-natural-self-healing-through-emotions-rewiring/
... While the preattentive inhibitory model appears appropriate for most typical-hearing adults, other models may better describe other listeners. For example, individuals with autism appear to be less susceptible to the influences of visual speech and hence the McGurk effect (e.g., Stevenson et al., 2014); hypothetically, they may better conform to the predictions of the parallel postattentive model. On the other hand, those who are hard of hearing may rely more heavily on the visual signal compared to participants in this study and thus show stronger evidence of inhibition from the visual channel. ...
Article
This paper proposes a novel approach to assess audiovisual integration for both congruent and incongruent speech stimuli using reaction times (RT). The experiments are based on the McGurk effect, in which a listener is presented with incongruent audiovisual speech signals. A typical example involves the auditory consonant /b/ combined with a visually articulated /g/, often yielding a perception of /d/. We quantify the amount of integration relative to the predictions of a parallel independent model as a function of attention and congruency between auditory and visual signals. We assessed RT distributions for congruent and incongruent auditory and visual signals in a within-subjects signal detection paradigm under conditions of divided versus focused attention. Results showed that listeners often received only minimal benefit from congruent auditory visual stimuli, even when such information could have improved performance. Incongruent stimuli adversely affected performance in divided and focused attention conditions. Our findings support a parallel model of auditory-visual integration with interactions between auditory and visual channels.
... They found that McGurk illusions were reduced in younger children with ASD (7-14 year olds) but that adolescents with ASD performed similarly to the control group (15-16 year olds). Similarly, Stevenson et al. (2014b) examined the McGurk illusion in a group of children and adolescents, and found reduced McGurk illusions in their ASD sample. However, when the data were separated into two age groups, the pattern of findings contrasted somewhat with those of Taylor et al., with significantly fewer McGurk illusions observed in the older ASD vs. TD group (13-18 year olds), and no statistically reliable group difference in their younger group (6-12 year olds). ...
Article
Difficulty integrating inputs from different sensory sources is commonly reported in individuals with Autism Spectrum Disorder (ASD). Accumulating evidence consistently points to altered patterns of behavioral reactions and neural activity when individuals with ASD observe or act upon information arriving through multiple sensory systems. For example, impairments in the integration of seen and heard speech appear to be particularly acute, with obvious implications for interpersonal communication. Here, we explore the literature on multisensory processing in autism with a focus on developmental trajectories. While much remains to be understood, some consistent observations emerge. Broadly, sensory integration deficits are found in children with an ASD whereas these appear to be much ameliorated, or even fully recovered, in older teenagers and adults on the spectrum. This protracted delay in the development of multisensory processing raises the possibility of applying early intervention strategies focused on multisensory integration, to accelerate resolution of these functions. We also consider how dysfunctional cross-sensory oscillatory neural communication may be one key pathway to impaired multisensory processing in ASD.
... In addition to differences in responsiveness to stimuli presented within the individual senses, a number of recent reports have highlighted that individuals with ASD may also exhibit deficits in tasks requiring integration or utilization of information across the different sensory modalities [i.e., multisensory tasks; see Brandwein et al., 2013; Foxe et al., 2013; Smith & Bennetto, 2007; Stevenson et al., 2013, 2014; Woynaroski et al., 2013]. Rather than combining information from the different sensory modalities in an indiscriminate manner, multisensory neurons and circuits are highly sensitive to the statistical relationships between stimuli within the environment. ...
Article
Lay summary: Studies have shown that individuals with autism have difficulty in separating auditory and visual events in time. People with autism also weight sensory evidence originating from the external world and from their body differently. We measured simultaneity judgments regarding visual and auditory events and between visual and heartbeat events. Results suggest that while individuals with autism show unusual temporal function across the senses in a general manner, this deficit is greater when pairings bridged between the external world and the internal body.
... In other words, an aspect of time-varying somatosensory coding (synchronicity) is relevant for this illusion to emerge in controls, but not in ASD. Impaired perception of the temporal relationship between cross-modal inputs can be expected in ASD, given for instance the inadequate matching of audiovisual speech information (Stevenson et al., 2014) and, more generally, the difficulties in local/global processing issues widely reported in ASD (Happé & Frith, 2006). Finally, the high NI ratings for both self-synchronous and self-asynchronous conditions constitute indirect evidence of the dysfunctional neural activity within the primary somatosensory (S1) area in ASD (Khan et al., 2015). ...
Article
A fundamental aspect of self-consciousness is body ownership, which refers to the experience that our body and its parts belong to us and it is distinct from those of other persons. Body ownership depends on the integration of different sensory stimulations and it is crucial for the development of functional motor and social abilities, which are compromised in individuals with autism spectrum disorder (ASD). Here we examined the multisensory nature of body ownership in individuals with ASD by using a procedure based on tactile conflicts, namely the numbness illusion (NI).
... Speech perception, while often conceptualized as an auditory process, is in fact inherently multisensory, with a listener using both auditory speech information as well as visual speech information in the form of oral articulations. Emerging evidence strongly supports the presence of specific deficits in autism spectrum disorder (ASD) that impact how information is integrated across the different sensory modalities [Brandwein et al., 2013; Foss-Feig et al., 2010; Kwakye, Foss-Feig, Cascio, Stone, & Wallace, 2011; Ross et al., 2011; Russo et al., 2010; Stevenson, Segers, Ferber, Barense, & Wallace, 2014a; Stevenson et al., 2014b, 2014c, 2014d; Wallace & Stevenson, 2014]. In fact, while ASD is a heterogeneous disorder with complex etiologies, atypical sensory processing is one of the most common symptoms, reported in up to 87% of autistic individuals [Le Couteur et al., 1989; Lord, 1995]. ...
Article
Speech perception in noisy environments is boosted when a listener can see the speaker's mouth and integrate the auditory and visual speech information. Autistic children have a diminished capacity to integrate sensory information across modalities, which contributes to core symptoms of autism, such as impairments in social communication. We investigated the abilities of autistic and typically-developing (TD) children to integrate auditory and visual speech stimuli in various signal-to-noise ratios (SNR). Measurements of both whole-word and phoneme recognition were recorded. At the level of whole-word recognition, autistic children exhibited reduced performance in both the auditory and audiovisual modalities. Importantly, autistic children showed reduced behavioral benefit from multisensory integration with whole-word recognition, specifically at low SNRs. At the level of phoneme recognition, autistic children exhibited reduced performance relative to their TD peers in auditory, visual, and audiovisual modalities. However, and in contrast to their performance at the level of whole-word recognition, both autistic and TD children showed benefits from multisensory integration for phoneme recognition. In accordance with the principle of inverse effectiveness, both groups exhibited greater benefit at low SNRs relative to high SNRs. Thus, while autistic children showed typical multisensory benefits during phoneme recognition, these benefits did not translate to typical multisensory benefit of whole-word recognition in noisy environments. We hypothesize that sensory impairments in autistic children raise the SNR threshold needed to extract meaningful information from a given sensory input, resulting in subsequent failure to exhibit behavioral benefits from additional sensory information at the level of whole-word recognition. Autism Res 2017. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
... These differences across studies have been attributed to interactions between clinical diagnosis and other factors, including stimulus, gender, temporal processing abilities, or participant age. For instance, one group reported a population difference for younger children but not older children [27], while another group reported the exact opposite effect [28]. Published estimates of group differences in multisensory integration are inflated ...
Article
Full-text available
A common measure of multisensory integration is the McGurk effect, an illusion in which incongruent auditory and visual speech are integrated to produce an entirely different percept. Published studies report that participants who differ in age, gender, culture, native language, or traits related to neurological or psychiatric disorders also differ in their susceptibility to the McGurk effect. These group-level differences are used as evidence for fundamental alterations in sensory processing between populations. Using empirical data and statistical simulations tested under a range of conditions, we show that published estimates of group differences in the McGurk effect are inflated when only statistically significant (p < 0.05) results are published. With a sample size typical of published studies, a group difference of 10% would be reported as 31%. As a consequence of this inflation, follow-up studies often fail to replicate published reports of large between-group differences. Inaccurate estimates of effect sizes and replication failures are especially problematic in studies of clinical populations involving expensive and time-consuming interventions, such as training paradigms to improve sensory processing. Reducing effect size inflation and increasing replicability requires increasing the number of participants by an order of magnitude compared with current practice.
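The inflation mechanism described in this abstract can be illustrated with a toy Monte Carlo sketch (this is not the authors' simulation code, and all parameter values are illustrative): simulate many two-group experiments with a small true difference in fusion rates, apply a significance-based publication filter, and compare the mean published difference with the true one.

```python
import math
import random
import statistics

def sim_inflation(true_diff=0.10, sd=0.25, n=30, n_sims=2000, seed=1):
    """Mean absolute group difference among 'published' simulations.

    Two groups of n participants each; per-participant McGurk fusion
    proportions drawn from normal distributions whose means differ by
    true_diff. Only simulations passing a crude two-sample t-test
    cut-off (|t| > 2.0, roughly p < 0.05 for df ~ 58) survive the
    publication filter.
    """
    rng = random.Random(seed)
    published = []
    for _ in range(n_sims):
        g1 = [rng.gauss(0.50, sd) for _ in range(n)]
        g2 = [rng.gauss(0.50 + true_diff, sd) for _ in range(n)]
        diff = statistics.mean(g2) - statistics.mean(g1)
        se = math.sqrt(statistics.variance(g1) / n
                       + statistics.variance(g2) / n)
        if abs(diff / se) > 2.0:  # the publication filter
            published.append(abs(diff))
    return statistics.mean(published)

print(sim_inflation())  # mean published difference, well above 0.10
```

With these illustrative settings, only differences large enough to clear the significance threshold survive, so the mean "published" difference comes out well above the true 10% difference, mirroring the inflation the authors report.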
... For instance, Delbeuck, Collette and Linden (2007) reported deficits in auditory-visual speech integration in Alzheimer's disease patients and, with a sample of Asperger's Syndrome individuals, Schelinski, Riedel and von Kriegstein (2014) found a similar result. In addition, Stevenson et al. (2014) found that the magnitude of deficiency in auditory-visual speech integration was relatively negligible at earlier ages, with the difference becoming much greater with increasing age. A comparable developmental pattern was also the case with a group of children with developmental language disorder (Meronen, Tiippana, Westerholm, & Ahonen, 2013). In this investigation, we attempted to study the status of auditory-visual speech perception in the context of bipolar disorder, a disorder characterized by alternating and contrastive episodes of mania and depression. ...
Preprint
The focus of this study was to investigate how individuals with bipolar disorder integrate auditory and visual speech information compared to non-disordered individuals, and whether there were any differences in auditory and visual speech integration between the manic and depressive episodes in bipolar disorder patients. It was hypothesized that the bipolar groups’ auditory-visual speech integration would be less robust than that of the control group. Further, it was predicted that those in the manic phase of bipolar disorder would integrate visual speech information more than their depressive phase counterparts. To examine these predictions, the McGurk effect paradigm was used with typical auditory-visual (AV) speech stimuli, as well as auditory-only (AO) and visual-only (VO) stimuli. Results showed that the disordered and non-disordered groups did not differ on auditory-visual speech (AV) integration or auditory-only (AO) speech perception, but did differ on visual-only (VO) stimuli. The results are interpreted to pave the way for further research whereby both behavioural and physiological data are collected simultaneously. This will allow us to understand the full dynamics of how the auditory and visual (relatively impoverished in bipolar disorder) speech information are integrated in people with bipolar disorder.
... For instance, Delbeuck et al. (2007) reported deficits in AV speech integration in Alzheimer's disease patients, and with a sample of Asperger's syndrome individuals, Schelinski et al. (2014) found a similar result. In addition, Stevenson et al. (2014) found that the magnitude of deficiency in AV speech integration was relatively negligible at earlier ages, with the difference becoming much greater with increasing age. A comparable developmental pattern was also found with a group of children with developmental language disorder (Meronen et al. 2013). ...
Article
Full-text available
This study aimed to investigate how individuals with bipolar disorder integrate auditory and visual speech information compared to healthy individuals. Furthermore, we wanted to see whether there were any differences between manic and depressive episode bipolar disorder patients with respect to auditory and visual speech integration. It was hypothesized that the bipolar group’s auditory–visual speech integration would be weaker than that of the control group. Further, it was predicted that those in the manic phase of bipolar disorder would integrate visual speech information more robustly than their depressive phase counterparts. To examine these predictions, a McGurk effect paradigm with an identification task was used with typical auditory–visual (AV) speech stimuli. Additionally, auditory-only (AO) and visual-only (VO, lip-reading) speech perceptions were also tested. The dependent variable for the AV stimuli was the amount of visual speech influence. The dependent variables for AO and VO stimuli were accurate modality-based responses. Results showed that the disordered and control groups did not differ in AV speech integration and AO speech perception. However, there was a striking difference in favour of the healthy group with respect to the VO stimuli. The results suggest the need for further research whereby both behavioural and physiological data are collected simultaneously. This will help us understand the full dynamics of how auditory and visual speech information are integrated in people with bipolar disorder.
... Recurrent demonstrations of altered MSI in autistic individuals have been published in recent years (for reviews see Feldman et al., 2018; Zhou et al., 2018; Stevenson et al., 2018) and, for the most part, the literature has been driven by findings from studies focussed on the integration of social information; assessing performance based on auditory (voice), visual (face) or audiovisual speech (voice and face) perception. This social-information approach is exemplified by demonstrations of atypical MSI performance in autism using the McGurk illusion (Woynaroski et al., 2013; Williams et al., 2004; Taylor et al., 2010; Stevenson et al., 2014a; Stevenson et al., 2014b; Saalasti et al., 2011; Saalasti et al., 2012; Foxe et al., 2015; DePape et al., 2012; Stevenson et al., 2018). Additionally, research has explored basic speech perception, e.g., syllable recognition (de Boer-Schellekens et al., 2013; Ross et al., 2011), word recognition (Smith and Bennetto, 2007) and sentence comprehension (Grossman et al., 2015). ...
Article
Atypical sensory processing is now recognised as a key component of an autism diagnosis. The integration of multiple sensory inputs (multisensory integration (MSI)) is thought to be idiosyncratic in autistic individuals and may have cascading effects on the development of higher-level skills such as social communication. Multisensory facilitation was assessed using a target detection paradigm in 45 autistic and 111 neurotypical individuals, matched on age and IQ. Target stimuli were: auditory (A; 3500 Hz tone), visual (V; white disk ‘flash’) or audiovisual (AV; simultaneous tone and flash), and were presented on a dark background in a randomized order with varying stimulus onset delays. Reaction time (RT) was recorded via button press. In order to assess possible developmental effects, participants were divided into younger (age 14 or younger) and older (age 15 and older) groups. Redundancy gain (RG) was significantly greater in neurotypical compared to autistic individuals. No significant effect of age or interaction was found. Race model analysis was used to compute a bound value that represented the facilitation effect provided by MSI. Our results revealed that MSI facilitation occurred (violation of the race model) in neurotypical individuals, with more efficient MSI in older participants. In both the younger and older autistic groups, we found reduced MSI facilitation (no or limited violation of the race model). Autistic participants showed reduced multisensory facilitation compared to neurotypical participants in a simple target detection task, devoid of social context. This remained consistent across age. Our results support evidence that autistic individuals do not integrate low-level, non-social information in a typical fashion, adding to the growing discussion around the influential effect that basic perceptual atypicalities may have on the development of higher-level, core aspects of autism.
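The race model analysis described in this abstract rests on Miller's race model inequality: if redundant-target speedups arise purely from two independent unisensory "races", the audiovisual RT distribution can be no faster than the sum of the unisensory distributions. A minimal sketch of this test follows; the function names and synthetic reaction times are illustrative assumptions, not the study's actual data or analysis pipeline.

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative probability P(RT <= t) for a sample of reaction times."""
    return float(np.mean(np.asarray(rts) <= t))

def race_model_violation(rt_a, rt_v, rt_av, times):
    """Per-time-point violation of Miller's race model inequality:
    P(RT_AV <= t) - min(1, P(RT_A <= t) + P(RT_V <= t)).
    Positive values indicate facilitation beyond what independent
    unisensory races can produce, i.e., evidence of integration."""
    violations = []
    for t in times:
        bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
        violations.append(ecdf(rt_av, t) - bound)
    return np.array(violations)

# Synthetic example: audiovisual RTs faster than either unisensory condition
rng = np.random.default_rng(0)
rt_a = rng.normal(350, 40, 200)   # auditory-only RTs (ms)
rt_v = rng.normal(380, 40, 200)   # visual-only RTs (ms)
rt_av = rng.normal(300, 35, 200)  # audiovisual RTs (ms)

times = np.arange(200, 500, 10)
viol = race_model_violation(rt_a, rt_v, rt_av, times)
print("max violation:", viol.max())
```

A positive maximum violation over some portion of the RT distribution is the signature the abstract refers to; the "no or limited violation" pattern in the autistic groups corresponds to this curve staying at or below zero.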
... Another potentially important consideration is our use of an intersensory design, whereby the auditory target is preceded by visual cues. Since there are well-documented deficits in multisensory integration in children with autism (71, 91, 93, 107–112), it is possible that impaired ITPC is due to poor cross-sensory communication and that what we are observing reflects a multisensory deficit rather than a general entrainment issue. Arguing against a pure multisensory deficit account, the ASD group showed faster RTs in the cued compared with the uncued condition, demonstrating intact use of a cross-sensory cue. ...
Article
Full-text available
Anticipating near-future events is fundamental to adaptive behavior, whereby neural processing of predictable stimuli is significantly facilitated relative to non-predictable events. Neural oscillations appear to be a key anticipatory mechanism by which processing of upcoming stimuli is modified, and they often entrain to rhythmic environmental sequences. Clinical and anecdotal observations have led to the hypothesis that people with Autism Spectrum Disorder (ASD) may have deficits in generating predictions, and as such, a candidate neural mechanism may be failure to adequately entrain neural activity to repetitive environmental patterns to facilitate temporal predictions. We tested this hypothesis by interrogating temporal predictions and rhythmic entrainment using behavioral and electrophysiological approaches. We recorded high-density electroencephalography in children with ASD and Typically Developing (TD) age- and IQ-matched controls, while they reacted to an auditory target as quickly as possible. This auditory event was either preceded by predictive rhythmic visual cues, or not. Both ASD and control groups presented comparable behavioral facilitation in response to the Cue vs. No-Cue condition, challenging the hypothesis that children with ASD have deficits in generating temporal predictions. Analyses of the electrophysiological data, in contrast, revealed significantly reduced neural entrainment to the visual cues, and altered anticipatory processes in the ASD group. This was the case despite intact stimulus evoked visual responses. These results support intact temporal prediction in response to a cue in ASD, in the face of altered entrainment and anticipatory processes.
... Age effects. Given the age range of the samples, effects of age were analyzed given previously reported differences observed between TD and ASD participants throughout development (Taylor et al., 2010; Stevenson et al., 2014). To explore such a possible age effect, a "double difference" comparing the impact of ambiguity on objects and faces was calculated for each individual [(ObjectHA-ObjectLA) - (FaceHA-FaceLA)]. ...
... One tool that has been used frequently to measure multisensory integration of audiovisual speech in children with autism is the McGurk effect (see Zhang et al., 2019 for a review), wherein incongruent multisensory speech (e.g., concurrent presentation of an auditory "ba" and visual "ga") often leads to the perception of a fused percept (i.e., "da" or "tha"; McGurk & MacDonald, 1976). Children with autism may perceive significantly fewer illusory percepts in response to McGurk stimuli compared to non-autistic peers (e.g., Bebko et al., 2014;Irwin et al., 2011;Stevenson et al., 2014;Taylor et al., 2010), reflecting reduced visual influence on auditory speech perception or diminished multisensory integration, but these differences have not been consistently observed across the extant literature (e.g., Keane et al., 2010;Saalasti et al., 2011;Stevenson et al., 2018;Woynaroski et al., 2013). ...
Article
Full-text available
Children with autism show alterations in multisensory integration that have been theoretically and empirically linked with the core and related features of autism. It is unclear, however, to what extent multisensory integration maps onto features of autism within children with and without autism. This study, thus, evaluates relations between audiovisual integration and core and related autism features across children with and without autism. Thirty-six children reported perceptions of the McGurk illusion during a psychophysical task. Parents reported on participants’ autistic features. Increased report of illusory percepts tended to covary with reduced autistic features and greater communication skill. Some relations, though, were moderated by group. This work suggests that associations between multisensory integration and higher-order skills are present, but in some instances vary according to diagnostic group.
... Similar auditory-visual speech discrepancies were observed in people with other mental disorders such as Alzheimer's disease [17]. A similar auditory-visual speech discrepancy was observed in people with Asperger's syndrome [18]; this discrepancy gradually dissipates and becomes negligible with age [19]. ...
Conference Paper
In this study, we tested two bipolar groups, manic-episode and depressive-episode patients, and healthy controls on a series of McGurk effect stimuli as well as auditory-only and visual-only (lip-read) speech stimuli. We hypothesized that the bipolar groups' auditory-visual speech integration would be weaker than that of the control group. We also predicted that the manic-episode bipolar individuals would integrate visual speech information more robustly than their depressive-episode counterparts. These hypotheses were not supported, and all groups were found to integrate auditory and visual speech information comparably. However, paradoxically, the depressive-episode individuals were unable to lip-read even though they were able to integrate the two sources of speech information. Here, we discuss and try to make sense of these behavioural data, and we offer predictions about how corresponding physiological data could help decipher our findings. Building on this discussion, physiological predictions and possibilities are presented.
... Indeed, understanding these mechanisms is not only important from a comparative perspective with our own species, but may represent a fundamental contribution to issues concerning mental health. In particular, autistic individuals are often challenged by understanding social scenes, including the integration of auditory and visual information (Feldman et al., 2018; Stevenson, Siemann, Schneider, et al., 2014; Stevenson, Siemann, Woynaroski, et al., 2014a, b). Such deficits may result not only from deficits in face and voice processing on their own, but also from difficulty integrating each modality in the service of predicting and understanding social interactions. ...
Preprint
Full-text available
Social interactions rely on the interpretation of semantic and emotional information, often from multiple sensory modalities. In primates, both audition and vision serve the interpretation of communicative signals. Autistic individuals present deficits in both social communication and audio-visual integration. At present, the neural mechanisms subserving the interpretation of complex audio-visual social events are unknown. Based on heart rate estimates and functional neuroimaging, we show that macaque monkeys associate affiliative facial expressions or social scenes with corresponding affiliative vocalizations, aggressive expressions or scenes with corresponding aggressive vocalizations, and escape visual scenes with scream vocalizations, while suppressing vocalizations that are incongruent with the visual context. This process is subserved by two distinct functional networks, homologous to the human emotional and attentional networks activated during the processing of visual social information. These networks are thus critical for the construction of social meaning representation, and provide grounds for the audio-visual deficits observed in autism. One-sentence summary: Macaques extract social meaning from visual and auditory input, recruiting face and voice patches and a broader emotional and attentional network.
... Although speculative, these results may suggest that developmental changes due to neurobiological or experiential factors (or both) progressively regularize cross-modal integration abilities in AS (Poole et al. 2015). Studies examining developmental trajectories of multisensory processing for linguistic information in AS individuals have observed an improvement in the ability to integrate audio-visual syllables with age (Taylor et al. 2010; although see Stevenson et al. 2014b). Cross-modal integration abilities are far from mature at birth, but rather develop over a protracted period of time and strongly depend on sensory experiences (see Stein et al. 2014, for a review). ...
Article
Full-text available
Although impairment in sensory integration is suggested in the autism spectrum (AS), empirical evidence remains equivocal. We assessed the integration of low-level visual and tactile information within and across modalities in AS and typically developing (TD) individuals. TD individuals demonstrated increased redundancy gain for cross-modal relative to double tactile or visual stimulation, while AS individuals showed similar redundancy gain between cross-modal and double tactile conditions. We further observed that violation of the race model inequality for cross-modal conditions was observed over a wider proportion of the reaction time distribution in TD than AS individuals. Importantly, the reduced cross-modal integration in AS individuals was not related to atypical attentional shift between modalities. We conclude that AS individuals display a selective decrease in cross-modal integration of low-level information.
... Our findings have important clinical implications for severe psychiatric conditions like autism and schizophrenia, in which a widened TBW has been demonstrated (for example, Stevenson et al., 2014c; Hass et al., 2017). Because even healthy subjects benefit from our training with respect to speech intelligibility, one can assume that subjects with a chronically widened TBW would do so even more. ...
Article
Full-text available
Our ability to integrate multiple sensory-based representations of our surroundings supplies us with a more holistic view of our world. There are many complex algorithms our nervous system uses to construct a coherent perception. One cue for solving this ‘binding problem’ is timing: environmental information propagates at different speeds (e.g., sound versus electromagnetic waves) and incurs different sensory processing times, so the temporal relationship of a stimulus pair derived from the same event must be flexibly adjusted by our brain. This tolerance can be conceptualized in the form of the cross-modal temporal binding window (TBW). Several studies showed the plasticity of the TBW and its importance concerning audio-visual illusions, synesthesia, as well as psychiatric disturbances. Using three audio-visual paradigms, we investigated the importance of length (short vs. long) as well as modality (uni- vs. multimodal) of a perceptual training aimed at reducing the TBW in a healthy population. We also investigated the influence of the TBW on speech intelligibility, where participants had to integrate auditory and visual speech information from a videotaped speaker. We showed that simple sensory training can change the TBW and is capable of optimizing speech perception at a very naturalistic level. While the training length had no differential effect on the malleability of the TBW, the multisensory trainings induced a significantly stronger narrowing of the TBW than their unisensory counterparts. Furthermore, a narrowing of the TBW was associated with better performance in speech perception, meaning that participants showed a greater capacity for integrating information from different sensory modalities in situations in which one modality is impaired. All effects persisted at least seven days.
Our findings show the significance of multisensory temporal processing regarding ecologically valid measures and have important clinical implications for interventions that may be used to alleviate debilitating conditions (e.g., autism, schizophrenia), in which multisensory temporal function is shown to be impaired.
... Given the low number of participants in this initial preliminary cohort, it will be important to replicate with a large cohort. Additional research into visual conjunctive processing in ASD should include a larger number of children, adolescents, and adults, as there have been multiple studies of perception of social stimuli that show that the differences observed between TD and ASD participants changes throughout development (Taylor et al., 2010;Stevenson et al., 2014b). ...
Article
Full-text available
Face processing in autism spectrum disorder (ASD) is thought to be atypical, but it is unclear whether differences in visual conjunctive processing are specific to faces. To address this, we adapted a previously established eye-tracking paradigm which modulates the need for conjunctive processing by varying the degree of feature ambiguity in faces and objects. Typically-developed (TD) participants showed a canonical pattern of conjunctive processing: High-ambiguity objects were processed more conjunctively than low-ambiguity objects, and faces were processed in an equally conjunctive manner regardless of ambiguity level. In contrast, autistic individuals did not show differences in conjunctive processing based on stimulus category, providing evidence that atypical visual conjunctive processing in ASD is the result of a domain general lack of perceptual specialization. © 2019 Stevenson, Philipp-Muller, Hazlett, Wang, Luk, Lee, Black, Yeung, Shafai, Segers, Feber and Barense.
... More specifically, the ability to process and integrate what is seen with what is heard is impacted in ASD (Baum, Stevenson, & Wallace, 2015;Bebko, Weiss, Demark, & Gomez, 2006;Iarocci & McDonald, 2006;Stevenson et al., 2014aStevenson et al., , 2014b. Perhaps the most classic example of audiovisual integration is the McGurk effect (McGurk & MacDonald, 1976), where an individual is presented with the auditory syllable ba and a speaker visually articulating the syllable ga. ...
Article
Autism spectrum disorder is a neurodevelopmental disorder that is characterized by impairments in social communication, restricted interests, and repetitive behaviors. Many studies have demonstrated atypical responses to audiovisual sensory inputs, particularly those containing sociolinguistic information. It is currently unclear whether these atypical responses are due to the linguistic nature of the inputs or the social aspect itself. Further, it is unclear how atypical sensory responses to sociocommunicative stimuli intersect with autism symptomatology. The current study addressed these outstanding questions by using pupillometry in mental age-matched children with and without autism (N = 71) to examine physiological responses to dynamic, audiovisual stimuli including social, sociolinguistic, socioemotional, and nonsocial stimuli, as well as to temporally manipulated stimuli. Data revealed group differences in pupillary responses with social stimuli but not nonsocial stimuli and, importantly, showed no variation through the inclusion of linguistic or emotional information. This suggests that atypical sensory responses are driven primarily by the inclusion of social information broadly. Further, individual responses to social stimuli were significantly correlated with a wide range of autism spectrum disorder symptomatology, including social communication, restricted interests and repetitive behaviors, and sensory processing issues. Pupillary responses to social but not nonsocial presentation were also capable of predicting diagnosis with a high level of selectivity, but only with marginal sensitivity. Finally, responses to the temporal manipulation did not yield any group differences, suggesting that while atypical multisensory temporal processing has been well documented in autism at the level of behavior and perception, these issues may be intact at the physiological level.
Article
The objective of this study was to identify and synthesize research about how sensory factors affect daily life of children. We designed a conceptual model to guide a scoping review of research published from 2005 to October 2014 (10 years). We searched MEDLINE, CINAHL, and PsycINFO and included studies about sensory perception/processing; children, adolescents/young adults; and participation. We excluded studies about animals, adults, and review articles. Our process resulted in 261 articles meeting criteria. Research shows that children with conditions process sensory input differently than peers. Neuroscience evidence supports the relationship between sensory-related behaviors and brain activity. Studies suggest that sensory processing is linked to social participation, cognition, temperament, and participation. Intervention research illustrates the importance of contextually relevant practices. Future work can examine the developmental course of sensory processing aspects of behavior across the general population and focus on interventions that support children’s sensory processing as they participate in their daily lives.
Article
Introduction: In the present study we were interested in the processing of audio-visual integration in schizophrenia compared to healthy controls. The amount of sound-induced double-flash illusions served as an indicator of audio-visual integration. We expected altered integration as well as a different window of temporal integration for patients. Methods: Fifteen schizophrenia patients and 15 healthy volunteers matched for age and gender were included in this study. We used stimuli with eight different temporal delays (stimulus onset asynchronies (SOAs) of 25, 50, 75, 100, 125, 150, 200 and 300 ms) to induce a double-flash illusion. Group differences and the widths of temporal integration windows were calculated on percentages of reported double-flash illusions. Results: Patients showed significantly more illusions (ca. 36-44% vs. 9-16% in control subjects) for SOAs of 150-300 ms. The temporal integration window for control participants spanned SOAs from 25 to 200 ms, whereas for patients integration was found across all included temporal delays. We found no significant relationship between the amount of illusions and either illness severity, chlorpromazine equivalent doses or duration of illness in patients. Conclusions: Our results are interpreted in favour of an enlarged temporal integration window for audio-visual stimuli in schizophrenia patients, which is consistent with previous research.
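The window-width comparison in this abstract amounts to finding the range of SOAs over which illusion rates exceed a baseline criterion. A minimal sketch of that logic is below; the per-SOA percentages and the 20% criterion are illustrative assumptions loosely patterned on the numbers reported above, not the study's actual data or analysis.

```python
import numpy as np

# Illustrative per-SOA double-flash illusion rates (%) -- hypothetical values,
# only loosely patterned on the group differences described in the abstract.
soas = np.array([25, 50, 75, 100, 125, 150, 200, 300])   # ms
control = np.array([55, 48, 40, 30, 22, 16, 12, 6])      # % illusions reported
patients = np.array([58, 55, 52, 48, 45, 42, 40, 38])

def integration_window(soa, rate, baseline=20.0):
    """Widest SOA range over which the illusion rate exceeds a baseline
    criterion. Returns (min_soa, max_soa), or None if never exceeded."""
    above = soa[rate > baseline]
    if above.size == 0:
        return None
    return int(above.min()), int(above.max())

print(integration_window(soas, control))   # window closes at moderate delays
print(integration_window(soas, patients))  # illusions persist at all delays
```

With these assumed values the control window closes by 150 ms while the patient window extends to the longest delay tested, mirroring the "enlarged temporal integration window" pattern the abstract describes.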
Article
This scoping review provides a descriptive synthesis of available evidence on children's audiovisual speech perception. We used eight databases to identify the experimental studies published 2000–2019, and reported the data using the guidelines of PRISMA-ScR designed for scoping reviews. While research conducted prior to 2000 provided a strong foundation in this area, the past two decades have brought technical advances that have allowed for more precise measurement of audiovisual speech perception. Thirty-eight studies were identified: 18 articles that focused on children with typical development, 9 focused on children with autism spectrum disorder, 8 focused on children with speech and language disorders, and 3 focused on children with hearing loss. Most of the studies identified were behavioral studies, while a minority reported on neuroanatomical correlates underlying the audiovisual speech perception. Through this scoping review, key gaps were identified that include few studies in clinical populations, a few studies on languages other than English, and variability in terminology to describe similar or overlapping concepts. Further research is needed to inform the development and mechanisms of audiovisual speech integration in children with different language development paths. In addition, the use of common terminology in future research would improve access to evidence and the communication of this knowledge for researchers and clinicians.
Preprint
Full-text available
Anticipating near-future events is fundamental to adaptive behavior, whereby neural processing of predictable stimuli is significantly facilitated relative to non-predictable inputs. Neural oscillations appear to be a key anticipatory mechanism by which processing of upcoming stimuli is modified, and they often entrain to rhythmic environmental sequences. Clinical and anecdotal observations have led to the hypothesis that people with Autism Spectrum Disorder (ASD) may have deficits in generating predictions in daily life, and as such, a candidate neural mechanism may be failure to adequately entrain neural activity to repetitive environmental patterns. Here, we tested this hypothesis by interrogating rhythmic entrainment both behaviorally and electrophysiologically. We recorded high-density electroencephalography in children with ASD (n=31) and Typically Developing (TD) age- and IQ-matched controls (n=20), while they reacted to an auditory target as quickly as possible. This auditory event was either preceded by predictive rhythmic visual cues, or not. Results showed that while both groups presented highly comparable evoked responses to the visual stimuli, children with ASD showed reduced neural entrainment to the rhythmic visual cues, and altered anticipation of the occurrence of these stimuli. Further, in both groups, neuro-oscillatory phase coherence correlated with behavior. These results describe neural processes that may underlie impaired event anticipation in children with ASD, and support the notion that their perception of events is driven more by instantaneous sensory inputs and less by their temporal predictability.
Article
Full-text available
Atypical face perception has been associated with the socio-communicative difficulties that characterize autism spectrum disorder (ASD). Growing evidence, however, suggests that a widespread impairment in face perception is not as common as once thought. One important issue arising with the interpretation of this literature is the relationship between face processing and a more general perceptual tendency to focus on local rather than global information. Previous work has demonstrated that when discriminating faces presented from the same view, older adolescents and adults with ASD perform similarly to typically developing individuals. When faces are presented from different views, however, they perform more poorly, specifically when access to local cues is minimized. In this study, we assessed the cross-sectional development of face identity discrimination across viewpoint using same- and different-view conditions in children and adolescents with and without ASD. Contrary to the findings in adults, our results revealed that all participants experienced greater difficulty identifying faces from different views than from same views, and demonstrated similar age-expected improvements in performance across tasks. These results suggest that differences in face discrimination across views may only emerge beyond the age of 15 years in ASD.
Article
The temporal relationship between auditory and visual cues is a fundamental feature in the determination of whether these signals will be integrated. The temporal binding window (TBW) is a construct that describes the epoch of time during which asynchronous auditory and visual stimuli are likely to be perceptually bound. Recently, a number of studies have demonstrated the capacity for perceptual training to enhance temporal acuity for audiovisual stimuli (i.e., narrow the TBW). These studies, however, have only examined multisensory perceptual learning that develops in response to feedback that is provided when making judgments on simple, low-level audiovisual stimuli (i.e., flashes and beeps). Here we sought to determine if perceptual training was capable of altering temporal acuity for audiovisual speech. Furthermore, we also explored whether perceptual training with simple or complex audiovisual stimuli generalized across levels of stimulus complexity. Using a simultaneity judgment (SJ) task, we measured individuals' temporal acuity (as estimated by the TBW) prior to, immediately following, and one week after four consecutive days of perceptual training. We report that temporal acuity for audiovisual speech stimuli is enhanced following perceptual training using speech stimuli. Additionally, we find that changes in temporal acuity following perceptual training do not generalize across the levels of stimulus complexity in this study. Overall, the results suggest that perceptual training is capable of enhancing temporal acuity for audiovisual speech in adults, and that the dynamics of the changes in temporal acuity following perceptual training differ between simple audiovisual stimuli and more complex audiovisual speech stimuli.
Article
Full-text available
DSM-5 Autism Spectrum Disorder (ASD) comprises a set of neurodevelopmental disorders characterized by deficits in social communication and interaction and repetitive behaviors or restricted interests, and may both affect and be affected by multiple cognitive mechanisms. This study attempts to identify and characterize cognitive subtypes within the ASD population using a random forest (RF) machine learning classification model. We trained our model on measures from seven tasks that reflect multiple levels of information processing. 47 ASD diagnosed and 58 typically developing (TD) children between the ages of 9 and 13 participated in this study. Our RF model was 72.7% accurate, with 80.7% specificity and 63.1% sensitivity. Using the RF model, we measured the proximity of each subject to every other subject, generating a distance matrix between participants. This matrix was then used in a community detection algorithm to identify subgroups within the ASD and TD groups, revealing 3 ASD and 4 TD putative subgroups with unique behavioral profiles. We then examined differences in functional brain systems between diagnostic groups and putative subgroups using resting-state functional connectivity magnetic resonance imaging (rsfcMRI). Chi-square tests revealed a significantly greater number of between group differences (p < .05) within the cingulo-opercular, visual, and default systems as well as differences in inter-system connections in the somato-motor, dorsal attention, and subcortical systems. Many of these differences were primarily driven by specific subgroups suggesting that our method could potentially parse the variation in brain mechanisms affected by ASD.
Article
Lay summary: Speech processing relies heavily on the integration of auditory and visual information, and it has been suggested that the ability to detect correspondence between auditory and visual signals helps to lay the foundation for successful language development. The goal of the present study was to explore whether children with ASD process audio-visual synchrony in ways comparable to their typically developing peers, and the relationship between preference for synchrony and language ability. Results showed that there are differences in attention to audiovisual synchrony between typically developing children and children with ASD. Preference for synchrony was related to the language abilities of children across groups.
Article
Previously, we have shown that people who have had one eye surgically removed early in life during visual development have enhanced sound localization [1] and lack visual dominance, commonly observed in binocular and monocular (eye-patched) viewing controls [2]. Despite these changes, people with one eye integrate auditory and visual components of multisensory events optimally [3]. The current study investigates how people with one eye perceive the McGurk effect, an audiovisual illusion where a new syllable is perceived when visual lip movements do not match the corresponding sound [4]. We compared individuals with one eye to binocular and monocular viewing controls and found that they have a significantly smaller McGurk effect compared to binocular controls. Additionally, monocular controls tended to perceive the McGurk effect less often than binocular controls suggesting a small transient modulation of the McGurk effect. These results suggest altered weighting of the auditory and visual modalities with both short and long-term monocular viewing. These results indicate the presence of permanent adaptive perceptual accommodations in people who have lost one eye early in life that may serve to mitigate the loss of binocularity during early brain development.
Preprint
Full-text available
A common measure of multisensory integration is the McGurk effect, an illusion in which incongruent auditory and visual speech are integrated to produce an entirely different percept. Published studies report that participants who differ in age, gender, culture, native language, or traits related to neurological or psychiatric disorders also differ in their susceptibility to the McGurk effect. These group-level differences are used as evidence for fundamental alterations in sensory processing between populations. However, there is high variability in the McGurk effect within groups: some participants never report the illusion while others always do. High within-group variability means that sample sizes much larger than those in the published literature are necessary to accurately estimate between-group differences. Using empirical data and simulations, we show that a typical study with a sample size of 15 participants per group and a true group difference of 10% will report a group difference of 31%, an effect inflation of three-fold. Reducing the effect inflation by half requires increasing the sample size ten-fold, to 150 participants per group. The combination of within-group variability and small sample sizes explains two prominent features of the McGurk literature. First, group differences are inflated and highly variable across studies. Second, follow-up studies fail to replicate published reports of large between-group susceptibility differences because initial effect-size estimates are driven by statistical inflation rather than true group differences. These replication failures are especially problematic in studies of clinical populations because they have real-world consequences. Due to the high within-group variability, caution is essential when using the McGurk effect to assess deficits in clinical groups or propose treatment plans, such as training to improve sensory processing.
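The inflation argument in this abstract can be illustrated with a small Monte Carlo sketch: draw each participant's McGurk susceptibility from a wide, U-shaped distribution (many never-perceivers and always-perceivers), impose a modest true group shift, and compare the average absolute observed difference at small versus large sample sizes. The distribution, shift, and sample sizes below are illustrative assumptions, not the paper's actual simulation parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_group_difference(n_per_group, true_diff=0.10, n_sims=2000):
    """Monte Carlo sketch: each participant's susceptibility (proportion of
    illusory percepts) is drawn from Beta(0.6, 0.6), a U-shaped distribution
    capturing high within-group variability; one group's mean is shifted by
    `true_diff`. Returns the mean absolute observed group difference across
    simulated studies -- a proxy for the average reported effect size."""
    observed = []
    for _ in range(n_sims):
        g1 = rng.beta(0.6, 0.6, n_per_group)
        g2 = np.clip(rng.beta(0.6, 0.6, n_per_group) + true_diff, 0.0, 1.0)
        observed.append(abs(g2.mean() - g1.mean()))
    return float(np.mean(observed))

small = simulate_group_difference(15)    # typical published sample size
large = simulate_group_difference(150)   # ten-fold larger sample
print(f"mean |observed difference|, n=15:  {small:.2f}")
print(f"mean |observed difference|, n=150: {large:.2f}")
# The small-sample estimate exceeds the large-sample one: with high
# within-group variance, sampling noise inflates the average reported effect.
```

The exact inflation factor depends on the assumed susceptibility distribution, but the qualitative pattern matches the abstract's point: the average absolute observed difference shrinks toward the true shift only as the per-group sample size grows.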
Chapter
Autism spectrum disorder (ASD) is a neurodevelopmental disorder that is characterized by a constellation of symptoms, including impairments in social communication, restricted interests, and repetitive behaviors. Although sensory issues have long been reported in clinical descriptions of ASD, only the most recent edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) has included differences in sensory processing as part of the diagnostic profile for ASD. Indeed, sensory processing differences are among the most prevalent findings in ASD, and these differences are increasingly recognized as a core component of ASD. Furthermore, characterizing ASD phenotypes on the basis of sensory processing differences has been suggested as a constructive means of creating phenotypic subgroups of ASD, which may be useful to better tailor individualized treatment strategies. Although sensory processing differences are frequently approached from the perspective of deficits in the context of ASD, there are a number of instances in which individuals with ASD outperform their neurotypical counterparts on tests of sensory function. Here, the current state of knowledge regarding sensory processing in ASD is reviewed, with a particular emphasis on auditory and multisensory (i.e., audiovisual) performance. In addition to characterizing the nature of these differences in sensory performance, the chapter focuses on the neurological correlates of these sensory processing differences and how differences in sensory function relate to the other core clinical features of ASD, with an emphasis on speech and language.
Article
Full-text available
Autism spectrum disorder (ASD) is a heterogeneous syndrome characterized by behavioral features such as impaired social communication, repetitive behavior patterns, and a lack of interest in novel objects. Multimodal neuroimaging using magnetic resonance imaging (MRI) in patients with ASD reveals highly heterogeneous functional and structural brain abnormalities associated with specific behavioral features. To elucidate the mechanism of ASD, several ASD mouse models have been generated by focusing on some of the ASD risk genes. A specific behavioral feature of an ASD mouse model is caused by an altered gene expression or a modification of a gene product. Using these mouse models, high-field preclinical MRI enables us to non-invasively investigate the neuronal mechanisms of the altered brain function associated with the behavior and ASD risk genes. Thus, MRI is a promising translational approach to bridge the gap between mice and humans. This review presents the evidence for multimodal MRI, including functional MRI (fMRI), diffusion tensor imaging (DTI), and volumetric analysis, in ASD mouse models and in patients with ASD and discusses future directions for the translational study of ASD.
Article
Full-text available
When the image of a speaker saying the bisyllable /aga/ is presented in synchrony with the sound of a speaker saying /aba/, subjects tend to report hearing the sound /ada/. The present experiment explores the effects of spatial separation on this class of perceptual illusion known as the McGurk effect. Synchronous auditory and visual speech signals were presented from different locations. The auditory signal was presented from positions 0°, 30°, 60° and 90° in azimuth away from the visual signal source. The results show that spatial incongruencies do not substantially influence the multimodal integration of speech signals.
Article
Full-text available
In the McGurk effect, perceptual identification of auditory speech syllables is influenced by simultaneous presentation of discrepant visible speech syllables. This effect has been found in subjects of different ages and with various native language backgrounds. But no McGurk tests have been conducted with prelinguistic infants. In the present series of experiments, 5-month-old English-exposed infants were tested for the McGurk effect. Infants were first gaze-habituated to an audiovisual /va/. Two different dishabituation stimuli were then presented: audio /ba/-visual /va/ (perceived by adults as /va/), and audio /da/-visual /va/ (perceived by adults as /da/). The infants showed generalization from the audiovisual /va/ to the audio /ba/-visual /va/ stimulus but not to the audio /da/-visual /va/ stimulus. Follow-up experiments revealed that these generalization differences were not due to a general preference for the audio /da/-visual /va/ stimulus or to the auditory similarity of /ba/ to /va/ relative to /da/. These results suggest that the infants were visually influenced in the same way as English-speaking adults are visually influenced.
Article
Full-text available
The Autism Diagnostic Observation Schedule-Generic (ADOS-G) is a semistructured, standardized assessment of social interaction, communication, play, and imaginative use of materials for individuals suspected of having autism spectrum disorders. The observational schedule consists of four 30-minute modules, each designed to be administered to different individuals according to their level of expressive language. Psychometric data are presented for 223 children and adults with Autistic Disorder (autism), Pervasive Developmental Disorder Not Otherwise Specified (PDDNOS) or nonspectrum diagnoses. Within each module, diagnostic groups were equivalent on expressive language level. Results indicate substantial interrater and test-retest reliability for individual items, excellent interrater reliability within domains and excellent internal consistency. Comparisons of means indicated consistent differentiation of autism and PDDNOS from nonspectrum individuals, with some, but less consistent, differentiation of autism from PDDNOS. A priori operationalization of DSM-IV/ICD-10 criteria, factor analyses, and ROC curves were used to generate diagnostic algorithms with thresholds set for autism and broader autism spectrum/PDD. Algorithm sensitivities and specificities for autism and PDDNOS relative to nonspectrum disorders were excellent, with moderate differentiation of autism from PDDNOS.
Article
Full-text available
The brain's ability to bind incoming auditory and visual stimuli depends critically on the temporal structure of this information. Specifically, there exists a temporal window of audiovisual integration within which stimuli are highly likely to be perceived as part of the same environmental event. Several studies have described the temporal bounds of this window, but few have investigated its malleability. Recently, our laboratory has demonstrated that a perceptual training paradigm is capable of eliciting a 40% narrowing in the width of this window that is stable for at least 1 week after cessation of training. In the current study, we sought to reveal the neural substrates of these changes. Eleven human subjects completed an audiovisual simultaneity judgment training paradigm, immediately before and after which they performed the same task during an event-related 3T fMRI session. The posterior superior temporal sulcus (pSTS) and areas of auditory and visual cortex exhibited robust BOLD decreases following training, and resting state and effective connectivity analyses revealed significant increases in coupling among these cortices after training. These results provide the first evidence of the neural correlates underlying changes in multisensory temporal binding, and likely represent the substrate for the narrowing of the multisensory temporal binding window.
Article
Full-text available
Autism spectrum disorders (ASD) are characterized by deficits in social reciprocity and communication, as well as by repetitive behaviors and restricted interests. Unusual responses to sensory input and disruptions in the processing of both unisensory and multisensory stimuli also have been reported frequently. However, the specific aspects of sensory processing that are disrupted in ASD have yet to be fully elucidated. Recent published work has shown that children with ASD can integrate low-level audiovisual stimuli, but do so over an extended range of time when compared with typically developing (TD) children. However, the possible contributions of altered unisensory temporal processes to the demonstrated changes in multisensory function are yet unknown. In the current study, unisensory temporal acuity was measured by determining individual thresholds on visual and auditory temporal order judgment (TOJ) tasks, and multisensory temporal function was assessed through a cross-modal version of the TOJ task. Whereas no differences in thresholds for the visual TOJ task were seen between children with ASD and TD, thresholds were higher in ASD on the auditory TOJ task, providing preliminary evidence for impairment in auditory temporal processing. On the multisensory TOJ task, children with ASD showed performance improvements over a wider range of temporal intervals than TD children, reinforcing prior work showing an extended temporal window of multisensory integration in ASD. These findings contribute to a better understanding of basic sensory processing differences, which may be critical for understanding more complex social and cognitive deficits in ASD, and ultimately may contribute to more effective diagnostic and interventional strategies.
Article
Full-text available
The importance of visual cues in speech perception is illustrated by the McGurk effect, whereby a speaker's facial movements affect speech perception. The goal of the present study was to evaluate whether the McGurk effect is also observed for sung syllables. Participants heard and saw sung instances of the syllables /ba/ and /ga/ and then judged the syllable they perceived. Audio-visual stimuli were congruent or incongruent (e.g., auditory /ba/ presented with visual /ga/). The stimuli were presented as spoken, sung in an ascending and descending triad (C E G G E C), and sung in an ascending and descending triad that returned to a semitone above the tonic (C E G G E C#). Results revealed no differences in the proportion of fusion responses between spoken and sung conditions confirming that cross-modal phonemic information is integrated similarly in speech and song.
Article
Full-text available
The bimodal perception of speech sounds was examined in children with autism as compared to mental age-matched typically developing (TD) children. A computer task was employed wherein only the mouth region of the face was displayed and children reported what they heard or saw when presented with consonant-vowel sounds in a unimodal auditory condition, a unimodal visual condition, and a bimodal condition. Children with autism showed less visual influence and more auditory influence on their bimodal speech perception as compared to their TD peers, largely due to significantly worse performance in the unimodal visual condition (lip reading). Children with autism may not benefit to the same extent as TD children from visual cues such as lip reading that typically support the processing of speech sounds. The disadvantage in lip reading may be detrimental when auditory input is degraded, for example in school settings, where speakers are communicating in frequently noisy environments.
Article
Full-text available
Autism spectrum disorders (ASD) form a continuum of neurodevelopmental disorders, characterized by deficits in communication and reciprocal social interaction, as well as by repetitive behaviors and restricted interests. Sensory disturbances are also frequently reported in clinical and autobiographical accounts. However, surprisingly few empirical studies have characterized the fundamental features of sensory and multisensory processing in ASD. The current study is structured to test for potential differences in multisensory temporal function in ASD by making use of a temporally dependent, low-level multisensory illusion. In this illusion, the presentation of a single flash of light accompanied by multiple sounds often results in the illusory perception of multiple flashes. By systematically varying the temporal structure of the audiovisual stimuli, a "temporal window" within which these stimuli are likely to be bound into a single perceptual entity can be defined. The results of this study revealed that children with ASD report the flash-beep illusion over an extended range of stimulus onset asynchronies relative to children with typical development, suggesting that children with ASD have altered multisensory temporal function. These findings provide valuable new insights into our understanding of sensory processing in ASD and may hold promise for the development of more sensitive diagnostic measures and improved remediation strategies.
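The "temporal window" construct this abstract describes is typically quantified by fitting a symmetric curve to illusion report rates across stimulus onset asynchronies (SOAs) and reading off the curve's width. The sketch below uses made-up SOAs and report rates (not the study's data) and assumes a Gaussian-plus-baseline model purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative data: proportion of trials on which the illusory extra flash
# was reported at each audiovisual SOA (ms). Values are invented for
# demonstration; they are not the study's data.
soa = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], float)
rate = np.array([0.08, 0.15, 0.55, 0.80, 0.90, 0.85, 0.60, 0.20, 0.10])

def gauss(x, amp, mu, sigma, base):
    """Gaussian tuning curve on top of a baseline report rate."""
    return base + amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

params, _ = curve_fit(gauss, soa, rate, p0=[0.8, 0.0, 100.0, 0.1])
amp, mu, sigma, base = params
fwhm = 2.355 * abs(sigma)  # full width at half maximum ~ "temporal window"
print(f"window centre {mu:.0f} ms, width (FWHM) {fwhm:.0f} ms")
```

Under this reading, the finding of an "extended range" in ASD corresponds to a larger fitted width (FWHM) for the ASD group than for typically developing children.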
Article
Full-text available
This study aimed to investigate the development of audiovisual integration in children with Autism Spectrum Disorder (ASD). Audiovisual integration was measured using the McGurk effect in children with ASD aged 7-16 years and typically developing children (control group) matched approximately for age, sex, nonverbal ability and verbal ability. Results showed that the children with ASD were delayed in visual accuracy and audiovisual integration compared to the control group. However, in the audiovisual integration measure, children with ASD appeared to 'catch-up' with their typically developing peers at the older age ranges. The suggestion that children with ASD show a deficit in audiovisual integration which diminishes with age has clinical implications for those assessing and treating these children.
Article
Full-text available
The brain's ability to bind incoming auditory and visual stimuli depends critically on the temporal structure of this information. Specifically, there exists a temporal window of audiovisual integration within which stimuli are highly likely to be bound together and perceived as part of the same environmental event. Several studies have described the temporal bounds of this window, but few have investigated its malleability. Here, the plasticity in the size of this temporal window was investigated using a perceptual learning paradigm in which participants were given feedback during a two-alternative forced choice (2-AFC) audiovisual simultaneity judgment task. Training resulted in a marked (i.e., approximately 40%) narrowing in the size of the window. To rule out the possibility that this narrowing was the result of changes in cognitive biases, a second experiment using a two-interval forced choice (2-IFC) paradigm was undertaken during which participants were instructed to identify a simultaneously presented audiovisual pair presented within one of two intervals. The 2-IFC paradigm resulted in a narrowing that was similar in both degree and dynamics to that using the 2-AFC approach. Together, these results illustrate that different methods of multisensory perceptual training can result in substantial alterations in the circuits underlying the perception of audiovisual simultaneity. These findings suggest a high degree of flexibility in multisensory temporal processing and have important implications for interventional strategies that may be used to ameliorate clinical conditions (e.g., autism, dyslexia) in which multisensory temporal function may be impaired.
Article
Full-text available
We describe recent progress in our program of research that aims to use functional magnetic resonance imaging (fMRI) to identify and delineate the brain systems involved in social perception and to chart the development of those systems and their roles as mechanisms supporting the development of social cognition in children, adolescents, and adults with and without autism. This research program was initiated with the intention of further specifying the role of the posterior superior temporal sulcus (STS) region in the network of neuroanatomical structures comprising the social brain. Initially, this work focused on evaluating STS function when typically developing adults were engaged in the visual analysis of other people's actions and intentions. We concluded that the STS region plays an important role in social perception via its involvement in representing and predicting the actions and social intentions of other people from an analysis of biological-motion cues. These studies of typically developing people provided a set of core findings and a methodological approach that informed a set of fMRI studies of social perception dysfunction in autism. The work has established that dysfunction in the STS region, as well as reduced connectivity between this region and other social brain structures including the fusiform gyrus and amygdala, play a role in the pathophysiology of social perception deficits in autism. Most recently, this research program has incorporated a developmental perspective in beginning to chart the development of the STS region in children with and without autism.
Article
Full-text available
Childhood autism is now widely viewed as being of developmental neurobiological origin. Yet localised structural and functional brain correlates of autism remain to be established. Structural brain-imaging studies performed in autistic patients have reported abnormalities such as increased total brain volume and cerebellar abnormalities. However, none of these abnormalities fully accounts for the full range of autistic symptoms. Functional brain imaging, such as positron emission tomography (PET), single photon emission computed tomography (SPECT) and functional MRI (fMRI), has added a new perspective to the study of normal and pathological brain functions. In autism, functional studies have been performed at rest or during activation. However, first-generation functional imaging devices were not sensitive enough to detect any consistent dysfunction. Recently, with improved technology, two independent groups have reported bilateral hypoperfusion of the temporal lobes in autistic children. In addition, activation studies, using perceptive and cognitive paradigms, have shown an abnormal pattern of cortical activation in autistic patients. These results suggest that different connections between particular cortical regions could exist in autism. The purpose of this review is to present the main results of rest and activation studies performed in autism.
Article
No one doubts the importance of the face in social interactions, but people seldom think of it as playing much of a role in verbal communication. A number of observations suggest otherwise, though: Many people dislike talking over the telephone and are irritated by poorly dubbed foreign films. Some people even comment that they hear the television better with their glasses on. Children born blind learn some speech distinctions more slowly than their sighted cohorts. It has been well known for some time that the deaf and hearing impaired can make valuable use of lipreading, which is better termed speechreading, but more recently investigators have shown that even people with normal hearing are greatly influenced by the visible speech in face-to-face communication. Our research is aimed at understanding how people perceive speech by both ear and eye.
Article
An index is suggested that measures the change in Wilks' lambda that occurs when a variable is removed from a complete set that is purported to differentiate among groups.
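For a concrete sense of such an index, the sketch below computes Wilks' lambda (the ratio of the determinants of the within-group and total scatter matrices) on toy two-group data, then recomputes it with each variable removed. The toy data and the simple full-to-reduced lambda ratio are illustrative assumptions; the article's proposed index may be defined differently.

```python
import numpy as np

def wilks_lambda(X, labels):
    """Wilks' lambda = det(W) / det(T): within-group vs. total scatter.
    Smaller values indicate stronger group separation."""
    X = np.asarray(X, float)
    centered = X - X.mean(axis=0)
    T = centered.T @ centered                     # total scatter
    W = np.zeros_like(T)
    for g in np.unique(labels):
        Xg = X[labels == g] - X[labels == g].mean(axis=0)
        W += Xg.T @ Xg                            # pooled within-group scatter
    return np.linalg.det(W) / np.linalg.det(T)

# Toy data: two groups, two variables; variable 0 separates the groups,
# variable 1 is pure noise (values are illustrative, not from the article).
rng = np.random.default_rng(0)
g0 = np.column_stack([rng.normal(0, 1, 30), rng.normal(0, 1, 30)])
g1 = np.column_stack([rng.normal(2, 1, 30), rng.normal(0, 1, 30)])
X = np.vstack([g0, g1])
labels = np.array([0] * 30 + [1] * 30)

lam_full = wilks_lambda(X, labels)
for j in range(X.shape[1]):
    lam_red = wilks_lambda(np.delete(X, j, axis=1), labels)
    print(f"drop var {j}: lambda {lam_full:.3f} -> {lam_red:.3f}, "
          f"ratio {lam_full / lam_red:.3f}")
```

Removing the discriminating variable pushes lambda toward 1 (separation lost), while removing the noise variable barely changes it; a ratio of full to reduced lambda near 1 therefore flags a variable as dispensable.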
Article
Autistic children individually matched for mental age with normal subjects were tested on memory for unfamiliar faces and on lip-reading ability. The results show that autistic children are poorer than controls in memory for faces but comparable to controls in lip-reading. Autistic children's auditory speech perception shows little influence from visual speech. The results are discussed in relation to Bruce and Young's (1986) model of face recognition. The independence between facial speech and memory for faces is in accordance with this model but is only observed in autistic subjects.
Article
This study examined unisensory and multisensory speech perception in 8-17 year old children with autism spectrum disorders (ASD) and typically developing controls matched on chronological age, sex, and IQ. Consonant-vowel syllables were presented in visual only, auditory only, matched audiovisual, and mismatched audiovisual ("McGurk") conditions. Participants with ASD displayed deficits in visual only and matched audiovisual speech perception. Additionally, children with ASD reported a visual influence on heard speech in response to mismatched audiovisual syllables over a wider window of time relative to controls. Correlational analyses revealed associations between multisensory speech perception, communicative characteristics, and responses to sensory stimuli in ASD. Results suggest atypical speech perception is linked to broader behavioral characteristics of ASD.
Article
The importance of multisensory integration for human behavior and perception is well documented, as is the impact that temporal synchrony has on driving such integration. Thus, the more temporally coincident two sensory inputs from different modalities are, the more likely they will be perceptually bound. This temporal integration process is captured by the construct of the temporal binding window: the range of temporal offsets within which an individual is able to perceptually bind inputs across sensory modalities. Recent work has shown that this window is malleable and can be narrowed via a multisensory perceptual feedback training process. In the current study, we seek to extend this by examining the malleability of the multisensory temporal binding window through changes in unisensory experience. Specifically, we measured the ability of visual perceptual feedback training to induce changes in the multisensory temporal binding window. Visual perceptual training with feedback successfully improved temporal visual processing, and more importantly, this visual training increased the temporal precision across modalities, which manifested as a narrowing of the multisensory temporal binding window. These results are the first to establish the ability of unisensory temporal training to modulate multisensory temporal processes, findings that can provide mechanistic insights into multisensory integration and which may have a host of practical applications.
Article
"Oral speech intelligibility tests were conducted with, and without, supplementary visual observation of the speaker's facial and lip movements. The difference between these two conditions was examined as a function of the speech-to-noise ratio and of the size of the vocabulary under test. The visual contribution to oral speech intelligibility (relative to its possible contribution) is, to a first approximation, independent of the speech-to-noise ratio under test. However, since there is a much greater opportunity for the visual contribution at low speech-to-noise ratios, its absolute contribution can be exploited most profitably under these conditions." (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Replies to S. E. Edgell's (see record 1996-03503-001) comments on R. W. Frick's (see record 1995-27802-001) article regarding the null hypothesis. Frick argues that the good-effort criterion does violate minor meta-rules, whereas never allowing experimenters to accept the point null hypothesis violates a more important meta-rule. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Reisberg, McLean, and Goldfield (1987) have shown that vision plays a part in the perception of speech even when the auditory signal is clearly audible and intact. Using an alternative method, the present study replicated their finding. Clearly audible spoken messages were presented in audio-only and audio-visual conditions, and the adult participants' resulting comprehension was measured. Stories were presented in French (Experiment 1), in a Glaswegian accent (Experiment 2), and by presenting spoken information that was semantically and syntactically complex (Experiment 3). Three separate groups of 16 adult female participants aged 19-21 participated in the three experiments. In all three experiments, comprehension improved significantly when the speaker's face was visible.
Article
Children with autism were compared to developmentally matched children with Down syndrome or typical development in terms of their ability to visually orient to two social stimuli (name called, hands clapping) and two nonsocial stimuli (rattle, musical jack-in-the-box), and in terms of their ability to share attention (following another's gaze or point). It was found that, compared to children with Down syndrome or typical development, children with autism more frequently failed to orient to all stimuli, and that this failure was much more extreme for social stimuli. Children with autism who oriented to social stimuli took longer to do so compared to the other two groups of children. Children with autism also exhibited impairments in shared attention. Moreover, for both children with autism and Down syndrome, correlational analyses revealed a relation between shared attention performance and the ability to orient to social stimuli, but no relation between shared attention performance and the ability to orient to nonsocial stimuli. Results suggest that social orienting impairments may contribute to difficulties in shared attention found in autism.
Article
The VideoToolbox is a free collection of two hundred C subroutines for Macintosh computers that calibrates and controls the computer-display interface to create accurately specified visual stimuli. High-level platform-independent languages like MATLAB are best for creating the numbers that describe the desired images. Low-level, computer-specific VideoToolbox routines control the hardware that transforms those numbers into a movie. Transcending the particular computer and language, we discuss the nature of the computer-display interface, and how to calibrate and control it.
Article
Everyday experience involves the continuous integration of information from multiple sensory inputs. Such crossmodal interactions are advantageous since the combined action of different sensory cues can provide information unavailable from their individual operation, reducing perceptual ambiguity and enhancing responsiveness. The behavioural consequences of such multimodal processes and their putative neural mechanisms have been investigated extensively with respect to orienting behaviour and, to a lesser extent, the crossmodal coordination of spatial attention. These operations are concerned mainly with the determination of stimulus location. However, information from different sensory streams can also be combined to assist stimulus identification. Psychophysical and physiological data indicate that these two crossmodal processes are subject to different temporal and spatial constraints at both the behavioural and neuronal level and involve the participation of distinct neural substrates. Here we review the evidence for such a dissociation and discuss recent neurophysiological, neuroanatomical and neuroimaging findings that shed light on the mechanisms underlying crossmodal identification.