Article

Generality and specificity in the effects of musical expertise on perception and cognition

Abstract

Performing musicians invest thousands of hours becoming experts in a range of perceptual, attentional, and cognitive skills. The duration and intensity of musicians’ training – far greater than that of most educational or rehabilitation programs – provides a useful model to test the extent to which skills acquired in one particular context (music) generalize to different domains. Here, we asked whether the instrument-specific and more instrument-general skills acquired during professional violinists’ and pianists’ training would generalize to superior performance on a wide range of analogous (largely non-musical) skills, when compared to closely matched non-musicians. Violinists and pianists outperformed non-musicians on fine-grained auditory psychophysical measures, but surprisingly did not differ from each other, despite the different demands of their instruments. Musician groups did differ on a tuning system perception task: violinists showed the clearest biases towards the tuning system specific to their instrument, suggesting that long-term experience leads to selective perceptual benefits given a training-relevant context. However, we found only weak evidence of group differences in non-musical skills, with musicians differing marginally in one measure of sustained auditory attention, but not significantly on auditory scene analysis or multi-modal sequencing measures. Further, regression analyses showed that this sustained auditory attention metric predicted more variance in one auditory psychophysical measure than did musical expertise. Our findings suggest that specific musical expertise may yield distinct perceptual outcomes within contexts close to the area of training. Generalization of expertise to relevant cognitive domains may be less clear, particularly where the task context is non-musical.

... Auditory working memory for tones is an explicit cognitive task that has been examined in a number of studies in which improved performance is shown in groups of musicians compared to non-musicians [12,17,18]. Musicians have also been shown to have improved sustained attention and attention to pitch direction, as well as better general auditory cognition in terms of phonological working memory and speech-in-noise perception [19-22]. ...
... The studies that have tested the relationship between musical instrument experience and perceptual and cognitive abilities have used perceptual and cognitive tasks that differ markedly from each other [19]. There is a need in this area for perceptual and cognitive tasks that are more closely aligned in methodology and output metrics to facilitate the measurement of task effects unique to each domain. ...
... Musical sophistication is related to working memory precision but not perceptual precision. Previous work suggested that musicians have lower frequency discrimination thresholds than nonmusicians [5,19]. Other work suggests that working memory may drive these effects in certain circumstances [26,27]. ...
Article
Full-text available
Previous studies have found conflicting results between individual measures related to music and fundamental aspects of auditory perception and cognition. The results have been difficult to compare because of different musical measures being used and lack of uniformity in the auditory perceptual and cognitive measures. In this study we used a general construct of musicianship, musical sophistication, that can be applied to populations with widely different backgrounds. We investigated the relationship between musical sophistication and measures of perception and working memory for sound by using a task suitable to measure both. We related scores from the Goldsmiths Musical Sophistication Index to performance on tests of perception and working memory for two acoustic features—frequency and amplitude modulation. The data show that musical sophistication scores are best related to working memory for frequency in an analysis that accounts for age and non-verbal intelligence. Musical sophistication was not significantly associated with working memory for amplitude modulation rate or with the perception of either acoustic feature. The work supports a specific association between musical sophistication and working memory for sound frequency.
... Finally, the last category is listener experience, which here will be approximated by musical training as measured by the training sub-scale of the Goldsmiths Musical Sophistication Index (Gold-MSI). Musical expertise has been shown to affect low-level perceptual skills (Fujioka, Ross, Kakigi, Pantev, & Trainor, 2006; Fujioka, Trainor, Ross, Kakigi, & Pantev, 2004; Micheyl, Delhommeau, Perrot, & Oxenham, 2006; Skoe & Kraus, 2013; Strait, Parbery-Clark, Hittner, & Kraus, 2012), cognitive skills (Carey et al., 2015; Corrigall & Trainor, 2011; Franklin et al., 2008; Tsang & Conrad, 2011) and expectancies (Hansen & Pearce, 2014) in relation to both music and everyday listening situations. This thesis will address all these categories at least once throughout in varying combinations, with each chapter exploring one subset of auditory streaming. ...
... Beyond its effects on perception and the brain at the neural and structural levels, musical training has also been studied in relation to cognitive skills such as memory (Chan, Ho, & Cheung, 1998; Franklin et al., 2008; Strait et al., 2012), spatial-temporal skill (Gromko & Poorman, 1998; Hurwitz, Wolff, Bortnick, & Kokas, 1975; Rauscher & Hinton, 2011) and general IQ (Bilhartz, Bruhn, & Olson, 1999; Phillips, 1976), with links between musical training and reading comprehension (Corrigall & Trainor, 2011) or reading skill (Anvari, Trainor, Woodside, & Levy, 2002; Moreno, Friesen, & Bialystok, 2011; Tsang & Conrad, 2011) and music and speech processing (Strait & Kraus, 2011) also explored. This body of literature has a much more complex output of results than the sections discussed above: some find benefits of musical training (Chan et al., 1998), others do not (Haimson, Swain, & Winner, 2011; Steele, Ball, & Runk, 1997) and some find improvements to some abilities (i.e., auditory psychophysical measures) but not to others (i.e., multi-modal sequence processing; Carey et al., 2015). Careful consideration and appropriate controls should therefore be taken when evaluating effects of musical training on cognitive tasks. ...
... In some cases, these even exceed musical training group definitions (Dean et al., 2014a). Research in musical training suggests that it is a prolonged and focused exposure to music that causes differences in perception between musicians and non-musicians (Fujioka et al., 2006, 2004; Habib & Besson, 2009; Micheyl et al., 2006), but only for domain-specific (i.e., musical) tasks (Bigand & Poulin-Charronnat, 2006; Carey et al., 2015). Those who receive Western musical training are also more likely to be exposed to and understand Western classical music, where non-musicians may have less exposure to this genre (though exposure is unlikely to be zero) and more to popular genres such as rock, pop or dance. ...
Thesis
Full-text available
How do we know that a melody is a melody? In other words, how does the human brain extract melody from a polyphonic musical context? This thesis begins with a theoretical presentation of musical auditory scene analysis (ASA) in the context of predictive coding and rule-based approaches and takes methodological and analytical steps to evaluate selected components of a proposed integrated framework for musical ASA, unified by prediction. Predictive coding has been proposed as a grand unifying model of perception, action and cognition and is based on the idea that brains process error to refine models of the world. Existing models of ASA tackle distinct subsets of ASA and are currently unable to integrate all the acoustic and extensive contextual information needed to parse auditory scenes. This thesis proposes a framework capable of integrating all relevant information contributing to the understanding of musical auditory scenes, including auditory features, musical features, attention, expectation and listening experience, and examines a subset of ASA issues – timbre perception in relation to musical training, modelling temporal expectancies, the relative salience of musical parameters and melody extraction – using probabilistic approaches. Using behavioural methods, attention is shown to influence streaming perception based on timbre more than instrumental experience. Using probabilistic methods, information content (IC) for temporal aspects of music as generated by IDyOM (information dynamics of music; Pearce, 2005) is validated and, along with IC for pitch and harmonic aspects of the music, is subsequently linked to perceived complexity but not to salience. Furthermore, based on the hypotheses that a melody is internally coherent and the most complex voice in a piece of polyphonic music, IDyOM has been extended to extract melody from symbolic representations of chorales by J.S. Bach and a selection of string quartets by W.A. Mozart.
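For reference, the information content that IDyOM assigns to an event is the negative log probability of that event given its preceding context (this is the standard definition; exact model configurations vary):

\[ \mathrm{IC}(e_i \mid e_1, \ldots, e_{i-1}) = -\log_2 \, p(e_i \mid e_1, \ldots, e_{i-1}) \]

Higher IC therefore marks a less expected event, and the mean IC over a passage is commonly used as an estimate of its complexity.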
... Musical training has been put forth as a promising tool to promote auditory specialization and therefore support literacy skill development (Rolka & Silverman, 2015; Tallal & Gaab, 2006). Individuals with musical training have demonstrated heightened discrimination skills over nonmusicians for several components of auditory processing, including spectral features such as pitch (Amir, Amir, & Kishon-Rabin, 2003; Besson, Schon, Moreno, Santos, & Magne, 2007; Carey et al., 2015; Kishon-Rabin, Amir, Vexler, & Zaltz, 2001; Koelsch, Schroger, & Tervaniemi, 1999; Magne, Schon, & Besson, 2006; Micheyl, Delhommeau, Perrot, & Oxenham, 2006; Spiegel & Watson, 1984), and temporal features such as elements of timing (Cicchini, Arrighi, Cecchetti, Giusti, & Burr, 2012; Ehrle & Samson, 2005; Gaab et al., 2005; Rammsayer & Altenmüller, 2006). ...
... Accordingly, it is unclear whether musicians with dyslexia may be characterized by working memory deficits in general, or whether the auditory expertise afforded by musical training may be associated with some advantages in processing auditory information even when taxing the working memory system. Auditory sequencing is particularly of interest since attending to and reproducing auditory sequences is one of the primary auditory skills developed through musical training (Carey et al., 2015; Loui, Wessel, & Hudson Kam, 2010; Rohrmeier, Rebuschat, & Cross, 2011; van Zuijen, Sussman, Winkler, Naatanen, & Tervaniemi, 2005). Moreover, auditory sequencing has also been shown to be a critical building block for language (Tallal & Gaab, 2006; Tallal et al., 1985). ...
... The present findings suggest that musical training is associated with specialized auditory sequencing abilities in those with dyslexia, despite findings of poor auditory working memory performance in musicians with dyslexia. These results support our hypothesis that despite reading difficulties, musicians with dyslexia exhibit tone sequencing skills similar to those that have been shown to be developed and mastered in typical musicians (Carey et al., 2015; Loui et al., 2010; Rohrmeier et al., 2011; van Zuijen et al., 2005). These refined tone sequencing abilities in musicians with dyslexia are also in line with prior findings of auditory perception skills that did not significantly differ between musicians with dyslexia and typical musicians for nonlinguistic auditory tasks (Bishop-Liebler et al., 2014; Weiss et al., 2014). ...
Article
Previous research has suggested a link between musical training and auditory processing skills. Musicians have shown enhanced perception of auditory features critical to both music and speech, suggesting that this link extends beyond basic auditory processing. It remains unclear to what extent musicians who also have dyslexia show these specialized abilities, considering often-observed persistent deficits that coincide with reading impairments. The present study evaluated auditory sequencing and speech discrimination in 52 adults comprised of musicians with dyslexia, nonmusicians with dyslexia, and typical musicians. An auditory sequencing task measuring perceptual acuity for tone sequences of increasing length was administered. Furthermore, subjects were asked to discriminate three synthesized syllable continua varying in acoustic components of speech necessary for intraphonemic discrimination, which included spectral (formant frequency) and temporal (voice onset time (VOT) and amplitude envelope) features. Results indicate that musicians with dyslexia did not significantly differ from typical musicians and performed better than nonmusicians with dyslexia for auditory sequencing as well as discrimination of spectral and VOT cues within syllable continua. However, typical musicians demonstrated superior performance relative to both groups with dyslexia for discrimination of syllables varying in amplitude information. These findings suggest a distinct profile of speech processing abilities in musicians with dyslexia, with specific weaknesses in discerning amplitude cues within speech. Since these difficulties seem to remain persistent in adults with dyslexia despite musical training, this study only partly supports the potential for musical training to enhance auditory processing skills, suggested to be important for literacy, in individuals with dyslexia.
... There is growing evidence that musical experience shapes the brain both structurally (Gaser & Schlaug, 2003; Hyde et al., 2009) and functionally (Schlaug, 2001; Lappe et al., 2008). Cross-sectional comparisons of musicians and non-musicians have revealed musician advantages in various aspects of cognitive and sensory function including attention and inhibitory control (Bugos et al., 2007; Bialystok & Depape, 2009; Strait et al., 2010; Rodrigues et al., 2013; Moreno et al., 2014; Carey et al., 2015; Costa-Giomi, 2015), frequency discrimination (Tervaniemi et al., 2005; Micheyl et al., 2006) and backward masking (Strait et al., 2010) as well as neural processing of speech (Schön et al., 2004; Musacchia et al., 2007; Wong et al., 2007; Parbery-Clark et al., 2009; Strait et al., 2014). Although cross-sectional studies cannot differentiate effects of training from pre-existing differences, an increasing number of longitudinal studies shows the emergence of changes within individuals over time (Moreno et al., 2009, 2011; Tierney et al., 2013; Chobert et al., 2014; Kraus et al., 2014; Putkinen et al., 2014; Slater et al., 2014, 2015), suggesting that at least some musician enhancements may be due, in part, to experience-based plasticity rather than innate differences between those who pursue music training and those who do not (see Kraus & White-Schwoch, 2016 for review). ...
... There are very few studies investigating differential effects of musical expertise on cognitive function, although initial evidence suggests that cognitive enhancements may be independent of instrument (Carey et al., 2015) and extend to vocalists (Bialystok & Depape, 2009). However, Carey et al. (2015) found only weak evidence overall for the transfer of musical training to non-musical tasks and the factors contributing to the generalization of effects remain a point of ongoing discussion (for example, see Benz et al., 2015; Costa-Giomi, 2015 for review). It has been suggested that broader cognitive benefits of music training may be mediated by inhibitory control (Degé et al., 2011; Moreno & Farzan, 2015), therefore it is of particular interest to identify the components of musical activity that may be effective in strengthening inhibitory control. ...
Article
Full-text available
Comparisons of musicians and non-musicians have revealed enhanced cognitive and sensory processing in musicians, with longitudinal studies suggesting these enhancements may be due in part to experience-based plasticity. Here, we investigate the impact of primary instrument on the musician signature of expertise by assessing three groups of young adults: percussionists, vocalists, and non-musician controls. We hypothesize that primary instrument engenders selective enhancements reflecting the most salient acoustic features to that instrument, whereas cognitive functions are enhanced regardless of instrument. Consistent with our hypotheses, percussionists show more precise encoding of the fast-changing acoustic features of speech than non-musicians, whereas vocalists have better frequency discrimination and show stronger encoding of speech harmonics than non-musicians. There were no strong advantages to specialization in sight-reading vs. improvisation. These effects represent subtle nuances to the signature since the musician groups do not differ from each other in these measures. Interestingly, percussionists outperform both non-musicians and vocalists in inhibitory control. Follow-up analyses reveal that within the vocalists and non-musicians, better proficiency on an instrument other than voice is correlated with better inhibitory control. Taken together, these outcomes suggest the more widespread engagement of motor systems during instrumental practice may be an important factor for enhancements in inhibitory control, consistent with evidence for overlapping neural circuitry involved in both motor and cognitive control. These findings contribute to the ongoing refinement of the musician signature of expertise and may help to inform the use of music in training and intervention to strengthen cognitive function.
... For auditory perception, one long-recognized source of interindividual variability is musical training [19]. Musical training provides established benefits for music-related tasks, such as fine-grained pitch discrimination [6,20]. Generalization to basic auditory processes is still under scrutiny, however. ...
... No difference was observed across groups when comparing accuracy, as measured by the slopes of the psychometric functions (s, t(132) = 0.83, p = 0.4). Groups differed for extreme interval values, corresponding to small log-frequency distances between T1 and T2 (upper asymptote: t(132) = 2.72, p = 0.007; lower asymptote: t(132) = 3.60, p < 0.001), consistent with better fine frequency discrimination in musicians [20]. The point of subjective equality was equivalent for the two groups (t(132) = −0.69, p = 0.5). ...
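For context, the slope, asymptotes and point of subjective equality referred to above are parameters of a sigmoid fitted to response proportions as a function of the log-frequency interval. One common four-parameter logistic parameterization (the exact form fitted in the cited study may differ) is

\[ \psi(x) = \delta + \frac{\alpha - \delta}{1 + \exp\!\left(-(x - \mu)/s\right)} \]

where \(\delta\) and \(\alpha\) are the lower and upper asymptotes, \(\mu\) is the midpoint used as the point of subjective equality, and \(s\) controls the slope.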
... An effect of auditory expertise has been shown for informational masking. Musicians are less susceptible to informational masking than non-musicians when using complex maskers such as random tone clouds with a large degree of uncertainty [28], environmental sounds [20] or simultaneous speech [23]. This has been interpreted as better attentional resolution for musicians [28]. ...
Article
Full-text available
Because musicians are trained to discern sounds within complex acoustic scenes, such as an orchestra playing, it has been hypothesized that musicianship improves general auditory scene analysis abilities. Here, we compared musicians and non-musicians in a behavioural paradigm using ambiguous stimuli, combining performance, reaction times and confidence measures. We used ‘Shepard tones’, for which listeners may report either an upward or a downward pitch shift for the same ambiguous tone pair. Musicians and non-musicians performed similarly on the pitch-shift direction task. In particular, both groups were at chance for the ambiguous case. However, groups differed in their reaction times and judgements of confidence. Musicians responded to the ambiguous case with long reaction times and low confidence, whereas non-musicians responded with fast reaction times and maximal confidence. In a subsequent experiment, non-musicians displayed reduced confidence for the ambiguous case when pure-tone components of the Shepard complex were made easier to discern. The results suggest an effect of musical training on scene analysis: we speculate that musicians were more likely to discern components within complex auditory scenes, perhaps because of enhanced attentional resolution, and thus discovered the ambiguity. For untrained listeners, stimulus ambiguity was not available to perceptual awareness. This article is part of the themed issue ‘Auditory and visual scene analysis’.
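As background on the stimuli: a Shepard tone is a sum of octave-spaced sinusoids under a fixed spectral envelope, so it has a clear pitch class but an ambiguous pitch height, and a tritone step between two such tones is maximally ambiguous in perceived shift direction. The sketch below illustrates the general construction only; all parameter values are illustrative assumptions, not the stimuli of the cited study.

```python
import numpy as np

def shepard_tone(pitch_class_hz=261.63, dur=0.5, sr=44100,
                 octaves=range(-4, 6), center_hz=523.25, sigma_oct=1.5):
    """Octave-spaced sinusoids weighted by a fixed Gaussian envelope over
    log-frequency: clear pitch class (chroma), ambiguous pitch height."""
    t = np.arange(int(dur * sr)) / sr
    tone = np.zeros_like(t)
    for k in octaves:
        f = pitch_class_hz * 2.0 ** k              # octave-spaced component
        if not (20.0 <= f < sr / 2):               # keep audible and below Nyquist
            continue
        amp = np.exp(-0.5 * (np.log2(f / center_hz) / sigma_oct) ** 2)
        tone += amp * np.sin(2 * np.pi * f * t)
    return tone / np.max(np.abs(tone))             # normalise to [-1, 1]

# A tritone pair (e.g., C then F#) gives the ambiguous up/down case described above.
pair = np.concatenate([shepard_tone(261.63), shepard_tone(369.99)])
```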
... The beneficial effects of musical training on auditory cognition have long been recognized (Zatorre, 2005). Although the generalizability of musical training to higher cognitive and non-music-related tasks (beyond pitch processing) remains a matter of debate (Moreno and Bidelman, 2014; Ruggles et al., 2014; Carey et al., 2015), many studies report beneficial effects on auditory perception. For example, musical training has been suggested to increase auditory working memory (Zhang et al., 2020), aspects of auditory scene analysis (Pelofi et al., 2017), or inhibitory control (Slater et al., 2018). ...
... These were related to improved inhibitory control and have been shown to more strongly impact speech-in-noise perception compared to vocal training (Slater and Kraus, 2016; Slater et al., 2017, 2018). However, other studies reported no effect of the type of musical instrument on the training benefit for auditory psychophysical measures (comparing violinists and pianists: Carey et al., 2015), or on an age-related benefit from musical training for speech perception in noise (comparing several instrument families: Zhang et al., 2020). Both auditory-motor coupling (Assaneo and Poeppel, 2018; Assaneo et al., 2019a, 2021) and speech perception (Ghitza, 2011; Giraud and Poeppel, 2012; Gross et al., 2013; Hyafil et al., 2015; Rimmele et al., 2015, 2018, 2021; Kösem and van Wassenhove, 2017) have been related to rhythmic processing. ...
... Individuals with high vs. low synchrony behavior, despite strong differences in whether they received musical training and in their overall years of training, showed no differences in the type of musical instrument they were trained on. In line with our findings, others have shown no effect of the type of musical training on auditory psychophysical measures (comparing violinists and pianists: Carey et al., 2015), or on an age-related benefit from musical training for speech perception in noise (comparing several instrument families: Zhang et al., 2020). ...
Article
Full-text available
Musical training enhances auditory-motor cortex coupling, which in turn facilitates music and speech perception. How tightly the temporal processing of music and speech is intertwined is a topic of current research. We investigated the relationship between musical sophistication (Goldsmiths Musical Sophistication Index, Gold-MSI) and spontaneous speech-to-speech synchronization behavior as an indirect measure of speech auditory-motor cortex coupling strength. In a group of participants (n = 196), we tested whether the outcome of the spontaneous speech-to-speech synchronization test (SSS-test) can be inferred from self-reported musical sophistication. Participants were classified as high (HIGHs) or low (LOWs) synchronizers according to the SSS-test. HIGHs scored higher than LOWs on all Gold-MSI subscales (General Score, Active Engagement, Musical Perception, Musical Training, Singing Skills) except the Emotional Attachment scale. More specifically, compared to a previously reported German-speaking sample, HIGHs overall scored higher and LOWs lower. Compared to an estimated distribution of the English-speaking general population, our sample overall scored lower, with the scores of LOWs significantly differing from the normal distribution, with scores in the ∼30th percentile. While HIGHs more often reported musical training compared to LOWs, the distribution of training instruments did not vary across groups. Importantly, even after the highly correlated subscores of the Gold-MSI were decorrelated, the subscales Musical Perception and Musical Training in particular allowed the speech-to-speech synchronization behavior to be inferred. Differential effects of musical perception and training were observed, with training predicting audio-motor synchronization in both groups, but perception doing so only in the HIGHs. Our findings suggest that speech auditory-motor cortex coupling strength can be inferred from training and perceptual aspects of musical sophistication, suggesting shared mechanisms involved in speech and music perception.
... The extent to which the specific auditory skills developed through musical experience may transfer to non-musical domains is a matter of continuing debate [14-18]; however, numerous components of auditory processing that support the perception of speech in noise have been found to be strengthened in musicians, including syllable discrimination [19-21] and the processing of temporal speech cues [22-24], prosody [25], pitch [26-30] and melodic contour [31]. Further, musicians demonstrate enhanced auditory cognitive function such as working memory [32-34] and attention [6,35,36], as well as enhanced neural representation of speech when presented in acoustically-compromised conditions [6,8,9,37-40]. ...
... Research assessing the impact of musical skill on more general aspects of auditory and cognitive processing has also yielded mixed results, for example a recent study did not find any difference between musicians and non-musicians in multi-modal sequencing or auditory scene analysis, and the authors emphasize the importance of task context as a factor that may influence the transfer of musical skills to non-musical domains [18]. The complex processing demands of both speech and music may point to similarities that are important for transfer between these domains. ...
... Post hoc analyses revealed that Group 2 showed significant improvement after 2 years of musical training (year 1 pre-training assessment to year 3 post-training assessment: paired t(18); see Fig. 2). Significance thresholds were adjusted using Bonferroni corrections to allow for multiple comparisons. ...
... Auditory working memory for tones is an explicit cognitive task that has been examined in a number of studies in which improved performance is shown in groups of musicians compared to non-musicians (Ding et al., 2018; Talamini et al., 2021; Williamson et al., 2010). Musicians have also been shown to have improved sustained attention (Carey et al., 2015) and attention to pitch direction (Ouimet et al., 2012), as well as better general auditory cognition in terms of phonological working memory and speech-in-noise perception (Parbery-Clark et al., 2011). ...
... Previous studies of the relationship between musicality and perceptual and cognitive abilities have used perceptual and cognitive tasks that differ markedly from each other (Carey et al., 2015). There is a need in this area for perceptual and cognitive tasks that are more closely aligned in methodology and output metrics to facilitate comparison of task effects. ...
... Previous work suggested that musicians have lower frequency discrimination thresholds than non-musicians (Carey et al., 2015; Kishon-Rabin et al., 2001). Other work suggests that working memory may drive these effects in certain circumstances (Ahissar et al., 2009; Zhang et al., 2016). ...
Preprint
Full-text available
Musical engagement may be associated with better listening skills, such as the perception of and working memory for notes, in addition to the appreciation of musical rules. The nature and extent of this association is controversial. In this study we assessed the relationship between musical engagement and both sound perception and working memory. We developed a task to measure auditory perception and working memory for sound using a behavioural measure for both, precision. We measured the correlation between these tasks and musical sophistication based on a validated measure (the Goldsmiths Musical Sophistication Index) that can be applied to populations of both musicians and non-musicians. The data show that musical sophistication accounts for 21% of the variance in the precision of working memory for frequency in an analysis that accounts for age and non-verbal intelligence. Musical sophistication was not significantly associated with the precision of working memory for amplitude modulation rate or with the precision of perception of either acoustic feature. The work supports a specific association between musical sophistication and working memory for sound frequency.
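The abstract does not spell out how precision is estimated; in delayed-estimation paradigms of this kind it is commonly defined as the reciprocal of the standard deviation of the response error (sometimes corrected for chance-level responding), i.e., under that assumed convention,

\[ \text{precision} = \frac{1}{\mathrm{SD}(\hat{x} - x)} \]

where \(\hat{x}\) is the reported feature value (e.g., the adjusted frequency) and \(x\) is the target value.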
... Since Tierney et al. (2008) found that musicians were able to reproduce longer auditory sequences (spoken color names) than nonmusicians (but cf. Carey et al., 2015), we hypothesized that musicians would show better recall performance in the reproduction task than nonmusicians. However, since musicians do not appear to show better statistical learning ability than nonmusicians in artificial grammar learning studies (Loui et al., 2010;Rohrmeier et al., 2011), we hypothesized that error position effects would not differ between the groups. ...
... It is possible that this result reflects better working and short-term memory in our musician group, which would also be consistent with previous results (Franklin et al., 2008). On the other hand, using a paradigm very similar to the active reproduction task employed in the present experiments, Carey et al. (2015) did not find any difference in recall performance between nonmusicians and a group of musicians with at least as much musical training as the musicians who participated in Experiment 4. In light of these considerations, we suggest that recall performance may constitute a less robust and precise processing-based measure of statistical learning in reproduction tasks than the error position effect. ...
Article
Statistical learning plays an important role in acquiring the structure of cultural communication signals such as speech and music, which are both perceived and reproduced. However, statistical learning is typically investigated through passive exposure to structured signals, followed by offline explicit recognition tasks assessing the degree of learning. Such experimental approaches fail to capture statistical learning as it takes place and require post hoc conscious reflection on what is thought to be an implicit process of knowledge acquisition. To better understand the process of statistical learning in active contexts while addressing these shortcomings, we introduce a novel, processing-based measure of statistical learning based on the position of errors in sequence reproduction. Across five experiments, we employed this new technique to assess statistical learning using artificial pure-tone or environmental-sound languages with controlled statistical properties in passive exposure, active reproduction, and explicit recognition tasks. The new error position measure provided a robust, online indicator of statistical learning during reproduction, with little carryover from prior statistical learning via passive exposure and no correlation with recognition-based estimates of statistical learning. Error position effects extended consistently across auditory domains, including sequences of pure tones and environmental sounds. Whereas recall performance showed significant variability across experiments, and little evidence of being improved by statistical learning, the error position effect was highly consistent for all participant groups, including musicians and nonmusicians. We discuss the implications of these results for understanding psychological mechanisms underlying statistical learning and compare the evidence provided by different experimental measures.
... Many studies have reported that musicians have better cognitive abilities than nonmusicians (e.g., Moreno et al. 2011; Lee et al. 2007). A musician advantage has also been documented for memory and working memory (WM), in both children and adults (Chan et al. 1998), although these improved cognitive abilities may not necessarily transfer to the auditory domain (Carey et al. 2015). However, there is still a debate on whether music training improves cognition, or if individuals who succeed as musicians are those who already have higher cognitive abilities. ...
... p = 0.68). Even when comparing musician groups which are considered to have different demands on their ability to manipulate depth of pitch and amplitude modulations (i.e., violinists, who are required to fine-tune their intonation and vibrato, and pianists, who have fixed tuning), little difference has been found in AM or FM depth sensitivity between the groups (Ruggles et al. 2014; Carey et al. 2015). Since all of our participants in the musician group not only had over 10 years of music training but were also college students who were obtaining a minor or a major in music, there is no indication that our sample of musicians was not representative. ...
Article
Objectives: Speech-In-Noise (SIN) perception is essential for everyday communication. In most communication situations, the listener requires the ability to process simultaneous complex auditory signals in order to understand the target speech or target sound. As the listening situation becomes more difficult, the ability to distinguish between speech and noise becomes dependent on recruiting additional cognitive resources, such as working memory (WM). Previous studies have explored correlations between WM and SIN perception in musicians and non-musicians, with mixed findings. However, no study to date has examined the speech perception abilities of musicians and non-musicians with similar WM capacity. The objectives of this study were to investigate (1) whether musical experience results in improved listening in adverse listening situations, and (2) whether the benefit of musical experience can be separated from the effect of greater WM capacity. Design: Forty-nine young musicians and non-musicians were assigned to subgroups of high versus low WM, based on performance on the backward digit span test. To investigate the effects of music training and WM on SIN perception, performance was assessed on clinical tests of speech perception in background noise. Listening effort (LE) was assessed in a dual-task paradigm, and via self-report. We hypothesized that musicians would have an advantage when listening to SIN, at least in terms of reduced LE. Results: There was no statistically significant difference between musicians and nonmusicians, and no significant interaction between music training and WM on any of the outcome measures used in this study. However, a significant effect of WM on SIN ability was found on both the QuickSIN and the HINT tests. Conclusion: The results of this experiment suggest that music training does not provide an advantage in adverse listening situations either in terms of improved speech understanding or reduced LE. While musicians have been shown to have heightened basic auditory abilities, the effect on SIN performance may be more subtle. Our results also show that regardless of prior music training, listeners with high WM capacity are able to perform significantly better on speech-in-noise tasks.
... Goldsmiths Musical Sophistication Index. In line with recent research on musical abilities (e.g., Carey et al., 2015), we used the Goldsmiths Musical Sophistication Index (Müllensiefen, Gingras, Stewart, & Musil, 2014) to assess participants' musical experience. The Gold-MSI is a comprehensive instrument that was developed to account for individual differences in active musical engagement beyond the traditional focus on the ability to play an instrument. ...
... Also, different instruments might impact inhibitory control in various ways. Past research that compared two specific instrumental groups (violinists versus pianists) (Carey et al., 2015) found differences in terms of auditory skills (e.g., violinists showed a greater ability to detect small changes in pitch than both pianists and non-musicians), suggesting possible effects on general cognitive skills as well. ...
Conference Paper
Full-text available
The purpose of our study was to determine whether active musical engagement alleviates decline in inhibitory control due to cognitive aging. Given that musical training in young adults has been shown to improve attentional performance, we can expect this benefit to persist for older adults as well. With the help of the stop-signal procedure, we measured response inhibition of young and older adults who provided a self-reported assessment of their musical engagement, using the recently validated Goldsmiths Musical Sophistication Index. The Gold-MSI addresses a variety of musical activities and thus offers a more comprehensive measure than the ability to play a musical instrument used in the past. Results of the experiment showed that older participants had longer stop-signal reaction times, independently of their musical training and engagement, but musical training and ensemble practice were negatively related to the proportion of missed responses suggesting a weak effect of certain types of musical activities on inhibitory control.
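The abstract does not state how the stop-signal reaction time (SSRT) was estimated; one widely used estimator is the integration method, sketched below with hypothetical numbers purely for illustration.

```python
import numpy as np

def ssrt_integration(go_rts, stop_signal_delays, p_respond_given_stop):
    """Integration-method estimate of stop-signal reaction time (SSRT):
    take the go RT at the quantile equal to p(respond | stop signal),
    then subtract the mean stop-signal delay (SSD)."""
    go_rts = np.sort(np.asarray(go_rts, dtype=float))
    idx = int(np.ceil(p_respond_given_stop * len(go_rts))) - 1
    nth_rt = go_rts[max(idx, 0)]
    return nth_rt - float(np.mean(stop_signal_delays))

# Illustrative values only: simulated go RTs (ms), a few SSDs, p(respond|stop) = 0.45.
rng = np.random.default_rng(0)
ssrt = ssrt_integration(rng.normal(520, 80, 200), [200, 250, 300, 250], 0.45)
```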
... Indeed, Carey et al. (2015) found no differences between musicians (violinists and pianists) and non-musicians during a real-world auditory scene processing task. Individuals in this study had to detect when congruent and incongruent target objects appeared in an ongoing background scene or scenes (e.g., office and/or barn) at different signal-to-noise ratios (SNRs). ...
... As such, we found that the largest association with music training came from overall lower rates of change deafness. This is in contrast to previous work that found no relation between music training and scene processing (Carey et al., 2015). ...
Article
Full-text available
Our world is a sonically busy place and we use both acoustic information and experience-based knowledge to make sense of the sounds arriving at our ears. The knowledge we gain through experience has the potential to shape what sounds are prioritized in a complex scene. There are many examples of how visual expertise influences how we perceive objects in visual scenes, but few studies examine how auditory expertise is associated with attentional biases toward familiar real-world sounds in complex scenes. In the current study, we investigated whether musical expertise is associated with the ability to detect changes to real-world sounds in complex auditory scenes, and whether any such benefit is specific to musical instrument sounds. We also examined whether change detection is better for human-generated sounds in general or only communicative human sounds. We found that musicians had less change deafness overall. All listeners were better at detecting human communicative sounds compared to human non-communicative sounds, but this benefit was driven by speech sounds and sounds that were vocally generated. Musical listening skill, speech-in-noise, and executive function abilities were used to predict rates of change deafness. Auditory memory, musical training, fine-grained pitch processing, and an interaction between training and pitch processing accounted for 45.8% of the variance in change deafness. To better understand perceptual and cognitive expertise, it may be more important to measure various auditory skills and relate them to each other, as opposed to comparing experts to non-experts.
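To make the final regression concrete, the sketch below fits a linear model with main effects of auditory memory and musical training plus a training-by-pitch interaction, mirroring the predictors listed above; the variable names and synthetic data are hypothetical, and the cited study's exact model specification is not given in the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data with hypothetical variable names, for illustration only.
rng = np.random.default_rng(1)
n = 100
df = pd.DataFrame({
    "auditory_memory": rng.normal(size=n),
    "music_training_years": rng.uniform(0, 15, size=n),
    "pitch_threshold": rng.lognormal(size=n),  # finer pitch processing = lower threshold
})
df["log_pitch_threshold"] = np.log(df["pitch_threshold"])
df["change_deafness"] = (
    -0.3 * df["auditory_memory"]
    - 0.02 * df["music_training_years"]
    + 0.2 * df["log_pitch_threshold"]
    + rng.normal(scale=0.5, size=n)
)

# Main effects plus a training-by-pitch interaction ('*' expands to both).
model = smf.ols(
    "change_deafness ~ auditory_memory + music_training_years * log_pitch_threshold",
    data=df,
).fit()
print(model.rsquared)  # proportion of variance in change deafness explained (R^2)
```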
... This musician advantage may be due to better processing of acoustic features, such as pitch or fundamental frequency (F0), which is a primary dimension in music (Micheyl et al., 2006), or better stream segregation (Zendel and Alain, 2013). Alternatively, or perhaps in addition, the advantage could also be due to enhanced auditory cognitive abilities, such as better attention or extended working memory capacity, at least in the auditory modality (Strait et al., 2010; Carey et al., 2015). ...
... Overall, perhaps the strong speech-on-speech perception advantage observed with musicians is not a direct result of better pitch perception, but instead more associated with other factors related to auditory perception, such as better stream segregation, better rhythm perception, or even better auditory cognitive abilities (Zendel and Alain, 2013; Miendlarzewska and Trost, 2014; Carey et al., 2015). ...
Article
Evidence for transfer of musical training to better perception of speech in noise has been mixed. Unlike speech-in-noise, speech-on-speech perception utilizes many of the skills that musical training improves, such as better pitch perception and stream segregation, as well as use of higher-level auditory cognitive functions, such as attention. Indeed, although a few non-musicians performed as well as musicians, at the group level there was a strong musician benefit for speech perception in a speech masker. This benefit does not seem to result from better voice processing and could instead be related to better stream segregation or enhanced cognitive functions.
... instrument as a motoric skill, such musical training also results in specific changes in auditory perception and attention (Carey et al. 2015). Short-term training studies also demonstrate that the neural regions recruited during auditory perception change when people have motor experience rehearsing the music they listen to (D'Ausilio et al. 2006; Lahav et al. 2007). ...
... Activity in brain regions such as primary motor cortex, inferior frontal gyrus and the inferior parietal cortex is influenced by attention during listening (Dhanjal et al. 2008a; Wild, Davis, Yusuf, et al. 2012; Möttönen et al. 2014). Classical musicianship is often associated with domain-general increases in attention (Strait et al. 2010; Carey et al. 2015) and top-down executive control (Koelsch et al. 1999). It is thus possible that long-term improvements in domain-general perception and attention are responsible for the recruitment of sensorimotor regions during listening in classical musicians (but see Baumann et al. (2008)). ...
Article
Full-text available
Studies of classical musicians have demonstrated that expertise modulates neural responses during auditory perception. However, it remains unclear whether such expertise-dependent plasticity is modulated by the instrument that a musician plays. To examine whether the recruitment of sensorimotor regions during music perception is modulated by instrument-specific experience, we studied nonclassical musicians: beatboxers, who predominantly use their vocal apparatus to produce sound, and guitarists, who use their hands. We contrast fMRI activity in 20 beatboxers, 20 guitarists, and 20 nonmusicians as they listen to novel beatboxing and guitar pieces. All musicians show enhanced activity in sensorimotor regions (IFG, IPC, and SMA), but only when listening to the musical instrument they can play. Using independent component analysis, we find expertise-selective enhancement in sensorimotor networks, which are distinct from changes in attentional networks. These findings suggest that long-term sensorimotor experience facilitates access to the posterodorsal "how" pathway during auditory processing.
... Studies investigating the potential factors contributing to inhibitory control advantages in musicians have not yielded clear answers: Carey et al. (2015) compared violinists, pianists, and nonmusicians on a range of perceptual and cognitive tasks and found no instrument-based differences on the cognitive measures (Carey et al., 2015). In a study with adult musicians, researchers assessed performance on musical discrimination tasks and inhibitory control but found no relationship between them (Slevc, Davey, Buschkuehl, & Jaeggi, 2016). ...
Article
Full-text available
Musical rhythm engages motor and reward circuitry that is important for cognitive control, and there is evidence for enhanced inhibitory control in musicians. We recently revealed an inhibitory control advantage in percussionists compared with vocalists, highlighting the potential importance of rhythmic expertise in mediating this advantage. Previous research has shown that better inhibitory control is associated with less variable performance in simple sensorimotor synchronization tasks; however, this relationship has not been examined through the lens of rhythmic expertise. We hypothesize that the development of rhythm skills strengthens inhibitory control in two ways: by fine-tuning motor networks through the precise coordination of movements "in time" and by activating reward-based mechanisms, such as predictive processing and conflict monitoring, which are involved in tracking temporal structure in music. Here, we assess adult percussionists and nonpercussionists on inhibitory control, selective attention, basic drumming skills (self-paced, paced, and continuation drumming), and cortical evoked responses to an auditory stimulus presented on versus off the beat of music. Consistent with our hypotheses, we find that better inhibitory control is correlated with more consistent drumming and enhanced neural tracking of the musical beat. Drumming variability and the neural index of beat alignment each contribute unique predictive power to a regression model, explaining 57% of variance in inhibitory control. These outcomes present the first evidence that enhanced inhibitory control in musicians may be mediated by rhythmic expertise and provide a foundation for future research investigating the potential for rhythm-based training to strengthen cognitive function.
... There is also evidence of better inhibitory control in musicians across age groups: in children (Degé et al., 2011; Moreno et al., 2011), young adults (Bialystok & Depape, 2009; Hurwitz, Wolff, Bortnick, & Kokas, 1975; Strait et al., 2010; Travis, Harung, & Lagrosen, 2011), and older adults (Seinfeld, Figueroa, Ortiz-Gil, & Sanchez-Vives, 2013). However, it is not consistently replicated across tasks or studies (Carey et al., 2015; Slevc, Davey, Buschkuehl, & Jaeggi, 2016; Zuk et al., 2014). ...
... The closer the match between the content and context of a skill domain and a tested ability, the greater the effect observed. The role of similarity in determining cognitive benefits has been supported by evidence on cognition-perception couplings in music, linking aspects of the trained and transfer domains (Carey et al., 2015; Parbery-Clark et al., 2009). ...
Article
Full-text available
The current study investigated whether long-term experience in music or a second language is associated with enhanced cognitive functioning. Early studies suggested the possibility of a cognitive advantage from musical training and bilingualism, but these findings have not been replicated in recent work. Further, each form of expertise has been independently investigated, leaving it unclear whether any benefits are specifically caused by each skill or are a result of skill learning in general. To assess whether cognitive benefits from training exist, and how unique they are to each training domain, the current study compared musicians and bilinguals to each other, as well as to individuals who had expertise in both skills or neither. Young adults (n = 153) were categorized into one of four groups: monolingual musician; bilingual musician; bilingual non-musician; and monolingual non-musician. Multiple tasks per cognitive ability were used to examine the coherency of any training effects. Results revealed that musically trained individuals, but not bilinguals, had enhanced working memory. Neither skill was associated with enhanced inhibitory control. The findings confirm previous associations between musicians and improved cognition and extend existing evidence to show that benefits are narrower than expected but can be uniquely attributed to music compared to another specialized auditory skill domain. The null bilingual effect despite a music effect in the same group of individuals challenges the proposition that young adults are at a performance ceiling and adds to increasing evidence on the lack of a bilingual advantage on cognition.
... Strait and Kraus (2014) also suggested that these subcortical enhancements signal increased cortical control of sensory processing. There could be still other top-down processes at work, such as disparities in verbal and working memory between musicians and nonmusicians (Bialystok & Depape, 2009; Brandler & Rammsayer, 2003; Carey et al., 2015; Chan, Ho, & Cheung, 1998; Clayton et al., 2016). Such subcortical or top-down differences may help explain the small musician advantages seen in the conditions dominated by energetic masking, although they were not addressed in the current study design. ...
Article
Full-text available
Recent research has suggested that musicians have an advantage in some speech-in-noise paradigms, but not all. Whether musicians outperform nonmusicians on a given speech-in-noise task may well depend on the type of noise involved. To date, few groups have specifically studied the role that informational masking plays in the observation of a musician advantage. The current study investigated the effect of musicianship on listeners’ ability to overcome informational versus energetic masking of speech. Monosyllabic words were presented in four conditions that created similar energetic masking but either high or low informational masking. Two of these conditions used noise-vocoded target and masking stimuli to determine whether the absence of natural fine structure and spectral variations influenced any musician advantage. Forty young normal-hearing listeners (20 musicians and 20 nonmusicians) completed the study. There was a significant overall effect of participant group collapsing across the four conditions; however, planned comparisons showed musicians’ thresholds were only significantly better in the high informational masking natural speech condition, where the musician advantage was approximately 3 dB. These results add to the mounting evidence that informational masking plays a role in the presence and amount of musician benefit.
... Though the analysis of mean ratings yielded a main effect of musical training, with the exception of valence ratings when using averaged rating values, the amount of variance explained by musical background was superseded by the amount of variance explained by random effects on Participant ID for arousal and valence ratings, indicating that though groups can be formed, individual strategies are more important to explain these ratings. Though a large body of literature supports the existence of certain differences between musicians and nonmusicians (Brattico, Näätänen, & Tervaniemi, 2001; Carey et al., 2015; Fujioka, Trainor, Ross, Kakigi, & Pantev, 2004; Granot & Donchin, 2002), similar research by Dean et al. (2014a; Dean, Bailes, & Dunsmuir, 2014b) has also found that though there were differences between groups, individual differences explain more variance than musical background when rating the arousal and valence of electroacoustic and piano music. However, musical background did hold an important predictive power for expectancy ratings, as musicians gave slightly higher ratings overall, showing greater unexpectedness. ...
Article
Full-text available
Pitch and timing information work hand in hand to create a coherent piece of music; but what happens when this information goes against the norm? Relationships between musical expectancy and emotional responses were investigated in a study conducted with 40 participants: 20 musicians and 20 non-musicians. Participants took part in one of two behavioural paradigms measuring continuous expectancy or emotional responses (arousal and valence) while listening to folk melodies that exhibited either high or low pitch predictability and high or low onset predictability. The causal influence of pitch predictability was investigated in an additional condition where pitch was artificially manipulated and a comparison conducted between original and manipulated forms; the dynamic correlative influence of pitch and timing information and its perception on emotional change during listening was evaluated using cross-sectional time series analysis. The results indicate that pitch and onset predictability are consistent predictors of perceived expectancy and emotional response, with onset carrying more weight than pitch. In addition, musicians and non-musicians do not differ in their responses, possibly due to shared cultural background and knowledge. The results demonstrate in a controlled lab-based setting a precise, quantitative relationship between the predictability of musical structure, expectation and emotional response.
... Another considerable factor is participants' musical experience, which may affect their interest in musical or acoustic stimuli while viewing scenes. There is evidence that musicians perceive auditory differences more finely than non-musicians and that they are slightly better at sustained auditory attention than non-musicians (e.g., Carey, Rosen, Krishnan, Pearce, Shepherd, Aydelott, et al., 2015). Musicians are also better than non-musicians at pre-attentively extracting information out of musically relevant stimuli (Koelsch, Schröger, & Tervaniemi, 1999). ...
Article
Full-text available
To date, there is insufficient knowledge of how visual exploration of outdoor scenes may be influenced by the simultaneous processing of music. Eye movements during viewing various outdoor scenes while listening to music at either a slow or fast tempo or in silence were measured. Significantly shorter fixations were found for viewing urban scenes compared with natural scenes, but there was no interaction between the type of scene and the acoustic conditions. The results also revealed fixation durations in the silent control condition that were shorter, by roughly 30 ms, than in both music conditions, but, in contrast to previous studies, these differences were non-significant. Moreover, we did not find differences in eye movements between music conditions with a slow or fast tempo. It is supposed that the type of musical stimuli, the specific tempo, the specific experimental procedure, and the engagement of participants in listening to background music while processing visual information may be important factors that influence attentional processes, which are manifested in eye-movement behavior.
... Enhanced SIN perception observed in musicians could be due to better higher-level functions (Boebinger et al., 2015) such as auditory attention or auditory working memory capacity (Strait et al., 2010; Carey et al., 2015; Chan et al., 1998; Ho et al., 2003; Brandler and Rammsayer, 2003), which might act to fine-tune or separate incoming auditory information and help match it to knowledge, and are known to be enhanced by musical training. The links between musical training and cognitive abilities are complex (Schellenberg and Peretz, 2008; see Moreno and Bidelman, 2014 for a recent review). ...
Article
The ability to understand speech in the presence of competing sound sources is an important neuroscience question in terms of how the nervous system solves this computational problem. It is also a critical clinical problem that disproportionally affects the elderly, children with language-related learning disorders, and those with hearing loss. Recent evidence that musicians have an advantage on this multifaceted skill has led to the suggestion that musical training might be used to improve or delay the decline of speech-in-noise (SIN) function. However, enhancements have not been universally reported, nor have the relative contributions of different bottom-up versus top-down processes, and their relation to preexisting factors, been disentangled. This information would be helpful to establish whether there is a real effect of experience, what exactly its nature is, and how future training-based interventions might target the most relevant components of cognitive processes. These questions are complicated by important differences in study design and uneven coverage of neuroimaging modality. In this review, we aim to systematize recent results from studies that have specifically looked at musician-related differences in SIN by their study design properties, to summarize the findings, and to identify knowledge gaps for future work.
... Behavioral results are consistent with these physiological findings, with superior frequency-discrimination abilities reliably reported in musicians compared to non-musicians [17,18]. However, whether such auditory perceptual enhancements generalize beyond music-related tasks, such as frequency discrimination, is unclear [19]. For instance, improvements in speech perception in noise or with interfering talkers have been reported by some groups [20][21][22] but not by others [23][24][25][26]. ...
Article
Full-text available
Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.
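For readers unfamiliar with how steady-state responses such as the SSVEP or ASSR are quantified, the minimal sketch below estimates the spectral amplitude of an EEG epoch at its stimulation frequency via an FFT. This is a generic illustration in Python, not the study's analysis pipeline; the sampling rate, epoch length, and 40 Hz target frequency are assumptions chosen for the example.

```python
import numpy as np

def steady_state_amplitude(epoch, fs, target_hz):
    """Amplitude of the EEG spectrum at the stimulation frequency.

    epoch: 1-D array of samples (e.g., a single channel averaged over trials)
    fs: sampling rate in Hz
    target_hz: flicker or amplitude-modulation frequency of interest
    """
    n = len(epoch)
    spectrum = np.abs(np.fft.rfft(epoch * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # pick the FFT bin closest to the stimulation frequency
    return spectrum[np.argmin(np.abs(freqs - target_hz))]

# Illustrative use: a 2-s synthetic epoch containing a 40 Hz response in noise.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
epoch = 0.5 * np.sin(2 * np.pi * 40 * t) + np.random.randn(t.size)
print(steady_state_amplitude(epoch, fs, 40.0))
```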
... Further segmentation studies should overcome methodological issues concerning the validity of the participant sample by including an established questionnaire, such as the Goldsmith's Musical Sophistication Index (Gold-MSI, see Müllensiefen, Gingras, Musil, & Stewart, 2014), which has been recently used for assessments in musicianship studies (Carey et al., 2015;Schaal, Banissy, & Lange, 2015). This can be helpful not only for comparing research findings but also for improving recruitment and classification: Gold-MSI takes into account that training may not determine musical abilities such as perception of form (Bigand & Poulin-Charronnat, 2006;Lalitte & Bigand, 2006), and also that some musical skills do not result from formal music training (Müllensiefen et al., 2014). ...
Article
While listening to music, people often unwittingly break down musical pieces into constituent chunks such as verses and choruses. Music segmentation studies have suggested that some consensus regarding boundary perception exists, despite individual differences. However, neither the effects of experimental task (i.e., real-time vs. annotated segmentation), nor of musicianship on boundary perception are clear. Our study assesses musicianship effects and differences between segmentation tasks. We conducted a real-time experiment to collect segmentations by musicians and nonmusicians from nine musical pieces. In a second experiment on non-real-time segmentation, musicians indicated boundaries and their strength for six examples. Kernel density estimation was used to develop multi-scale segmentation models. Contrary to previous research, no relationship was found between boundary strength and boundary indication density, although this might be contingent on stimuli and other factors. In line with other studies, no musicianship effects were found: our results showed high agreement between groups and similar inter-subject correlations. Also consistent with previous work, time scales between one and two seconds were optimal for combining boundary indications. In addition, we found effects of task on number of indications, and a time lag between tasks dependent on beat length. Also, the optimal time scale for combining responses increased when the pulse clarity or event density decreased. Implications for future segmentation studies are raised concerning the selection of time scales for modelling boundary density, and time alignment between models.
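As an illustration of the kernel density estimation step described above, the sketch below pools hypothetical boundary indications and smooths them with a Gaussian kernel whose 1.5 s bandwidth falls within the one-to-two-second range the study reports as optimal. The indication times and grid are invented for the example; this is not the authors' modelling code.

```python
import numpy as np

def boundary_density(indication_times, grid, bandwidth_s=1.5):
    """Smooth pooled boundary indications (in seconds) into a density profile."""
    t = np.asarray(indication_times, dtype=float)[:, None]   # (n_indications, 1)
    g = np.asarray(grid, dtype=float)[None, :]               # (1, n_grid_points)
    kernels = np.exp(-0.5 * ((g - t) / bandwidth_s) ** 2)
    return kernels.sum(axis=0) / (len(indication_times) * bandwidth_s * np.sqrt(2 * np.pi))

# Hypothetical boundary indications (s) pooled across listeners for one piece.
indications = [12.1, 12.4, 12.6, 30.2, 30.9, 31.0, 31.3, 55.8]
grid = np.linspace(0, 60, 601)
density = boundary_density(indications, grid)
print(grid[np.argmax(density)])   # time of the strongest boundary peak
```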
... Accordingly, in an fMRI study Janata et al. (2002) have shown that attentive listening to music recruits neural circuits serving general functions, such as attention, WM, and semantic processing. Studies providing strong evidence for shared higher-order executive functions have reported enhanced attention and WM associated with musical expertise, suggesting that this strengthening of executive functions is responsible for the speech processing benefits (Bialystok and Depape, 2009; Carey et al., 2015; George and Coch, 2011; Parbery-Clark et al., 2012; Patel, 2014; Rigoulot et al., 2015; Smayda et al., 2015; Strait et al., 2010; Strait and Parbery-Clark, 2013). In the next Subsection 3.4 we will discuss the effects of musical training on melodic and speech pitch processing, considering the potential role of general functions. ...
Article
Full-text available
Current research on music processing and syntax or semantics in language suggests that music and language share partially overlapping neural resources. Pitch also constitutes a common denominator, forming melody in music and prosody in language. Further, pitch perception is modulated by musical training. The present study investigated how music and language interact on the pitch dimension and whether musical training plays a role in this interaction. For this purpose, we used melodies ending on an expected or unexpected note (melodic expectancy being estimated by a computational model) paired with prosodic utterances which were either expected (statements with falling pitch) or relatively unexpected (questions with rising pitch). Participants’ (22 musicians, 20 nonmusicians) ERPs and behavioural responses in a statement/question discrimination task were recorded. Participants were faster for simultaneous expectancy violations in the melodic and linguistic stimuli. Further, musicians performed better than nonmusicians, which may be related to their increased pitch tracking ability. At the neural level, prosodic violations elicited a front-central positive ERP around 150 ms after the onset of the last word/note, while musicians showed a reduced P600 in response to strong incongruities (questions on low-probability notes). Critically, musicians’ P800 amplitudes were proportional to their level of musical training, suggesting that expertise might shape the pitch processing of language. The beneficial aspect of expertise could be attributed to its strengthening effect on general executive functions. These findings offer novel contributions to our understanding of shared higher-order mechanisms between music and language processing on the pitch dimension, and further demonstrate a potential modulation by musical expertise.
... Relatedly, while we did assess multiple aspects of musical ability and experience, these are likely only a few of the skills that make up musical ability. In particular, our assessment of musical experience did not assess different types of musical experience, which may show differential relationships to cognitive abilities (Beaty, Smeekens, Silva, Hodges, & Kane, 2013;Carey et al., 2015;Merrett, Peretz, & Wilson, 2013). In addition, our processing tasks measured only melodic and rhythmic discrimination and did not assess musical production, sensitivity to timbre, or any number of other aspects of musical ability. ...
Article
A growing body of research suggests that musical experience and ability are related to a variety of cognitive abilities, including executive functioning (EF). However, it is not yet clear if these relationships are limited to specific components of EF, limited to auditory tasks, or reflect very general cognitive advantages. This study investigated the existence and generality of the relationship between musical ability and EFs by evaluating the musical experience and ability of a large group of participants and investigating whether this predicts individual differences on three different components of EF - inhibition, updating, and switching - in both auditory and visual modalities. Musical ability predicted better performance on both auditory and visual updating tasks, even when controlling for a variety of potential confounds (age, handedness, bilingualism, and socio-economic status). However, musical ability was not clearly related to inhibitory control and was unrelated to switching performance. These data thus show that cognitive advantages associated with musical ability are not limited to auditory processes, but are limited to specific aspects of EF. This supports a process-specific (but modality-general) relationship between musical ability and non-musical aspects of cognition.
... Although many studies have found differences in synchronization abilities between musicians and non-musicians, two did not find effects related to musical training (e.g., Essens and Povel, 1985;Hove et al., 2010). Other studies did not find differences between musician groups (van Vugt and Tillmann, 2014), or only saw differences in certain contexts (Krause et al., 2010;Carey et al., 2015;Manning and Schutz, 2015). In the study by Krause et al. (2010), drummers had more years of experience than those in the current study (∼15 years vs. ∼12 years in the current study) although they still had less experience than the other musician groups to whom they were compared. ...
Article
Full-text available
Studies comparing musicians and non-musicians have shown that musical training can improve rhythmic perception and production. These findings tell us that training can result in rhythm processing advantages, but they do not tell us whether practicing a particular instrument could lead to specific effects on rhythm perception or production. The current study used a battery of four rhythm perception and production tasks that were designed to test both higher- and lower-level aspects of rhythm processing. Four groups of musicians (drummers, singers, pianists, string players) and a control group of non-musicians were tested. Within-task differences in performance showed that factors such as meter, metrical complexity, tempo, and beat phase significantly affected the ability to perceive and synchronize taps to a rhythm or beat. Musicians showed better performance on all rhythm tasks compared to non-musicians. Interestingly, our results revealed no significant differences between musician groups for the vast majority of task measures. This was despite the fact that all musicians were selected to have the majority of their training on the target instrument, had on average more than 10 years of experience on their instrument, and were currently practicing. These results suggest that general musical experience is more important than specialized musical experience with regards to perception and production of rhythms.
... This speculation is in line with evidence of gray matter differences but no cognitive differences found in a similar age group (Bailey, Zatorre, & Penhune, 2014). Carey et al. (2015) reached a similar conclusion after finding only small differences in sustained auditory attention -with no differences in other non-musical skills -between adult musicians and non-musicians, speculating that music training may lead to early advantages in cognition that are later washed out by development and formal education. ...
Article
While many aspects of cognition have been investigated in relation to skilled music training, surprisingly little work has examined the connection between music training and attentional abilities. The present study investigated the performance of skilled musicians on cognitively demanding sustained attention tasks, measuring both temporal and visual discrimination over a prolonged duration. Participants with extensive formal music training were found to have superior performance on a temporal discrimination task, but not a visual discrimination task, compared to participants with no music training. In addition, no differences were found between groups in vigilance decrement in either type of task. Although no differences were evident in vigilance per se, the results indicate that performance in an attention-demanding temporal discrimination task was superior in individuals with extensive music training. We speculate that this basic cognitive ability may contribute to advantages that musicians show in other cognitive measures.
... It is important to note that further empirical work is needed to determine the extent to which our results generalize to other listening situations. For example, Carey et al. 44 found that musicians and non-musicians performed similarly in an "environmental auditory scene analysis task" which required nonlinguistic sounds (such as animal sounds) to be detected in natural-sounding auditory scenes (such as farmyards). Thus there is a strong need to test variations of the current paradigm, e.g., varying the speech corpora/materials, number and gender of speakers, degree of spatial location, and manner of manipulating IM. ...
Article
Full-text available
Are musicians better able to understand speech in noise than non-musicians? Recent findings have produced contradictory results. Here we addressed this question by asking musicians and non-musicians to understand target sentences masked by other sentences presented from different spatial locations, the classical 'cocktail party problem' in speech science. We found that musicians obtained a substantial benefit in this situation, with thresholds ~6 dB better than non-musicians. Large individual differences in performance were noted particularly for the non-musically trained group. Furthermore, in different conditions we manipulated the spatial location and intelligibility of the masking sentences, thus changing the amount of 'informational masking' (IM) while keeping the amount of 'energetic masking' (EM) relatively constant. When the maskers were unintelligible and spatially separated from the target (low in IM), musicians and non-musicians performed comparably. These results suggest that the characteristics of speech maskers and the amount of IM can influence the magnitude of the differences found between musicians and non-musicians in multiple-talker "cocktail party" environments. Furthermore, considering the task in terms of the EM-IM distinction provides a conceptual framework for future behavioral and neuroscientific studies which explore the underlying sensory and cognitive mechanisms contributing to enhanced "speech-in-noise" perception by musicians.
... In order to perform experiments in an equally challenging ASA environment as for non-musicians, task design could be adjusted to include melodies synthesized by the same instrument which incorporate incorrect and/or incomplete cues in only one stream for the Aggregate condition, mis-tuned notes within triplets, or incomplete triplet patterns consisting of two or three triplet notes. Further differences may exist within musicians based on their specific training, with soloists possibly showing more difficulty with source segregation than conductors or orchestra members who are constantly separating their own instruments in the presence of multiple competing music streams, and a more general enhanced perceptual segregation of their main performing instrument (for example, Pantev et al., 2001;Carey et al., 2015). Further understanding of both subject's locus of attention and polyphonic listening behavior could be achieved by employing trials which contain a non-matching instruction and response, for example an instruction indicating attention to the aggregate and a response whether or not triplets were present in the bassoon. ...
Article
Full-text available
Polyphonic music listening well exemplifies processes typically involved in daily auditory scene analysis situations, relying on an interactive interplay between bottom-up and top-down processes. Most studies investigating scene analysis have used elementary auditory scenes; however, real-world scene analysis is far more complex. In particular, music, contrary to most other natural auditory scenes, can be perceived by either integrating or, under attentive control, segregating sound streams, often carried by different instruments. One of the prominent bottom-up cues contributing to multi-instrument music perception is the timbre difference between instruments. In this work, we introduce and validate a novel paradigm designed to investigate, within naturalistic musical auditory scenes, attentive modulation as well as its interaction with bottom-up processes. Two psychophysical experiments are described, employing custom-composed two-voice polyphonic music pieces within a framework implementing a behavioral performance metric to validate listener instructions requiring either integration or segregation of scene elements. In Experiment 1, the listeners' locus of attention was switched between individual instruments or the aggregate (i.e., both instruments together), via a task requiring the detection of temporal modulations (i.e., triplets) incorporated within or across instruments. Subjects responded post-stimulus whether triplets were present in the to-be-attended instrument(s). Experiment 2 introduced the bottom-up manipulation by adding a three-level morphing of instrument timbre distance to the attentional framework. The task was designed to be used within neuroimaging paradigms; Experiment 2 was additionally validated behaviorally in the functional Magnetic Resonance Imaging (fMRI) environment. Experiment 1 subjects (N = 29, non-musicians) completed the task at high levels of accuracy, showing no group differences between any experimental conditions. Nineteen listeners also participated in Experiment 2, showing a main effect of instrument timbre distance, even though within attention-condition timbre-distance contrasts did not demonstrate any timbre effect. Correlation of overall scores with morph-distance effects, computed by subtracting the largest from the smallest timbre distance scores, showed an influence of general task difficulty on the timbre distance effect. Comparison of laboratory and fMRI data showed scanner noise had no adverse effect on task performance. These experimental paradigms enable the study of both bottom-up and top-down contributions to auditory stream segregation and integration within psychophysical and neuroimaging experiments.
... The next factor may be participants' musical experience, which may also affect their interest in musical or, more generally, acoustic stimuli while viewing the scenes. There is evidence that musicians perceive auditory signal differences more finely than non-musicians and that they are slightly better at sustained auditory attention than non-musicians (e.g., Carey et al., 2015). Musicians are also better than non-musicians at pre-attentively extracting information out of musically relevant stimuli (Koelsch, Schröger, & Tervaniemi, 1999). ...
Conference Paper
Full-text available
Previous studies have observed longer fixations and fewer saccades while viewing various outdoor scenes and listening to music compared to a no-music condition. There is also evidence that musical tempo can modulate the speed of eye movements. However, recent investigations from environmental psychology demonstrated differences in eye movement behavior while viewing natural and urban outdoor scenes. The first goal of this study was to replicate the observed effect of music listening while viewing outdoor scenes with different musical stimuli. Next, the effect of a fast and a slow musical tempo on eye movement speed was investigated. Finally, the effect of the type of outdoor scene (natural vs. urban scenes) was explored. The results revealed shorter fixation durations in the no-music condition compared to both music conditions, but these differences were non-significant. Moreover, we did not find differences in eye movements between music conditions with fast and slow tempo. Although significantly shorter fixations were found for viewing urban scenes compared with natural scenes, we did not find a significant interaction between the type of scene and music conditions.
... Similarly, extensive exposure to sensory objects that often induce aesthetic enjoyment, including wine, music, or visual art, appears to change the way these objects are perceived and cognitively represented. For example, long-time musicians can detect tiny auditory differences better than non-musicians (Carey et al., 2015). It is possible that such differences in perceptual and cognitive ability affect the way the brain assigns aesthetic liking and disliking to objects that fall within the specific domain of expertise. ...
Preprint
Full-text available
This is a preprint of a chapter that will appear in A. Chatterjee & E. Cardillo (eds.), Brain Beauty, and Art: Essays Bringing Neuroaesthetics into Focus. Oxford University Press. Please do not cite this version.
... By contrast, other studies have not shown such advantages for musicians as compared to non-musicians in selective attention (Clayton, Swaminathan, Yazdanbakhsh, Zuk, Patel, & Kidd, 2016; Roden, Könen, Bongard, Frankenberg, Friedrich, & Kreutz, 2014), vigilance (Carey et al., 2015; Roden et al., 2014; Wang, Ossher, & Reuter-Lorenz, 2015), or executive control (Clayton et al., 2016; Yeşil & Ünal, 2017; D'Souza, Moradzadeh, & Wiseheart, 2018). Furthermore, Lim and Sinnett (2011; see Experiment 1) showed no differences between musicians and non-musicians in either exogenous or endogenous attentional orienting. ...
Article
Full-text available
Previous literature has shown cognitive improvements related to musical training. Attention is one cognitive aspect in which musicians exhibit improvements compared to non-musicians. However, previous studies show inconsistent results regarding certain attentional processes, suggesting that benefits associated with musical training appear only in some processes. The present study aimed to investigate attentional and vigilance abilities in expert musicians with a fine-grained measure: the ANTI-Vea (ANT for Interactions and Vigilance—executive and arousal components; Luna et al. in J Neurosci Methods 306:77–87, https://doi.org/10.1016/j.jneumeth.2018.05.011, 2018). This task allows measuring the functioning of Posner and Petersen's three networks (alerting, orienting, and executive control) along with two different components of vigilance (executive and arousal vigilance). Using propensity-score matching, 49 adult musicians (18-35 years old) were matched on an extensive set of confounding variables with a control group of 49 non-musicians. Musicians showed advantages in processing speed and in the two components of vigilance, with some specific aspects of musicianship such as years of practice or years of lessons correlating with these measures. Although these results should be taken with caution, given the correlational nature of the study, one possible explanation is that musical training can specifically enhance some aspects of attention. Nevertheless, our correlational design does not allow us to rule out other possibilities such as the presence of cognitive differences prior to the onset of training. Moreover, the advantages were observed in an extra-musical context, which suggests that musical training could transfer its benefits to cognitive processes loosely related to musical skills. The absence of effects in executive control, frequently reported in previous literature, is discussed based on our extensive control of confounds.
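The propensity-score matching mentioned above can be sketched generically as a logistic-regression propensity model followed by 1:1 nearest-neighbour matching on the logit of the score. The code below is one common recipe (using scikit-learn and NumPy) with hypothetical confound data; it is not the authors' actual matching procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def match_groups(X, is_musician):
    """1:1 greedy nearest-neighbour matching on the logit of the propensity score.

    X: (n, k) array of confounds (hypothetical: age, education, SES, ...)
    is_musician: boolean array marking group membership
    Returns a list of (musician_index, control_index) pairs.
    """
    ps = LogisticRegression(max_iter=1000).fit(X, is_musician).predict_proba(X)[:, 1]
    logit = np.log(ps / (1 - ps))
    musicians = np.where(is_musician)[0]
    controls = list(np.where(~is_musician)[0])
    pairs = []
    for m in musicians:
        j = int(np.argmin(np.abs(logit[controls] - logit[m])))
        pairs.append((m, controls.pop(j)))        # match without replacement
    return pairs

# Illustrative data: 49 musicians and a larger pool of potential controls.
rng = np.random.default_rng(0)
X = rng.normal(size=(180, 3))
group = np.arange(180) < 49
print(len(match_groups(X, group)))                # 49 matched pairs
```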
... The sample size was based on a power analysis, conducted in G*Power 3.1. Regarding behavioral interactions between age and cognition, assuming an effect size of Cohen's f = 0.65 [derived from Carey et al. (2015)], an alpha of 0.05, and three groups, we determined that a total sample size of at least 15 individuals per study would provide 95% power to detect the effects. ...
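For readers without G*Power, an analogous one-way ANOVA power computation can be run in Python with statsmodels; the sketch below assumes an omnibus F test with three groups, Cohen's f = 0.65, alpha = .05, and 95% power, and returns a total sample size. The exact figure depends on the test family selected in G*Power, so this is an approximation of the reported analysis, not a reproduction of it.

```python
from statsmodels.stats.power import FTestAnovaPower

# Total N for a one-way ANOVA: 3 groups, Cohen's f = 0.65, alpha = .05, power = .95.
n_total = FTestAnovaPower().solve_power(effect_size=0.65, alpha=0.05,
                                        power=0.95, k_groups=3)
print(round(n_total))   # divide by the number of groups for a per-group estimate
```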
Article
Full-text available
The effects of musical practice on cognition are well established yet rarely compared with other kinds of artistic training or expertise. This study aims to compare the possible effect of musical and theater regular practice on cognition across the lifespan. Both of these artistic activities require many hours of individual or collective training in order to reach an advanced level. This process requires the interaction between higher-order cognitive functions and several sensory modalities (auditory, verbal, visual and motor), as well as regular learning of new pieces. This study included participants with musical or theater practice, and healthy controls matched for age (18-84 years old) and education. The objective was to determine whether specific practice in these activities had an effect on cognition across the lifespan, and a protective influence against undesirable cognitive outcomes associated with aging. All participants underwent a battery of cognitive tasks that evaluated processing speed, executive function, fluency, working memory, verbal and visual long-term memories, and non-verbal reasoning abilities. Results showed that music and theater artistic practices were strongly associated with cognitive enhancements. Participants with musical practice were better in executive functioning, working memory and non-verbal reasoning, whereas participants with regular acting practice had better long-term verbal memory and fluency performance. Thus, taken together, results suggest a differential effect of these artistic practices on cognition across the lifespan. Advanced age did not seem to reduce the benefit, so future studies should focus on the hypothetical protective effects of artistic practice against cognitive decline.
... We do not share romanticising images of the semi-psychotic and unrealistic musician who creates his own world of fantasy and illusion, but we point out that dealing with music can have a strong impact on the psyche, for instance concerning the bridge between the unconscious and conscious world (Skar 2002) and altered modes of perception and cognition (Carey et al. 2015). Additionally, professional musicianship influences the brain (Barrett et al. 2013), its functions and its plasticity (Schlaug 2015). ...
... Importantly, the boundary could be considered as a plastic divider between different levels of the information processing system. For example, boundaries could be shaped by environmental stimuli, vary from person to person, and even change after experience or learning within an individual (Carey et al., 2015; Witzel et al., 2017). Furthermore, a boundary could vanish or appear depending on the viewer's expectation or mental state (Bruce & Davies, 2014; Lupyan & Ward, 2013). ...
Article
Full-text available
Stimulus characteristics have a decisive role in our perception and cognition. In the present study, we aimed to evaluate the effects of stimulus dimensionality, two-dimensional (2D) versus three-dimensional (3D), on perception and working memory. In the first experiment, using eye tracking, a higher blink rate, larger pupil size, and a greater number of saccades for 3D compared to 2D stimuli revealed a higher perceptual demand of 3D stimuli. In the second experiment, a visual search task showed longer response times for 3D stimuli, and performance was equal for 2D and 3D stimuli in a spatial working memory task. In the third experiment, four working memory tasks with high and low cognitive and perceptual load revealed that 3D stimuli are memorized better under both low and high working memory load. We conclude that 3D stimuli, compared to 2D stimuli, impose a higher load on the perceptual system but are memorized better. It could be concluded that the phenomenon of filtering should occur early in the perceptual system to prevent overload.
... Although the results do not provide adequate quantitative support for any training effects, it is plausible that the observed benefit on predictable task switching may be more than a statistical artifact, because there is meaningful overlap between the training content in music and dance and the tested ability of predictable task switching. Theories of transfer and evidence from skill learning studies show that improvements are contingent on an overlap between training paradigms and tested cognitive abilities (Barnett & Ceci, 2002;Carey et al., 2015;Diamond, 2012;Green & Bavelier, 2008;Lövdén et al., 2010;Slagter et al., 2011). ...
Article
Full-text available
Musical training is popularly believed to improve children’s cognitive ability. Early research evidence, mostly correlational, suggested that musicians outperform nonmusicians on many cognitive abilities. However, recent experimental evidence has failed to replicate most benefits, leaving it unclear whether previously demonstrated effects were a direct result of learning music. Although a few studies have shown some change with as little as a few weeks of training, the larger training literature shows that transfer of skills between unrelated areas is extremely rare, especially in properly controlled studies. The current study used an experimental design to assess the cause (whether music uniquely produces change) and the effect (which cognitive abilities are impacted) of the link between music and cognition. Six- to 9-year-old children (n = 75) with no prior training were randomly allocated to 3 weeks of music or dance training. Cognitive performance before and after training was compared between the trained groups (because both training forms share features of training) and a nontrained control group (to isolate training-induced change from normal maturation). No changes were found on any measured ability (inhibitory control, working memory, task switching, processing speed, receptive vocabulary, and nonverbal intelligence). Findings confirm evidence from the general training literature that training-induced improvements on cognitive performance are unlikely. Short-term training effects have a much narrower scope than previous evidence suggests.
... Better stream segregation in musicians may also be a reason [80]. On the other hand, this superiority can be due to better cognitive capabilities such as better attention or better working memory in auditory tasks [81]. Optimization of the auditory system results in better acoustic resolution in responding to different components of music, including pitch, duration, rhythm, timbre, and melody and different aspects of speech processing such as active [82] and passive [83] discrimination. ...
Article
Full-text available
Background and Aim: Researchers in the fields of psychoacoustics and electrophysiology have mostly focused on demonstrating the superior and distinct neurophysiological performance of musicians. The present study explores the impact of music upon the auditory system and the non-auditory system, as well as the improvement of language and cognitive skills following listening to music or receiving music training. Recent Findings: Studies indicate the impact of music upon auditory processing from the cochlea to the secondary auditory cortex and other parts of the brain. In addition, the impact of music on speech perception and other cognitive processing has been demonstrated. Some papers point to bottom-up and others to top-down processing, which is explained in detail. Conclusion: Listening to music and receiving music training, in the long run, creates plasticity from the cochlea to the auditory cortex. Since the auditory pathway for musical sounds overlaps functionally with that for speech, music also supports better speech perception. Both perceptual and cognitive functions are involved in this process. Music engages a large area of the brain, so music can be used as a supplement in rehabilitation programs and can help improve speech and language skills.
... • However, previous works showed some inconsistent results for certain attentional processes (e.g., vigilance [Carey et al., 2015], executive control [Clayton et al., 2016]). ...
... Only a few studies have looked at cognitive skills as a function of type of instrument training, mostly looking at online tasks such as monitoring different kinds of deviations during music listening. For instance, Carey et al. (2015) compared classically trained violinists and pianists and found very few differences across a wide variety of tasks such as sequence reproduction. The violinists were more sensitive to tuning differences, as might be expected (Nager et al., 2003). ...
Article
Full-text available
Jazz musicians rely on different skills than do classical musicians for successful performances. We investigated the working memory span of classical and jazz student musicians on musical and nonmusical working memory tasks. College-aged musicians completed the Bucknell Auditory Imagery Scale, followed by verbal working memory tests and musical working memory tests that included visual and auditory presentation modes and written or played recall. Participants were asked to recall the last word (or pitch) from each task after a distraction task, by writing, speaking, or playing the pitch on the piano. Jazz musicians recalled more pitches that were presented in auditory versions and recalled on the piano compared with classical musicians. Scores were positively correlated to years of jazz-playing experience. We conclude that type of training should be considered in studies of musical expertise, and that tests of musicians’ cognitive skills should include domain-specific components.
... By contrast, other studies have not shown such advantages for musicians as compared to non-musicians in selective attention (Clayton et al., 2016; Roden et al., 2014), vigilance (Carey et al., 2015; Roden et al., 2014; Wang, Ossher, & Reuter-Lorenz, 2015), or executive control (Clayton et al., 2016; Yeşil & Ünal, 2017; D'Souza, Moradzadeh, & Wiseheart, 2018). Furthermore, Lim and Sinnett (2011; see Experiment 1) showed no differences between musicians and non-musicians in either exogenous or endogenous attentional orienting. ...
Preprint
Full-text available
Previous literature has shown cognitive improvements related to musical training. Attention is one of the functions in which musicians exhibit improvements compared to non-musicians. However, previous studies show inconsistent results regarding certain attentional processes, suggesting that benefits associated with musical training appear only in some processes. The aim of the present study was to investigate attentional and vigilance abilities in expert musicians with a fine-grained measure: the ANTI-Vea (ANT for Interactions and Vigilance − executive and arousal components; Luna, Marino, Roca, & Lupiáñez, 2018). This task allows measuring Posner and Petersen’s three networks (alerting, orienting, and executive control) along with two different components of vigilance (executive and arousal vigilance). Using propensity-score matching, 49 adult musicians (18-35 years old) were matched on an extensive set of confounds with a control group of 49 non-musicians. Musicians showed advantages in processing speed and in the two components of vigilance. Interestingly, improvements were related to characteristics of musical experience, in particular years of practice and years of lessons. One possible explanation for these results is that musical training can specifically enhance some aspects of attention, although our correlational design does not allow us to rule out other possibilities such as the presence of cognitive differences prior to the onset of training. Moreover, the advantages were observed in an extra-musical context, which suggests that musical training could transfer its benefits to cognitive processes loosely related to musical skills. The absence of effects in executive control, frequently reported in previous literature, is discussed based on our extensive control of confounds.
... ligence aspects of participants (Table 5). However, there might be a marginal difference (p = 0.066) among private hospitals in terms of the motivational dimension of participants' cultural intelligence scores. This could be interpreted as weak evidence for a difference among hospitals based on participants' motivational cultural intelligence scores (Carey et al., 2015). For this reason, the last hypothesis (H4) could not be rejected. ...
Article
Full-text available
Music may modify the impression of a visual environment. Most studies have explored the effect of music on the perception of various service settings, but the effect of music on the perception of outdoor environments has not yet been adequately explored. Music may make an environment more pleasant and enhance the relaxation effect of outdoor recreational activities. This study investigated the effect of music on the evaluation of urban built and urban natural environments. The participants (N = 94) were asked to evaluate five environments in terms of spatio-cognitive and emotional dimensions while listening to music. Two types of music were selected: music with a fast tempo and music with a slow tempo. In contrast with a previous study by Yamasaki, Yamada & Laukka (2015), our experiment revealed that there was only a slight and non-significant influence of music on the evaluation of the environment. The effect of music was mediated by the liking of music, but only in the dimensions of Pleasant and Mystery. The environmental features of the evaluated locations had a stronger effect than music on the evaluation of the environments. Environments with natural elements were perceived as more pleasant, interesting, coherent, and mysterious than urban built environments regardless of the music. It is suggested that the intensity of music may be an important factor in addition to the research methodology, individual variables, and cultural differences.
Article
According to several influential theoretical frameworks, phonological deficits in dyslexia result from reduced sensitivity to acoustic cues that are essential for the development of robust phonemic representations. Some accounts suggest that these deficits arise from impairments in rapid auditory adaptation processes that are either speech-specific or domain-general. Here, we examined the specificity of auditory adaptation deficits in dyslexia using a nonlinguistic tone anchoring (adaptation) task and a linguistic selective adaptation task in children and adults with and without dyslexia. Children and adults with dyslexia had elevated tone-frequency discrimination thresholds, but both groups benefited from anchoring to repeated stimuli to the same extent as typical readers. Additionally, although both dyslexia groups had overall reduced accuracy for speech sound identification, only the child group had reduced categorical perception for speech. Across both age groups, individuals with dyslexia had reduced perceptual adaptation to speech. These results highlight broad auditory perceptual deficits across development in individuals with dyslexia for both linguistic and nonlinguistic domains, but speech-specific adaptation deficits. Finally, mediation models in children and adults revealed that the causal pathways from basic perception and adaptation to phonological awareness through speech categorization were not significant. Thus, rather than having causal effects, perceptual deficits may co-occur with the phonological deficits in dyslexia across development.
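The mediation models referred to above test an indirect pathway (for example, basic perception → speech categorization → phonological awareness). A generic product-of-coefficients bootstrap for such an indirect effect is sketched below with simulated stand-in variables; it is not the authors' model or data.

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients a*b: x -> m (path a), m -> y controlling for x (path b)."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

rng = np.random.default_rng(2)
n = 100
perception = rng.normal(size=n)                         # stand-in: tone-frequency discrimination
categorization = 0.4 * perception + rng.normal(size=n)  # stand-in: speech categorization
phon_awareness = 0.5 * categorization + rng.normal(size=n)

# Percentile bootstrap confidence interval for the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(perception[idx], categorization[idx], phon_awareness[idx]))
print(np.percentile(boot, [2.5, 97.5]))                 # a CI excluding 0 suggests mediation
```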
Article
Musical training involves exposure to complex auditory and visual stimuli, memorization of elaborate sequences, and extensive motor rehearsal. It has been hypothesized that such multifaceted training may be associated with differences in basic cognitive functions, such as prediction, potentially translating to a facilitation in expert musicians. Moreover, such differences might generalize to non-auditory stimuli. This study was designed to test both hypotheses. We implemented a cross-modal attentional cueing task with auditory and visual stimuli, where a target was preceded by compatible or incompatible cues in mainly compatible (80% compatible, predictable) or random blocks (50% compatible, unpredictable). This allowed for the testing of prediction skills in musicians and controls. Musicians showed increased sensitivity to the statistical structure of the block, expressed as advantage for compatible trials (disadvantage for incompatible trials), but only in the mainly compatible (predictable) blocks. Controls did not show this pattern. The effect held within modalities (auditory, visual), across modalities, and when controlling for short-term memory capacity. These results reveal a striking enhancement in cross-modal prediction in musicians in a very basic cognitive task.
Article
Hierarchical structure with units of different timescales is a key feature of music. For the perception of such structures, the detection of each boundary is crucial. Here, using electroencephalography (EEG), we explore the perception of hierarchical boundaries in music, and test whether musical expertise modifies such processing. Musicians and non-musicians were presented with musical excerpts containing boundaries at three hierarchical levels, including section, phrase and period boundaries. Non-boundary was chosen as a baseline condition. Recordings from musicians showed that a CPS (closure positive shift) was evoked at all three boundaries, and its amplitude increased as the hierarchical level became higher, which suggests that musicians could represent music events at different timescales in a hierarchical way. For non-musicians, the CPS was only elicited at the period boundary, and indistinguishable negativities were induced at all three boundaries. The results indicate that non-musicians used a different and less clear strategy for boundary perception. Our findings reveal, for the first time, an ERP correlate of perceiving hierarchical boundaries in music, and show that phrasing ability could be enhanced by musical expertise.
Article
Auditory training (AT) may strengthen auditory skills that help humans not only in on-task auditory perception but also in continuous speech-shaped noise (SSN) environments. AT based on musical material has provided some evidence for an “auditory advantage” in understanding speech-in-noise (SIN), but typically requires a long training period and a complex procedure. Experimental research is needed to determine the benefits of a simplified method, refined from musical material, named auditory target tracking training (ATT). We administered two kinds of refined AT, basic auditory target tracking (BAT) training and enhanced auditory target tracking (EAT) training, to adult participants ([Formula: see text]) separately for 20 units, assessing speech perception in noise after training. The EAT group showed better speech perception performance than the other groups, and there were no significant differences between the BAT group and the control group. The training effect of EAT was most pronounced with uni-gender SSN and [Formula: see text] dB. Outcomes suggest that EAT can improve speech perception performance and selective attention in an SSN environment. These findings provide an important link between music-based training and auditory selective attention in the real world, and may extend to specialized vocational training.
Article
Full-text available
The article provides an overview of modern foreign psychological research devoted to the study of musicality. It describes in detail the Gold-MSI (Goldsmiths Musical Sophistication Index), developed by psychologists at Goldsmiths, University of London, to assess the level of musical development. In Russia, this technique is largely unknown, although it is widely used in foreign studies; it is standardized and has displayed good psychometric properties. The article presents the results of the initial testing of the Russian version of the Gold-MSI technique on a sample of 107 participants. It is shown that the Russian version of the methodology has satisfactory reliability and validity and can be used for research purposes.
Article
Traditionally, studies investigating the influence of musical background on task performance have distinguished musicians from nonmusicians primarily by the number of years of training on a musical instrument or voice. Individuals are usually sorted into two groups based on cutoff values according to years of musical training, even though training is a continuous variable. Categorizing continuous variables introduces several problems, including reduced power, spurious significance, loss of measurement reliability, and elimination of individual differences (MacCallum, Zhang, Preacher, & Rucker, 2002). To examine how common and variable this practice might be, we reviewed a cross-section of (40) pitch studies from the past 25 years. Sixty-five percent of studies relied on years of training and reflected 18 different cutoff values. The adverse impact of different cutoffs was investigated using two data sets from our laboratory in addition to simulated data sets varying in the distribution of musical training (positively skewed vs. normal). Cutoff values were systematically adjusted within each data set, and outcomes were compared against each other, as well as against a regression model in which musical training was treated as a continuous variable. As cutoff values varied, analyses of all data sets revealed substantial variability in effect sizes and only occasional significance. In contrast, regression analyses consistently revealed significance, with effect sizes that exceeded those obtained using cutoff values. These findings suggest that musicianship based upon years of training should be modeled as a continuous variable to maintain naturally occurring variability while facilitating more appropriate comparisons of findings across studies.
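The power cost of imposing a cutoff on a continuous training variable, which motivates the regression approach advocated above, can be illustrated with a small simulation: the same linear effect is tested once after a median split (t test) and once with training kept continuous (regression). The effect size, sample size, and skewed training distribution below are arbitrary choices for illustration, not values from the reviewed studies.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
hits_split = hits_regression = 0
for _ in range(2000):
    years = rng.exponential(scale=5.0, size=80)           # positively skewed training variable
    outcome = 0.3 * stats.zscore(years) + rng.normal(size=80)
    # (a) dichotomize at the median and run a two-sample t test
    musician = years > np.median(years)
    _, p_split = stats.ttest_ind(outcome[musician], outcome[~musician])
    # (b) keep training continuous and test the linear association
    _, _, _, p_reg, _ = stats.linregress(years, outcome)
    hits_split += p_split < 0.05
    hits_regression += p_reg < 0.05

# Proportion of simulations detecting the (real) effect under each analysis.
print(hits_split / 2000, hits_regression / 2000)
```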
Article
Everyday communication mostly occurs in the presence of various background noises and competing talkers. Studies have shown that musical training could have a positive effect on auditory processing, particularly in challenging listening situations. To our knowledge, no groups have specifically studied the advantage of musical training on perception of consonants in the presence of background noise. We hypothesized that the musician advantage in speech-in-noise processing may also result in enhanced perception of speech units such as consonants in noise. Therefore, this study aimed to compare the recognition of stops and fricatives, which constitute the highest number of Persian consonants, in the presence of 12-talker babble noise between musicians and non-musicians. For this purpose, stops and fricatives were presented in consonant-vowel-consonant format and embedded at three signal-to-noise ratios of 0, −5, and −10 dB. The study was conducted on 40 young listeners (20 musicians and 20 non-musicians) with normal hearing. Our outcomes indicated that musicians outperformed the non-musicians in recognition of stops and fricatives at all three signal-to-noise ratios. These findings provide important evidence about the impact of musical instruction on processing of consonants and highlight the role of musical training in perceptual abilities.
Article
Full-text available
The ability to reproduce novel words is a sensitive marker of language impairment across a variety of developmental disorders. Nonword repetition tasks are thought to reflect phonological short-term memory skills. Yet, when children hear and then utter a word for the first time, they must transform a novel speech signal into a series of coordinated, precisely timed oral movements. Little is known about how children's oromotor speed, planning and co-ordination abilities might influence their ability to repeat novel nonwords, beyond the influence of higher-level cognitive and linguistic skills. In the present study, we tested 35 typically developing children between the ages of 5-8 years on measures of nonword repetition, digit span, memory for non-verbal sequences, reading fluency, oromotor praxis, and oral diadochokinesis. We found that oromotor praxis uniquely predicted nonword repetition ability in school-age children, and that the variance it accounted for was additional to that of digit span, memory for non-verbal sequences, articulatory rate (measured by oral diadochokinesis) as well as reading fluency. We conclude that the ability to compute and execute novel sensorimotor transformations affects the production of novel words. These results have important implications for understanding motor/language relations in neurodevelopmental disorders.
Article
Full-text available
This study tested the hypothesis that the previously reported advantage of musicians over non-musicians in understanding speech in noise arises from more efficient or robust coding of periodic voiced speech, particularly in fluctuating backgrounds. Speech intelligibility was measured in listeners with extensive musical training, and in those with very little musical training or experience, using normal (voiced) or whispered (unvoiced) grammatically correct nonsense sentences in noise that was spectrally shaped to match the long-term spectrum of the speech, and was either continuous or gated with a 16-Hz square wave. Performance was also measured in clinical speech-in-noise tests and in pitch discrimination. Musicians exhibited enhanced pitch discrimination, as expected. However, no systematic or statistically significant advantage for musicians over non-musicians was found in understanding either voiced or whispered sentences in either continuous or gated noise. Musicians also showed no statistically significant advantage in the clinical speech-in-noise tests. Overall, the results provide no evidence for a significant difference between young adult musicians and non-musicians in their ability to understand speech in noise.
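Two stimulus details mentioned above, noise spectrally shaped to the long-term spectrum of the speech and gating with a 16-Hz square wave, can be sketched with a generic FFT-based recipe. The speech waveform and sampling rate below are placeholders; this is not the study's stimulus-generation code.

```python
import numpy as np

def speech_shaped_noise(speech, fs, gate_hz=None):
    """Noise with the long-term magnitude spectrum of `speech`, optionally square-wave gated."""
    spectrum = np.abs(np.fft.rfft(speech))                    # long-term magnitude spectrum
    phases = np.exp(1j * np.random.uniform(0, 2 * np.pi, spectrum.size))
    phases[0] = phases[-1] = 1.0                              # keep DC/Nyquist terms real
    noise = np.fft.irfft(spectrum * phases, n=len(speech))
    noise /= np.sqrt(np.mean(noise ** 2))                     # normalise to unit RMS
    if gate_hz is not None:                                   # e.g., 16-Hz square-wave gating
        t = np.arange(len(noise)) / fs
        noise *= (np.sin(2 * np.pi * gate_hz * t) > 0).astype(float)
    return noise

# Illustrative call with a placeholder signal standing in for a speech recording.
fs = 16000
speech = np.random.randn(fs * 2)
masker = speech_shaped_noise(speech, fs, gate_hz=16)
```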
Article
Full-text available
Sensitive periods in human development have often been proposed to explain age-related differences in the attainment of a number of skills, such as a second language (L2) and musical expertise. It is difficult to reconcile the negative consequence this traditional view entails for learning after a sensitive period with our current understanding of the brain's ability for experience-dependent plasticity across the lifespan. What is needed is a better understanding of the mechanisms underlying auditory learning and plasticity at different points in development. Drawing on research in language development and music training, this review examines not only what we learn and when we learn it, but also how learning occurs at different ages. First, we discuss differences in the mechanism of learning and plasticity during and after a sensitive period by examining how language exposure versus training forms language-specific phonetic representations in infants and adult L2 learners, respectively. Second, we examine the impact of musical training that begins at different ages on behavioral and neural indices of auditory and motor processing as well as sensorimotor integration. Third, we examine the extent to which childhood training in one auditory domain can enhance processing in another domain via the transfer of learning between shared neuro-cognitive systems. Specifically, we review evidence for a potential bi-directional transfer of skills between music and language by examining how speaking a tonal language may enhance music processing and, conversely, how early music training can enhance language processing. We conclude with a discussion of the role of attention in auditory learning for learning during and after sensitive periods and outline avenues of future research.
Article
Full-text available
While hearing in noise is a complex task, even in high levels of noise humans demonstrate remarkable hearing ability. Binaural hearing, which involves the integration and analysis of incoming sounds from both ears, is an important mechanism that promotes hearing in complex listening environments. Analyzing inter-ear differences helps differentiate between sound sources-a key mechanism that facilitates hearing in noise. Even when both ears receive the same input, known as diotic hearing, speech intelligibility in noise is improved. Although musicians have better speech-in-noise perception compared with non-musicians, we do not know to what extent binaural processing contributes to this advantage. Musicians often demonstrate enhanced neural responses to sound, however, which may undergird their speech-in-noise perceptual enhancements. Here, we recorded auditory brainstem responses in young adult musicians and non-musicians to a speech stimulus for which there was no musician advantage when presented monaurally. When presented diotically, musicians demonstrated faster neural timing and greater intertrial response consistency relative to non-musicians. Furthermore, musicians' enhancements to the diotically presented stimulus correlated with speech-in-noise perception. These data provide evidence for musical training's impact on biological processes and suggest binaural processing as a possible contributor to more proficient hearing in noise.
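Intertrial response consistency, one of the neural measures reported above, is commonly computed by correlating sub-averages of the response across random splits of the trials. The sketch below shows one generic way to do this, with a simulated trials-by-time matrix standing in for real auditory brainstem recordings; it is not the study's analysis code.

```python
import numpy as np

def response_consistency(trials, n_splits=100, rng=None):
    """Mean correlation between averages of two random halves of the trials.

    trials: (n_trials, n_samples) array of single-trial responses
    """
    if rng is None:
        rng = np.random.default_rng()
    n = trials.shape[0]
    rs = []
    for _ in range(n_splits):
        order = rng.permutation(n)
        half_a = trials[order[: n // 2]].mean(axis=0)
        half_b = trials[order[n // 2:]].mean(axis=0)
        rs.append(np.corrcoef(half_a, half_b)[0, 1])
    return float(np.mean(rs))

# Simulated data: 300 trials of a common response buried in trial-to-trial noise.
rng = np.random.default_rng(3)
signal = np.sin(2 * np.pi * np.linspace(0, 4, 500))
trials = signal + 2.0 * rng.normal(size=(300, 500))
print(response_consistency(trials, rng=rng))
```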
Article
Full-text available
We tested non-musicians and musicians in an auditory psychophysical experiment to assess the effects of timbre manipulation on pitch-interval discrimination. Both groups were asked to indicate the larger of two presented intervals, comprised of four sequentially presented pitches; the second or fourth stimulus within a trial was either a sinusoidal (or "pure"), flute, piano, or synthetic voice tone, while the remaining three stimuli were all pure tones. The interval-discrimination tasks were administered parametrically to assess performance across varying pitch distances between intervals ("interval-differences"). Irrespective of timbre, musicians displayed a steady improvement across interval-differences, while non-musicians only demonstrated enhanced interval discrimination at an interval-difference of 100 cents (one semitone in Western music). Surprisingly, the best discrimination performance across both groups was observed with pure-tone intervals, followed by intervals containing a piano tone. More specifically, we observed that: 1) timbre changes within a trial affect interval discrimination; and 2) the broad spectral characteristics of an instrumental timbre may influence perceived pitch or interval magnitude and make interval discrimination more difficult.
Article
Full-text available
Experience has a profound influence on how sound is processed in the brain. Yet little is known about how enriched experiences interact with developmental processes to shape neural processing of sound. We examine this question as part of a large cross-sectional study of auditory brainstem development involving more than 700 participants, 213 of whom were classified as musicians. We hypothesized that experience-dependent processes piggyback on developmental processes, resulting in a waxing-and-waning effect of experience that tracks with the undulating developmental baseline. This hypothesis led to the prediction that experience-dependent plasticity would be amplified during periods when developmental changes are underway (i.e., early and later in life) and that the peak in experience-dependent plasticity would coincide with the developmental apex for each subcomponent of the auditory brainstem response (ABR). Consistent with our predictions, we reveal that musicians have heightened response features at distinctive times in the life span that coincide with periods of developmental change. The effect of musicianship is also quite specific: we find that only select components of auditory brainstem activity are affected, with musicians having heightened function for onset latency, high-frequency phase-locking, and response consistency, and with little effect observed for other measures, including lower-frequency phase-locking and non-stimulus-related activity. By showing that musicianship imparts a neural signature that is especially evident during childhood and old age, our findings reinforce the idea that the nervous system's response to sound is "chiseled" by how a person interacts with his specific auditory environment, with the effect of the environment wielding its greatest influence during certain privileged windows of development.
Article
Full-text available
This study examines the deviation in the intonation of simultaneously sounding tones under the condition of an embedded melody task. Two professional musicians (trumpet players) were chosen as subjects to play the missing upper voice of a four-part audio example, while listening via headphones to the remaining three parts in adaptive five-limit just intonation and equal temperament. The experimental paradigm was that of a controlled varied condition with a 2 (tuning systems) x 5 (interval categories) x 5 (renditions) x 2 (players) factorial design. An analysis of variance showed a nonsignificant difference between the average deviation of harmonic intonation in the two systems used. Mean deviations of 4.9 cents (SD = 6.5 cents) in the equal-temperament condition and of 6.7 cents (SD = 8.1 cents) in the just-intonation condition were found. Thus, we assume that the musicians employed the same intonation for equal-temperament and just-intonation versions (an unconscious "always the same" strategy) and could not successfully adapt their performances to the just-intonation tuning system. Smaller deviations were observed in the equal-temperament condition. This overall tendency can be interpreted as a "burn in" effect and is probably the consequence of long-term intonation practice with equal temperament. Finally, a theoretical model of intonation is developed by use of factor analysis. Four factors that determine intonation patterns were revealed: the "major third factor," the "minor third and partials factor," the "instrumental tuning factor," and the "octave-minor seventh factor." To summarize, even in expert musicians, intonation is not determined by abstract tuning systems but is the result of an interaction among compositional features, the acoustics of the particular musical instrument, and deviation patterns in specific intervals.
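For readers unfamiliar with the cent measure used to report these deviations: a cent is 1/1200 of an octave on a logarithmic frequency scale. The sketch below is an illustrative aside, not material from the study; it shows how a deviation in cents is computed and why an equal-tempered major third sits roughly 14 cents sharp of the five-limit just third. The 440 Hz reference is a hypothetical choice.

```python
import math

def cents(f, f_ref):
    """Deviation of frequency f from reference f_ref in cents (1200 cents = 1 octave)."""
    return 1200 * math.log2(f / f_ref)

# Illustrative values: an equal-tempered major third above 440 Hz
# versus the 5/4 ratio used in five-limit just intonation.
f_ref = 440.0
f_equal = f_ref * 2 ** (4 / 12)   # four equal-tempered semitones
f_just = f_ref * 5 / 4            # just major third

print(round(cents(f_equal, f_just), 1))  # ~ +13.7 cents: the tempered third is sharp of the just third
```

Against this roughly 14-cent difference between the two tuning systems, the 4.9- and 6.7-cent mean deviations reported above are small, which is consistent with the authors' "always the same" interpretation.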
Article
Full-text available
The relationship between music and cognitive abilities was studied by observing the cognitive development of children provided (n = 63) and not provided (n = 54) with individual piano lessons from fourth to sixth grade. There were no differences in cognitive abilities, musical abilities, motor proficiency, self-esteem, academic achievement, or interest in studying piano between the two groups of children at the beginning of the study. It was found that the treatment affected children's general and spatial cognitive development. The magnitude of such effects (omega squared) was small. Additional analyses showed that although the experimental group obtained higher spatial abilities scores in the Developing Cognitive Abilities Test after 1 and 2 years of instruction than did the control group, the groups did not differ in general or specific cognitive abilities after 3 years of instruction. The treatment did not affect the development of quantitative and verbal cognitive abilities.
Article
Full-text available
Although most studies that examined associations between music training and cognitive abilities had correlational designs, the prevailing bias is that music training causes improvements in cognition. It is also possible, however, that high-functioning children are more likely than other children to take music lessons, and that they also differ in personality. We asked whether individual differences in cognition and personality predict who takes music lessons and for how long. The participants were 118 adults (Study 1) and 167 10- to 12-year-old children (Study 2). We collected demographic information and measured cognitive ability and the Big Five personality dimensions. As in previous research, cognitive ability was associated with musical involvement even when demographic variables were controlled statistically. Novel findings indicated that personality was associated with musical involvement when demographics and cognitive ability were held constant, and that openness-to-experience was the personality dimension with the best predictive power. These findings reveal that: (1) individual differences influence who takes music lessons and for how long, (2) personality variables are at least as good as cognitive variables at predicting music training, and (3) future correlational studies of links between music training and non-musical ability should account for individual differences in personality.
Article
Full-text available
A central question for cognitive neuroscience is whether there is a single neural system controlling the allocation of attention. A dorsal frontoparietal network of brain regions is often proposed as a mediator of top-down attention to all sensory inputs. We used functional magnetic resonance imaging in humans to show that the cortical networks supporting top-down attention are in fact modality-specific, with distinct superior fronto-parietal and fronto-temporal networks for visuospatial and non-spatial auditory attention respectively. In contrast, parts of the right middle and inferior frontal gyri showed a common response to attentional control regardless of modality, providing evidence that the amodal component of attention is restricted to the anterior cortex.
Article
Full-text available
Training during a sensitive period in development may have greater effects on brain structure and behavior than training later in life. Musicians are an excellent model for investigating sensitive periods because training starts early and can be quantified. Previous studies suggested that early training might be related to greater amounts of white matter in the corpus callosum, but did not control for length of training or identify behavioral correlates of structural change. The current study compared white-matter organization using diffusion tensor imaging in early- and late-trained musicians matched for years of training and experience. We found that early-trained musicians had greater connectivity in the posterior midbody/isthmus of the corpus callosum and that fractional anisotropy in this region was related to age of onset of training and sensorimotor synchronization performance. We propose that training before the age of 7 years results in changes in white-matter connectivity that may serve as a scaffold upon which ongoing experience can build.
Article
Full-text available
Attention modulates auditory perception, but there are currently no simple tests that specifically quantify this modulation. To fill the gap, we developed a new, easy-to-use test of attention in listening (TAIL) based on reaction time. On each trial, two clearly audible tones were presented sequentially, either at the same or different ears. The frequency of the tones was also either the same or different (by at least two critical bands). When the task required same/different frequency judgments, presentation at the same ear significantly speeded responses and reduced errors. A same/different ear (location) judgment was likewise facilitated by keeping tone frequency constant. Perception was thus influenced by involuntary orienting of attention along the task-irrelevant dimension. When information in the two stimulus dimensions was congruent (same-frequency same-ear, or different-frequency different-ear), responses were faster and more accurate than when it was incongruent (same-frequency different-ear, or different-frequency same-ear), suggesting the involvement of executive control to resolve conflicts. In total, the TAIL yielded five independent outcome measures: (1) baseline reaction time, indicating information processing efficiency, (2) involuntary orienting of attention to frequency and (3) location, and (4) conflict resolution for frequency and (5) location. Processing efficiency and conflict resolution accounted for up to 45% of individual variances in the low- and high-threshold variants of three psychoacoustic tasks assessing temporal and spectral processing. Involuntary orienting of attention to the irrelevant dimension did not correlate with perceptual performance on these tasks. Given that TAIL measures are unlikely to be limited by perceptual sensitivity, we suggest that the correlations reflect modulation of perceptual performance by attention. The TAIL thus has the power to identify and separate contributions of different components of attention to auditory perception.
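The abstract does not spell out how the five TAIL measures are scored. The sketch below is only a hedged guess at how reaction-time contrasts of this kind might be derived from trial-level data; the trial layout, field names, and the specific contrasts are assumptions for illustration, not the published test's scoring procedure.

```python
from statistics import mean

# Hypothetical trial-level data: (task, same_frequency, same_ear, reaction_time_in_s).
trials = [
    ("frequency", True,  True,  0.42),
    ("frequency", True,  False, 0.47),
    ("frequency", False, True,  0.51),
    ("frequency", False, False, 0.45),
    # ... a real dataset would contain many more trials ...
]

def mean_rt(task, **conditions):
    """Mean RT for one task, optionally filtered by same_frequency / same_ear."""
    rts = [rt for t, sf, se, rt in trials
           if t == task
           and conditions.get("same_frequency", sf) == sf
           and conditions.get("same_ear", se) == se]
    return mean(rts)

# (1) Baseline efficiency: overall mean RT on the frequency-judgment task.
baseline = mean_rt("frequency")

# (3) Involuntary orienting to location: cost of an ear change when ear is task-irrelevant.
orienting_location = mean_rt("frequency", same_ear=False) - mean_rt("frequency", same_ear=True)

# (4)/(5) Conflict resolution: incongruent minus congruent trials.
congruent = mean([rt for _, sf, se, rt in trials if sf == se])
incongruent = mean([rt for _, sf, se, rt in trials if sf != se])
conflict = incongruent - congruent

print(baseline, orienting_location, conflict)
```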
Article
Full-text available
Older adults frequently complain that while they can hear a person talking, they cannot understand what is being said; this difficulty is exacerbated by background noise. Peripheral hearing loss cannot fully account for this age-related decline in speech-in-noise ability, as declines in central processing also contribute to this problem. Given that musicians have enhanced speech-in-noise perception, we aimed to define the effects of musical experience on subcortical responses to speech and speech-in-noise perception in middle-aged adults. Results reveal that musicians have enhanced neural encoding of speech in quiet and noisy settings. Enhancements include faster neural response timing, higher neural response consistency, more robust encoding of speech harmonics, and greater neural precision. Taken together, we suggest that musical experience provides perceptual benefits in an aging population by strengthening the underlying neural pathways necessary for the accurate representation of important temporal and spectral features of sound.
Article
Full-text available
The ability to separate concurrently occurring sounds based on periodicity cues is critical for parsing complex auditory scenes. This ability is enhanced in young adult musicians and reduced in older adults. Here, we investigated the impact of lifelong musicianship on concurrent sound segregation and perception using scalp-recorded ERPs. Older and younger musicians and nonmusicians were presented with periodic harmonic complexes where the second harmonic could be tuned or mistuned by 1-16% of its original value. The likelihood of perceiving two simultaneous sounds increased with mistuning, and musicians, both older and younger, were more likely to detect and report hearing two sounds when the second harmonic was mistuned at or above 2%. The perception of a mistuned harmonic as a separate sound was paralleled by an object-related negativity that was larger and earlier in younger musicians compared with the other three groups. When listeners made a judgment about the harmonic stimuli, the perception of the mistuned harmonic as a separate sound was paralleled by a positive wave at about 400 msec poststimulus (P400), which was enhanced in both older and younger musicians. These findings suggest that attention-dependent processing of a mistuned harmonic is enhanced in older musicians and provide further evidence that age-related declines in hearing abilities are mitigated by musical expertise.
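As a hedged illustration of the kind of stimulus described (not the study's actual signal-generation code), the sketch below synthesizes a harmonic complex whose second harmonic can be mistuned by a chosen percentage. The fundamental frequency, number of harmonics, duration, and sampling rate are arbitrary choices for the example.

```python
import numpy as np

def harmonic_complex(f0=220.0, n_harmonics=10, mistune_pct=0.0, dur=0.5, fs=44100):
    """Harmonic complex tone whose 2nd harmonic is shifted by mistune_pct percent.
    Parameter values are illustrative, not those of the cited study."""
    t = np.arange(int(dur * fs)) / fs
    signal = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        f = k * f0
        if k == 2:
            f *= 1.0 + mistune_pct / 100.0  # mistune only the second harmonic
        signal += np.sin(2 * np.pi * f * t)
    return signal / n_harmonics  # crude amplitude normalization

tuned = harmonic_complex(mistune_pct=0.0)
mistuned = harmonic_complex(mistune_pct=8.0)  # 8% mistuning, within the 1-16% range described
```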
Article
Full-text available
In vision, global (whole) features are typically processed before local (detail) features ("global precedence effect"). However, the distinction between global and local processing is less clear in the auditory domain. The aims of the present study were to investigate: (i) the effects of directed versus divided attention, and (ii) the effect of musical training on auditory global-local processing in 16 adult musicians and 16 non-musicians. Participants were presented with short nine-tone melodies, each comprised of three triplet sequences (three-tone units). In a "directed attention" task, participants were asked to focus on either the global or local pitch pattern and had to determine if the pitch pattern went up or down. In a "divided attention" task, participants judged whether the target pattern (up or down) was present or absent. Overall, global structure was perceived faster and more accurately than local structure. The global precedence effect was observed regardless of whether attention was directed to a specific level or divided between levels. Musicians performed more accurately than non-musicians overall, but non-musicians showed a more pronounced global advantage. This study provides evidence for an auditory global precedence effect across attention tasks, and for differences in auditory global-local processing associated with musical experience.
Article
Full-text available
Human hearing depends on a combination of cognitive and sensory processes that function by means of an interactive circuitry of bottom-up and top-down neural pathways, extending from the cochlea to the cortex and back again. Given that similar neural pathways are recruited to process sounds related to both music and language, it is not surprising that the auditory expertise gained over years of consistent music practice fine-tunes the human auditory system in a comprehensive fashion, strengthening neurobiological and cognitive underpinnings of both music and speech processing. In this review we argue not only that common neural mechanisms for speech and music exist, but that experience in music leads to enhancements in sensory and cognitive contributors to speech processing. Of specific interest is the potential for music training to bolster neural mechanisms that undergird language-related skills, such as reading and hearing speech in background noise, which are critical to academic progress, emotional health, and vocational success.
Article
Full-text available
Over a typical career piano tuners spend tens of thousands of hours exploring a specialized acoustic environment. Tuning requires accurate perception and adjustment of beats in two-note chords that serve as a navigational device to move between points in previously learned acoustic scenes. It is a two-stage process that depends on the following: first, selective listening to beats within frequency windows, and, second, the subsequent use of those beats to navigate through a complex soundscape. The neuroanatomical substrates underlying brain specialization for such fundamental organization of sound scenes are unknown. Here, we demonstrate that professional piano tuners are significantly better than controls matched for age and musical ability on a psychophysical task simulating active listening to beats within frequency windows that is based on amplitude modulation rate discrimination. Tuners show a categorical increase in gray matter volume in the right frontal operculum and right superior temporal lobe. Tuners also show a striking enhancement of gray matter volume in the anterior hippocampus, parahippocampal gyrus, and superior temporal gyrus, and an increase in white matter volume in the posterior hippocampus as a function of years of tuning experience. The relationship with gray matter volume is sensitive to years of tuning experience and starting age but not actual age or level of musicality. Our findings support a role for a core set of regions in the hippocampus and superior temporal cortex in skilled exploration of complex sound scenes in which precise sound "templates" are encoded and consolidated into memory over time in an experience-dependent manner.
Article
Full-text available
Criminologists are often interested in examining interactive effects within a regression context. For example, “holding other relevant factors constant, is the effect of delinquent peers on one's own delinquent conduct the same for males and females?” or “is the effect of a given treatment program comparable between first-time and repeat offenders?” A frequent strategy in examining such interactive effects is to test for the difference between two regression coefficients across independent samples. That is, does b1 = b2? Traditionally, criminologists have employed a t or z test for the difference between slopes in making these coefficient comparisons. While there is considerable consensus as to the appropriateness of this strategy, there has been some confusion in the criminological literature as to the correct estimator of the standard error of the difference, the standard deviation of the sampling distribution of coefficient differences, in the t or z formula. Criminologists have employed two different estimators of this standard deviation in their empirical work. In this note, we point out that one of these estimators is correct while the other is incorrect. The incorrect estimator biases one's hypothesis test in favor of rejecting the null hypothesis that b1 = b2. Unfortunately, the use of this incorrect estimator of the standard error of the difference has been fairly widespread in criminology. We provide the formula for the correct statistical test and illustrate with two examples from the literature how the biased estimator can lead to incorrect conclusions.
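The abstract does not reproduce the formula itself. The version of this coefficient-comparison test most often cited from this literature divides the coefficient difference by the square root of the sum of the squared standard errors; the sketch below implements that form under that assumption, with made-up coefficient values.

```python
import math

def z_coefficient_difference(b1, se1, b2, se2):
    """z test for H0: b1 == b2 across independent samples, using the pooled
    standard error sqrt(SE1^2 + SE2^2) (the form usually cited from this literature)."""
    return (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)

# Hypothetical coefficients from two independently estimated regressions.
z = z_coefficient_difference(b1=0.35, se1=0.08, b2=0.15, se2=0.09)
print(round(z, 2))  # ~1.66: short of the conventional two-tailed 1.96 criterion
```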
Article
Full-text available
Three experiments were conducted to determine whether attention may be allocated to a specific frequency region. On each trial, a frequency cue was presented and was followed by a target tone. The cue indicated the most likely frequency of the forthcoming target about which the listeners were required to make a duration judgment. It was reasoned that if listeners are able to allocate attention to the cued frequency region, then judgments of any characteristic of a tone of the cued frequency should be facilitated relative to tones of different frequencies. Results indicated that duration judgments were made more quickly and accurately when the cue provided accurate frequency information than when it did not. In addition, performance generally declined as the frequency separation between cue and target increased. These effects are interpreted as an indication that listeners may use a frequency cue to allocate attention to a specific frequency region and that, under these conditions, the shape of the attentional focus conforms to a gradient. The possible similarities of covert orienting mechanisms in vision and audition are discussed.
Article
Full-text available
Cutting and Rosner (Perception & Psychophysics, 1974, 16, 564–570) reported that nonspeech stimuli differing in rise time were categorically perceived in the same way as speech sounds. With two independently generated sets of stimuli essentially the same as those described by Cutting and Rosner, we were unable to replicate their finding that discrimination measured in an ABX task was best around 40 msec, the category boundary. We found discrimination always best at the shortest rise times, decreasing monotonically with increasing rise time. Oscillographic traces of Cutting and Rosner’s original stimuli showed them not to have the intended rise times. Instead of starting with a rise of 0 msec and increasing linearly in 10-msec steps to 80 msec, the measured rise times were approximately 4, 6, 15, 19, 37, 43, 57, 66, and 76 msec, respectively. A set of stimuli having these rise times was generated. Two distinct patterns of response emerged from the discrimination task. Most subjects now showed best discrimination around 40 msec, but a few still performed best at the shortest rise times.
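As an illustrative aside (not the original stimulus-generation procedure), the sketch below produces tones with a linear amplitude rise of a specified duration, using the measured rise times reported above. The sine carrier, frequency, and overall duration are assumptions; the nonspeech stimuli in these studies used different waveforms.

```python
import numpy as np

def ramped_tone(rise_ms, freq=440.0, dur_ms=500, fs=44100):
    """Periodic tone with a linear amplitude rise over rise_ms milliseconds.
    A simplified stand-in for rise-time stimuli; the original waveforms differed."""
    n = int(dur_ms / 1000 * fs)
    t = np.arange(n) / fs
    tone = np.sin(2 * np.pi * freq * t)
    rise_samples = max(1, int(rise_ms / 1000 * fs))
    envelope = np.ones(n)
    envelope[:rise_samples] = np.linspace(0.0, 1.0, rise_samples)
    return tone * envelope

# The measured rise times reported above, rather than the nominal 0-80 ms series.
stimuli = [ramped_tone(r) for r in (4, 6, 15, 19, 37, 43, 57, 66, 76)]
```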
Book
"Bregman has written a major book, a unique and important contribution to the rapidly expanding field of complex auditory perception. This is a big, rich, and fulfilling piece of work that deserves the wide audience it is sure to attract." -- Stewart H. Hulse, Science Auditory Scene Analysis addresses the problem of hearing complex auditory environments, using a series of creative analogies to describe the process required of the human auditory system as it analyzes mixtures of sounds to recover descriptions of individual sounds. In a unified and comprehensive way, Bregman establishes a theoretical framework that integrates his findings with an unusually wide range of previous research in psychoacoustics, speech perception, music theory and composition, and computer modeling.
Article
The theoretical framework presented in this article explains expert performance as the end result of individuals' prolonged efforts to improve performance while negotiating motivational and external constraints. In most domains of expertise, individuals begin in their childhood a regimen of effortful activities (deliberate practice) designed to optimize improvement. Individual differences, even among elite performers, are closely related to assessed amounts of deliberate practice. Many characteristics once believed to reflect innate talent are actually the result of intense practice extended for a minimum of 10 years. Analysis of expert performance provides unique evidence on the potential and limits of extreme environmental adaptation and learning.
Article
When musical intervals are altered from their usual frequency ratios, listeners may experience a sensation of mistuning. We report results of experiments in which subjects judged degrees of mistuning of all intervals from unison to octave, as well as major tenth and twelfth. Using two simultaneous tones with fundamental frequencies between 250 and 800 Hz and 5 to 10 strong harmonics in each, we find: (1) just intervals, rather than tempered, are considered best in tune; (2) the range of mistunings considered acceptable generally becomes narrower when expressed in cents but wider when described by beat rate as we go from unison to octave, fifth and fourth; (3) whether that trend continues to sixths and thirds depends on individual listening strategies; and (4) the difficulty of judgment generally increases in going from the consonant toward the dissonant intervals, with the latter often eliciting only crude discrimination. Ability to judge mistuning with dichotic stimuli was also tested. We conclude that the beat rates of nearly coinciding harmonics provide an important clue to mistuning, but that a more abstract ability to judge interval size is also used; relative importance of the two strategies differs among subjects.
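The beat-rate cue described here can be made concrete with a short calculation: for a fifth, the third harmonic of the lower tone and the second harmonic of the upper tone nearly coincide, and their frequency difference is the beat rate. The sketch below compares a just and an equal-tempered fifth above a 250 Hz lower tone (a value within the range used in these experiments); it is illustrative code, not from the study.

```python
def beat_rate_fifth(f_lower, tempered=True):
    """Beat rate (Hz) between the 3rd harmonic of the lower tone and the 2nd harmonic
    of the upper tone of a fifth; these harmonics nearly coincide and beat when mistuned."""
    ratio = 2 ** (7 / 12) if tempered else 3 / 2   # equal-tempered vs just fifth
    f_upper = f_lower * ratio
    return abs(3 * f_lower - 2 * f_upper)

print(round(beat_rate_fifth(250.0, tempered=False), 2))  # 0.0 Hz: a just fifth produces no beats
print(round(beat_rate_fifth(250.0, tempered=True), 2))   # ~0.85 Hz for the equal-tempered fifth
```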
Article
This article reviews How Equal Temperament Ruined Harmony (and Why You Should Care) by Ross W. Duffin, 2006. 196 pp. Price: $25.95 (hardcover). ISBN-13: 978-0-393-06227-4