Article · PDF available

Modularity in Musical Processing: The Automaticity of Harmonic Priming


Abstract

Three experiments investigated the modularity of harmonic expectations that are based on cultural schemata despite the availability of more predictive veridical information. Participants were presented with prime-target chord pairs and made an intonation judgment about each target. Schematic expectation was manipulated by the combination of prime and target, with some transitions being schematically more probable than others. Veridical information in the form of prime-target previews, local transition probabilities, or valid versus invalid previews was also provided. Processing was facilitated when a schematically probable target chord followed the prime. Furthermore, this effect was independent of all manipulations of veridical expectation. A solution to L. B. Meyer's (1967b) query "On Rehearing Music" is suggested, in which schematic knowledge contributes to harmonic expectation in a modular manner regardless of whether any veridical knowledge exists.
... Thus, tonal knowledge in long-term memory (referred to as schematic knowledge by Bharucha, 1987) robustly influences the way we predict the course of a musical sequence. The study by Justus and Bharucha (2001) showed, for instance, that response time differences in a harmonic priming paradigm still reflected expectations based on long-term memory representations of tonal relationships even when new information was repeatedly presented, and thus present in short-term memory (see also Filipic et al., 2010; Guo & Koelsch, 2016; Koelsch & Jentschke, 2008). ...
Article
Congenital amusia is a neurodevelopmental disorder of music processing, which includes impaired pitch memory, associated with abnormalities in the right fronto-temporal network. Previous research has shown that tonal structures (as defined by the Western musical system) improve short-term memory performance for short tone sequences (in comparison to atonal versions) in non-musician listeners, but that tonal structures only benefited response times in amusic individuals. Here we tested the potential benefit of tonal structures for short-term memory with more complex musical material. Congenital amusics and their matched non-musician controls were required to indicate whether two excerpts were the same or different. Results confirmed the impaired performance of amusic individuals in this short-term memory task. Most importantly, however, both groups of participants showed better memory performance for tonal material than for atonal material. These results reveal that even amusics’ impaired short-term memory for pitch shows a classical characteristic of short-term memory, that is, the mnemonic benefit of structure in the to-be-memorized material. The findings show that amusic individuals have acquired some implicit knowledge of the regularities of their culture, allowing for implicit processing of tonal structures, which benefits memory even for complex material.
... how musical events are combined into sequences) creates predictions about upcoming musical events or schematic expectations [2], by which unstable chords should lead to more stable ones. Our brain also makes predictions based on the probability of appearance of unexpected musical events, which generate veridical expectations (e.g., through the repetition of violations; [3,4]). One of the key questions of music cognition research is how the creation and violation of these musical expectations are encoded in the brain. ...
Full-text available
Article
In Western music, harmonic expectations can be fulfilled or broken by unexpected chords. Musical irregularities in the absence of auditory deviance elicit well-studied neural responses (e.g. ERAN, P3, N5). These responses are sensitive to schematic expectations (induced by syntactic rules of chord succession) and veridical expectations about predictability (induced by experimental regularities). However, the cognitive and sensory contributions to these responses, and their plasticity as a result of musical training, remain under debate. In the present study, we explored whether the neural processing of pure acoustic violations is affected by schematic and veridical expectations. Moreover, we investigated whether these two factors interact with long-term musical training. In Experiment 1, we registered the ERPs elicited by dissonant clusters placed either at the middle or at the ending position of chord cadences. In Experiment 2, we presented listeners with a high proportion of cadences ending in a dissonant chord. In both experiments, we compared the ERPs of musicians and non-musicians. Dissonant clusters elicited distinctive neural responses: an early negativity (EN), the P3, and the N5. While the EN was not affected by syntactic rules, the P3a and P3b were larger for dissonant closures than for middle dissonant chords. Interestingly, these components were larger in musicians than in non-musicians, whereas the N5 showed the opposite pattern. Finally, the predictability of dissonant closures in our experiment did not modulate any of the ERPs. Our study suggests that, at early time windows, dissonance is processed on the basis of acoustic deviance, independently of syntactic rules. At longer latencies, however, listeners may be able to engage integration mechanisms and further processes of attentional and structural analysis that depend on musical hierarchies and are enhanced in musicians.
... The capacity for automatic processing of pitch and rhythm/meter supports one of the most popular modes of listening to music: background listening. Although it does not allow for perception of the entirety of the musical form that Nattiez described, it does allow for perception of such structural features as melodic intervals (Trainor, McDonald, and Alain 2002), chords (Justus and Bharucha 2001), and probably even textural patterns, at least in two-part texture (Fujioka et al. 2005). It seems that automatic processing of music generally occurs when violations of musical expectancies in the time-frequency domain take place (Näätänen et al. 2007). ...
Full-text available
Method
This is a draft of a long-awaited method of musicological analysis, designed for the identification, classification, and interpretation of music structures found in recordings of solo musicking within any form of traditional music. This includes, but is not limited to, solo songs, instrumental solos, and leading parts (melody) in a monodic composition (e.g. a song with instrumental accompaniment), ranging from the tonality of Western classical music to ekmelic (indefinite-pitch based) forms of musicking. The proposed method features novel formats for visualization of the analyzed music and for tabulation of the metrics collected with the help of existing software frequency analyzers, a matrix technique for identification of melodic intonations, kinematic graphing of degrees in a musical mode, and a frustum projection for convenient visualization of a musical mode.
... However, the predictive information in this setup was derived implicitly from stimulus probability, and it may be processed differently when used explicitly and consciously by the listener. Along these lines, behavioral studies have shown that the veridical expectation about the critical chord does not abolish the priming effect for schematically expected harmony, even when such expectation is formed through a preview of the critical chord beforehand (Justus and Bharucha, 2001) or familiarization with a less-expected musical structure (Tillmann and Bigand, 2010). Furthermore, recently, Guo and Koelsch (2016) found that providing predictive visual cues (colored fixation crosses on the screen) elicited the ERAN earlier in both musicians and non-musicians, but did not influence the amplitude. ...
Full-text available
Article
The early right anterior negativity (ERAN) in event-related potentials (ERPs) is typically elicited by syntactically unexpected events in Western tonal music. We examined how visual predictive information influences syntactic processing, how musical or non-musical cues have different effects, and how they interact with sequential effects between trials, which could modulate with the strength of the sense of established tonality. The EEG was recorded from musicians who listened to chord sequences paired with one of four types of visual stimuli; two provided predictive information about the syntactic validity of the last chord through either musical notation of the whole sequence, or the word “regular” or “irregular,” while the other two, empty musical staves or a blank screen, provided no information. Half of the sequences ended with the syntactically invalid Neapolitan sixth chord, while the other half ended with the Tonic chord. Clear ERAN was observed in frontocentral electrodes in all conditions. A principal component analysis (PCA) was performed on the grand average response in the audio-only condition, to separate spatio-temporal dynamics of different scalp areas as principal components (PCs) and use them to extract auditory-related neural activities in the other visual-cue conditions. The first principal component (PC1) showed a symmetrical frontocentral topography, while the second (PC2) showed a right-lateralized frontal concentration. A source analysis confirmed the relative contribution of temporal sources to the former and a right frontal source to the latter. Cue predictability affected only the ERAN projected onto PC1, especially when the previous trial ended with the Tonic chord. The ERAN in PC2 was reduced in the trials following Neapolitan endings in general. However, the extent of this reduction differed between cue-styles, whereby it was nearly absent when musical notation was used, regardless of whether the staves were filled with notes or empty. 
The results suggest that the right frontal areas carry out the primary role in musical syntactic analysis and integration of the ongoing context, which produce schematic expectations that, together with the veridical expectation incorporated by the temporal areas, inform musical syntactic processing in musicians.
... These schematic priming effects also remained unaffected by veridical knowledge about how each stimulus might proceed (Justus & Bharucha, 2001; Tillmann & Poulin-Charronnat, 2010). Nevertheless, disentangling low-level sensory influences from cognitive accounts of tonal expectancy remains a tremendous challenge, and appealing to the musical materials themselves ... staircase tonal expectations by comparing the most expected cadential sequences against their less expected (and thus, less stable) cadential counterparts. ...
Full-text available
Article
Studies examining the formation of melodic and harmonic expectations during music listening have repeatedly demonstrated that a tonal context primes listeners to expect certain (tonally related) continuations over others. However, few such studies have (1) selected stimuli using ready examples of expectancy violation derived from real-world instances of tonal music, (2) provided a consistent account for the influence of sensory and cognitive mechanisms on tonal expectancies by comparing different computational simulations, or (3) combined melodic and harmonic representations in modelling cognitive processes of expectation. To resolve these issues, this study measures expectations for the most recurrent cadence patterns associated with tonal music and then simulates the reported findings using three sensory-cognitive models of auditory expectation. In Experiment 1, participants provided explicit retrospective expectancy ratings both before and after hearing the target melodic tone and chord of the cadential formula. In Experiment 2, participants indicated as quickly as possible whether those target events were in or out of tune relative to the preceding context. Across both experiments, cadences terminating with stable melodic tones and chords elicited the highest expectancy ratings and the fastest and most accurate responses. Moreover, the model simulations supported a cognitive interpretation of tonal processing, in which listeners with exposure to tonal music generate expectations as a consequence of the frequent (co-)occurrence of events on the musical surface.
Article
To examine priming effects for tonal chord sequences using behavioral methods, experimenters typically employ a discrimination task that draws the participants’ attention to features of the stimulus other than those being assessed, but that may still be affected by the expectedness of the target event. And yet, despite an extensive body of evidence demonstrating tonal priming effects using several discrimination tasks, the relationships between task difficulty, musical expertise, and the expectedness of the target event are not well understood. Thus, this study predicts intonation discrimination performance for tonal chord sequences using measures related to the predictability of the terminal, target chord and the musical sophistication of the participant sample. One hundred participants (50 musicians) were presented with tonal chord sequences and asked to respond to a one-alternative forced choice (1AFC) intonation discrimination task, where out-of-tune targets were mistuned in a range from 1–100 cents sharp using a parametric adaptive staircasing procedure. Chord predictability was estimated using a probabilistic finite-context model, and musical sophistication was measured using the Goldsmiths Musical Sophistication Index (Gold-MSI). As expected, discrimination thresholds varied substantially across the participant sample and were strongly negatively correlated with the Gold-MSI. What is more, moderation analyses revealed a significant interaction between intonation size and predictability, suggesting participants conflate chord intonation with unexpectedness when tuning differences are small.
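The adaptive staircase mentioned above can be illustrated with a minimal, hypothetical sketch (not the authors' actual parametric procedure; the function names, step sizes, and the simulated listener are illustrative assumptions): a 1-up/1-down rule shrinks the mistuning after a correct response and grows it after an error, so the track oscillates around the listener's 50% detection point.

```python
import random

def run_staircase(p_correct, start=60.0, step=8.0, floor=1.0, ceil=100.0,
                  n_trials=60, seed=0):
    """Hypothetical 1-up/1-down staircase over mistuning size (cents).

    p_correct(cents) -> probability the simulated listener detects the
    mistuning. A correct response makes the next trial harder (smaller
    mistuning); an error makes it easier (larger mistuning)."""
    rng = random.Random(seed)
    level = start
    reversals = []
    last_direction = 0  # +1 = got easier, -1 = got harder
    for _ in range(n_trials):
        correct = rng.random() < p_correct(level)
        direction = -1 if correct else +1
        if last_direction and direction != last_direction:
            reversals.append(level)           # track direction reversals
            if len(reversals) in (2, 4):      # refine step after early reversals
                step = max(1.0, step / 2)
        last_direction = direction
        level = min(ceil, max(floor, level + direction * step))
    # threshold estimate: mean of the late reversal levels
    tail = reversals[-6:] if len(reversals) >= 6 else reversals
    return sum(tail) / len(tail) if tail else level

# Simulated listener whose detection rate grows with mistuning size.
listener = lambda cents: min(1.0, cents / 50.0)
threshold = run_staircase(listener)  # oscillates around the listener's 50% point
```

A 1-up/1-down rule targets the 50% point because, at equilibrium, upward and downward moves are equally likely; more elaborate weighted rules can target other performance levels.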
Chapter
One of the main guiding principles for our Robotic Musicianship research is to develop robots that can “listen like a human and play like a machine.” We would like our robots to be able to understand music as humans do, so they can connect with their co-players (“listen like a human”), but also surprise and inspire humans with novel musical ideas and capabilities (“play like a machine”).
Article
In this article, I first address the question of how musical forms come to represent meaning—that is, the semantics of music—and illustrate an important conceptual distinction articulated by Leonard Meyer in Emotion and Meaning in Music between absolute or intramusical meaning and referential or extramusical meaning through a critical analysis of two recent films. Second, building examples of scholarship around a single piece of music frequently used in film—Samuel Barber’s Adagio for Strings—I follow the example set by Murray Smith in Film, Art, and the Third Culture and discuss the complementary approaches of the humanities, the behavioral sciences, and the natural sciences to understanding music and its use in film.
Article
Recent work has shown that authentic and half cadences can be identified via harmonic features in both supervised and unsupervised settings, suggesting that humans may use such cues in perceiving and learning cadences. The present study tests melodic features in these same tasks. Both n-gram models and profile hidden Markov models of melodic patterns are used for supervised classification and unsupervised learning of cadences in Classical string quartets. Success is achieved at the supervised task but not the unsupervised task, indicating that melodic cues would help in perceiving cadences but not in learning to perceive them.
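The supervised n-gram approach can be sketched as one bigram model per cadence class, with the class chosen by maximum smoothed log-likelihood. This is a hypothetical illustration, not the study's implementation: the class labels, scale-degree encoding, toy training data, and add-one smoothing are all illustrative assumptions.

```python
import math
from collections import defaultdict

class BigramClassifier:
    """Per-class bigram model with add-one smoothing; classifies a
    melodic scale-degree sequence by maximum log-likelihood."""
    def __init__(self, vocab):
        self.vocab = list(vocab)
        self.counts = {}  # class -> (bigram counts, unigram counts)

    def fit(self, sequences, labels):
        for seq, y in zip(sequences, labels):
            bi, uni = self.counts.setdefault(y, (defaultdict(int), defaultdict(int)))
            for a, b in zip(seq, seq[1:]):
                bi[(a, b)] += 1
                uni[a] += 1

    def _loglik(self, seq, y):
        bi, uni = self.counts[y]
        V = len(self.vocab)  # add-one smoothing over the vocabulary
        return sum(math.log((bi[(a, b)] + 1) / (uni[a] + V))
                   for a, b in zip(seq, seq[1:]))

    def predict(self, seq):
        return max(self.counts, key=lambda y: self._loglik(seq, y))

# Toy scale-degree endings: authentic cadences (PAC) end 2->1 or 7->1;
# half cadences (HC) end on degree 5.
clf = BigramClassifier(vocab=range(1, 8))
clf.fit([[3, 2, 1], [1, 7, 1], [4, 2, 1], [1, 2, 5], [3, 4, 5], [6, 7, 5]],
        ['PAC', 'PAC', 'PAC', 'HC', 'HC', 'HC'])
print(clf.predict([5, 2, 1]))  # 2->1 ending: classified 'PAC'
```

The unsupervised variant in the study replaces known labels with latent classes, which is where the reported performance gap appears.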
Full-text available
Article
A chord-priming paradigm was used to test predictions of a neural net model (MUSACT). The model makes a nonintuitive prediction: Following a prime chord, expectations for the target chord are based on psychoacoustic similarity at short stimulus onset asynchronies (SOAs) but on implicit knowledge of conventional relationships at longer SOAs. In a critical test, 2 targets were selected for each prime. One was more psychoacoustically similar to the prime, and the other was more closely related on the basis of convention. With an SOA of 50 ms, priming favored the psychoacoustically similar target; with SOAs of 500 ms and longer, the effect reversed, and priming favored conventional relatedness. The results underscore the limitations of models of harmony based on psychoacoustic factors alone. These studies demonstrate how neural net learning models that are appropriately constrained can be subject to strong empirical verification. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Full-text available
Article
The time course of chord priming was explored in four experiments. In chord priming, a chord (a typical combination of simultaneously sounded tones) primes other chords that are musically related. In the present study, the prime duration and the stimulus onset asynchrony (SOA) between the prime chord and the chord to be judged were varied. Priming occurred at an SOA and prime duration as short as 50 msec, the shortest tested. When the prime duration was held constant at 50 msec, priming occurred at an SOA as long as 2,500 msec, the longest tested, and the magnitude of the priming effect did not diminish. To eliminate a possible role of sensory memory in maintaining the priming effect during the silence following the prime, a 250-msec noise mask was presented immediately following the 50-msec prime. The interpolated noise mask did not eliminate priming, thereby supporting the view that chord priming is the consequence of associative activation.
Full-text available
Article
PsyScope is an integrated environment for designing and running psychology experiments on Macintosh computers. The primary goal of PsyScope is to give both psychology students and trained researchers a tool that allows them to design experiments without the need for programming. PsyScope relies on the interactive graphic environment provided by Macintosh computers to accomplish this goal. The standard components of a psychology experiment—groups, blocks, trials, and factors—are all represented graphically, and experiments are constructed by working with these elements in interactive windows and dialogs. In this article, we describe the overall organization of the program, provide an example of how a simple experiment can be constructed within its graphic environment, and discuss some of its technical features (such as its underlying scripting language, timing characteristics, etc.). PsyScope is available for noncommercial purposes free of charge and unsupported to the general research community. Information about how to obtain the program and its documentation is provided.
Chapter
Considerable work has been done on mapping out the mental organization of pitch in music (e.g., Dowling, 1978; Krumhansl, 1990). Studies suggest that much of this psychological structure exists even for subjects who have had no explicit musical training (Bharucha & Stoeckig, 1986). Furthermore, the data that point to such mental structures cannot be accounted for solely on the basis of the spectral content of music (Bharucha & Stoeckig, 1987). These and other findings suggest that the musical regularities of a culture are learned through passive perceptual exposure.
Article
This chapter explores hierarchical expectation and extraopus musical style. Musicians tend to think of style in terms of chronological period, provenance, nationality, genre, composer, and work. All music listening depends on remembering both intraopus and extraopus style. Knowledge of style enables listeners to recognize similarity between percept and memory and thus to map learned, top-down expectations. In the structural sense, style enters into the top-down processing of incoming signals as a level complex. With reference to melodic expectation, the memory of a specific implication connects to a learned style-structural realization situated within a specific durational, metric, and harmonic context. In addition, listeners invoke style structures that are implicatively relevant to the perceptual and cognitive analysis of input. As regards the listener's attention to learned expectations, repetition of intraopus stylistic structures normally takes priority over extraopus stylistic replication. Thus, the chapter concentrates on the structural and hierarchical aspects of extraopus style.
Article
This study investigates the influence of global harmonic structures on priming effects in music while keeping the local context constant. Subjects were presented with eight-chord sequences. The harmonic context, created by the first six chords, was manipulated in order to vary the function of the last two chords. In one context the last chord was analysed as a tonic chord, and in the other as a subdominant chord. Given these changes of harmonic function, the last chord was assumed to be more strongly anticipated in the first context than in the second. The importance of the global context was revealed by lower completion judgements when sequences ended on an unexpected chord (Experiment 1). In Experiment 2, the global context effect was revealed by shorter processing times for the last chord when it was expected (priming effect). These results are discussed in reference to Bharucha's (1987) connectionist model of tonal cognition.
Article
Attempts to tie together some of the issues raised by D. Deutsch and W. J. Dowling (see PA, Vol 72:16425 and 16426, respectively) in their comments on the author's Indian music study (see M. A. Castellano et al; PA, Vol 72:16424). A distinction is made between 2 kinds of hierarchical representations of musical stability. (15 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
This study further explores the effect of global context on chord processing reported by E. Bigand and M. Pineau (1997). Expectations of a target chord were varied by manipulating the preliminary harmonic context while holding constant the chord(s) prior to the target. In Experiment 1, previously observed priming effects were replicated with an on-line paradigm. Experiment 2 was an attempt to identify the point in chord sequences that is responsible for the occurrence of the priming effect. In Experiment 3, Bigand and Pineau's findings were extended to wider harmonic contexts (i.e., defined at three hierarchical levels), and new evidence was provided that chord processing also depends on the temporal organization of the musical sequence. Neural net simulations globally support J. J. Bharucha's (1987, 1994) view that priming effects result from activations spreading via a schematic knowledge of Western harmony. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
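The spreading-activation account invoked here can be caricatured with a toy sketch. This is a hypothetical illustration, not Bharucha's MUSACT architecture: node names, link topology, and the decay rule are all invented for the example. Activation injected at a prime chord spreads through shared key nodes, so harmonically related chords end up more activated than distant ones.

```python
def spread(links, activation, rounds=2, decay=0.5):
    """Toy spreading activation: each round, every node passes a decayed,
    evenly split share of its activation to its neighbors."""
    for _ in range(rounds):
        incoming = {n: 0.0 for n in links}
        for node, neighbors in links.items():
            share = decay * activation.get(node, 0.0) / len(neighbors)
            for n in neighbors:
                incoming[n] += share
        activation = {n: activation.get(n, 0.0) + incoming[n] for n in links}
    return activation

# Hypothetical network: chords link to the keys containing them.
# A C major prime activates its parent keys, which pass activation on
# to related chords (G major) but not to distant ones (F# major).
links = {
    'C':  ['keyC', 'keyG'],
    'G':  ['keyG', 'keyD'],
    'Fs': ['keyFs'],
    'keyC':  ['C'],
    'keyG':  ['C', 'G'],
    'keyD':  ['G'],
    'keyFs': ['Fs'],
}
act = spread(links, {'C': 1.0})
assert act['G'] > act['Fs']  # related chord receives more activation
```

Even this toy version reproduces the qualitative priming prediction: relatedness effects arise from network topology alone, without any explicit rule about chord succession.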
Article
A special set of computer‐generated complex tones is shown to lead to a complete breakdown of transitivity in judgments of relative pitch. Indeed, the tones can be represented as equally spaced points around a circle in such a way that the clockwise neighbor of each tone is judged higher in pitch while the counterclockwise neighbor is judged lower in pitch. Diametrically opposed tones—though clearly different in pitch—are quite ambiguous as to the direction of the difference. The results demonstrate the operation of a “proximity principle” for the continuum of frequency and suggest that perceived pitch cannot be adequately represented by a purely rectilinear scale.