Music Perception

Published by University of California Press

Online ISSN: 1533-8312 · Print ISSN: 0730-7829

Articles


The Speed of Musical Pitch Identification by Absolute-Pitch Possessors

December 1990 · 64 Reads

Three experiments on absolute-pitch identification were performed to examine how quickly and accurately subjects with absolute pitch could respond to different pitch classes. Sixty different pitches in a five-octave range were tested. Subjects with absolute pitch tried to identify the tones as rapidly as possible by pressing corresponding keys on a musical keyboard or a numerical keypad, or by naming them vocally. Converging evidence was obtained indicating that the speed and accuracy of responses were directly related. In general, responses to the white-key notes on the musical keyboard were faster and more accurate than those to the black-key notes, with C and G identified most quickly and accurately. This seems to reflect the differential accessibility of pitch classes in the long-term memory of absolute-pitch possessors, which may be interpreted as a consequence of the acquisition process of absolute pitch early in life.
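
For illustration, a minimal sketch of how per-pitch-class speed and accuracy could be summarized from trial-level data of this kind; the column names (pitch_class, rt_ms, correct) are hypothetical, not taken from the study.

```python
# Minimal sketch: summarizing identification speed and accuracy per pitch class.
# The column names (pitch_class, rt_ms, correct) are hypothetical, not from the study.
import pandas as pd

def summarize_by_pitch_class(trials: pd.DataFrame) -> pd.DataFrame:
    """Mean reaction time (correct trials only) and accuracy for each pitch class."""
    acc = trials.groupby("pitch_class")["correct"].mean().rename("accuracy")
    rt = (trials[trials["correct"] == 1]
          .groupby("pitch_class")["rt_ms"].mean().rename("mean_rt_ms"))
    summary = pd.concat([acc, rt], axis=1)
    # A negative accuracy/RT correlation means pitch classes identified faster
    # also tend to be identified more accurately.
    print("accuracy vs. RT correlation:", summary["accuracy"].corr(summary["mean_rt_ms"]))
    return summary
```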

Bimusicalism: The Implicit Dual Enculturation of Cognitive and Affective Systems

December 2009 · 209 Reads

One prominent example of globalization and mass cultural exchange is bilingualism, whereby world citizens learn to understand and speak multiple languages. Music, like language, is a human universal and is likewise subject to the effects of globalization. In two experiments, we asked whether bimusicalism exists as a phenomenon, and whether it can occur even without explicit formal training and extensive music-making. Everyday music listeners who had significant exposure to music of both Indian (South Asian) and Western traditions (IW listeners) and listeners who had experience with only Indian or only Western culture (I or W listeners) participated in recognition memory and tension judgment experiments in which they listened to Western and Indian music. We found that while I and W listeners showed an in-culture bias, IW listeners showed equal responses to music from both cultures, suggesting that dual mental and affective sensitivities can be extended to a nonlinguistic domain.

From Singing to Speaking: Why Singing May Lead to Recovery of Expressive Language Function in Patients with Broca's Aphasia

April 2008 · 1,048 Reads

It has been reported that patients with severely nonfluent aphasia are better at singing lyrics than at speaking the same words. This observation inspired the development of Melodic Intonation Therapy (MIT), a treatment whose effects have been shown but whose efficacy remains unproven and whose neural correlates remain unidentified. Because of its potential to engage or unmask language-capable regions in the unaffected right hemisphere, MIT is particularly well suited for patients with large left-hemisphere lesions. Using two patients with similar impairments and stroke size/location, we show the effects of MIT and a control intervention. Post-treatment outcomes for both interventions revealed significant improvement in propositional speech that generalized to unpracticed words and phrases; however, the MIT-treated patient's gains surpassed those of the control-treated patient. Treatment-associated imaging changes indicate that MIT's unique engagement of the right hemisphere, both through singing and through tapping with the left hand to prime the sensorimotor and premotor cortices for articulation, accounts for its effect over nonintoned speech therapy.

FIGURE 1. Neural structures involved in the control of heart rate. Note: Adapted from Benarroch (1993), Berntson et al. (1998), Berthoud and Neuhuber (2000), Gianaros (2008), Loewy (1990), Thayer and Lane (2009), and Verberne and Owens (1998). Solid black arrows indicate efferent pathways to the heart, including right vagus nerve (PNS) and stellate ganglion (SNS) inputs to the SA node. Dotted gray arrows indicate afferent pathways to medullary structures via aortic baroreceptor signals carried through the vagus. Dashed black arrows indicate bidirectional connections. AMB: nucleus ambiguus; BF: basal forebrain; BLA: basolateral amygdala; CeA: central nucleus of the amygdala; CVLM: caudal ventrolateral medullary neurons; DVMN: dorsal vagal motor nuclei; Hyp: hypothalamus (lateral and paraventricular); IML: intermediolateral cell column of the spinal cord; LC: locus coeruleus; NTS: nucleus of the solitary tract; PAG: periaqueductal gray; PBN: parabrachial nuclei; PFC: prefrontal cortex; PGi: nucleus paragigantocellularis; RVLM: rostral ventrolateral medullary neurons. Heart graphic (http://en.wikipedia.org/wiki/File:Heartgraphic.svg) used under the GNU Free Documentation License.
TABLE 1. Search Terms Queried.
Music and Autonomic Nervous System (Dys)Function

April 2010 · 2,969 Reads

Despite a wealth of evidence for the involvement of the autonomic nervous system (ANS) in health and disease and the ability of music to affect ANS activity, few studies have systematically explored the therapeutic effects of music on ANS dysfunction. Furthermore, when ANS activity is quantified and analyzed, it is usually from a point of convenience rather than from an understanding of its physiological basis. After a review of the experimental and therapeutic literatures exploring music and the ANS, a "Neurovisceral Integration" perspective on the interplay between the central and autonomic nervous systems is introduced, and the associated implications for physiological, emotional, and cognitive health are explored. The construct of heart rate variability is discussed both as an example of this complex interplay and as a useful metric for exploring the sometimes subtle effect of music on autonomic response. Suggestions for future investigations using musical interventions are offered based on this integrative account.
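
Since the review treats heart rate variability (HRV) as a useful metric, a minimal sketch of two standard time-domain HRV indices computed from an RR (interbeat) interval series follows. The example series is made up and the function name is my own; this is not an analysis from the article.

```python
# Minimal sketch: two standard time-domain heart rate variability (HRV) indices
# computed from a series of RR (interbeat) intervals in milliseconds.
import numpy as np

def hrv_time_domain(rr_ms):
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                       # SDNN: SD of all RR intervals
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # RMSSD: short-term, vagally mediated variability
    return {"SDNN_ms": sdnn, "RMSSD_ms": rmssd}

# Example with a short, made-up RR series (ms)
print(hrv_time_domain([812, 790, 805, 830, 798, 815, 801]))
```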

FIGURE 1. The human auditory system is interconnected by a complex circuitry of bottom-up (thin gray lines) and top-down (thick black lines) neural fibers that extend from the cochlea to the cortex and back again (A). Together, these pathways facilitate the modulation of neural function according to parameters that include directed attention to particular sounds or sound features, recent experiences being held in temporary memory storage sites, and a sound or sound pattern's acquired behavioral relevance, such as through associations gained with training. Evidence suggests that music training refines human auditory processing in each of these domains. With regard to attention (B), adult musicians demonstrate faster reaction times during a sustained attention task than nonmusicians. Similarly, musicians demonstrate increased auditory working memory capacity compared to nonmusicians (C), which is thought to contribute to musicians' enhanced speech-in-noise perception (see Figure 2). Music training also facilitates the subcortical differentiation of the upper and lower notes of musical intervals (D), with musicians demonstrating enhanced representations of the upper note of a musical interval compared to the lower note. Musically untrained participants, by contrast, do not show selective subcortical enhancements to either tone. * p < .05, ** p < .01.
FIGURE 2.   Compared to nonmusicians, musicians’ speech processing is more resistant to the degradative effects of background noise. For example, musicians are better able to repeat sentences correctly when they are presented in noise at lower signal-to-noise ratios (A, left panel); this benefit may be partially driven by enhanced auditory cognitive abilities (A, right panel). Musicians also demonstrate decreased neural response degradation by background noise (C), as revealed in musicians’ and nonmusicians’ auditory brainstem responses (ABRs) to the speech sound /da/ with and without background noise. Because the ABR physically resembles the acoustic properties of incoming sounds, the elicited ABR waveform in each subject (B, lower waveform) resembles the waveform of the evoking stimulus (B, upper waveform). Although both musicians and nonmusicians demonstrate robust neural responses to the speech sound when presented in quiet, nonmusicians’ responses are particularly degraded by the addition of background noise (C). ** p < .01. 
Playing Music for a Smarter Ear: Cognitive, Perceptual and Neurobiological Evidence

December 2011 · 520 Reads

Human hearing depends on a combination of cognitive and sensory processes that function by means of an interactive circuitry of bottom-up and top-down neural pathways, extending from the cochlea to the cortex and back again. Given that similar neural pathways are recruited to process sounds related to both music and language, it is not surprising that the auditory expertise gained over years of consistent music practice fine-tunes the human auditory system in a comprehensive fashion, strengthening neurobiological and cognitive underpinnings of both music and speech processing. In this review we argue not only that common neural mechanisms for speech and music exist, but that experience in music leads to enhancements in sensory and cognitive contributors to speech processing. Of specific interest is the potential for music training to bolster neural mechanisms that undergird language-related skills, such as reading and hearing speech in background noise, which are critical to academic progress, emotional health, and vocational success.

The Ecology of Entrainment: Foundations of Coordinated Rhythmic Movement

September 2010 · 696 Reads

Entrainment has been studied in a variety of contexts including music perception, dance, verbal communication and motor coordination more generally. Here we seek to provide a unifying framework that incorporates the key aspects of entrainment as it has been studied in these varying domains. We propose that there are a number of types of entrainment that build upon pre-existing adaptations that allow organisms to perceive stimuli as rhythmic, to produce periodic stimuli, and to integrate the two using sensory feedback. We suggest that social entrainment is a special case of spatiotemporal coordination where the rhythmic signal originates from another individual. We use this framework to understand the function and evolutionary basis for coordinated rhythmic movement and to explore questions about the nature of entrainment in music and dance. The framework of entrainment presented here has a number of implications for the vocal learning hypothesis and other proposals for the evolution of coordinated rhythmic behavior across an array of species.

The Therapeutic Effects of Singing in Neurological Disorders

April 2010 · 910 Reads

Music making (playing an instrument or singing) is a multimodal activity that involves the integration of auditory and sensorimotor processes. The ability to sing is evident in humans from infancy; it does not depend on formal vocal training but can be enhanced by training. Given the behavioral similarities between singing and speaking, as well as the shared and distinct neural correlates of the two, researchers have begun to examine whether singing can be used to treat some of the speech-motor abnormalities associated with various neurological conditions. This paper reviews recent evidence on the therapeutic effects of singing and how it can potentially ameliorate some of the speech deficits associated with conditions such as stuttering, Parkinson's disease, acquired brain lesions, and autism. By reviewing the status quo, it is hoped that future research can help to disentangle the relative contributions of the factors that make singing effective. This may ultimately lead to the development of specialized or "gold-standard" treatments for these disorders, and to an improvement in the quality of life for patients.

The Song Remains the Same: A Replication and Extension of the MUSIC Model

December 2012 · 421 Reads

Lewis R Goldberg · [...]

There is overwhelming anecdotal and empirical evidence for individual differences in musical preferences. However, little is known about what drives those preferences. Are people drawn to particular musical genres (e.g., rap, jazz) or to certain musical properties (e.g., lively, loud)? Recent findings suggest that musical preferences can be conceptualized in terms of five orthogonal dimensions: Mellow, Unpretentious, Sophisticated, Intense, and Contemporary (conveniently, MUSIC). The aim of the present research is to replicate and extend that work by empirically examining the hypothesis that musical preferences are based on preferences for particular musical properties and psychological attributes as opposed to musical genres. Findings from Study 1 replicated the five-factor MUSIC structure using musical excerpts from a variety of genres and subgenres and revealed musical attributes that differentiate each factor. Results from Studies 2 and 3 show that the MUSIC structure is recoverable using musical pieces from only the jazz and rock genres, respectively. Taken together, the current work provides strong evidence that preferences for music are determined by specific musical attributes and that the MUSIC model is a robust framework for conceptualizing and measuring such preferences.
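
A minimal sketch of the general analytic idea behind such work, recovering a small number of latent preference dimensions from excerpt ratings with factor analysis, is shown below. It uses random placeholder data and scikit-learn's FactorAnalysis; it is not the authors' actual analysis pipeline.

```python
# Minimal sketch: recovering latent dimensions of musical preference with factor analysis.
# The ratings matrix here is random placeholder data, not the study's data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
ratings = rng.normal(size=(500, 52))   # e.g., 500 listeners x 52 musical excerpts

fa = FactorAnalysis(n_components=5, random_state=0)  # five factors, as in the MUSIC model
scores = fa.fit_transform(ratings)                   # listeners' positions on each factor
loadings = fa.components_                            # how each excerpt loads on each factor
print(scores.shape, loadings.shape)                  # (500, 5) (5, 52)
```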

Listening to Filtered Music as a Treatment Option for Tinnitus: A Review

April 2010 · 168 Reads

Tinnitus is the perception of a sound in the absence of an external acoustic stimulus, and it affects roughly 10-15% of the population. This review will discuss the different types of tinnitus and the current research on the underlying neural substrates of subjective tinnitus. Specific focus will be placed on the plasticity of the auditory cortex, the inputs from non-auditory centers in the central nervous system, and how these are affected by tinnitus. We will also discuss several therapies that utilize music as a treatment for tinnitus and highlight a novel method that filters the tinnitus frequency out of the music, leveraging the plasticity of the auditory cortex as a means of reducing the impact of tinnitus.
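
A minimal sketch of the filtering idea described above: removing a band around a listener's tinnitus frequency from music with a band-stop filter, assuming SciPy is available. The center frequency, filter order, and notch width are illustrative values, not clinical parameters from the review.

```python
# Minimal sketch: removing a narrow band around a listener's tinnitus frequency from music.
# The tinnitus frequency and notch width are illustrative, not clinical recommendations.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def notch_music(audio, sample_rate_hz, tinnitus_hz, octave_width=1.0):
    """Band-stop filter centered on the tinnitus frequency (width given in octaves)."""
    low = tinnitus_hz * 2 ** (-octave_width / 2)
    high = tinnitus_hz * 2 ** (octave_width / 2)
    sos = butter(4, [low, high], btype="bandstop", fs=sample_rate_hz, output="sos")
    return sosfiltfilt(sos, audio)

# Example: filter a 6 kHz tinnitus frequency out of 10 s of noise at 44.1 kHz
fs = 44100
audio = np.random.randn(fs * 10)
filtered = notch_music(audio, fs, 6000.0)
```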

TABLE 1. Mean Fractal Statistics—Performance Data. 
FIGURE 4. The tempo map (bpm = 60/IBI) for three different metrical levels (1/16-note, 1/8-note, 1/4-note) of Chopin's Etude in E major, Op. 10, No. 3.
Fractal Tempo Fluctuation and Pulse Prediction

June 2009 · 331 Reads

We investigated people's ability to adapt to the fluctuating tempi of music performance. In Experiment 1, four pieces from different musical styles were chosen, and performances were recorded from a skilled pianist who was instructed to play with natural expression. Spectral and rescaled range analyses on interbeat interval time-series revealed long-range (1/f type) serial correlations and fractal scaling in each piece. Stimuli for Experiment 2 included two of the performances from Experiment 1, with mechanical versions serving as controls. Participants tapped the beat at ¼- and ⅛-note metrical levels, successfully adapting to large tempo fluctuations in both performances. Participants predicted the structured tempo fluctuations, with superior performance at the ¼-note level. Thus, listeners may exploit long-range correlations and fractal scaling to predict tempo changes in music. [Work supported by NSF grant BCS-0094229.]
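
A minimal sketch of the two quantities mentioned above: the tempo map (bpm = 60/IBI) from the figure caption, and a rescaled-range (R/S) estimate of long-range correlation, applied to a made-up interbeat-interval series. This illustrates the general technique only; it is not the paper's exact analysis pipeline.

```python
# Minimal sketch: tempo map (bpm = 60 / IBI) and a rescaled-range (R/S) estimate of the
# Hurst exponent for an interbeat-interval series. Illustrative; not the paper's pipeline.
import numpy as np

def tempo_map(ibi_seconds):
    return 60.0 / np.asarray(ibi_seconds, dtype=float)

def hurst_rs(x, min_window=8):
    """Estimate the Hurst exponent by regressing log(R/S) on log(window size)."""
    x = np.asarray(x, dtype=float)
    sizes, rs_values = [], []
    n = min_window
    while n <= len(x) // 2:
        rs_per_window = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())
            r = dev.max() - dev.min()   # range of cumulative deviations
            s = w.std(ddof=1)           # standard deviation of the window
            if s > 0:
                rs_per_window.append(r / s)
        if rs_per_window:
            sizes.append(n)
            rs_values.append(np.mean(rs_per_window))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
    return slope   # H ~ 0.5 for white noise; H > 0.5 indicates persistent, 1/f-like structure

ibis = 0.5 + 0.02 * np.random.randn(1024)   # made-up interbeat intervals (s)
print(tempo_map(ibis)[:5], hurst_rs(ibis))
```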

FIGURE 2. (A) A diagram of the finite-state grammar illustrating the composition of melody from harmony. Each number represents n in the Bohlen-Pierce scale formula in Figure 1a. Legal paths of each grammar are shown by the arrows. (B) The derivation of one melody from one of the two grammars. Dark arrows illustrate the paths taken, whereas light arrows illustrate other possible paths that are legal in the grammar. The resultant melody is shown at the bottom of the figure.  
Humans Rapidly Learn Grammatical Structure in a New Musical Scale

June 2010 · 362 Reads

Knowledge of musical rules and structures has been reliably demonstrated in humans of different ages, cultures, and levels of music training, and has been linked to our musical preferences. However, how humans acquire knowledge of and develop preferences for music remains unknown. The present study shows that humans rapidly develop knowledge and preferences when given limited exposure to a new musical system. Using a non-traditional, unfamiliar musical scale (Bohlen-Pierce scale), we created finite-state musical grammars from which we composed sets of melodies. After 25-30 min of passive exposure to the melodies, participants showed extensive learning as characterized by recognition, generalization, and sensitivity to the event frequencies in their given grammar, as well as increased preference for repeated melodies in the new musical system. Results provide evidence that a domain-general statistical learning mechanism may account for much of the human appreciation for music.
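
A minimal sketch of the two ingredients described above: equal-tempered Bohlen-Pierce scale frequencies (13 steps spanning a 3:1 "tritave") and melody generation by walking a finite-state grammar. The base frequency and the transition table below are toy values for illustration; they are not the grammars used in the study.

```python
# Minimal sketch: Bohlen-Pierce scale frequencies and melody generation from a toy
# finite-state grammar. The base frequency and transition table are illustrative only.
import random

def bp_frequency(n, base_hz=220.0):
    """Equal-tempered Bohlen-Pierce scale: 13 steps spanning a 3:1 'tritave'."""
    return base_hz * 3 ** (n / 13)

# Toy grammar: states are scale degrees n; each entry lists the legal next degrees.
grammar = {0: [3, 4], 3: [6, 7], 4: [6, 10], 6: [9, 10], 7: [10], 9: [0], 10: [0]}

def compose_melody(length=8, start=0, seed=None):
    rng = random.Random(seed)
    degrees = [start]
    while len(degrees) < length:
        degrees.append(rng.choice(grammar[degrees[-1]]))
    return [round(bp_frequency(n), 1) for n in degrees]

print(compose_melody(seed=1))   # one grammatical melody, as frequencies in Hz
```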

Fig. 5.1 Schematic illustrations of two models of emotion. The left panel shows a diagram of two-dimensional affective (valence × arousal) space – the circumplex model. Example emotions are noted in each quadrant. The right panel shows a model of mixed valence. Pure positive and negative responses lie along the axes in white. Darker shades of gray represent greater mixed feelings, which have shared positive and negative activation to varying degrees.
Fig. 5.2 The effect of exposure on liking as a function of number of exposures, the type of exposure, and stimulus complexity (Data are from Szpunar et al. 2004)  
Fig. 5.3 Liking for happy- and sad-sounding music as a function of the type of previous exposure (upper panel, data from Schellenberg et al. 2008) and listener's mood (lower panel, data from Hunter and Schellenberg 2008).
Music and Emotion

August 2010 · 14,333 Reads

These two quotations reflect common attitudes about music. Tolstoy’s comment suggests that music conveys emotion, whereas Torke’s question implies that music influences listeners’ emotions. Section 5.2 of the present chapter includes a discussion of the various theoretical approaches that are used to explain affective responses to music. Few scholars dispute the claim that listeners recognize emotions in music. Some argue, however, that music does not elicit true emotions in the listener (e.g., Kivy 1980, 1990, 2001). For example, many years ago Meyer (1956) posited that affective responses to music consist of experiences of tension and relaxation (rather than actual emotions), which occur when listeners’ expectancies about what will happen next in a piece of music are violated or fulfilled, respectively. This position has been challenged in recent years with findings from studies using behavioral, physiological, and neurological measures, all of which indicate that listeners respond affectively to music (e.g., Krumhansl 1997; Gagnon and Peretz 2003; Mitterschiffthaler et al. 2007; Witvliet and Vrana 2007). Nonetheless, the debate continues (e.g., Konečni 2008).

The Perception of Family and Register in Musical Tones

August 2010 · 347 Reads

This chapter is about the sounds made by musical instruments and how we perceive them. It explains the basics of musical note perception, such as why a particular instrument plays a specific range of notes; why instruments come in families; and why we hear distinctive differences between members of a given instrument family, even when they are playing the same note. The answers to these questions might, at first, seem obvious; one could say that brass instruments all make the same kind of sound because they are all made of brass, and the different members of the family sound different because they are different sizes. But answers at this level just prompt more questions, such as: What do we mean when we say the members of a family produce the same sound? What is it that is actually the same, and what is it that is different, when different instruments within a family play the same melody on the same notes? To answer these and similar questions, we examine the relationship between the physical variables of musical instruments, such as the length, mass, and tension of a string, and the variables of auditory perception, such as pitch, timbre, and loudness. The discussion reveals that there are three acoustic properties of musical sounds, as they occur in the air, between the instrument and the listener, that are particularly useful in summarizing the effects of the physical properties on the musical tones they produce, and in explaining how these musical tones produce the perceptions that we hear.
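
As an illustration of the kind of physical-to-perceptual mapping the chapter examines, a minimal sketch of the textbook string relation (Mersenne's laws) follows. The numerical values are illustrative, roughly in the range of a violin A string, and are not measurements from the chapter.

```python
# Minimal sketch: the textbook relation between a string's physical variables and its
# fundamental frequency (Mersenne's laws): f0 = (1 / 2L) * sqrt(T / mu).
import math

def string_fundamental_hz(length_m, tension_n, mass_per_length_kg_m):
    return (1.0 / (2.0 * length_m)) * math.sqrt(tension_n / mass_per_length_kg_m)

# Illustrative values (not measured data): 0.325 m vibrating length, 55 N tension,
# 0.65 g/m linear density -> a fundamental a little above concert A (~447 Hz).
print(round(string_fundamental_hz(0.325, 55.0, 0.00065), 1))
```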

Fig. 3.1 (a) Probe tone ratings for a C major context. (b) Probe tone ratings for a C minor context. Values from Krumhansl and Kessler (1982)  
Fig. 3.2 (a) The basic pitch space for the tonic triad chord (I) in C major. (b) The basic pitch space for the chord built on the second scale tone (ii) of F major. Seven elements change between the two basic spaces (See Lerdahl 2001; Lerdahl and Krumhansl 2007)  
Fig. 3.3 Nonadjacent dependencies in Lerdahl's (2001) prolongation structure. The V chord links to the I chord even though the ii chord intervenes. The tension of the ii chord is computed as the distance between the ii chord and the V chord plus the distance between the V chord and the I chord  
A Theory of Tonal Hierarchies in Music

August 2010 · 3,952 Reads

One of the most pervasive structural principles found in music historically and cross-culturally is a hierarchy of tones. Certain tones serve as reference pitches: they are stable, repeated frequently, and emphasized rhythmically, and they appear at structurally important positions in musical phrases. The details of the hierarchies differ across styles and cultures. Variation occurs in the particular intervals formed by pitches in the musical scale and the hierarchical levels assigned to pitches within the scale. This variability suggests that an explanation for how these hierarchies are formed cannot be derived from invariant acoustic facts, such as the harmonic structure (overtones) of complex tones. Rather, the evidence increasingly suggests that these hierarchies are products of cognition and, moreover, that they rely on fundamental psychological principles shared by other domains of perception and cognition.
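
A minimal sketch of how such a tonal hierarchy can be used computationally, in the spirit of Krumhansl-Schmuckler key finding: correlate a piece's pitch-class distribution with the 12 rotations of a probe-tone profile (see Fig. 3.1 above). The major-key profile values are the commonly cited Krumhansl and Kessler (1982) ratings; the input distribution is a toy example, and the chapter itself does not present this code.

```python
# Minimal sketch of a Krumhansl-Schmuckler style key-finding step: correlate a piece's
# pitch-class distribution with the 12 rotations of a probe-tone profile. The major-key
# profile values below are the commonly cited Krumhansl & Kessler (1982) ratings.
import numpy as np

MAJOR_PROFILE = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                          2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def best_major_key(pc_counts):
    """Return the major key whose rotated profile correlates best with the input."""
    counts = np.asarray(pc_counts, dtype=float)
    correlations = [np.corrcoef(counts, np.roll(MAJOR_PROFILE, tonic))[0, 1]
                    for tonic in range(12)]
    return PITCH_CLASSES[int(np.argmax(correlations))]

# Example: a toy pitch-class distribution emphasizing C, E, and G
print(best_major_key([10, 0, 4, 0, 8, 5, 0, 9, 0, 4, 0, 3]))
```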

Neurodynamics of Music

121 Reads

Music is a high-level cognitive capacity, similar in many respects to language (Patel 2007). Like language, music is universal among humans, and musical systems vary among cultures and depend upon learning. But unlike language, music rarely makes reference to the external world. It consists of independent, that is, self-contained, patterns of sound, certain aspects of which are found universally among musical cultures. These two aspects – independence and universality – suggest that general principles of neural dynamics might underlie music perception and musical behavior. Such principles could provide a set of innate constraints that shape human musical behavior and enable children to acquire musical knowledge. This chapter outlines just such a set of principles, explaining key aspects of musical experience directly in terms of nervous system dynamics. At the outset, it may not be obvious that this is possible, but by the end of the chapter it should become clear that a great deal of evidence already supports this view. This chapter examines the evidence that links music perception and behavior to nervous system dynamics and attempts to tie together existing strands of research within a unified theoretical framework.
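
As a toy illustration of one dynamical principle this kind of account builds on, entrainment of an oscillation to a periodic stimulus, the sketch below simulates a single forced phase oscillator. It is purely pedagogical and is not the neural oscillator model developed in the chapter.

```python
# Toy illustration: a phase oscillator entraining to a periodic stimulus
# (a forced Kuramoto-style oscillator). Pedagogical sketch only.
import numpy as np

def simulate_entrainment(own_hz=2.2, stimulus_hz=2.0, coupling=1.5,
                         dt=0.001, seconds=20.0):
    steps = int(seconds / dt)
    phase, stim_phase = 0.0, 0.0
    rel = np.empty(steps)
    for i in range(steps):
        # The oscillator runs at its own frequency but is pulled toward the stimulus phase.
        phase += dt * (2 * np.pi * own_hz + coupling * np.sin(stim_phase - phase))
        stim_phase += dt * 2 * np.pi * stimulus_hz
        rel[i] = (phase - stim_phase) % (2 * np.pi)
    return rel

relative_phase = simulate_entrainment()
# With sufficiently strong coupling, the relative phase settles to a near-constant value.
print(relative_phase[-5:])
```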

Tempo and Rhythm

August 2010 · 15,697 Reads

It is a remarkable feat that listeners develop stable representations for auditory events, given the varied, and often ambiguous, temporal patterning of acoustic energy received by the ears. The focus of this chapter is on empirical and theoretical approaches to tempo and rhythm, two aspects of the temporal patterning of sound that are fundamental to musical communication.

Fig. 1. Mean percent correct across test sessions for children (3–4 years old, 5–6 years old) and adults.  
Fig. 2. Percentage of trials on which each tone was selected as the "special note" across test sessions for children (3–4 years old, 5–6 years old) and adults.
Figure A. Percent correct and best-fitting response distribution (RA = random, PR = pitch region, IM = incorrect mode, CM = correct mode) across test sessions for individual children.  
Learning the "Special Note": Evidence for a Critical Period for Absolute Pitch Acquisition

September 2003 · 2,045 Reads

Children (3–6 years old) and adults were trained for 6 weeks to identify a single tone, C5. Test sessions, held at the end of each week, had participants identify C5 within a set of seven alternative tones. By the third week of training, the identification accuracy of children 5–6 years old surpassed the accuracies of children 3–4 years old and adults. Combined with an analysis of perceptual strategies, the data provide strong support for a critical period for absolute pitch acquisition. (Received July 12, 2003; accepted August 1, 2003.)

FIG 1. The average key profile for musicians in Experiment 1. Capital letters and small letters denote major and minor keys, respectively.
FIG 2. The average final-tone profile for nonmusicians in Experiment 2.
FIG 3. The distribution of responses of musicians and nonmusicians for the 60 tone sequences.
Cues for Key Perception of a Melody: Pitch Set Alone?

December 2005 · 106 Reads

Studies have shown that pitch set, which refers to the set of pitches of the constituent tones of a melody, is a primary cue for perceiving the key of a melody. The present study investigates whether characteristics other than pitch set function as additional cues for key perception. In Experiment 1, we asked 13 musicians with absolute pitch to select keys for 60 stimulus tone sequences consisting of the same pitch set but differing in pitch sequence. In Experiment 2, we asked 31 nonmusicians to select tonal centers for the 60 stimulus tone sequences. Responses made by the musicians and the nonmusicians yielded essentially equivalent results, suggesting that key perception is not unique to musicians. The listeners' responses were limited to a few keys/tones, and some tone sequences elicited agreement among the majority of the listeners for each of the keys/tones. These findings confirm that key perception is not defined by pitch set alone but is also influenced by characteristics other than pitch set, such as pitch sequence.

Music and Motion: How Music-Related Ancillary Body Movements Contribute to the Experience of Music

April 2009 · 429 Reads

Expressive performer movements in musical performances represent implied levels of communication and can contain certain characteristics and meanings of embodied human expressivity. This study investigated the contribution of ancillary body movements to the perception of musical performances. Using kinematic displays of four clarinetists, perceptual experiments were conducted in which participants were asked to rate specific music-related dimensions of the performance and the performer. Additionally, motions of particular body parts, such as movements of the arms and torso, as well as motion amplitudes of the whole body, were manipulated in the kinematic display. It was found that manipulations of arm and torso movements have fewer effects on the observers' ratings of the musicians than manipulations concerning the movement of the whole body. The results suggest that the multimodal experience of musicians is less dependent on the players' particular body motion behaviors than on the players' overall relative motion characteristics.

Preserved Singing in Aphasia: A Case Study of the Efficacy of Melodic Intonation Therapy

September 2006 · 585 Reads

This study examined the efficacy of Melodic Intonation Therapy (MIT) in a male singer (KL) with severe Broca’s aphasia. Thirty novel phrases were allocated to one of three experimental conditions: unrehearsed, rehearsed verbal production (repetition), and rehearsed verbal production with melody (MIT). The results showed superior production of MIT phrases during therapy. Comparison of performance at baseline, 1 week, and 5 weeks after therapy revealed an initial beneficial effect of both types of rehearsal; however, MIT was more durable, facilitating longer-term phrase production. Our findings suggest that MIT facilitated KL’s speech praxis, and that combining melody and speech through rehearsal promoted separate storage and/or access to the phrase representation.

Fig. 1. Examples of rhythm patterns. Quadruple and triple patterns are notated in A and B, respectively, with beat and bar-level metric pulsations depicted below as dots. A nonmetrical pattern is notated in C, with an underlying 49-unit grid (each unit
Fig. 3. Quadruple, triple, and nonmetrical target integrant patterns and (metrically ambiguous) aggregate patterns from 12 rhythm sets. Rhythmic figures are indicated by ellipses in the quadruple integrant pattern from Rhythm Set I. Assignment of rhythm sets to the six lev
Fig. 4. Target and distracter versions of integrant and aggregate test items from a single rhythm set. Brackets indicate regions where the structure of distracter items deviates from target structure.
Fig. 6. The rhythmic canon task used in Experiment 2. The model antecedent/consequent pattern (presented by computer) is reproduced by the participant at a lag interval. DA is required during the section of the canon where computer and the performer overlap (i.e., consequent model accompanies antecedent reproduction).
Musical Meter in Attention to Multipart Rhythm

June 2005 · 2,675 Reads

Performing in musical ensembles can be viewed as a dual task that requires simultaneous attention to a high priority "target" auditory pattern (e.g., a performer's own part) and either (a) another part in the ensemble or (b) the aggregate texture that results when all parts are integrated. The current study tested the hypothesis that metric frameworks (rhythmic schemas) promote the efficient allocation of attentional resources in such multipart musical contexts. Experiment 1 employed a recognition memory paradigm to investigate the effects of attending to metrical versus nonmetrical target patterns upon the perception of aggregate patterns in which they were embedded. Experiment 2 required metrical and nonmetrical target patterns to be reproduced while memorizing different, concurrently presented metrical patterns that were also subsequently reproduced. Both experiments included conditions in which the different patterns within the multipart structure were matched or mismatched in terms of best-fitting meter. Results indicate that dual-task performance was best in matched-metrical conditions, intermediate in mismatched-metrical conditions, and worst in nonmetrical conditions. This suggests that metric frameworks may facilitate complex musical interactions by enabling efficient allocation of attentional resources.

An Unusual Effect in the Canon Per Tonos from J. S. Bach's Musical Offering

December 2001 · 512 Reads

We propose that the phrase repetitions in the canon per tonos from J.S. Bach's Musical Offering are not recognized by listeners as being successively upward. We examine possible causes for this effect and suggest that it may be due to Bach's use of chromatic harmony. To test this hypothesis, we conducted an experiment in which one group of listeners was presented with Bach's canon, while another group was presented with a modified version of the canon in which the harmonies were altered in order to make the upward phrase repetitions more apparent. We found that subjects recognized the ascending pattern in the modified canon with greater ease than they recognized the ascending pattern in Bach's canon. We also consider briefly why Bach may have wished to cause such an effect.

Brain Activity Patterns Suggest Prosodic Influences on Syntactic Parsing in the Comprehension of Spoken Sentences

October 1998 · 24 Reads

We address the question of how syntactic and prosodic factors interact during the comprehension of spoken sentences. Previous studies using event-related brain potential measures have revealed that syntactic phrase structure violations elicit an early left anterior negativity followed by a late posterior positivity (P600). We present recent experimental evidence showing that prosodic information can modulate these components and thus the syntactic processes they reflect. We conclude that the initiation of first-pass parsing processes is affected by the appropriateness of the prosodic realization of the preceding element.

Modeling Pattern Importance in Chopin's Mazurkas

April 2011 · 99 Reads

This study relates various quantifiable characteristics of a musical pattern to subjective assessments of a pattern's salience. Via score analysis and listening, twelve music undergraduates examined excerpts taken from Chopin's mazurkas. They were instructed to rate already-discovered patterns, giving high ratings to patterns that they thought were noticeable and/or important. Each undergraduate rated thirty specified patterns and ninety patterns were examined in total. Twenty-nine quantifiable attributes (some novel but most proposed previously) were determined for each pattern, such as the number of notes a pattern contained. A model useful for relating participants' ratings to the attributes was determined using variable selection and cross-validation. Individual participants were much poorer than the model at predicting the consensus ratings of other participants. While the favoured model contains only three variables, many variables were identified as having some predictive value if considered in isolation. Implications for music psychology, analysis, and information retrieval are discussed.
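
A minimal sketch of the general modelling approach described above, relating pattern attributes to ratings with cross-validated variable selection. Lasso regression is used here as a stand-in for the paper's actual selection procedure, and the data are random placeholders.

```python
# Minimal sketch: relating pattern attributes to salience ratings with cross-validated
# variable selection. Lasso is a stand-in; the paper's procedure may differ, and the
# data below are random placeholders, not the study's ratings.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 29))          # 90 patterns x 29 quantified attributes
y = X[:, 0] * 0.8 - X[:, 5] * 0.5 + rng.normal(scale=0.3, size=90)  # toy consensus ratings

model = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(model.coef_)  # attributes retained by the cross-validated fit
print("selected attribute indices:", selected)
```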

Analysis-By-Synthesis of Timbre, Timing, and Dynamics in Expressive Clarinet Performance

February 2011 · 287 Reads

In a previous study, mechanical and expressive clarinet performances of Bach's Suite no. II and Mozart's Quintet for Clarinet and Strings were analysed to determine whether some acoustical correlates of timbre (e.g., Spectral Centroid), timing (Intertone Onset Interval) and dynamics (Root Mean Square envelope) showed significant differences depending on the expressive intention of the performer. In the present companion study, we investigate the effects of these acoustical parameters on listeners' preferences. An analysis-by-synthesis approach was used to transform previously recorded clarinet performances by reducing the expressive deviations from the Spectral Centroid, the Intertone Onset Interval and the acoustical energy. Twenty skilled musicians were asked to select which version they preferred in a paired-comparison task. The results of statistical analyses show that the removal of the Spectral Centroid variations resulted in the greatest loss of musical preference.
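
A minimal sketch of how a Spectral Centroid, the timbre correlate analysed in the study, can be computed for one short audio frame. The frame length and windowing are illustrative choices, not the authors' exact analysis settings.

```python
# Minimal sketch: the spectral centroid of one short audio frame, a common timbre correlate.
# Frame length and windowing choices are illustrative.
import numpy as np

def spectral_centroid_hz(frame, sample_rate_hz):
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate_hz)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

# Example: a frame containing 440 Hz plus a weaker 880 Hz partial
fs = 44100
t = np.arange(2048) / fs
frame = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)
print(round(spectral_centroid_hz(frame, fs), 1))
```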
