Music Perception: an interdisciplinary journal

Published by University of California Press

Online ISSN: 1533-8312 · Print ISSN: 0730-7829

Disciplines: Psychology, Multidisciplinary

Top-read articles

55 reads in the past 30 days

Figures: conceptual model; three phases of research; research design including iterations; development of participants' mean excitement at four measuring points during the concert (based on 11 participants, as two did not receive all notifications because of connectivity problems); development of all participants' valence at four measuring points in relation to the performing artist, the audience/crowd, the place/physical setting, the social company, and the participants themselves (based on 11 participants).
Constructing the live experience at pop music concerts: a phenomenological, multi-method study of concertgoers

September 2023 · 537 Reads · 1 Citation

46 reads in the past 30 days

The Perceptual and Emotional Consequences of Articulation in Music

February 2023 · 361 Reads · 12 Citations

Aims and scope


Music Perception: an interdisciplinary journal publishes theory-driven basic and applied science, empirical reports, theoretical papers, and reviews. The journal's scope concerns the perception and cognition of music in composing, improvising, playing, performing, recalling, recognizing, teaching, learning, and responding to music through single or multiple modalities.

Recent articles


Cultural Accounts of Consonance Perception. A Lakatosian Approach to Save Pythagoras

December 2024 · 32 Reads

In 1945, Norman Cazden published a groundbreaking article in the literature on consonance perception. In this seminal work, Cazden combined historical, musicological, and theoretical arguments to assert that the Pythagorean approach to consonance, based on integer ratios, lacked substantiation. Recent empirical evidence has bolstered Cazden’s perspective, indicating that the perception of consonance is primarily shaped by culture rather than by arithmetical ratios. Nevertheless, some scholars have drawn attention to other evidence from the bio-musicological literature that supports the Pythagorean hypothesis. Consequently, the current debate on consonance tends to center around the nature vs. culture dichotomy. In this paper, I endeavor to demonstrate that many of the “cultural” arguments can coexist with the Pythagorean hypothesis if we adopt a more epistemologically suitable framework, as proposed by Imre Lakatos’s philosophy of science. To achieve this, I conduct an in-depth analysis of Cazden’s arguments, along with examining both historical and contemporary reinterpretations of them. Then, I apply Lakatos’s concept of “research programme” to the case study of consonance, highlighting various research avenues that have drawn inspiration from the Pythagorean hypothesis and have been successfully pursued. I conclude by claiming that the Pythagorean account can be regarded, in Lakatosian terms, as a progressive research programme.


Effects of Melodic Contour on Sung Speech Intelligibility in Noisy Environments in Musicians and Nonmusicians

December 2024 · 10 Reads

Using songs to facilitate speech processing in noisy environments seems appealing and practical. However, current research suggests otherwise, possibly due to the simplicity of sung speech contours. This study investigates the effects of contour tonality on sung speech intelligibility in noisy environments. A cohort of 20 trained musicians and 20 nonmusicians was tested on the intelligibility of Mandarin sentences sung on tonal, atonal, and fixed-pitch melodies or normally spoken under three signal-to-noise ratios (SNRs: −3, −6, and −9 dB). Perceptual musical skills related to speech-in-noise perception were also assessed. Results showed that overall speech-in-noise intelligibility decreased as the SNR decreased, with spoken speech being more intelligible than sung speech. Sung speech intelligibility was higher for fixed- than for variable-pitch contours, with no difference between tonal and atonal melodies. No musician advantage was observed for spoken speech. Musicians, nonetheless, outperformed nonmusicians in identifying sung speech across all melodic contour types. Moreover, the musician sung speech advantage correlated with enhanced music perception abilities on pitch and accent. These results suggest that musicians have an advantage in sung speech in noisy environments. However, melody tonality provided no additional benefits, suggesting that imposing tonality on sung speech does not improve speech perception in noisy environments.


Figure: scatterplot depicting the relationship between energy and tension arousal ratings by stimulus set and emotion.
The Effect of Aural and Visual Presentation Formats on Categorical and Dimensional Judgements of Emotion for Sung and Spoken Expressive Performances

December 2024 · 17 Reads

The purpose of this study was to investigate the effects of visual information on the perception of emotion in three contexts: a short spoken phrase, an analogous short melody, and a longer melody with greater complexity of pitch and rhythm. Participants without substantial formal music training were assigned to an audio-only, visual-only, or audiovisual presentation mode; all observed musicians and actors who sang melodies or spoke phrases intending to communicate happiness, sadness, or anger. Participants rated these performances for positive and negative valence, energy arousal, and tension arousal. They were also asked to select the discrete emotion they perceived, and to rate how certain they were about this selection. Participants perceived energy arousal and tension arousal as fairly distinct features of the performances, and the performances were perceived as having clear dimensional characteristics based on intended emotion (e.g., happy performances had the highest positive valence, lowest negative valence, etc.). Regarding presentation, participants in the audiovisual mode categorized emotional intentions with greater accuracy than those in the other modes. Although ratings of negative valence, energy, and tension were more extreme for actors than for musicians, participants were most in agreement about the valence of the actors' sung performances.


Musical Preferences and Personality Traits

December 2024 · 27 Reads

Music resonates differently in each person according to culture and individual characteristics. Individual differences such as personality traits may play a key role in modulating the emotional response to music, and yet relatively little is known about the underlying structure of Italian musical preferences. On this basis, the current research presents a replication of the structure of the MUSIC Model and examines its relationships with personality traits. A sample of 2,104 participants completed an online survey. The study comprises three parts: 1) an exploratory factor analysis, 2) a confirmatory factor analysis to verify the emergent structure, and 3) structural equation modeling to investigate the relationships between music and personality. Findings confirmed a latent five-factor structure that was labeled as follows: 1) Mellow: classical, gospel, opera, and religious; 2) Unpretentious: alternative, bluegrass, and new age; 3) Sophisticated: jazz, blues, soul R&B; 4) Intense: heavy metal, punk, and rock; and 5) Contemporary: dance electronica, world (international), pop, reggae, and rap hip-hop. Overall, findings suggest that musical preferences are associated with certain personality traits. These results could be a promising starting point for generating new theories on using music in therapeutic contexts to enhance well-being.


Figures: estimates of tempo and consistency in typically developing (TD), mixed DD, and pure DD participants across three rhythmic conditions (Comfortable, Fast, Free); density distribution of rhythm ratios per group, with on- and off-integer ratio ranges highlighted; estimates of on- and off-integer ratios in interaction with task Range or Ratio.
Testing Rhythmic Abilities in Developmental Dyslexia

December 2024 · 21 Reads · Eline A. Smit · [...]

Rhythm processing deficits in developmental dyslexia (DD) span different rhythmic subcomponents and are difficult to capture using one experimental paradigm. How are dyslexic deficits related to motor periodicity, i.e., the execution of repetitive actions while internally generating rhythm? The present experiment investigated rhythm production in DD by means of an unprompted tapping paradigm, testing the hypothesis that the ability to internally generate rhythmic patterns may be impaired. The tasks involved tapping isochronous sequences at a comfortable and a fast tempo and tapping a free rhythm. Forty adolescents diagnosed with DD (with or without comorbid dyscalculia) participated, along with thirty typically developing control participants. A background questionnaire gathered information about participants' prior music training. The data show that both dyslexic groups tapped faster than the typically developing participants at the comfortable tempo. We found no statistical differences between groups in fast isochronous tapping or in the free rhythm production tasks, irrespective of music training or the presence of dyscalculia. All participants favored regular rhythms when tapping a free rhythm, with a notable preference for isochrony. These results have theoretical and clinical implications for rhythm deficit hypotheses of DD.


Figure: interaction of low-frequency amplitude (LFA) and syncopation on groove; error bars represent confidence intervals.
Move Your Body! Low-frequency Amplitude and Syncopation Increase Groove Perception in House Music

Studies demonstrate that low frequencies and syncopation can enhance groove—the pleasurable urge to move to music. This study examined the simultaneous effect of low-frequency amplitude and syncopation on groove by manipulating basslines in house music, a subgenre of electronic dance music (EDM). One hundred and seventy-nine participants listened to 20 novel house music clips in which basslines were manipulated across two levels of low-frequency amplitude and syncopation. Music and dance-related experience, as well as genre preferences, were also assessed. Groove perception was most pronounced for house tracks combining high low-frequency amplitude (LFA) and high syncopation, and least pronounced for tracks with low LFA, irrespective of syncopation. Exploratory correlation analysis revealed that groove perception is influenced by listeners’ preferences for energetic and rhythmic music styles, their urge to dance, and their propensity to experience an emotional connection to music. Our findings reveal that the urge to move when listening to music is shaped by the interplay of rhythmic complexity and sonic texture, and is influenced by dance and music experiences and preferences.


Formalized Harmony

September 2024 · 18 Reads

Different theories have been proposed to explain the perception of a root in a pitch set, with Terhardt’s (1984) virtual pitch theory being the one generally considered most plausible, despite none of them having been confirmed empirically. Understanding the perceived root phenomenon is an essential component in explaining the origin of the major and minor triads as the most fundamental chords in twelve-tone equal temperament or 5-limit just intonation, as well as in proposing new chords for new tunings. This music perception study, in which 42 participants rated chords in 10 different tunings, empirically establishes harmonic dualism, negative harmony, and Barlow’s harmonicity as the key factors for the perception of the harmonic root and the origin of the major and minor triads, and proposes a model as a basis for a tuning-agnostic—trans-spatial—music theory.


Equal Temperament and Interval Sizes of Ascending and Descending Intervals

September 2024 · 23 Reads

Stringed instruments are known to be flexible regarding intonation. This suppleness is arguably at the root of the many discussions concerning tuning and intonation that have taken place in the past. Historical authors have, depending on the era, advocated for the use of just intonation, as well as tempered and Pythagorean systems, thus deepening the controversy regarding this issue. Similarly, contemporary studies on intonation in the realm of stringed instruments have reached a variety of conclusions, despite observing an apparent adherence to the Pythagorean system and, to a lesser extent, equal temperament. Given this discussion, this paper explored to what extent performances of 53 violin students conform to equal temperament (ET), as well as to what extent alignment or misalignment with ET is a product of chance. In addition, we asked whether the direction of an interval has an impact on its size, as well as whether there is any association between the sizes of different ascending and descending intervals. We also asked whether the academic year of participants had any impact on the size of intervals. Results suggest that the intonation of participants does not match ET, and that differences in interval size relative to ET occur for reasons other than chance; namely, in the case of ascending and descending minor seconds, descending major seconds, descending augmented seconds, ascending and descending minor thirds, descending major thirds, and descending diminished fourths. Furthermore, results not only show that interval direction may have an impact on interval size, but also that there is an association between the sizes of different intervals. The academic year of participants seemed to have an impact on ascending minor thirds.



Figures: number of thoughts reported by thought type and genre; words reported at least 15 times in thought descriptions for each genre (classical, electronic, pop/rock), with word size corresponding to frequency of appearance.
Music-Evoked Thoughts

September 2024 · 133 Reads · 1 Citation

Music listening can evoke a range of extra-musical thoughts, from colors and smells to autobiographical memories and fictional stories. We investigated music-evoked thoughts as an overarching category, to examine how the music's genre and emotional expression, as well as familiarity with the style and liking of individual excerpts, predicted the occurrence, type, novelty, and valence of thoughts. We selected 24 unfamiliar, instrumental music excerpts evenly distributed across three genres (classical, electronic, pop/rock) and two levels of expressed valence (positive, negative) and arousal (high, low). UK participants (N = 148, M age = 28.68) heard these 30-second excerpts, described any thoughts that had occurred while listening, and rated various features of the thoughts and music. The occurrence and type of thoughts varied across genres, with classical and electronic excerpts evoking more thoughts than pop/rock excerpts. Classical excerpts evoked more music-related thoughts, fictional stories, and media-related memories, while electronic music evoked more abstract visual images than the other genres. Positively valenced music and more liked excerpts elicited more positive thought content. Liking and familiarity with a style also increased thought occurrence, while familiarity decreased the novelty of thought content. These findings have key implications for understanding how music impacts imagination and creative processes.


Musician Advantage for Segregation of Competing Speech in Native Tonal Language Speakers

September 2024 · 34 Reads

The aim of this study was to replicate previous English-language musician advantage studies in Mandarin-speaking musicians and nonmusicians. Segregation of competing speech, melodic pitch perception, and spectro-temporal pattern perception were measured in normal-hearing native Mandarin-speaking musicians and nonmusicians. Speech recognition thresholds were measured in the presence of two-talker masker speech. The masker sex was either the same as or different from the target; target and masker speech were either co-located or spatially separated. Melodic pitch perception was tested using a melodic contour identification task. Spectro-temporal resolution was measured using a modified spectral ripple detection task. We hypothesized that, given musician advantages in pitch perception, musician effects would be larger when the target and masker sex was the same than when different. For all tests, performance was significantly better for musicians than for nonmusicians. Contrary to our expectation, larger musician effects were observed for segregation of competing speech when the target and masker sex was different. The results show that musician effects observed for non-tonal language speakers extend to tonal language speakers. The data also suggest that musician effects may depend on the difficulty of the listening task and may be reduced when listening tasks are too easy or too difficult.


Trade-offs in Coordination Strategies for Duet Jazz Performances Subject to Network Delay and Jitter

September 2024 · 26 Reads · 1 Citation

Coordination between participants is a necessary foundation for successful human interaction. This is especially true in group musical performances, where action must often be temporally coordinated between the members of an ensemble for their performance to be effective. Networked mediation can disrupt this coordination process by introducing a delay between when a musical sound is produced and when it is received. This can result in significant deteriorations in synchrony and stability between performers. Here we show that five duos of professional jazz musicians adopt diverse strategies when confronted by the difficulties of coordinating performances over a network—difficulties that are not exclusive to networked performance but are also present in other situations (such as when coordinating performances over large physical spaces). What appear to be two alternatives involve: 1) one musician being led by the other, tracking the timings of the leader’s performance; or 2) both musicians accommodating to each other, mutually adapting their timing. During networked performance, these two strategies favor different sides of the trade-off between, respectively, tempo synchrony and stability; in the absence of delay, both achieve similar outcomes. Our research highlights how remoteness presents new complexities and challenges to successful interaction.



Brain Responses to Real and Imagined Interpretation of Tonal Versus Atonal Music

June 2024 · 106 Reads · 1 Citation

Professional musicians have been teaching/learning/interpreting Western classical tonal music for longer than atonal music. This may be reflected in their brain plasticity and playing efficiency. To test this idea, EEG connectivity networks (EEG-CNs) of expert cellists at rest and during real and imagined musical interpretation of tonal and atonal excerpts were analyzed. Graphs and connectomes were constructed as models of EEG-CNs, using functional connectivity measurements of EEG phase synchronization in different frequency bands. Tonal and atonal interpretation resulted in a global desynchronization/dysconnectivity versus resting—irrespective of frequency bands—particularly during imagined-interpretation. During the latter, the normalized local information-transfer efficiency (NLE) of graph-EEG-CN's small-world structure at rest increased significantly during both tonal and atonal interpretation, and more significantly during atonal-interpretation. Regional results from the graphs/connectomes supported previous findings, but only in certain EEG frequency bands. During imagined-interpretation, the number of disconnected regions and subnetworks, as well as regions with higher NLE, were greater in atonal-interpretation than in tonal-interpretation for delta/theta/gamma-EEG-CNs. The opposite was true during real-interpretation, specifically limited to alpha-EEG-CN. Our EEG-CN experimental paradigm revealed perceptual differences in musicians' brains during tonal and atonal interpretations, particularly during imagined-interpretation, potentially due to differences in cognitive roots and brain plasticity for tonal and atonal music, which may affect the musicians' interpretation.


Figures: mean accuracies of pitched musicians, unpitched musicians, and nonmusicians in the Thai tone discrimination task (collapsed across tone pairs and for individual tone pairs) and in the sequence recall task; error bars denote 95% confidence intervals.
Musical Advantage in Lexical Tone Perception Hinges on Musical Instrument

June 2024 · 211 Reads

Different musical instruments have different pitch processing demands. However, correlational studies have seldom considered the role of musical instruments in music-to-language transfer. Addressing this research gap could contribute to a nuanced understanding of music-to-language transfer. To this end, we investigated whether pitched musicians had a unique musical advantage in lexical tone perception relative to unpitched musicians and nonmusicians. Specifically, we compared Cantonese pitched musicians, unpitched musicians, and nonmusicians on Thai tone discrimination and sequence recall. In the Thai tone discrimination task, the pitched musicians outperformed the unpitched musicians and the nonmusicians. Moreover, the unpitched musicians and the nonmusicians performed similarly. In the Thai tone sequence recall task, both pitched and unpitched musicians recalled level tone sequences more accurately than the nonmusicians, but the pitched musicians showed the largest musical advantage. However, the three groups recalled contour tone sequences with similar accuracy. Collectively, the pitched musicians had a unique musical advantage in lexical tone discrimination and the largest musical advantage in level tone sequence recall. From a theoretical perspective, this study offers correlational evidence for the Precision element of the OPERA hypothesis. The choice of musical instrument may matter for music-to-language transfer in lexical tone discrimination and level tone sequence recall.


Figures: EEG montage with frontal and parietal regions of interest (ROIs); PASAT total number correct and mean Stroop errors on mixed conditions; resting-state frontal theta and parietal alpha power; frontal theta and parietal alpha power during the Improvisation Continuation Task.
Jazz Piano Training Modulates Neural Oscillations and Executive Functions in Older Adults

Musical improvisation is one of the most complex forms of creative behavior, often associated with increased executive functions. However, most traditional piano programs do not include improvisation skills. This research examined the effects of music improvisation in a novel jazz piano training intervention on executive functions and neural oscillatory activity in healthy older adults. Forty adults were recruited and randomly assigned to either jazz piano training (n = 20, 10 females) or a control group (n = 20, 13 females). The jazz piano training program included aural skills, basic technique, improvisation, and repertoire, with 30 hours of training over 10 days. All participants completed a battery of standardized cognitive measures (i.e., processing speed, inhibition, verbal fluency) at pre- and post-testing, and neurophysiological data were recorded during resting state and a musical improvisation task using electroencephalography (EEG). Results showed significantly enhanced processing speed and inhibition performance for those who received jazz piano training as compared to controls. EEG data revealed changes in frontal theta power during improvisation in the training group compared to controls. Learning to improvise may contribute to cognitive performance.


Figures: first blocks (eight trials) in the blocked and interleaved conditions from Experiment 1, in which participants heard a phrase and had two seconds to respond on each trial (the same phrase on all eight trials in the blocked condition; a different phrase on each trial, with the same eight phrases per block, in the interleaved condition); modeled and average changes in musicality ratings across repetitions by exposure condition in Experiments 1 and 2, rated from 0 ("sounds exactly like speech") to 9 ("sounds exactly like song").
Rapid Learning and Long-term Memory in the Speech-to-song Illusion

June 2024 · 7 Reads · 1 Citation

The speech-to-song illusion is a perceptual transformation in which a spoken phrase initially heard as speech begins to sound like song across repetitions. In two experiments, we tested whether phrase-specific learning and memory processes engaged by repetition contribute to the illusion. In Experiment 1, participants heard 16 phrases across two conditions. In both conditions, participants heard eight repetitions of each phrase and rated their experience after each repetition using a 10-point scale from “sounds like speech” to “sounds like song.” The conditions differed in whether the repetitions were heard consecutively or interleaved such that participants were exposed to other phrases between each repetition. The illusion was strongest when exposures to phrases happened consecutively, but phrases were still rated as more song-like after interleaved exposures. In Experiment 2, participants heard eight consecutive repetitions of each of eight phrases. Seven days later, participants were exposed to eight consecutive repetitions of the eight phrases heard previously as well as eight novel phrases. The illusion was preserved across a delay of one week: familiar phrases were rated as more song-like in session two than novel phrases. The results provide evidence for the role of rapid phrase-specific learning and long-term memory in the speech-to-song illusion.


Artificial Intelligence and Musicking

June 2024 · 32 Reads · 2 Citations

Artificial intelligence (AI) deployed for customer relationship management (CRM), digital rights management (DRM), content recommendation, and content generation challenges longstanding truths about listening to and making music. CRM uses music to surveil audiences, removes decision-making responsibilities from consumers, and alters relationships among listeners, artists, and music. DRM overprotects copyrighted content by subverting the Fair Use Doctrine and privatizing the Public Domain, thereby restricting human creativity. Generative AI, often trained on music misappropriated by developers, renders novel music that seemingly represents neither the artistry present in the training data nor the handiwork of the AI's user. AI music, as such, appears to be produced through AI cognition, resulting in what some have called "machine folk" and contributing to a "culture in code." A philosophical analysis of these relationships is required to fully understand how AI impacts music, artists, and audiences. Using metasynthesis and grounded theory, this study considers physical reductionism, metaphysical nihilism, existentialism, and modernity to describe the quiddity of AI's role in the music ecosystem. Concluding thoughts call researchers and educators to act on philosophical and ethical discussions of AI and promote continued research, public education, and democratic/lay intervention to ensure ethical outcomes in the AI music space.


Music Exposure and Maternal Musicality Predict Vocabulary Development in Children with Cochlear Implants

April 2024 · 45 Reads · 2 Citations

Children with cochlear implants (CIs) exhibit large individual differences in vocabulary outcomes. We hypothesized that understudied sources of variance are the amount of music engagement and exposure, and maternal musicality. Additionally, we explored whether objective measures of music exposure captured from the CI data logs and parent reports about music engagement provide converging and/or complementary evidence, and whether these correlate with maternal musicality. Sixteen children with CIs (M age = 16.7 months, SD = 7.7, range = 9.6–32.9) were tested before implantation and three, six, and 12 months post-CI activation. Music exposure throughout the first year post-activation was extracted from the CI data logs. Children's vocabulary, home music engagement, and maternal musicality were assessed using parent reports. Analyses revealed relatively low home music engagement and maternal musicality. Nonetheless, positive effects emerged for music exposure on children's early receptive and expressive vocabulary and for maternal musicality on expressive vocabulary three months post-activation. Results underline the importance of combining automatic measures and parent reports to understand children's acoustic environment and suggest that environmental music factors may affect early vocabulary acquisition in children with CIs. The presence of these effects despite limited music exposure and skills further motivates the involvement of children with CIs and their parents in music intervention programs.


Impact of Native Language on Musical Working Memory

April 2024 · 77 Reads

Music and language share similar sound features and cognitive processes, which may lead to bidirectional transfer effects of training in one domain on the processing in the other domain. We investigated the impact of native language on musical working memory by comparing nontonal language (Finnish) speakers and tonal language (Chinese) speakers. For both language backgrounds, musicians and nonmusicians were recruited. In an experimenter-monitored online paradigm, participants performed a forward-memory task measuring the maintenance of musical sequences, and a backward-memory task measuring the manipulation of musical sequences. We found that maintenance of music sequences was facilitated in Chinese participants compared with Finnish participants, with musicians outperforming nonmusicians. However, performance in the backward-memory task did not differ between Chinese and Finnish participants, independently of music expertise. The presence or absence of tonal structure in the musical sequences did not affect the advantage of Chinese over Finnish participants in either maintenance or manipulation of the musical sequences. Overall, these findings suggest that Mandarin Chinese speakers have facilitated maintenance of musical sounds, compared with Finnish speakers, regardless of musical expertise and the presence of tonal structure. Our study furthers the understanding of language-to-music transfer and provides insights into cross-cultural differences in music cognition.


The Vocal Advantage in Memory for Melodies is Based on Contour

April 2024

·

63 Reads

Recognition memory is better for vocal melodies than instrumental melodies. Here we examine whether this vocal advantage extends to recall. Thirty-one violinists learned four melodies (28 notes, 16 s), two produced by voice and two by violin. Their task was to listen to each melody and then immediately sing (for vocal stimuli) or play back on violin (for violin stimuli) the melody. Recall of the melody was tested in ten consecutive trials. After a brief delay (∼10 min), participants were asked to perform the four melodies from memory. Each performance was scored based on the accuracy of two measures: (1) intervals and (2) contour. The results revealed an advantage for vocal over violin melodies in immediate recall of the melodic contour and, after the delay, a reverse pattern with an advantage for violin over vocal melodies. The findings are consistent with the hypothesis that the voice facilitates learning of melodies and further show that the vocal advantage in recall is short-lived and based on contour.



Italian Validation of the Barcelona Music Reward Questionnaire

March 2024

·

38 Reads

·

1 Citation

Music experience is considered a highly pleasant activity, with musical stimuli evoking emotions and activating reward brain circuits. The Barcelona Music Reward Questionnaire (BMRQ) evaluates the main facets of music experience that could describe individual differences in music-associated reward. In this work, we validated an Italian version of the BMRQ, since only Spanish, English, and French versions are currently available. The original version was translated into Italian, then adapted and validated following the methodology proposed in the original BMRQ. The questionnaire was administered to adult participants through an online survey. A total of 1,218 participants were considered for the analysis. Our primary factor analysis showed an overall good structural validity (goodness of fit index > .999). The reliability estimates of each facet varied from .839 to .930, with some items showing higher factor loadings for a different facet than the expected one. Similar findings resulted from an additional analysis performed on a restricted sample (age ≤ 30 years and upper secondary education level), more comparable with the samples of the other studies. The Italian BMRQ appears overall valid and reliable despite some differences from the previous studies, suggesting its applicability might be worth testing with different populations and contexts (e.g., clinical contexts).


The Processes and Relationships in Composers Scale: Construction and Psychometric Analysis of a New Self-Assessment Inventory

February 2024

·

95 Reads

We introduce a new inventory, the Processes and Relationships in Composers Scale (PRCS), developed to self-assess creative and social factors inherent in music composition. The PRCS consists of two separate scales of 12 items each: the Composing Processes Scale (CPS) and the Social Relationship Scale (SRS). An exploratory factor analysis revealed that the CPS has a single-factor structure, while the SRS relies on three main factors: loneliness, support, and friendship. The total score of the CPS was found to be highly reliable, whereas the SRS showed lower reliability. The PRCS can contribute new insights into how creative and social processes can be self-assessed by music composers with different backgrounds and levels of musical expertise. Our work aims to deepen understanding of the relationship between musical creativity and social life, contributing to existing scholarship that has explored this connection in musical activities specifically.


Saxophone Players’ Self-Perceptions About Body Movement in Music Performing and Learning: An Interview Study

February 2024

·

67 Reads

·

2 Citations

Quantitative studies demonstrate that performers’ gestures reflect technical, communicative, and expressive aspects of musical works in solo and group performances. However, musicians’ perspectives on and experiences of body movement are little understood. To address this gap, we interviewed 20 professional and pre-professional saxophone players with the aims of: (1) identifying factors influencing body movement; (2) understanding how body movement is approached in instrumental pedagogy contexts; and (3) collecting ideas about the impact of movements on performance quality. The qualitative thematic analysis revealed that musical features (i.e., musical character, dynamics) constitute the preponderant factor influencing musicians’ body behavior, followed by previous experiences and physical and psychological characteristics. In the pedagogical dimension, participants reported an increased awareness of the importance of body movement compared to their former tutors, describing in-class implementation exercises and promoting reflection with their students. Still, a lack of saxophone-specific scientific knowledge was highlighted. Regarding performance quality, participants discussed the role of movement in facilitating performers’ execution (i.e., sound emission, rhythmical perception) and enhancing the audience’s experience. We provide insights into how professionals conceive, practice, and teach motor and expressive skills, which can inspire research in movement science and embodied instrumental pedagogy.


Journal metrics

Journal Impact Factor™: 1.3 (2023)