Effect of harmonic relatedness on the detection of temporal asynchronies

BARBARA TILLMANN and JAMSHED J. BHARUCHA
Dartmouth College, Hanover, New Hampshire

Perception & Psychophysics, 2002, 64 (4), 640-649. Copyright 2002 Psychonomic Society, Inc.

Speeded intonation judgments of a target chord are facilitated when the chord is preceded by a harmonically related prime chord. The present study extends harmonic priming to temporal asynchrony judgments. In both tasks, the normative target chords (consonant, synchronous) are processed more quickly and accurately after a harmonically related prime than after a harmonically unrelated prime. However, the influence of harmonic context on sensitivity (d′) differs between the two tasks: d′ was higher in the related context for intonation judgments but was higher in the unrelated context for asynchrony judgments. A neural net model of tonal knowledge activation provides an explanatory framework for both the facilitation in the related contexts and the sensitivity differences between the tasks.

Author Note. This research was supported in part by a grant to B.T. from the Deutscher Akademischer Austauschdienst (DAAD) and by grants to J.J.B. from the National Science Foundation (SBR-9601287) and NIH (2P50 NS17778-18). Correspondence concerning this article should be addressed to B. Tillmann, Université Claude Bernard Lyon I, Laboratoire Sciences & Systèmes Sensoriels, Equipe Audition CNRS-UMR 5020, 50 Avenue Tony Garnier, F-69366 Lyon cedex 07, France (e-mail: barbara.tillmann@olfac.univ-lyon1.fr).
A context influences the processing of an upcoming
event. The processing of a target word is facilitated if the
word is preceded by a semantically related word (Meyer
& Schvaneveldt, 1971) or if it forms a semantically and
syntactically congruent ending to a sentence (Stanovich
& West, 1979). In music, the processing of a target chord
is facilitated if the chord is preceded by a harmonically
related context that can be short (one chord) or long
(chord sequences). Harmonic priming has been studied
by using judgments of the sensory dissonance or intona-
tion of the target chord. In the present study, harmonic
priming is investigated by using temporal asynchrony
judgments to examine the generality of the priming phe-
nomena across tasks.
The relationships between prime and target that have
been manipulated in harmonic-priming studies are based
on the regularities inherent in the Western tonal system.
A brief review of the major regularities will allow us to
present the underlying rationale of harmonic-priming re-
search. In Western tonal music, a set of 12 chromatic
tones is combined in a highly constrained way, yielding
a system of relationships. The 12 tones are organized in
subsets of 7 tones, called diatonic scales. On each degree
of a scale, chords (combinations of 3 tones) can be con-
structed. The chords built on the first, fifth, and fourth
scale degrees (referred to as the tonic, dominant, and
subdominant chords, respectively) are more frequent in
tonal musical pieces and have more central functions
than do chords built on other degrees (Francès, 1958;
Krumhansl, 1990). Identical chords may occur in a vari-
ety of different keys. Chords sharing a parent key are said
to be harmonically related (such as the chords C-major
and B♭-major, which both belong to the key of F-major).
Harmonically related chords are those that are more fre-
quently associated in tonal musical pieces than are other
chords. Through mere exposure, listeners of Western
music have acquired implicit knowledge of the specific
constraints and conventional relationships among tones,
chords, and keys (Tillmann, Bharucha, & Bigand, 2000).
A musical context activates listeners’ tonal knowledge so
that they expect harmonically related events more than
they expect harmonically unrelated events. These expec-
tations then influence the processing of further events, as
is shown by harmonic-priming studies with short and
long contexts.
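The key-membership relations described above can be made concrete in a small sketch (illustrative code, not part of the original study): two major chords count as harmonically related when at least one major key contains both of them.

```python
# Illustrative sketch (not from the paper): two major chords are treated as
# harmonically related when at least one major key contains both of them.

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone pattern of a diatonic major scale

def scale_tones(tonic):
    root = NOTES.index(tonic)
    return {(root + step) % 12 for step in MAJOR_SCALE}

def parent_keys(chord_root):
    """Major keys whose scale contains the major triad root + 4 + 7 semitones."""
    triad = {(NOTES.index(chord_root) + i) % 12 for i in (0, 4, 7)}
    return {key for key in NOTES if triad <= scale_tones(key)}

def related(a, b):
    return bool(parent_keys(a) & parent_keys(b))
```

For instance, `parent_keys("C")` yields the keys C, F, and G, so `related("C", "A#")` is true (the C-major/B♭-major pair share the parent key F-major), while `related("C", "F#")` is false, matching the harmonically distant pair used in the priming studies below.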
In harmonic-priming studies with short contexts (Bha-
rucha & Stoeckig, 1986, 1987; Tekman & Bharucha, 1992,
1998), a single chord defined the prime and was followed
by the target chord. The two chords were either related
(shared a common key) or unrelated (did not share a
common key) harmonically. For example, when the prime
was a C-major chord, a B♭-major chord was a related tar-
get, and an F♯-major chord was an unrelated target. On
half the trials, the target chord was slightly mistuned. Par-
ticipants were asked to make a speeded intonation judg-
ment—that is, to decide as quickly and accurately as pos-
sible whether the target chord was in tune. The priming
effect was shown by (1) a bias to judge target chords to
be in tune when they were related to the prime and
(2) shorter response times for in-tune targets when they
were related to the prime and for out-of-tune targets when
they were unrelated to it. Thus, a single chord can gen-
erate expectancies for related chords to follow, resulting
in greater perceived consonance and faster processing
for expected chords. Harmonic-priming effects have been
extended to longer prime contexts (Bigand, Madurell,
Tillmann, & Pineau, 1999; Bigand & Pineau, 1997;
Tillmann, Bigand, & Pineau, 1998). Bigand and Pineau
manipulated the global context of eight-chord sequences in
order to change the expectations for the last chord. The
penultimate chord (the local context) was held constant in
the context sequences to control for local psychoacoustic
influences. Participants were faster and more accurate in
their intonation judgment of the last chord when it was
strongly related (the tonic chord of the context key) than
when it was less related (the subdominant chord of the
context key). These results suggest that harmonic prim-
ing involves higher level harmonic structures and does
not occur only from chord to chord.
Several control conditions suggest that harmonic con-
text effects are unlikely to be caused by sensory priming.
In short contexts, harmonic priming occurs even if the
target chord does not share tones (or even harmonics)
with the prime or is preceded by white noise (Bharucha
& Stoeckig, 1987; Tekman & Bharucha, 1992). In long
contexts, harmonic priming persists even if the local con-
text (up to six chords; Bigand et al., 1999, Experiment 3)
is held constant or if the target chord itself never occurs
in the preceding context (Bigand, Poulin, Tillmann,
D’Adamo, & Madurell, 2001). Harmonic priming thus
occurs at a cognitive level, on the basis of the activation
of listeners’ implicit knowledge of the Western tonal sys-
tem, with its underlying regularities and the hierarchies
of importance of tonal events.
The observed harmonic-priming effects and their in-
terpretation in terms of knowledge activation have al-
ways involved a judgment of the sensory dissonance of
the target chord. For half of the trials, target chords were
rendered either out of tune (Bharucha & Stoeckig, 1986,
1987; Tekman & Bharucha, 1992, 1998) or dissonant by
adding a nondiatonic tone (Bigand et al., 1999; Bigand
& Pineau, 1997; Tillmann et al., 1998). Participants had
to decide if the target chord was in tune (consonant) or
out of tune (dissonant). It might be argued that the con-
textual dissonance created by the harmonically unrelated
context creates a confound with the mistuning or disso-
nance of the target chord itself. In other words, the ten-
dency to judge the target to be consonant and to respond
more quickly in a harmonically related context may
simply be a confounding of these two forms of disso-
nance, rather than being indicative of an underlying
priming process in terms of knowledge activation (see
Terhardt, 1984, for two kinds of dissonance). The goal
of our study was to explore the generality of harmonic
priming by extending it to a task based on a different
perceptual judgment of the target chord: the detection
of temporal asynchrony.
Studies of auditory perception have provided evidence
for listeners’ sensitivity to temporal asynchrony in simul-
taneously occurring events. The asynchrony in simulta-
neously occurring events is used for grouping or segre-
gation of streams in order to determine sound sources in
auditory scene analysis (Bregman, 1990). For example,
two pure tones of different frequencies segregate in
streams if their onsets are played relatively asynchro-
nously but fuse into one percept for synchronous onsets.
Zera and Green (1993, 1995) studied the perception of
temporal asynchrony of a single harmonic at both the
onset and the offset of a complex sound. Listeners’ dis-
crimination of an asynchronous complex from a synchro-
nous one was superior when the asynchronous feature was
situated at the onset of the complex sound. In musical
performance, onset asynchronies and, in particular, early
onsets in the melodic line (melody leads) provide im-
portant perceptual cues aiding listeners in identifying the
melody in multivoiced pieces (e.g., Palmer, 1996).
For sequentially occurring events, the detection of de-
viations from temporal regularity has been investigated
in sequences with regular interonset intervals made of
clicks (Halpern & Darwin, 1982) or tones (Drake, 1993;
Drake & Botte, 1983; Hirsh, Monahan, Grant, & Singh,
1990) or in musical excerpts (Repp, 1992, 1998, 1999).
The performance is influenced not only by the inter-
onset interval size and the positions in the sequence (ini-
tial vs. final), but also by the sequence's structure in the
vicinity of the to-be-detected event. The musical struc-
ture of the sequence interacts with the detection of tem-
porally deviant events in isochronously played musical
sequences. Across all positions in the musical sequence,
the detection accuracy decreases in places where length-
ening would typically occur in an expressive musical in-
terpretation (e.g., at the end of a section). The musical
structure of the previous context allows the develop-
ment of expectations on timing structures for upcoming
events that influence the detection accuracy of temporal
deviations.
In our study, the influence of harmonic expectations
on the detection of a temporal onset asynchrony in si-
multaneously occurring tones was investigated. As has
been shown in previous priming studies, a harmonic con-
text activates listeners’ tonal knowledge and allows the
development of expectations for subsequent chords, with
harmonically related events being more strongly ex-
pected. In the present study, we were interested in the in-
fluence of harmonic relatedness of the context on the
judgment of a temporal feature in a chord. The temporal
asynchrony was related to the onset of one tone in a
chord of four tones. In half of the trials, one tone of the
target was played with a short temporal delay, and the
participants were required to indicate whether the chord
contained a temporal asynchrony.
EXPERIMENT 1
Method
Participants. Fifteen introductory psychology students partici-
pated in this experiment. Number of years of musical training, as
measured by years of instrumental instruction, ranged from 0 to 13,
with a mean of 4 (SD = 4.24) and a median of 3.
Material. Prime chords and target chords were major chords
consisting of four component tones (i.e., C2–E3–G3–C4). Each of
the 12 major chords occurred twice as target, preceded by either a
related or an unrelated prime. The related prime chord was built on
the degree of the chromatic scale that is seven semitones (a perfect
fifth) above or five semitones (a perfect fourth) below (i.e., a fre-
quency factor of 2^(7/12) above or 2^(5/12) below) the one for the target.
The related pair formed an authentic cadence, a frequently occur-
ring chord succession in Western tonal music that creates a strong
impression of ending (e.g., G-major followed by C-major). The un-
related prime chord was built on a degree of the chromatic scale
that is six semitones (i.e., a tritone, a frequency factor of 2^(6/12))
above or below the one for the target, creating a pair with harmon-
ically distant chords (e.g., F♯-major followed by C-major).
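The interval factors above follow directly from equal temperament, where each semitone multiplies frequency by 2^(1/12); a minimal sketch (illustrative, not from the study):

```python
# Equal-temperament interval factors (illustrative): a semitone is a
# frequency factor of 2**(1/12), so the fifth, fourth, and tritone used
# to define related and unrelated primes come out as follows.

def semitone_factor(n):
    """Frequency factor for an interval of n semitones upward."""
    return 2 ** (n / 12)

fifth_up    = semitone_factor(7)      # perfect fifth above, ~1.498 (related prime)
fourth_down = 1 / semitone_factor(5)  # perfect fourth below, ~0.749 (related prime)
tritone     = semitone_factor(6)      # tritone, ~1.414 (unrelated prime)
```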
All the stimuli were played with sampled piano sounds produced
by a KORG New SG-1D. The sound stimuli were captured by
SoundEditPro software (MacroMedia), and the experiment was run
on PsyScope software (Cohen, MacWhinney, Flatt, & Provost,
1993). The prime chord sounded for 666 msec, the target chord
sounded for 2 sec, and the interstimulus interval was set to 0.
For prime chords and synchronous targets, the four tones of the
chord were played at the same time. For the targets containing a tem-
poral asynchrony (asynchronous targets), the tonic tone in the soprano
voice (i.e., the highest tone in the chord) was played with a delay of
50 msec, as compared with the three other tones. This tone did not
occur in either the related or the unrelated prime chord. Velocity, a
parameter related to the sound level, was constant for all pitches (set
to 64 in Performer 5.3 software, Mark of the Unicorn), except for
the delayed tone, for which the velocity was adjusted for compara-
ble salience (with a mean velocity of 62.3, ranging from 55 to 68).
The calibration of the velocities was done on the basis of the agree-
ment of two listeners.
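The construction of the two target types can be sketched as follows (a hypothetical helper, not the authors' stimulus code; only the 50-msec delay and the C2-E3-G3-C4 voicing come from the text):

```python
# Hypothetical stimulus sketch: only the four-tone voicing C2-E3-G3-C4 and
# the 50-msec soprano delay come from the text; the helper itself is made up.

def make_target(tones, asynchronous=False, delay_ms=50):
    """Return (tone, onset_ms) pairs; the last tone is the soprano voice."""
    onsets = [(tone, 0) for tone in tones]
    if asynchronous:
        tone, _ = onsets[-1]
        onsets[-1] = (tone, delay_ms)
    return onsets

target = make_target(["C2", "E3", "G3", "C4"], asynchronous=True)
# the soprano tone now starts 50 msec after the three lower tones
```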
Procedure. After a training phase on the experimental task with
isolated chords, the 48 chord pairs were presented in random order.
The participants had to make a normal versus one note late judg-
ment on the second chord as quickly and accurately as possible by
pressing one of two keys on the computer keyboard. The next trial
began when the participant pressed a third key on the keyboard. In
order to motivate the participants to respond as quickly and accu-
rately as possible, the second chord stopped sounding immediately
after a correct response (allowing the participants to continue with
the next trial), but not after an incorrect response, which, in addition,
was followed by an alerting feedback signal.
Results
Percentages of errors (Figure 1, left panel) and response
times for correct responses (Figure 1, right panel) were an-
alyzed by two 2 × 2 analyses of variance (ANOVAs), with
harmonic context (related/unrelated) and target type
(synchronous/asynchronous) as within-subjects factors.
For the synchronous targets, error rates were lower
and response times were faster for the related context
than for the unrelated context. For asynchronous targets,
error rates were higher and response times tended to be
slower for the related context than for the unrelated con-
text. This interaction between harmonic context and tar-
get type was significant for percentages of errors and for
response times [F(1,14) = 7.58, MSe = 220.4, p < .05,
and F(1,14) = 9.64, MSe = 1,812.47, p < .01, respec-
tively]. In addition, the effect of harmonic context was
significant for percentages of errors [F(1,14) = 7.49,
MSe = 50.10, p < .05]. For response times, the size of the
priming effect for synchronous targets (i.e., the differ-
ence in response times between related and unrelated
contexts) was not correlated with years of musical train-
ing [r(13) = .05]. Overall, response times were faster for
synchronous targets than for asynchronous targets
[F(1,14) = 17.5, MSe = 2,455.34, p < .01]. It might be
suggested that the presence of the critical task feature
(the asynchronous tone) in the asynchronous targets al-
lowed faster responding than for the synchronous targets;
in other words, for synchronous targets, the participants
may have waited somewhat longer before deciding that
no late tone was forthcoming.
Figure 1. Percentages of errors (left panel) and correct response times (right panel) averaged across all
chord pairs as a function of harmonic context (related/unrelated) and target type (synchronous/asyn-
chronous) in Experiment 1. Error bars indicate between-subjects standard errors.
Two further analyses were performed in terms of sig-
nal detection theory, with sensitivity (d′) and response
criterion (c) as dependent variables and harmonic context
as a within-subjects factor (Table 1). False alarms were
defined as errors for synchronous targets, hits as correct
responses for asynchronous targets. The parameters d′
and c were calculated for each participant separately. In
cases without false alarms or with the maximum number
of hits, frequencies of 0 and N were converted into 1/2
and N − 1/2 (Kadlec, 1999). The possible range of d′
scores was from 0 to 3.5. The response criterion c was
chosen because this bias parameter is independent of d′
(Macmillan & Creelman, 1991). The sensitivity (d′) was
significantly higher for the unrelated context than for the
related context [F(1,14) = 5.63, p < .05]. The participants
showed a small bias to answer one note late when the tar-
get was unrelated to the prime and to answer normal
when it was related [F(1,14) = 7.47, p < .05].
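The signal detection analysis above can be sketched as follows (an illustrative implementation assuming the standard equal-variance Gaussian model; the replacement of 0 and N counts by 1/2 and N − 1/2 follows the Kadlec, 1999, convention cited in the text). With 12 trials per cell, the maximum attainable d′ is about 3.46, which matches the stated range of 0 to 3.5.

```python
# Illustrative signal detection computation (equal-variance Gaussian model):
# hits = correct "one note late" responses to asynchronous targets,
# false alarms = errors on synchronous targets; counts of 0 and N are
# replaced by 1/2 and N - 1/2 before conversion (Kadlec, 1999).

from statistics import NormalDist

def d_prime_and_c(hits, n_signal, false_alarms, n_noise):
    def rate(count, n):
        count = min(max(count, 0.5), n - 0.5)  # 0 -> 1/2, N -> N - 1/2
        return count / n
    z = NormalDist().inv_cdf
    zh = z(rate(hits, n_signal))
    zf = z(rate(false_alarms, n_noise))
    return zh - zf, -0.5 * (zh + zf)  # d', criterion c

# perfect performance on 12 trials per cell gives the ceiling d' of ~3.46
```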
Discussion
In Experiment 1, the harmonic relatedness of the con-
text influenced the processing of a chord when a tempo-
ral synchrony judgment was required. A synchronous
target chord (say, C-major) was faster and more accu-
rately processed when it followed a chord belonging to
the same musical key (G-major) rather than to another
key (F♯-major). Chords with a temporal asynchrony
showed the reverse pattern, with mainly the accuracy
data being influenced by the harmonic context. Thus, the
harmonic expectations that developed after the presenta-
tion of the prime chord influenced the processing of a
temporal feature of the target chord. The results suggest
that harmonic priming in short musical contexts gener-
alizes beyond intonation judgments to temporal syn-
chrony judgments. Previously reported facilitation effects
in related contexts seem not to be based on a potential
confound between contextual dissonance and dissonance
of the target itself.
Analogous to the intonation task (Bharucha & Stoeckig,
1986), a response bias was observed with temporal asyn-
chrony detection in Experiment 1. The participants tended
to respond synchronous for related chords and asynchro-
nous for unrelated chords. A surprising result was the
higher sensitivity (as measured by d′) for unrelated than
for related chords. In the intonation task, the reverse pat-
tern has been reported for long prime contexts, with sen-
sitivity being higher for related than for unrelated chords
(Tillmann & Bigand, 2001; Tillmann et al., 1998).
In Experiment 2, this difference with single-chord
prime contexts was further investigated by directly com-
paring the two tasks in a within-subjects design. The par-
ticipants were required to judge the target chord for tem-
poral asynchrony (normal/one note late) in one phase of
the experiment and for intonation (consonant/dissonant)
in the other phase.
EXPERIMENT 2
Method
Participants. Twenty-eight introductory psychology students
participated in this experiment; none had participated in Experi-
ment 1. Number of years of musical training, as measured by years
of instrumental instruction, ranged from 0 to 16, with a mean of
5.75 (SD = 3.52) and a median of 5.
Material. The definition of related and unrelated chord pairs
was identical to that in Experiment 1. A further methodological
control was added in order to avoid possible temporal inaccuracies
owing to the MIDI system: The same recordings of target chords
were used in both harmonic contexts (following either a related or
an unrelated prime). For the asynchrony task, the definitions of the
synchronous and asynchronous targets were as described for Ex-
periment 1, with a mean velocity of the delayed tone of 63.8 (rang-
ing from 58 to 68). For the intonation task, the consonant targets
were exactly the same as the synchronous targets. For the dissonant
targets, the sensory consonance was altered by adding an aug-
mented fifth, played at reduced velocity in order to make the dis-
sonance only moderately salient (i.e., C2–E3–G3–G♯3–C4). The
velocity for the added tone was calibrated to produce comparable
salience across chords (with a mean velocity of 43.2, ranging from
40 to 49).
Procedure. The experimental procedure was split into two
phases. Each phase had the same structure as in Experiment 1. In
one phase, the task was a temporal asynchrony judgment (nor-
mal/one note late); in the other phase, it was an intonation judgment
(consonant/dissonant). The order of the two phases was counter-
balanced over participants. As in Experiment 1, the next trial began
when the participant pressed a third key on the keyboard, and the
participants were alerted by a feedback signal if they gave an in-
correct response.
Results
Accuracy. For each task (asynchrony/intonation),
percentages of errors (Figure 2, left panels) were analyzed
by a 2 × 2 ANOVA, with harmonic context and target
type as within-subjects factors. For synchronous targets
and for consonant targets, more errors were observed in
the unrelated context than in the related context. For
asynchronous targets and for dissonant targets, more errors
were observed in the related context than in the unrelated
context. This interaction between harmonic context and
target type was significant for both tasks [F(1,27) =
21.06, MSe = 49.49, p < .001, for the asynchrony task;
F(1,27) = 28.32, MSe = 161.95, p < .001, for the intona-
tion task]. In addition, the effect of target type was sig-
nificant for the asynchrony task [F(1,27) = 8.37, MSe =
71.17, p < .01], and the effect of harmonic context was
significant for the intonation task [F(1,27) = 15.61,
MSe = 63.57, p < .01]. In both tasks, an interaction be-
tween harmonic context and target type was observed.

Table 1
Mean Sensitivity (d′) and Mean Response Criterion (c) for
Experiment 1 (With Between-Subjects Standard Errors)

                    Harmonic Context
               Related          Unrelated
Parameter     M      SE        M      SE
d′          2.16    .19      2.50    .17
c            .30    .10      −.08    .09

Note. Positive values of c stand for a tendency to respond normal; neg-
ative values stand for a tendency to respond one note late.

However, the error rate difference between the related
and the unrelated contexts was more pronounced for
consonant targets than for synchronous targets. In a 2 ×
2 × 2 ANOVA with task (asynchrony/intonation) as the
third within-subjects factor, this difference in interaction
found expression in a three-way interaction between
task, harmonic context, and target type [F(1,27) = 6.67,
MSe = 94.10, p < .05].
Response times. For each task (asynchrony/intona-
tion), correct response times (Figure 2, right panels) were
analyzed by a 2 × 2 ANOVA with harmonic context and
target type as within-subjects factors.
For the asynchrony task, the influence of harmonic
context was observed primarily for synchronous targets:
Response times were shorter in the related context than
in the unrelated context [F(1,27) = 5.07, MSe = 2,082.79,
p < .05].

Figure 2. Percentages of errors (left panels) and correct response times (right panels) averaged across
all chord pairs as a function of harmonic context (related/unrelated) and target type for the asynchrony
task (top) and the intonation task (bottom) in Experiment 2. Error bars indicate between-subjects stan-
dard errors.

The main effect of harmonic context and its
interaction with target type failed to reach significance
(p < .09 and p = .08, respectively). As was observed in
Experiment 1, the main effect of target type was signifi-
cant [F(1,27) = 19.53, MSe = 3,895.8, p < .001]: Reac-
tion times were shorter for asynchronous than for syn-
chronous targets.
For the intonation task, the effects of harmonic context
[F(1,27) = 10.01, MSe = 3,328.5, p < .01] and of target
type [F(1,27) = 6.18, MSe = 10,797.2, p < .05] and their
interaction [F(1,27) = 13.25, MSe = 7,953.9, p < .01] were
significant. For consonant targets, response times were
shorter in the related than in the unrelated context. For
dissonant targets, the reverse tendency was observed,
with longer response times in the related context.
In both tasks, the size of the priming effect for syn-
chronous and consonant targets (i.e., the difference in re-
sponse times between related and unrelated contexts)
was not correlated with musical training [r(26) = −.01
for the asynchrony task and r(26) = .16 for the intonation
task].
Sensitivity and response criterion. Signal detection
parameters d′ and c (Figure 3) were analyzed with two
2 × 2 ANOVAs, with task (asynchrony/intonation) and
harmonic context (related/unrelated) as within-subjects
factors. For sensitivity (d′; Figure 3, left panel), the interaction
between task and harmonic context was significant
[F(1,27) = 11.88, MSe = 0.28, p < .001]. This interaction
confirmed the hypothesis, based on previous results, that
the influence of context differs between the two tasks.
As for long harmonic contexts (see Tillmann et al., 1998),
d′ for the intonation task was higher for related than for
unrelated chords [F(1,27) = 23.6, p < .001]. In contrast,
as in Experiment 1, d′ for the asynchrony task was higher
for unrelated than for related chords [although here, this
difference was not significant in a separate comparison;
F(1,27) = 1.11, p = .30]. In addition, the main effects of
task and of harmonic context were significant [F(1,27) =
6.18, MSe = 0.17, p < .05, and F(1,27) = 7.1, MSe = 0.55,
p < .05, respectively].
For the response criterion (c; Figure 3, right panel),
the effect of harmonic context and its interaction with task
were significant [F(1,27) = 39.85, MSe = 0.10, p < .001,
and F(1,27) = 7.99, MSe = 0.05, p < .01, respectively].
There was a bias to answer one note late/dissonant for
unrelated chords and a bias to answer normal/consonant
for related chords. Both tasks showed a similar shift in
response criterion, although it was stronger in the into-
nation task than in the asynchrony task. The analogous
criterion shift in the two tasks suggests that the previ-
ously observed bias in the intonation task was not due
only to a potential conflict between the types of disso-
nance (contextual dissonance and target dissonance), al-
though it might reinforce the bias, but seems to reflect a
more general phenomenon linked to contextual harmonic
relatedness.
Discussion
In Experiment 2, harmonic priming was observed with
both temporal asynchrony and intonation judgments. The
processing of a (synchronous, consonant) target chord
was facilitated when it was preceded by a related prime
chord, in comparison with an unrelated prime chord. The
fact that priming was observed for both tasks eliminates
the possibility that priming effects observed with intona-
tion judgments are due solely to a confound between dis-
sonance types—the contextual dissonance created by
harmonic unrelatedness and the dissonance of the target
itself. Converging evidence has been reported recently
with a phoneme-monitoring task for long context prim-
ing (Bigand, Tillmann, Poulin, & D'Adamo, 2001).

Figure 3. Left panel: sensitivity (d′) as a function of harmonic context (related/unrelated) and task (asynchrony/intona-
tion). Right panel: response criterion (c) as a function of harmonic context (related/unrelated) and task (asynchrony/into-
nation). Positive values stand for a tendency to respond normal/consonant; negative values stand for a tendency to respond
one note late/dissonant. Error bars indicate between-subjects standard errors.

Listening to chord sequences created with synthetic pho-
nemes, participants were required to decide quickly
whether the target (the last chord of the sequence) was
sung on the phoneme /i/ or /u/. Results showed that har-
monic relatedness influences the processing of the target
chord: Phoneme monitoring was faster for phonemes
sung on a strongly related target (a referential tonic
chord) than for those sung on a less related, structurally
less important target (a subdominant chord). The prim-
ing results obtained with three different tasks thus point
to the robustness of the processing advantages conferred
by expectations arising from a previous context. This
previous context can be as short as one single chord and
influences the processing of further events. The outcome
of the temporal asynchrony task showed that expecta-
tions based on harmonic structures of the context also
have an impact on the perception of temporal features of
an event.
GENERAL DISCUSSION
Our study provides evidence that a harmonic context
influences the processing of a target chord: Temporal
synchrony judgments and consonant judgments of target
chords are facilitated after related prime chords, as com-
pared with unrelated prime chords. In previous studies
(Bharucha & Stoeckig, 1986, 1987; Bigand et al., 1999;
Bigand & Pineau, 1997; Bigand, Poulin, et al., 2001; Bi-
gand, Tillmann, et al., 2001; Tillmann & Bigand, 2001;
Tillmann et al., 1998) and the present one, musical ex-
pertise only weakly influences priming. The fact that the
harmonic context effect is observed consistently for mu-
sicians and for participants without musical training or
formal knowledge in tonal music suggests that harmonic
priming is based on cognitive processes that do not re-
quire explicit knowledge of musical structure. Listeners
to Western music acquire, through mere exposure, im-
plicit knowledge of the tonal system and its underlying
regularities. The acquisition of implicit tonal knowledge
and the activation of this knowledge in a context can be
simulated by a connectionist model (Bharucha, 1987;
Tillmann et al., 2000). The model of tonal knowledge ac-
tivation simulates previously observed harmonic prim-
ing effects (Bharucha & Stoeckig, 1987; Bigand et al.,
1999; Tekman & Bharucha, 1998; Tillmann & Bigand,
2001; Tillmann et al., 1998). For our data, it provides a
framework to explain the short context priming and the
sensitivity differences between the two tasks, depending
on the context.
In the model, tonal knowledge is conceived as a net-
work of interconnected units. Once learning has oc-
curred, the units are organized in three layers corre-
sponding to tones, chords, and keys. Each tone unit is
connected to the chords of which that tone is a compo-
nent. Analogously, each chord unit is connected to the
keys of which it is a member. Harmonic relations emerge
from the activation that reverberates via connected links
between tone, chord, and key units. When a chord is
played, the units representing the sounded component
tones are activated, and phasic activation (i.e., the
change of activation) is sent via connected links from
tones to chords and from chords to keys (bottom-up ac-
tivation). Phasic activation reverberates from keys to
chords and from chords to tones (top-down activation)
until an equilibrium is reached (see Bharucha, 1987, and
Tillmann et al., 2000, for more details). In early activa-
tion cycles, activated chord units contain at least one of
the component tones of the stimulus chord. At equilib-
rium, the activation pattern of chord units and of tone
units reflects the Western tonal hierarchy of the key con-
text and takes into account the key membership of chords
and tones. A given chord activates the units of harmoni-
cally related chords more strongly than it does the units
of unrelated chords. After two harmonically distant prime chords (e.g., G and F♯-major in Figure 4), the activation patterns differ strongly, with each one reflecting its harmonic relations with the other major chords. A target chord unit (e.g., C-major) is more strongly activated after the presentation of a harmonically related chord that shares a parent key (G-major) than after an unrelated chord with no common parent key (F♯-major). Activation
levels are interpreted as levels of expectation for subse-
quent events (Bharucha & Stoeckig, 1987; Bigand et al.,
1999; Tekman & Bharucha, 1998; Tillmann et al., 1998)
and predict harmonic priming, with facilitated process-
ing after a related prime. In our study, activation levels of
[Figure 4. Relative activation of major chord units in the network at equilibrium after the presentation of a G-major and an F♯-major chord. The activation of the chord unit C-major is stronger after the harmonically related prime chord G than after the unrelated chord F♯.]
EFFECT OF HARMONIC RELATEDNESS 647
chord units thus simulate the observed harmonic context
effects on correct response times and error rates for syn-
chronous and consonant targets.
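The reverberation dynamics described above can be sketched in a few lines of code. The following is a deliberately minimal, hypothetical toy: majors-only, with hand-set link patterns (component tones for tone–chord links; I, IV, and V membership for chord–key links) standing in for the learned weights of Bharucha (1987) and Tillmann et al. (2000), and with illustrative names such as `chord_activations`. It reproduces only the qualitative pattern of Figure 4, not the model's actual activation values.

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_TRIAD = (0, 4, 7)  # semitone offsets: root, major third, perfect fifth
KEY_CHORDS = (0, 5, 7)   # roots of the I, IV, and V chords of a major key

def chord_tones(root):
    """Pitch classes of the major triad built on `root`."""
    return [(root + step) % 12 for step in MAJOR_TRIAD]

def key_chords(key):
    """Roots of the three major chords (I, IV, V) linked to `key`."""
    return [(key + step) % 12 for step in KEY_CHORDS]

def chord_activations(prime, cycles=50, damp=0.5):
    """Clamp the sounded tones, then let phasic activation reverberate
    tones -> chords -> keys -> chords for a fixed number of cycles."""
    tones = [0.0] * 12
    for name in prime:
        tones[NOTES.index(name)] = 1.0
    chords = [0.0] * 12
    for _ in range(cycles):
        # bottom-up: each key unit collects activation from its I, IV, V chords
        keys = [sum(chords[c] for c in key_chords(k)) for k in range(12)]
        key_total = sum(keys)
        if key_total > 0:
            keys = [k / key_total for k in keys]
        # each chord unit receives bottom-up input from its component tones
        # plus damped top-down input from every key it is a member of
        chords = [sum(tones[t] for t in chord_tones(c))
                  + damp * sum(keys[k] for k in range(12) if c in key_chords(k))
                  for c in range(12)]
        total = sum(chords)
        chords = [c / total for c in chords]
    return chords

related = chord_activations(["G", "B", "D"])       # G-major prime
unrelated = chord_activations(["F#", "A#", "C#"])  # F#-major prime
# The C-major chord unit ends up more active after the related prime.
```

After the G-major prime, the C-major unit gets bottom-up input from the shared tone G and top-down support from the shared parent keys; after the F♯-major prime it gets no tone overlap and only weak, indirect key support, so its activation stays low.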
However, for the asynchronous and dissonant targets,
the pattern of results (primarily of the accuracy data) dif-
fered between the two tasks. Signal detection analysis re-
vealed that this difference was not due to the response
criterion, but rather to sensitivity: d′ was higher for the
unrelated context than for the related context in the tem-
poral asynchrony task (Experiments 1 and 2), and the re-
verse was observed in the intonation task (Experi-
ment 2). The question arises as to how the neural net
model of tonal knowledge activation can account for
these observed context differences in d′.
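For readers less familiar with the signal detection analysis, d′ and the response criterion are computed from hit and false-alarm rates as follows (a standard equal-variance Gaussian formulation; the function name is illustrative, not code from the study):

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Sensitivity d' = z(H) - z(FA) and response criterion
    c = -(z(H) + z(FA)) / 2, where z is the inverse normal CDF."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Two conditions with identical (neutral) bias but different accuracy:
d_hi, c_hi = sdt_measures(0.90, 0.10)  # more sensitive condition
d_lo, c_lo = sdt_measures(0.75, 0.25)  # less sensitive condition
```

Equal criteria with unequal d′, as in the pattern reported above, indicate a genuine difference in discriminability rather than a shift in the willingness to respond "asynchronous" (or "out of tune").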
The asynchrony and intonation tasks involve the pro-
cessing of a particular tone in the target chord: the de-
layed tone and the tone creating the dissonance. When a
chord is presented to the model, activation reverberates
to chord and key layers and back to the tone layer, where
the received phasic activation reflects the importance of
the tones in the actual key context (i.e., the tonal hierarchy). After the presentation of a G-major chord (i.e., one consisting of the three tones G–B–D), the activation of the tone layer reflects the relationship among tones in the key of G-major, even if not all tones have been presented to the model (cf. profile of the related prime G-major in Figure 5). As a consequence of top-down influences in reverberation, the tone units belonging to the key of G-major (A, B, C, D, E, F♯, G) receive more activation than do tone units outside the key (A♯, C♯, D♯, F, G♯), with a reduced activation for the tone F♯.
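This tone-layer readout can be illustrated with a minimal toy network (a hypothetical, majors-only sketch with hand-set tone–chord and chord–key links and illustrative names, not the authors' trained model; it reproduces the in-key versus out-of-key contrast only on average, not the fine gradations of Figure 5):

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_TRIAD = (0, 4, 7)  # root, major third, perfect fifth
KEY_CHORDS = (0, 5, 7)   # I, IV, V chord roots of a major key

def chord_tones(root):
    return [(root + step) % 12 for step in MAJOR_TRIAD]

def key_chords(key):
    return [(key + step) % 12 for step in KEY_CHORDS]

def tone_profile(prime, cycles=50, damp=0.5):
    """Reverberate tones -> chords -> keys -> chords, then read out the
    activation the chord layer sends back down to each tone unit."""
    tones = [0.0] * 12
    for name in prime:
        tones[NOTES.index(name)] = 1.0
    chords = [0.0] * 12
    for _ in range(cycles):
        keys = [sum(chords[c] for c in key_chords(k)) for k in range(12)]
        key_total = sum(keys)
        if key_total > 0:
            keys = [k / key_total for k in keys]
        chords = [sum(tones[t] for t in chord_tones(c))
                  + damp * sum(keys[k] for k in range(12) if c in key_chords(k))
                  for c in range(12)]
        total = sum(chords)
        chords = [c / total for c in chords]
    # top-down readout: each tone unit sums the chord units it belongs to
    return [sum(chords[c] for c in range(12) if t in chord_tones(c))
            for t in range(12)]

profile = tone_profile(["G", "B", "D"])  # after a G-major prime
in_key = [NOTES.index(n) for n in ("G", "A", "B", "C", "D", "E", "F#")]
out_key = [t for t in range(12) if t not in in_key]
```

Even in this crude sketch, the tones of G major receive more top-down activation on average than the nondiatonic tones, and the tone C is more active after the related G-major prime than after an unrelated F♯-major prime.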
In response to different prime chords (i.e., G and F♯-major), the activation patterns of the tone units reflect the tonal relations among the 12 tones in the corresponding keys (G and F♯-major, respectively; cf. Figure 5). These
activation levels determine how strong the change of ac-
tivation in a tone unit will be when a particular tone oc-
curs in the following chord. If a tone unit is strongly ac-
tivated by the previous context, the change of activation
in this tone unit will be less than if it was weakly activated
previously. When the listeners’ task involves a detection
of this tone, we hypothesize that the strength of activa-
tion change influences the ease of tone detection: A small
activation change might result in a more difficult task and
less precise processing than does a large activation
change. In addition, a strong activation level of a tone unit
before the tone sounds (or even if it does not occur) might
render the rejection of its presence in the following chord
more difficult than does a weak activation level.
Because the sensitivity data differ as a function of har-
monic context and task, we would expect different acti-
vation levels of these two tones as a function of the prime
context, which in turn lead to different activation changes
when the target occurs. For the example in Figure 5, the target that follows the related (G-major) and the unrelated (F♯-major) primes is a C-major chord. In the asynchrony task, the to-be-detected tone in the target is the
tone C. Because of reverberation back to the tone layer,
the activation level of the C unit is stronger after the re-
lated prime than after the unrelated prime. After the re-
lated prime, the delayed tone is already strongly acti-
vated by top-down reverberation before actually sounding.
This strong activation and the resulting small activation
change when the tone is then played in the target might
render the detection of the tone (i.e., its moment of oc-
currence) more difficult than in the unrelated context, in
which a low activation level is followed by a large acti-
vation change. In the intonation task, the distinctive tone
[Figure 5. Activations received by the 12 units of the tone layer during reverberation after the presentation of a G-major and an F♯-major chord.]
648 TILLMANN AND BHARUCHA
creating the dissonance is G♯. It is more strongly activated after an unrelated prime than after a related prime. The activation levels are thus reversed in comparison with the tone C and would result in the opposite activation change when the target is played. These activation patterns predict a detection facilitation as a function of harmonic context that is the reverse of the one observed in the asynchrony task. The prediction for activation changes in tone units was strengthened by further simulations that were performed with the related and unrelated primes followed by either a C-major chord (consisting of the tones C–E–G) or a C-major chord with an additional G♯ tone.²
Figure 6 displays the difference of activation before and after the presentation of the target for the tone units C and G♯. For the tone C, the activation change is stronger after the unrelated prime than after the related prime. For the tone G♯, this pattern is reversed. The strength of activation change thus depends on the harmonic context and the considered tone, and it reflects the pattern of the d′ data observed for the asynchrony and intonation tasks.
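A toy reverberation network (hypothetical, majors-only, with hand-set tone–chord and chord–key links and illustrative names; not the authors' implementation) can illustrate this activation-change logic. One caveat: the sketch has no temporal decay, so its post-target state is prime-independent, and the difference in activation change between contexts is carried entirely by the pre-target activation levels of the tone units. Following note 2, the added G♯ in the target is coded at .25.

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_TRIAD = (0, 4, 7)  # root, major third, perfect fifth
KEY_CHORDS = (0, 5, 7)   # I, IV, V chord roots of a major key

def chord_tones(root):
    return [(root + step) % 12 for step in MAJOR_TRIAD]

def key_chords(key):
    return [(key + step) % 12 for step in KEY_CHORDS]

def tone_profile(weighted_tones, cycles=50, damp=0.5):
    """Spread activation tones -> chords -> keys -> chords, then read out
    the top-down activation at the tone layer. `weighted_tones` maps note
    names to input strengths (1.0 for triad tones, .25 for the added G#)."""
    tones = [0.0] * 12
    for name, weight in weighted_tones.items():
        tones[NOTES.index(name)] = weight
    chords = [0.0] * 12
    for _ in range(cycles):
        keys = [sum(chords[c] for c in key_chords(k)) for k in range(12)]
        key_total = sum(keys)
        if key_total > 0:
            keys = [k / key_total for k in keys]
        chords = [sum(tones[t] for t in chord_tones(c))
                  + damp * sum(keys[k] for k in range(12) if c in key_chords(k))
                  for c in range(12)]
        total = sum(chords)
        chords = [c / total for c in chords]
    return [sum(chords[c] for c in range(12) if t in chord_tones(c))
            for t in range(12)]

before_related = tone_profile({"G": 1.0, "B": 1.0, "D": 1.0})
before_unrelated = tone_profile({"F#": 1.0, "A#": 1.0, "C#": 1.0})
after_target = tone_profile({"C": 1.0, "E": 1.0, "G": 1.0, "G#": 0.25})

def change(tone, before):
    """Activation change of one tone unit when the target replaces the prime."""
    return after_target[NOTES.index(tone)] - before[NOTES.index(tone)]
```

In this sketch, the change for the tone C is larger in the unrelated context (where C was barely pre-activated), whereas the change for G♯ is larger in the related context, mirroring the d′ pattern described in the text.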
The simulations suggest that the neural net model of
tonal knowledge activation provides a framework for in-
tegrating the observed priming results of the two tasks.
The facilitated processing of normative (synchronous,
consonant) targets (with higher accuracy and faster re-
sponse times) is simulated at a global level of processing
by the activation of the target chord unit. The differences
in d′ between the two tasks depending on the harmonic
context are simulated with differences in activation lev-
els and activation changes of the tone units that repre-
sent the critical features. The model explains the influ-
ence of harmonic context on the detection of temporal
onset asynchrony in terms of activation of listeners’ im-
plicit knowledge of harmonic relations. The model also
offers a framework for generating new, testable predic-
tions that relate to the influence of harmonic relations on
the processing of temporal features. A specific predic-
tion derives directly from activation levels in tone units
(Figure 5): A similar outcome should occur when the de-
layed tone in the target is not C, but G or E. Further pre-
dictions are linked to the temporal decay of activation,
which simulates the dynamic aspects of harmonic pro-
cessing. Systematic modifications of the interstimulus
interval should allow us to further investigate the time
course of knowledge activation and its influence on tem-
poral processing.
The present study suggests that structures based on
the pitch dimension influence the processing of tempo-
ral information. The harmonic relatedness of the context
chord influences the detection of a temporal onset asyn-
chrony in simultaneously occurring tones. A further con-
cern is to investigate whether these results generalize to
temporal processing in longer contexts. It already has
been shown that the temporal perception of sequentially
occurring events in musical sequences is influenced by
the overall phrase structure of the piece (Repp, 1992,
1998, 1999). Future research must investigate whether
the detection of temporal deviations between sequential
musical events is influenced specifically by the har-
monic relatedness of the deviant event to its surrounding
context. For example, the question arises whether listen-
ers’ sensitivity to accurate timing is smaller for strongly
related events than for unrelated ones, or whether a de-
viant tone would be more easily detected when it is a
nondiatonic event—or even a less related diatonic event
in the actual key context.
REFERENCES
Bharucha, J. J. (1987). Music cognition and perceptual facilitation: A
connectionist framework. Music Perception, 5, 1-30.
Bharucha, J. J., & Stoeckig, K. (1986). Reaction time and musical
expectancy: Priming of chords. Journal of Experimental Psychology:
Human Perception & Performance, 12, 403-410.
Bharucha, J. J., & Stoeckig, K. (1987). Priming of chords: Spread-
ing activation or overlapping frequency spectra? Perception &
Psychophysics, 41, 519-524.
Bigand, E., Madurell, F., Tillmann, B., & Pineau, M. (1999). Ef-
fect of global structure and temporal organization on chord process-
ing. Journal of Experimental Psychology: Human Perception & Per-
formance, 25, 184-197.
Bigand, E., & Pineau, M. (1997). Global context effects on musical
expectancy. Perception & Psychophysic s, 59, 1098-1107.
Bigand, E., Poulin, B., Tillmann, B., D'Adamo, D., & Madurell, F.
(2001). Cognitive versus sensory components in harmonic priming.
Manuscript submitted for publication.
Bigand, E., Tillmann, B., Poulin, B., & D'Adamo, D. (2001). The effect of harmonic context on phoneme monitoring in vocal music.
Cognition, 81, B11-B20.
Bregman, A. S. (1990). Auditory scene analysis: The perceptual orga-
nization of sound. Cambridge, MA: MIT Press.
Cohen, J., MacWhinney, B., Flatt, M., & Provost, J. (1993).
PsyScope: An interactive graphic system for designing and controlling
experiments in the psychology laboratory using Macintosh computers.
Behavior Research Methods, Instruments, & Computers, 25, 257-271.
Drake, C. (1993). Perceptual and performed accents in musical se-
quences. Bulletin of the Psychonomic Society, 31, 107-110.
[Figure 6. Difference of activation for the tone units C and G♯ before and after the presentation of the chord C-major that followed the prime chord (i.e., the related G-major or the unrelated F♯-major chord). The tone C is the crucial feature for the asynchrony task, and the tone G♯ for the intonation task.]
Drake, C., & Botte, M.-C. (1993). Tempo sensitivity in auditory se-
quences: Evidence for a multiple-look model. Perception & Psycho-
physics, 54, 277-286.
Francès, R. (1958). La perception de la musique. Paris: Vrin. [The per-
ception of music (W. J. Dowling, Trans.) (1988). Hillsdale, NJ: Erlbaum]
Halpern, A. R., & Darwin, C. J. (1982). Duration discrimination in a
series of rhythmic events. Perception & Psychophysic s, 31, 86-89.
Hirsh, I. J., Monahan, C. B., Grant, K. W., & Singh, P. G. (1990).
Studies in auditory timing: I. Simple patterns. Perception & Psycho-
physics, 47, 215-226.
Kadlec, H. (1999). Statistical properties of d′ and β estimates of signal detection theory. Psychological Methods, 4, 22-43.
Krumhansl, C. L. (1990). Cognitive foundations of musical pitch. Ox-
ford: Oxford University Press.
Macmillan, N. A., & Creelman, C. D. (1991). Detection theory: A
user's guide. Cambridge: Cambridge University Press.
Meyer, D. E., & Schvaneveldt, R. W. (1971). Facilitation in recog-
nizing pairs of words: Evidence of a dependence between retrieval
operations. Journal of Experimental Psychology, 90, 227-234.
Palmer, C. (1996). On the assignment of structure in music perfor-
mance. Music Perception, 14, 23-56.
Repp, B. H. (1992). Probing the cognitive representation of musical
time: Structural constraints on the perception of timing. Cognition,
44, 241-281.
Repp, B. H. (1998). Variations on a theme by Chopin: Relations between
perception and production of timing in music. Journal of Experi-
mental Psychology: Human Perception & Performance, 24, 791-811.
Repp, B. H. (1999). Detecting deviations from metronomic timing in
music: Effects of perceptual structure on the mental timekeeper. Per-
ception & Psychophysic s, 61, 529-548.
Stanovich, K. E., & West, R. F. (1979). Mechanisms of sentence con-
text effects in reading: Automatic activation and conscious attention.
Memory & Cognition, 7, 77-85.
Tekman, H. G., & Bharucha, J. J. (1992). Time course of chord prim-
ing. Perception & Psychophysics, 51, 33-39.
Tekman, H. G., & Bharucha, J. J. (1998). Implicit knowledge versus
psychoacoustic similarity in priming of chords. Journal of Experi-
mental Psychology: Human Perception & Performance, 24, 252-260.
Terhardt, E. (1984). The concept of musical consonance: A link be-
tween music and psychoacoustics. Music Perception, 1, 276-295.
Tillmann, B., Bharucha, J. J., & Bigand, E. (2000). Implicit learn-
ing of music: A self-organizing approach. Psychological Review,
107, 885-913.
Tillmann, B., & Bigand, E. (2001). Global context effect in normal
and scrambled musical sequences. Journal of Experimental Psychol-
ogy: Human Perception & Performance, 27, 1185-1196.
Tillmann, B., Bigand, E., & Pineau, M. (1998). Effects of global and
local contexts on harmonic expectancy. Music Perception, 16, 99-118.
Zera, J., & Green, D. M. (1993). Detecting temporal onset and offset
asynchrony in multicomponent complexes. Journal of the Acoustical
Society of America, 93, 1038-1052.
Zera, J., & Green, D. M. (1995). Effect of signal component phase on
asynchrony discrimination. Journal of the Acoustical Society of
America, 98, 817-827.
NOTES
1. Error rates and response times were reported side by side to show
facilitation (fewer errors and shorter reaction times) and to show that the
observed context effect was not due solely to a speed/accuracy tradeoff.
2. The model in its present form does not include pitch height. The 12 chromatic tone units represent absolute pitch classes only (i.e., tones generalized across octaves). Because of this representation, the C-major chord was coded with three tones (each tone was coded in the input layer with an activation value of 1). The additional tone G♯ was coded with a value of .25, in order to have a comparable energy input for the two presented chords (i.e., C–E–G and C–E–G–G♯). It should be noted that the absolute amount of change (Figure 6) cannot be compared across units, because this relation depends on the coded input value for G♯. For example, a higher activation coding of G♯ would result in stronger activation changes for G♯ than occur with the present coding, and these changes would also be stronger than those for the tone C.
(Manuscript received October 23, 2000;
revision accepted for publication August 1, 2001.)