Rhythm and Syntax Processing in School-Age Children
Yune S. Lee, Sanghoon Ahn,
and Rachael Frush Holt
The Ohio State University
E. Glenn Schellenberg
University of Toronto Mississauga
Scholars debate whether musical and linguistic abilities are associated or independent. In the present
study, we examined whether musical rhythm skills predict receptive grammar proficiency in childhood.
In Experiment 1, 7- to 17-year-old children (N = 68) were tested on their grammar and rhythm abilities.
In the grammar-comprehension task, children heard short sentences with subject-relative (e.g., “Boys that
help girls are nice”) or object-relative (e.g., “Boys that girls help are nice”) clauses, and determined the
gender of the individual performing the action. In the rhythm-discrimination test, children heard two
short rhythmic sequences on each trial and decided if they were the same or different. Children with
better performance on the rhythm task exhibited higher scores on the grammar test, even after holding
constant age, gender, music training, and maternal education. In Experiment 2, we replicated this finding
with another group of same-age children (N = 96) while further controlling for working memory. Our
data reveal, for the first time, an association between receptive grammar and rhythm perception in
typically developing children. This finding is consistent with the view that music and language share
neural resources for rule-based temporal processing.
Keywords: rhythm, syntax, music, language, children
Behavioral studies often report associations between music and
speech. In childhood, music aptitude is correlated with phonolog-
ical (Anvari, Trainor, Woodside, & Levy, 2002; Moritz, Yampol-
sky, Papadelis, Thomson, & Wolf, 2013) and pronunciation (Mi-
lovanov et al., 2009) skills. Although one study reported that pitch
perception was correlated positively with phonological awareness
(Anvari et al., 2002), musical rhythm skills (e.g., rhythm discrim-
ination or reproduction) are more often predictive of speech per-
ception (Carr, White-Schwoch, Tierney, Strait, & Kraus, 2014;
Moritz et al., 2013; Ozernov-Palchik, Wolf, & Patel, 2018; Poli-
timou, Dalla Bella, Farrugia, & Franco, 2019; Swaminathan &
Schellenberg, 2017, 2019). For example, a recent study found that
rhythm perception and production best accounted for phonological
awareness in 4-year-olds (Politimou et al., 2019). By contrast,
impaired rhythm abilities are associated with deficits in phonolog-
ical awareness. For example, children with SLI (specific language
impairment) or developmental dyslexia exhibit poor performance
on tasks that require them to detect rhythmic timing or amplitude
rise— cues that are essential to speech perception (Corriveau,
Pasquini, & Goswami, 2007; Corriveau & Goswami, 2009; Gos-
wami, Gerson, & Astruc, 2010; Huss, Verney, Fosker, Mead, &
Goswami, 2011).
In some instances, children assigned randomly to music lessons
exhibit enhanced performance on auditory tasks that require dis-
crimination and detection of subtle phonetic features in speech
(Degé & Schwarzer, 2011; Flaugnacco et al., 2015; François,
Grau-Sánchez, Duarte, & Rodriguez-Fornells, 2015; Moreno et al.,
2009). For example, children who received two years of music
classes performed better on a test of speech-segmentation ability
than other children who received two years of painting class
(François, Chobert, Besson, & Schön, 2013). Similarly, children
with dyslexia who received seven months of music training out-
performed their counterparts who received painting training on
phonemic blending (i.e., hearing /c/-/a/-/t/ and producing “cat”) or
rhythm reproduction tasks (Flaugnacco et al., 2015). These find-
ings raise the possibility that music training causes improvements
in speech processing, as some scholars have theorized (Kraus &
Chandrasekaran, 2010; Patel, 2011). According to Patel (2011),
the particular characteristics of the music-learning process are
demanding but enjoyable, leading to enhanced listening skills that
transfer to speech perception. The perspective of Kraus is similar
but focused on encoding sound in the brainstem, which becomes
more faithful and accurate with music training, such that it en-
hances the perception of speech (Kraus & Chandrasekaran, 2010).
Yune S. Lee, Department of Speech and Hearing Science and Chronic Brain Injury Program, The Ohio State University; Sanghoon Ahn, Department of Neuroscience, The Ohio State University; Rachael Frush Holt, Department of Speech and Hearing Science, The Ohio State University; E. Glenn Schellenberg, Department of Psychology, University of Toronto Mississauga.
We thank Jessica Grahn and Reyna Gordon for sharing materials for the
rhythm test. We also thank Ian Goldthwaite, Allison Byrer, Kate Corbeil,
Katherine Miller, Korrin Perry, Aiesha Polakampalli, Christina Roup,
Laura Wagner, Cynthia Clopper, and Kathryn Campbell-Kibler for their
support. Special thanks to the parents and children who participated in our
study. Data are available upon request.
Correspondence concerning this article should be addressed to Yune S.
Lee, who is now at School of Behavioral and Brain Sciences, The Uni-
versity of Texas at Dallas, 800 West Campbell Road, Richardson, TX
75080. E-mail:
This document is copyrighted by the American Psychological Association or one of its allied publishers.
This article is intended solely for the personal use of the individual user and is not to be disseminated broadly.
Developmental Psychology
© 2020 American Psychological Association 2020, Vol. 2, No. 999, 000
ISSN: 0012-1649
Although both scholars focus on transfer from music to speech, they also believe that enhanced speech perception has cascading effects that positively influence language use more broadly.
Despite numerous reports of associations between music and
speech perception (for review, see Schellenberg & Weiss, 2013),
only recently have scholars turned their attention to plausible
connections between musical ability and language skills beyond
simple listening (or acoustic) processes in childhood (Gordon,
Jacobs, Schuele, & McAuley, 2015; Gordon, Shivers, et al., 2015;
Politimou et al., 2019). For example, Gordon, Shivers, et al. (2015)
tested 6-year-old children and reported a positive association be-
tween rhythm-discrimination ability and expressive grammar (i.e.,
producing morpho-syntactically well-formed words/phrases). No-
tably, the association remained evident even after controlling for
IQ, musical experiences, and socioeconomic status, which sug-
gests that similar underlying mechanisms influence both rhythm
and expressive grammar.
For adults, some evidence points to interactions between rhythm
and syntactic processing when these processes operate in parallel
during language comprehension. For example, words that unfold
metrically over time (i.e., with a beat) facilitate comprehension of
sentences that are syntactically complex or ambiguous (Roncaglia-
Denissen, Schmidt-Kassow, & Kotz, 2013; Schmidt-Kassow &
Kotz, 2008). By contrast, processing of word sequences with
irregular rhythmic patterns is more effortful (Bohn, Knaus, Wiese,
& Domahs, 2013). Priming with external rhythmic cues (e.g.,
march music) also leads to enhanced performance on tests of
syntax (Bedoin, Brisseau, Molinier, Roch, & Tillmann, 2016;
Canette et al., 2020; Chern, Tillmann, Vaughan, & Gordon, 2018;
Kotz & Gunter, 2015; Przybylski et al., 2013).
In the present study, we tested school-age children. Our goal
was to examine the possibility of a connection between proficiency
on a task that measured receptive grammar, and the ability to
perceive, remember, and discriminate musical rhythms. Basic syn-
tactic abilities are acquired early in life (Corrêa, 1995; Kidd &
Bavin, 2002; Labelle, 1990), such that older, school-age children
tend to be fluent in commanding multiclausal sentences (Nippold,
2009). Nevertheless, syntactic skills continue to improve through-
out the adolescent period (Frizelle, Thompson, McDonald, &
Bishop, 2018; Hartshorne, Tenenbaum, & Pinker, 2018; Loban,
1976). Notably, in a recent study based on a large amount of data
(N > 600,000), Hartshorne et al. (2018) estimated that grammar-
learning abilities improved until approximately 17 years of age.
Although syntactic competency is thought to remain stable
throughout adulthood (Chomsky, 2014; Herschensohn, 2009;
Nowak, Komarova, & Niyogi, 2001), there are individual differ-
ences in syntactic ability among adults (Dąbrowska, 2012a, 2012b, 2018, 2019; Dąbrowska & Street, 2006). In a recent study, Dąbrowska (2018) demonstrated that grammar competency among
adults depended on differences in IQ, education, and exposure to
print. There are similarly marked individual differences in chil-
dren’s syntactic ability (Nippold, 2009; Nippold, Mansfield, &
Billow, 2007; Spencer, Clegg, & Stackhouse, 2012). In the present
study, we held constant extraneous individual differences (i.e.,
confounding variables) to ensure that any observed associations
between rhythm and grammar were not artifacts. We hypothesized
that this association would emerge because central auditory pro-
cessing is required for rapid and efficient temporal analysis of
musical and linguistic structures.
To test this hypothesis, we administered short tests of rhythm
and grammar, which were tailored for testing outside of the labo-
ratory (i.e., in a children’s museum). In the rhythm test, children
compared pairs of rhythm sequences that required same/different
judgments. In the grammar test, children were asked to indicate the
gender of a noun that was linked to an “action” verb in a sentence
with either a subject- or object-relative embedded clause. For
example, consider the following two sentences, which comprise
the same six words:
“Kings that help queens are nice”
“Kings that queens help are nice”
Whereas the first sentence has an embedded clause that relates
to the subject of the action, the second sentence has an embedded
clause that relates to the object of the action (Kings in both
instances). Such object-relative (OR) clauses are syntactically
more complex than subject-relative (SR) clauses, a consequence of
the order (or temporal) rearrangement of the same words presented
serially. Half of these SR and OR sentences were further manip-
ulated in acoustic clarity by applying a vocoding-filter (Experi-
ment 1) or by adding multitalker babble (Experiment 2). The
clarity manipulation would allow us to explore a potential inter-
action between sensory (acoustic) and linguistic (syntactic) chal-
lenges (Wingfield, McCoy, Peelle, Tun, & Cox, 2006). Although
all of the degraded sentences were intelligible, such acoustic
manipulations could still render difficulty in syntactic access.
Experiment 1
The study protocol used here and in Experiment 2 was approved
by the Institutional Review Board at the Ohio State University
(IRB #: 2012B0213; Language studies in the labs in Life POD at
the Center of Science and Industry).
A priori power analysis conducted with G*Power 3.1 (Faul, Erdfelder, Buchner, & Lang, 2009) indicated that a sample of 63 participants was required to reach 85% certainty of detecting a medium-sized association (f² = .15; Cohen, 1988) between rhythm and grammar with six other variables held constant, α = .05. Our goal was to ensure that the sample was at least this large, and our arrangement with the museum did not allow for turning away children after this goal was reached.
Participants. Ninety-eight native English-speaking children,
with reported normal speech, language, and hearing, were re-
cruited from the visitor population at a local museum. Only five
children were bilingual, and one was trilingual. The children
ranged in age from 7 to 17 years, which ensured marked individual
differences in grammar and rhythm skills. Parental consent and
child assent were obtained prior to the beginning of the experi-
ment. Some children (N = 26) were subsequently excluded from
the sample for significantly below-chance (i.e., worse than guess-
ing) levels of performance on the grammar test (i.e., in any of 4
conditions), which likely arose due to misunderstanding directions
or swapping button responses. Another three children did not want
to complete the task, and one child was excluded for concurrently
receiving speech therapy. Thus, the final sample comprised 68
children (35 girls), whose mean age was 11.3 years (SD = 2.7).
We also measured demographic variables, including age, gen-
der, music training, and maternal education (as a proxy for socio-
economic status, or SES). Because these demographic variables
are known to be associated with children’s language skills (Barbu
et al., 2015; Hoff, 2003; Tabri, Chacra, & Pring, 2011), they served
as covariates when we examined whether musical-rhythm sensi-
tivity predicts receptive-grammar proficiency. Duration of music
training was calculated as the square root of total period of training
(i.e., total years), which was summed for children who had learned
more than one musical instrument, as in previous research (e.g.,
Swaminathan & Schellenberg, 2017). Lastly, maternal education
was measured on a 5-point scale (1 = high school diploma or less, 2 = associate's degree, 3 = bachelor's degree, 4 = master's degree, and 5 = doctorate).
Stimuli. In the grammar test, stimuli comprised sentences
uttered by a native American-English speaking female. Ten “base”
sentences varied in syntax and acoustic clarity (Figure 1A). For the
syntactic manipulation, each of the sentences was center-
embedded with an SR clause or an OR clause. Sentences with SR
and OR clauses consisted of identical words, the only difference
being the position of two words in each sentence. Each sentence
also contained a male and a female noun, but only one of them
performed the action of the sentence (e.g., “hug,” Figure 1A). The
gender of the characters was counterbalanced, as was the presence
of SR and OR clauses. For the acoustic manipulation, sentences
were processed by a 15-channel vocoder that reduced spectral
details, hampering acoustic clarity. Although sound quality was
substantially degraded, sentences were still intelligible, as would
be expected (Eisenberg, Shannon, Martinez, Wygonski, &
Boothroyd, 2000; Fishman, Shannon, & Slattery, 1997; Lee, Min,
Wingfield, Grossman, & Peelle, 2016). The stimuli comprised 40
sentences in total, 10 in each of four conditions: SR and OR in
clear and vocoded formats. A second set of 40 sentences was
created to counterbalance gender fully with the syntactic and
acoustic manipulations. The two sets were alternated from one
child to the next. The sentence stimuli were equalized in mean
RMS (root-mean-square) intensity.
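The article does not specify how the 15-channel vocoder was implemented (band edges, filter order, and carrier type are not reported). As an illustration only, the sketch below shows one common noise-vocoding recipe in Python; every parameter choice here is an assumption, and the point is simply that per-band amplitude envelopes modulating band-limited noise discard spectral detail while preserving the temporal envelope that keeps speech intelligible.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=15, lo=100.0, hi=7000.0):
    """Illustrative noise vocoder: split the input into log-spaced
    bands, extract each band's amplitude envelope, and use it to
    modulate band-limited noise. Parameter values are assumptions."""
    edges = np.geomspace(lo, hi, n_channels + 1)
    noise = np.random.default_rng(0).normal(size=len(signal))
    out = np.zeros(len(signal))
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(3, [low, high], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))   # amplitude envelope of the band
        carrier = sosfiltfilt(sos, noise)  # band-limited noise carrier
        out += envelope * carrier / (np.abs(carrier).max() + 1e-12)
    # match the original RMS level, as the stimuli were RMS-equalized
    return out * (np.sqrt(np.mean(signal**2)) /
                  (np.sqrt(np.mean(out**2)) + 1e-12))

fs = 22050
t = np.arange(fs) / fs
# a stand-in for a real recorded sentence: a slowly modulated tone
speech_like = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
vocoded = noise_vocode(speech_like, fs)
```

With real speech as input, the output sounds like noise-excited whispering: degraded in quality, as described above, yet still intelligible.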
In the rhythm test, 20 rhythm sequences were chosen from
Grahn and Brett (2009), with the original pure tones (sine waves)
replaced by woodblock sounds. The new rhythm stimuli (.wav files; 44.1 kHz; stereo) were obtained from the instrument source in Ableton Live music production software (www.ableton.com). Sound intensity was equalized based upon mean RMS. Half
of the rhythms consisted of seven sounds; the other half had eight
(Figure 1B). All sequences were structured so that the woodblock
sounds’ onsets were aligned with four beats (i.e., not syncopated),
in order to provide a strong sense of meter. The standard and
comparison rhythms varied on “different” trials, but even then they
had the same number of woodblock sounds.
Procedure. All children were administered the grammar and
rhythm tests, which were programmed on Open Sesame 3.1.6 and
run on desktop computers (Dell OptiPlex 7040). Both tests took
approximately 10 min. Sound stimuli were presented binaurally
through Bose Quiet Comfort 15 Acoustic Noise Canceling head-
phones. A parent completed a background questionnaire regarding
the child’s age, gender, language/music background, maternal
education, and any history of speech or language deficits and/or
therapy. The grammar test was always administered before the
rhythm test to avoid the potential influence of musical-rhythm
activity on subsequent grammar performance. For both tasks,
accuracy and response times (RTs) were recorded.
Children were first familiarized with the grammar task by un-
dergoing 14 practice trials. On each trial, they were instructed to
indicate the gender of the actor by pressing either the “male” (left
arrow) or “female” (right arrow) key as quickly and accurately as
possible (Figure 1A). During sentence presentation, children were
instructed to view the fixation cross on the monitor (Dell Profes-
sional P2417H 23.8” Screen LED-Lit) located approximately 50
cm in front of the child, and to hover their right-hand fingers over
the left and right arrows on the keyboard. During practice, there
was no restriction on response time, and children received feed-
back after each response. During the actual test session that fol-
lowed, children were encouraged to respond within 3 s and in-
structed to proceed to the next trial if no response was made during
this window. No feedback was given, but noncontingent verbal
encouragement was provided.
After a short break, children took the rhythm test (Figure 1C),
which had 20 trials. On each trial, children heard a pair of rhythm
sequences (Grahn & Brett, 2009) presented concurrently with
visual images of cartoon characters adapted from Gordon, Shivers,
et al. (2015). Five practice trials with feedback were administered
first to familiarize children with the test. On each trial, children
heard a rhythmic “standard” sequence while viewing a single
cartoon character playing drums. After a short delay (1500 ms), a
Syntax  Acoustic  Sentence                             Response
SR      Clear     Brothers that hug sisters are good   Male
SR      Noisy     Sisters that hug brothers are good   Female
OR      Clear     Brothers that sisters hug are good   Female
OR      Noisy     Sisters that brothers hug are good   Male
Figure 1. (A) Examples of sentence conditions from the grammar test.
(B) Examples of a 7-sound and 8-sound rhythm in the rhythm test. (C)
Schematic illustration of the rhythm test. See the online article for the color
version of this figure.
comparison rhythm sequence was presented with side-by-side pic-
tures of two cartoon characters, one being the same as the character
who had just appeared, the other being new. In other words, the
cartoon characters provided a visual analogue for children’s
“same” or “different” responses. During familiarization, there was
no restriction on response time, and feedback was given following
each response. For the test session, children were encouraged to
answer within 3 s and no feedback was provided except for
noncontingent verbal encouragement. Trials were fully random-
ized across participants.
Results and Discussion
Scores measuring performance accuracy on the grammar and rhythm tasks were converted to d′ scores for statistical analyses. Because perfect performance leads to an indeterminate d′, hit and false-alarm rates were modified slightly by adding 0.5 to the numerator and 1.0 to the denominator. This transformation has no effect on the rank order of scores (Thorpe, Trehub, Morrongiello, & Bull, 1988).
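The correction described above can be sketched in a few lines. The function name and the trial counts are illustrative, but the arithmetic follows the text: 0.5 is added to each rate's numerator and 1.0 to its denominator before the z transform, so a perfect score yields a finite d′.

```python
from statistics import NormalDist

def corrected_dprime(hits, misses, false_alarms, correct_rejections):
    """d' with the correction described in the text: 0.5 added to each
    numerator and 1.0 to each denominator, so that hit or false-alarm
    rates of exactly 0 or 1 never produce an infinite z score."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return z(hit_rate) - z(fa_rate)

# A child with perfect accuracy on 10 "same" and 10 "different" trials
# still receives a finite (if large) score:
perfect = corrected_dprime(10, 0, 0, 10)
```

Because the correction shifts every rate toward 0.5 by the same rule, it compresses scores slightly without reordering any two children, consistent with the rank-order claim above.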
Figures 2A and 2B illustrate descriptive statistics for the grammar task, separately for accuracy and RTs. Mean levels of performance accuracy were significantly above chance levels (d′ = 0) in each of the four conditions (Bonferroni-Holm corrected for four tests), ps < .05. A two-way analysis of variance (ANOVA) was used to analyze effects of syntax (SR or OR) and acoustic clarity (clear or vocoded) as repeated measures. For accuracy, a main effect of syntax, F(1, 67) = 132.11, p < .001, confirmed that children were more accurate with SR than with OR sentences. Similarly, a main effect of acoustic clarity, F(1, 67) = 13.02, p = .001, ηp² = .163, indicated higher accuracy for clear than for vocoded speech. There was no two-way interaction, F < 1. For RTs, there was a main effect of syntax, F(1, 67) = 50.21, p < .001, ηp² = .428, with performance on SR trials being faster than it was on OR trials. There was no main effect of acoustic clarity and no two-way interaction, ps > .1. Further analyses were restricted to performance accuracy.
Figures 2C and 2D illustrate descriptive statistics for the rhythm test. Accuracy was similar for sequences that had seven- or eight-sound rhythms, p > .1, correlated across conditions, r = .464, N = 68, p < .001, and substantially better than chance in both conditions, ps < .001. Similarly, RTs did not differ reliably across conditions, p > .1, but they were correlated across conditions, r = .377, N = 68, p = .002. Further analyses considered performance accuracy collapsed across the two conditions.
A linear mixed-effects regression (fit with the lmer function of the lme4 package in R, Version 3.4) was used to predict d′ scores in the four sentence conditions as a function of performance accuracy (d′) on the rhythm test. Covariates (fixed effects) included syntax (SR/OR), clarity (clear/vocoded), age (M = 11.13 years, SD = 2.7), duration of music training (M = 1.9 years, SD = 2.8), maternal education (M = 2.7, SD = 1.0), and gender (male/female). Intercepts for subjects were included as random effects. Syntax and clarity were included as random slopes. The results revealed that age was the most significant predictor of grammatical ability, as one would expect, with rhythm being the second-best predictor; higher rhythm scores predicted better grammar performance (see Table 1).
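The article fit this model with lme4 in R. Purely as an illustrative stand-in, a structurally similar model can be fit in Python with statsmodels on simulated data; every variable name and data-generating value below is hypothetical, and this sketch fits random intercepts only (the article's model also included random slopes for syntax and clarity, which statsmodels would express via `re_formula="~syntax+clarity"`).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 68  # children, each tested in four sentence conditions

subjects = np.repeat(np.arange(n), 4)
syntax = np.tile([0, 0, 1, 1], n)    # 0 = SR, 1 = OR
clarity = np.tile([0, 1, 0, 1], n)   # 0 = clear, 1 = vocoded
age = np.repeat(rng.uniform(7, 17, n), 4)
rhythm = np.repeat(rng.normal(1.5, 0.5, n), 4)

# Simulated grammar d': harder for OR and vocoded trials, better with
# age and rhythm skill, plus a per-child random intercept (all
# coefficients here are made up for the sketch).
child_intercept = np.repeat(rng.normal(0, 0.3, n), 4)
dprime = (1.3 - 1.2 * syntax - 0.3 * clarity + 0.15 * age
          + 0.3 * rhythm + child_intercept + rng.normal(0, 0.4, n * 4))

data = pd.DataFrame(dict(subject=subjects, syntax=syntax, clarity=clarity,
                         age=age, rhythm=rhythm, dprime=dprime))

# Fixed effects for the predictors, random intercepts per child.
model = smf.mixedlm("dprime ~ syntax + clarity + age + rhythm",
                    data, groups="subject")
result = model.fit()
```

Inspecting `result.params` recovers the simulated pattern: a negative syntax coefficient (OR harder than SR) and positive age and rhythm coefficients, mirroring the direction of the effects reported in Table 1.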
In sum, as children improved on the rhythm task, they also
improved on the grammar task. Although this finding was evident
when maternal education, age, duration of music training, and
gender were held constant, one potentially important though miss-
ing covariate was a measure of short-term or working memory. In
Experiment 2, we attempted to replicate and extend the findings of
Experiment 1 by adding a brief test of working memory.
Experiment 2
Time constraints of testing in a museum setting precluded the
possibility of administering a comprehensive measure of general
cognitive ability, such as IQ. We therefore opted to measure one
aspect of general cognition that might best account for perfor-
mance on the grammar and rhythm tests: auditory working mem-
ory. On same-different tasks of musical ability, performance tends
to be associated with scores on nonmusical tests of auditory
memory (Hausen, Torppa, Salmela, Vainio, & Särkämö, 2013).
Our inclusion of a measure of working memory as a covariate
could decrease the size of the partial association between rhythm
and grammar. Thus, we used G
Power 3.1 (Faul et al., 2009) to
determine that a sample of 92 participants was required to be 85%
certain of detecting a small- to medium-sized association (f
Cohen, 1988), with seven other variables held constant, #!.05.
Participants. Children were recruited as in Experiment 1.
Although we tested 136 children, 40 were excluded for the fol-
Figure 2. Descriptive statistics for Experiment 1: (A) Mean d′ in the four conditions of the grammar test (voc. = vocoded; OR = object-relative; SR = subject-relative); (B) Mean response times in the four conditions of the grammar test; (C) Mean d′ in the two conditions of the rhythm test; (D) Mean response times in the two conditions of the rhythm test. Error bars are standard errors calculated using the method from Loftus and Masson (1994).
lowing reasons: 32 performed significantly below chance levels in
at least one condition of the grammar test, 4 did not complete the
task, and 4 had speech problems or language delays. Thus, the final
sample comprised 96 children (56 females), whose mean age was 11.1 years (SD = 2.7). Only two children were bilingual. Two of the
the 96 caregivers did not provide information about maternal
education. These missing values were replaced by the mean.
Stimuli and measures. The stimuli were the same as in Ex-
periment 1 with one exception. For the grammar test, instead of
degrading the speech signal itself (via vocoding), the original
sentences were presented in a background of multitalker babble
that consisted of three male and three female talkers (adapted from
Sperry, Wiley, & Chial, 1997). MATLAB code was used to
combine the babble with each sentence at a signal-to-noise ratio
(SNR) of 2 dB. In pilot testing, this SNR manipulation rendered a
degree of difficulty comparable to the vocoding manipulation of
Experiment 1. An additional buffer of 0.5 s babble was included
before and after each sentence. The intensity (mean RMS) of all
stimuli was equated.
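The article used MATLAB to combine the babble with each sentence. A minimal numpy sketch of the same RMS-based mixing, assuming equal-length mono signals and ignoring the 0.5-s babble buffer, might look like this (the signal arrays here are random stand-ins for real recordings):

```python
import numpy as np

def mix_at_snr(sentence, babble, snr_db):
    """Scale the babble so that the sentence-to-babble RMS power ratio
    equals snr_db, then add the two. Assumes equal-length signals."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    target_babble_rms = rms(sentence) / (10 ** (snr_db / 20))
    return sentence + babble * (target_babble_rms / rms(babble))

rng = np.random.default_rng(0)
speech = rng.normal(0, 0.1, 48000)   # stand-in for a recorded sentence
babble = rng.normal(0, 0.3, 48000)   # stand-in for six-talker babble

mixed = mix_at_snr(speech, babble, 2.0)

# verify the achieved SNR of the mixture
noise_part = mixed - speech
snr = 20 * np.log10(np.sqrt(np.mean(speech**2)) /
                    np.sqrt(np.mean(noise_part**2)))
```

Because the babble is rescaled relative to each sentence's own RMS, every mixture lands at the same 2-dB ratio regardless of how loud the original recording was, which is why the stimuli could then be RMS-equalized as a final step.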
For our test of auditory working memory, we adapted Stern-
berg’s (1966) paradigm. On each trial, a group of three or four
novel synthetic sounds was presented followed by a probe sound.
Participants indicated whether the probe sound was old (i.e., pre-
sented in the group) or new. An example is provided at the
following link:!9TwDVn
Results and Discussion
As in Experiment 1, performance accuracy was indexed with d′ scores, including performance on the test of working memory. Figures 3A and 3B show descriptive statistics across the four different conditions of the grammar test. As in Experiment 1, performance was above chance levels in all four conditions, ps < .005. A two-way ANOVA revealed a main effect of syntax, F(1, 95) = 102.85, p < .001, ηp² = .520, with higher accuracy for SR than for OR sentences. Neither acoustic clarity, F < 1, nor the interaction between syntax and clarity, p > .1, was significant. For the ANOVA on RTs, there was again a main effect of syntax, F(1, 95) = 69.25, p < .001, ηp² = .422, with faster RTs for SR than for OR sentences, as well as a main effect of clarity, F(1, 95) = 11.88, p = .001, ηp² = .111, with faster RTs for sentences presented in quiet than in babble. There was no interaction between syntax and clarity, F < 1. In other words, background babble slowed down responding, but it did not make the children less accurate.
Figures 3C and 3D provide descriptive statistics for performance on the rhythm test. For accuracy, performance was much higher than chance levels in both conditions, ps < .001. Performance was correlated across the two conditions, r = .263, N = 96, p = .010, but better for 8-sound than for 7-sound rhythms, t(95) = 3.50, p = .001, ηp² = .114. RTs were also faster for 8-sound rhythms, t(95) = 3.51, p = .001, ηp² = .115, which were nevertheless correlated with 7-sound rhythms, r = .394, N = 96, p < .001.
For the test of auditory working memory, the children responded correctly on an average of 75.4% (SD = 14.0) of the trials, such that the mean d′ score was substantially better than chance, p < .001. Scores were correlated with mean performance on the grammar test, r = .296, N = 96, p = .003, but not with performance on the rhythm test, p > .4. When age was held constant, the association between working memory and grammar disappeared, p > .4.
As in Experiment 1, a linear mixed-effects regression was conducted to predict d′ in the four sentence conditions as a function of accuracy (d′) on the rhythm test. Other fixed effects were syntax (SR/OR), clarity (clear/babble), age, auditory working memory (M = 1.38, SD = 0.38), duration of music training (M = 1.2 years, SD = 2.1), maternal education (M = 2.7, SD = 1.1), and gender (male/female). Intercepts for subjects were included as random effects, as were the slopes of the syntax and clarity manipulations. Results are summarized in Table 2. After controlling for all other variables, performance on the grammar test improved dramatically with age, and significantly with rhythm scores. In short, the findings repli-
Table 1
Results From the Linear Mixed-Effects Model Predicting Performance on the Grammar Test in Experiment 1

Variable                  Estimate  Std. Error  t score  p value
Syntax (SR)                 1.287      0.111     11.580    <.001
Acoustic clarity (voc.)    −0.273      0.077     −3.530    <.001
Age                         0.147      0.030      4.987    <.001
Gender (M)                  0.052      0.154      0.337     .737
Maternal education          0.102      0.083      1.231     .222
Music training              0.047      0.035      1.357     .179
Rhythm discrimination       0.272      0.097      2.793     .007

Note. SR = subject-relative; voc. = vocoded; M = male.
Figure 3. Descriptive statistics for Experiment 2: (A) Mean d′ in the four conditions of the grammar test (bab. = babble; OR = object-relative; SR = subject-relative); (B) Mean response times in the four conditions of the grammar test; (C) Mean d′ in the two conditions of the rhythm test; (D) Mean response times in the two conditions of the rhythm test. Error bars are standard errors calculated using the method from Loftus and Masson (1994).
cated the association between rhythm and grammar found in Experiment 1, but with auditory working memory held constant as well.
In a final analysis, we collapsed the data sets from Experiments
1 and 2 in order to look at developmental trends more closely. Of
particular interest was whether the association between rhythm and
grammar would become weaker or stronger with age. We used
multiple regression to predict performance on the grammar task
(aggregated across the four conditions) as a function of age,
rhythm, and the interaction between age and rhythm (variables
centered). Additional control variables included gender (dummy
coded), maternal education, music training, and a dummy variable
that accounted for differences between the two experiments. The
model explained 40.3% of the variance in grammar performance,
R = .637, F(7, 156) = 15.05, p < .001.
Significant contributions were made by age, β = .532, t(156) =
8.24, p < .001, rhythm, β = .239, t(156) = 3.62, p < .001, and the
interaction between age and rhythm, β = .148, t(156) = 2.35, p =
.020. All control variables were nonsignificant, ps > .1. In short,
performance on the grammar task was better among older children
and among children with better performance on the rhythm task.
The positive slope for the interaction term indicated that the
association between rhythm and grammar became stronger as age increased.
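The moderation analysis described above can be sketched as follows. The data here are simulated and the coefficient values, sample size, and variable names are arbitrary choices for illustration, not values from the study; only the structure of the analysis (centering both predictors before forming their product term, then regressing grammar on age, rhythm, and their interaction) mirrors the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in data (illustrative only): grammar improves with
# age and rhythm, with a positive age x rhythm interaction built in.
n = 2000
age = rng.normal(12, 3, n)       # years
rhythm = rng.normal(0, 1, n)     # rhythm-discrimination score (z units)
age_c = age - age.mean()         # center predictors before forming
rhythm_c = rhythm - rhythm.mean()  # the interaction term
grammar = (0.5 * age_c + 0.3 * rhythm_c + 0.15 * age_c * rhythm_c
           + rng.normal(0, 1, n))

# Design matrix: intercept, centered main effects, and interaction.
X = np.column_stack([np.ones(n), age_c, rhythm_c, age_c * rhythm_c])
beta, *_ = np.linalg.lstsq(X, grammar, rcond=None)

resid = grammar - X @ beta
r_squared = 1 - resid.var() / grammar.var()
print(beta[1:], r_squared)  # main effects, interaction, variance explained
```

A positive coefficient on the product term, as in the reported analysis, means the rhythm slope grows with age.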
To test for possible interactions between age and variance in
performance due to the syntactic manipulation, we correlated age
with the difference between performance in the subject- and
object-relative conditions. The association was negative but very
weak, r!&.143, N!164, p!.034 (one-tailed; a positive
association would be uninterpretable). In other words, we found
weak evidence that the performance advantage for subject- over
object-relative sentences decreased as children became older and
more masterful with English grammar.
General Discussion
In two experiments, we explored the possibility of an asso-
ciation between musical rhythm skills and receptive grammar in
school-age children. In Experiment 1, rhythm discrimination
predicted the comprehension of syntactically complex sen-
tences (i.e., with embedded clauses), and this positive associa-
tion remained significant after accounting for individual differ-
ences in age, gender, music experience, and maternal education.
This finding was replicated in Experiment 2 while further
controlling for individual differences in working memory. After
collapsing both data sets, we found that the rhythm–grammar
link became stronger as children grew older. These data further
corroborate the association between rhythm and grammar in
typically developing children, and provide support for the pre-
vailing notion that shared neural resources are involved in some
aspects of music and language processing (Heard & Lee, 2020).
Despite ample documentation of a positive association be-
tween musical expertise and speech perception (see Schellen-
berg & Weiss, 2013, for review), it is less common to find links
between music and higher-order language processes, such as
grammar or reading. In one previous study, cited earlier,
6-year-olds' rhythm discrimination ability predicted their use of
expressive syntax (Gordon, Shivers, et al., 2015). Our results
extend this finding by documenting an association between
rhythm abilities and receptive grammar among children who
varied substantially in age. In both studies, there was no asso-
ciation between music training and grammar proficiency when
rhythm abilities were held constant, which raises the possibility
that the link may be mediated by preexisting neural traits. This
interpretation is inconsistent with proposals that music training
benefits speech and language skills (Kraus & Chandrasekaran,
2010; Patel, 2011), but consistent with other studies that failed
to find a positive influence of music training on speech percep-
tion (Boebinger et al., 2015; Ruggles, Freyman, & Oxenham,
2014; Swaminathan & Schellenberg, 2017, 2019) and reading
comprehension (Swaminathan & Schellenberg, 2019; Swami-
nathan, Schellenberg, & Venkatesan, 2018). One possibility is
that the discrepancy may be due to the differences in the way
that music training was measured, although Swaminathan and
her colleagues reported the same finding when they coded
music training in four different ways (Swaminathan & Schel-
lenberg, 2017, 2019; Swaminathan et al., 2018). Indeed, links
between music training and language abilities may be epiphe-
nomenal (Schellenberg, 2015), such that they disappear when
individual differences in musical aptitude or general cognitive
ability are held constant.
Although our findings are consistent with those of Gordon,
Shivers, et al. (2015), there are notable differences between the
two studies. The earlier study measured the use of morpho-
syntactic operations in expressive grammar, whereas we used a
receptive grammar test that required listeners to cope rapidly
with syntactic complexities while listening to a series of short
sentences. Although SR and OR sentences involved temporal
interruption due to the center-embedded clause, OR sentences
were more challenging than SR sentences due to the noncanoni-
cal ordering of the words, as evidenced by less accurate and
slower performance. Although most children use sentences with
relative clauses well before they enter school (Brown, 1971;
Corrêa, 1995; de Villiers, Tager Flusberg, Hakuta, & Cohen,
1979; Kidd & Bavin, 2002; Labelle, 1990; Sheldon, 1976,
1977), our data confirm that full mastery of these types of
sentences develops throughout the school-age period. With such
large developmental differences between 7 and 17 years of age,
however, it would be ideal to replicate the present results with even
larger samples of children. Nevertheless, our findings are in line
with those from a large and multinational online sample, which
documented that grammar development continued throughout
Table 2
Results From the Linear Mixed-Effects Model Predicting
Performance on the Grammar Test in Experiment 2

Variable                    Estimate    Std. Error    t score    p value
Syntax (SR)                    1.334        0.131      10.195      <.001
Acoustic clarity (babble)     −0.033        0.066      −0.495       .622
Age                            0.162        0.027       5.978      <.001
Gender (M)                     0.145        0.136       1.065       .737
Maternal education            −0.011        0.061      −0.182       .856
Music training                −0.026        0.033      −0.782       .436
Rhythm discrimination          0.211        0.097       2.168       .033
Working memory                 0.039        0.086       0.450       .654

Note. SR = subject-relative; M = male.
most of adolescence, plateauing at approximately 17 years of age
(Hartshorne et al., 2018).
In a recent study of 6- to 9-year-old children (Swaminathan &
Schellenberg, 2019), performance on a test of receptive grammar
was correlated positively with performance on a test of rhythm
discrimination, a finding that corroborates the present results.
Moreover, as in the present study, the association remained evident
after holding constant SES and general cognitive ability. Claims of
a special link between rhythm and grammar require evidence of
discriminant validity, however, which their data only partly sup-
ported. On the one hand, rhythm discrimination was better than
melody discrimination at predicting receptive grammar and speech
perception. On the other hand, scores on a test of memory for
music matched rhythm abilities in predictive power. These results
raise the possibility that the “special” status of rhythm in predict-
ing language abilities may emerge primarily when it is compared
directly with melody perception, a result that is now common in
studies of adults (Bhatara, Yeung, & Nazzi, 2015; Hausen et al.,
2013; Swaminathan & Schellenberg, 2017). Among children, how-
ever, things may be less clear-cut. Indeed, studies of very young
children have reported that melody is better than rhythm at pre-
dicting grammar (Politimou et al., 2019), and that training in
melody is superior to training in rhythm at improving phonological
awareness (Patscheke, Degé, & Schwarzer, 2019). In short, a
complete developmental account of associations between music
and language will require researchers to include multiple measures
in both domains.
One particularly positive aspect of the present study was its
relatively large samples of children compared to previous re-
search. In the study by Gordon, Jacobs, et al. (2015), only 25
children were tested, whereas we had a total of 164 children
across two experiments. A notable limitation of the present
study was that we did not include full-scale IQ to measure
general cognitive ability, due to the time constraint imposed by
testing in the museum (i.e., 25–30 min). Rather, in Experi-
ment 2, we administered a brief test of auditory working mem-
ory. Another potential limitation was that the experiment was
conducted in an open laboratory space where other experiments
were sometimes conducted simultaneously. Although our audi-
tory stimuli were delivered via noise-canceling headphones, the
children may still have been distracted periodically in the open
environment. Such distraction may have led to a higher exclu-
sion rate than anticipated due to misunderstanding of the task,
and/or loss of interest in both auditory experiments. In any
event, the cross-experiment replication and large samples pro-
vide clear evidence of a link between rhythm and receptive
grammar among school-age children, thereby extending evi-
dence of a link with expressive grammar in 6-year-olds (Gor-
don, Shivers, et al., 2015).
In addition to varying the degree of syntactic complexity, we
varied the acoustic clarity of the speech stimuli using two
strategies. In Experiment 1, we removed some of the spectral
details using a 15-channel vocoder, whereas in Experiment 2,
we added multitalker babble as background noise to mask the
speech energetically. The vocoded speech was challenging for
our child listeners, who were less accurate with vocoded than
with clear speech. By contrast, the multitalker babble had no
effect on accuracy, but it led to slower responding. Although we
attempted to equate the perceptual difficulty between the two
types of manipulation, we failed to do so in the sense that they
had differential effects on accuracy, but succeeded in the sense
that both manipulations affected the processing time required to
complete the task. As noted in the introduction, our rationale for
including acoustic clarity in the stimulus design was to explore
children’s language comprehension when both syntactic com-
plexity and acoustic clarity varied simultaneously. Indeed,
noise is detrimental to the comprehension of syntactically complex
sentences, such as those with object-relative clauses, particularly
for older adults (Wingfield et al., 2006). For our children, vocoded
object-relative sentences were the most difficult to comprehend
in Experiment 1, but the effects of syntax and clarity were
additive rather than interactive.
Tierney and Kraus (2015) suggest that different tests of
rhythm involve different aspects of cognitive and sensory pro-
cesses (e.g., working memory, sensory-motor integration, etc.).
The rhythm-discrimination test that we used allowed us to
measure children’s auditory sensitivity to moment-by-moment
temporal dynamics in musical sequences. Why would such
sensitivity predict children’s ability to comprehend syntacti-
cally and sequentially complex sentences? One likely possibil-
ity is that the neural mechanism responsible for analyzing
temporal structures extends to both musical and linguistic
events. That is, certain aspects of music and language pro-
cesses, such as the rhythm and syntactic tasks explored here, are
mediated by common temporal-processing mechanisms.
Emerging evidence demonstrates that temporal structures of
sentences affect syntactic analysis. As noted, individuals per-
form better on a syntactic task when constituent words of
a sentence are presented metrically with a regular beat
(Roncaglia-Denissen et al., 2013). They also display a concomitantly
reduced amplitude of the P600, an EEG hallmark of syntactic
processing, in response to metrical sentences, which suggests
that the established meter made syntactic processing less effortful.
Syntactic processing is also influenced by external rhythms
that are independent of the intrinsic temporal structure of given
sentences. Specifically, Przybylski et al. (2013) demonstrated
that children are better at detecting morphosyntactic violations
after listening to 32 s of rhythmically regular rather than irreg-
ular musical sequences. This finding was subsequently ex-
tended to a design that compared priming with regular beats to
arrhythmic environmental sounds (Bedoin et al., 2016). Evi-
dence of discriminant validity comes from results showing that
regular-beat priming improves grammar performance but not
mathematical ability (Chern et al., 2018).
According to dynamic attending theory (Jones, 1976; Jones &
Boltz, 1989), neural oscillations for syntactic operations become
more efficient when a regular rhythm serves as a prime. The
observed phenomena could nonetheless be independent of atten-
tional modulation; rather, temporal processing could be enhanced
via the sensorimotor network. Future research is warranted to
elucidate further the detailed neurofunctional and neuroanatomical
mechanisms that explain the link between rhythm and grammar. In
any case, our data provide behavioral support for the prevailing
notion that similar or identical neural mechanisms are used for
rule-based temporal processing in language and music (Heard &
Lee, 2020).
References

Anvari, S. H., Trainor, L. J., Woodside, J., & Levy, B. A. (2002). Relations among musical skills, phonological processing, and early reading ability in preschool children. Journal of Experimental Child Psychology, 83.
Barbu, S., Nardy, A., Chevrot, J.-P., Guellaï, B., Glas, L., Juhel, J., & Lemasson, A. (2015). Sex differences in language across early childhood: Family socioeconomic status does not impact boys and girls equally. Frontiers in Psychology, 6, 1874.
Bedoin, N., Brisseau, L., Molinier, P., Roch, D., & Tillmann, B. (2016). Temporally regular musical primes facilitate subsequent syntax processing in children with specific language impairment. Frontiers in Neuroscience, 10, 245.
Bhatara, A., Yeung, H. H., & Nazzi, T. (2015). Foreign language learning in French speakers is associated with rhythm perception, but not with melody perception. Journal of Experimental Psychology: Human Perception and Performance, 41, 277–282.
Boebinger, D., Evans, S., Rosen, S., Lima, C. F., Manly, T., & Scott, S. K. (2015). Musicians and non-musicians are equally adept at perceiving masked speech. The Journal of the Acoustical Society of America, 137, 378–387.
Bohn, K., Knaus, J., Wiese, R., & Domahs, U. (2013). The influence of rhythmic (ir)regularities on speech processing: Evidence from an ERP study on German phrases. Neuropsychologia, 51, 760–771.
Brown, H. D. (1971). Children’s comprehension of relativized English sentences. Child Development, 42, 1923–1936.
Canette, L.-H., Fiveash, A., Krzonowski, J., Corneyllie, A., Lalitte, P., Thompson, D., ... Tillmann, B. (2020). Regular rhythmic primes boost P600 in grammatical error processing in dyslexic adults and matched controls. Neuropsychologia, 138, 107324.
Carr, K., White-Schwoch, T., Tierney, A. T., Strait, D. L., & Kraus, N. (2014). Beat synchronization predicts neural speech encoding and reading readiness in preschoolers. Proceedings of the National Academy of Sciences of the United States of America, 111, 14559–14564.
Chern, A., Tillmann, B., Vaughan, C., & Gordon, R. L. (2018). New evidence of a rhythmic priming effect that enhances grammaticality judgments in children. Journal of Experimental Child Psychology, 173.
Chomsky, N. (2014). Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Routledge.
Corrêa, L. M. S. (1995). An alternative assessment of children’s comprehension of relative clauses. Journal of Psycholinguistic Research, 24.
Corriveau, K. H., & Goswami, U. (2009). Rhythmic motor entrainment in children with speech and language impairments: Tapping to the beat. Cortex, 45, 119–130.
Corriveau, K., Pasquini, E., & Goswami, U. (2007). Basic auditory processing skills and specific language impairment: A new look at an old hypothesis. Journal of Speech, Language, and Hearing Research, 50, 647–666.
Dąbrowska, E. (2012a). Different speakers, different grammars: Individual differences in native language attainment. Linguistic Approaches to Bilingualism, 2, 219–253.
Dąbrowska, E. (2012b). Explaining individual differences in linguistic proficiency. Linguistic Approaches to Bilingualism, 2, 324–335.
Dąbrowska, E. (2018). Experience, aptitude and individual differences in native language ultimate attainment. Cognition, 178, 222–235.
Dąbrowska, E. (2019). Experience, aptitude, and individual differences in linguistic attainment: A comparison of native and nonnative speakers. Language Learning, 69, 72–100.
Dąbrowska, E., & Street, J. (2006). Individual differences in language attainment: Comprehension of passive sentences by native and non-native English speakers. Language Sciences, 28, 604–615.
Degé, F., & Schwarzer, G. (2011). The effect of a music program on phonological awareness in preschoolers. Frontiers in Psychology, 2.
de Villiers, J. G., Tager Flusberg, H. B., Hakuta, K., & Cohen, M. (1979). Children’s comprehension of relative clauses. Journal of Psycholinguistic Research, 8, 499–518.
Eisenberg, L. S., Shannon, R. V., Martinez, A. S., Wygonski, J., & Boothroyd, A. (2000). Speech recognition with reduced spectral cues as a function of age. The Journal of the Acoustical Society of America, 107, 2704–2710.
Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41, 1149–1160.
Fishman, K. E., Shannon, R. V., & Slattery, W. H. (1997). Speech recognition as a function of the number of electrodes used in the SPEAK cochlear implant speech processor. Journal of Speech, Language, and Hearing Research, 40, 1201–1215.
Flaugnacco, E., Lopez, L., Terribili, C., Montico, M., Zoia, S., & Schön, D. (2015). Music training increases phonological awareness and reading skills in developmental dyslexia: A randomized control trial. PLoS ONE, 10, e0138715.
François, C., Chobert, J., Besson, M., & Schön, D. (2013). Music training for the development of speech segmentation. Cerebral Cortex, 23, 2038–2043.
François, C., Grau-Sánchez, J., Duarte, E., & Rodriguez-Fornells, A. (2015). Musical training as an alternative and effective method for neuro-education and neuro-rehabilitation. Frontiers in Psychology, 6.
Frizelle, P., Thompson, P. A., McDonald, D., & Bishop, D. V. M. (2018). Growth in syntactic complexity between four years and adulthood: Evidence from a narrative task. Journal of Child Language, 45, 1174–.
Gordon, R. L., Jacobs, M. S., Schuele, C. M., & McAuley, J. D. (2015). Perspectives on the rhythm-grammar link and its implications for typical and atypical language development. Annals of the New York Academy of Sciences, 1337, 16–25.
Gordon, R. L., Shivers, C. M., Wieland, E. A., Kotz, S. A., Yoder, P. J., & Devin McAuley, J. (2015). Musical rhythm discrimination explains individual differences in grammar skills in children. Developmental Science, 18, 635–644.
Goswami, U., Gerson, D., & Astruc, L. (2010). Amplitude envelope perception, phonology and prosodic sensitivity in children with developmental dyslexia. Reading and Writing, 23, 995–1019.
Grahn, J. A., & Brett, M. (2009). Impairment of beat-based rhythm discrimination in Parkinson’s disease. Cortex, 45, 54–61.
Hartshorne, J. K., Tenenbaum, J. B., & Pinker, S. (2018). A critical period for second language acquisition: Evidence from 2/3 million English speakers. Cognition, 177, 263–277.
Hausen, M., Torppa, R., Salmela, V. R., Vainio, M., & Särkämö, T. (2013). Music and speech prosody: A common rhythm. Frontiers in Psychology, 4, 566.
Heard, M., & Lee, Y. S. (2020). Shared neural resources of rhythm and syntax: An ALE meta-analysis. Neuropsychologia, 137, 107284.
Herschensohn, J. (2009). Fundamental and gradient differences in language development. Studies in Second Language Acquisition, 31, 259–289.
Hoff, E. (2003). The specificity of environmental influence: Socioeconomic status affects early vocabulary development via maternal speech. Child Development, 74, 1368–1378.
Huss, M., Verney, J. P., Fosker, T., Mead, N., & Goswami, U. (2011). Music, rhythm, rise time perception and developmental dyslexia: Perception of musical meter predicts reading and phonology. Cortex, 47, 674–689.
Jones, M. R. (1976). Time, our lost dimension: Toward a new theory of perception, attention, and memory. Psychological Review, 83, 323–355.
Jones, M. R., & Boltz, M. (1989). Dynamic attending and responses to time. Psychological Review, 96, 459–491.
Kidd, E., & Bavin, E. L. (2002). English-speaking children’s comprehension of relative clauses: Evidence for general-cognitive and language-specific constraints on development. Journal of Psycholinguistic Research, 31, 599–617.
Kotz, S. A., & Gunter, T. C. (2015). Can rhythmic auditory cuing remediate language-related deficits in Parkinson’s disease? Annals of the New York Academy of Sciences, 1337, 62–68.
Kraus, N., & Chandrasekaran, B. (2010). Music training for the development of auditory skills. Nature Reviews Neuroscience, 11, 599–605.
Labelle, M. (1990). Predication, WH-movement, and the development of relative clauses. Language Acquisition, 1, 95–119.
Lee, Y.-S., Min, N. E., Wingfield, A., Grossman, M., & Peelle, J. E. (2016). Acoustic richness modulates the neural networks supporting intelligible speech processing. Hearing Research, 333, 108–117.
Loban, W. (1976). Language development: Kindergarten through grade twelve (NCTE Committee on Research Report No. 18). Retrieved from https://eric.ed.gov/?id=ED128818
Loftus, G. R., & Masson, M. E. (1994). Using confidence intervals in within-subject designs. Psychonomic Bulletin & Review, 1, 476–490.
Milovanov, R., Huotilainen, M., Esquef, P. A. A., Alku, P., Välimäki, V., & Tervaniemi, M. (2009). The role of musical aptitude and language skills in preattentive duration processing in school-aged children. Neuroscience Letters, 460, 161–165.
Moreno, S., Marques, C., Santos, A., Santos, M., Castro, S. L., & Besson, M. (2009). Musical training influences linguistic abilities in 8-year-old children: More evidence for brain plasticity. Cerebral Cortex, 19, 712–.
Moritz, C., Yampolsky, S., Papadelis, G., Thomson, J., & Wolf, M. (2013). Links between early rhythm skills, musical training, and phonological awareness. Reading and Writing, 26, 739–769.
Nippold, M. A. (2009). School-age children talk about chess: Does knowledge drive syntactic complexity? Journal of Speech, Language, and Hearing Research, 52, 856–871.
Nippold, M. A., Mansfield, T. C., & Billow, J. L. (2007). Peer conflict explanations in children, adolescents, and adults: Examining the development of complex syntax. American Journal of Speech-Language Pathology, 16, 179–188.
Nowak, M. A., Komarova, N. L., & Niyogi, P. (2001). Evolution of universal grammar. Science, 291, 114–118.
Ozernov-Palchik, O., Wolf, M., & Patel, A. D. (2018). Relationships between early literacy and nonlinguistic rhythmic processes in kindergarteners. Journal of Experimental Child Psychology, 167, 354–368.
Patel, A. D. (2011). Why would musical training benefit the neural encoding of speech? The OPERA hypothesis. Frontiers in Psychology, 2, 142.
Patscheke, H., Degé, F., & Schwarzer, G. (2019). The effects of training in rhythm and pitch on phonological awareness in four- to six-year-old children. Psychology of Music, 47, 376–391.
Politimou, N., Dalla Bella, S., Farrugia, N., & Franco, F. (2019). Born to speak and sing: Musical predictors of language development in preschoolers. Frontiers in Psychology, 10, 948.
Przybylski, L., Bedoin, N., Krifi-Papoz, S., Herbillon, V., Roch, D., Léculier, L., ... Tillmann, B. (2013). Rhythmic auditory stimulation influences syntactic processing in children with developmental language disorders. Neuropsychology, 27, 121–131.
Roncaglia-Denissen, M. P., Schmidt-Kassow, M., & Kotz, S. A. (2013). Speech rhythm facilitates syntactic ambiguity resolution: ERP evidence. PLoS ONE, 8, e56000.
Ruggles, D. R., Freyman, R. L., & Oxenham, A. J. (2014). Influence of musical training on understanding voiced and whispered speech in noise. PLoS ONE, 9, e86980.
Schellenberg, E. G. (2015). Music training and speech perception: A gene–environment interaction. Annals of the New York Academy of Sciences, 1337, 170–177.
Schellenberg, E. G., & Weiss, M. W. (2013). Music and cognitive abilities. In D. Deutsch (Ed.), The psychology of music (3rd ed., pp. 499–550). New York, NY: Elsevier.
Schmidt-Kassow, M., & Kotz, S. A. (2008). Entrainment of syntactic processing? ERP-responses to predictable time intervals during syntactic reanalysis. Brain Research, 1226, 144–155.
Sheldon, A. (1976). The acquisition of relative clauses in French and English: Implications for language learning universals. Retrieved from https://eric.ed.gov/?id=ED132846
Sheldon, A. (1977). On strategies for processing relative clauses: A comparison of children and adults. Journal of Psycholinguistic Research, 6.
Spencer, S., Clegg, J., & Stackhouse, J. (2012). Language and disadvantage: A comparison of the language abilities of adolescents from two different socioeconomic areas. International Journal of Language & Communication Disorders, 47, 274–284.
Sperry, J. L., Wiley, T. L., & Chial, M. R. (1997). Word recognition performance in various background competitors. Journal of the American Academy of Audiology, 8, 71–80.
Sternberg, S. (1966). High-speed scanning in human memory. Science, 153, 652–654.
Swaminathan, S., & Schellenberg, E. G. (2017). Musical competence and phoneme perception in a foreign language. Psychonomic Bulletin & Review, 24, 1929–1934.
Swaminathan, S., & Schellenberg, E. G. (2019). Musical ability, music training, and language ability in childhood. Journal of Experimental Psychology: Learning, Memory, and Cognition. Advance online publication.
Swaminathan, S., Schellenberg, E. G., & Venkatesan, K. (2018). Explaining the association between music training and reading in adults. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44.
Tabri, D., Chacra, K. M. S. A., & Pring, T. (2011). Speech perception in noise by monolingual, bilingual and trilingual listeners. International Journal of Language & Communication Disorders, 4, 411–422.
Thorpe, L. A., Trehub, S. E., Morrongiello, B. A., & Bull, D. (1988). Perceptual grouping by infants and preschool children. Developmental Psychology, 24, 484–491.
Tierney, A., & Kraus, N. (2015). Evidence for multiple rhythmic skills. PLoS ONE, 10, e0136645.
Wingfield, A., McCoy, S. L., Peelle, J. E., Tun, P. A., & Cox, L. C. (2006). Effects of adult aging and hearing loss on comprehension of rapid speech varying in syntactic complexity. Journal of the American Academy of Audiology, 17, 487–497.
Received October 22, 2019
Revision received April 16, 2020
Accepted April 17, 2020
... Beyond potential rhythm deficits in children with DLD, there is accumulating evidence that rhythm and language processing are tightly linked in individuals with typical language development (Carr et al., 2014;Gordon et al., 2015;Lee et al., 2020;Patel, 2003;Persici et al., 2021;see Fiveash et al., 2021 and Nayak et al., accepted, for reviews). The mechanisms underlying these associations, however, are not yet well understood. ...
... Previous research shows associations between receptive grammar and rhythm abilities (Lee et al., 2020;Swaminathan & Schellenberg, 2019). As we had data on receptive grammar measured by the TOLD-P:4 Syntactic Understanding subtest from all DLD children and a subset of children with TD (n = 62) as a part of screening, we decided to run an exploratory analysis to test the association of receptive grammar with tapping measures. ...
Full-text available
Children with Developmental Language Disorder (DLD) show relative weaknesses on rhythm tasks beyond their characteristic linguistic impairments. The current study compares preferred tempo and the width of an entrainment region for 5- to 7-year-old typically developing children and children with DLD and considers the associations with rhythm aptitude and expressive grammar skills in the two populations. Preferred tempo was measured with a spontaneous motor tempo task (tapping tempo at a comfortable speed) and the width (range) of an entrainment region was measured by the difference between the upper (slow) and lower (fast) limits of tapping a rhythm normalized by an individual’s spontaneous motor tempo. Data from N = 16 children with DLD and N = 114 children with TD showed that whereas entrainment-region width did not differ across the two groups, slowest motor tempo, the determinant of the upper (slow) limit of the entrainment region, was at a faster tempo in children with DLD vs. TD. Entrainment-region width was positively associated with rhythm aptitude and receptive grammar even after taking into account potential confounding factors, whereas expressive grammar did not show an association with any of the tapping measures. Preferred tempo was not associated with any study variables after including covariates in the analyses. These results motivate future neuroscientific studies of low-frequency neural oscillatory mechanisms as the potential neural correlates of entrainment-region width and their associations with musical rhythm and spoken language processing in children with typical and atypical language development.
... Click here to access/download;Manuscript;MAPLE_Manuscript_R2_no UNDERSTANDING MUSICALITY-LANGUAGE LINKS abilities and grammatical skills (Gordon, Shivers, et al., 2015;Lee et al., 2020), reading-related skills (Ozernov-Palchik et al., 2018;Woodruff Carr et al., 2014), prosodic perception (Hausen et al., 2013;Morrill et al., 2015), and speech discrimination (Swaminathan & Schellenberg, 2020). ...
... In keeping with these observations, preschoolers who had stronger abilities to synchronize to an external beat, had higher scores on tests of reading and sentence imitation (Woodruff-Carr et al., 2014). Moreover, Lee et al. (2020) also found a correlation between rhythm discrimination and receptive grammar via a language comprehension task that required participants to identify the agent of the sentence, in a wider age range of participants (7-17-yearolds), while controlling for working memory, age, and musical training, exhibiting that the relationship between musical rhythm sensitivity and grammar cuts across the developmental arc. ...
Full-text available
Using individual differences approaches, a growing body of literature finds positive associations between musicality and language-related abilities, complementing prior findings of links between musical training and language skills. Despite these associations, musicality has been often overlooked in mainstream models of individual differences in language acquisition and development. To better understand the biological basis of these individual differences, we propose the Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework. This novel integrative framework posits that musical and language-related abilities likely share some common genetic architecture (i.e., genetic pleiotropy) in addition to some degree of overlapping neural endophenotypes, and genetic influences on musically and linguistically enriched environments. Drawing upon recent advances in genomic methodologies for unraveling pleiotropy, we outline testable predictions for future research on language development and how its underlying neurobiological substrates may be supported by genetic pleiotropy with musicality. In support of the MAPLE framework, we review and discuss findings from over seventy behavioral and neural studies, highlighting that musicality is robustly associated with individual differences in a range of speech-language skills required for communication and development. These include speech perception-in-noise, prosodic perception, morphosyntactic skills, phonological skills, reading skills, and aspects of second/foreign language learning. Overall, the current work provides a clear agenda and framework for studying musicality-language links using individual differences approaches, with an emphasis on leveraging advances in the genomics of complex musicality and language traits.
... compared with r = .02 to .14 for the other activity exposures) extends findings from a breadth of literature highlighting widespread covariance in the general population between musical abilities (e.g., rhythm and tonal-melodic skills) and language-related traits, including speech perception skills (Borrie et al. 2018; Hutchins 2018; Morrill et al. 2015; Slater and Kraus 2016), morphosyntactic skills (Cohrdes et al. 2016; Gordon et al. 2015; Lee et al. 2020; Nitin et al., in press; Swaminathan and Schellenberg 2020), reading-related abilities (Ozernov-Palchik et al. 2018; Woodruff Carr et al. 2014), and second or foreign language learning (Slevc and Miyake 2006). These common associations between musical and language-related skills have been predicted to be genetically and phenotypically associated across the lifespan (e.g., the MAPLE Framework: Nayak et al. 2022). ...
Music engagement is a powerful, influential experience that often begins early in life. Music engagement is moderately heritable in adults (~41–69%), but fewer studies have examined genetic influences on childhood music engagement, including their association with language and executive functions. Here we explored genetic and environmental influences on music listening and instrument playing (including singing) in the baseline assessment of the Adolescent Brain Cognitive Development study. Parents reported on their 9–10-year-old children’s music experiences (N = 11,876 children; N = 1543 from twin pairs). Both music measures were explained primarily by shared environmental influences. Instrument exposure (but not frequency of instrument engagement) was associated with language skills (r = .27) and executive functions (r = .15–.17), and these associations with instrument engagement were stronger than those for music listening, visual art, or soccer engagement. These findings highlight the role of shared environmental influences between early music experiences, language, and executive function, during a formative time in development.
... Importantly, covarying non-verbal IQ and age did not impact these associations, thus indicating that the dynamics of the relationship between musical rhythm and spoken grammar skills are not driven by prosodic perception, verbal working memory, general cognition, or non-specific developmental effects. These findings replicate and extend other recent work that has shown a relationship between rhythm-related skills and spoken grammar task performance, above and beyond general cognitive effects or working memory [2][3][4] . ...
A growing number of studies have shown a connection between rhythmic processing and language skill. It has been proposed that domain-general rhythm abilities might help children to tap into the rhythm of speech (prosody), cueing them to prosodic markers of grammatical (syntactic) information during language acquisition, thus underlying the observed correlations between rhythm and language. Working memory processes common to task demands for musical rhythm discrimination and spoken language paradigms are another possible source of individual variance observed in musical rhythm and language abilities. To investigate the nature of the relationship between musical rhythm and expressive grammar skills, we adopted an individual differences approach in N = 132 elementary school-aged children ages 5–7, with typical language development, and investigated prosodic perception and working memory skills as possible mediators. Aligning with the literature, musical rhythm was correlated with expressive grammar performance (r = 0.41, p < 0.001). Moreover, musical rhythm predicted mastery of complex syntax items (r = 0.26, p = 0.003), suggesting a privileged role of hierarchical processing shared between musical rhythm processing and children’s acquisition of complex syntactic structures. These relationships between rhythm and grammatical skills were not mediated by prosodic perception, working memory, or non-verbal IQ; instead, we uncovered a robust direct effect of musical rhythm perception on grammatical task performance. Future work should focus on possible biological endophenotypes and genetic influences underlying this relationship.
... Due to their rhythm and order, they are popular at any time and remain on our lips even after many years. Studies show that students with a better ability to understand and perform rhythmic music have improved language and grammar skills (Chern et al., 2018; Lee et al., 2020). In live performances, ensembles of drums are used to create rhythmic sound, and when they are played in a certain rhythm, brain activity takes place at a rate similar to that of the strokes on the drum heads. ...
Many consider physics to be a highly mathematics-oriented subject to study. To counter this opinion and also to generate a deep interest in physics, a course on ‘Physics of Music’ can be introduced at any level of a curriculum. We present a simple and practical way of introducing this topic, even for school-level students. Teachers, along with students, can visualise and feel physics throughout the course.
... The result of stable phrasal CTS from 5 years on is also compatible with the view that phrasal CTS reflects lexical and syntactic processing. Indeed, typically developing children of that age possess basic syntactic skills (93) such as the ability to understand relative clauses (94,95). In addition, the stability of phrasal CTS across the age range we investigated indicates that potential methodological differences between children and adults-such as head size and head movements-did not significantly impact our CTS estimates. ...
Humans’ extraordinary ability to understand speech in noise relies on multiple processes that develop with age. Using magnetoencephalography (MEG), we characterize the underlying neuromaturational basis by quantifying how cortical oscillations in 144 participants (aged 5 to 27 years) track phrasal and syllabic structures in connected speech mixed with different types of noise. While the extraction of prosodic cues from clear speech was stable during development, its maintenance in a multi-talker background matured rapidly up to age 9 and was associated with speech comprehension. Furthermore, while the extraction of subtler information provided by syllables matured at age 9, its maintenance in noisy backgrounds progressively matured until adulthood. Altogether, these results highlight distinct behaviorally relevant maturational trajectories for the neuronal signatures of speech perception. In accordance with grain-size proposals, neuromaturational milestones are reached increasingly late for linguistic units of decreasing size, with further delays incurred by noise.
... Functionally, "links" between individuals' musical and speech-language abilities are seen across music domains (e.g., rhythm; tonality), and speech-language domains (e.g., prosody; grammar; reading-related; speech-perception-in-noise), and at both lower and higher ends of the spectrum of ability. For example, rhythmic sensitivity (e.g., the ability to discriminate between rhythmic patterns) is strongly correlated with morphosyntactic abilities (Gordon, Shivers, et al., 2015; Lee et al., 2020) and phonological skills (Politimou et al. 2019), and poorer rhythm skills are implicated in a host of speech-language disorders (Ladanyi et al., 2020). Further, individual differences in musicality across a wide continuum are correlated with variability in phonemic and reading-related skills in a second or foreign language (Christiner & Reiterer, 2018; Foncubierta et al., 2020; Milovanov & Tervaniemi, 2011). ...
Research has demonstrated that musical abilities are often linked with language and literacy skills, including in children with disorders of speech, language, and reading. For example, children with Developmental Language Disorder (DLD) and developmental dyslexia exhibit impairments in various musical perception and production skills. Research in child language development, cognitive neuroscience, and communication disorders has sought to discover how music can be used as a tool to modulate or improve language and reading performance. This chapter reviews current evidence for music-based intervention in individuals with DLD and developmental dyslexia, and discusses potential cognitive mechanisms driving effective intervention efforts. The chapter further explores the potential clinical applications for music. Keeping in mind the central role of music in human development and culture, future directions for exploring music-based language and literacy interventions are outlined.
... Specifically, unexpected rhythms in speech and music have been shown to modulate linguistic and musical syntax processing [39], and it is hypothesized that such disruptions are due to shared cognitive resources between linguistic syntactic and musical rhythmic processing [40]. In support of this shared resource hypothesis, normally developing children who showed better rhythm discrimination also performed better overall on a grammar test [41]. Likewise, in children with developmental language disorders, external temporally regular rhythmic stimulation improves judgments of grammatical syntax [42]. ...
The acoustic structure of birdsong is spectrally and temporally complex. Temporal complexity is often investigated in a syntactic framework focusing on the statistical features of symbolic song sequences. Alternatively, temporal patterns can be investigated in a rhythmic framework that focuses on the relative timing between song elements. Here, we investigate the merits of combining both frameworks by integrating syntactic and rhythmic analyses of Australian pied butcherbird (Cracticus nigrogularis) songs, which exhibit organized syntax and diverse rhythms. We show that rhythms of the pied butcherbird song bouts in our sample are categorically organized and predictable by the song’s first-order sequential syntax. These song rhythms remain categorically distributed and strongly associated with the first-order sequential syntax even after controlling for variance in note length, suggesting that the silent intervals between notes induce a rhythmic structure on note sequences. We discuss the implication of syntactic–rhythmic relations as a relevant feature of song complexity with respect to signals such as human speech and music, and advocate for a broader conception of song complexity that takes into account syntax, rhythm, and their interaction with other acoustic and perceptual features.
Binaural beats—an auditory illusion produced when two pure tones of slightly different frequencies are dichotically presented—have been shown to modulate various cognitive and psychological states. Here, we investigated the effects of binaural beat stimulation on auditory sentence processing that required interpretation of syntactic relations (Experiment 1) or an evaluation of syntactic well-formedness (Experiment 2) with a large cohort of healthy young adults (N = 200). In both experiments, participants performed a language task after listening to one of four sounds (i.e., between-subjects design): theta (7 Hz), beta (18 Hz), and gamma (40 Hz) binaural beats embedded in music, or the music only (baseline). In Experiment 1, 100 participants indicated the gender of a noun linked to a transitive action verb in spoken sentences containing either a subject- or object-relative center-embedded clause. We found that both beta and gamma binaural beats yielded better performance, compared to the baseline, especially for syntactically more complex object-relative sentences. To determine whether the binaural beat effect can be generalized to another type of syntactic analysis, we conducted Experiment 2, in which another 100 participants indicated whether or not there was a grammatical error in spoken sentences. However, none of the binaural beats yielded better performance for this task, indicating that the benefit of beta and gamma binaural beats may be specific to the interpretation of syntactic relations. Together, we demonstrate, for the first time, the positive impact of binaural beats on auditory language comprehension. Both theoretical and practical implications are discussed.
Good musical abilities are typically considered to be a consequence of music training, such that they are studied in samples of formally trained individuals. Here, we asked what predicts musical abilities in the absence of music training. Participants with no formal music training (N = 190) completed the Goldsmiths Musical Sophistication Index, measures of personality and cognitive ability, and the Musical Ear Test (MET). The MET is an objective test of musical abilities that provides a Total score and separate scores for its two subtests (Melody and Rhythm), which require listeners to determine whether standard and comparison auditory sequences are identical. MET scores had no associations with personality traits. They correlated positively, however, with informal musical experience and cognitive abilities. Informal musical experience was a better predictor of Melody than of Rhythm scores. Some participants (12%) had Total scores higher than the mean from a sample of musically trained individuals (≥ 6 years of formal training), tested previously by Correia et al. (2022). Untrained participants with particularly good musical abilities (top 25%, n = 51) scored higher than trained participants on the Rhythm subtest and similarly on the Melody subtest. High-ability untrained participants were also similar to trained ones in cognitive ability, but lower in the personality trait openness-to-experience. These results imply that formal music training is not required to achieve musician-like performance on tests of musical and cognitive abilities. They also suggest that informal music practice and music-related predispositions should be considered in studies of musical expertise.
The relationship between musical and linguistic skills has received particular attention in infants and school-aged children. However, very little is known about pre-schoolers. This leaves a gap in our understanding of the concurrent development of these skills during development. Moreover, attention has been focused on the effects of formal musical training, while neglecting the influence of informal musical activities at home. To address these gaps, in Study 1, 3- and 4-year-old children (n = 40) performed novel musical tasks (perception and production) adapted for young children in order to examine the link between musical skills and the development of key language capacities, namely grammar and phonological awareness. In Study 2, we investigated the influence of informal musical experience at home on musical and linguistic skills of young pre-schoolers, using the same evaluation tools. We found systematic associations between distinct musical and linguistic skills. Rhythm perception and production were the best predictors of phonological awareness, while melody perception was the best predictor of grammar acquisition, a novel association not previously observed in developmental research. These associations could not be explained by variability in general cognitive functioning, such as verbal memory and non-verbal abilities. Thus, selective music-related auditory and motor skills are likely to underpin different aspects of language development and can be dissociated in pre-schoolers. We also found that informal musical experience at home contributes to the development of grammar. An effect of musical skills on both phonological awareness and language grammar is mediated by home musical experience. These findings pave the way for the development of dedicated musical activities for pre-schoolers to support specific areas of language development.
Regular musical rhythms orient attention over time and facilitate processing. Previous research has shown that regular rhythmic stimulation benefits subsequent syntax processing in children with dyslexia and specific language impairment. The present EEG study examined the influence of a rhythmic musical prime on the P600 late evoked-potential, associated with grammatical error detection for dyslexic adults and matched controls. Participants listened to regular or irregular rhythmic prime sequences followed by grammatically correct and incorrect sentences. They were required to perform grammaticality judgments for each auditorily presented sentence while EEG was recorded. In addition, tasks on syntax violation detection as well as rhythm perception and production were administered. For both participant groups, ungrammatical sentences evoked a P600 in comparison to grammatical sentences and its mean amplitude was larger after regular than irregular primes. Peak analyses of the P600 difference wave confirmed larger peak amplitudes after regular primes for both groups. They also revealed overall a later peak for dyslexic participants, particularly at posterior sites, compared to controls. Results extend rhythmic priming effects on language processing to underlying electrophysiological correlates of morpho-syntactic violation detection in dyslexic adults and matched controls. These findings are interpreted in the theoretical framework of the Dynamic Attending Theory (Jones, 1976, 2019) and the Temporal Sampling Framework for developmental disorders (Goswami, 2011).
A growing body of evidence has highlighted behavioral connections between musical rhythm and linguistic syntax, suggesting that these may be mediated by common neural resources. Here, we performed a quantitative meta-analysis of neuroimaging studies using activation likelihood estimate (ALE) to localize the shared neural structures engaged in a representative set of musical rhythm (rhythm, beat, and meter) and linguistic syntax (merge, movement, and reanalysis). Rhythm engaged a bilateral sensorimotor network throughout the brain consisting of the inferior frontal gyri, supplementary motor area, superior temporal gyri/temporoparietal junction, insula, the intraparietal lobule, and putamen. By contrast, syntax mostly recruited the left sensorimotor network including the inferior frontal gyrus, posterior superior temporal gyrus, premotor cortex, and supplementary motor area. Intersections between rhythm and syntax maps yielded overlapping regions in the left inferior frontal gyrus, left supplementary motor area, and bilateral insula: neural substrates involved in temporal hierarchy processing and predictive coding. Together, this is the first neuroimaging meta-analysis providing detailed anatomical overlap of sensorimotor regions recruited for musical rhythm and linguistic syntax.
We tested theories of links between musical expertise and language ability in a sample of 6- to 9-year-old children. Language ability was measured with tests of speech perception and grammar. Musical expertise was measured with a test of musical ability that had 3 subtests (melody discrimination, rhythm discrimination, and long-term memory for music) and as duration of music training. Covariates included measures of demographics, general cognitive ability (IQ, working memory), and personality (openness-to-experience). Music training was associated positively with performance on the grammar test, musical ability, IQ, openness, and age. Musical ability predicted performance on the tests of speech perception and grammar, as well as IQ, working memory, openness, and age. Regression analyses, with other variables held constant, revealed that language abilities had significant partial associations with musical ability and IQ but not with music training. Rhythm discrimination was a better predictor of language skills compared with melody discrimination, but memory for music was equally good. Bayesian analyses confirmed the results from the standard analyses. The implications of the findings are threefold: (a) musical ability predicts language ability, and the association is independent of IQ and other confounding variables; (b) links between music and language appear to arise primarily from preexisting factors and not from formal training in music; and (c) evidence for a special link between rhythm and language may emerge only when rhythm discrimination is compared with melody discrimination. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
This study compares the performance of native speakers and adult second language (L2) learners on tasks tapping proficiency in grammar, vocabulary, and collocations. In addition, data were collected on several predictors of individual differences in linguistic attainment, including some related to language experience (print exposure, education, and—for L2 speakers—length of residence and use of English) and some relating to an individual's aptitude to learn (language analytic ability and nonverbal intelligence) as well as age and (for L2 speakers) age of arrival. As anticipated, the native group outperformed L2 speakers on all three language measures, although the effect sizes were much larger for collocations than for grammar or vocabulary. Crucially, there were vast individual differences in both groups and considerable overlap between groups, particularly for grammar. Regression analyses revealed both similarities and differences between native and nonnative speakers in which nonlinguistic measures best predict performance on the language tasks.
Several recent studies have demonstrated that some native speakers do not fully master some fairly basic grammatical constructions of their language, thus challenging the widely-held assumption that all native speakers converge on the same grammar. This study investigates the extent of individual differences in adult native speakers' knowledge of a range of constructions as well as vocabulary size and collocational knowledge, and explores the relationship between these three aspects of linguistic knowledge and four nonlinguistic predictors: nonverbal IQ, language aptitude, print exposure and education. Individual differences in grammatical attainment were comparable to those observed for vocabulary and collocations; furthermore, performance on tests assessing speakers' knowledge of these three aspects of language was correlated (rs from 0.38 to 0.57). Two of the nonlinguistic measures, print exposure and education, were found to contribute to variance in all three language tests, albeit to different extents. In addition, nonverbal IQ was found to be relevant for grammar and vocabulary, and language aptitude for grammar. These findings are broadly compatible with usage-based models of language and problematic for modular theories.
Studies examining productive syntax have used varying elicitation methods and have tended to focus on either young children or adolescents/adults, so we lack an account of syntactic development throughout middle childhood. We describe here the results of an analysis of clause complexity in narratives produced by 354 speakers aged from four years to adulthood using the Expressive, Receptive, and Recall of Narrative Instrument (ERRNI). We show that the number of clauses per utterance increased steadily through this age range. However, the distribution of clause types depended on which of two stories was narrated, even though both stories were designed to have a similar story structure. In addition, clausal complexity was remarkably similar regardless of whether the speaker described a narrative from pictures, or whether the same narrative was recalled from memory. Finally, our findings with the youngest children showed that the task of generating a narrative from pictures may underestimate syntactic competence in those aged below five years.
Musical rhythm and the grammatical structure of language share a surprising number of characteristics that may be intrinsically related in child development. The current study aimed to understand the potential influence of musical rhythmic priming on subsequent spoken grammar task performance in children with typical development who were native speakers of English. Participants (ages 5-8 years) listened to rhythmically regular and irregular musical sequences (within-participants design) followed by blocks of grammatically correct and incorrect sentences upon which they were asked to perform a grammaticality judgment task. Rhythmically regular musical sequences improved performance in grammaticality judgment compared with rhythmically irregular musical sequences. No such effect of rhythmic priming was found in two nonlinguistic control tasks, suggesting a neural overlap between rhythm processing and mechanisms recruited during grammar processing. These findings build on previous research investigating the effect of rhythmic priming by extending the paradigm to a different language, testing a younger population, and employing nonlanguage control tasks. These findings of an immediate influence of rhythm on grammar states (temporarily augmented grammaticality judgment performance) also converge with previous findings of associations between rhythm and grammar traits (stable generalized grammar abilities) in children. Taken together, the results of this study provide additional evidence for shared neural processing for language and music and warrant future investigations of potentially beneficial effects of innovative musical material on language processing.
Children learn language more easily than adults, though when and why this ability declines have been obscure for both empirical reasons (underpowered studies) and conceptual reasons (measuring the ultimate attainment of learners who started at different ages cannot by itself reveal changes in underlying learning ability). We address both limitations with a dataset of unprecedented size (669,498 native and non-native English speakers) and a computational model that estimates the trajectory of underlying learning ability by disentangling current age, age at first exposure, and years of experience. This allows us to provide the first direct estimate of how grammar-learning ability changes with age, finding that it is preserved almost to the crux of adulthood (17.4 years old) and then declines steadily. This finding held not only for “difficult” syntactic phenomena but also for “easy” syntactic phenomena that are normally mastered early in acquisition. The results support the existence of a sharply-defined critical period for language acquisition, but the age of offset is much later than previously speculated. The size of the dataset also provides novel insight into several other outstanding questions in language acquisition.