Maarten Renckens
Leo De Raeve 2, 3
Erik Nuyts 1
María Pérez Mena 1
Ann Bessemans 1
Visual prosody
supports reading aloud expressively
for deaf readers
1 Hasselt University in
collaboration with PXL University
College, Belgium;
2 Independent Information Centre
about Cochlear Implantation
(ONICI) Zonhoven, Belgium;
3 KIDS (Royal Institute for the
Deaf) Hasselt, Belgium.
Type is a wonderful tool to represent speech visually. Therefore, it can
provide deaf individuals the information that they miss auditorily. Still, type
does not represent all the information available in speech: it misses an exact
indication of prosody. Prosody is the motor of expressive speech through
speech variations in loudness, duration, and pitch. The speech of deaf read-
ers is often less expressive because deafness impedes the perception and
production of prosody. Support can be provided by visual cues that provide
information about prosody—visual prosody—supporting both the training
of speech variations and expressive reading.
We will describe the inuence of visual prosody
on the reading expressiveness of deaf readers between age 7 and 18 (in
this study, deaf readers’ means persons with any kind of hearing loss, with
or without hearing devices, who still developed legible speech). A total of
seven cues visualize speech variations: a thicker/thinner font corresponds
with a louder/quieter voice; a wider/narrower font relates to a slower/faster
speed; a font raised above/lowered below the baseline suggests a higher/
lower pitch; wider spaces between words suggest longer pauses.
We evaluated the seven cues with questionnaires
and a reading aloud test. Deaf readers relate most cues to the intended
speech variation and read most of them aloud correctly. Only the raised cue
is dicult to connect to the intended speech variation at rst, and a faster
speed and lower pitch prove challenging to vocalize. Despite those two
diculties, this approach to visual prosody is eective in supporting speech
prosody. The applied materials can form an example for typographers, type
designers, graphic designers, teachers, speech therapists, and researchers
developing expressive reading materials.
Keywords
type design,
visual prosody,
prosody,
deaf readers,
expressive reading,
reading comprehension
1. Introduction
1.1. About prosody
Type visually provides information to deaf indi-
viduals that they otherwise do not hear. Therefore, type can be a useful tool
for deaf individuals who are able to read. However, it remains an incomplete
representation of speech: type often only focuses on “which words are said,”
but less on “how words are said.”
How words are said is referred to as “speech
expression” (Veenendaal, Groen & Verhoeven, 2014), and its motor is speech
prosody: variations in the speech features loudness, duration, and pitch
(Bessemans et al., 2019; Chan, 2018; Soman, 2017; Belyk & Brown, 2014;
Karpiński, 2012; Sitaram & Mostow, 2012; Nakata, Trehub & Kanda, 2011;
Patel & Furr, 2011; Patel & McNab, 2010; Wagner & Watson, 2010). Prosody
plays an important role in language comprehension: it distinguishes homo-
graphs such as PREsent versus preSENT. It can add information to what is
said, such as statements, questions, sarcasm, surprise, and it can influence
the meaning of a sentence. For example, the sentence “That old man cannot
hear you very well” has a different meaning if “cannot” or “you” is empha-
sized (Seidenberg, 2017; Wagner & Watson, 2010; Carlson, 2009; Verstraete,
1999; Guberina & Asp, 1981 for similar examples).
Prosody is required for all applications of language,
even in uent reading where a proper expression (of which prosody is
the motor) is as important as a proper rate (speed), and accuracy (correct
decoding of the letters) (Groen, Veenendaal & Verhoeven, 2019; Reading
Rockets, 2019; International Literacy Association (ILA), 2018; Hasbrouck
and Glaser, 2012; Paige, Rasinski, Magpuri-Lavell, 2012; Sitaram & Mostow,
2012; National Institute of Child Health and Human Development (NIH),
2000; National Assessment of Educational Progress (NAEP), 1995). During
prosodic reading, a total of six prosodic characteristics are defined: pausal
intrusions, length of phrases, appropriateness of phrases, final phrase
lengthening, terminal intonation contours (e.g., lowering the voice after
a group of words), and stress (Kuhn & Stahl, 2003 citing Dowhower, 1991).
Correctly applying prosody in reading can improve word recognition, read-
ing accuracy, reading speed, and comprehension skills, because expressive
readers can segment text into meaningful units (ILA, 2018; Young-Suk Grace
2015; Veenendaal, Groen, Verhoeven, 2014; Binder et al., 2013, Carlson, K.
2009; Miller & Schwanenflugel, 2006; Ashby, 2006). A highly fluent reading
leads to better reading motivation and comprehension (Hasbrouck & Glaser,
2012), while a less developed prosody is related to poorer comprehension
(Groen, Veenendaal & Verhoeven, 2019; Gross et al., 2013 citing National
Research Council, 1999). Kuhn and Stahl (2003, citing Schreiber, 1987)
suggest that speech is easier to understand than reading because of its
prosody. Even during silent reading, prosody plays an active role in fluent
reading and reading comprehension (Breen et al., 2016; Leinenger, 2015;
Young-Suk Grace, 2015; Gross et al., 2013; Ashby, 2006; Fodor, 1998). For
example, prosody, indicated by periods, commas, question marks, exclama-
tion marks, or other prosody indicators, influences how we read and clarifies
the intention of the sentence.
1.2. Speech prosody influenced
by hearing loss
Despite its importance, prosody is challenging to
master by almost all individuals with hearing loss (Hutter, 2015; Marx et al.,
2014; Stiles & Nadler, 2013; See et al., 2013; Wang et al., 2013; Nakata, Trehub
& Kanda, 2012; Vander Beken et al., 2010; Lyxell et al., 2009; Peng, Tomblin &
Turner, 2008; Markides, 1983). It is important to look at their use of prosody
within the broader context of their hearing problems.
The terms deaf, hard of hearing, hearing loss, or
hearing impaired refer to a suboptimal perception of sounds from the envi-
ronment. The cause may be a deficit within the outer ear, the inner ear,
a damaged nerve, and/or brain damage. Hearing loss can occur before or
after mastering basic understanding of oral communication (pre- and post-
lingual deafness).
All kinds of hearing loss limit the
information perceived from the environment, which results in fewer stimuli
to develop cognition. So, individuals with hearing loss often experience a
disadvantage in their general learning process. If not countered by the use
of sign language, hearing devices, or very good support, a hearing loss can
cause a delay or suboptimal development in:
cognition and language development (see, for example, De
Raeve, 2014; Boons et al, 2012, 2013a, 2013b, 2013c; Fagan &
Pisoni, 2010);
speech (see, for example, Baudonck et al., 2015; Limb & Roy,
2014) and use of prosody (see, for example, De Clerck et al.,
2018; Øydis, 2013);
reading uency (see, for example, Mayer et al., 2016; Luckner
& Urlbach, 2011). Not all individuals with hearing loss can read
well because they are not able to relate letters to the cor-
responding sounds.
Most of these steps are non-chronologically inter-
woven, and mastering each of them is a process taking several years.
The group of individuals with hearing loss is a very
diverse group with much differentiation. Individuals with the same amount
of hearing loss and who receive the same support can still have a different
cognitive development. Due to the hindrances in connecting to the ‘hearing
society’, individuals with hearing loss often rely on sign language, and they
developed their own Deaf culture (doof.nl, 2017; Fevlado, 2013).
The sooner intervention takes place, the lesser
the delays in (speech) development. Nowadays, hearing devices such as co-
chlear implants (CI’s) mostly restore the provision of sound stimuli, enabling
the development of spoken language at an age-appropriate level (Hearing
Team rst, 2017). Still, the sound output of the devices does not perfectly
resemble the sound perceived by a hearing individual (Scarbel et al., 2012),
and each device needs to be calibrated for the individual wearing it, the
“fitting”. For a hearing individual, the sound received by the cochlear implant
could be described as a low-quality sound (listen to the Daily Mail Online
(2014) at https://www.dailymail.co.uk/sciencetech/article-2636415/What-
deaf-hear-Audio-le-reveals-s-like-listen-world-using-cochlear-implant.
html for an auditory example made by Michael Dorman, an Arizona State
University professor of speech and hearing science). After a while, the brain
will adjust to the input and process the sound information in the best pos-
sible way. The improved hearing status improves phonological awareness,
resulting in an increase in general literacy (Mayer et al., 2016; Harris, 2015;
Dillon, Cowan & Ching, 2013).
So, while speech has become more accessible
than ever before for a part of this group, learning to speak fluently remains
an adventure that not all of them bring to a positive end. The impact on the
topic at hand, namely prosody, is that neither prosodic perception nor pro-
sodic production by individuals with hearing loss is similar to that of their
hearing peers. While some children with implants even produce speech
containing minimal to no differences compared to typical hearing children
(Boons et al., 2013b, See et al., 2013; Boons, 2013; Vanherck & Vuegen, 2009),
the perception of prosody is hindered by limitations of the hearing devices.
Production of prosody is in general flatter than that of their hearing peers.
Their speech still can be reliably distinguished from their hearing peers
(Boonen et al., 2017). The achieved speech quality depends on the hearing
threshold, the hearing devices, the applied therapy, and more. One of the
aspects that dier compared with their hearing peers is their production of
prosody, even for the younger generation of deaf individuals who are wear-
ing hearing devices from an early age (De Clerck et al., 2018; Øydis, 2014;
Wang et al., 2013; Chin, Bergeson & Phan, 2013). Compared to typical hear-
ing children, children with cochlear implants demonstrate a smaller pitch
range in their utterances (De Clerck et al., 2018; Øydis, 2014); a lower pitch
modulation (Wang et al., 2013); a divergent nasal resonance (Baudonck et al.,
2015) or a lesser application of prosody in general (Chin, Bergeson & Phan,
2013). Several of those imperfections relate to the three speech variations
loudness, duration, and pitch, which form the base of prosody. To optimize
the speech and prosody of children with hearing loss, training sessions that
include singing, vocal exercises, movements with the body, and more are
default practice in their education (KIDS, n.d.; Advanced bionics, unpub-
lished; Vander Beken et al., 2010; Asp, 2006; Guberina & Asp, 1981). On the
one hand, these sessions ensure that the vocal cords receive the training
they need, while on the other hand, the children become aware of the vocal
variations, namely “how words are said.”
1.3. Evaluating visual prosody
for deaf readers
While prosody is essential during the reading
process, type lacks exact representations for several prosodic functions. If
type were to implement more indicators of the intended prosody, suggesting
“how words are said,” deaf readers would gain more access to the prosody
they partly miss.
Visualizations of prosody in type already exist and
are mostly intended to encourage speech variations (Bessemans et al., 2019;
Patel & McNab, 2010; Staum, 1987; van Uden, 1973). These typographic visu-
alizations of speech prosody are referred to as visual prosody. Visual prosody
adds visual cues to type, each cue hinting at a particular aspect of prosody,
such as one of the three individual speech variations loudness, duration, or
pitch. Several centers of expertise in deaf education apply visual prosody in
teaching materials on a pragmatic basis to exercise vocal variations with
deaf readers (KIDS, n.d.; Advanced bionics, unpublished; Staum, 1987; van
Uden, 1973). For some examples, see Figure 1. When visual prosody is ap-
plied, most existing cues are relatively intuitive, meaning that their intention
can be spontaneously interpreted (Shaikh, 2009; Lewis and Walker, 1989).
For example, a bold typeface relates to louder sounds (Shaikh, 2009). The
intuitive character provides information about “how easily a reader would
apply the prosodic cues as intended” if no explanation is provided.
Figure 1.a, b & c.
Some schools in (Dutch)
deaf education apply visual
prosody as a teaching method.
1. Horizontally stretched
words (lllllaaaaannnngggg,
lllloooonnnnngggg) correlate with
a longer duration. 2. Bolder and
larger text correlates with more
loudness, and 3. A higher position,
vertically stretched text, a rising
line, or music notes correlate with
a higher pitch in the voice (left
to right: image from KIDS, n.d.;
image from Advanced Bionics,
unpublished; image based on van
Uden, 1973).
Sadly enough, empirical information about how
deaf readers handle prosodic cues and/or read them aloud is not yet avail-
able. We, as design researchers specialized in typography and type design,
are interested in how changes in typography influence reading and how
designers can optimize those changes. Therefore, this research aims to
optimize visual prosody for deaf readers between 7 and 18 years old in the
Dutch language (Flanders region in Belgium and the Netherlands) by test-
ing several cues representing all speech variations.
Visual prosody is positioned as a visual manner to
encourage expression, and where necessary, to teach how speech prosody
should sound. In this study, we will only focus on the reading expressiveness
(the application of prosody while reading) and not yet on reading fluency.
While we’re very much aware that visual prosody is not a magical solution
to solve all problems that readers with hearing loss encounter (cognition,
language development, speech, prosody, and fluent reading), we believe vi-
sual prosody could support part of this audience when developing reading
and speech skills. If successful, these cues could be applied in later studies
evaluating speech training over a longer time period, or in studies aiming to
improve their reading fluency and thus overall literacy.
In this study, visual prosody is approached with
the hypothesis, “Visual prosody leads to more vocal prosody while reading
aloud, and inuences reading comprehension of deaf readers between 7 and 18.
In this study, ‘deaf readers’ refers to ‘deaf students who have developed spo-
ken language that is distinct enough to be understood, in a signed bilingual
educational setting. Three research objectives, with multiple sub-questions,
evaluate the hypothesis:
1. “ How intuitive is visual prosody for deaf readers between
7 and 18?”
a. “What is the reader’s perceived intention of
visual prosody?”
b. “How noticeable are the prosodic cues?”
c. “How well does each deaf reader relate the pro-
sodic cues to the intended speech variation?”
2. “Does visual prosody increase the speech variations of deaf
readers between 8 and 18?”
3. “Does visual prosody influence the understood meaning of
a sentence for deaf readers between 8 and 18?”
While evaluating these sub-questions, we col-
lected as much relevant information as possible about each participant: the
amount of hearing loss, pre/post-lingual deaf, the native language, hearing
device, etc., to evaluate their influence on how visual prosody is handled.
2. Methodology
Participants
Because of the great diversity in this audience
(some describe it as the most diverse audience in terms of perceived prob-
lems, provided support, and thus personal development), we set out three
principles which the participants had to meet.
Firstly, because this is the first known study to
evaluate the prosodic cues for individuals with hearing loss empirically, we
are interested in the cues’ effect on their reading aloud. Thus, all partici-
pants in this study should be able to read aloud. In these times (and in the
Western world), this is not a big issue. This research took place primarily in
Flanders, the Dutch-speaking region of Belgium, where the care for individu-
als with hearing loss is well established. This region has offered universal neonatal
hearing screening to all newborns since 1998, within the first three weeks
of life (Vander Beken, 2010). Children who were referred by the universal
hearing screening test (the Maico test) are redirected to a referral center for
audiological diagnostics and early intervention. The broad application of
neonatal hearing screening ensures that most of the individuals with hear-
ing loss receive early supervision, and fast implantation is recommended
and refunded by the Flemish government. Most of the younger individuals
with hearing loss now wear cochlear implants. At the secondary education
level in Belgium, more than 70 percent of students with hearing loss attend
regular schools (De Raeve et al., 2012). They receive additional support from
schools for the deaf for several hours per week in the form of speech therapy
or extra exercises, a sign or speech-to-text interpreter, etc. This way, most
individuals with hearing loss in Flanders can develop spoken language to a
certain degree (independently of whether spoken or signed language is
their native and/or most used language).
Secondly, we determined that deaf readers aged
7 to 18 could benet most from visual prosody during their educational
career. Therefore, it was a prerequisite in this study that participants
mastered technical reading, in the form of the automatized decoding of
letters, which is needed before fluent reading can be established (Groen,
Veenendaal & Verhoeven, 2019; Miller & Schwanenflugel, 2006). Decoding
text is the act of recognizing letter sequences as a word (technical reading).
They rst start to relate letters to a sound, to crack the code, learning which
letter belongs to which speech sound. When the technical reading skills are
fully acquired, learners can spend more attention on prosody. The age of 7
became the youngest age to participate because, at this age, readers pos-
sess the technical reading skills to read sentences as a whole unit instead of
separate words. The target age was limited up to the age of 18: at this age,
compulsory schooling in the Dutch-speaking regions ends.
Thirdly, readers with cognitive disorders of any kind that heavily
impede their learning development were excluded. Participants’ characteris-
tics were carefully checked, and participants who did not meet the require-
ments were excluded from the research.
Within these boundaries, a very heterogeneous
group of 38 deaf readers participated in this study. Their characteristics are
described in Table 1.
They were on average 12.21 years old; the youngest one was
7.2y and the oldest one 19.4y (ages calculated on June 30th, 2018, which was
not the date on which all participants were tested).
One participant had two deaf parents; two participants had a
deaf mother; the other 35 participants had hearing parents.
Thirty-three participants were prelingual deaf. Two were post-
lingual deaf (after the age of 3), and from two participants, this
information was not available.
Nineteen participants had a bilateral hearing loss > 90 dB on
one or more sides; 15 participants had a hearing loss between
89 and 27 dB. For 4 participants, this information was not
available.
Thirty-six participants were wearing CI’s, hearing aids, or a
combination. Two participants did not wear a hearing device
because those participants only had mild hearing loss.
A total of 30 participants were educated in a regular school.
Only 8 were educated in a special school for the deaf. In
general, that is a fair reflection of the target audience, of which
70% attend secondary school within regular education (De
Raeve et al., 2012). The smaller number of participants in spe-
cial secondary education correlates with the trend that deaf
participants move to regular education after primary school.
Table 1.
Information about the diverse group of participants in this study. Columns: code number | primary (Pri) or secondary (Sec) | special education (SE) or support network (SN) | class | gender (M/F) | age at June 30th, 2018 | deaf | knows sign language? | native language (NL, FR, VGT, NGT, …) | months when becoming deaf/hearing impaired (0 = born deaf) | left: threshold level | left: hearing aid? none/hearing aid (HA)/CI | right: threshold level | right: hearing aid? none/hearing aid (HA)/CI. Rows with fewer entries indicate information that was not available.
HI002 | Pri | SN | 6 | F | 11y9m | N | N | NL | 48 | 120 | CI | 120 | CI
HI003 | Pri | SN | 5 | F | 11y2m | N | NL | 36 | HA | HA
HI004 | Pri | SN | 5 | F | 11y5m | Y | NL | 0 | 91 | HA | 91 | HA
HI005 | Sec | SN | 2 | F | 14y10m | N | NL | 0 | 120 | CI | 120 | CI
HI006 | Sec | SN | 6 | F | 18y3m | N | Y | NL | 0 | 120 | CI | 120 | CI
HI007 | Sec | SN | 6 | M | 17y11m | N | N | NL | 0 | 50 | HA | 50 | HA
HI008 | Sec | SN | 6 | M | 17y7m | N | NL | 0 | average | HA | average | HA
HI009 | Sec | SN | 3 | M | 16y3m | N | NL | 0 | 47 | HA | 45 | HA
HI010 | Sec | SN | 3 | M | 15y0m | N | NL | 0 | N | light loss | Y
HI011 | Sec | SN | 2 | F | 14y0m | N | N | NL | 0 | N | N
HI012 | Sec | SN | 1 | M | 14y4m | Y | Maroccan (learned NL) | 0 | 27 | CI | 120 | CI
HI013 | Sec | SN | 4 | F | 15y7m | N | Y | NL | 30 | 50 | HA | 55 | HA
HI014 | Pri | SE | M | 9y9m | Y | NL | 0 | 110 | CI | 115 | CI
HI015 | Pri | SN | 6 | F | 16y2m | little | NL | 0 | 90 | HA (but not always wearing) | 120 | CI
HI016 | Sec | SE | 1 | M | 13y9m | N/little according to him | NL | 0 | 113 | CI | 113 | CI
HI017 | Pri | SE | F | 11y7m | Y | NL | 18 | 95 | CI | 115 | CI
HI018 | Pri | SN | 5 | M | 12y | Y
HI019 | Pri | SE | 3 | F | 19y4m | Y | 24 | 67 | HA | 75 | HA
HI020 | Pri | SE | 5 | M | 13y2m | N | Dutch with gestures | Turkish (learned NL) | 0 | 72 | HA | 75 | HA
HI021 | Pri | SE | 5 | F | 10y6m | Dutch with gestures | NL | 6 | 77 | HA | 78 | HA
HI024 | Pri | SN | 4 | M | 9y9m | N | NL | 0 | 83 | HA | 72 | HA
HI027 | Sec | SN | 5 | F | 16y8m | N | N | NL | 0 | 100 | HA | 88 | HA
HI028 | Sec | SN | 2 | F | 14y0m | N | N | NL | 0 | 110 | CI | 91,65 | HA
HI029 | Pri | None | 2 | M | 8y0m | Dutch with gestures | NL | 32 | HA | HA
HI030 | Sec | SN | 5 | M | 17y7m | N | Turkish & NL mixed | 0 | average | HA | average | HA
HI031 | Sec | SN | 2 | F | 14y4m | N | NL | 0 | 120 | CI | 120 | CI
HI032 | Pri | SN | 5 | M | 11y0m | Mother | Y | NL | 0 | 118 | CI | 118 | CI
HI033 | Pri | SN | 3 | M | 9y4m | Y | NL | 0 | 120 | CI | 120 | CI
HI034 | Pri | SN | 2 | F | 8y1m | N | NL | 0 | 63 | Y | 62 | Y
HI035 | Pri | SN | 6 | F | 11y9m | little | NL | 0 | 120 | CI | 120 | CI
HI036 | Sec | SN | 1 | F | 12y | Little | NL | 120 | CI | 120 | CI
HI037 | Pri | SN | 6 | F | 12y2m | Y | NL | 0 | 100 | CI | 100 | CI
HI038 | Pri | SN | 6 | F | 12y2m | N | Y | NL | 0 | 100 | CI | 100 | CI
HI041 | Pri | SN | 2 | M | 7y9m | NL | 0 | 100 | CI | 100 | CI
HI042 | Pri | SN | 4 | M | 7y6m | NL | 0 | 71 | HA | 71 | HA
HI045 | Sec | SN | 4 | F | 16y5m | N | NL | 0 | 33 | HA (but not always wearing) | 58,8 | HA (but not always wearing)
HI046 | Pri | SE | 6 | M | 10y7m | N | N | NL | 4 | 65 | HA | 55 | HA
HI047 | Pri | SE | 5 | F | 7y2m | N | N | NL | 1 | 30 | 30
Visual prosody applied
in this study
The prosodic cues as applied in Bessemans et al.
(2019) formed the basis for this study. These cues were adjusted to represent
both directions of the speech variations: the thickness of the letters cor-
relates with a louder/softer voice; the width of the letters correlates with
the duration of what is said; the vertical height of the letters correlates with
the height of the pitch. Additionally, a larger space connects to the duration,
correlating with a longer pause. All applied fonts are shown in Figure 2.
Note that not all cues are symmetrical. For exam-
ple, where the raised cue was moved up 250 units, the lower cue was only
moved 125 units. Design experiments showed that moving letters down
below the straight baseline was more noticeable than moving letters above
the often curved x-height. The advantage of less vertical displacement is
avoiding collisions between lines of text.
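For readers who want to prototype comparable materials without the custom fonts described here, the mapping from speech variation to typographic parameter can be mocked up with ordinary web styling. The sketch below is not the Matilda type system used in this study; it is a minimal Python illustration with assumed CSS stand-ins (the 0.25em/0.125em offsets only loosely mirror the 250/125-unit asymmetry mentioned above).

```python
# Rough mock-up (not the study's Matilda fonts): approximate the prosodic cues
# with inline CSS. Assumed stand-ins: font-weight for loudness, horizontal
# scaling for duration, vertical offsets for pitch, extra spaces for pauses.
CUE_STYLES = {
    "thicker":  "font-weight: 700;",                               # louder
    "thinner":  "font-weight: 300;",                               # quieter
    "wider":    "display: inline-block; transform: scaleX(1.3);",  # slower
    "narrower": "display: inline-block; transform: scaleX(0.8);",  # faster
    "raised":   "position: relative; top: -0.25em;",               # higher pitch
    "lowered":  "position: relative; top: 0.125em;",               # lower pitch
}

def mark(word: str, cue: str) -> str:
    """Wrap one word in a span carrying the style of the given prosodic cue."""
    return f'<span style="{CUE_STYLES[cue]}">{word}</span>'

# "De arme man bleef alleen achter." with 'bleef' slowed down and a longer
# pause (extra spaces) before 'alleen'.
sentence = f"De arme man {mark('bleef', 'wider')}&nbsp;&nbsp; alleen achter."
print(f"<p>{sentence}</p>")
```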
Figure 2.A, B, C & D.
Words set dierently within a
sentence form prosodic cues that
indicate a specic speech variation.
For some cues, gradations were
implemented. The Dutch sentence
translates to “The poor man stayed
behind, alone.”
A.
B.
C.
D.
De arme man
bleef
alleen achter.
De arme man bleef alleen achter.
De arme man bleef alleen achter.
De arme man bleef alleen achter.
‘Full thinner’ for a quieter vocalization
‘Half thinner’ for a quieter vocalization
‘Normal’ for a normal vocalization
‘Thicker’ for a louder vocalization
De arme man bleef alleen achter.
De arme man bleef alleen achter.
De arme man bleef alleen achter.
De arme man bleef alleen achter.
De arme man bleef alleen achter.
De arme man bleef alleen achter.
‘Full oblique’ for a faster vocalization
‘Half oblique’ for a faster vocalization
‘Full narrower’ for a faster vocalization
‘Half narrower’ for a faster vocalization
‘Normal’ for a normal vocalization
‘Wider’ for a slower vocalization
De arme man bleef alleen achter.
De arme man bleef alleen achter.
De arme man bleef alleen achter.
‘Lower’ for a lower pitch
‘Normal’ for a normal vocalization
‘Higher’ for a higher pitch
De arme man bleef alleen achter.
De arme man bleef alleen achter.
De arme man bleef alleen achter.
‘Normal’ for a normal vocalization
‘Double space’ for a longer pause
‘Triple space’ for a longer pause
The test materials
Test material for Objective 1:
“How intuitive is visual prosody
for deaf readers between 8 and 18?”
The sub-question “What is the reader’s perceived in-
tention of visual prosody?” was evaluated by means of a short video fragment
including subtitles showing prosodic cues [Figure 3]. In the booklet, the
participants were asked, “Why do some words look different in the sentence,
according to you?” The intended answer required a link to speech variations
or a reference to speech expression.
Figure 3.
Presenting the prosodic cues
together with a short video
fragment evaluated if participants
related visual prosody to speech
prosody.
The sub-question “How noticeable are the prosodic
cues?” was evaluated by presenting all cues from Figure 2 in mixed order
within a list of sentences. Participants were invited to mark the prosodic cue
within each sentence if one was present. Being marked more often indicates
higher noticeability. One additional sentence was added to the list to check
if the noticeability of the lowered pitch cue would be influenced by the
word’s context: when this cue is followed by letters with descenders it might
become less noticeable.
The sub-question “How well can each deaf reader
manage to relate the prosodic cues to the intended speech variation?” was
evaluated by sentences wherein a prosodic cue was applied on one word,
followed by the question of how they would pronounce that one word.
Participants could mark the answer in a list containing all possible speech
variations: louder, quieter, higher, lower, faster, slower. The enlarged space
was treated dierently: in a multiple-choice, participants could choose from
breath in, divide the sentence into parts, breath out, wait longer/take a
pause, something else.
Test material for Objective 2:
“Does visual prosody increase
the speech variations of deaf
readers between 8 and 18?”
To answer the second objective, booklets with
sentences intended to be read aloud were created. Those booklets carefully
incorporated the results from objective 1. To optimize the representation
of all cues, the prosodic cues which were marked most often in the test for
the sub-question “How noticeable are the prosodic cues?” were applied: ‘thicker’,
‘full thinner’, ‘full narrower’, ‘wider applied on a longer word’, ‘higher’, ‘lower’ and
the ‘triple space’. At the same time, the ‘oblique’ cue was not implemented as
this cue was not often related to its intended speech component.
Before the actual reading test, the participants re-
ceived a small exercise-booklet containing all cues. This information allowed
them to memorize the intended voice variation for each prosodic cue and to
exercise those voice variations for a short while. Providing a separate exer-
cise booklet prevented the participants from seeing the final test sentences
in advance while still acquainting them with the usage of prosodic cues.
For the actual reading tests, the design of the
booklets diered per age group. Participants were grouped according to
third and fourth grade (approximately 7 till 10 years old) and fth and sixth
grade (approximately 10 till 12 years old) of the primary school plus the sec-
ondary school (approximately 12 till 18 years old). Each age group received
ve dierent sentences adjusted to their reading level, and each of those
ve sentences was presented nine times: twice in a regular condition, and
seven times alternating the word that contained one of the prosodic cues.
To avoid the impact of learning eects (by the repetition of the same sen-
tence) on the outcome of the experiment, ve dierent booklets were made
for each age group. Those ve booklets all contained a dierent random
order of the sentences.
To create an optimal reading experience, the sen-
tences were presented in a way similar to reading materials familiar to each
age group. All sentences were presented in a booklet with slightly off-white
to yellow paper. For the age group 7-10, there were 5 sentences per page in
a type size of 16 pt. For the age group 10-12, there were 5 sentences per page
in a type size of 14 pt. For the older ones, aged 12-18, there were 8 sentences
per page in a type size of 12 pt, the size almost reflecting that of reading books
for adults.
To increase the reading pleasure for the two
youngest age groups, the encouraging sentence “Halfway! Well done.” was
printed in the middle of the booklet. This allowed the participants to
have a break, which was found necessary to keep the youngest participants
focused till the end of the test (as in Bessemans et al., 2019).
Test material for Objective 3:
“Does visual prosody influence
the understood meaning of a
sentence for deaf readers
between 7 and 18?”
This last objective evaluated whether visually em-
phasized words within a sentence influence the understood meaning of
the whole sentence. Ten sentences were created. To compensate for the
divergent reading levels of the participants, two different sentences were
developed per age group 7–9, 9–10, 10–11, 11–12, and 12–18 years old. The
sentences were reviewed by speech therapists on feasibility, and each of
those sentences was presented three times to the participant, each time
with a dierent emphasized word.
One such sentence was, “That old man cannot
hear you very well.” The prosodic cue ‘thicker’ was used to emphasize one of
the words. Participants were then asked to mark the perceived meaning of
the sentence in a list. The possible meanings in that list referred to a specific
word, such as “you.” If the word “you” was emphasized, participants were
then expected to mark the corresponding meaning “do something about
your speech.” The possible meanings in the list relied as little as possible on a
literal definition of one of the words.
The research procedure
In the rst stage, the schools for the deaf were con-
tacted. To comply with the privacy regulations, the supervisors (teachers/
therapists) selected the children who met the participation requirements in
this research. After that, each participant was visited twice in their school:
an initial visit to test how intuitive visual prosody is and a second follow-up
visit for testing the reading aloud and the influence of visual prosody on the
meaning of a sentence.
During the initial visit for the first test of the study,
they received the first booklet about how intuitive visual prosody is. No
information was provided beforehand. Participants whose first answer did
not relate to speech were encouraged to guess a second time what the em-
phasized words could mean. Independent of the second answer’s correct-
ness, the test continued with the next question. During the test, participants
gradually received the required information for each exercise. At the end of
this rst visit, each participant knew that visual prosody serves to enhance
expressive reading.
During the second visit, the focus was rst on
reading aloud. Participants received an exercise-booklet first. They were al-
lowed to briefly repeat the visual cues to get used to reading visual prosody
aloud. This short repetition helped to refresh and memorize the intended
speech variations; to briefly train the vocalization of speech variations; and to
grow comfortable with the test and with speaking into the microphone [Figure
4]. It was emphasized to the participants that they were allowed to read
at their own pace to avoid acting as if this was a reading test for speed. As
soon as participants were at ease with the procedure, a second booklet that
matched their reading level was provided to them. Participants were asked
to read the sentences aloud the best way they could with attention for the
expressiveness. During the test, the researcher pointed with a finger to the
sentence that was to be read aloud, making sure that all the sentences
were read.
Figure 4.
Each participant was free to set
up the microphone and booklet
as desired. The participant in
this photo was one of the few
participants who preferred the
booklet next to the microphone.
After the reading test, participants received the
questionnaire about how visual prosody influences the understanding of a
sentence. Participants were asked to mark the answer which corresponded
the most with the sentence. Only when a participant could not understand
the intention of the test were specific questions asked to draw attention
to the emphasized word within the sentence and what the location of the
emphasis would involve for the meaning of the sentence. If really needed,
participants were encouraged to read the sentence aloud. No hint was given
about how a relation could be made.
At the end of the second visit, each participant
was able to write down feedback about visual prosody and to provide
comments in open questions focusing on the appreciation of the cues. The
written feedback allowed each participant to express their opinions, ideas,
concerns, or suggestions about this approach to visual prosody. It also
provided the possibility for the researcher to ask additional questions, for
example, about diculties experienced during the test.
The data collection,
conversion, and analysis
The reading aloud of each participant was record-
ed with an XML 990 microphone and processed with the application Praat
(Boersma & Weenink, 2014). The application was extended and given the
ability to split the recordings between sentences and automatically name
and number the les (Renckens & Vanmontfort, 2015a). The research group
ESAT (Catholic University of Leuven) performed the speech recognition to
determine the place of the most important vowels within all recordings. A
newly developed plugin for Praat extracted the required data of each sound
recording (Renckens & Vanmontfort, 2015b). The analysis of prosody (loud-
ness, duration, and pitch) was based on the values of the most important
vowels in the words marked with prosodic cues. The decision to use vowels
was based on:
Within a single word, prosody can vary fast and several times.
Peaks are often situated on the vowel. Analyzing longer
speech fragments (such as whole words) would make it more
dicult to compare the eect of the cues, a problem that Patel
& McNab (2011) probably encountered in their rst analysis
that did not deliver the expected results.
Smaller fragments within the speech allow a more precise
analysis of the intended effect. We aim at correct vocalizations,
such as “bEEEEEEr” for the Dutch word beer (bear) instead of
the incorrect pronunciation “beeRRRRRRR”. The latter would
sound wrong in the Dutch language. An analysis on the vowel
omitted unintended effects of visual prosody.
With X as the loudness, duration, or pitch, results
are calculated as {average X of one vowel of one specific word} divided by
{average X of all the same vowels of the same word of the same child}. E.g.,
the average pitch of the “ee” in the word “beer” written in the thicker cue,
compared with the average of {all the average pitches of all the “ee” of all the
words “beer” the same child has pronounced}.
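As a concrete illustration of this ratio, the sketch below assumes the per-vowel measurements have already been exported to a table (the file name and column names are hypothetical) and computes the relative loudness, duration, and pitch per vowel with pandas; it is not the Praat plugin used in the study.

```python
import pandas as pd

# Hypothetical export of the per-vowel measurements: one row per vowel rendition,
# with the participant, the word, the vowel, the prosodic cue of that rendition,
# and the raw loudness (dB), duration (s), and pitch (Hz) of the vowel.
df = pd.read_csv("vowel_measurements.csv")

for feature in ["loudness", "duration", "pitch"]:
    # Denominator of the ratio: the average of all the same vowels of the same
    # word pronounced by the same child, across all cue conditions.
    baseline = df.groupby(["participant", "word", "vowel"])[feature].transform("mean")
    # Relative value: e.g. rel_pitch > 1 means this rendition was pronounced
    # higher than that child's own average for this vowel of this word.
    df[f"rel_{feature}"] = df[feature] / baseline

print(df[["participant", "word", "cue", "rel_loudness", "rel_duration", "rel_pitch"]].head())
```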
The eect of the fonts on the parameters of visual
prosody is measured by a one-way ANOVA with repeated measures. ANOVA
compares the averages between the dierent prosodic cues. Tukey’s meth-
od is used to test the set of all pairwise comparisons {μi−μj} simultaneously.
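A rough Python sketch of such an analysis is given below (hypothetical file and column names, and statsmodels as an assumed tool; the original analysis may have been run in other software). The relative values are first averaged per participant and cue, since AnovaRM expects one observation per cell, and the Tukey comparison shown here treats the cues as independent groups, which is only an approximation of a fully repeated-measures post-hoc test.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical table of relative values per vowel (see the previous sketch).
df = pd.read_csv("relative_values.csv")  # columns: participant, cue, rel_pitch

# One value per participant per cue, as required by a repeated-measures ANOVA.
per_cell = df.groupby(["participant", "cue"], as_index=False)["rel_pitch"].mean()

# One-way repeated-measures ANOVA: does the prosodic cue change the relative pitch?
anova = AnovaRM(data=per_cell, depvar="rel_pitch",
                subject="participant", within=["cue"]).fit()
print(anova)

# Tukey's method over all pairwise cue comparisons (between-groups approximation).
tukey = pairwise_tukeyhsd(endog=per_cell["rel_pitch"], groups=per_cell["cue"], alpha=0.05)
print(tukey.summary())
```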
Pauses are not recognized by speech recognition
software. When speaking, most sounds are connected to each other. For this
reason, the analysis is based on {the point in time of the last millisecond
of the last vowel of the word before the space} to {the point in time of the
first millisecond of the first vowel of the word after the pause}. Measuring
pause this way enables comparisons, even when there is no real pause
detected with the speech recognition. It is a useful technique as long as
comparisons are made within the data of the same participant.
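A minimal sketch of this pause measure, assuming the recognized vowels are available as (word index, start, end) timestamps (a hypothetical format), could look as follows.

```python
def pause_length(vowel_times, word_index):
    """Length of the pause at the enlarged space between word `word_index`
    and the next word: from the end of the last vowel of the word before the
    space to the start of the first vowel of the word after it."""
    before = [end for w, start, end in vowel_times if w == word_index]
    after = [start for w, start, end in vowel_times if w == word_index + 1]
    if not before or not after:
        return None  # the speech recognition missed one of the two words
    return min(after) - max(before)

# Example: the vowels of word 2 end at 1.48 s and the first vowel of word 3
# starts at 1.92 s, so the measured 'pause' spans about 0.44 s.
vowels = [(2, 1.10, 1.20), (2, 1.39, 1.48), (3, 1.92, 2.01)]
print(round(pause_length(vowels, 2), 2))
```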
For tests where the children had to link a pre-
sented object with one of n items, proportion tests were used to test if the
percentage of how often a given item was selected differed from chance
(being 1/n). E.g., if a cue was presented and they had to choose if it indicated
a louder, quieter, higher, lower, faster or slower voice, n equals 6. Proportion
tests were performed to test if the percentage of the intended vocalization
was significantly larger than 1/6 ≈ 17%.
3. Results
Results for Objective 1:
“How intuitive is visual prosody for
deaf readers between 8 and 18?”
While evaluating the rst sub-question, “What is
the reader’s perceived intention of visual prosody?”, 23 of the 38 participants
did not provide an answer related to expressive speech when no informa-
tion about visual prosody was provided. Only 3 participants provided an
answer related to speech expression on a rst try, while 12 participants
provided an answer with this relationship when they were asked to make a
second attempt. In total, 15 out of 38 (39%) of the participants related visual
prosody to one or more aspects of expressive speech in a rst encounter.
Answers not related to expressive speech stated that visual prosody might
have the intention to “make things easier,” “indicate the verbs,” or “lead to bet-
ter knowledge.” Ambiguous answers were evaluated by the researcher during
the test to determine if the participant meant expressive speech. Answers
that were deemed correct included “some of those words were read louder,”
“it influences the intonation,” or “that it serves the pronunciation.”
While evaluating the second sub-question, “How
noticeable are the prosodic cues?”, a notable result is that
all full versions of prosodic cues were marked more often than subtle ver-
sions [Chart 1].
Chart 1.
The prosodic cues on top are
marked in a statistically significant
number of occurrences.
The answers to the third sub-question, “How well
does each deaf reader relate the prosodic cues to the intended speech varia-
tion?” claried that most participants would change their voice as intended
on ve out of seven prosodic cues [indicated by a full border in Chart 2.].
Two cues were more often correlated to unintended speech variations than
to their intended speech variations:
Raised type, which was mostly connected to slower (26%) or
quieter (24%), while higher (18%, proportion test p=0.29) was
intended;
Oblique, which was related to slower (37%, proportion test
p=0.02) while faster (29%) was intended.
They were also related to the intended speech variations for a statistically
non-signicant number of times [indicated by a dashed border in Chart 2.].
[Chart 1: the prosodic cues sorted by the percentage of times they are marked within their contexts — Thicker 97%; Full oblique 92%; Raised 92%; Full narrower 89%; Full thinner 84%; Full lowered before descender 84%; Half thinner 82%; Triple space 74%; Full lowered, normal applied 66%; Half oblique 53%; Half narrower 53%; Wider applied on a longer word 42%; Wider applied on a short word 37%; Double space 13%.]
Chart 2.
How well each deaf reader
relates the prosodic cues to the
intended speech variation. A
border indicates the intended
speech variation. A full border
points to a statistically significant
result, a dashed border points to a
statistically non-significant result.
For the enlarged space, a total of 58 answers were
provided, and 30 out of 58 answers (51%, proportion test p≤0.001) related
an enlarged space to waiting longer between words.
A closed question assessed if participants acknowl-
edged the benefit of reading text containing visual prosody. A vast majority
of the participants (86%) believed that visual prosody would help them to
some degree to read with more expression. Only a minority (14%) expressed
that they did not deem visual prosody helpful.
The open feedback pointed to a ratio of
positive:negative comments of 2.3:1. Some participants wrote positive as
well as negative feedback at the same time. Participants made 53 positive
comments in total, including “Indicating important words makes me pay more
attention and clarifies the text,” “I would have liked to learn to read with these
kinds of booklets,” “the voice sounded nicer than normal,” “it helped me as a deaf
person and I think it will help others to read and speak. It probably will help nor-
mal hearing individuals as well,” “because of the emphasis, the sentence receives
much more meaning,” “I found reading the sentences aloud a good instruction
because I could use and train the voice better with this. It supports you very
well if you just learn how to read. I would have preferred learning to read like
this” and “Yes, it supported me. E.g., faster, slower, louder, quieter—it seemed
interesting.” A total of 23 negative comments were provided, of which 13
only indicated that visual prosody was difficult. Two participants related this
statement to the parameter that the participant deemed the most dif-
ficult: once the quieter voice; once the higher/lower voice. Other comments
were “I sometimes forgot about it” or “it is difficult to read with other voices.”
[Chart 2: how well each deaf reader relates the prosodic cues to the intended speech variation. Columns: thicker, full thinner, full narrower, full oblique, wider, raised, full lower; answer options: louder, quieter, faster, slower, higher, lower, no answer. Percentages for the intended speech variation: thicker–louder 47%; full thinner–quieter 37%; full narrower–faster 42%; full oblique–faster 29%; wider–slower 55%; raised–higher 18%; full lower–lower 26%.]
Results for Objective 2:
“Does visual prosody increase the
variations in the speech features of
deaf readers between 8 and 18?”
A total of 38 individuals participated in the test,
but due to an unknown microphone error, the vocalization of one partici-
pant was not saved. So, 37 participants remained.
A total of 4,995 vowels were expected in the re-
cordings (37 individuals read aloud 45 sentences, and within each sentence,
3 words were selected to compare the effect of the prosodic cues). The
speech recognition software recognized 3,994 words, which is an accuracy
of 80%.
A total of 135 words were expected for each par-
ticipant during analysis. The minimum percentage of words recognized in the
recordings of one participant was 44%, and the maximum percentage of words
detected in the recordings of another participant was 95%. Four out of six
cues resulted in statistically significant speech variations as intended when
compared with the normal font [indicated by a full border in Table 2].
Table 2.
Four out of six cues resulted in a statistically significant intended effect (indicated by the full borders), and two cues resulted in the intended but non-statistically significant effect (indicated by the dashed borders). All other cells contain non-intended effects, which are all smaller than the intended effects. The average of one condition is divided by the average of the normal condition for volume, duration, and pitch. Asterisks (*) indicate a significant difference from the normal font: *=p<0.05; **=p<0.01; ***=p<0.001. The examples are based on an example “neutral condition” of respectively 240Hz, 51 dB, and 0.13sec.
Prosodic cue | Effect on intensity of a vowel | Loudness example (on 51 dB) | Effect on duration of a vowel | Duration example (on 0.13 sec) | Effect on pitch of a vowel | Pitch example (on 240 Hz)
Raised | 105% *** | 53 | 145% *** | 0.19 | 127% *** | 305
Full lower | 100% | 51 | 143% *** | 0.19 | 99% | 237
Wider | 100% | 51 | 166% *** | 0.22 | 106% *** | 254
Full narrower | 105% *** | 53 | 98% | 0.13 | 109% *** | 262
Thicker | 109% *** | 56 | 150% *** | 0.19 | 114% *** | 273
Full thinner | 90% *** | 46 | 107% | 0.14 | 101% | 242
Normal | 100% | 51 | 100% | 0.13 | 100% | 240
Participants read the prosodic cue intended to
read louder (‘thicker’) with a statistically significant 9% increase in intensity
compared with the normal voice, and on average statistically significantly
louder than all other prosodic cues. The prosodic cue intended to read qui-
eter (‘full thinner’) is performed with a statistically significant 10% decrease
in intensity when compared with the normal voice, and on average quieter
than all other prosodic cues [Chart 3.]. Effects of the other cues on the loud-
ness were always smaller than the effect of ‘full thinner’ or ‘thicker’ and were
not always significant [Table 2].
Chart 3.
Comparisons of the effects of
the different cues on the average
loudness of the voice illustrated on
an example of 51 dB. The columns
represent the average for each
cue. The thick borders indicate that
the cues intended to influence
loudness are in their intended
place and had a statistically
significant result.
Participants read a prosodic cue intended to read
slower (‘wider’) with a statistically significant 66% increase of the duration
of the voice compared with the normal voice, and on average, statistically
significantly slower than all other prosodic cues. The prosodic cue intended
to read faster (‘full narrower’) is performed with a 2% decrease in dura-
tion when compared with the normal voice, and on average faster than all
other prosodic cues. But this prosodic cue does not differ significantly from
the normal condition [Chart 4.]. Effects of other cues on the duration were
always an increase of the duration, smaller than the effect of ‘wider’ and not
always significant [Table 2].
[Chart 3: influence of each cue on the average loudness (in dB), illustrated on an example of 51 dB — Full thinner 46; Full lower 51; Normal 51; Wider 51; Raised 53; Full narrower 53; Thicker 56.]
Chart 4.
Comparisons of the eects of
the dierent cues on the average
duration of the voice illustrated
on an example of 0,13sec. The
columns represent the average
for each cue. The thick border
indicates that the cue intended
to inuence the duration is in
its intended place and had a
statistically signicant result. The
dashed border indicates that the
cue intended to inuence the
duration is in its intended place,
but the result was not statistically
signicant.
Participants read a prosodic cue intended to read
with a higher voice (‘full raised’) with a statistically significant 27% higher
pitch compared with the normal voice, and on average higher than all other
prosodic cues. The prosodic cue intended to read with a lower voice (‘full
lower’) is performed with a 1% lower pitch when compared with the normal
voice, and on average lower than all other prosodic cues. But this prosodic
cue does not differ significantly from the normal condition [Chart 5.]. Effects
of other cues on the pitch were always an increase of the pitch, smaller
than the effect of ‘full raised’ and not always significant [Table 2].
[Chart 5: influence of each cue on the average pitch (in Hz), illustrated on an example of 240 Hz — Full lower 237; Normal 240; Full thinner 242; Wider 254; Full narrower 262; Thicker 273; Raised 305.]
Chart 5.
Comparisons of the eects of
the dierent cues on the average
pitch of the voice illustrated on an
example of 240Hz. The columns
represent the average for each
cue. The thick border indicates that
the cue intended to inuence the
pitch is in its intended place and
had a statistically signicant result.
The dashed border indicates that
the cue intended to inuence the
pitch is in its intended place, but
the results were not statistically
signicant.
The prosodic cue indicating a pause resulted in
a pause that was on average 4.26 times longer. This time span is measured inde-
pendently of how the pause was created: creating a pause by briefly waiting
for the next word, or creating a pause by breathing in/out between words.
Several prosodic cues had a statistically significant
effect on unintended speech variations. Those effects were always smaller
than the influence on the intended speech variation. The possible relations
between loudness, pitch, and duration are expressed with the Pearson cor-
relation coefficients. All Pearson correlation coefficients remain at or below 0.28
(intensity-pitch: 0.28 with p<.0001; duration-pitch: 0.12 with p<.0001 and
intensity-duration: 0.16 with p<.0001). While a correlation coefficient has an
exact mathematical meaning, the interpretation of the magnitude of a cor-
relation coefficient is ambiguous (Kotrlik et al., 2011). However, the various
interpretations by different experts describe a correlation coefficient lower
than 0.3 as “low,” “small,” or “little if any” (Kotrlik et al., 2011). Therefore, it can be
stated that while a cue can have an effect on several speech variations at the
same time, the effects on intended and unintended speech variations hardly
relate to each other.
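For completeness, correlations of this kind can be computed with SciPy along the lines of the following sketch, again using the hypothetical table of relative per-vowel values from the earlier sketches.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical table holding the relative per-vowel values (rel_* columns).
df = pd.read_csv("relative_vowel_values.csv")

pairs = [("rel_loudness", "rel_pitch"),
         ("rel_duration", "rel_pitch"),
         ("rel_loudness", "rel_duration")]
for a, b in pairs:
    r, p = pearsonr(df[a], df[b])  # Pearson correlation coefficient and p-value
    print(f"{a} vs {b}: r = {r:.2f}, p = {p:.2g}")
```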
Participants’ characteristics such as age, type of
education, amount of hearing loss, and type of hearing device can all influence
the vocalization of visual prosody. They were collected for statistical analysis,
but most of those analyses delivered no consistent insights. Therefore, it
is suggested that the same prosodic cues can be used for all deaf readers
between 7 and 18, without differentiation. The only analysis which delivered
a certain pattern came from participants in regular education versus those
in special education. Participants in special education applied less intensity
and duration but more pitch to vocalize the prosodic cues. This can point to
a less controlled vocalization and the need for more support by a supervisor.
Results for Objective 3:
“Does visual prosody influence
the understood meaning of a
sentence for deaf readers
between 7 and 18?”
A total of 38 booklets were filled in, each contain-
ing six sentences, accounting for 228 sentences. In 148 out of 228 sentences
(66%), participants marked the intended meaning of the sentence correctly.
This outcome is statistically significantly higher than the 25% chance level
on the correct answer when guessing out of four possible answers (propor-
tion test, p<0.0001).
4. Discussion
This research aims to optimize visual prosody for
deaf readers between 7 and 18 years old as a support tool to encourage
speech variations while reading. The focus was on visualizations for loud-
ness, duration (including pauses), and pitch in both directions: increase and
decrease. Therefore, six cues for speech variations and one separate cue for
the pause were developed. While this approach to visual prosody led to a
successful inuence on the reading expressiveness, the cues in their current
form cannot be applied without some additional guidance from
a supervisor.
The intuitive use of the cues
Readers with hearing loss experience difficul-
ties starting to use those cues. Only 39% of the participants created the
link between visual prosody and a form of prosody/expression/speech
variations automatically. That is in line with research pointing out that their
access to prosody (both perception and production) is lower than for their
hearing peers (De Clerck et al., 2018; Øydis, 2014; Wang et al., 2013; Chin,
Bergeson & Phan, 2013). Two possible reasons why deaf readers do not auto-
matically create the link between the cues and prosody were found.
The rst reason is that at school, children already
receive dierent types of augmented texts to learn about grammar and the
structure of the sentence. For example, dierent colors are used to indicate
verbs, nouns, subjects, articles, etc. This explains why some participants
related visual prosody to grammar: “to comprehend text better” or “to indicate
verbs” after seeing visual prosody for the first time.
The second reason is that during reading evalu-
ations at school, children are mostly evaluated for speed and accuracy of
decoding (Bessemans et al., 2019; ILA, 2018; Mostow & Duong, 2009). Thus, it
could be possible that participants focus on the words but not on how those
words were stated. This is more valid for deaf children following regular edu-
cation as they more often participate in reading evaluations. Children within
special education do not always participate in reading evaluations. This
second reason is also supported by the results of Bessemans et al. (2019),
where a group of children receiving no information about speech variations
in advance did not read prosodic cues with more expression.
Because not all readers relate visual prosody to
expressive reading, it is important that a supervisor is present during the
first use of those cues. The supervisor needs to explain the purpose of the
prosodic cues before any exercise on speech prosody starts.
The noticeability of the cues
When visual prosody was presented in a text,
the wider cue was the only full cue that was not noticeable enough, only
marked in 42% of the occurrences. When this cue was applied to a short
two-letter word, the number of times marked was even lower, at 37%. This
difference supports, but does not yet prove, the hypothesis that the visibility
of the wider font might depend on the word to which it is applied. Even
more surprisingly, the cue was identical to the one used in Bessemans et
al. (2019), where the same cue was marked in 78% of all occurrences. The
findings seem to contradict each other but can be explained by the order
of the tests. Bessemans et al. evaluated the noticeability after the reading
aloud test, while in this research, the noticeability was evaluated before the
reading test. The participants in Bessemans et al. thus already had a training
session and were more accustomed to all cues.
For future use, additional widening is recommend-
ed for the wider cue. That would make noticing this cue easier, for example,
when there is no training session beforehand or when there is no supervisor
to point to this cue. The current design for the wider font in this study was
based on the full-wide version of duration in the study of Bessemans et al.
(2019), which was judged the most aesthetically justified variation in letter
shapes without disturbing the text color too much in relation to the other
full versions and the non-adapted Matilda regular. No adaptations were
made to the wider font during this research. However, the typeface Matilda
applied a custom spacing system for this prosodic parameter: the white
space on the left and right of a letter is of similar width in each font. While
it is more common in type design to give each font a different spacing,
this choice was made for research purposes: the design parameter ‘letter’
was taken into account and not the ‘letter-spacing’ parameter (in that sense,
the spacing was kept constant) to precisely point out the effect of wider letters
(and not possible interaction effects with wider letter spacing). To ensure
that future users of visual prosody will notice the wider cue more readily, it is
suggested to follow the standard spacing system and to insert more letter
spacing. More letter spacing can increase the noticeability because the letters
have more distance between each other.
The low noticeability seems to contradict the reading aloud
test wherein this cue resulted in a 66% longer duration of the vocalization.
The large influence on the reading aloud is explained by the repetition of
the cues just before the reading aloud test. During that training session,
participants were made aware of the cues within the text until they noticed
them. Therefore, the participants did not yet know what to look for while we
tested the noticeability, but participants gradually got more accustomed to
the prosodic cues throughout the whole research.
The intuitive relation between
the cues and speech variations
Once readers with hearing loss understand the re-
lation between the prosodic cues and the speech variations, they intuitively
relate five cues correctly to the intended speech variation. And once the
relation between each cue and its intended speech variation is explained,
participants found those relationships easy to remember, for example, a
raised font with a higher pitch. The obvious relationships are in line with
Niebuhr et al. (2017), who state that iconic visualizations (visual representa-
tions of speech variations) are more intuitive than symbolic visualizations
(prosody indicated by symbols added to the text).
Based on the literature reporting reduced pitch
perception in individuals with hearing loss (Svirsky, 2017; Marx et al., 2014;
Perreau, Tyler & Witt, 2010), it was expected that the cues for a higher and
lower pitch would be difficult for deaf readers to relate to their intended
speech variation. They were indeed the two cues related the least often to
the intended speech variation. The explanation could be that pitch percep-
tion is more difficult for deaf individuals (Limb & Roy, 2014), and they have a
more limited understanding of pitch.
It was unexpected that the bold cue was related to
a louder vocalization in only 47% of all occurrences. This cue was expected
to be related best to its intended speech variation in this test because
thicker fonts are often related to volume (Lewis & Walker, 1989) and are
widely applied in comics and reading books to express volume; it was there-
fore expected that this cue would be the easiest to interpret. The current
research cannot explain why the percentage was relatively low.
It was also rather unexpected that the experimental
oblique font was related more to a slower vocalization than to a faster
vocalization. Because the oblique font was not related to its intended
speech variation, it was not studied in the reading aloud tests of the current
research and was put aside for possible future studies. The correlation of an
oblique cue with slower reading could originate in the common application
in reading materials, where italics are often applied to highlight important
parts. A later follow-up study could evaluate whether deaf readers perceive oblique/
italic text as more important and therefore to be read with more attention (thus
slower). Based on such future studies, new guidelines for reading italic/
oblique fonts can be formulated.
An explanation is needed to correct the readers
who confuse the cues and their intended speech variation. Because deaf
readers can have less knowledge about prosody (and speech variations), the
presence of a supervisor is recommended. This supervisor can correct the
reader and fill in possible gaps in the reader's knowledge about speech variations.
The influence of visual prosody
on the reading aloud
Deaf readers read aloud the cues for a louder, qui-
eter, slower, and higher vocalization, plus the pause, as intended. Therefore,
these cues can support their expressive reading and can be used in speech
therapy to train their speech expressiveness, as several organizations
already do on an experimental and pragmatic basis with their custom cues
(KIDS, n.d.; Advanced bionics, unpublished; Staum, 1987; van Uden, 1973).
The two cues that did not result in a statistically significant and intended
speech variation were the narrower font and the lower font. The speech
variations correlating with these cues (faster speed and lower pitch) are deemed
much more challenging to produce. The difficulties performing a faster or
lower vocalization and the comments on the narrower font do not dimin-
ish the relevance of those two cues: although not all children will (be able
to) perform the related speech variations, their presence is useful to start a
discussion about vocal speed and pitch.
In general, pitch is a difficult speech component
to attain. At the beginning of the reading test, several deaf readers needed
some extra exercises on this speech component. More than once, read-
ers moved their whole body upwards when producing a higher voice; the
notions “higher” and “lower” are more often used to indicate objects within
a person’s spatial environment. A bodily motion was literally mentioned by
one participant in this study and also noticed in Bessemans et al. (2019).
A lower-pitched vocalization could be difficult to
execute because the regular speaking voice already sounds low and is close
to the lowest limit of a voice's pitch range [Table 3] (Meijer, 2015; De Bodt
et al., 2015). Further, it is known that technological limitations constrain
pitch perception through cochlear implants (Limb & Roy, 2014), which hinders
the accurate perception of prosodic speech information (Kalathottukaren, Purdy
& Ballard, 2017). Both reasons could have contributed to the fact that the
results for the prosodic cue intended to be read with a lower voice are statistically
non-significant.
Table 3.
The average pitch during speech is already close to the lowest pitch that a
voice can produce. The values can differ slightly for individual measurements.
See, for example, Anderson (1977) or Benninger and Murry (2008). Note that
the maximal pitch mentioned here is taken from the singing voice, which can
reach a higher limit than the speaking voice, but illustrates the voice's full
pitch range. The table is a simplified version of De Bodt et al. (2015, citing
Mathieson, 2001).

            Voice type       Vocal range      Average pitch in speech
Man         Bass             82-333 Hz        98 Hz
            Baritone         98-392 Hz        124 Hz
            Tenor            131-523 Hz       165 Hz
Woman       Alto             147-587 Hz       175 Hz
            Mezzo-soprano    165-880 Hz       196 Hz
            Soprano          196-1174 Hz      247 Hz
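To make "close to the lowest limit" concrete, the gap between the average speaking pitch and the bottom of the vocal range in Table 3 can be expressed in semitones. The following worked example uses the bass values; it is an illustrative calculation based on the table, not a figure taken from the cited sources:

    d = 12 \log_2\!\left(\frac{f_\text{speech}}{f_\text{lowest}}\right)
      = 12 \log_2\!\left(\frac{98\ \text{Hz}}{82\ \text{Hz}}\right)
      \approx 3.1\ \text{semitones}

The same calculation for the other voice types in Table 3 gives roughly three to four semitones, illustrating how little room remains below the average speaking pitch.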
The speech variation on some cues was more than
once exaggerated. For the higher pitch, a change in vocal register often
occurred. A switch to a higher vocal register allows the voice to reach higher
pitch values but is not common in daily speech: it can cause the vocalization
to sound forced. When visual prosody is applied in speech training, a super-
visor will need to indicate when the pitch is going too high.
Each prosodic cue had an effect on unintended
speech components. For example, the prosodic cue intended to result in a
higher pitch not only caused a higher pitch, but also significantly increased
both loudness and duration. Bessemans et al. (2019) and Patel, Kember &
Natale (2014) noticed the same side effect. The cause needs to be sought in
human anatomy: all the anatomical motions involved in speech tend to
cooperate. An extra effort in one body part influences the achievements of
the other parts as well. It is known that pitch rises exponentially if
intensity increases (Buekers & Kingma, 2005). Although some cues gave an
unintended increase in loudness/duration/pitch, the intended effect of the
cue was always significantly larger. The unintended effects, therefore, do not
diminish the positive outcome of this research.
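For readers who want to inspect such interactions between loudness, duration, and pitch in their own recordings, the sketch below shows one possible way to measure the three speech components of a single recorded word. It uses the praat-parselmouth Python bindings for Praat (Boersma & Weenink, 2014, listed in the references); this is an illustrative sketch rather than the measurement pipeline used in this study, and the file name is hypothetical.

    # Illustrative sketch (not the authors' pipeline): measuring duration, mean pitch,
    # and mean intensity of a recorded word with the praat-parselmouth bindings for Praat.
    # "word.wav" is a hypothetical recording of a single spoken word.
    import parselmouth
    from parselmouth.praat import call

    snd = parselmouth.Sound("word.wav")

    duration_s = snd.duration                                  # total duration in seconds
    pitch = snd.to_pitch()                                     # pitch contour
    mean_pitch_hz = call(pitch, "Get mean", 0, 0, "Hertz")     # 0, 0 = whole time range
    intensity = snd.to_intensity()                             # intensity contour
    mean_intensity_db = call(intensity, "Get mean", 0, 0, "energy")

    print(f"duration: {duration_s:.2f} s, "
          f"mean pitch: {mean_pitch_hz:.0f} Hz, "
          f"mean intensity: {mean_intensity_db:.1f} dB")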
Visual prosody influencing the
understood meaning of a sentence
The influence of visual prosody on understanding
the meaning of a sentence is strong: without any explanation, participants
marked the intended answer that correlated with the emphasized word in
66% of the sentences. The result is expected to improve even more when partici-
pants get more acquainted with visual prosody or receive more explanation
in advance.
Why (visual) emphasis in written sentences cor-
relates with emphasis in speech (while reading silently) cannot be deter-
mined within this research. Gross et al. (2013) express the same caution. We
propose two possible reasons: an auditory one and a visual one. The first
possible reason might be that an inner voice is active during reading and
that the emphasized word triggers speech prosody. The second possible
reason might be that participants isolate the visually stressed word and
base their answer on the meaning of this one word separately. Whatever
the reason may be, because the influence is the same in spoken and written
sentences, visual prosody can be applied to achieve a correct interpretation
of a sentence and thus to discuss why and where speech prosody should be
applied to create emphasis.
The perception of visual prosody
Visual prosody is perceived as useful by the read-
ers, and where some participants expressed difficulties in handling the cues,
more exposure and more extended training is expected to result in
habituation. That should encourage developers of reading/learning materi-
als to adopt visual prosody. The positive reaction of parents and speech
therapists is in line with the perception of the deaf readers. Speech thera-
pists found prosody an aspect of speech that deserved more attention. One
parent of a deaf child with more than average difficulties in developing
speech responded: "In his way [my son] was so enthusiastic and so proud of
his certificate [that he received after participating in this research], immediately
after arriving home he was overjoyed and wanted to display the cues. I think I
could sense he had understood the reading program. He explained it to me com-
pletely, which is quite remarkable for him. It clearly made a good impression on
him." (literal translation of communication with a parent, 2018). The positive
comments are in line with five other studies that mention a positive attitude
of the participants towards visual prosody (Bessemans et al., 2019; Patel,
Kember & Natale, 2014; Patel & McNab, 2010; Argyropoulos et al., 2009).
In this study, there could have been an under-
representation of low achievers in reading: only a couple
of participants had signs of what could be severe or problematic delays in
speech development. This study is not the first with this conclusion: Mayer
et al. (2016) also concluded that their study did not completely represent
the heterogeneous group of deaf readers. As mentioned in Sininger, Grimes &
Christensen (2010) and Holly (1997), low achievers do not always participate
in research. While this does not diminish the research results, supervisors need
to evaluate first where possible problems with speech will occur before
commencing speech training.
We close the discussion with an outlook on pos-
sible future research. Prosody, as the motor of expressive speech, is part of
fluent reading (according to the definition of the National Reading Panel in
NIH, 2000). Fluent reading is important because fluent readers "processed the
text smoothly, identify and understand words easily, efficiently and rapidly, dis-
cern syntax, and focus on the meaning" (Luckner & Urbach, 2011). Fluent read-
ing is a part of the reading process wherein deaf readers generally develop
more slowly than their hearing peers (Mayer et al., 2016; Luckner & Urbach, 2011).
Even with CIs, they still do not reach the same level as their hearing peers
(Boons et al., 2013; Mayberry, 2002; Vermeulen et al., 2007). Future research
needs to determine the full effects of visual prosody: whether it supports
fluent reading in general and, if so, how much visual prosody is able to sup-
port reading comprehension.
5. Conclusion This study confirms the hypothesis, "Visual prosody
leads to more vocal prosody while reading aloud, and influences reading com-
prehension of deaf readers between 7 and 18." 'Deaf readers' in this study refers
to readers who have developed spoken language that is distinct enough to
be understood. For this audience, the approach to visual prosody used in
this study is successful in creating more speech variations, and thus a more
expressive voice. Therefore, visual prosody can be used not only in reading
materials aimed at expressive reading but also in speech therapy to learn
about speech variations or to train the prosody of deaf readers.
Typographers, type designers, graphic designers,
teachers, speech therapists, and researchers who are developing reading
materials intended to support expressive speech could use this study as a
basis for such materials by relating
loudness to the thickness (blackness) of a font; duration to the width of a
font; pitch to the vertical position of a font; and a pause to a wider space.
The example cues in this article illustrate a good starting point for further
development, during which some of the suggested im-
provements to the noticeability can be implemented, such as an even wider
cue to indicate reading slower.
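As a purely illustrative sketch of this mapping (not the Matilda typeface or the cue designs evaluated in this study), the snippet below shows how a developer could prototype the four relationships with standard CSS properties generated from Python; all cue names, style values, and the rendering approach are assumptions made for the example.

    # Hypothetical prototype of the cue mapping described above, rendered as HTML/CSS:
    # loudness -> font-weight, duration -> font-stretch, pitch -> vertical offset,
    # pause -> extra space after the word. Values are illustrative, not the study's designs.
    CUE_STYLES = {
        "louder":  "font-weight: 700;",
        "quieter": "font-weight: 300;",
        "slower":  "font-stretch: expanded;",
        "faster":  "font-stretch: condensed;",
        "higher":  "position: relative; top: -0.15em;",
        "lower":   "position: relative; top: 0.15em;",
        "pause":   "margin-right: 1em;",
    }

    def render(words):
        """words: list of (text, cue) pairs, cue may be None -> simple HTML string."""
        spans = []
        for text, cue in words:
            style = CUE_STYLES.get(cue, "")
            spans.append(f'<span style="{style}">{text}</span>')
        return " ".join(spans)

    print(render([("The", None), ("sun", "louder"), ("is", "pause"), ("shining", "higher")]))

Whether such a web prototype or a dedicated typeface is used, the mapping itself (loudness to blackness, duration to width, pitch to vertical position, pause to spacing) stays the same.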
It is important to remember that expressive
reading cannot be achieved only by visual prosody. Prosodic cues are not
intuitive enough to be handled by a reader alone, and some readers lack the
necessary knowledge about speech variations. If a reader were to start using
those cues without supervision, some cues would be read without a change in
vocalization, or other errors would be made in the vocalization. Therefore,
a supervisor is needed to guide the reader through the process and to
provide corrections where needed. But once the intention of visual prosody
is clear, readers seem to handle the cues well.
6. Acknowledgements Thanks to all schools in the Dutch-speaking
regions that supported this research and facilitated connecting with deaf
students. In alphabetical order: Antwerp Plus (Antwerp), BUSO Zonnebos
(Schoten), Cor Emous (The Hague), Kasterlinden (Kasterlinden), KIDS
(Hasselt), Koninklijk Instituut Woluwe (Woluwe), Sint Gregorius (Ghent-
Bruges), Sint-Lievenspoort (Ghent), Spermalie (Bruges). Without their help,
this research would not have been possible.
7. References
ADVANCED BIONICS. (unpublished version). A musical journey through the
rainforest.
ANDERSON, VA (1977). Training the Speaking Voice. New York: Oxford
University Press.
ARGYROPOULOS, V.; SIDERIDIS, G.; KOUROUPETROGLOU, G.; & XYDAS, G.
(2009). “Auditory discriminations of typographic attributes
of documents by students with blindness. The British
Journal of visual impairment. 27(3): 183–203. http://dx.doi.
org/10.1177/0264619609106360
ASHBY, J. (2006). “Prosody in skilled silent reading: evidence from eye
movements.Journal of Research in Reading. 29 (3) 318–333.
http://dx.doi.org/10.1111/j.1467-9817.2006.00311.x
ASP, CW (2006). Verbotonal Speech Treatment. San Diego: Plural Publishing.
BAUDONCK, N.; VAN LIERDE, K.; D’HAESELEER, E.; & DHOOGE, I. (2015).
“Nasalance and nasality in children with cochlear implants and
children with hearing aids. International Journal of Pediatric
Otorhinolaryngology. 79 (2015): 541–545.
BENNINGER, MS; & MURRY, T. (2008). The Singer’s Voice. Abington: Plural
Publishing.
BESSEMANS, A.; RENCKENS, M.; BORMANS, K.; NUYTS, E; LARSON, K. (2019).
“How to visualize prosody in order to help children read aloud
with more expression. Visible Language. 70(3): 28-49.
BELYK, M. & BROWN, S. (2014). "Perception of affective and linguistic
prosody: an ALE meta-analysis of neuroimaging studies." Social
Cognitive and Affective Neuroscience. 9 (9): 1395-1403.
BINDER, KS; TIGHE, E.; JIANG, Y.; KAFTANSKI, K.; QI, C.; ARDOIN, SP (2012).
“Reading expressively and understanding thoroughly - An
examination of prosody in adults with low literacy skills.
Reading and Writing. 26 (5): 665–680. http://dx.doi.org/10.1007/
s11145-012-9382-7
BOERSMA, P., WEENINK, D. (2014). Praat: doing phonetics by computer
[Online] http://www.fon.hum.uva.nl/praat [January 7th, 2015].
BOONEN, N.; KLOOTS, H.; VERHOEVEN, J.; & GILLIS, S. (2017). “Can listeners
hear the difference between children with normal hearing
and children with a hearing impairment?” Clinical Linguistics &
Phonetics. 33 (4): 316-333. https://doi.org/10.1080/02699206.201
8.1513564
BOONS, T.; BROKX, J.; FRIJS, J.; PHILIPS, B. VERMEULEN, A.; WOUTERS, J.; VAN
WIERINGEN, A. (2013c) “Newborn hearing screening and cochlear
implantation: Impact on spoken language development. B-ENT.
9 (Suppl. 21): 91-98.
BOONS, T.; DE RAEVE, L.; LANGEREIS, M.; PEERAER, L.; WOUTERS, J.; VAN
WIERINGEN, A. (2013b). “Narrative spoken language skills in
severely hearing impaired school-aged children with cochlear
implants. Research in Developmental Disabilities. 34: 3833–3846.
http://dx.doi.org/10.1016/j.ridd.2013.07.033
BOONS, T.; DE RAEVE, L.; LANGEREIS, M.; PEERAER, L.; WOUTERS, J.; VAN
WIERINGEN, A. (2013a). Expressive vocabulary, morphology,
syntax and narrative skills in profoundly deaf children after early
cochlear implantation. Research in Developmental Disabilities. 34:
2008–2022.
BREEN, M.; KASWER, L.; VAN DYKE, JA; KRIVOKAPIĆ, J.; LANDI, N. (2016).
"Imitated prosodic fluency predicts reading comprehension
ability in good and poor high school readers. Frontiers in
Psychology. 7 (1026). http://dx.doi.org/10.3389/fpsyg.2016.01026
BUEKERS, R.; & KINGMA, H. (2009). "Impact of phonation intensity upon
pitch during speaking: A quantitative study in normal subjects.
Logopedics Phoniatrics Vocology. 22 (2): 71-77. http://dx.doi.
org/10.3109/14015439709075317
CARLSON, K. (2009). "How Prosody Influences Sentence Comprehension."
Language and Linguistics Compass. 3 (5): 1188–1200.
CHAN, H. (2018). “A method of prosodic assessment: Insights from a singing
workshop.Cogent education. 5 https://doi.org/10.1080/233118
6X.2018.1461047
CHIN, SB; BERGESON, TR; PHAN, J. (2012). “Speech Intelligibility and
Prosody Production in Children with Cochlear Implants.Journal
of Communication Disorders. 45 (5). 355–366. http://dx.doi.
org/10.1016/j.jcomdis.2012.05.003.
DAILY MAIL ONLINE. (May 23rd, 2014). The world through a deaf persons
ears Video reveals what its like to listen to sound using a cochlear
implant [Online] https://www.dailymail.co.uk/sciencetech/
article-2636415/What-deaf-hear-Audio-le-reveals-s-like-listen-
world-using-cochlear-implant.html [March 19th, 2019]
DE BODT, M.; HEYLEN, L.; MERTENS, F.; VANDERWEGEN, J.; VAN DE HEYNING,
P. (2015). Stemstoornissen. Handboek voor de klinische praktijk
(Speaking disorders. Manual for the clinical practice). Antwerpen:
Garant.
DE CLERCK, I.; PETTINATO, M.; GILLIS, S.; VERHOEVEN, J.; & GILLIS, S. (2018).
“Prosodic modulation in the babble of cochlear implanted and
normally hearing infants: a perceptual study using a visual
analogue scale. First Language. 38 (5): 481–502. https://doi.
org/10.1177/0142723718773957
DE RAEVE, L. (2014). Paediatric Cochlear Implantation: outcomes and current
trends in education and rehabilitation [Dissertation] Radboud
University.
DE RAEVE, L.; BAERTS, J.; COLLEYE, E.; & CROUX, E. (2012). “Changing
Schools for the Deaf - Updating the Educational Setting for Our
Deaf Children in the 21st Century, a Big Challenge.Deafness &
education international. 14 (1): 48–59. https://doi.org/10.1179/15
57069X11Y.0000000012
DILLON, H.; COWAN, R.; & CHING, TY (2013). “Longitudinal outcomes of
children with hearing impairment (LOCHI).International Journal
of Audiology. 52, (Suppl 2): S2-3. https://doi.org/10.3109/1499202
7.2013.866448
DOOF.NL. (2017). Update erkenning Nederlandse Gebarentaal (Update on
the acknowledgment of the Dutch Sign Language). [Online]
https://www.doof.nl/samenleving-maatschappij/update-
erkenning-nederlandse-gebarentaal [1 November 2018].
FAGAN, MK, PISONI, DB (2010). Hearing experience and receptive vocabulary
development in deaf children with cochlear implants. Journal
of Deaf Studies and Deaf Education. 15(2): 149-61. https://doi.
org/10.1093/deafed/enq001
FEVLADO (2013). Visietekst dovenonderwijs (vision text on deaf education).
[Online document] http://fevlado.be [21 March 2018].
FODOR, J. D. (1998). “Learning to parse?” Journal of Psycholinguistic Research.
27 (2): 285–319.
GROEN, MA; VEENENDAAL, NJ; VERHOEVEN, L. (2019). “The role of prosody
in reading comprehension: evidence from poor comprehenders.
Journal of Research in Reading. 42 (1): 37–57. ISSN 0141-0423
https://doi.org/10.1111/1467-9817.12133
GROSS, J.; MILLETT, AL; BARTEK, B.; BREDELL, KH; WINEGARD, B. (2013).
“Evidence for prosody in silent reading.Reading Research
Quarterly. 49 (2): 189–208. https://doi.org/10.1002/rrq.67
GUBERINA, P. & ASP, C. W. (1981). The Verbo-tonal Method for rehabilitation
people with communication problems. [Online] http://www.
suvag.com/ang/histoire/autrestextes.html [28 February 2018]
HARRIS, M. (2015). The Impact of Cochlear Implants on Deaf Children’s
Literacy. In: MARSCHARK, M. & SPENCER, E. (2015). The
Oxford Handbook of Deaf Studies in Language. New
York: Oxford University Press. http://dx.doi.org/10.1093/
oxfordhb/9780190241414.013.27
HASBROUCK, J.; & GLASER, DR (2012). Reading fluency: Understanding
and teaching this complex skill. Austin, TX: Gibson Hasbrouck &
Associates.
HEARING TEAM FIRST. (2017). Start with the brain and connect the dots -
supporting Children who are deaf or hard of hearing to develop
literacy through listening and spoken language. [Online pdf]
https://hearingfirst.org/blog/2017/08/17/Connecting-the-Dots
[10 April 2019].
HOLLY, F-B. (1997). “Cochlear Implant Use by Prelingually Deafened Children:
The Influences of Age at Implant and Length of Device Use."
Journal of Speech, Language, and Hearing Research. 40(1): 183-199.
ISSN: 1092-4388.
HUTTER, E.; ARGSTATTER, H.; GRAPP, M.; Plinkert, P.K. (2015). “Music therapy
as specific and complementary training for adults after cochlear
implantation: A pilot study.Cochlear Implants International. 16
(S3): S13.
INTERNATIONAL LITERACY ASSOCIATION (ILA). (2018). Reading fluently does
not mean reading fast. [Online pdf] https://literacyworldwide.org/
statements [13 August 2019].
KALATHOTTUKAREN, RT; PURDY, SC & BALLARD, E. (2017). “Prosody
perception and musical pitch discrimination in adults using
cochlear implants. International Journal of Audiology. 54 (7):
444-452.
KARPIŃSKI, M. (2012). “The boundaries of language: dealing with
paralinguistic features.Lingua Posnaniensis. 54 (2): 37-54. ISSN
0079-4740, ISBN 978-83-7654-252-2.
KIDS (Unpublished). Teaching materials for children with language disorders.
[Internal materials]
KOTRLIK, J.; WILLIAMS, H.; & JABOR, M. (2011). “Reporting and Interpreting
Eect Size in Quantitative Agricultural Education Research.
Journal of Agricultural Education. 52(1): 132–142.
KUHN, MR; & STAHL, SA (2003). “Fluency - A review of developmental and
remedial practices. Journal of Educational Psychology. 95 (1): 3–21.
https://doi.org/10.1037/0022-0663.95.1.3.
LEINENGER, M. (2015). “Phonological coding during reading. Psychological
Bulletin. 140 (6): 1534-1555.
LEWIS, C., & WALKER, P. (1989). "Typographic influences on reading." British
Journal of Psychology. 80, (2), 241-257.
LIMB, C. J., & ROY, A. T. (2014). “Technological, biological, and acoustical
constraints to music perception in cochlear implant users.
Hearing Research. 308: 13–26. https://doi.org/10.1016/j.
heares.2013.04.009.
LUCKNER, JL; & URBACH, J. (2011). Reading Fluency and Students Who
Are Deaf or Hard of Hearing: Synthesis of the Research.
Communication Disorders Quarterly: 2011. https://doi.
org/10.1177/1525740111412582.
LYXELL, B.; WASS, M.; SAHLÉN, B.; SAMUELSSON, C.; ASKER-ÁRNASON, L.;
IBERTSSON, T.; MÄKI-TORKKO, E.; LARSBY, B.; & HÄLLGREN, M.
(2009). “Cognitive development, reading and prosodic skills
in children with cochlear implants.Scandinavian Journal of
Psychology. 50: 463–474.
MARKIDES, A. (1983). The speech of hearing-impaired children. Manchester
University Press, New Hampshire, USA.
MARX, M.; JAMES, C.; FOXTON, J.; CAPBER, A. FRAYSSE, B.; BARONE, P.
DEGUINE, O. (2014). “Speech prosody perception in cochlear
implant users with and without residual hearing. Ear & Hearing.
36 (2): 239–248. http://doi.org/10.1097/AUD.0000000000000105
MAYBERRY, RI (2002). “Cognitive development in deaf children - the interface
of language and perception in neuropsychology.Handbook of
Neuropsychology, 2nd Edition. Vol. 8, Part II.
MAYER, C.; WATSON, L.; ARCHBOLD, S.; YEN, Z.NG. MULLA, I. (2016).
“Reading and Writing Skills of Deaf Pupils with Cochlear
Implants. Deafness & Education International. 18 (2): 71-86. http://
dx.doi.org/10.1080/14643154.2016.1155346
MEIJER, A. (2015) Kopstem en Spreekstem. [Online] https://muziekschool.nl/
kopstem-en-spreekstem [16 April 2019].
MILLER, J.; & SCHWANENFLUGEL, PJ (2006). “Prosody of syntactically complex
sentences in the oral reading of young children.Journal of
Educational Psychology. 98 (4): 839-843.
NAKATA, T.; TREHUB, SE; & KANDA, Y. (2011). Effect of cochlear implants on
children’s perception and production of speech prosody. J. Acoust.
Soc. Am. 131: 2.
NATIONAL ASSESSMENT OF EDUCATIONAL PROGRESS (NAEP). (1995,
August). Listening to children read aloud: Oral fluency. NAEPFacts,
1, (1): 2-5.
NATIONAL INSTITUTE OF CHILD HEALTH AND HUMAN DEVELOPMENT (NIH).
(2000). Report of the National Reading Panel. Teaching Children
to Read: an evidence-based assessment of the scientific research
literature on reading and its implications for reading instruction.
(NIH Publication 00-4769) Washington, DC, USA: US Government
Printing Oce.
NIEBUHR, O.; ALM, MH; SCHÜMCHEN, N.; & FISCHER, K. (2017). “Comparing
visualization techniques for learning second language prosody:
first results." International Journal of Learner Corpus Research. 3(2):
250-277. https://doi.org/10.1075/ijlcr.3.2.07nie
ØYDIS, H. (2013). Acoustic Features of Speech by Young Cochlear Implant
Users. [Dissertation] Universiteit Antwerpen.
PAIGE, DD; RASINSKI, TV; MAGPURI-LAVELL, T. (2012). "Is fluent, expressive
reading important for high school readers?” Journal of Adolescent
& Adult Literacy. 56 (1): 67–76. https://doi.org/10.1002/
JAAL.00103
PATEL, R., & FURR, W. (2011, May, 7-12). “ReadN’Karaoke: Visualizing prosody
in children’s books for expressive oral reading.CHI Session: Books
& Language. 3203-3206.
PATEL, R. & MCNAB, C. (2010). “Feasibility of augmenting text with visual
prosodic cues to enhance oral reading.Speech Communication.
53 (3): 431-441.
PATEL, R., KEMBER, H. & NATALE, E. (2014). “Feasibility of Augmenting
Text with Visual Prosodic Cues to Enhance Oral Reading.
Speech Communication. 65. https://doi.org/10.1016/j.
specom.2014.07.002
PENG, Y.; HSU, M-W.; TAELE, P.; LIN, T-Y.; LAI, P-E., HSU, L.; CHEN, T-C.; WU,
T-Y.; CHEN, Y-A.; TANG, H-H.; CHEN, MY (2018). SpeechBubbles:
Enhancing Captioning Experiences for Deaf and Hard-of-
Hearing People in Group Conversations. [Conference paper].
CHI ‘18 Proceedings of the 2018 CHI Conference on Human Factors
in Computing Systems Paper No. 293. CHI 2018, Montréal, QC,
Canada. https://doi.org/10.1145/3173574.3173867
PENG, SC; TOMBLIN, JB; & TURNER, CW (2008). Production and perception
of speech intonation in pediatric cochlear implant recipients and
individuals with normal hearing. Ear Hear. 29 (3): 336-51. http://
doi.org/10.1097/AUD.0b013e318168d94d
PERREAU, A.; TYLER, RS; WITT, SA (2010). "The Effect of Reducing the Number
of Electrodes on Spatial Hearing Tasks for Bilateral Cochlear
Implant Recipients. J Am Acad Audiol. 21 (2): 110–120.
READING ROCKETS. (2019). Fluency. [Online] http://www.readingrockets.
org/teaching/reading-basics/fluency [13 January 2019].
RENCKENS, M.; & VANMONTFORT, W. (2015a). Adjustments to Praat.
[Application] https://github.com/READSEARCH/praat [21 July
2020].
RENCKENS, M.; & VANMONTFORT, W. (2015b). Praat plugin: export time-
intensity-pitch. [Script]. https://github.com/READSEARCH/Praat_
plugin_export_time-intensity-pitch [21 July 2020].
SCARBEL, L.; VILAIN, A.; LOEVENBRUCK, H.; SCHMERBER, S. (2012). An
acoustic study of speech production by French children wearing
cochlear implants. 3rd Early Language Acquisition Conference. Dec
2012, Lyon, France.
SHAIKH, D. (2009). "Know your typefaces! Semantic differential presentation
of 40 onscreen typefaces. Usability News. 11, (2).
SEE, RL; DRISCOLL, VD; GFELLER, K.; KLIETHERMES, S. & OLESON, J. (2013).
“Speech intonation and melodic contour recognition in children
with cochlear implants and with normal hearing. Otology
and Neurotology. 34 (3): 490–8. https://doi.org/10.1097/
MAO.0b013e318287c985.
SEIDENBERG, M. (2017). Language at the speed of sight. New York: Basic Books.
SININGER, YS; GRIMES, A.; & CHRISTENSEN, E. (2010). “Auditory development
in early amplified children: Factors influencing auditory-based
communication outcomes in children with hearing loss.Ear &
Hearing. 31 (2): 166-185.
SITARAM, S., & MOSTOW, J. (2012). “Mining data from project LISTEN’s
reading tutor to analyze development of children’s oral reading
prosody.Proceedings of the Twenty-Fifth International Florida
Articial Intelligence Research Society Conference. 478-483.
SOMAN, UG (2017). Characterizing Perception of Prosody in Children with
Hearing Loss. [Dissertation]. Tennessee: Graduate School of
Vanderbilt University.
STAUM, MJ (1987). “Music Notation to Improve the Speech Prosody of
Hearing Impaired Children. Journal of Music Therapy. 24 (3):
146-159.
STILES, DJ; & NADLER, LJ (2013). “Sarcasm Recognition in Children with
Hearing Loss - The Role of Context and Intonation.Journal of
Educational Audiology. 19: 3-11.
SVIRSKY, M. (2017). “Cochlear implants and electronic hearing.Physics Today.
70 (8): 52-58. https://doi.org/10.1063/PT.3.3661
VANDER BEKEN, K.; DEVRIENDT, V.; VAN DEN WEYNGAERD, R.; DE RAEVE, L.;
LIPPENS, K.; BOGAERTS, J.; MOERMAN, D. (2010). Personen met
een auditieve handicap (Persons with an auditory limitation). In:
BROEKAERT et al. Handboek bijzondere orthopedagogiek (Manual
for extraordinary remedial education). (2016: fourteenth press)
Garant. Antwerpen-Apeldoorn. p 131-210.
VANHERCK, C.; VUEGEN, D. (2009). Prosodie bij kinderen met een cochleair
implant - verkenning van de vaardigheden en objectivering met
behulp van het prosogram. [Online] https://exporl.med.kuleuven.
be/web/index.php/Public:Masterproefverdedigingen/2009/
Prosodie_bij_kinderen_met_een_cochleair_inplant:_verkenning_
van_de_vaardigheden_en_objectivering_met_behulp_van_het_
prosogram [9 April 2017].
VAN UDEN, A. (1973). Taalverwerving door taalarme kinderen (Language
acquisition by language-poor children). Universitaire Pers
Rotterdam, Rotterdam.
VEENENDAAL, NJ; GROEN, MA; VERHOEVEN, L. (2014). “The role of speech
prosody and text reading prosody in children’s reading
comprehension. British Journal of Educational Psychology. 2014
(84): 521–536.
VERMEULEN, A.M.; VAN BON, W.; & SCHREUDER, R. (2007). Reading
Comprehension of Deaf Children With Cochlear Implants. Journal
of Deaf Studies and Deaf Education 12(3): 283-302.
VERSTRAETE, E. (1999). De stilte verbroken – Diagnostiek en revalidatie van
personen met een auditieve handicap: Deel 1: Theoretische basis
(The silence broken –diagnosis and revalidation of persons with
an auditory limitation: part 1: theoretical basics). Vormingsdienst
SIG. Destelbergen, Belgium.
WANG, DJ; TREHUB, SE; VOLKOVA, A.; VAN LIESHOUT, P. (2013). “Child implant
users’ imitation of happy- and sad-sounding speech. Frontiers in
Psychology. 4 (351). http://dx.doi.org/10.3389/fpsyg.2013.00351.
WAGNER, M. & WATSON, D. (2010). “Experimental and theoretical advances
in prosody: A review.Language and Cognitive Processes. 25 (7-9):
905–945. http://dx.doi.org/10.1080/01690961003589492.
YOUNG-SUK GRACE, K. (2015). “Developmental, component-based model of
reading fluency: an investigation of predictors of word-reading
fluency, text reading fluency, and reading comprehension."
Reading Research Quarterly. 50 (4): 459-481. ISSN 0034-0553.
Authors
Maarten Renckens
Maarten Renckens is a teacher and design researcher with a love for let-
ters and a heart for people. Dealing with a reading difficulty himself, he is
very interested in the reading process. His projects include the typeface
'Schrijfmethode Bosch' (Writing Method Bosch), which teaches children how
to write, and typefaces that encourage beginner readers and readers with
hearing loss to read more expressively. With a background in architectural
engineering, he is used to approaching concepts technically and mathemati-
cally. He applies this technical knowledge to unravel letterforms, in order to
determine the effects of different letterforms on the reading process.
Leo De Raeve
Leo De Raeve, PhD, has three professions: he is a Doctor in Medical Sciences,
a psychologist, and a teacher of the deaf. He is the founding director of ONICI,
the Independent Information and Research Center on Cochlear Implants, a
lecturer at University College Leuven-Limburg, and scientific advisor of the
European Users Association of Cochlear Implant (EURO-CIU).
Erik Nuyts
Prof. Dr. Erik Nuyts is a researcher and lecturer at the University College PXL
and associate professor at the University of Hasselt. He obtained a master's
degree in mathematics, and afterwards a PhD in biology.
Since his specialty is research methodology and
analysis, his working area is not limited to one specific field. His experiences
in research, therefore, vary from mathematics to biology, traffic engineering
and credit risks, health, physical education, (interior) architecture,
and typography.
His responsibilities both at the University College
PXL and at the University of Hasselt involve preparation of research methodol-
ogy, data collection, and statistical analyses in many different projects. He is
responsible for courses in research design, statistics, and mathematics.
María Pérez Mena
Dr. María Pérez Mena is an award-winning graphic and type designer. She
is postdoctoral researcher at the legibility research group READSEARCH at
PXL-MAD School of Arts and Hasselt University. María teaches typography
and type design in the BA in Graphic Design at PXL-MAD and is lecturer
in the International Master program ‘Reading Type & Typography’ and the
Master program ‘Graphic Design’ at the same institution. She received her
PhD "with the highest distinction" from the University of the Basque Country and is
a member of the Data Science Institute UHasselt.
Ann Bessemans
Prof. Dr. Ann Bessemans is a legibility expert and award-winning graphic
and type designer. She founded the READSEARCH legibility research group
at the PXL-MAD School of Arts and Hasselt University where she teaches ty-
pography and type design. Ann is the program director of the international
Master program ‘Reading Type & Typography’. Ann received her PhD from
Leiden University and Hasselt University under the supervision of
Prof. Dr. Gerard Unger. She is a member of the Data Science Institute
UHasselt, the Young Academy of Belgium and lecturer at the Plantin
Institute of Typography.