A preliminary study
exploring the relation between visual prosody
and the prosodic components in sign language

Visible Language 55.1, April 2021
Renckens, et al.
Type enriched with visual prosody is a powerful tool to encourage expressive reading. Visual prosody adds cues to text to guide vocal variations in loudness, duration, and pitch. More vocal variation results in a less monotonous voice and thus more expression. Visual prosody is known to have a positive effect on the voice of normal-hearing readers and of signed bilingual deaf readers who have developed both signed and spoken language. These deaf readers rely on speech as well as sign language, and both modalities can be used interchangeably to compensate for each other.

This preliminary study explores visual prosody in text in relation to Flemish Sign Language, to see if sign language can be used to explain prosody. We asked deaf readers between 7 and 18 years old to relate prosodic cues to videos presenting prosodic components of Flemish Sign Language. We found that these readers connect the prosodic cues with the components of Flemish Sign Language as intended: larger word spacing correlates with a pause between signs, a wider font with a sign with a ‘longer duration’, a thicker font with more ‘displacement’ in the sign, and a raised font with a ‘faster velocity’ in the sign. However, some confusion occurred, as participants seemed to extract only two prosodic components from the sign language: both ‘faster velocity’ and ‘longer duration’ were referred to in terms of ‘speed’ and were not perceived as separate prosodic components. Participants were confused about why there were three cues in the text. We therefore advise re-evaluating and re-designing visual prosody for sign language with only ‘displacement’ and ‘speed’ in mind.
Keywords
type design,
visual prosody,
prosody,
deaf readers,
sign language,
expressiveness
Maarten Renckens1
Leo De Raeve2, 3
Erik Nuyts1
María Pérez Mena1
Ann Bessemans1
A preliminary study exploring the relation
between visual prosody and
the prosodic components in sign language
1 Hasselt University in
collaboration with PXL University
College, Belgium;
2 Independent Information Centre
about Cochlear Implantation
(ONICI) Zonhoven, Belgium;
3 KIDS (Royal Institute for the
Deaf) Hasselt, Belgium.
Introduction Visual prosody visualizes information that is otherwise mostly absent in text: information about ‘how words are said’. That is the prosody: variations in the loudness, duration, and pitch of the voice. Prosody is the motor of expressive speech and plays an important role in understanding language. It distinguishes words such as PREsent versus preSENT, and it adds additional information: emphasis, statements, questions, sarcasm, emotion, and more. Additionally, it can influence the meaning of a sentence: for example, “That old man CANNOT hear you very well” or “That old man cannot hear YOU very well.” Visual prosody adds prosody to text in a visual way using prosodic cues. There exist several different approaches to visual prosody, both for reading a text with more speech variations and for reading comprehension (Renckens et al., 2021; Bessemans et al., 2019; Rude, 2016; Patel & McNab, 2010).
The perception and production of prosody of deaf readers are not equal to those of their hearing peers (De Clerck et al., 2018; Øydis, 2014; Boons et al., 2013; See et al., 2013; Chin, Bergeson & Phan, 2013; Vander Beken et al., 2010; Markides, 1983). In our study, ‘deaf readers’ refers to ‘deaf students who have developed spoken language (intelligible enough to be understood) and Flemish Sign Language in 1) regular education with additional support from a school of the deaf or 2) a signed bilingual educational setting’.
Even highly advanced digital hearing devices do not provide full access to speech, since the perceived sound quality is limited (Dorman et al., 2017; Scarbel, Vilain, Loevenbruck & Schmerber, 2012). Therefore, deaf readers could benefit from visual prosody, which several institutions already apply in teaching materials to exercise vocal variations with deaf readers (KIDS, n.d.; Advanced Bionics, unpublished; Staum, 1987; van Uden, 1973). For some examples, see Figure 1. Prosodic cues in reading materials help deaf readers aged 7 to 18 to read with more expression and with understanding of the intended meaning of a sentence (Renckens et al., 2021).
Figure 1.a, b & c.
Three examples of visual prosody in (Dutch) deaf education, all applying similar visualizations: 1. horizontally stretched words (lllllaaaaannnngggg, translated as lllloooonnnnngggg) correlate with a longer duration; 2. bolder and larger text correlates with more loudness; and 3. a higher position, vertically stretched text, a rising line, or music notes correlate with a higher pitch in the voice (left to right: image from KIDS, n.d.; image from Advanced Bionics, unpublished; image based on van Uden, 1973).
Until now, visual prosody was developed with
speech properties in mind (loudness, duration, pitch) but not sign language.
Because deaf readers often rely not only on spoken language but also on
sign language, this study aims to evaluate if visual prosody in text has a
consistent relation with (Flemish) sign language. If it has, then sign language
could possibly be used to explain/support (speech) prosody for deaf readers
in bilingual education.
There are similarities and differences between oral and sign languages. Similar to words in oral languages, signs follow one after another. But unlike in spoken languages, sign language can engage several information sources simultaneously through different body parts, and signs require free space surrounding a person to be performed (Koenen et al., 2005). Prosody in sign language can be found in two ways: in non-manual markers and in variations on the signs. First, signs are accompanied by non-manual markers (NMMs), which are all components of sign language that are not formed by the hands (Brentari, Falk & Wolford, 2015; Elliott & Jacobs, 2013; Van Herreweghe, 1995). Examples of such non-manual markers are mimicry, body posture, shoulder raising, head position, facial expressions, eye blinks, eye shifts, gazes, the position of the lips, and the position of the brows. Second, the literature review by Brentari, Falk & Wolford (2015) pointed to language-independent prosodic components in sign language, made by variations in duration, velocity, and displacement. Those prosodic components are always present, independent
of how a language applies them, similar to how pitch, loudness, and duration are always present in the voice, regardless of language. It is this second kind of prosody that is useful for this study: the progress of time is the same for word duration and sign duration in both modalities; the velocity of movement is supposed to resemble frequency in speech; and displacement of movement would correlate with intensity in speech (Brentari, Falk & Wolford, 2015).
But relating visual prosody to sign language is more difficult than relating it to the voice. Current prosodic cues are designed to be intuitive with speech in mind. ‘Intuitive’ means that their intention can be interpreted without much explanation (see also Shaikh, 2009; Lewis and Walker, 1989). For example, ‘boldness’ correlates with ‘loudness’ (Shaikh, 2009). But in sign language, the line between syntax and prosody is not yet defined (Sandler, 2010). The prosodic components in sign language are more interwoven with the modality of the language itself: for example, while a signer consciously makes use of hands, face, and body to represent a concept in a sign (van den Bogaerde, 2012), signing the sign ‘walking’ slowly can mean that the person is walking slowly. This interwovenness with the content could influence the intuitive understanding of prosodic cues for sign language.

To evaluate the aim of this research, the hypothesis became: “Deaf readers relate typographic prosodic cues designed with the speech properties in mind consistently to prosodic components in Flemish Sign Language.”
Methodology
Prosodic cues
The existing prosodic cues within the typeface Matilda were used for this study. The relation of these prosodic cues with prosody in speech (according to Bessemans et al., 2019) and the components of sign language (according to the suggestions of Brentari, Falk & Wolford, 2015) is described in Table 1.
Figure 2.
A summary of the four prosodic cues used in this study: a thicker font, a wider space, a font raised above the baseline, and a wider font. The sentence is translated as “The big bird flies high.”
Table 1.
The suggested relations/similarities
between the prosodic cues, the
speech variations, and the prosodic
components in sign language
(based on 1. Bessemans et al., 2019;
which is based on earlier studies,
and 2. Brentari, Falk & Wolford,
2015).
This table is not a full summary of prosody: opposite cues exist for the opposite direction of each speech variation, for example, a softer voice instead of a louder voice. However, a study does not have to include all possible cues to prove the effectiveness of a subset of cues (as in, for example, Patel & McNab, 2010; Patel, Kember & Natale, 2014; or Bessemans et al., 2019). To reduce the complexity of this test for young readers, these additional cues were not applied in this study.
Table 1:

| Visual cue in text for this research | Correlates with prosody in speech (Bessemans et al., 2019) | Correlates with prosody in sign language (Brentari, Falk & Wolford, 2015) |
| Thicker font | Louder voice | More displacement |
| Larger space | Longer pause | Longer pause |
| Font raised above the baseline | Higher pitch | Faster velocity |
| Wider font | Longer duration | Longer duration |

Video fragments

Four video fragments were created. In those videos, a ‘neutral’ sentence in Flemish Sign Language was shown first [Figure 3.A], followed by the same sentence in Flemish Sign Language but with a modulated prosody on the last sign: a “longer pause”, “longer duration”, “faster velocity”, or “more displacement” [Figure 3.B].
Please note that prosody in sign language is much more interwoven with the content of the message than prosody in speech (e.g., the sign for walking, executed slowly). To avoid such connections, prosody in this research was treated as an independent factor, not connected to the meaning of the message. The effect of prosodic cues on a sign’s meaning could become the subject of a follow-up study.
Figure 3. A and B.
In the videos, a signer performs one recurring sentence with varying prosodic variations. A: The neutral sign is shown first. B: Each neutral sign was followed by a sentence containing one specific variation (e.g., in image B more displacement than in A).
Participants
The cues were tested by 38 deaf readers. In this study, this term refers to readers with hearing remnants who are able to speak. Their native language and preferred language could be either spoken or signed, but all participants were educated 1) in regular schools with additional support from schools of the deaf (often after starting their first education in a school of the deaf), or 2) in a bilingual educational environment. They thus had a high chance of coming into contact with both spoken and signed languages. All readers were between 7 and 18 years old, and their ages were evenly spread: each age between 7 and 18 was represented by at least two participants. These readers followed primary and secondary education in regular schools or schools for the deaf, and most wore one or two hearing devices. Some participants preferred Flemish Sign Language as their primary language, and others preferred speech.
Because this research was executed at the same time as the study evaluating visual prosody’s influence on reading aloud (Renckens et al., 2021), the participants enrolled in both research studies at the same time. Therefore, the participants were aware that there was a relationship between the cues and speech prosody but did not yet know which one. Participants were not provided any information about prosody in sign language when this test started.
Procedure
The participants were asked to look carefully at the video [Figure 3]. Then, they were asked to choose from a list of sentences [Figure 2] the sentence that, according to them, corresponded with the last video shown. Participants did not have to know Flemish Sign Language fluently to follow the video fragments. All participants were encouraged to mark an answer. If they were not sure, they could see the video one more time before marking an answer.
This procedure with the videos was repeated: after connecting all movie fragments to written sentences a first time, the movie fragments were shown a second time, without the participants knowing that the same videos were shown again. This way, participants watched all prosodic components in Flemish Sign Language twice. The order in which all participants saw the videos was: pause, thicker, extended, higher, pause, extended, higher, thicker.

Participants could provide feedback while answering the questions, and the researcher asked questions when a participant was stuck.
Statistical Analysis

For relating the cues to the movies, a binomial test was used with a one-in-four chance of guessing correctly: the results of 38 persons differ significantly from pure chance (p < 0.05) if 14 or more (i.e., 36% or more) give the same answer.

To see if there was a learning effect, and to test if there was a difference between primary and secondary school, tests for two proportions were used.
Results
Relating cues to the movies
Except for the first time that ‘more displacement’ was shown, all cues were related to their intended component in Flemish Sign Language [Chart 1]. The first time that ‘more displacement’ was presented, participants marked the non-intended wider font more often than the intended thicker font (42% versus 39%). All cues were significantly more often related to their intended component in Flemish Sign Language than if participants had been guessing. No statistically significant learning effects were found between the first and second time a video fragment was shown.
Chart 1.
Except for the first time that ‘more displacement’ was shown (dashed border), prosodic components in Flemish Sign Language were related, in most of their occurrences, to their intended cue (solid border).
[Chart 1: a bar chart titled “To which prosodic cues participants related the prosodic components in the sign language.” For each component shown twice (Pause 1/2, Longer duration 1/2, Faster velocity 1/2, Displacement 1/2), the bars give the percentage of participants choosing each cue (pause, thicker, wider, raised, or no answer); the intended cue was chosen in 39% to 74% of cases (see Table 2).]
No difference between primary
and secondary school
In total, 22 of the readers took classes in primary school, and 16 of the readers took classes in secondary school. How the participants related the videos with Flemish Sign Language to the intended prosodic cues is stated in Table 2.
Table 2.
When participants had to relate the videos to prosodic cues, the results varied between 38% and 77% correctness.
There was no significant difference between the percentage of correct answers of primary school readers versus secondary school readers for any of the cues presented. It could be argued that this lack of significance is due to the relatively small number of samples. However, notice that in four situations the students of the secondary school score better, and in the other four situations the children of the primary school score better. This favors the idea that there is no age effect.
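As an illustration of the two-proportion comparison described above, the sketch below applies a standard two-sided two-proportion z-test to one row of Table 2 (‘Pause 2’: 17 of 22 correct in primary school, 11 of 16 in secondary school). The function name is our own, and the authors may have used a different implementation of the proportion test.

```python
from math import erf, sqrt

def two_proportion_z(k1, n1, k2, n2):
    """Two-sided two-proportion z-test: do two groups answer
    correctly at significantly different rates?"""
    p1, p2 = k1 / n1, k2 / n2
    p_pool = (k1 + k2) / (n1 + n2)          # pooled success rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value

# 'Pause 2' from Table 2: 17/22 correct in primary, 11/16 in secondary.
p_value = two_proportion_z(17, 22, 11, 16)
print(p_value > 0.05)  # True: no significant difference between the groups
```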
Some comments of the children
during the sessions
The children’s comments were not recorded
explicitly, as the children were free to comment on anything that they felt
was important. However, two comments kept returning and were deemed
important enough to be mentioned here.
First, we noticed that not a single participant used the terms “faster velocity” or “longer duration”; both were described in terms of faster/slower “speed”.
Table 2: Number of correct answers when participants relate the videos to prosodic cues.

| Component | all (#) | all (%) | Primary (#) | Primary (%) | Secondary (#) | Secondary (%) |
| Pause 1 | 20 | 53% | 11 | 50% | 9 | 56% |
| Displacement 1 | 15 | 39% | 9 | 41% | 6 | 38% |
| Longer duration 1 | 23 | 61% | 13 | 59% | 10 | 63% |
| Faster velocity 1 | 16 | 42% | 10 | 45% | 6 | 38% |
| Pause 2 | 28 | 74% | 17 | 77% | 11 | 69% |
| Displacement 2 | 17 | 45% | 11 | 50% | 6 | 38% |
| Longer duration 2 | 22 | 58% | 12 | 55% | 10 | 63% |
| Faster velocity 2 | 18 | 47% | 9 | 41% | 9 | 56% |
Second, participants deemed the faster velocity more ‘difficult’ to relate to a font. Four participants explicitly said that they were searching for a prosodic cue that was not in this test but was present in the other tests evaluating speech (such as a narrower font resembling faster speech; see Renckens et al., 2021). At least six more participants were stuck on this cue, even to the level that the researcher had to intervene and ask what the problem was. The researcher had to tell them that, even when they did not see the typeface they were looking for, an answer had to be chosen. Thus, one participant in four was clearly struggling with the cue for a faster velocity.
Discussion The cues designed with speech prosody in mind do relate to the intended prosodic components in Flemish Sign Language. A longer pause in sign language correlates with a larger word space; a sign with a longer duration with a wider font; a sign with more displacement with a thicker font; and a sign with a faster velocity with a raised font. These findings are in line with the review by Brentari, Falk & Wolford (2015) of the similarities between the prosody of speech and sign language (Table 1).

We did not find an effect of age. This implies that participants in primary school associated the textual prosodic cues with prosody in Flemish Sign Language in the same way as the children in secondary school. This has the advantage that, once children can read, prosodic cues could be introduced and afterward be used during the whole period they go to school.
But the participants’ oral feedback exposed confusion when these cues are applied to sign language. The prosodic cues used in this study were designed with three very distinct speech variations in mind: loudness (in decibels), duration (in milliseconds), and pitch (in Hertz). The prosodic components in sign language are very different: displacement (a distance), duration (in milliseconds), and velocity (distance per millisecond). Here, velocity closely resembles duration: both are expressed in ‘time’ units (often seconds or milliseconds). This close interwovenness caused confusion for the participants. It seemed that they intuitively extracted only two prosodic components in sign language: displacement and speed. This constitutes the main drawback of this study. We did not anticipate the difficulty the children had with the three prosodic components. We only learned during the data collection that the interwovenness of duration and velocity was exceptionally strong. Because we did not anticipate this issue, we had no alternative prosodic cues to address it.
Despite this confusion, the intended cue for a faster velocity (the raised text) was still marked most often (42% and 47%).
That could be explained by the fact that the cue some participants expected, a narrower font, was not presented (participants knew that cue from the research about the voice that ran at the same time). Participants thus chose a raised font, probably because the bold and wider fonts were already related to another prosodic component in sign language. If more cues had been provided to choose from, it is uncertain whether the task of relating cues to the movies would have been answered as consistently as it was now.
While this study focused on Flemish Sign Language, we can assume that the findings are valid for multiple sign languages, in line with Brentari, Falk & Wolford (2015), who discussed several sign languages. This should still be evaluated with different sign languages and different cultures as a reference, both of which could have another perception of prosody.
At this moment, the practical usage of this preliminary study is limited. We evaluated the three prosodic cues most commonly used in visual prosody for speech. The participants’ oral feedback pointed out that this approach cannot be transferred to sign language easily. We advise performing a new study that evaluates whether two prosodic cues are sufficient to represent the intrinsic characteristics of (Flemish) Sign Language: on one side ‘displacement’, on the other side velocity and duration grouped together as ‘speed’.
Conclusion This research confirms the hypothesis that “Deaf readers relate typographic prosodic cues designed with the speech properties in mind consistently to prosodic components in Flemish Sign Language.” The test was done with participants relating three prosodic cues in type to movies showing prosodic components in sign language. The prosodic cues were correctly related to their intended prosodic components in the sign language.

However, the three prosodic components in sign language are more closely interwoven with each other than the three prosodic speech variations on which the prosodic cues are based. That caused confusion in the terminology used by the readers: “velocity” and “duration” were both referred to as “speed”. Thus, readers intuitively extract only two prosodic components in sign language and encounter problems with three prosodic cues in text.

Based on the results of this study, we recommend that visual prosody be adapted for sign language. This requires further research.
Acknowledgments We would like to thank all schools in the Dutch-
speaking regions that supported this research and facilitated connecting
with deaf students. In alphabetical order: Antwerp Plus (Antwerp), BUSO
Zonnebos (Schoten), Cor Emous (The Hague), Kasterlinden (Kasterlinden),
KIDS (Hasselt), Koninklijk instituut Woluwe (Woluwe), Sint Gregorius (Ghent-
Bruges), Sint-Lievenspoort (Ghent), Spermalie (Bruges). Without their help,
this research would not have been possible.
References
ADVANCED BIONICS. (unpublished version). A musical journey through the
rainforest. [Teaching material]
BESSEMANS, A.; RENCKENS, M.; BORMANS, K.; NUYTS, E.; & LARSON, K. (2019). “Visual prosody supports reading aloud expressively.” Visible Language. 53 (3): 28-49.
BOONS, T.; DE RAEVE, L.; LANGEREIS, M.; PEERAER, L.; WOUTERS, J.; & VAN WIERINGEN, A. (2013). “Expressive vocabulary, morphology, syntax and narrative skills in profoundly deaf children after early cochlear implantation.” Research in Developmental Disabilities. 34: 2008–2022.
BRENTARI, D.; FALK, J.; & WOLFORD, G. (2015). “The acquisition of prosody in American Sign Language.” Language. 91 (3): e144-e168. https://doi.org/10.1353/lan.2015.0042.
CHIN, S.B.; BERGESON, T.R.; & PHAN, J. (2012). “Speech Intelligibility and Prosody Production in Children with Cochlear Implants.” Journal of Communication Disorders. 45 (5): 355–366. http://dx.doi.org/10.1016/j.jcomdis.2012.05.003.
DE CLERCK, I.; PETTINATO, M.; VERHOEVEN, J.; & GILLIS, S. (2018). “Prosodic modulation in the babble of cochlear implanted and normally hearing infants: a perceptual study using a visual analogue scale.” First Language. 38 (5): 481–502. https://doi.org/10.1177/0142723718773957.
DORMAN, M.F.; COOK NATALE, S.; BUTTS, A.M.; ZEITLER, D.M.; & CARLSON, M.L. (2017). “The sound quality of cochlear implants: Studies with single-sided deaf patients.” Otology & Neurotology. 38 (8): e268–e273. https://doi.org/10.1097/MAO.0000000000001449.
ELLIOTT, E.A. & JACOBS, A.M. (2013). “Facial Expressions, Emotions, and Sign Languages.” Frontiers in Psychology. 4 (115). https://doi.org/10.3389/fpsyg.2013.00115.
KIDS (unpublished). Teaching materials for children with language disorders. [Internal teaching materials]
KOENEN, L.; BLOEM, T.; JANSSEN, R.; & VAN DE VEN, A. (2005). Gebarentaal. De taal van doven in Nederland (Sign Language. The language of the deaf in the Netherlands). Vi-Taal: The Hague.
LEWIS, C. & WALKER, P. (1989). “Typographic influences on reading.” British Journal of Psychology. 80 (2): 241-257.
MARKIDES, A. (1983). The speech of hearing-impaired children. Manchester University Press, New Hampshire, USA.
ØYDIS, H. (2013). Acoustic Features of Speech by Young Cochlear Implant Users. [Dissertation] Universiteit Antwerpen.
PATEL, R.; KEMBER, H.; & NATALE, E. (2014). “Feasibility of Augmenting Text with Visual Prosodic Cues to Enhance Oral Reading.” Speech Communication. 65. https://doi.org/10.1016/j.specom.2014.07.002.
PATEL, R. & MCNAB, C. (2010). “Feasibility of augmenting text with visual prosodic cues to enhance oral reading.” Speech Communication. 53 (3): 431-441.
RENCKENS, M.; DE RAEVE, L.; NUYTS, E.; PÉREZ MENA, M.; & BESSEMANS, A. (2021). “Visual prosody supports reading aloud expressively for deaf readers.” Visible Language. 55 (1).
RUDE, M. (2016). Prosodic writing shows L2 learners’ intonation by 3D letter shapes: state, results, and attempts to increase 3D perception. [Internal document]. University of Nagoya.
SANDLER, W. (2010). “Prosody and syntax in sign languages.” Transactions of the Philological Society. 108 (3): 298–328. https://doi.org/10.1111/j.1467-968X.2010.01242.x.
SCARBEL, L.; VILAIN, A.; LOEVENBRUCK, H.; & SCHMERBER, S. (2012). An acoustic study of speech production by French children wearing cochlear implants. 3rd Early Language Acquisition Conference, Dec 2012, Lyon, France.
SEE, R.L.; DRISCOLL, V.D.; GFELLER, K.; KLIETHERMES, S.; & OLESON, J. (2013). “Speech intonation and melodic contour recognition in children with cochlear implants and with normal hearing.” Otology and Neurotology. 34 (3): 490–498. https://doi.org/10.1097/MAO.0b013e318287c985.
SHAIKH, D. (2009). “Know your typefaces! Semantic differential presentation of 40 onscreen typefaces.” Usability News. 11 (2).
STAUM, M.J. (1987). “Music Notation to Improve the Speech Prosody of Hearing Impaired Children.” Journal of Music Therapy. 24 (3): 146-159.
VAN DEN BOGAERDE, B. (2012). “Kun je alles zeggen in gebarentaal? Over taal van doven en slechthorenden (Can you say anything in sign language? About the language of the deaf and hard of hearing).” [Online] http://www.taalcanon.nl/vragen/kun-je-alles-zeggen-in-gebarentaal. [1 March 2017].
VANDER BEKEN, K.; DEVRIENDT, V.; VAN DEN WEYNGAERD, R.; DE RAEVE, L.; LIPPENS, K.; BOGAERTS, J.; & MOERMAN, D. (2010). “Personen met een auditieve handicap (Persons with an auditory handicap).” In: BROEKAERT et al. (2016, fourteenth edition). Handboek bijzondere orthopedagogiek (Manual for extraordinary remedial education). Garant. Antwerpen-Apeldoorn. p. 131-210.
VAN HERREWEGHE, M. (1995). De Vlaams-Belgische gebarentaal: een eerste verkenning (The Flemish-Belgian Sign Language: a first exploration). Ghent: Academia Press.
VAN UDEN, A. (1973). Taalverwerving door taalarme kinderen (Language acquisition by language-poor children). Universitaire Pers Rotterdam, Rotterdam.
Authors
Maarten Renckens
Maarten Renckens is a teacher and design researcher with a love for letters and a heart for people. Dealing with a reading difficulty himself, he is very interested in the reading process. His projects include the typeface ‘Schrijfmethode Bosch’ (Writing Method Bosch), which teaches children how to write, and typefaces to encourage beginner readers and readers with hearing loss to read more expressively. With a background in architectural engineering, he is used to approaching concepts technically and mathematically. He applies this technical knowledge to unravel letterforms, in order to determine the effects of different letterforms on the reading process.
Leo De Raeve
Leo De Raeve, PhD, has three professions: he is a Doctor in Medical Sciences, a psychologist, and a teacher of the deaf. He is the founding director of ONICI, the Independent Information and Research Centre on Cochlear Implants, a lecturer at University College Leuven-Limburg, and a scientific advisor of the European Association of Cochlear Implant Users (EURO-CIU).
Erik Nuyts
Prof. Dr. Erik Nuyts is a researcher and lecturer at University College PXL and an associate professor at Hasselt University. He obtained a master’s degree in mathematics and afterwards a PhD in biology.

Since his specialty is research methodology and analysis, his working area is not limited to one specific field. His research experience therefore varies from mathematics to biology, traffic engineering and credit risks, health, physical education, (interior) architecture, and typography.

His responsibilities both at University College PXL and at Hasselt University involve the preparation of research methodology, data collection, and statistical analyses in many different projects. He is responsible for courses in research design, statistics, and mathematics.
April 2021
María Pérez Mena
Dr. María Pérez Mena is an award-winning graphic and type designer. She is a postdoctoral researcher at the legibility research group READSEARCH at the PXL-MAD School of Arts and Hasselt University. María teaches typography and type design in the BA in Graphic Design at PXL-MAD and is a lecturer in the International Master program ‘Reading Type & Typography’ and the Master program ‘Graphic Design’ at the same institution. She received her PhD “with the highest distinction” from the University of the Basque Country and is a member of the Data Science Institute UHasselt.
Ann Bessemans
Prof. Dr. Ann Bessemans is a legibility expert and an award-winning graphic and type designer. She founded the READSEARCH legibility research group at the PXL-MAD School of Arts and Hasselt University, where she teaches typography and type design. Ann is the program director of the international Master program ‘Reading Type & Typography’. She received her PhD from Leiden University and Hasselt University under the supervision of Prof. Dr. Gerard Unger. She is a member of the Data Science Institute UHasselt and the Young Academy of Belgium, and a lecturer at the Plantin Institute of Typography.
Abstract Prosodic structure in sign languages is encoded by articulations of the hands, face, and body. Despite the different physical system, there are many similarities to prosody of spoken language, such as the existence of a prosodic hierarchy, alignment of intonational elements (conveyed by the face) with temporally marked prosodic constituents (conveyed by the hands), and a close relation between prosody and syntax. The latter relation is indirect, however, and does not imply isomorphism. As such, the distribution of intonational elements cannot reliably be used as a diagnostic for syntactic structure, nor can their occurrence be predicted by syntax alone. While the existence of prosody in sign languages underscores the ‘naturalness’ of prosody in human language, the prosodic system emerges gradually, both in children acquiring established sign languages and in new sign languages, underscoring both its complexity and its grammatical character.