Dolly Goldenberg’s research while affiliated with Yale University and other places




Publications (6)


Concurrent aero-tactile stimulation does not bias perception of VOT for non-initial stops
  • Article

September 2018 · 17 Reads · 1 Citation

The Journal of the Acoustical Society of America

Dolly Goldenberg · Mark Tiede · D. H. Whalen

Previous work has established that puffs of air applied to the skin and timed with listening tasks bias the perception of voicing in onset stops by naive listeners (Gick and Derrick, 2009; Goldenberg et al., 2015). While the primary cue for the voicing contrast in stops is VOT (Lisker and Abramson, 1964), in English aspiration typically functions as a cue foot-initially. This study tests the effect of air puffs on perception of voicing for English stops in a non-foot-initial context (“apa/aba”) using VOT continua. Goldenberg et al. (2015) have shown that listeners are sensitive to aero-tactile effects only when these are congruent with the expected contrast (i.e., in VOT but not vowel quality distinctions). Since VOT is generally non-contrastive for English stops that are not foot-initial, air puffs were not expected to affect perception in the current case, and indeed, of 22 participants (11 females; mean age 34.2) tested, 20 showed no effect. Comparison of this null result to the significant bias observed in the earlier (foot-initial context) study extends the finding that, for aero-tactile stimulation to bias perception, the cues must be consistent with those expected in production of the perceived sounds.
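
Perceptual bias of this kind is typically quantified by fitting psychometric functions to identification responses and comparing category boundaries across conditions. The sketch below illustrates that analysis with a logistic fit; all data values, the continuum steps, and the response labels are invented for illustration and are not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, boundary, slope):
    """Probability of a 'voiceless' response as a function of VOT (ms)."""
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

# Hypothetical proportions of "pa" responses along a 7-step VOT continuum,
# with and without a concurrent air puff (invented values)
vot_steps = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)
p_no_puff = np.array([0.02, 0.05, 0.15, 0.50, 0.85, 0.95, 0.98])
p_puff = np.array([0.03, 0.08, 0.25, 0.62, 0.90, 0.96, 0.99])

(b0, s0), _ = curve_fit(logistic, vot_steps, p_no_puff, p0=[30.0, 0.2])
(b1, s1), _ = curve_fit(logistic, vot_steps, p_puff, p0=[30.0, 0.2])

# A boundary shift toward shorter VOTs under the puff condition would
# indicate a bias toward "voiceless" percepts; a near-zero shift
# corresponds to the null result reported for the non-foot-initial context.
boundary_shift = b1 - b0
print(f"category boundary shift: {boundary_shift:.1f} ms")
```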


Quantifying kinematic aspects of reduction in a contrasting rate production task

May 2017 · 308 Reads · 64 Citations

The Journal of the Acoustical Society of America

Mark Tiede · Dolly Goldenberg · [...]

Electromagnetic articulometry (EMA) was used to record the 720 phonetically balanced Harvard sentences (IEEE, 1969) from multiple speakers at normal and fast production rates. Participants produced each sentence twice, first at their preferred “normal” speaking rate followed by a “fast” production (for a subset of the sentences two normal rate productions were elicited). They were instructed to produce the “fast” repetition as quickly as possible without making errors. EMA trajectories were obtained at 100 Hz from sensors placed on the tongue, lips, and mandible, corrected for head movement and aligned to the occlusal plane. Synchronized audio was recorded at 22050 Hz. Comparison of normal to fast acoustic durations for paired utterances showed a mean 67% length reduction and, as assessed using Mermelstein's (1975) method, two fewer syllables on average. A comparison of inflections in vertical jaw movement between paired utterances showed an average of 2.3 fewer syllables. Cross-recurrence analysis of distance maps computed on paired sensor trajectories comparing corresponding normal:normal to normal:fast utterances showed systematically lower determinism and entropy for the cross-rate comparisons, indicating that rate effects on articulator trajectories are not uniform. Examples of rate-related differences in gestural overlap that might account for these differences in predictability will be presented. [Work supported by NSF.]
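
The jaw-inflection syllable count mentioned above can be sketched as follows. The function is a simplified stand-in for the actual analysis, and the jaw signal is synthetic.

```python
import numpy as np

def count_jaw_inflections(jaw_height):
    """Estimate syllable count from direction reversals (inflections)
    in a vertical jaw trajectory: one open/close cycle ~ one syllable."""
    velocity = np.diff(jaw_height)
    signs = np.sign(velocity)
    signs = signs[signs != 0]                  # drop flat samples
    reversals = np.sum(signs[1:] != signs[:-1])
    return int(reversals // 2)                 # two reversals per cycle

# Synthetic 100 Hz jaw signal: five open/close cycles over two seconds,
# mimicking a five-syllable utterance
t = np.linspace(0, 2, 200)
jaw = np.sin(2 * np.pi * 2.5 * t)              # 2.5 Hz -> 5 cycles
n_syllables = count_jaw_inflections(jaw)
print(n_syllables)
```

A real trajectory would need smoothing and an amplitude criterion before counting reversals, since sensor noise introduces spurious inflections.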


Speech and manual gesture coordination in a pointing task
  • Conference Paper
  • Full-text available

May 2016 · 459 Reads · 24 Citations

Figure 3. The interaction of stress and interval effects on the durations of the intervals (the lags between temporal landmarks of the gestures).
Figure 4. Gestural scores (in ms) of the target words (MIma and miMA). Boxes denote the duration of the finger gesture, the H of the L+H* pitch accent (and L in one sentence), the consonants (C1 and C2), and the vowels. Vertical lines indicate maximum constriction/displacement. Because the release of V1 (from maximum constriction to the end of the gesture) often overlapped with the constriction-forming movement of V2 (from V2 onset to V2 maximum constriction), only the constriction-forming part of each vowel gesture is shown.

Mind the gap: Electromagnetic articulometer observation of speech articulation in conversational turn-taking

April 2016 · 49 Reads

The Journal of the Acoustical Society of America

Joe Perkell pioneered the use of electromagnetic articulometry (EMA) for the observation and quantification of the kinematics of speech articulator movements. In the spirit of his research, we have extended these methods to EMA observation of two facing speakers interacting in conversation. The gaps or pauses between turns in speaking are known from acoustic measurement to be relatively short in duration, about 200 ms (roughly the length of a syllable) on average, and this gap duration has been shown to be consistent across widely diverse languages and cultures (Stivers et al., 2009). However, because the cognitive latencies for producing a response are much longer than this interval, its planning must occur during the incoming turn. Here we provide evidence for this planning from articulator movements that anticipate speech. Movements of sensors attached to the tongue, jaw, and lips have been tracked for each of 12 speaker pairs. Gaps are measured from the acoustic end of one speaker's turn to the onset of aggregated articulator movement above a 20% peak-velocity threshold for the respondent. Results show that the speech articulators typically assume an appropriate posture for initiation of a speaking turn well before the onset of speech.
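
The gap measurement described above can be sketched roughly as below, using a synthetic speed trace. The 20% peak-velocity criterion follows the abstract; the signal shape and timing values are invented for illustration.

```python
import numpy as np

def movement_onset(speed, fs, threshold_frac=0.2):
    """Time (s) of the first sample where aggregated articulator speed
    exceeds a fraction of its peak (the 20% criterion)."""
    threshold = threshold_frac * np.max(speed)
    onset_idx = int(np.argmax(speed > threshold))  # first sample above
    return onset_idx / fs

# Synthetic respondent speed trace at 100 Hz: quiescent, then a
# bell-shaped velocity pulse peaking at t = 1.0 s (invented values)
fs = 100
t = np.arange(0, 2, 1 / fs)
speed = np.exp(-((t - 1.0) ** 2) / (2 * 0.1 ** 2))

turn_end = 0.65  # hypothetical acoustic end of the incoming turn (s)
onset = movement_onset(speed, fs)
gap = onset - turn_end
print(f"articulatory gap: {gap * 1000:.0f} ms")
```

With these invented values the articulatory onset trails the turn end by under 200 ms; in the study, a negative gap would indicate posturing that anticipates the turn.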


Dual electromagnetic articulometer observation of head movements coordinated with articulatory gestures for interacting talkers in synchronized speech tasks

April 2015 · 16 Reads · 2 Citations

The Journal of the Acoustical Society of America

Previous research has demonstrated that speakers readily entrain to one another in synchronized speech tasks (e.g., Cummins 2002, Vatikiotis-Bateson et al. 2014, Natif et al. 2014), but the mixture of auditory and visual cues they use to achieve such alignment remains unclear. In this work, we extend the dual-EMA paradigm of Tiede et al. (2012) to observe the speech and coordinated head movements of speaker pairs interacting face-to-face during synchronized production in three experimental tasks: the “Grandfather” passage, repetition of short rhythmically consistent sentences, and competing alternating word pairs (e.g., “topper-cop” vs. “copper-top”). The first task was read with no eye contact, the second was read and then produced with eye contact, and the third required continuous eye contact. Head movement was characterized using the tracked position of the upper incisor reference sensor. Prosodic prominence was identified using F0 and amplitude contours from the acoustics, and gestural stiffness on articulator trajectories. Preliminary results show that frequency and amplitude of synchronized head movement increased with task eye contact, and that this was coordinated systematically with both acoustic and articulatory prosodic prominence. [Work supported by NIH.]


Temporal alignment between head gesture and prosodic prominence in naturally occurring conversation: An electromagnetic articulometry study

April 2014 · 45 Reads · 3 Citations

The Journal of the Acoustical Society of America

Studies of the relationship between speech events and gesticulation have suggested that the peak of the prosodic pitch accent serves as a target with which body gestures may be coordinated (Roth, 2002; Loehr, 2004). While previous work has relied on controlled speech elicitation generally restricted to nonrepresentational extension/retraction (Leonard and Cummins, 2011) or iconic (Kelly et al., 2008) gestures, here we examine the kinematics of the speech articulators and associated head movements from pairs of individuals engaged in spontaneous conversation. Age- and gender-matched native speakers of American English seated 2 m apart were recorded using two electromagnetic articulometer (EMA) devices (Tiede and Mooshammer, 2013). Head movements were characterized by the centroid of reference sensors placed on the left and right mastoid processes and the upper incisors. Pitch accents were coded following the ToBI implementation of Pierrehumbert's intonational framework (Beckman and Elam, 1997). Preliminary findings show that the apex (point of maximum excursion) of head movements within an IP generally precedes the peak of the associated pitch accent, and is consistently aligned with co-occurring articulatory events within the syllable. [Work supported by NIH NIDCD-DC-012350.]
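
A minimal sketch of the apex-to-pitch-peak alignment measure is shown below using synthetic traces. The apex and F0 peak times are invented; the study itself coded pitch accents from ToBI labels and tracked head movement from EMA reference sensors.

```python
import numpy as np

def apex_lag(head_excursion, f0, fs):
    """Lag (s) of the head-movement apex (maximum excursion) relative to
    the pitch-accent peak; negative means the apex precedes the F0 peak."""
    apex_t = np.argmax(head_excursion) / fs
    peak_t = np.argmax(f0) / fs
    return apex_t - peak_t

fs = 100
t = np.arange(0, 1, 1 / fs)
# Synthetic traces: head apex at 0.40 s, F0 peak at 0.48 s (invented)
head = np.exp(-((t - 0.40) ** 2) / (2 * 0.05 ** 2))
f0 = 180 + 40 * np.exp(-((t - 0.48) ** 2) / (2 * 0.06 ** 2))
lag = apex_lag(head, f0, fs)
print(f"apex lag: {lag * 1000:.0f} ms")  # negative -> apex precedes peak
```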


Citations (5)


... We observe strong aero-tactile integration in speech categorization during two-alternative forced-choice (2AFC) tasks (Derrick and Gick, 2013; Gick and Derrick, 2009; Gick et al., 2010). However, recent evidence goes against aero-tactile integration during more complex speech perception tasks (Derrick et al., 2019c; Goldenberg et al., 2018). It is possible that aero-tactile integration in speech perception can easily be overwhelmed by more influential auditory and visual information during complex speech. ...

Reference:

Tri-modal speech: Audio-visual-tactile integration in speech perception
Concurrent aero-tactile stimulation does not bias perception of VOT for non-initial stops
  • Citing Article
  • September 2018

The Journal of the Acoustical Society of America

... We use this dataset during stage-2 of the f-APTAI training and to evaluate speaker-independent performance. This dataset, the Haskins production rate comparison (HPRC) [32], contains recordings from four female and four male subjects reciting 720 phonetically balanced IEEE sentences at "normal" (HPRC-N) and "fast" (HPRC-F) speaking rates. Although the speakers repeat utterances, we randomly select only one repetition per utterance and speaker. ...

Quantifying kinematic aspects of reduction in a contrasting rate production task
  • Citing Article
  • May 2017

The Journal of the Acoustical Society of America

... Prior to presenting the results of our study, we establish that the overall patterns for co-speech gesture timing and stress-based enhancement in our study are similar to those found in previous work. First, co-speech gesture apexes have been shown to correlate in time with the pitch peak of stressed/pitch-accented syllables [8, 43, 44]. In our own study, we find that the apex is timed to the target word (e.g. ...

Speech and manual gesture coordination in a pointing task

... supine vs. upright) (Stone et al., 2007; Tiede, Masaki, & Vatikiotis-Bateson, 2000), but head posture variation in an upright position has not been shown to influence articulation. Head movements can be coordinated with speech movements in some contexts (Goldenberg, Tiede, Honorof, & Mooshammer, 2014; Tiede & Goldenberg, 2015), and can accompany pitch excursions (Ishi, Ishiguro, & Hagita, 2014; Krivokapić, 2014), but no studies have examined whether head movement has inertial consequences for the jaw or lips. Because of their relatively small masses, inertial forces on articulators are likely negligible compared to forces generated by muscle contraction. ...

Dual electromagnetic articulometer observation of head movements coordinated with articulatory gestures for interacting talkers in synchronized speech tasks
  • Citing Article
  • April 2015

The Journal of the Acoustical Society of America

... These studies take the position of the accented syllables as the key prosodic landmark with which gesture movements align, but they do not take into account intonational phrase boundaries. In general, they found a temporal alignment pattern similar to that shown for hand gestures: accented syllables are the anchoring point in speech for the most prominent part of a head movement, the gesture apex (defined as the specific point in time when the head changes its direction in the vertical or lateral movement) (Alexanderson et al., 2013; Ambrazaitis et al., 2015; Fernández-Baena et al., 2014; Goldenberg et al., 2014; Graf et al., 2002; Hadar et al., 1983; Ishi et al., 2014; Kim et al., 2014). However, these studies also reported variability in this alignment pattern. ...

Temporal alignment between head gesture and prosodic prominence in naturally occurring conversation: An electromagnetic articulometry study
  • Citing Article
  • April 2014

The Journal of the Acoustical Society of America