
Debra M. Hardison
- Doctor of Philosophy
- Professor at Michigan State University
About
- 37 Publications
- 7,325 Reads
- 1,634 Citations
Introduction
My research focuses on auditory-visual integration in spoken language processing, co-speech gesture, applications of technology in perception and production training, and the relationship between learner variables and oral communication skill development. My forthcoming book The Multimodal Context of Phonological Learning (Equinox) provides a comprehensive view of the multimodal context in which speech is perceived, produced, taught, and learned.
Current institution: Michigan State University
Publications (37)
Few studies have explored the influence of a speaker’s accent and visual (facial and gestural) cues on second-language (L2) listening comprehension. The current mixed-methods between-groups design investigated: (1) the effects of accent and visual cues on Arab students’ comprehension of recorded lectures delivered by two speakers: first language (L...
This study explored the error gravity of learners’ mispronunciations involving orthographically nonsalient phonological processes in L2 Korean. A recorded DictoSpeak, similar to a dictogloss, was used to facilitate discussion between learners of Korean and a native-speaking interlocutor. Task content was seeded with exemplars of six phonological pr...
Assistive Design for English Phonetic Tools (ADEPT) was developed to improve inclusion in classrooms and enhance collaboration among blind, low vision, and sighted learners of American English (AE) as a second/foreign language through better access to the International Phonetic Alphabet (IPA) symbols and the sounds they represent. Grounded in multi...
Thirty-seven second language (L2) learners of Japanese (21 L1 English, 16 L1 Chinese) participated in an eight-week study abroad (SA) program to Japan. Pre- and post-SA oral proficiency interviews were used for ACTFL-level assessments and ratings of component skills (pronunciation, fluency, grammatical accuracy, vocabulary/content, interaction skil...
Assistive Design for English Phonetic Tools (ADEPT) in Language Learning
This article reviews research findings involving visual input in speech processing in the form of facial cues and co-speech gestures for second-language (L2) learners, and provides pedagogical implications for the teaching of listening and speaking. It traces the foundations of auditory–visual speech research and explores the role of a speaker’s fa...
This timeline provides an update on research since 2009 involving auditory-visual (AV) input in spoken language processing (see Hardison, 2010 for an earlier timeline on this topic). A brief background is presented here as a foundation for the more recent studies of speech as a multimodal phenomenon (e.g., Rosenblum, 2005).
This paper reports on the role of technology in state-of-the-art pronunciation research and instruction, and makes concrete suggestions for future developments. The point of departure for this contribution is that the goal of second language (L2) pronunciation research and teaching should be enhanced comprehensibility and intelligibility as opposed...
Perceivers’ attention is entrained to the rhythm of a speaker’s gestural and acoustic beats. When different rhythms (polyrhythms) occur across the visual and auditory modalities of speech simultaneously, attention may be heightened, enhancing memorability of the sequence. In this three-stage study, Stage 1 analyzed videorecordings of native English...
This study examined factors affecting perception training of vowel duration in L2 Japanese with transfer to production. In a pre-test, training, post-test design, 48 L1 English speakers were assigned to one of three groups: auditory-visual (AV) training using waveform displays, auditory-only (A-only), or no training. Within-group variables were vow...
Research on the effectiveness of short-term study-abroad (SA) programs for improving oral skills has shown mixed results. In this study, 24 L2 German learners (L1 English) provided pre- and post-SA speech samples addressing a hypothetical situation and completed surveys on cross-cultural interest and adaptability; L2 communication affect, strategie...
This study explored the second language perceptual accuracy of Japanese geminates (moraic units) by English-speaking learners at three proficiency levels: beginner (28), low–intermediate (42), and advanced (15). Stimuli included singleton and geminate /t/, /k/, /s/ followed by /a/ or /u/ produced by a native speaker in isolated words and carrier se...
The majority of studies in second-language (L2) speech processing have involved unimodal (i.e., auditory) input; however, in many instances, speech communication involves both visual and auditory sources of information. Some researchers have argued that multimodal speech is the primary mode of speech perception (e.g., Rosenblum 2005). Research on a...
The value of waveform displays as visual feedback was explored in a training study involving perception and production of L2 Japanese by beginning-level L1 English learners. A pretest-posttest design compared auditory-visual (AV) and auditory-only (A-only) Web-based training. Stimuli were singleton and geminate /t,k,s/ followed by /a,u/ in two cond...
The National Education Act of 1999 in Thailand mandated a transition from teacher- to learner-centred instruction for all subjects including English. This shift was associated with the development of communicative ability in English to meet the needs of globalization. The current study investigated the policy behind and implementation of the reform...
This volume is a collection of 13 chapters, each devoted to a particular issue that is crucial to our understanding of the way learners acquire, learn, and use an L2 sound system. In addition, it spans both theory and application in L2 phonology. The book is divided into three parts, with each section unified by broad thematic content: Part I, “The...
Research in the field of phonology has long been dominated by a focus on only one source or modality of input — auditory (i.e., what we hear). However, in face-to-face communication, a significant source of information about the sounds a speaker is producing comes from visual cues such as the lip movements associated with these sounds. Studies on t...
Two experiments explored factors affecting the influence of visual (lip-read) information on auditory speech perception, the “McGurk effect”, in 120 advanced ESL learners of 4 L1s (Japanese, Korean, Spanish, and Malay) and 50 native speakers (NSs) of English. The audio and video speech signals of a female English speaker producing CV syllables with...
Familiarity with a talker's voice and face was found to facilitate processing of second-language speech. This advantage is accentuated when visual cues are limited to either the mouth and jaw area, or eyes and upper cheek areas of a talker's face. Findings are compatible with a multiple-trace model of bimodal speech processing.
This paper provides a sequence of specific techniques and examples for implementing theatre voice training and technology in teaching ESL/EFL oral skills. A layered approach is proposed based on information processing theory in which the focus of learner attention is shifted in stages from the physiological to the linguistic and then to the discour...
This study investigated the contribution of gestures and facial cues to second‐language learners’ listening comprehension of a videotaped lecture by a native speaker of English. A total of 42 low-intermediate and advanced learners of English as a second language were randomly assigned to 3 stimulus conditions: AV‐gesture‐face (audiovisual including...
Experiments using the gating paradigm investigated the effects of auditory–visual (AV) and auditory-only perceptual training on second-language spoken-word identification by Japanese and Korean learners of English. Stimuli were familiar bisyllabic words beginning with /p/, /f/, //, /l/, and /s, t, k/ combined with high, low, and rounded vowels. Res...
Experiments using the gating paradigm investigated the influence of speech style (unscripted vs. scripted), visual cues from a talker’s face (auditory-visual vs. auditory-only presentations), word length (one vs. two syllables) and initial consonant (IC) visual category in spoken word identification by native- (NSs) and nonnative speakers (NNSs) of...
Two types of contextualized input in prosody training were investigated for 28 advanced L2 speakers of English (L1 Chinese). Their oral presentations provided training materials. Native speakers (NSs) of English provided global prosody ratings, and participants completed questionnaires on perceived training effectiveness. Two groups received trai...
Two experiments investigated the effectiveness of computer-assisted prosody training, its generalization to novel sentences and segmental accuracy, and the relationship between prosodic and lexical information in long-term memory. Experiment 1, using a pretest-posttest design, provided native English-speaking learners of French with 3 weeks of trai...
The influence of a talker's face (e.g., articulatory gestures) and voice, vocalic context, and word position were investigated in the training of Japanese and Korean English as a second language learners to identify American English /ɹ/ and /l/. In the pretest-posttest design, an identification paradigm assessed the effects of 3 weeks of training u...
This research provides a multidisciplinary perspective on the factors influencing the process of integrating auditory and visual information in speech perception, and the nature of the mental representations to which speech input is matched for identification. Most previous studies of L2 speech perception had focused on only one source of input—the...