ABSTRACT: This study examined how correlated, or filtered, noise affected efficiency for recognizing two types of signal patterns, Gabor patches and three-dimensional objects. In general, compared with the ideal observer, human observers were most efficient at performing tasks in low-pass noise, followed by white noise; they were least efficient in high-pass noise. Simulations demonstrated that contrast-dependent internal noise was likely to have limited human performance in the high-pass conditions for both signal types. Classification images showed that observers were likely adopting different strategies in the presence of low-pass versus white noise. However, efficiencies were underpredicted by the linear classification images and asymmetries were present in the classification subimages, indicating the influence of nonlinear processes. Response consistency analyses indicated that lower contrast-dependent internal noise contributed somewhat to higher efficiencies in low-pass noise for Gabor patches but not objects. Taken together, the results of these experiments suggest a complex interaction among signals, external noise spectra, and internal noise in determining efficiency in correlated and uncorrelated noise.
Journal of the Optical Society of America A 11/2009; 26(11):B94-109. · 1.67 Impact Factor
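The efficiency measure used in the study above is conventionally defined as the squared ratio of human to ideal-observer sensitivity (d'). A minimal sketch, assuming the standard definition η = (d'_human / d'_ideal)² rather than any study-specific variant:

```python
def efficiency(dprime_human: float, dprime_ideal: float) -> float:
    """Observer efficiency: squared ratio of human sensitivity (d')
    to the sensitivity of the ideal observer on the same task.
    Values range from 0 to 1; 1 means the human extracts all the
    stimulus information the ideal observer does."""
    return (dprime_human / dprime_ideal) ** 2

# Hypothetical values for illustration: a human d' of 1.0 against
# an ideal-observer d' of 2.0 yields an efficiency of 0.25 (25%).
eta = efficiency(1.0, 2.0)
```

Under this definition, the finding that observers were most efficient in low-pass noise means the human-to-ideal d' ratio was highest there, not that absolute performance was necessarily best.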
ABSTRACT: Normal-hearing observers typically have some ability to "lipread," or understand visual-only speech without an accompanying auditory signal. However, talkers vary in how easy they are to lipread. Such variability could arise from differences in the visual information available in talkers' speech, human perceptual strategies that are better suited to some talkers than others, or some combination of these factors. A comparison of human and ideal observer performance in a visual-only speech recognition task found that although talkers do vary in how much physical information they produce during speech, human perceptual strategies also play a role in talker variability.
Vision Research 11/2006; 46(19):3243-58. · 2.38 Impact Factor
ABSTRACT: Previous research has identified a "synchrony window" of several hundred milliseconds over which auditory-visual (AV) asynchronies are not reliably perceived. Individual variability in the size of this AV synchrony window has been linked with variability in AV speech perception measures, but it was not clear whether AV speech perception measures are related to synchrony detection for speech only or for both speech and nonspeech signals. An experiment was conducted to investigate the relationship between measures of AV speech perception and AV synchrony detection for speech and nonspeech signals. Variability in AV synchrony detection for both speech and nonspeech signals was found to be related to variability in measures of auditory-only (A-only) and AV speech perception, suggesting that temporal processing for both speech and nonspeech signals must be taken into account in explaining variability in A-only and multisensory speech perception.
The Journal of the Acoustical Society of America 07/2006; 119(6):4065-73. · 1.56 Impact Factor
ABSTRACT: Native speakers of a language are often unable to consciously perceive, and have altered neural responses to, phonemic contrasts not present in their language. This study examined whether speakers of dialects of the same language with different phoneme inventories also show measurably different neural responses to contrasts not present in their dialect. Speakers with (n=11) and without (n=11) an American English I/E (pin/pen) vowel merger in speech production were asked to discriminate perceptually between minimal pairs of words that contrasted in the critical vowel merger and minimal pairs of control words while their event-related potentials (ERPs) were recorded. Compared with unmerged dialect speakers, merged dialect speakers were less able to make behavioral discriminations and exhibited a reduced late positive ERP component (LPC) effect to incongruent merger vowel stimuli. These results indicate that, between dialects of a single language, behavioral response differences may reflect neural differences related to conscious phonological decision processes.
Brain and Language 01/2006; 95(3):435-49. · 3.31 Impact Factor
ABSTRACT: The identification of the gender of an unfamiliar talker is an easy and automatic process for naïve adult listeners. Sociolinguistic research has consistently revealed gender differences in the production of linguistic variables. Research on the perception of dialect variation, however, has been limited almost exclusively to male talkers. In the present study, naïve participants were asked to categorize unfamiliar talkers by dialect using sentence-length utterances under three presentation conditions: male talkers only, female talkers only, and a mixed gender condition. The results revealed no significant differences in categorization performance across the three presentation conditions. However, a clustering analysis of the listeners' categorization errors revealed significant effects of talker gender on the underlying perceptual similarity spaces. The present findings suggest that naïve listeners are sensitive to gender differences in speech production and are able to use those differences to reliably categorize unfamiliar male and female talkers by dialect.
Journal of Language and Social Psychology 06/2005; 24(2):182-206. · 1.04 Impact Factor
ABSTRACT: Two experiments were conducted to examine the temporal limitations on the detection of asynchrony in auditory-visual (AV) signals. Each participant made asynchrony judgments about speech and nonspeech signals presented over an 800-ms range of AV onset asynchronies. Consistent with previous findings, all conditions revealed a wide window of several hundred milliseconds over which AV signals were judged to be synchronous. In addition, signals in which the visual component led the auditory component were more likely to be judged as synchronous than signals in which the auditory component led the visual component. In contrast with earlier reports (Dixon & Spitz, 1980; McGrath & Summerfield, 1985), the present results also demonstrated a similar AV synchrony window for speech and nonspeech signals, even when these signals were matched for duration. Visual phonetic characteristics of the speech signals, however, did influence the size and shape of the AV synchrony window. Finally, the onset of the relevant aspects of the stimulus, rather than the duration or offset, was most important for asynchrony judgments for both speech and nonspeech signals. Relationships with recent data on neural mechanisms of multisensory enhancement and convergence are discussed.