Adam Weisser's research while affiliated with Macquarie University and other places
What is this page?
This page lists the scientific contributions of an author who either does not have a ResearchGate profile or has not yet added these contributions to their profile.
It was automatically created by ResearchGate to create a record of this author's body of work. We create such pages to advance our goal of creating and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.
If you're a ResearchGate member, you can follow this page to keep up with this author's work.
If you are this author, and you don't want us to display this page anymore, please let us know.
Publications (8)
During conversations people effortlessly coordinate simultaneous channels of verbal and nonverbal information to hear and be heard. But the presence of common background noise levels such as those found in cafes and restaurants can be a barrier to conversational success. Here, we used speech and motion-tracking to reveal the behavioral processes th...
Everyday environments impose acoustical conditions on speech communication that require interlocutors to adapt their behavior to be able to hear and to be heard. Past research has focused mainly on the adaptation of speech level, while few studies investigated how interlocutors adapt their conversational distance as a function of noise level. Simil...
Everyday listening environments are characterized by far more complex spatial, spectral and temporal sound field distributions than the acoustic stimuli that are typically employed in controlled laboratory settings. As such, the reproduction of acoustic listening environments has become important for several research avenues related to sound percep...
Whether animal or speech communication, environmental sounds, or music -- all sounds carry some information. Sound sources are embedded in acoustic environments that contain any number of additional sources that emit sounds that reach the listener's ears concurrently. It is up to the listener to decode the acoustic informational mix, determine whic...
Estimating the basic acoustic parameters of conversational speech in noisy real-world conditions has been an elusive task in hearing research. Nevertheless, these data are essential ingredients for speech intelligibility tests and fitting rules for hearing aids. Previous surveys did not provide clear methodology for their acoustic measurements and...
The concept of complex acoustic environments has appeared in several unrelated research areas within acoustics in different variations. Based on a review of the usage and evolution of this concept in the literature, a relevant framework was developed, which includes nine broad characteristics that are thought to drive the complexity of acoustic sce...
Real-life acoustic environments are commonly considered to be complex, because they contain multiple sound sources in different locations, room reverberation, and movement—all at continuously varying levels. In comparison, most laboratory-generated sound fields in hearing research are much simpler. However, the continuum between simple and complex...
Speech intelligibility is commonly assessed in rather unrealistic acoustic environments at negative signal-to-noise ratios (SNRs). As a consequence, the results seem unlikely to reflect the subjects’ experience in the real world. To improve the ecological validity of speech tests, different sound reproduction techniques have been used by researcher...
Citations
... Apparently, voluntary control can also be influenced by visual and auditory information about the relative position of the speaker and the listener. This information makes it possible to estimate the distance between them and, accordingly, to voluntarily (consciously) adjust vocal strength, since an increase in communicative distance leads to a decrease in the sound pressure level that the speaker's voice produces at the listener's location [14,15]. ...
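The level decrease with distance mentioned in the snippet above follows, under free-field conditions, the spherical-spreading law: the level drops by 20·log10(d/d_ref) dB, i.e. about 6 dB per doubling of distance. A minimal sketch (the numeric values are illustrative, not taken from the cited studies):

```python
import math

def spl_at_distance(spl_ref_db: float, d_ref_m: float, d_m: float) -> float:
    """Free-field SPL at distance d_m, given a reference level at d_ref_m.

    Assumes an idealized point source with spherical spreading, so the
    level falls by 20*log10(d/d_ref) dB (~6 dB per doubling of distance).
    """
    return spl_ref_db - 20.0 * math.log10(d_m / d_ref_m)

# Illustrative: a voice measured at 60 dB SPL at 1 m drops to ~54 dB
# when the listener steps back to 2 m.
print(round(spl_at_distance(60.0, 1.0, 2.0), 1))  # → 54.0
```

Real rooms add reverberant energy, so the measured decay is typically shallower than this free-field idealization.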
... To bridge the gap between simplified laboratory measurements and real-life listening and communication situations, complex acoustic environments (CAEs; [2,3]) can be used in conjunction with appropriate techniques for acoustical reproduction and rendering [4,5,6]. Here, real-life CAEs [7,8,9] provide "ground truth" data, establishing a benchmark against which laboratory-based measurements can be evaluated. ...
... This test was administered in the anechoic chamber of the Australian Hearing Hub (Sydney, New South Wales, Australia). The background noise was actual dinner restaurant noise obtained from the Ambisonics Recordings of Typical Environments database (Weisser et al. 2019), presented at 73 dB SPL from an array of 41 speakers spherically distributed in five rows. The target sentences were presented from a speaker situated in front of the participant; their level started at 78 dB SPL (i.e., at a signal-to-noise ratio [SNR] of +5 dB) and varied according to the staircase method until the 50% speech reception threshold (SRT-50, i.e., the SNR corresponding to 50% intelligibility) was estimated. ...
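The staircase procedure described in the snippet above can be sketched generically: raise the SNR after an incorrect response, lower it after a correct one, and estimate the threshold from the reversal points. This is an illustrative 1-down/1-up sketch with a simulated listener (hypothetical parameters), not the exact procedure or step rule of the cited study:

```python
import math
import random

def staircase_srt50(respond, start_snr=5.0, step=2.0, n_reversals=8):
    """Generic 1-down/1-up adaptive staircase converging on the SNR
    yielding 50% correct responses (SRT-50). `respond(snr)` returns
    True for a correct trial."""
    snr = start_snr
    reversals = []
    last_dir = 0
    while len(reversals) < n_reversals:
        direction = -1 if respond(snr) else +1  # correct -> make it harder
        if last_dir and direction != last_dir:  # turning point
            reversals.append(snr)
        last_dir = direction
        snr += direction * step
    return sum(reversals) / len(reversals)  # SRT estimate: mean reversal SNR

# Simulated listener: logistic psychometric function whose 50% point
# (the true SRT) sits at 0 dB SNR -- purely hypothetical values.
random.seed(1)
def listener(snr, true_srt=0.0, slope=0.5):
    p_correct = 1.0 / (1.0 + math.exp(-slope * (snr - true_srt)))
    return random.random() < p_correct

estimate = staircase_srt50(listener)
print(f"estimated SRT-50: {estimate:.1f} dB SNR")
```

With more reversals and a smaller final step size, the estimate tightens around the listener's true 50% point; published procedures typically discard the first few reversals before averaging.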
... Clear speech, for example, can be naturally elicited when a listener has difficulty hearing or understanding. Loud speech can be elicited in noisy environments or when a talker is interacting with interlocutors located at further than typical distances (e.g., Koenig & Fuchs, 2019;Picheny et al., 1986;Smiljanić & Gilbert, 2017;Weisser & Buchholz, 2019). Additionally, clear and loud speech forms can be explicitly elicited in laboratory or clinical environments by instructing talkers to speak clearer or louder than usual (e.g., Lam et al., 2012;Smiljanić & Gilbert, 2017;Whitfield et al., 2018). ...
Reference: Order Affects Clear and Loud Speech Response
... It might well be that the failure to demonstrate benefits in terms of speech recognition scores is due to ceiling effects: the speech reception threshold (SRT) measure commonly used in studies corresponds, in people with mild to moderate hearing impairment, to SNRs between -10 dB and 0 dB. NR algorithms, however, have been shown to be most effective at positive SNRs (Fredelake, Holube, Schlueter, & Hansen, 2012; Smeds, Wolters, & Rung, 2015), which are in turn by far the most common signal-to-noise ratios listeners are exposed to in real life (Buchholz et al., 2016; Smeds et al., 2015). ...