Adam Weisser's research while affiliated with Macquarie University and other places
What is this page?
This page lists the scientific contributions of an author who either does not have a ResearchGate profile or has not yet added these contributions to their profile.
It was automatically generated by ResearchGate to record this author's body of work. We create such pages to advance our goal of building and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.
Publications (7)
Everyday environments impose acoustical conditions on speech communication that require interlocutors to adapt their behavior to be able to hear and to be heard. Past research has focused mainly on the adaptation of speech level, while few studies investigated how interlocutors adapt their conversational distance as a function of noise level. Simil...
Everyday listening environments are characterized by far more complex spatial, spectral and temporal sound field distributions than the acoustic stimuli that are typically employed in controlled laboratory settings. As such, the reproduction of acoustic listening environments has become important for several research avenues related to sound percep...
Whether animal or speech communication, environmental sounds, or music -- all sounds carry some information. Sound sources are embedded in acoustic environments that contain any number of additional sources that emit sounds that reach the listener's ears concurrently. It is up to the listener to decode the acoustic informational mix, determine whic...
Estimating the basic acoustic parameters of conversational speech in noisy real-world conditions has been an elusive task in hearing research. Nevertheless, these data are essential ingredients for speech intelligibility tests and fitting rules for hearing aids. Previous surveys did not provide clear methodology for their acoustic measurements and...
The concept of complex acoustic environments has appeared in several unrelated research areas within acoustics in different variations. Based on a review of the usage and evolution of this concept in the literature, a relevant framework was developed, which includes nine broad characteristics that are thought to drive the complexity of acoustic sce...
Real-life acoustic environments are commonly considered to be complex, because they contain multiple sound sources in different locations, room reverberation, and movement—all at continuously varying levels. In comparison, most laboratory-generated sound fields in hearing research are much simpler. However, the continuum between simple and complex...
Speech intelligibility is commonly assessed in rather unrealistic acoustic environments at negative signal-to-noise ratios (SNRs). As a consequence, the results seem unlikely to reflect the subjects’ experience in the real world. To improve the ecological validity of speech tests, different sound reproduction techniques have been used by researcher...
Citations
... Others have used a referential task where interactive conversations can be monitored (Beechey et al., 2019; Weisser and Buchholz, 2019). Another relevant set of studies is exploring how head orientation and movement in realistic environments intersect with speech intelligibility (Hadley et al., 2019; Hendrikse et al., 2019; Weisser et al., 2021). The inclusion of visual information in speech intelligibility testing is an area of active investigation (Devesse et al., 2020; Llorach et al., 2021) and is the next step planned for the ECO-SiN materials. ...
... Another explanation for the differences in performance measured for the different speech materials in certain environments is that the complexities of the noise may have differentially interacted with the speech materials (cf. Weisser et al., 2019a, for an in-depth discussion on acoustic complexity). For example, some background noises may contain informational masking due to competing speech (e.g., advertisements are playing on a TV in the living room background noise, people are talking over a table in the dinner party background noise), which may have interfered more strongly with the conversational ECO-SiN sentences. ...
... For the perturbation set, we consider all degradations commonly found in various audio processing tasks including additive noise, speech distortions (e.g. clipping and frequency masking, frequency resampling, pitch shifting), compression (e.g., mu-law and MP3), and recorded binaural sounds [30,31]. We also use the data collected using the binaural multichannel wiener filter (MWF) [32] algorithm, and find that adding datasets and perturbations with subtle differences increases the robustness of our model to small differences. ...
... Studies of turn-taking using simultaneous speech have been criticized for low ecological validity, because in real-life group conversations, talkers usually do not start simultaneously. The use of alternating talkers is a logical approach, but such designs have to deal with the fact that speech intelligibility is at ceiling if performed at realistic SNRs (Bronkhorst, 2000; for realistic SNRs see: Mansour et al., 2021; Smeds et al., 2015; Weisser & Buchholz, 2019; Wu et al., 2018). Kitterick et al. (2010) compared different starting times of phrases across talkers. ...
... It might well be that the failure to demonstrate benefits in terms of speech recognition scores is due to ceiling effects: the speech reception threshold (SRT) measure commonly used in studies of people with mild to moderate hearing impairment corresponds to SNRs between −10 dB and 0 dB. NR algorithms, however, have been shown to be most effective at positive SNRs (Fredelake, Holube, Schlueter, & Hansen, 2012; Smeds, Wolters, & Rung, 2015), which are in turn by far the most common signal-to-noise ratios listeners are exposed to in real life (Buchholz et al., 2016; Smeds et al., 2015). ...