Objectives
Misophonia is a highly prevalent yet understudied condition characterized by aversion to particular environmental sounds. Oral/nasal sounds (e.g., chewing, breathing) have been the focus of prior research, but reported experiences vary, warranting an objective investigation. Experiment 1 asked whether human-produced oral/nasal sounds were more aversive than human-produced non-oral/nasal sounds and non-human/nature sounds. Experiment 2 additionally asked whether machine-learning algorithms could predict the presence and severity of misophonia.
Method
Sounds were presented to individuals with misophonia (Exp. 1: N = 48; Exp. 2: N = 45) and members of the general population (Exp. 1: N = 39; Exp. 2: N = 61). Participants self-reported aversiveness ratings for each sound.
Results
Sounds from all three source categories, not just oral/nasal sounds, were rated as significantly more aversive by individuals with misophonia than by controls. Further, models incorporating all sound sources classified misophonia with 89% accuracy and significantly predicted misophonia severity (r = 0.75).
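To make the modeling result concrete, the following is a minimal sketch of how such an analysis could be set up, assuming per-sound aversiveness ratings serve as features. The feature layout, estimator choices (logistic regression for classification, ridge regression for severity), cross-validation scheme, and all variable names are illustrative assumptions, not the study's reported pipeline.

```python
# Illustrative sketch only: cross-validated classification and regression over
# hypothetical per-sound aversiveness ratings. Data, estimators, and the
# cross-validation scheme are assumptions for demonstration purposes.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: 106 participants x 30 sounds (aversiveness ratings),
# a binary misophonia label, and a continuous severity score.
ratings = rng.uniform(0, 10, size=(106, 30))
has_misophonia = rng.integers(0, 2, size=106)
severity = rng.uniform(0, 20, size=106)

# Classification: predict misophonia status from ratings across all sources.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
acc = cross_val_score(clf, ratings, has_misophonia, cv=5, scoring="accuracy")
print(f"mean cross-validated classification accuracy: {acc.mean():.2f}")

# Regression: predict severity, then correlate predictions with observed scores.
reg = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
pred = cross_val_predict(reg, ratings, severity, cv=5)
r = np.corrcoef(pred, severity)[0, 1]
print(f"predicted vs. observed severity: r = {r:.2f}")
```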
Conclusions
Misophonia should be conceptualized as more than an aversion to oral/nasal sounds, a finding with implications for future diagnostics and for consistency across experiments.