Debiasing the FAIRFACE dataset with respect to gender and race for STEREOTYPE queries.

Source publication
Preprint
Vision-language model (VLM) embeddings have been shown to encode biases present in their training data, such as societal biases that ascribe negative characteristics to members of various racial and gender identities. VLMs are quickly being adopted for a variety of tasks ranging from few-shot classification to text-guided image generation, making...
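One common way such embedding bias is quantified is by comparing how strongly a stereotype text query associates with image embeddings from different demographic groups. The sketch below is illustrative only: it uses random placeholder vectors in place of real VLM outputs, and the group names, function names, and gap-based bias score are assumptions, not the paper's method.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def stereotype_association(group_embeds, query_embed):
    """Mean cosine similarity between each group's image embeddings and a
    stereotype text query; the spread across groups is a simple bias score
    (a gap of 0 would mean no group is favored by the query)."""
    means = {g: float(np.mean([cosine(e, query_embed) for e in embeds]))
             for g, embeds in group_embeds.items()}
    gap = max(means.values()) - min(means.values())
    return means, gap

rng = np.random.default_rng(0)
dim = 512
# Placeholder embeddings standing in for real VLM image/text outputs.
groups = {g: [rng.normal(size=dim) for _ in range(8)]
          for g in ["group_a", "group_b"]}
query = rng.normal(size=dim)
per_group, gap = stereotype_association(groups, query)
```

In practice the placeholders would be replaced by image embeddings of FairFace faces and the text embedding of a stereotype query, both produced by the same VLM encoder.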

Similar publications

Preprint
Code-switching automatic speech recognition (ASR) aims to accurately transcribe speech that contains two or more languages. To better capture language-specific speech representations and address language confusion in code-switching ASR, the mixture-of-experts (MoE) architecture and an additional language diarization (LD) decoder are commonly employ...
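The MoE idea referenced above can be sketched minimally: a gating network scores each (here, language-specific) expert per input frame, and the layer output is the gate-weighted sum of expert outputs. This is a toy numpy illustration of generic MoE routing, not the architecture of the cited paper; all shapes and names are assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    z = np.exp(x - x.max())
    return z / z.sum()

def moe_forward(frame, gate_w, expert_ws):
    """One MoE layer: the gate assigns a weight to each expert,
    and the output mixes the experts' linear transforms of the frame."""
    gate = softmax(gate_w @ frame)                  # one weight per expert
    outs = np.stack([w @ frame for w in expert_ws]) # (n_experts, d)
    return gate @ outs, gate

rng = np.random.default_rng(1)
d = 16
gate_w = rng.normal(size=(2, d))                 # 2 experts, e.g. two languages
experts = [rng.normal(size=(d, d)) for _ in range(2)]
frame = rng.normal(size=d)
out, gate = moe_forward(frame, gate_w, experts)
```

A language diarization decoder would, in this picture, supply an auxiliary supervision signal that encourages the gate to route frames to the expert matching the spoken language.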