Yaohai Huang’s scientific contributions

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (5)


Racial Faces in the Wild: Reducing Racial Bias by Information Maximization Adaptation Network
  • Conference Paper

October 2019 · 212 Reads · 434 Citations

Mei Wang · Jiani Hu · [...] · Yaohai Huang



Table captions (extracted from the paper): We select MF2 (9 images/id; 2.3M images, 100K identities) as the head data, denoted MF2-h9; the rest of the data is the tail data, denoted MF2-t9. Face identification and verification evaluation on MF2 (ResNet50), and comparison with 64-layer ResNet results. In these tables, "Rank1 acc." refers to the rank-1 face identification accuracy under 1M distractors, and "ver." refers to face verification TAR (True Accept Rate) at 10^-6 FAR (False Accept Rate).
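The captions above use two standard face-recognition metrics: rank-1 identification accuracy under a large distractor gallery, and verification TAR at a fixed FAR. A minimal NumPy sketch of both, assuming L2-normalized features scored by cosine similarity (function names are illustrative, not the paper's code):

```python
import numpy as np

def rank1_accuracy(probe_feats, gallery_feats, probe_ids, gallery_ids):
    """Rank-1 identification: the top-scoring gallery entry (among true
    matches plus distractors) must share the probe's identity."""
    sims = probe_feats @ gallery_feats.T      # cosine similarity (L2-normalized)
    top1 = sims.argmax(axis=1)                # best-matching gallery index per probe
    return float(np.mean(gallery_ids[top1] == probe_ids))

def tar_at_far(genuine_scores, impostor_scores, far=1e-6):
    """TAR at a fixed FAR: pick the threshold at which the fraction of
    accepted impostor pairs equals `far`, then measure the fraction of
    genuine pairs accepted at that threshold."""
    thresh = np.quantile(impostor_scores, 1.0 - far)
    return float(np.mean(genuine_scores >= thresh))
```

Note that reporting TAR at 10^-6 FAR, as the captions do, requires on the order of millions of impostor pairs for the threshold estimate to be meaningful.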
Unequal-Training for Deep Face Recognition With Long-Tailed Noisy Data
  • Conference Paper
  • Full-text available

June 2019 · 183 Reads · 168 Citations


Racial Faces in-the-Wild: Reducing Racial Bias by Deep Unsupervised Domain Adaptation

December 2018 · 171 Reads

Despite the progress deep learning has brought to face recognition (FR), racial bias demonstrably degrades the performance of realistic FR systems. Existing training and testing databases consist almost entirely of Caucasian subjects, and there are still no independent testing databases to evaluate racial bias, nor training databases and methods to reduce it. To facilitate research toward resolving these fairness issues, this paper contributes a new dataset, the Racial Faces in-the-Wild (RFW) database, with two important uses: 1) racial bias testing: four testing subsets (Caucasian, Asian, Indian, and African) are constructed, each containing about 3,000 individuals with 6,000 image pairs for face verification; 2) racial bias reduction: one labeled training subset of Caucasians and three unlabeled training subsets of Asians, Indians, and Africans are offered to encourage FR algorithms to transfer recognition knowledge from Caucasians to other races. To the best of our knowledge, RFW is the first database for measuring racial bias in FR algorithms. After demonstrating the domain gap among different races and the racial bias in FR algorithms, we further propose a deep information maximization adaptation network (IMAN) to bridge the domain gap, and comprehensive experiments show that our algorithm narrows the racial bias.
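The abstract names an "information maximization adaptation network" but gives no equations. The generic information-maximization objective that such adaptation methods build on — make each unlabeled target prediction confident (low conditional entropy) while keeping predictions balanced across classes (high marginal entropy) — can be sketched as follows. This is an illustrative NumPy sketch of that generic objective, not IMAN's actual loss:

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def information_maximization_loss(logits, eps=1e-12):
    """Mutual-information surrogate on unlabeled target logits (N, C):
    conditional entropy (want low: confident per-sample predictions)
    minus marginal entropy (want high: predictions spread over classes).
    Lower loss is better."""
    p = softmax(logits)
    cond_entropy = -np.mean(np.sum(p * np.log(p + eps), axis=1))
    marginal = p.mean(axis=0)                       # average prediction over the batch
    marg_entropy = -np.sum(marginal * np.log(marginal + eps))
    return cond_entropy - marg_entropy
```

Minimizing this on unlabeled target-race images pushes the classifier toward confident, class-balanced predictions, which is the intuition behind information-maximization-style adaptation.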

Citations (4)


... In the 4th row, the reconstructed faces are overlapped with the target images to show misalignment around occlusions and when the faces are under large poses. Though such misalignment is subtle, it can induce bias in pose, expression, and identity [28][29][30] for face analysis and avatar building. For instance, expressions might be misrepresented [31][32][33] due to millimetre-level error at the mouth corner or in head pose. ...

Reference:

Refined dense face alignment through image matching
APA: Adaptive Pose Alignment for Robust Face Recognition
  • Citing Conference Paper
  • June 2019

... Extensive research has been conducted on evaluating and mitigating social biases in both image-only models [8,27,32,40,42,53,58,59,63] and text-only models [2,7,21,31,54]. More recently, efforts have expanded to multimodal models and datasets, addressing biases in various language-vision tasks. ...

Fair Loss: Margin-Aware Reinforcement Learning for Deep Face Recognition
  • Citing Conference Paper
  • October 2019

... In the literature, we identified three demographic-based datasets: Balanced Faces in the Wild (BFW) [50], DemogPairs [51], and Racial Faces in-the-Wild (RFW) [52]. Among these, BFW is the best-balanced dataset, featuring eight demographic groups based on four races (White, Black, Asian, and Indian) and two genders (Male and Female). ...

Racial Faces in the Wild: Reducing Racial Bias by Information Maximization Adaptation Network
  • Citing Conference Paper
  • October 2019

... The training of ArcFace [3] and CosFace [81] on MS1MV0 is hindered by gradient conflict arising from massive fine-grained intra-class noise and inter-class noise, resulting in limited performance. Approaches like NT [82], NR [83], and Co-mining [84] assign different weights or margins to clean and noisy samples, effectively improving performance by fully leveraging clean data. Sub-center [85] introduces multiple class centroids in the classifier to address intra-class conflict, while SKH [86] goes a step further by assigning multiple switchable classification layers to handle inter-class conflict. ...

Unequal-Training for Deep Face Recognition With Long-Tailed Noisy Data
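The Sub-center approach [85] mentioned in the last citation context keeps several weight vectors per class so that noisy or hard samples can attach to a secondary sub-center instead of corrupting the dominant one. A minimal NumPy sketch of its scoring step, with illustrative shapes and names rather than the original implementation:

```python
import numpy as np

def subcenter_logits(feats, weights):
    """Sub-center classification logits.
    feats:   (N, D) L2-normalized embeddings.
    weights: (C, K, D) L2-normalized sub-center weights, K per class.
    A sample's logit for a class is its best (max) cosine similarity
    among that class's K sub-centers, so noisy samples can cluster
    around a non-dominant sub-center."""
    sims = np.einsum('nd,ckd->nck', feats, weights)  # (N, C, K) cosine similarities
    return sims.max(axis=2)                          # (N, C) max-pooled over sub-centers
```

Training would then apply a margin-based softmax (ArcFace-style, in the cited setting) on these max-pooled logits.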