October 2019 · 212 Reads · 434 Citations
October 2019 · 124 Reads · 110 Citations
June 2019 · 44 Reads · 9 Citations
June 2019 · 183 Reads · 168 Citations
December 2018 · 171 Reads
Despite the progress achieved by deep learning in face recognition (FR), more and more people find that racial bias explicitly degrades the performance of realistic FR systems. Given that existing training and testing databases consist almost entirely of Caucasian subjects, there are still no independent testing databases to evaluate racial bias, and no training databases or methods to reduce it. To facilitate research towards overcoming these fairness issues, this paper contributes a new dataset, the Racial Faces in-the-Wild (RFW) database, with two important uses: 1) racial bias testing: four testing subsets, namely Caucasian, Asian, Indian and African, are constructed, each containing about 3000 individuals with 6000 image pairs for face verification; 2) racial bias reduction: one labeled training subset with Caucasians and three unlabeled training subsets with Asians, Indians and Africans are offered to encourage FR algorithms to transfer recognition knowledge from Caucasians to other races. To the best of our knowledge, RFW is the first database for measuring racial bias in FR algorithms. After demonstrating the existence of a domain gap among different races and of racial bias in FR algorithms, we further propose a deep information maximization adaptation network (IMAN) to bridge the domain gap, and comprehensive experiments show that the racial bias can be narrowed by our algorithm.
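The verification protocol described above (per-race test subsets of roughly 6000 labelled image pairs each) can be illustrated with a short, self-contained sketch. This is not the authors' released evaluation code: the random embeddings, the fixed threshold, and the accuracy-gap bias measure are stand-ins chosen purely for illustration.

```python
# Minimal sketch of an RFW-style evaluation: face verification accuracy
# per racial subset, with the best-vs-worst accuracy gap as a crude bias measure.
# Embeddings are random stand-ins for a real FR model's output.
import numpy as np

rng = np.random.default_rng(0)

def verification_accuracy(emb1, emb2, same_id, threshold=0.5):
    """Accuracy of a fixed cosine-similarity threshold on labelled image pairs."""
    sims = np.sum(emb1 * emb2, axis=1)              # cosine similarity of unit vectors
    return float(np.mean((sims > threshold) == same_id))

def fake_subset(n_pairs=6000, dim=512):
    """Stand-in for one test subset: n_pairs of embedding pairs plus same-identity labels."""
    e1 = rng.normal(size=(n_pairs, dim))
    e2 = rng.normal(size=(n_pairs, dim))
    e1 /= np.linalg.norm(e1, axis=1, keepdims=True)
    e2 /= np.linalg.norm(e2, axis=1, keepdims=True)
    same_id = rng.integers(0, 2, size=n_pairs).astype(bool)
    return e1, e2, same_id

accuracies = {}
for race in ["Caucasian", "Asian", "Indian", "African"]:
    e1, e2, same_id = fake_subset()
    accuracies[race] = verification_accuracy(e1, e2, same_id)

# One simple notion of racial bias: the gap between the best- and worst-performing subsets.
bias_gap = max(accuracies.values()) - min(accuracies.values())
print(accuracies, bias_gap)
```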
... In the 4th row, the reconstructed faces are overlapped with the target images to show misalignment around occlusions and when the faces are under large poses. Though such misalignment is subtle, it can induce bias in pose, expression and identity [28][29][30] for face analysis and avatar building. For instance, expressions might be misrepresented [31][32][33] due to millimetre-level error at the mouth corner or in head pose. ...
June 2019
... Extensive research has been conducted on evaluating and mitigating social biases in both image-only models [8,27,32,40,42,53,58,59,63] and text-only models [2,7,21,31,54]. More recently, efforts have expanded to multimodal models and datasets, addressing biases in various language-vision tasks. ...
October 2019
... In the literature, we identified three demographic-based datasets: Balanced Faces in the Wild (BFW) [50], DemogPairs [51], and Racial Faces in-the-Wild (RFW) [52]. Among these, BFW is the best-balanced dataset, featuring eight demographic groups based on four races (White, Black, Asian, and Indian) and two genders (Male and Female). ...
October 2019
... The training of ArcFace [3] and CosFace [81] on MS1MV0 is hindered by the gradient conflict arising from massive fine-grained intra-class noise and inter-class noise, resulting in limited performance. Approaches like NT [82], NR [83], and Co-mining [84] assign different weights or margins to clean and noisy samples, effectively improving performance by fully leveraging clean data. Sub-center [85] introduces multiple class centroids in the classifier to address intra-class conflict, while SKH [86] goes a step further by assigning multiple switchable classification layers to handle inter-class conflict. ...
June 2019
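The sub-center idea mentioned in the excerpt above (multiple centroids per class, so noisy samples need not corrupt the dominant centroid) can be sketched in a few lines. The snippet below is an illustrative PyTorch approximation rather than the Sub-center ArcFace [85] reference implementation: the class name, the choice of k, and the omission of the angular margin are simplifications for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubCenterCosineHead(nn.Module):
    """k sub-centers per class; the class logit is the best-matching sub-center."""

    def __init__(self, embedding_dim, num_classes, k=3, scale=64.0):
        super().__init__()
        # Extra centroids give noisy samples somewhere to attach without
        # dragging the dominant (clean) centroid of their class.
        self.weight = nn.Parameter(torch.randn(num_classes, k, embedding_dim))
        self.scale = scale

    def forward(self, embeddings):
        x = F.normalize(embeddings, dim=1)            # (B, D) unit embeddings
        w = F.normalize(self.weight, dim=2)           # (C, K, D) unit sub-centers
        cos = torch.einsum("bd,ckd->bck", x, w)       # similarity to every sub-center
        logits, _ = cos.max(dim=2)                    # (B, C): max over sub-centers
        return self.scale * logits

# Usage sketch: plain softmax cross-entropy on the scaled logits
# (ArcFace-style training would additionally add an angular margin on the target class).
head = SubCenterCosineHead(embedding_dim=512, num_classes=1000, k=3)
emb = torch.randn(8, 512)
labels = torch.randint(0, 1000, (8,))
loss = F.cross_entropy(head(emb), labels)
```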