FairFed: Cross-Device Fair Federated Learning
Muhammad Habib ur Rehman, Ahmed Mukhtar Dirir, Khaled Salah, Davor Svetinovic
Center for Cyber-Physical Systems, Electrical Engineering & Computer Science Department
Khalifa University of Science and Technology
Abu Dhabi 127788, UAE
Email: {muhammad.rehman, 100057669, khaled.salah, davor.svetinovic}@ku.ac.ae
Abstract—Federated learning (FL) is a rapidly developing machine learning technique used to perform collaborative model training over decentralized datasets. FL enables privacy-preserving model development whereby the datasets remain scattered over a large set of data producers (i.e., devices and/or systems). These data producers train the learning models, encapsulate the model updates with differential privacy techniques, and share them with centralized systems for global aggregation. However, the resulting centralized models are prone to adversarial attacks (such as data-poisoning and model-poisoning attacks) because of the large number of data producers. Hence, FL methods need to ensure fairness and high-quality model availability across all the participants in the underlying AI systems. In this paper, we propose a novel FL framework, called FairFed, to meet fairness and high-quality data requirements. FairFed provides a fairness mechanism to detect adversaries across the devices and datasets in the FL network and to reject their model updates. We use a Python-simulated FL framework to enable large-scale training over the MNIST dataset. We simulate a cross-device model training setting to detect adversaries in the training network. We used TensorFlow Federated and Python to implement the fairness protocol, the deep neural network, and the outlier detection algorithm. We thoroughly test the proposed FairFed framework with random and uniform data distributions across the training network and compare our initial results with a baseline fairness scheme. Our proposed work shows promising results in terms of model accuracy and loss.
Keywords: Data Quality, Deep Learning, Fairness, Federated
Learning, Model Development, Outlier Detection.
I. INTRODUCTION
Federated learning (FL) represents a new class of dis-
tributed machine learning techniques whereby the training
process is actuated without centralizing the data on cloud
data centers [1]. A typical FL process is executed between centralized internet-enabled servers and the distributed devices and systems connected to them via the internet. FL lowers the data communication cost by enabling a model-first approach whereby the centralized servers maintain a global model and push the model parameters to the connected devices and systems instead of pulling their large datasets. FL is also a privacy-preserving distributed machine learning approach wherein the devices bootstrap using the global model and then perform local model training over their local datasets. The devices then apply privacy-preservation techniques (such as differential privacy, homomorphic encryption, or secure multiparty computation) and upload their local model updates to the centralized servers. Furthermore, FL enables secure model aggregation wherein the centralized servers aggregate global model updates by performing encrypted computations over the reported local model updates without looking into the identity of the devices or systems [2].
Despite efficient data communication, privacy preservation, and secure model training, FL systems still need to address a few pertinent requirements [3]. Primarily, FL systems need to support an asynchronous communication model because the participating devices may leave the training process at any time due to limited network bandwidth, low battery power, or mobility. "Information freshness" is another requirement because the data on a device may continuously change (e.g., an autonomous car driving on a new road for the first time), and FL systems are required to maintain the model state by training over fresh data. FL systems also need to incentivize the devices and systems to actively participate in the training process. Moreover, the inconsistency in the number of devices and in information freshness leads to non-IID (non-independent and identically distributed) data, which results in unbalanced, user-specific, and self-correlated datasets.
Considering the decentralized nature of the datasets, the distribution of devices across FL systems, and the requirement to train high-quality, population-wide representative models, the issue of fairness requires prime attention [4]. The tendency to exhibit unintended, surprising, and adversarial behaviors leads to unfairness in FL models. Therefore, FL systems need to ensure fairness at multiple levels [5]. For example, FL systems should meet the individual fairness criterion, whereby similar devices with similar data and the same global model configurations should receive the same results. Similarly, FL models should comply with the criterion of demographic fairness, whereby the subsets of devices and systems should equally represent all the subsets of the overall population under observation. Likewise, FL systems should meet the criterion of counterfactual fairness, whereby all devices and systems should be treated equally with the same global model configurations and the same expected output, and all rewards should be equally distributed among all the honest participants involved in the FL model development process.
FL models are normally aggregated via trusted central-
ized servers whereby the burden of trusted computing is
completely shifted to the centralized entities. However, there
may exist adversaries at multiple levels. For example, a
malicious client with root access to his/her device can inspect all the communication with the centralized servers to infer patterns or user behaviors. Such a client can tamper with the training process by generating model-poisoning attacks via adversarial examples [6]. Similarly, a malicious server with root access can inspect the models and generate adversarial attacks or add biases during training. Finally, malicious model engineers can change the training configurations, and a compromised client device can easily become a source of adversarial attack by delegating white-box attacks (by collecting all the model configurations, inspecting different variations, and generating model poisoning attacks) or black-box attacks (by observing the model behavior with given data-label pairs and then generating adversarial examples to attack the models in subsequent training iterations) on the learned models.

Fig. 1: Decentralized Federated Learning Environments
In this study, we aim to develop a new model development framework in decentralized settings (as depicted in Fig. 1) which detects the adversaries during the model development process and then penalizes the malicious devices by rejecting their model updates. The main contributions of this paper are:
• We propose a framework to develop a cross-device fair FL system. The framework describes three different types of components to run and monitor the FL processes and to ensure fairness among participants in the FL systems.
• We present a novel design of a fairness scheme to perform cross-device, population-wide training, validation, and testing of FL models. The proposed fairness scheme identifies the adversaries by calculating the mean (µ_i) and standard deviation (σ_i) of the reported accuracies A_i and then finding the outliers using statistical control limits (i.e., A_i ∈ {µ_i ± σ_i}).
• We define an end-to-end protocol to ensure robust and fair FL model development within the proposed framework.
• We implement and compare our protocol with the baseline methods in adversarial and non-adversarial settings.
The remainder of this paper is structured as follows: Section II presents the related work, and the proposed framework is presented in Section III. The details of the experimental setup are presented in Section IV. Section V describes the outcomes of the conducted experiments and the comparison with existing similar work. Finally, the paper is concluded in Section VI.
II. RELATED WORK
The issue of fairness in FL systems has been addressed
in multiple research works considering the device hetero-
geneity, statistical bias, and fairness requirements. Researchers
proposed the Agnostic Federated Learning (AFL) scheme to
address the issue of bias introduced by individual clients [7].
AFL enables an overall population-wide optimization per-
formed over a mixture of client distributions. It introduces
a good-intent fairness mechanism that reduces the bias during training, and it defines learning bounds based on the concept of weighted Rademacher complexity. Finally, it executes a stochastic-AFL strategy to perform population-wide AFL and minimize the worst-case loss, which affects its robustness against adversarial attacks. FedMGDA+ resolves this issue by performing multi-objective optimization to converge to a Pareto-stationary solution [8]. FedMGDA+ proved to be robust against bias and scaling attacks (i.e., multiplying the bias with a scaling factor). Researchers also addressed the issue of individual bias and introduced population-wide optimization of FL models considering the fairness and uniformity requirements in FL systems [9]. They set the q-Fair Federated Learning (q-FFL) objective to learn efficiently over wireless networks, and they proposed q-FedAvg, a communication-efficient method for model aggregation over federated networks. Although q-FFL and q-FedAvg outperformed the baseline methods, researchers found that heterogeneity (both in terms of devices and their behaviors) reduces the fairness performance of q-FedAvg [10]. Also, the identification of individual adversaries remains an open issue.
The Fair and Privacy-Preserving Deep Learning (FPPDL) framework proposes different fairness criteria by providing different models to FL participants considering their contributions during the learning process [11], [12]. FPPDL guarantees fairness using a local credibility mutual evaluation mechanism whereby each participant shares dummy data created using differentially private generative adversarial networks (DPGAN) and evaluates the similarity among their local distributions. FPPDL distributes the rewards among dissimilar participants. Furthermore, it enables three-layer onion-style gradient encryption to share the protected gradients among FL participants via blockchain technologies. However, FPPDL still needs to address the issue of adversarial attacks on FL models.
Gradient sparsification (GS) is used to select the represen-
tative gradients during the FL training process. Researchers
proposed a fairness-aware GS method to ensure that all FL participants share a similar number of gradients, hence reducing communication and computation overhead across the FL network [13]. Alternatively, researchers proposed the hier-
archically fair federated learning (HFFL) framework whereby
the models are distributed among training agents considering
their commitments about the amount of data they furnish during training [14]. The training agents at higher levels of the hierarchy are rewarded more than the training agents at the lower levels. HFFL ensures fairness among training agents based on the notion that more contributions reduce the generalization errors in FL models. However, it is hard to identify the adversaries in the training network who have the potential to increase the generalization errors by providing malicious or noisy data distributions. Table I compares the existing fairness schemes; however, fairness-related research still needs to address the issues of finding adversarial agents and handling individual bias-related concerns.

TABLE I: Comparison of current fairness schemes in FL systems

Comparison Parameter        | [7] | [10] | [12] | [13] | [14]
Individual Fairness         |  ✓  |  ✓   |  ✓   |  ✓   |  ✓
Demographic Fairness        |  ✓  |  ✓   |  ✓   |  ✓   |  ✓
Counterfactual Fairness     |  ✓  |  ✓   |  ✓   |  ✓   |  ✓
Statistical Bias            |  ✓  |  ✓   |  ✓   |  ✓   |  ✓
Device Heterogeneity        |  ✓  |  ✓   |  ✗   |  ✓   |  ✓
Global Optimization         |  ✓  |  ✓   |  ✗   |  ✓   |  ✗
Robustness                  |  ✗  |  ✓   |  ✓   |  ✗   |  ✗
Bias Attack                 |  ✗  |  ✓   |  ✗   |  ✗   |  ✗
Scaling Attack              |  ✗  |  ✓   |  ✗   |  ✗   |  ✗
Uniform Model Distribution  |  ✓  |  ✓   |  ✗   |  ✗   |  ✗
Fair Model Distribution     |  ✗  |  ✗   |  ✗   |  ✓   |  ✓
Gradient Sparsification     |  ✗  |  ✗   |  ✗   |  ✓   |  ✓

(✓ indicates that the scheme addresses the parameter; ✗ indicates that it does not.)
III. PROPOSED FRAMEWORK
This section elaborates on our proposed framework, as depicted in Fig. 2. The framework facilitates the interactions between different FL application components: 1) to enable the FL environment, 2) to manage FL training, monitor the FL processes, and interpret the model statistics, and 3) to ensure fairness by actively managing the FL participants and calculating the fairness among them.
A. Developing the FL models
FL models are required to be continuously trained and
evaluated over siloed datasets considering the information
freshness requirements of the FL systems. Therefore, the pro-
posed framework enables compute-intensive and data-intensive FL components that help in handling a large number of participating devices and systems in the FL environments. These FL components also facilitate iterative model training on each batch of data and interact with external devices and systems to communicate changes in the local model parameters.
parameters. The conventional FL process is executed between
a set of centralized parameter servers (Si) and a set of workers
Wi(i.e., devices or systems) distributed across the FL systems
whereby each participant (Pi) can act as a parameter server
to aggregate the model updates from other P i and as a
worker to train the model on their local datasets and report
the model updates to requesting server. Mathematically, it is
denoted as (Pi⊂Si)|| (Pi⊂Wi)||(Pi⊂Si&& Pi⊂Wi) where
Picould be an arbitrary subset of either Sior Wi. For each
training iteration among Siand Wi, Bonawitz et al. [15]
Fig. 2: Framework for Fair FL
elaborated on the execution of the FL process in three stages
namely, 1) selection, 2) configuration, and 3) reporting. At the
selection stage, the W_i report or show interest in the model training process, and S_i, in turn, selects a subset of the reporting W_i; however, the selection criteria may vary based upon the application requirements and the contextual information about W_i. The S_i communicates the reason for not selecting an arbitrary W_i and asks it to return in subsequent training rounds if it can meet the specified selection criteria. At the configuration stage, the S_i checks and reads the global model checkpoints (e.g., the initial model weights W0) from the persistent cloud storage and then communicates them to the selected W_i. The S_i also sends the model configurations, such as the learning rate, the number of epochs, and the momentum, to the selected W_i. At the reporting stage, the W_i run the model configurations on their local datasets DS_i and produce the new model updates (e.g., new model weights W1). At the reporting stage, the privacy information (e.g., gradient encryption of W1) is added and the encrypted model updates are communicated back to S_i, which in turn performs the secure aggregation and writes the global model updates to the persistent centralized storage systems.
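The selection-configuration-reporting cycle described above can be sketched as a single round in Python; the Worker class, the selection criterion, and the report format below are illustrative assumptions (real workers would train over their local datasets DS_i and encrypt W1 before reporting).

import random

class Worker:
    # Illustrative device-side participant W_i; train() is a stub.
    def __init__(self, worker_id, battery_ok=True):
        self.worker_id = worker_id
        self.battery_ok = battery_ok

    def train(self, global_weights, config):
        # Placeholder for on-device training over the local dataset DS_i;
        # a real worker would return updated weights W1 and its accuracy.
        return {"weights": list(global_weights),
                "accuracy": random.uniform(0.8, 0.99)}

def run_round(global_weights, workers, cohort_size, config):
    # Selection: choose a cohort of interested workers that meet the criteria.
    eligible = [w for w in workers if w.battery_ok]
    cohort = random.sample(eligible, min(cohort_size, len(eligible)))
    # Configuration: push the global checkpoint W0 and the hyper-parameters.
    reports = [w.train(global_weights, config) for w in cohort]
    # Reporting: collect the (normally encrypted) local updates for aggregation.
    return reports

workers = [Worker(i) for i in range(10)]
reports = run_round([0.0], workers, cohort_size=4,
                    config={"epochs": 10, "learning_rate": 1e-3})
print(len(reports), "local updates reported")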
Our proposed framework extends the conventional centralized FL framework proposed by Bonawitz et al. by enabling fully decentralized cross-device FL model training in peer-to-peer networks. Hence, each P_i can arbitrarily act as S_i, W_i, or both. Moreover, each P_i can independently actuate, participate in, or monitor the model training process.
B. Monitoring the FL Processes
FairFed enables training and monitoring components to govern the FL model development process.
1) Training Manager: The accumulation of learning parameters (such as gradient vectors in DNNs) at the S_i attracts adversaries to poison the learning models during training. Hence, an adverse W_i can easily learn the behavior of the fair W_i and generate model poisoning attacks accordingly. Also, repetitive selection of the same group of W_i can result in a low model convergence rate due to the gradual absence of fresh data and infrequent behavioral changes of the W_i. Hence, the training manager randomly selects a cohort (i.e., a subset) of W_i. Considering the decentralized settings, each device maintains its own distribution ρ(H|ψ) over the space of hyper-parameters H. Then, at the beginning of each training round t_i, the training manager delegates FL tasks to each W_i by sending the global model hyper-parameters h_t sampled from ρ(H|ψ) and the initial global model weights W0. The W_i perform on-device model training, generate new model weights W1, append the privacy information Enc(W1), and send the encrypted model weights back to the parameter server. The training manager also performs cross-validation of the model updates by performing cross-group weight assignment, i.e., W1_{W1→W2} and W1_{W2→W1}, and sends the same model hyper-parameters h_t initially sampled from ρ(H|ψ). The W_i once again train the models and produce a new set of model weights W2. The training manager then retrains the global model by aggregating the model updates considering W2, and it generates model performance statistics (e.g., training loss, validation loss, training accuracy, validation accuracy, etc.) at the end of each training round.
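A simplified sketch of the cohort handling described above is given below; the uniform hyper-parameter distribution standing in for ρ(H|ψ), the plain element-wise aggregation, and the train/validate callbacks are assumptions made for illustration rather than the exact FairFed implementation.

import random

def sample_hyperparameters(psi):
    # Draw h_t from a device-local distribution rho(H | psi); here psi
    # simply bounds a uniform draw (an illustrative assumption).
    return {"learning_rate": random.uniform(*psi["lr_range"]),
            "epochs": psi["epochs"]}

def aggregate(updates):
    # Plain element-wise average standing in for secure aggregation.
    return [sum(vals) / len(updates) for vals in zip(*updates)]

def cross_group_round(group_a, group_b, w0, psi, train_fn, validate_fn):
    h_t = sample_hyperparameters(psi)
    # Sub-round 1: group A trains on W0 (yielding W1); group B validates W1.
    w1 = aggregate([train_fn(dev, w0, h_t) for dev in group_a])
    val_b = [validate_fn(dev, w1, h_t) for dev in group_b]
    # Sub-round 2: roles are swapped with the same h_t, yielding W2.
    w2 = aggregate([train_fn(dev, w1, h_t) for dev in group_b])
    val_a = [validate_fn(dev, w2, h_t) for dev in group_a]
    return w2, val_a + val_b

# Hypothetical stubs standing in for on-device training and validation.
train = lambda dev, w, h: [x + h["learning_rate"] for x in w]
valid = lambda dev, w, h: random.uniform(0.85, 0.95)
w2, accuracies = cross_group_round(["d1", "d2"], ["d3", "d4"], w0=[0.0],
                                   psi={"lr_range": (1e-4, 1e-2), "epochs": 10},
                                   train_fn=train, validate_fn=valid)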
2) Model Statistics Interpreter: The model statistics interpreter maintains a distribution of reported model statistics to examine the cohort-wise model performance and, based on the variations in each training sub-round, it accepts or rejects the W2 for final aggregation. The model statistics interpreter maintains two different accuracy distributions for the W_i based upon the training accuracy (A1) and validation accuracy (A2) of each worker. It then interprets the model statistics using the mean and standard deviation to detect the outliers in the training population. For each accuracy distribution, the mean and standard deviation are calculated as given in Eq. 1 and Eq. 2.

µ = (Σ x_i) / N                     (1)

σ = √( Σ (x_i − µ)² / N )           (2)

Considering the statistical coverage of normally distributed values, the model statistics interpreter finds the outliers by checking whether A1 and A2 fall within the control limits A_i ∈ {µ_i ± σ_i}. Here, the control limits represent the coverage of 95% of the population, which is significant enough to generalize the FL training and to ensure individual and demographic fairness across the populations.
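A minimal sketch of this outlier check over the reported accuracies is given below; the function name and return format are our own, and k = 1 reproduces the µ_i ± σ_i control limits stated above (k can be widened for larger populations).

import statistics

def detect_outliers(accuracies, k=1.0):
    # Flag workers whose reported accuracy A_i falls outside mu +/- k*sigma.
    # accuracies: dict mapping worker id -> reported training or validation
    # accuracy. Returns (fair_ids, outlier_ids).
    values = list(accuracies.values())
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values)  # population standard deviation, as in Eq. 2
    lower, upper = mu - k * sigma, mu + k * sigma
    fair = [w for w, a in accuracies.items() if lower <= a <= upper]
    outliers = [w for w, a in accuracies.items() if not (lower <= a <= upper)]
    return fair, outliers

# Example: worker 3 reports an anomalously low validation accuracy.
reported = {0: 0.91, 1: 0.93, 2: 0.92, 3: 0.45, 4: 0.90}
print(detect_outliers(reported))  # worker 3 is flagged as an outlier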
C. Fairness Management in FL Systems
1) Worker Manager: The FL-Core enables a separate runtime for each FL application, whereby it provides two types of workers (in the form of virtual workers), namely 1) the model aggregator and 2) the peers. Since the framework supports fully decentralized cross-device FL training, the traditional FL model training does not need to be controlled from a centralized cloud server. The training manager at each P_i manages the training process, and the model aggregator aggregates the resulting W2 and stores the updated model weights on the local storage as well as on the global storage in the cloud data centers. The peers communicate with the other P_i on the P2P network and collect the model updates from the W_i for aggregation.

Algorithm 1: Fair FL Protocol
Input: reqAccuracy, minNumOfDevices
Output: averageAccuracy
model = model.download()
initModel = model
devices[] = findDevices(minNumOfDevices)
parameters[] = TrainingParameters[]
tacc[] = [], vacc[] = []
for i = 0 to devices[].length/2 do
    model.send(devices[i])
    tacc[i] = model.train(devices[i], parameters[])
for j = devices[].length/2 to devices[].length do
    model.send(devices[j])
    vacc[j] = model.valid(devices[j], parameters[])
for k = 0 to devices[].length/2 do
    model.send(devices[k])
    vacc[k] = model.valid(devices[k], parameters[])
for l = devices[].length/2 to devices[].length do
    model.send(devices[l])
    tacc[l] = model.train(devices[l], parameters[])
fairdevices[] = calculateFairness(tacc[], vacc[])
averageAccuracy = model.aggregate(fairdevices[])
if averageAccuracy ≤ reqAccuracy then
    Print("averageAccuracy is less than reqAccuracy")
    Discard(model)
    model = initModel
    model.store()
else
    initModel = model
Return averageAccuracy
2) Fairness Calculator: Considering the distributed and decentralized FL settings, the fairness calculator component calculates and maintains fairness for model training at each P_i. It uses the mean and standard deviation based outlier detection method to find the adversaries and penalizes them by not considering their reported model weights for final aggregation.
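Combining the control-limit check with aggregation, the penalization step can be sketched as follows; the flat-list weight format and the unweighted element-wise average are illustrative assumptions, and secure aggregation and gradient encryption are omitted.

import statistics

def fair_aggregate(reported_weights, reported_accuracies, k=1.0):
    # Aggregate only the updates of workers whose accuracy A_i lies within
    # mu +/- k*sigma; the rejected workers' weights are simply ignored.
    # reported_weights: dict worker id -> flat list of model weight values.
    # reported_accuracies: dict worker id -> reported accuracy A_i.
    mu = statistics.mean(reported_accuracies.values())
    sigma = statistics.pstdev(reported_accuracies.values())
    fair_ids = [w for w, a in reported_accuracies.items()
                if mu - k * sigma <= a <= mu + k * sigma]
    kept = [reported_weights[w] for w in fair_ids]
    aggregated = [sum(vals) / len(kept) for vals in zip(*kept)]
    rejected = [w for w in reported_accuracies if w not in fair_ids]
    return aggregated, rejected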
D. Fair FL Protocol
The key steps of the protocol are defined in Algorithm 1; the protocol proceeds as follows.
• Each device publishes its QoS on the network.
• Subscribing devices on the network show their interest.
• The central device selects the devices based on the assumption that all subscribing devices behave honestly.
• The selected worker devices maintain the same type of learning model architecture.
• The central device distributes the selected devices into two sets.
• In the first round, the central device sends the model configurations to all selected devices along with the training parameters.
• The devices in the first group perform training and report back with updated model parameters and accuracy.
• The devices in the second group perform validation and report back with updated model parameters and accuracy.
• The central device swaps the groups and repeats the training and validation across the groups to ensure fairness across the FL environment.
• The central device interprets the reported statistics and decides whether the central model needs to be updated.
• The central device stores a new version of the updated model in its local storage and its cloud replicas.
IV. EXPERIMENTAL SETUP
We implemented the proposed FairFed framework using the Python programming language. We used a deep neural network (DNN) in the FL settings, and we used TensorFlow Federated to simulate the FL environments. All experiments were performed on MNIST, the baseline handwritten digit recognition dataset. The 3-layer DNN was configured as follows. The input layer takes 784 neurons representing the 28 × 28 pixels of each input image. The middle dense layer has 32 neurons, and the output dense layer is mapped onto 10 neurons, each representing a handwritten digit from 0 to 9. We used ReLU and Softmax as the activation functions at the middle and output layers, respectively. Each experiment was run for 10 epochs, and the Adam optimizer was used for weight adjustments.
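The per-worker model described above can be reproduced in a few lines of Keras; the layer arrangement follows the stated 784-32-10 configuration, while the loss function and other compile arguments are our assumptions since they are not specified in the text.

import tensorflow as tf

def build_worker_model():
    # 3-layer DNN used on each worker: 784 inputs, a 32-unit ReLU hidden
    # layer, and a 10-way Softmax output, optimized with Adam.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",  # assumed loss
                  metrics=["accuracy"])
    return model

# Local training on a worker's MNIST shard (x_local: N x 784, y_local: N labels):
# model = build_worker_model()
# model.fit(x_local, y_local, epochs=10, verbose=0)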
We tested FairFed with different worker populations of up to 20 workers. To actuate the fairness strategy, each experiment was performed on two worker subgroups, where the workers train and validate the models to simulate the cross-device FL environments. For each iteration, we added random noise to one worker by adding perturbations to its input images, and each experiment was performed in two rounds to calculate fairness across the worker populations.
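The random perturbations used to simulate an adversarial worker can be sketched as follows; the Gaussian noise model and its scale are our own assumptions, since the text only states that random noise was added to one worker's input images.

import numpy as np

def poison_worker_data(images, noise_scale=0.5, seed=None):
    # Simulate an adversarial worker by perturbing its local MNIST images.
    # images: array of shape (N, 784) with pixel values scaled to [0, 1].
    # noise_scale: standard deviation of the additive Gaussian noise
    # (an illustrative choice, not the paper's exact setting).
    rng = np.random.default_rng(seed)
    noisy = images + rng.normal(0.0, noise_scale, size=images.shape)
    return np.clip(noisy, 0.0, 1.0)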
V. RESULTS AND DISCUSSION
We tested FairFed in terms of model accuracy, model loss, and the ability to detect adversaries. Fig. 3 presents the comparison of FairFed (i.e., with outliers removed) with baseline FL systems that do not remove the outliers.

Fig. 3: Model accuracy comparisons of the DNN with (a) uniform data distribution across different worker population sizes and (b) dynamic (random) data distribution across the FL network.

Fig. 3a compares accuracy under uniform data distribution across the training network. For example, in our experiments, we used 60,000 images in each experiment, starting from a worker population of 4 and ending with a worker population of 20. It was observed that small worker populations produced comparatively higher accuracy than large worker populations. This happened because each worker in a small population has more local data and was therefore able to attain higher accuracy. However, the presence of adversaries in small populations may become a serious threat to model performance. By resolving the data starvation problem in the large population, a reliable accuracy is achievable. We confirmed this observation by randomly distributing the data across all workers, as shown in Fig. 3b. The accuracy comparison of FairFed with the baseline shows that our proposed scheme always yields better accuracy irrespective of population size because of its ability to identify and remove the outliers before training the model.
The model loss comparison of FairFed with the baseline is presented in Fig. 4, whereby Fig. 4a depicts the loss comparison with uniform data distribution (i.e., data starvation mode in the case of large worker populations) across the training network, whereas Fig. 4b shows the loss comparison with random data distribution (with data starvation) across the training network. The results show that FairFed achieves lower loss because of the early detection and removal of outliers in the FL training network.

Fig. 4: Model loss comparisons showing the model loss during training of the DNN with (a) uniform data distribution across different worker population sizes and (b) dynamic (random) data distribution across the FL network.
Finally, using FairFed, we were able to accurately identify the malicious workers, as can be seen in Fig. 5, i.e., Worker-3 in Fig. 5a and Worker-12 in Fig. 5b.

Fig. 5: Outlier detection in different worker populations, i.e., (a) with a population of 10 workers and (b) with a population of 20 workers.

We used A_i ∈ {µ_i ± σ_i} as the initial statistical control limits for outlier detection because of the small population sizes, and this control limit provides coverage of 95% of the population. Therefore, we were only
able to identify the significantly malicious adversaries. Since commercial-grade FL networks recruit FL participants from large-scale populations (from a few million to thousands of millions of devices), FairFed could also detect the hidden adversaries because training on a large-scale network will minimize the gap between individual and average model accuracy, and the reported accuracies will gradually fit well within the control limits. Similarly, widening the control limits with an increase in population size and average accuracy level will also help in fair model convergence across the FL network. Therefore, a deeper correlation analysis of the average accuracy level and the control limits will help in finding the optimal control limits for different sizes of worker populations.
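As a starting point for the suggested correlation analysis, one could sweep the control-limit width and record the fraction of workers that fall inside the limits for each population; the helper below is only an illustrative sketch of such a sweep.

import statistics

def coverage_by_limit(accuracies, k_values=(1.0, 1.5, 2.0, 3.0)):
    # For each control-limit width k, report the fraction of workers whose
    # accuracy falls within mu +/- k*sigma (illustrative only).
    mu = statistics.mean(accuracies)
    sigma = statistics.pstdev(accuracies)
    return {k: sum(mu - k * sigma <= a <= mu + k * sigma for a in accuracies)
               / len(accuracies)
            for k in k_values}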
VI. CONCLUSION
The presence of adversaries and malicious participants
can jeopardize model accuracy in federated learning (FL)
environments. We proposed the FairFed framework, which
enables cross-device and cross-population fairness over FL
training networks. FairFed ensures fairness by distributing the training network participants into multiple subgroups and then performing the training and validation across the devices. It also uses outlier detection to ensure fairness across the FL network. As future work, we plan to improve the performance by considering large-sized populations and various datasets in real-life settings. We also plan to integrate fairness and incentive mechanisms to attain better statistical guarantees from FairFed.
REFERENCES
[1] T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, “Federated learning:
Challenges, methods, and future directions,” IEEE Signal Processing
Magazine, vol. 37, no. 3, pp. 50–60, 2020.
[2] Q. Yang, Y. Liu, T. Chen, and Y. Tong, “Federated machine learning:
Concept and applications,” ACM Transactions on Intelligent Systems and
Technology (TIST), vol. 10, no. 2, pp. 1–19, 2019.
[3] M. H. ur Rehman, K. Salah, E. Damiani, and D. Svetinovic, “Towards
blockchain-based reputation-aware federated learning,” in IEEE INFO-
COM 2020-IEEE Conference on Computer Communications Workshops
(INFOCOM WKSHPS). IEEE, 2020, pp. 183–188.
[4] M. Aledhari, R. Razzak, R. M. Parizi, and F. Saeed, “Federated learning:
A survey on enabling technologies, protocols, and applications,” IEEE
Access, vol. 8, pp. 140 699–140 725, 2020.
[5] C. Dwork, C. Ilvento, and M. Jagadeesan, “Individual fairness in
pipelines,” arXiv preprint arXiv:2004.05167, 2020.
[6] H. Wang, K. Sreenivasan, S. Rajput, H. Vishwakarma, S. Agarwal, J.-y.
Sohn, K. Lee, and D. Papailiopoulos, “Attack of the tails: Yes, you really
can backdoor federated learning,” arXiv preprint arXiv:2007.05084,
2020.
[7] M. Mohri, G. Sivek, and A. T. Suresh, “Agnostic federated learning,”
arXiv preprint arXiv:1902.00146, 2019.
[8] Z. Hu, K. Shaloudegi, G. Zhang, and Y. Yu, “Fedmgda+: Fed-
erated learning meets multi-objective optimization,” arXiv preprint
arXiv:2006.11489, 2020.
[9] T. Li, M. Sanjabi, A. Beirami, and V. Smith, “Fair resource allocation
in federated learning,” arXiv preprint arXiv:1905.10497, 2019.
[10] C. Yang, Q. Wang, M. Xu, S. Wang, K. Bian, and X. Liu,
“Heterogeneity-aware federated learning,” arXiv preprint
arXiv:2006.06983, 2020.
[11] L. Lyu, X. Xu, and Q. Wang, “Collaborative fairness in federated
learning,” arXiv preprint arXiv:2008.12161, 2020.
[12] L. Lyu, J. Yu, K. Nandakumar, Y. Li, X. Ma, J. Jin, H. Yu, and K. S.
Ng, “Towards fair and privacy-preserving federated deep models,” IEEE
Transactions on Parallel and Distributed Systems, vol. 31, no. 11, pp.
2524–2541, 2020.
[13] P. Han, S. Wang, and K. K. Leung, “Adaptive gradient sparsification for
efficient federated learning: An online learning approach,” arXiv preprint
arXiv:2001.04756, 2020.
[14] J. Zhang, C. Li, A. Robles-Kelly, and M. Kankanhalli, “Hierarchically
fair federated learning,” arXiv preprint arXiv:2004.10386, 2020.
[15] K. Bonawitz, H. Eichner, W. Grieskamp, D. Huba, A. Ingerman, V. Ivanov, C. Kiddon, J. Konečný, S. Mazzocchi, H. B. McMahan et al., "Towards federated learning at scale: System design," arXiv preprint arXiv:1902.01046, 2019.