An Efficient Federated Learning Scheme
with Differential Privacy in Mobile Edge Computing
Jiale Zhang, Junyu Wang, Yanchao Zhao, and Bing Chen
College of Computer Science and Technology,
Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
{jlzhang,wangjunyu,yczhao,cb_china}
Abstract. In this paper, we consider a mobile edge computing (MEC) system in which multiple users participate in the federated learning protocol by jointly training a deep neural network (DNN) with their private training datasets. The main challenges of applying federated learning to MEC are: (1) it incurs tremendous computational cost to carry out the deep neural network training phase on resource-constrained mobile edge devices; (2) existing literature demonstrates that the parameters of a DNN trained on a dataset can be exploited to partially reconstruct the training samples in the original dataset. To address these issues, we introduce an efficient privacy-preserving federated learning scheme for mobile edge computing, named FedMEC, which combines a model partition technique with a differential privacy method. The experimental results demonstrate that our proposed FedMEC scheme can achieve high model accuracy under different perturbation strengths.
Keywords: Federated learning · Mobile edge computing · Deep neural network · Differential privacy
1 Introduction
Nowadays, Internet of Things (IoT) devices, such as smartphones, cameras, and medical tools, have shown explosive growth and become nearly ubiquitous. As a distributed intelligent computation architecture, mobile edge computing [1] offers powerful real-time, on-device data processing capability and has achieved great success in numerous networking applications. Along with edge computing, on-device deep learning has turned into a universal and indispensable service [2], including recommendation systems, language translation, security surveillance, and health monitoring. However, such intelligent computation
Supported in part by the National Key Research and Development Program of China,
under Grant 2017YFB0802303, in part by the National Natural Science Foundation of
China, under Grant 61672283, and in part by the Postgraduate Research & Practice
Innovation Program of Jiangsu Province under Grant KYCX18 0308.
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2019
Published by Springer Nature Switzerland AG 2019. All Rights Reserved
X. B. Zhai et al. (Eds.): MLICOM 2019, LNICST 294, pp. 538–550, 2019.
Federated Learning with Differential Privacy in MEC 539
scenarios rely on users outsourcing their sensitive data to the cloud in order to carry out deep learning services, which raises a number of privacy concerns and resource impacts for smartphone users [3,4].
Federated learning [5,6] is a recent concept which enables training a deep learning model across thousands of participants in a collaborative manner. It allows users to train their models locally in a distributed manner and to upload their local model updates, i.e., gradient and weight parameters, instead of sharing their private data samples with the central server. Participants in federated learning act as data providers to train a local deep model, and the server maintains a global model by averaging the local model parameters (i.e., gradients) generated by randomly selected participants until the model converges [7]. One of the biggest achievements of federated learning is the corresponding model averaging algorithm [8], which can benefit from a wide range of non-IID and unbalanced data distributions among diverse participants.
It seems that federated learning is a promising approach to provide on-device
deep learning services on mobile edge computing architecture while protecting
user-side data privacy. However, we notice that applying the federated learning
approach to mobile edge computing environment would face two practical issues:
– It presents tremendous computational cost by carrying out the deep neural network training phase on resource-constrained mobile edge devices, meaning that the mobile devices cannot afford the heavy computation processing required in the federated learning approach [9–11];
– The parameters of a DNN trained on a dataset can still be exploited to partially reconstruct the training examples in that dataset, which means the conventional federated learning mechanism cannot provide a strong privacy guarantee against malicious entities, such as edge and cloud servers [12,13].
To address the above problems, we propose an efficient privacy-preserving federated learning scheme in mobile edge computing, named FedMEC, based on the model partition technique and a differential privacy method. The main contributions can be summarized as follows:
– We design a flexible framework enabling federated learning in the mobile edge computing environment based on the model partition technique, reducing the computation overhead on the mobile devices. Specifically, the FedMEC framework partitions a deep neural network into two parts, the client-side DNN and the edge-side DNN, so that the most complex computations can be outsourced to the edge server.
– We also propose a differentially private data perturbation mechanism on the client side to prevent privacy leakage from the local model parameters. In particular, the edge clients and the edge server run different portions of a deep neural network, and the updates from an edge device to the edge server are perturbed by Laplace noise to achieve differential privacy.
The rest of this paper is organized as follows. In Sect. 2, we briefly introduce the basic knowledge of federated learning and differential privacy. The system
framework is presented in Sect. 3, and the construction of the proposed FedMEC scheme is detailed in Sect. 4. Extensive experimental evaluation is conducted in Sect. 5. Finally, Sect. 6 gives the conclusion and future work.
2 Preliminaries
2.1 Federated Learning
Federated learning was first proposed by Google [8] and aims to build a distributed machine learning model based on massively distributed datasets across multiple devices. Compared to the conventional centralized training method, participants in a federated learning system can locally train a global model using their private data and upload the model updates in the form of gradients. Such a localized model training method presents significant advantages in privacy preservation because the clients do not need to share their private data with any third party.
During federated learning, all the clients agree on a common learning
objective and model structure. Assume that $m_t$ is the number of sampled participants, each owning a different private dataset. In a certain communication round $t$, each client downloads the global model parameters from the server, then the model is trained locally to generate the local model update $\Delta w^{(i)}_{t+1}$ using its own private dataset. Finally, each participant sends the resulting update back to the server, where the updates are averaged by the central server to obtain a new joint global model:

\[ w^{(global)}_{t+1} = w^{(global)}_{t} + \frac{1}{m_t} \sum_{i=1}^{m_t} \Delta w^{(i)}_{t+1} \tag{1} \]

where $w^{(global)}_{t}$ indicates the global model at the $t$-th communication round, and $\Delta w^{(i)}_{t+1}$ denotes the local update from the $i$-th participant at communication round $t+1$.
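The averaging step of Eq. (1) can be sketched in a few lines. This is a minimal NumPy illustration with hypothetical toy values; client sampling, communication, and the local training itself are omitted:

```python
import numpy as np

def federated_average(w_global, local_updates):
    """Apply the mean of the participants' local updates
    Delta w^(i) to the global model, as in Eq. (1)."""
    avg_update = sum(local_updates) / len(local_updates)
    return w_global + avg_update

# toy round: 3 sampled participants, a 2-parameter "model"
w = np.zeros(2)
updates = [np.array([0.3, -0.1]),
           np.array([0.1, 0.1]),
           np.array([0.2, 0.0])]
w = federated_average(w, updates)  # new global model
```

The server only ever sees the updates, never the raw datasets that produced them.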
2.2 Differential Privacy
Differential privacy [14] provides a rigorous privacy guarantee for randomized algorithms on aggregated sensitive datasets. It is defined in terms of data queries on two adjacent databases $D$ and $D'$ whose query results are statistically similar but which differ in one data item. The formal definition of $\epsilon$-differential privacy can be described as follows:
Definition 1 ($\epsilon$-differential privacy): A randomized mechanism $M: D \rightarrow R$ fulfills $\epsilon$-differential privacy for a certain non-negative number $\epsilon$, iff for any adjacent inputs $d \in D$ and $d' \in D'$, and any output $S \subseteq R$, it holds that

\[ \Pr[M(d) \in S] \le e^{\epsilon} \cdot \Pr[M(d') \in S] \tag{2} \]

where $\epsilon$ is defined as the privacy budget, which measures the level of privacy guarantee of the randomized mechanism $M$: the smaller $\epsilon$, the stronger the privacy guarantee.
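For illustration, the standard Laplace mechanism realizes $\epsilon$-differential privacy for a numeric query by adding noise with scale sensitivity/$\epsilon$. This is a generic sketch of that textbook construction, not code from this paper:

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    """Release true_answer + Lap(sensitivity / epsilon) noise;
    a smaller epsilon means larger noise and stronger privacy."""
    rng = rng if rng is not None else np.random.default_rng()
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# a counting query ("how many records satisfy P?") has sensitivity 1
noisy_count = laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5)
```

The noise is unbiased, so repeated releases concentrate around the true answer, which is exactly why the privacy budget must be tracked across queries.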
3 System Framework
3.1 Federated Learning with MEC
In this section, we present a mobile edge computing structure for federated learning tasks, as shown in Fig. 1. Assume a scenario where all the edge devices intend to obtain desired machine learning services from a central cloud server. At the same time, these users try to prevent the leakage of any private information to the cloud server by executing the federated learning protocol. In this situation, we consider a three-layer mobile edge computing framework that provides an architecture supportive of the federated learning protocol with multiple participants. Specifically, the entities involved in our framework include the edge devices, the edge servers, and a central cloud server.
Fig. 1. Federated learning with mobile edge computing
Specifically, implementing the federated learning framework in mobile edge computing faces two practical issues. Firstly, carrying out the DNN training phase on mobile devices imposes a considerable computational cost, while the terminals connected to the mobile edge computing system are usually resource-constrained devices. Secondly, we must consider the user privacy contained in the outsourced data (or features), since the edge server and cloud server may not be trusted. Thus, the main challenge of applying federated learning to mobile edge computing is how to design a valid scheme that reduces the computation overhead on edge devices without breaking the federated learning mechanism, while protecting the user-side data privacy contained in the original data.
3.2 Overview of FedMEC
To solve the aforementioned challenge, we partition the neural network along the last convolutional layer, so that all the intermediate results generated by the client-side DNN are hidden from the other entities. The effectiveness of the partition mechanism in the DNN architecture lies in the loosely coupled property among its inner layers. That is, each hidden layer in a DNN can be executed separately by taking the previous layer's output as its input.
Fig. 2. Overview of the proposed FedMEC framework
The overview of FedMEC is presented in Fig. 2. FedMEC relies on the mobile edge computing environment and divides the whole federated learning process into three parts: the client-side part, the edge-side part, and the server-side part. The client-side neural network is assigned by the cloud server, and its network structure and parameters are frozen, while the edge-side DNN is fine-tuned; the biggest difference between our work and [9] is that the iterative model updates are aggregated and averaged in the cloud server. In this situation, edge devices merely undertake simple and lightweight feature extraction and perturbation.
In order to guarantee the performance of the frozen neural network on the client side, we use public data which has a similar distribution to the private data as an auxiliary dataset to pretrain a deep neural network as an initialized global model on the cloud side. Then the pretrained global neural network is partitioned along the last convolutional layer. Later, the well-trained convolutional layers are sent to each client for feature extraction. Based on our three-layer federated learning architecture with mobile edge computing, all the resource-hungry tasks are offloaded to the edge servers and the cloud center, while mobile edge devices merely undertake simple feature extraction through a local neural network assigned by the cloud center. At last, to address the privacy concerns, we perturb the results computed from the original data before they are transmitted to the edge server, protecting the privacy contained in the raw data.
4 Efficient Federated Learning with Differential Privacy
4.1 Deep Neural Network Partition
In the deep neural network partition strategy, we set the pivot at the last layer of the convolutional layers and separate a large DNN into two parts: the client-side DNN and the edge-side DNN. Specifically, the client-side DNN forms the front portion of the DNN structure (i.e., the convolutional layers), which is deployed on edge devices to extract features from the raw data. Note that the client-side network is pretrained by the cloud server, and its structure and parameters are frozen during the whole training phase of the federated learning procedure. The edge-side DNN contains the remaining portion of the DNN (i.e., the dense layers) and updates the model parameters by executing the forward and backward propagation procedures. The whole partition process on the deep neural network is illustrated in Fig. 3.
Fig. 3. Partition process on the deep neural network
Therefore, based on our DNN partition mechanism, the complex computation operations on the client side can be greatly reduced. As the experiments in [15] show, the partition mechanism yields lightweight resource consumption when a part of the DNN is offloaded to a third party. In addition to resource and energy considerations, partitioning solutions are attractive to deep learning service providers, paving the way for federated learning applications on mobile edge devices.
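The split can be sketched as two function calls, with only the (later perturbed) features crossing the device boundary. The weights and layer sizes below are hypothetical placeholders; the frozen client side stands in for the pretrained convolutional layers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen client-side parameters (pretrained by the cloud server)
W_client = rng.standard_normal((784, 64)) * 0.01
# Trainable edge-side parameters (the dense layers)
W_edge = rng.standard_normal((64, 10)) * 0.01

def client_forward(x_raw):
    """Runs on the edge device: feature extraction only."""
    return np.maximum(0.0, x_raw @ W_client)   # ReLU activation

def edge_forward(features):
    """Runs on the edge server: the remaining dense layers."""
    return features @ W_edge

x = rng.standard_normal(784)     # one flattened 28x28 input
features = client_forward(x)     # only this leaves the device
logits = edge_forward(features)
```

Because each layer consumes only the previous layer's output, the two halves can live on different machines without changing the computation.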
4.2 Differentially Private Data Perturbation
The federated learning protocol is designed to provide a basic privacy guarantee for each participant's raw data due to its local training property. However, a participant's sensitive data can still be leaked to untrusted third parties,
such as the edge server and cloud server, even from a small portion of the updated parameters (i.e., features and gradients). For example, according to [12], the server in federated learning can easily launch a model inversion attack to obtain parts of the training data distributions, and the gradient backward inference described in [13] also enables an adversary to obtain a fraction of private data from the participants' local updates. Therefore, it is necessary to design a practical privacy-preserving mechanism to protect the privacy of each participant against untrusted third parties in federated learning.
Differential privacy [14] is a promising solution that provides a rigorous privacy guarantee by adding deliberate perturbation to sensitive data. However, adding the perturbation directly to the original data may significantly degrade learning performance. Thus, we instead perturb the features generated by the convolutional layers of the partitioned DNN, so as to preserve the privacy contained in the raw data. In this paper, we solve the aforementioned problem with a differentially private data perturbation mechanism which protects the private information contained in the features extracted by the client-side DNN.
Following the work in [9], we consider the deep neural network as a deterministic function $x_l = F(x_r)$, where $x_r$ represents the private raw data and $x_l$ stands for the $l$-th layer output of the neural network. To address the privacy concern, we apply the differential privacy method to the DNN and further construct our private federated learning protocol in the mobile edge computing paradigm. One efficient way to realize $\epsilon$-differential privacy is to add controlled Laplace noise, sampled from the Laplace distribution with scale $\Delta F/\epsilon$, into the output $x_l$. According to the definition of differential privacy described in Sect. 2.2, the global sensitivity of a query $f: D \rightarrow R$ can be defined as follows:

\[ \Delta f = \max_{d \in D, d' \in D'} \| f(d) - f(d') \| \tag{3} \]
However, the biggest challenge here is that the global sensitivity $\Delta F$ is difficult to quantify in a deep neural network. Directly adding the Laplace perturbation to the output features would destroy the utility of the representations for future predictions.
To address this problem, we employ the nullification and norm bounding methods to enhance the practicality of differential privacy in deep neural networks. Specifically, before a participant starts to extract features from his sensitive raw data $x_r$ using the pretrained client-side DNN, he first performs the nullification operation to mask the highly sensitive data items as $\bar{x}_r = x_r \odot I_n$, where $\odot$ is the element-wise multiplication operation and $I_n$ is the nullification matrix with the same dimensions as the input raw data. Besides, the nullification matrix $I_n$ is a random binary matrix (i.e., consisting of 0s and 1s) whose structure is determined by a nullification rate $\mu$, meaning that the number of zeros is the supremum $\mathrm{Sup}(n \cdot \mu)$. Apparently, $\mu$ has a significant impact on the prediction accuracy, which will be discussed in Sect. 5.
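A minimal sketch of the nullification step, assuming "number of zeros = ⌈n·μ⌉" as the intended reading of Sup(n·μ):

```python
import numpy as np

def nullify(x_raw, mu, rng=None):
    """Mask a mu-fraction of input entries: element-wise product of
    x_raw with a random binary nullification matrix I_n."""
    rng = rng if rng is not None else np.random.default_rng()
    n = x_raw.size
    num_zeros = int(np.ceil(n * mu))        # ceil(n * mu) entries zeroed
    mask = np.ones(n)
    mask[rng.choice(n, size=num_zeros, replace=False)] = 0.0
    return x_raw * mask.reshape(x_raw.shape)

x_r = np.ones((28, 28))                     # dummy "image"
x_bar = nullify(x_r, mu=0.10, rng=np.random.default_rng(1))
```

With μ = 10% on a 28×28 input, 79 of the 784 entries are zeroed before feature extraction.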
After the nullification operation on the sensitive raw data, each participant runs the client-side DNN on $\bar{x}_r$ to extract the features as $x_l = F(\bar{x}_r)$.
Then, we consider the norm bounding method to enforce a certain global sensitivity as follows:

\[ \bar{x}_l = x_l \Big/ \max\!\left(1, \frac{\|x_l\|_{\infty}}{B}\right) \tag{4} \]

where $\|x_l\|_{\infty}$ represents the infinity norm of the $l$-th layer outputs. This formula indicates that $\bar{x}_l$ is upper bounded by $B$, meaning that the sensitivity of $x_l$ is preserved as long as $\|x_l\|_{\infty} \le B$, whereas $x_l$ is scaled down when $\|x_l\|_{\infty} > B$. According to [16], the scaling factor $B$ is usually set to the median of $\|x_l\|_{\infty}$. The Laplace perturbation (scaled to $B$) is then added to the bounded features $\bar{x}_l$ to further preserve privacy:

\[ \tilde{x}_l = \bar{x}_l + \mathrm{Lap}(B/\sigma \cdot I) \tag{5} \]
Note that the Laplace noise is added to the final output of the convolutional layers. Since every client-side DNN has the same network structure, we use the same notation $\tilde{x}_l$ to represent the perturbed features for all participants.
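Eqs. (4) and (5) amount to a clip-then-noise step. The sketch below uses toy feature values and leaves the noise scale as a generic parameter b, since the paper parameterizes the Laplace diversity separately:

```python
import numpy as np

def bound_and_perturb(x_l, B, b, rng=None):
    """Scale the features so their infinity norm is at most B (Eq. 4),
    then add element-wise Laplace noise of diversity b (Eq. 5)."""
    rng = rng if rng is not None else np.random.default_rng()
    x_bar = x_l / max(1.0, np.max(np.abs(x_l)) / B)
    return x_bar + rng.laplace(0.0, b, size=x_l.shape)

features = np.array([4.0, -8.0, 2.0])      # toy client-side features
x_tilde = bound_and_perturb(features, B=2.0, b=0.5,
                            rng=np.random.default_rng(0))
```

With B = 2 the example features are scaled by 1/4 to [1.0, −2.0, 0.5] before the noise is added; features already within the bound pass through unscaled.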
4.3 Differentially Private Federated Learning
According to the standard federated learning protocol [8], after adding the Laplace perturbation to the features extracted from the client-side DNN, all the perturbed features are fed to the edge-side DNN to generate the local model update by running the SGD algorithm. For simplicity, we use $\tilde{x}_i$ to represent the $i$-th participant's input (i.e., participant $i$'s perturbed features), where $i \in [1, n]$. The SGD mechanism is an optimization method that finds the parameters $w$ by minimizing the loss function $L(w, \tilde{x}_i)$. In a certain communication round $t$, the SGD algorithm first computes the gradient $g_t(\tilde{x}_i)$ for any input features $\tilde{x}_i$ as follows:

\[ g_t = \nabla_{w_t} L(w_t, \tilde{x}_i) \tag{6} \]
To achieve distributed computation capability, we adopt the distributed selective stochastic gradient descent (DSSGD) mechanism instead of the conventional SGD algorithm in the federated learning procedure. DSSGD splits the weights $w_t$ and the gradient $g_t$ into $n$ parts, namely $w_t = (w^{(1)}_t, \cdots, w^{(n)}_t)$ and $g_t = (g^{(1)}_t, \cdots, g^{(n)}_t)$, so the local parameter update rule becomes:

\[ w^{(i)}_{t+1} = w^{(i)}_t - \eta \cdot g^{(i)}_t \tag{7} \]

where $\eta$ denotes the learning rate. Then the conventional SGD algorithm is executed to calculate the local model update as:

\[ \Delta w^{(i)}_{t+1} = w^{(i)}_{t+1} - w^{(i)}_t \tag{8} \]

At last, each edge server sends the local model update $\Delta w^{(i)}_{t+1}$ to the cloud server, which further executes the federated averaging procedure:

\[ w^{(global)}_{t+1} = w^{(global)}_{t} + \frac{1}{n} \sum_{i=1}^{n} \Delta w^{(i)}_{t+1} \tag{9} \]

The whole federated learning procedure is executed iteratively until the global model $w^{(global)}_t$ converges.
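One DSSGD step on a single parameter partition can be sketched as follows; the learning rate and the toy weight/gradient values are hypothetical:

```python
import numpy as np

def dssgd_step(w_part, g_part, lr=0.1):
    """Update the i-th parameter partition (Eq. 7) and return the
    new weights together with the local update Delta w^(i) (Eq. 8)."""
    w_new = w_part - lr * g_part
    return w_new, w_new - w_part

w_i = np.array([1.0, 2.0])       # partition i of the weights
g_i = np.array([0.5, -0.5])      # partition i of the gradient
w_i_new, delta_w_i = dssgd_step(w_i, g_i)
```

Only `delta_w_i` travels from the edge server to the cloud, where the per-partition updates are averaged as in Eq. (9).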
5 Experimental Evaluation
5.1 Dataset and Experiment Setup
Dataset: MNIST (Modified National Institute of Standards and Technology) is one of the popular benchmark datasets commonly used for training and testing in deep learning research. The MNIST dataset contains 70000 handwritten grayscale digit images ranging from 0 to 9 (i.e., 10 classes). Each image is 28 × 28 pixels, and the whole dataset is divided into 60000 training records and 10000 testing records.
Experiment Setup: In order to evaluate our proposed FedMEC algorithm, we run the federated learning protocol on an image classification task. We use a Convolutional Neural Network (CNN) based architecture to construct the classifier in our FedMEC system. The deep neural network structure for the MNIST dataset consists of 3 convolutional layers and 2 dense layers. The kernel size of all three convolutional layers is 3 × 3, and the stride for these convolutional layers is set to 2. In particular, the activation function applied in the neural network structure is LReLU. As mentioned in Sect. 4, the perturbation strength (μ, b) comprises the main parameters in our FedMEC scheme, where μ is the nullification rate and b is the diversity of the Laplace mechanism. Based on these two parameters, we test the effectiveness of our differentially private data perturbation method by applying a convolutional denoising autoencoder [17] under different perturbation strengths. Then, we give a general experimental evaluation under the setting of μ = 10% and b = 3 to demonstrate the accuracy of our FedMEC scheme. Furthermore, we also test the changes in accuracy when pre-assigning different perturbation strengths to the edge clients.
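As a sanity check on the architecture, the spatial size after each 3×3, stride-2 convolution can be computed with the standard output-size formula. A padding of 1 ("same"-style) is our assumption; the paper does not state it:

```python
import math

def conv_out(size, kernel=3, stride=2, pad=1):
    """Output spatial size of one convolutional layer."""
    return math.floor((size + 2 * pad - kernel) / stride) + 1

size = 28                      # MNIST images are 28 x 28
for _ in range(3):             # three convolutional layers
    size = conv_out(size)      # 28 -> 14 -> 7 -> 4
# `size` is the spatial dimension of the features fed to the dense layers
```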
5.2 Experimental Results
Effectiveness of Data Perturbation: To evaluate the effectiveness of our differentially private data perturbation mechanism, we adopt a convolutional denoising autoencoder under the settings of federated learning to visualize the noise and reconstruction, where the perturbation strength is represented by (μ, b). We train our model with two perturbation strengths, (μ = 1%, b = 1) and (μ = 10%, b = 5). Figure 4 shows the results of visualizing noise and reconstruction. The first row contains real samples from the MNIST dataset, and the second row shows the perturbed results under the two perturbation strengths using our differentially private data perturbation mechanism. The last row presents the reconstructed samples based on the convolutional denoising autoencoder. According to the perturbation and reconstruction results, the perturbed digits can be reconstructed to a certain degree at the perturbation strength (μ = 1%, b = 1), as shown in Fig. 4(a). However, as shown in Fig. 4(b), it is hard to reconstruct the original digits when the perturbation strength reaches (μ = 10%, b = 5), even if the perturbed data is public.
Fig. 4. Visualization of noise and reconstruction: (a) (μ = 1%, b = 1); (b) (μ = 10%, b = 5)
Impact of Data Perturbation: As we know, federated learning allows each participant to train on their data locally and only upload the parameters. In this situation, edge device users could change their perturbation strength before sending updates to the edge server. Thus, we estimate the impact of our differentially private data perturbation mechanism under different perturbation strengths on the model accuracy, meaning that the client-side DNN will be trained with the pre-assigned perturbation strength. In our experiments, we set two scenarios in which the numbers of edge clients n are 100 and 300, and the training is stopped when the communication round reaches 30 and 50 for 100 clients and 300 clients, respectively.
Fig. 5. Accuracy for μ = 10% and b = 3: (a) 100 clients; (b) 300 clients
The goal of our first group of experiments is to estimate the changes in accuracy under strength (μ = 10%, b = 3). From the results shown in Fig. 5,
Fig. 6. Effect of b: (a) 100 clients; (b) 300 clients
Fig. 7. Effect of μ: (a) 100 clients; (b) 300 clients
we can see that the model reaches high accuracy very quickly, within several communication rounds, in both the 100-client and 300-client settings, meaning our FedMEC scheme works well in the federated learning setting while providing sufficient privacy guarantees. We also design a group of experiments to evaluate the global model accuracy by changing one of the parameters in the perturbation strength (μ, b) while keeping the other parameter fixed. Here, we consider the mean accuracy for each parameter setting by averaging all the results over 30 and 50 communication rounds for 100 clients and 300 clients, respectively. As shown in Figs. 6 and 7, our FedMEC scheme achieves more than 85% classification accuracy for all the parameter combinations. Besides, with the gradual increase of the perturbation strength, the model accuracy tends to decrease, because large perturbations on the features have a negative impact at the prediction stage. Despite this, the change in classification accuracy is less than 5%, which shows the stability and validity of our FedMEC scheme.
6 Conclusion
In this work, we proposed the FedMEC framework, which enables highly efficient federated learning services in the mobile edge computing environment. To reduce the computation complexity on the mobile edge devices, we designed a new framework based on the model partition technique to split a deep neural network into two parts, where most of the heavy computation work can be offloaded to the edge server. Besides, we also presented a differentially private data perturbation mechanism that adds Laplacian random noise to the client-side features before uploading them to the edge server. The extensive experimental results on a benchmark dataset demonstrated that our proposed FedMEC scheme can achieve high model accuracy while providing sufficient privacy guarantees.
Acknowledgment. This work was supported in part by the National Key Research
and Development Program of China under Grant 2017YFB0802303, in part by the
National Natural Science Foundation of China under Grant 61672283 and Grant
61602238, in part by the Natural Science Foundation of Jiangsu Province under Grant
BK20160805, and in part by the Postgraduate Research & Practice Innovation Program
of Jiangsu Province under Grant KYCX18 0308.
References
1. Mach, P., Becvar, Z.: Mobile edge computing: a survey on architecture and computation offloading. IEEE Commun. Surv. Tutor. 19(3), 1628–1656 (2017)
2. Hesamifard, E., Takabi, H., Ghasemi, M., Wright, R.N.: Privacy-preserving
machine learning as a service. In: Proceedings of 19th Privacy Enhancing Tech-
nologies Symposium, PETS, Barcelona, Spain, July 2018, pp. 123–142 (2018)
3. Zhang, Q., Yang, L.T., Chen, Z.: Privacy preserving deep computation model on
cloud for big data feature learning. IEEE Trans. Comput. 65(5), 1351–1362 (2016)
4. Zhang, J., Chen, B., Zhao, Y., Cheng, X., Hu, F.: Data security and privacy-
preserving in edge computing paradigm: survey and open issues. IEEE Access 6,
18209–18237 (2018)
5. Smith, V., Chiang, C.-K., Sanjabi, M., Talwalkar, A.S.: Federated multi-task learning. In: Proceedings of the Annual Conference on Neural Information Processing Systems, NIPS, Long Beach, CA, USA, December 2017, pp. 4427–4437 (2017)
6. Yang, Q., Liu, Y., Chen, T., Tong, Y.: Federated machine learning: concept and
applications. ACM Trans. Intell. Syst. Technol. 10(2), 1–19 (2019)
7. Shokri, R., Shmatikov, V.: Privacy-preserving deep learning. In: Proceedings of the 22nd ACM Conference on Computer and Communications Security, CCS, Denver, Colorado, USA, October 2015, pp. 1310–1321 (2015)
8. McMahan, H.B., Moore, E., Ramage, D., Hampson, S., Agüera y Arcas, B.:
Communication-efficient learning of deep networks from decentralized data. In:
Proceedings of the 20th International Conference on Artificial Intelligence and
Statistics, AISTATS, Fort Lauderdale, Florida, USA, April 2017, pp. 1–10 (2017)
9. Wang, J., Zhang, J., Bao, W., Zhu, X., Cao, B., Yu, P.S.: Not just privacy: improv-
ing performance of private deep learning in mobile cloud. In: Proceedings of the
24th ACM SIGKDD International Conference on Knowledge Discovery and Data
Mining, KDD, London, United Kingdom, August 2018, pp. 2407–2416 (2018)
10. Mao, Y., Yi, S., Li, Q., Feng, J., Xu, F., Zhong, S.: Learning from differen-
tially private neural activations with edge computing. In: Proceedings of the 3rd
IEEE/ACM Symposium on Edge Computing, SEC, Seattle, WA, USA, October
2018, pp. 90–102 (2018)
11. Osia, S.A., et al.: A hybrid deep learning architecture for privacy-preserving mobile
analytics. ACM Trans. Knowl. Discov. Data 1(1), 1–21 (2018)
12. Fredrikson, M., Jha, S., Ristenpart, T.: Model inversion attacks that exploit confidence information and basic countermeasures. In: Proceedings of the 22nd ACM
Conference on Computer and Communications Security, CCS, Denver, Colorado,
USA, October 2015, pp. 1322–1333 (2015)
13. Phong, L.T., Aono, Y., Hayashi, T., Wang, L., Moriai, S.: Privacy-preserving deep
learning via additively homomorphic encryption. IEEE Trans. Inf. Forensics Secur.
13(5), 1333–1345 (2018)
14. Dwork, C., Roth, A.: The algorithmic foundations of differential privacy. Found.
Trends Theor. Comput. Sci. 9(3), 211–407 (2014)
15. Lane, N.D., Georgiev, P.: Can deep learning revolutionize mobile sensing? In: Proceedings of the 16th International Workshop on Mobile Computing Systems and Applications, HotMobile, Santa Fe, New Mexico, USA, February 2015, pp. 117–122 (2015)
16. Abadi, M., et al.: Deep learning with differential privacy. In: Proceedings of the 23rd ACM Conference on Computer and Communications Security, CCS, Vienna,
Austria, October 2016, pp. 308–318 (2016)
17. Dong, C., Loy, C.C., He, K., Tang, X.: Image super-resolution using deep convo-
lutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016)
... Considering the huge computational cost of large-scale DNN training, the aforementioned works on communication and computation resource allocation are not adequate to reduce the computational burden on lightweight IIoT devices during the FL training process. To further reduce the computational cost of FL training for IIoT devices, recent works [19]- [21] on DNN partition assisted FL propose to divide the DNN model into two continuous portions, and separately train bottom and top layers of the DNN model at the device and edge server sides. However, these works focus on differentially private data perturbation mechanism designs to preserve the privacy of training data, and adopt predefined DNN partition strategies for all devices regardless of limited and heterogeneous computational resources. ...
... The proposed DDSRA as a centralized scheduling algorithm is performed by the BS. Compared with the existing DNN partition approaches using a predefined DNN partition point for all devices during the FL training process [19]- [21], the proposed DDSRA algorithm dynamically optimizes DNN partition point, channel assignment, transmit power, and computation frequency with time-varying channels and stochastic energy arrivals. Optimize DNN partirion point l(t), computation frequency f G (t) and transmit power P (t) by solving (21), (22), and (23) with block coordinate descent method, and compute Λ m,j (t) according to (18); 7 Given the optimized auxiliary variable Λ m,j (t), find the channel assignment policy I(t) by solving (26) with Hungarian method; 8 Update Q(t) according to (14); ...
Federated Learning (FL) empowers Industrial Internet of Things (IIoT) with distributed intelligence of industrial automation thanks to its capability of distributed machine learning without any raw data exchange. However, it is rather challenging for lightweight IIoT devices to perform computation-intensive local model training over large-scale deep neural networks (DNNs). Driven by this issue, we develop a communication-computation efficient FL framework for resource-limited IIoT networks that integrates DNN partition technique into the standard FL mechanism, wherein IIoT devices perform local model training over the bottom layers of the objective DNN, and offload the top layers to the edge gateway side. Considering imbalanced data distribution, we derive the device-specific participation rate to involve the devices with better data distribution in more communication rounds. Upon deriving the device-specific participation rate, we propose to minimize the training delay under the constraints of device-specific participation rate, energy consumption and memory usage. To this end, we formulate a joint optimization problem of device scheduling and resource allocation (i.e., DNN partition point, channel assignment, transmit power, and computation frequency), and solve the long-term min-max mixed integer non-linear programming problem based on the Lyapunov technique. In particular, the proposed dynamic device scheduling and resource allocation (DDSRA) algorithm can achieve a trade-off to balance the training delay minimization and FL performance. We also provide the FL convergence bound for the DDSRA algorithm with both convex and non-convex settings. Experimental results demonstrate the feasibility of the derived device-specific participation rate, and show that the DDSRA algorithm outperforms baselines in terms of test accuracy and convergence time.
... Next, we review the works [25,27,49,62,87] that apply DP in FL. The common goal of these works is to ensure that a learned model does not reveal whether a client participated during decentralized training. ...
... Specifically, the FedMEC framework in [87] is an efficient federated learning service on the mobile edge computing environment, which allocates the heavy computations to the edge devices and makes the computation results differentially private before sending back to the server. On the other hand, [49] and [25] independently propose a user-level DP algorithm in the federated learning setting and provide a tight privacy guarantee. ...
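The perturbation step described above, in which device-side computation results are made differentially private before upload, can be illustrated with a generic local-DP sketch. The clipping bound and the choice of Laplace noise below are assumptions for illustration, not the exact FedMEC mechanism.

```python
import numpy as np

def perturb_activations(h, epsilon, clip=1.0, rng=None):
    """Clip each activation to bound its sensitivity, then add Laplace noise.

    A generic epsilon-LDP sketch: each entry is confined to [-clip, clip],
    so changing one input moves an entry by at most 2*clip, and Laplace
    noise with scale b = 2*clip / epsilon masks that change.
    """
    rng = rng or np.random.default_rng()
    h_clipped = np.clip(h, -clip, clip)
    scale = 2.0 * clip / epsilon
    return h_clipped + rng.laplace(0.0, scale, size=h_clipped.shape)

# Hypothetical intermediate activations leaving an edge device.
h = np.random.default_rng(0).normal(size=(1, 8))
noisy = perturb_activations(h, epsilon=1.0, rng=np.random.default_rng(1))
assert noisy.shape == h.shape
```

Smaller epsilon (stronger privacy) means larger noise scale, which is the accuracy/privacy trade-off the chapter's experiments measure under "different perturbation strengths".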
In the mobile Internet era, recommender systems have become an irreplaceable tool to help users discover useful items, thus alleviating the information overload problem. Recent research on deep neural network (DNN)-based recommender systems has made significant progress in improving prediction accuracy, largely attributed to the widely accessible large-scale user data. Such data is commonly collected from users’ personal devices and then centrally stored in the cloud server to facilitate model training. However, with the rising public concerns on user privacy leakage in online platforms, online users are becoming increasingly anxious over abuses of user privacy. Therefore, it is urgent and beneficial to develop a recommender system that can achieve both high prediction accuracy and strong privacy protection. To this end, we propose a DNN-based recommendation model called PrivRec running on the decentralized federated learning (FL) environment, which ensures that a user’s data is fully retained on her/his personal device while contributing to training an accurate model. On the other hand, to better embrace the data heterogeneity (e.g., users’ data vary in scale and quality significantly) in FL, we innovatively introduce a first-order meta-learning method that enables fast on-device personalization with only a few data points. Furthermore, to defend against potential malicious participants that pose a serious security threat to other users, we further develop a user-level differentially private model, namely DP-PrivRec, so attackers are unable to identify any arbitrary user from the trained model. To compensate for the accuracy loss caused by adding noise during model updates, we introduce a two-stage training approach. Finally, we conduct extensive experiments on two large-scale datasets in a simulated FL environment, and the results validate the superiority of both PrivRec and DP-PrivRec.
Federated learning is an emerging machine learning paradigm where clients train models locally and formulate a global model based on the local model updates. To identify the state-of-the-art in federated learning and explore how to develop federated learning systems, we perform a systematic literature review from a software engineering perspective, based on 231 primary studies. Our data synthesis covers the lifecycle of federated learning system development that includes background understanding, requirement analysis, architecture design, implementation, and evaluation. We highlight and summarise the findings from the results and identify future trends to encourage researchers to advance their current work.
... (DP-FL). Furthermore, they presented how the DP-FL framework works in the cloud. Zhang, Wang, Zhao, and Chen (2019) presented an efficient private FL scheme in Mobile Edge Computing (MEC), called FedMEC, to solve several issues in MEC systems, observing that the parameters of a deep learning network trained on a dataset can be exploited to partially reconstruct the training samples of the original dataset. ...
Federated Learning (FL) has been foundational in improving the performance of a wide range of applications since it was first introduced by Google. Some of the most prominent and commonly used FL-powered applications are Android’s Gboard for predictive text and Google Assistant. FL can be defined as a setting that makes on-device, collaborative Machine Learning possible. A wide range of literature has studied FL technical considerations, frameworks, and limitations with several works presenting a survey of the prominent literature on FL. However, prior surveys have focused on technical considerations and challenges of FL, and there has been a limitation in more recent work that presents a comprehensive overview of the status and future trends of FL in applications and markets. In this survey, we introduce the basic fundamentals of FL, describing its underlying technologies, architectures, system challenges, and privacy-preserving methods. More importantly, the contribution of this work is in scoping a wide variety of FL current applications and future trends in technology and markets today. We present a classification and clustering of literature progress in FL in application to technologies including Artificial Intelligence, Internet of Things, blockchain, Natural Language Processing, autonomous vehicles, and resource allocation, as well as in application to market use cases in domains of Data Science, healthcare, education, and industry. We discuss future open directions and challenges in FL within recommendation engines, autonomous vehicles, IoT, battery management, privacy, fairness, personalization, and the role of FL for governments and public sectors. By presenting a comprehensive review of the status and prospects of FL, this work serves as a reference point for researchers and practitioners to explore FL applications under a wide range of domains.
... Truex et al. [76] combine secure multiparty computation and differential privacy to guarantee the confidentiality of training data during the aggregation of local models. Zhang et al. [77] combine learning model partitioning and differential privacy to provide confidentiality guarantees on FL. The proposal aims to maintain a balanced trade-off between computational cost and privacy guarantees. ...
The use of machine learning (ML) with electronic health records (EHR) is growing in popularity as a means to extract knowledge that can improve the decision-making process in healthcare. Such methods require training of high-quality learning models based on diverse and comprehensive datasets, which are hard to obtain due to the sensitive nature of medical data from patients. In this context, federated learning (FL) is a methodology that enables the distributed training of machine learning models with remotely hosted datasets without the need to accumulate data and, therefore, compromise it. FL is a promising solution to improve ML-based systems, better aligning them to regulatory requirements, improving trustworthiness and data sovereignty. However, many open questions must be addressed before the use of FL becomes widespread. This article aims at presenting a systematic literature review on current research about FL in the context of EHR data for healthcare applications. Our analysis highlights the main research topics, proposed solutions, case studies, and respective ML methods. Furthermore, the article discusses a general architecture for FL applied to healthcare data based on the main insights obtained from the literature review. The collected literature corpus indicates that there is extensive research on the privacy and confidentiality aspects of training data and model sharing, which is expected given the sensitive nature of medical data. Studies also explore improvements to the aggregation mechanisms required to generate the learning model from distributed contributions and case studies with different types of medical data.
... Federated learning is a form of multi-party machine learning in which a common model is trained between cooperating institutions or individuals while each party maintains exclusive access to and control over its own private database. These methods need not be mutually exclusive: previous work has combined differential privacy with both synthetic data generation (Beaulieu-Jones et al., 2019; Jordon, Yoon, & van der Schaar, 2019; Li, Xiong, & Jiang, 2014) as well as multi-party deep learning (Zhang, Wang, Zhao, & Chen, 2019). ...
Augmenting a dataset with synthetic samples is a common processing step in machine learning with imbalanced classes to improve model performance. Another potential benefit of synthetic data is the ability to share information between cooperating parties while maintaining customer privacy. Often overlooked, however, is how the distribution of the data affects the potential gains from synthetic data augmentation. We present a case study in credit card fraud detection using Generative Adversarial Networks to generate synthetic samples, with explicit consideration given to customer distributions. We investigate two different cooperating party scenarios yielding four distinct customer distributions by credit quality. Our findings indicate that institutions skewed towards higher credit quality customers are more likely to benefit from augmentation with GANs. Relative gains from synthetic data transfer, in the absence of feature set heterogeneity, also appear to asymmetrically favour banks operating on the lower end of the credit spectrum, which we hypothesise is due to differences in spending behaviours.
Edge computing has been widely used in recent years for bringing services closer to end users, resulting in faster response for applications. However, the sensitive information that leaves the data owner is at risk of being disclosed because the service provider is generally honest-but-curious. Federated learning (FL) is a popular method for preserving privacy by transferring the model from the edge node to local devices and training on the local data set. Nonetheless, the training parameters communicated between local mobile devices and the edge node may contain the original data and be guessed by adversaries. In order to address these privacy threats, we propose the PL-FedIPEC scheme in this article, which is a privacy-preserving and low-latency FL method that transmits parameters encrypted with the improved Paillier, a homomorphic encryption algorithm, to protect the privacy of end devices without transmitting data to the edge node. Our method introduces an improved Paillier encryption that adds a new hyperparameter and precomputes multiple random intermediate values in the key generation phase, so that the time for the encryption phase is significantly reduced. With this new algorithm, the time for model training is decreased, and the sensitive information is in ciphertext format and cannot be analyzed. To evaluate the efficiency of our proposed scheme, we conduct extensive experiments, and the results validate and demonstrate that our scheme with the improved Paillier algorithm can achieve the same accuracy as the original Paillier algorithm and the baseline FedAVG algorithm. At the same time, our method can save a massive amount of time when training the learning model with various settings.
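A minimal sketch of the additive homomorphism that Paillier-based FL schemes such as PL-FedIPEC rely on, using textbook Paillier with toy primes. The "improved" precomputation variant and production key sizes (e.g., 2048-bit n) are out of scope here; this only shows why an edge node can aggregate encrypted updates without decrypting them.

```python
from math import gcd
from random import randrange

# Textbook Paillier keypair with toy primes -- illustrative only.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)     # lcm(p-1, q-1)
# With g = n + 1, L(g^lam mod n^2) = lam mod n, so mu is its inverse mod n.
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    """Enc(m) = g^m * r^n mod n^2 for random r coprime to n."""
    r = randrange(1, n)
    while gcd(r, n) != 1:
        r = randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Dec(c) = L(c^lam mod n^2) * mu mod n, where L(u) = (u - 1) / n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Additive homomorphism: multiplying ciphertexts adds plaintexts, which is
# what lets a server sum encrypted model updates from different clients.
a, b = 17, 25
assert decrypt((encrypt(a) * encrypt(b)) % n2) == a + b
```

Note that `pow(x, -1, n)` (modular inverse) requires Python 3.8+; in a real system the plaintexts would be fixed-point encodings of model parameters.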
Connected and automated vehicles (CAVs) are becoming a reality. Prototyping and testing of self-driving vehicle technology are becoming more popular around the world. The secure deployment of self-driving vehicles necessitates a wide range of technology, competencies, and procedures, all of which must be thoroughly checked and assessed, as road safety may be at risk. As a result, it is critical to recognize and develop a thorough understanding of the cyber security and privacy concerns with CAVs and of the way these can be prioritized as well as addressed. This chapter investigates falsified information attacks against the RSU’s ongoing FL operation. We discovered a variety of attack tactics used by malicious CAVs to disrupt global model training in vehicular ad hoc networks (VANETs), and demonstrate that these attacks effectively increased the convergence time and reduced the model’s accuracy.
Machine learning algorithms based on deep Neural Networks (NN) have achieved remarkable results and are being extensively used in different domains. On the other hand, with the increasing growth of cloud services, several Machine Learning as a Service (MLaaS) offerings exist where training and deploying machine learning models are performed on cloud providers’ infrastructure. However, machine learning algorithms require access to the raw data, which is often privacy sensitive and can create potential security and privacy risks. To address this issue, we present CryptoDL, a framework that develops new techniques to provide solutions for applying deep neural network algorithms to encrypted data. In this paper, we provide the theoretical foundation for implementing deep neural network algorithms in the encrypted domain and develop techniques to adopt neural networks within the practical limitations of current homomorphic encryption schemes. We show that it is feasible and practical to train neural networks using encrypted data and to make encrypted predictions, and also return the predictions in an encrypted form. We demonstrate the applicability of the proposed CryptoDL using a large number of datasets and evaluate its performance. The empirical results show that it provides accurate privacy-preserving training and classification.
With the explosive growth of IoT (Internet of Things) devices and massive data produced at the edge of the network, the traditional centralized cloud computing model has come to a bottleneck due to bandwidth limitations and resource constraints. Therefore, edge computing, which enables storing and processing data at the edge of the network, has emerged as a promising technology in recent years. However, the unique features of edge computing, such as content perception, real-time computing, and parallel processing, have also introduced several new challenges in the field of data security and privacy-preserving, which are also the key concerns of the other prevailing computing paradigms, such as cloud computing, mobile cloud computing, and fog computing. Despite its importance, there is still a lack of a survey on the recent research advances in data security and privacy-preserving in the field of edge computing. In this paper, we present a comprehensive analysis of the data security and privacy threats, protection technologies, and countermeasures inherent in edge computing. Specifically, we first give an overview of edge computing, including forming factors, definition, architecture, and several essential applications. Next, a detailed analysis of data security and privacy requirements, challenges, and mechanisms in edge computing is presented. Then, the cryptography-based technologies for solving data security and privacy issues are summarized. The state-of-the-art data security and privacy solutions in edge-related paradigms are also surveyed. Finally, we propose several open research directions of data security in the field of edge computing.
We explore the use of tools from differential privacy in the design and analysis of online learning algorithms. We develop a simple and powerful analysis technique for Follow-The-Leader type algorithms under privacy-preserving perturbations. This leads to the minimax optimal algorithm for k-sparse online PCA and the best-known perturbation-based algorithm for dense online PCA. We also show that differential privacy is the core notion of algorithm stability in various online learning problems.
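The "Follow-The-Leader under privacy-preserving perturbations" idea can be sketched as Follow-the-Perturbed-Leader over a handful of experts: each round, the learner plays the expert whose perturbed cumulative loss is smallest. The synthetic loss process and the exponential noise scale eta below are arbitrary choices for illustration, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

K, T, eta = 5, 200, 10.0          # experts, rounds, noise scale (hypothetical)
cum_loss = np.zeros(K)            # cumulative loss of each expert so far
total = 0.0                       # learner's cumulative loss

for t in range(T):
    losses = rng.uniform(0, 1, size=K)      # this round's losses per expert
    noise = rng.exponential(eta, size=K)    # fresh one-sided perturbation
    choice = np.argmin(cum_loss - noise)    # follow the perturbed leader
    total += losses[choice]
    cum_loss += losses

# Regret against the best fixed expert in hindsight.
regret = total - cum_loss.min()
```

The noise that randomizes the leader choice is exactly what makes the algorithm stable round-to-round, which is the stability/privacy connection the abstract refers to.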
The increasing quality of smartphone cameras and variety of photo editing applications, in addition to the rise in popularity of image-centric social media, have all led to a phenomenal growth in mobile-based photography. Advances in computer vision and machine learning techniques provide a large number of cloud-based services with the ability to provide content analysis, face recognition, and object detection facilities to third parties. These inferences and analytics might come with undesired privacy risks to the individuals. In this paper, we address a fundamental challenge: Can we utilize the local processing capabilities of modern smartphones efficiently to provide desired features to approved analytics services, while protecting against undesired inference attacks and preserving privacy on the cloud? We propose a hybrid architecture for a distributed deep learning model between the smartphone and the cloud. We rely on the Siamese network and machine learning approaches for providing privacy based on defined privacy constraints. We also use transfer learning techniques to evaluate the proposed method. Using the latest deep learning models for Face Recognition, Emotion Detection, and Gender Classification techniques, we demonstrate the effectiveness of our technique in providing highly accurate classification results for the desired analytics, while providing strong privacy guarantees.
Today’s artificial intelligence still faces two major challenges. One is that, in most industries, data exists in the form of isolated islands. The other is the strengthening of data privacy and security. We propose a possible solution to these challenges: secure federated learning. Beyond the federated-learning framework first proposed by Google in 2016, we introduce a comprehensive secure federated-learning framework, which includes horizontal federated learning, vertical federated learning, and federated transfer learning. We provide definitions, architectures, and applications for the federated-learning framework, and provide a comprehensive survey of existing works on this subject. In addition, we propose building data networks among organizations based on federated mechanisms as an effective solution to allowing knowledge to be shared without compromising user privacy.
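At the core of the horizontal federated-learning setting described above is a weighted aggregation rule. The sketch below shows FedAvg-style aggregation, where each client's update is weighted by its local dataset size; the client updates and sizes are hypothetical.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of client parameter vectors (FedAvg-style rule):
    client k contributes with weight n_k / sum(n)."""
    total = sum(client_sizes)
    return sum(w * (s / total) for w, s in zip(client_weights, client_sizes))

# Three hypothetical clients, each holding a 2-entry parameter vector.
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]

global_w = fed_avg(updates, sizes)
assert np.allclose(global_w, [3.5, 4.5])
```

In a full round, the server would broadcast `global_w` back to clients, each client would train locally on its private data, and the aggregation would repeat; no raw data ever leaves a client.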
We present a privacy-preserving deep learning system in which many learning participants perform neural network-based deep learning over a combined dataset of all, without revealing the participants’ local data to a central server. To that end, we revisit the previous work by Shokri and Shmatikov (ACM CCS 2015) and show that, with their method, local data information may be leaked to an honest-but-curious server. We then fix that problem by building an enhanced system with the following properties: (1) no information is leaked to the server; and (2) accuracy is kept intact, compared to that of the ordinary deep learning system also over the combined dataset. Our system bridges deep learning and cryptography: we utilize asynchronous stochastic gradient descent as applied to neural networks, in combination with additively homomorphic encryption. We show that our usage of encryption adds tolerable overhead to the ordinary deep learning system.
Deep learning based on artificial neural networks is a very popular approach to modeling, classifying, and recognizing complex data such as images, speech, and text. The unprecedented accuracy of deep learning methods has turned them into the foundation of new AI-based services on the Internet. Commercial companies that collect user data on a large scale have been the main beneficiaries of this trend, since the success of deep learning techniques is directly proportional to the amount of data available for training. Massive data collection required for deep learning presents obvious privacy issues. Users' personal, highly sensitive data such as photos and voice recordings is kept indefinitely by the companies that collect it. Users can neither delete it, nor restrict the purposes for which it is used. Furthermore, centrally kept data is subject to legal subpoenas and extra-judicial surveillance. Many data owners, for example medical institutions that may want to apply deep learning methods to clinical records, are prevented by privacy and confidentiality concerns from sharing the data and thus benefitting from large-scale deep learning. In this paper, we design, implement, and evaluate a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets. We exploit the fact that the optimization algorithms used in modern deep learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously. Our system lets participants train independently on their own datasets and selectively share small subsets of their models' key parameters during training. This offers an attractive point in the utility/privacy tradeoff space: participants preserve the privacy of their respective data while still benefitting from other participants' models and thus boosting their learning accuracy beyond what is achievable solely on their own inputs. We demonstrate the accuracy of our privacy-preserving deep learning on benchmark datasets.
Sensor-equipped smartphones and wearables are transforming a variety of mobile apps ranging from health monitoring to digital assistants. However, reliably inferring user behavior and context from noisy and complex sensor data collected under mobile device constraints remains an open problem, and a key bottleneck to sensor app development. In recent years, advances in the field of deep learning have resulted in nearly unprecedented gains in related inference tasks such as speech and object recognition. However, although mobile sensing shares many of the same data modeling challenges, we have yet to see deep learning be systematically studied within the sensing domain. If deep learning could lead to significantly more robust and efficient mobile sensor inference, it would revolutionize the field by rapidly expanding the number of sensor apps ready for mainstream usage. In this paper, we provide preliminary answers to this potentially game-changing question by prototyping a low-power Deep Neural Network (DNN) inference engine that exploits both the CPU and DSP of a mobile device SoC. We use this engine to study typical mobile sensing tasks (e.g., activity recognition) using DNNs, and compare results to learning techniques in more common usage. Our early findings provide illustrative examples of DNN usage that do not overburden modern mobile hardware, while also indicating how they can improve inference accuracy. Moreover, we show DNNs can gracefully scale to larger numbers of inference classes and can be flexibly partitioned across mobile and remote resources. Collectively, these results highlight the critical need for further exploration as to how the field of mobile sensing can best make use of advances in deep learning towards robust and efficient sensor inference.
Federated learning poses new statistical and systems challenges in training machine learning models over distributed networks of devices. In this work, we show that multi-task learning is naturally suited to handle the statistical challenges of this setting, and propose a novel systems-aware optimization method, MOCHA, that is robust to practical systems issues. Our method and theory for the first time consider issues of high communication cost, stragglers, and fault tolerance for distributed multi-task learning. The resulting method achieves significant speedups compared to alternatives in the federated setting, as we demonstrate through simulations on real-world federated datasets.