A Partial Offloading Technique for Wireless Mobile
Cloud Computing in Smart Cities
Daniela Mazza, Daniele Tarchi, and Giovanni E. Corazza
Department of Electrical, Electronic and Information Engineering
University of Bologna
40136 Bologna, Italy
Email: {daniela.mazza6,daniele.tarchi,giovanni.corazza}
Abstract—A smart city scenario based on an efficient wireless network allows users to benefit from multimedia services in a ubiquitous, seamless and interoperable way. In this context, Mobile Cloud Computing (MCC) and Heterogeneous Networks (HetNets) are viewed as infrastructures that together provide a key solution to the major problems faced: the former allows offloading applications to powerful remote servers, shortening execution time and extending the battery life of mobile devices, while the latter allows the use of small cells in addition to macrocells, providing high-speed and stable connectivity under an ever-growing mobile traffic trend. In this paper, we propose a technique that moves to the cloud only a fraction of the computing application by minimizing a cost function that takes into account the tradeoff between energy consumption and execution time, in a non-trivial multi-objective optimization approach. The results show that when the application requires a high execution and data workload and the network is simultaneously overloaded, a particular value of this offloading percentage yields the best performance.
Smart cities are considered a paradigm where wireless communications are an enhancing factor to improve urban services and the quality of life for citizens and visitors. The smart city scenario is composed of several parts: the wireless infrastructure, the user devices, sensing nodes, machine devices, access points, and one or more cloud infrastructures. Moreover, for delivering the requested services, a large amount of data is exchanged among citizens and devices, and these data also need to be processed in order to provide the correct information to the users.
Thanks to wireless communications, users can move through different environments, indoor and outdoor, providing data to the cloud and receiving access services such as browsing, video on demand, video streaming, and location and map information. In this context, energy saving and performance improvement of Smart Mobile Devices (SMDs) have been widely recognized as primary issues. In fact, the execution of any complex application is a big challenge due to the limited battery power and computation capacity of mobile devices, especially in a smart environment where communication is considered a key to obtaining better features in important areas such as mobility and transportation.
The exploitation of Heterogeneous Network (HetNet) infrastructures together with the opportunity to delegate computation load to Mobile Cloud Computing (MCC), as shown in Fig. 1, is an appealing combination for achieving the aims of saving the SMD's power resources and executing the requested tasks faster [1].

Fig. 1. The reference scenario with access nodes in HetNet for Mobile Cloud
HetNets involve multiple types of low-power radio access nodes in addition to the traditional macrocell nodes in a wireless network, with the major goal of enhancing connectivity; WiFi access points along with femtocells are projected to carry over 60% of all global data traffic by 2015 [2]. On the other hand, MCC aims to increase the computing capabilities of mobile devices, conserve local resources (especially battery), extend storage capacity and enhance data safety, to enrich the computing experience of mobile users [3].
The distributed execution (i.e., computation/code offloading) between the cloud and mobile devices has been widely investigated [4], highlighting the challenges towards a more efficient cloud-based offloading framework and also suggesting some opportunities that may be exploited. Indeed, the joint optimization of HetNets and distributed processing is a promising research trend [5].
Several works have already analyzed the characteristics and capacity of MCC offloading, for example aiming to extract offloading-friendly parts of code from existing applications [6], [7]. Also, in [8] the key issues are identified when developing new applications which can effectively leverage cloud resources. Furthermore, in [9] a real-life scenario, where each device is associated with a software clone on the cloud, is considered, and in [10] a system that effectively accounts for the power usage of all of the primary hardware subsystems on the phone has been implemented, distinguishing between CPU, display, graphics, GPS, audio, microphone, and WiFi. In [11] an offloading framework, named Ternary Decision Maker (TDM), is developed, aiming to shorten response time and reduce energy consumption simultaneously, with execution targets including the on-board CPU and GPU in addition to the cloud, from the point of view of the single device. In addition, there are many studies that focus on whether to offload computation to a server, providing solutions related to a yes/no decision for the entire task at one time [12], [13], or studies that focus on optimizing the energy consumption in SMDs needed to run a given application under an execution time constraint [14].
The aim of this paper is to propose a partial offloading technique able to exploit the HetNet scenario and the presence of MCC devices, by optimizing the fraction of the computational tasks to be offloaded depending on the number of devices connected to the network and their location with respect to the WiFi Access Points or LTE eNodeBs. Differently from the literature, we tackle the optimization of the entire system and not of the single device, by taking into account partial offloading in a non-trivial multi-objective optimization approach where both energy consumption and execution time constraints are tackled. A cost function considering the tradeoff between the energy consumption of mobile devices and the time to offload data and compute tasks on a remote cloud server is provided, evaluating the optimal offloading fraction depending on the network's load. We provide a function that a centralized network management entity can exploit to evaluate the best percentage to offload in very crowded situations, when the network is overloaded and tasks request both a large amount of computation and data to be exchanged.
The reference scenario we are focusing on is characterized by an urban area with pervasive wireless coverage, where several mobile devices interact with a traditional centralized cloud service and request services from a remote data center, as illustrated in Fig. 1. In order to connect to the cloud and the data centers, we consider the presence of two types of Radio Access Technologies (RATs) that compose the basic elements of the HetNet: macrocells and small cells.
a) Macrocells: The distance between the access points (base stations of the macrocells) is usually higher than 500 m. Thanks to this type of base station the environment is completely covered, and devices can move while minimizing the handover frequency. On the other hand, in macrocells the system suffers from channel fading and traffic congestion. This leads to a lack of stability, preventing very high data rates. The technology used for this type of cell refers to cellular networks, e.g., 3G, LTE.
b) Small Cells: Small cells are characterized by low-power radio access nodes, which have a coverage range of about 100-200 m or less. We can distinguish between Picocells (providing hotspot coverage in public places, e.g., malls, airports and stadiums, without limits in terms of the number of connected devices) and Femtocells (covering a home or small business area, available only for selected devices). Picocells and Femtocells have been recently introduced as a way to increase the coverage and maximize the resource allocation in LTE networks. We also consider WiFi access points as nodes with a small coverage range (less than 100 m) which can typically communicate with a small number of client devices. However, the actual range of communication can vary significantly, depending on such variables as indoor or outdoor placement, the current weather, operating radio frequency, and the power output of devices.

TABLE I
symbol | meaning                           | unit of measure
Pl     | power for local computing         | W
Pid    | power while being idle            | W
Ptr    | power for sending and receiving   | W
Smd    | SMD's calculation speed           | no. of instructions / s
Str    | SMD's transmission speed          | bit / s
Scs    | cloud server's calculation speed  | no. of instructions / s
C      | instructions required by the task | no. of instructions
D      | exchanged data                    | bit
Alongside the presence of a pervasive wireless network, a smart city environment is characterized by the presence of sensing and user terminals that generate and exploit a large amount of data. These data, in order to be user friendly, need to be processed by some centralized or distributed data centers. If on one side the centralized approach allows exploiting high-performance computing centers, the distributed approach, residing in high-performance smartphones and user terminals, has to face the problem of a lower computing power and, in particular, the energy issues of the mobile devices.

The aim of this paper is to analyze how the SMDs can exploit a partial data offloading to distribute highly computational tasks among centralized servers and local computing; the optimization is done by exploiting an opportunely defined cost function that takes into account both the SMD power consumption and the computational time. The SMD power consumption is related to the transmission speed, which in turn affects the time performance of the offloading activity; hence, there is a tradeoff between power consumption and execution time. For this reason our model provides a cost function by resorting to a previously introduced model [12], [13] which compares the energy used for a 100% offloading with that used to perform the task locally. The parameters used in the following are listed in Tab. I.
In our scenario we suppose that the computation of a certain task requires C instructions. Smd and Scs are, respectively, the speeds in instructions per second of the mobile device and the cloud server. Hence, a certain task can be completed in an amount of time equal to C/Smd on the device and C/Scs on the server. On the other hand, let us suppose that D corresponds to the amount of bits of data that the device and the server must exchange for the remote computation, and Str is the transmission speed, in bits per second, between the SMD and the access point; hence, the transmission of data lasts an amount of time equal to D/Str. In this case we consider that the transmission time is mostly due to the access network transfer, because the transfer time over the backbone network can be considered negligible thanks to its higher data rate. Moreover, we consider as negligible the transfer time from the access point to the user terminal, because the amount of data returned by the computation in the centralized server is small with respect to the data sent to it [12], [13].
Hence, it is possible to derive the energy for local computing as the product of the power consumption of the mobile device for computing locally, Pl, and the time C/Smd needed for the computation:

$E_l = P_l \times \frac{C}{S_{md}}$ (1)

Similarly, it is possible to derive the energy needed for performing the task computation on the cloud as the energy used while being idle during the remote computation plus the energy used to transmit the whole data from the SMD to the cloud:

$E_{od} = P_{id} \times \frac{C}{S_{cs}} + P_{tr} \times \frac{D}{S_{tr}}$ (2)

where Pid and Ptr are the power consumptions of the mobile device, in watts, during idle and data transmission periods, respectively.
Similarly, it is possible to derive the time needed for the local computing as

$T_l = \frac{C}{S_{md}}$ (3)

and the time for the whole offloading computing as

$T_{od} = \frac{C}{S_{cs}} + \frac{D}{S_{tr}}$ (4)
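As a numerical illustration, Eqs. (1)-(4) can be evaluated directly. The following Python sketch uses the device power values reported later in the paper (Pl = 0.9 W, Pid = 0.3 W, Ptr = 1.3 W), while the server speed and transmission speed are hypothetical placeholder values of our own choosing:

```python
def local_energy(P_l, C, S_md):
    """E_l per (1): local computing power times local compute time C/S_md."""
    return P_l * C / S_md

def offload_energy(P_id, P_tr, C, S_cs, D, S_tr):
    """E_od per (2): idle energy during the remote computation C/S_cs
    plus the energy to transmit D bits at speed S_tr."""
    return P_id * C / S_cs + P_tr * D / S_tr

def local_time(C, S_md):
    """T_l per (3)."""
    return C / S_md

def offload_time(C, S_cs, D, S_tr):
    """T_od per (4): remote compute time plus data transmission time."""
    return C / S_cs + D / S_tr

# Illustrative task: 10^7 instructions, 10^5 bits exchanged.
C, D = 1e7, 1e5
S_md = 400e6          # device speed [instructions/s], assumed from a 400 MHz CPU
S_cs = 10e9           # assumed cloud server speed [instructions/s]
S_tr = 1e6            # assumed transmission speed [bit/s]
P_l, P_id, P_tr = 0.9, 0.3, 1.3   # device power states [W], from [12]

print(local_energy(P_l, C, S_md), offload_energy(P_id, P_tr, C, S_cs, D, S_tr))
print(local_time(C, S_md), offload_time(C, S_cs, D, S_tr))
```

With these placeholder values the offloading energy is dominated by the transmission term, which is exactly the tradeoff the partial offloading scheme exploits.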
In many applications, this approach is not efficient or
feasible, and it is necessary to partition the application at a
finer granularity into local and remote parts, which is a key
step for offloading.
We first provide two equations representing the energy used by an SMD to execute an application under partial offloading and the time needed to execute such an application. Secondly, the impact of the traffic workload in the wireless network is taken into account, since the Radio Access Technologies (RATs) and the number of SMDs determine the transmission speed of the offloading data. Thirdly, a cost function is introduced to evaluate the percentage of offloading which minimizes both energy and time.
In order to analyze the energy spent to offload only a part of the application, we must introduce the weight coefficients γ and δ, satisfying 0 ≤ γ, δ ≤ 1, representing respectively the percentage of the computational task and the percentage of the exchanged data for offloading. Then we can compute the energy used by a single device, Epart od, as the sum of the energy spent to perform a part of the task locally plus that spent to idle and to transmit the other part of the task to the cloud:

$E_{part\,od} = P_l \times \frac{(1-\gamma)\cdot C}{S_{md}} + P_{id} \times \frac{\gamma\cdot C}{S_{cs}} + P_{tr} \times \frac{\delta\cdot D}{S_{tr}}$ (5)
Taking into account the same coefficients γ and δ used in (5), we can calculate the time for the partial offloading, Tpart od, as the maximum between the time needed to compute the local part of the task and that needed for the offloading, the two phases being performed at the same time:

$T_{part\,od} = \max\left(\frac{(1-\gamma)\cdot C}{S_{md}},\ \frac{\gamma\cdot C}{S_{cs}} + \frac{\delta\cdot D}{S_{tr}}\right)$ (6)
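Expressions (5) and (6) can be sketched directly in code; the function names are ours, not the authors':

```python
def partial_energy(gamma, delta, P_l, P_id, P_tr, C, S_md, S_cs, D, S_tr):
    """E_part_od per (5): energy for the local fraction (1 - gamma) of the
    task, plus idle energy while the remote fraction gamma runs on the
    cloud, plus energy to transmit the fraction delta of the data."""
    return (P_l * (1 - gamma) * C / S_md
            + P_id * gamma * C / S_cs
            + P_tr * delta * D / S_tr)

def partial_time(gamma, delta, C, S_md, S_cs, D, S_tr):
    """T_part_od per (6): the local and remote phases run in parallel,
    so the total time is the maximum of the two."""
    return max((1 - gamma) * C / S_md,
               gamma * C / S_cs + delta * D / S_tr)
```

Setting γ = δ = 0 recovers the local-only case of (1) and (3), while γ = δ = 1 recovers the full-offloading case of (2) and (4).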
The structure and the workload of the network are implicitly considered in (5) and (6). We now describe the effect on Epart od and Tpart od of the different RATs operating in the HetNet and of the number of devices connected to these different RATs. The HetNet mainly consists of two components, macrocells and small cells, with different bandwidths BW. Since, for a single SMD, the data-exchange speed Str is affected by the bandwidth of the node to which the SMD is connected, by the distance d from this node, and by the overall number n of SMDs connected to the same node, Str can be written in an explicit way as:

$S_{tr} = \frac{BW}{n} \cdot \log_2\left(1 + SNR\right)$ (7)

where SNR is the SMD's Signal-to-Noise Ratio, a typical parameter of the device.
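A direct transcription of (7) in Python (the SNR value below is an arbitrary example of our own):

```python
import math

def transmission_speed(bw_hz, n_devices, snr):
    """S_tr per (7): the node bandwidth is shared equally among the n
    connected SMDs, each with a Shannon-like efficiency log2(1 + SNR)."""
    return bw_hz / n_devices * math.log2(1 + snr)

# Example: a 22 MHz WiFi access point, linear SNR = 3, growing load.
for n in (1, 10, 100):
    print(n, transmission_speed(22e6, n, 3))
```

As n grows, Str shrinks proportionally: this is the congestion effect that penalizes offloading in overloaded networks.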
In order to allow the evaluation of the offloading percentage, aiming to save energy and improve performance, a cost function that considers the minimization of both (5) and (6) for the entire set of SMDs is required. This is a non-trivial multi-objective optimization problem that we addressed by setting the cost function as a weighted sum of the two average values, with α and β coefficients subject to the constraints 0 ≤ α, β ≤ 1 and α + β = 1, N the number of the network's devices, and El, Tl reference values representing the average energy and time spent when the task is computed locally by an SMD:

$F(\gamma,\delta) = \frac{\alpha}{N \cdot E_l} \sum_{k=1}^{N} E_{part\,od,k}(\gamma,\delta) + \frac{\beta}{N \cdot T_l} \sum_{k=1}^{N} T_{part\,od,k}(\gamma,\delta)$ (8)
This cost function is based on a network-centric approach in which a central entity is responsible for choosing the values of the offloading percentages γ and δ after collecting information about the SMDs' features. Furthermore, in the partial offloading procedure, γ and δ are bounded, because before a task is executed it may require a certain amount of data from other tasks [1]. Moreover, the weight coefficients α and β are chosen at the management level to give greater importance to energy or to time saving.
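Such a central entity could evaluate (8) by a simple grid search over γ. The sketch below is our own illustration, not the authors' implementation: it assumes identical devices, ties δ to γ through a fixed per-application ratio, and redefines dict-based variants of (5) and (6) for self-containment:

```python
def partial_energy(g, d, p):
    """E_part_od per (5) for one device with parameter dict p."""
    return (p["P_l"] * (1 - g) * p["C"] / p["S_md"]
            + p["P_id"] * g * p["C"] / p["S_cs"]
            + p["P_tr"] * d * p["D"] / p["S_tr"])

def partial_time(g, d, p):
    """T_part_od per (6) for one device."""
    return max((1 - g) * p["C"] / p["S_md"],
               g * p["C"] / p["S_cs"] + d * p["D"] / p["S_tr"])

def cost(gamma, ratio, devices, alpha=0.5, beta=0.5):
    """Weighted sum (8) of average energy and time, normalized by the
    averages E_l, T_l of the local-only case; delta = ratio * gamma."""
    delta = min(1.0, ratio * gamma)
    n = len(devices)
    e_loc = sum(p["P_l"] * p["C"] / p["S_md"] for p in devices) / n
    t_loc = sum(p["C"] / p["S_md"] for p in devices) / n
    e_avg = sum(partial_energy(gamma, delta, p) for p in devices) / n
    t_avg = sum(partial_time(gamma, delta, p) for p in devices) / n
    return alpha * e_avg / e_loc + beta * t_avg / t_loc

def best_gamma(ratio, devices, steps=101):
    """Grid search for the offloading fraction minimizing the cost."""
    return min((i / (steps - 1) for i in range(steps)),
               key=lambda g: cost(g, ratio, devices))
```

Under assumed parameters with a fast, lightly loaded link and a small δ/γ ratio, best_gamma returns 1.0 (full offloading), which is consistent with the qualitative behavior reported for computation-heavy, data-light tasks.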
TABLE II
Application                              | Computation (C) | Data (D)   | δ/γ
1 - Real Time Traffic Analysis           | High (10^7)     | Low (10^5) | 0.25
2 - Mobile Video and Audio Communication | Low (10^5)      | High (10^7)| 0.75
3 - Mobile Social Networking             | High (10^7)     | High (10^7)| 0.50
During a partial offloading, the amount of energy and time in (5) and (6) is affected by the percentages of computation and exchanged data, represented respectively by the coefficients γ and δ. These are correlated with each other, since the execution of a remote computation task requires a certain amount of input/output data to be exchanged. We can therefore consider the ratio δ/γ as a typical value, peculiar to a type of application. To summarize typical scenarios, we have taken into account the three kinds of applications represented in Tab. II, chosen with the aim of analyzing the cases of a smart transportation system [15].
We considered a deployment area of 1000 × 1000 m², where one LTE eNodeB with a channel capacity equal to 100 MHz and three WiFi access points with channel capacities equal to 22 MHz are positioned to cover the entire area. The SMDs, positioned randomly, are connected to the nearest node, independently of the number of SMDs existing in the network. Fig. 2 represents the area in the cases of 500 and 5000 connected SMDs, where the access points are positioned at points (0,0), (500,1000) and (0,1000), and the LTE station at (500,500). The values of Smd, Pid, Ptr and Pl are specific parameters of the mobile device. For example, we utilized the values of an HP iPAQ PDA with a 400 MHz Intel XScale processor (Smd = 400) and the following values: Pl = 0.9 W, Pid = 0.3 W and Ptr = 1.3 W [12].
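The deployment described above can be reproduced with a short simulation sketch; the node positions follow the paper, while the random seed and the nearest-node association code are our own assumptions:

```python
import math
import random

# Access-node positions in the 1000 x 1000 m^2 area (from the paper).
NODES = {
    "wifi_1": (0, 0),
    "wifi_2": (500, 1000),
    "wifi_3": (0, 1000),
    "lte": (500, 500),
}

def nearest_node(x, y):
    """Each SMD attaches to the closest access node, regardless of load."""
    return min(NODES, key=lambda name: math.dist((x, y), NODES[name]))

def drop_devices(n_smd, seed=42):
    """Drop n_smd SMDs uniformly at random and count the load per node."""
    rng = random.Random(seed)
    load = {name: 0 for name in NODES}
    for _ in range(n_smd):
        load[nearest_node(rng.uniform(0, 1000), rng.uniform(0, 1000))] += 1
    return load

print(drop_devices(500))
```

The per-node counts n obtained this way are what enter (7) to determine each SMD's transmission speed.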
As for the cost function coefficients, we set both α and β to 0.5, aiming to give the same importance to timing and to energy consumption. In Fig. 3 the performance results of the cost function are represented for the three applications described in Tab. II.
Fig. 3a shows that when a task requires high computation and low communication (as Application 1), it is better to offload the task totally, no matter how many devices are connected to the network. In fact the curves overlap and the cost function assumes the same values for the same percentage γ.
On the other hand, Fig. 3b shows that when a task requires low computation and high communication (as Application 2), it is better to compute the task locally. In this case a large number of connected devices affects the cost function in a negative way; it is possible to see that there is a minimum for γ = 0.

Fig. 2. Area in the cases of 500 and 5000 connected SMDs, where the access points are positioned at points (0,0), (500,1000) and (0,1000), and the LTE station at (500,500)
The most interesting case is shown in Fig. 3c. When the network is overloaded, tasks (as in Application 3) with both a large amount of computation to execute and data to exchange are better performed for a specific value of γ. For example, in this case, the best performance is for γ = 0.4 for a network with 5000 devices and γ = 0.7 for a network with 2000 devices. For non-overloaded networks it is better to perform the total offloading.
Finally, as shown in Figs. 4 and 5, we compare the energy and time spent in the adaptive case with those spent in the local execution and total offloading cases to perform Application 3; for the adaptive algorithm we have considered using the optimized γ parameter following the previous analysis. While for the energy there is a compromise between the two boundary cases, the adaptive function allows the best performance when time is considered the primary issue.
A detailed analysis for other values of the coefficients α and β will be carried out in our future research.
In this article we focused on the definition of a cost model
for optimizing time and energy consumption in a smart city
HetNets scenario where smart mobile devices are supposed to
perform an application; the aim was to optimize the amount
of computation performed locally and remotely. The remote
execution is faster and can relieve mobile devices from the
Fig. 3. Cost function behavior for the three considered applications, for n = 500, 1000, 2000 and 5000 connected devices: (a) Application 1 - Real Time Traffic Analysis; (b) Application 2 - Mobile Video and Audio Communication; (c) Application 3 - Mobile Social Networking
Fig. 4. Energy for Application 3 - Mobile Social Networking (energy consumption [W·s] vs. number of mobile devices [n])

Fig. 5. Time for Application 3 - Mobile Social Networking (time [s] vs. number of mobile devices [n])
correlated energy consumption, but it involves data exchange with the cloud server, spending time and energy on transmission, depending also on the load of the HetNet. We proposed a cost function to evaluate the percentage of the application to offload for time and energy optimization. The results show that, for applications requiring both high execution work and data exchange, a particular value of this percentage, depending on the number of devices, optimizes the performance.
[1] L. Lei, Z. Zhong, K. Zheng, J. Chen, and H. Meng, “Challenges on
wireless heterogeneous networks for mobile cloud computing,” IEEE
Wireless Commun. Mag., vol. 20, no. 3, pp. 34–44, Jun. 2013.
[2] N. Bhas, “Data offload connecting intelligently,” White paper, Juniper
Research, Apr. 2013.
[3] L. Jiao, R. Friedman, X. Fu, S. Secci, Z. Smoreda, and H. Tschofenig,
“Cloud-based computation offloading for mobile devices: State of the
art, challenges and opportunities,” in Proc. of FutureNetworkSummit
2013, Lisboa, Portugal, Jul. 2013.
[4] H. T. Dinh, C. Lee, D. Niyato, and P. Wang, “A survey of mobile
cloud computing: architecture, applications, and approaches,” Wireless
Communications and Mobile Computing, vol. 13, no. 18, pp. 1587–1611,
Dec. 2013.
[5] R. Fantacci, M. Vanneschi, C. Bertolli, G. Mencagli, and D. Tarchi,
“Next generation grids and wireless communication networks: towards
a novel integrated approach,” Wireless Communications and Mobile
Computing, vol. 9, no. 4, pp. 445–467, Apr. 2009.
[6] E. Cuervo, A. Balasubramanian, D.-k. Cho, A. Wolman, S. Saroiu,
R. Chandra, and P. Bahl, “Maui: Making smartphones last longer with
code offload,” in Proc. of MobiSys ’10, San Francisco, CA, USA, Jun.
2010, pp. 49–62.
[7] B.-G. Chun, S. Ihm, P. Maniatis, M. Naik, and A. Patti, “CloneCloud:
Elastic execution between mobile device and cloud,” in Proc. of EuroSys
’11, Salzburg, Austria, Apr. 2011, pp. 301–314.
[8] X. Ma, Y. Zhao, L. Zhang, H. Wang, and L. Peng, “When mobile
terminals meet the cloud: computation offloading as the bridge,” IEEE
Netw., vol. 27, no. 5, pp. 28–33, Sep./Oct. 2013.
[9] M. Barbera, S. Kosta, A. Mei, and J. Stefa, “To offload or not to offload?
The bandwidth and energy costs of mobile cloud computing,” in Proc.
of IEEE INFOCOM 2013, Turin, Italy, Apr. 2013, pp. 1285–1293.
[10] R. Murmuria, J. Medsger, A. Stavrou, and J. Voas, “Mobile application
and device power usage measurements,” in Proc. of IEEE SERE 2012,
Jun. 2012, pp. 147–156.
[11] Y.-D. Lin, E.-H. Chu, Y.-C. Lai, and T.-J. Huang, “Time-and-energy-
aware computation offloading in handheld devices to coprocessors
and clouds,” IEEE Syst. J., 2013, in press.
[12] K. Kumar and Y.-H. Lu, “Cloud computing for mobile users: Can
offloading computation save energy?” IEEE Computer, vol. 43, no. 4,
pp. 51–56, Apr. 2010.
[13] H. Wu, Q. Wang, and K. Wolter, “Tradeoff between performance
improvement and energy saving in mobile cloud offloading systems,”
in Proc. of IEEE ICC 2013 Workshops, Budapest, Hungary, Jun. 2013,
pp. 728–732.
[14] S. Barbarossa, S. Sardellitti, and P. Di Lorenzo, “Computation offloading
for mobile cloud computing based on wide cross-layer optimization,” in
Proc. of FutureNetworkSummit 2013, Lisboa, Portugal, Jul. 2013.
[15] R. Yu, Y. Zhang, S. Gjessing, W. Xia, and K. Yang, “Toward cloud-
based vehicular networks with efficient resource management,” IEEE
Netw., vol. 27, no. 5, pp. 48–55, Sep. 2013.
... The VEC offloading latency comprises of three-parts: the latency to transmit the data to its nearest VEC server, ready time of task on the VEC server, and the execution time on the VEC server. Regarding the delay in transmitting the result back, we tend to neglect it following the footsteps of given references [31,33]. The latency for transmitting the data to the VEC server can be given by, ...
... When the VEC server finishes the computing, the output results will be sent back to the vehicle V n . We neglect the transmission time from the VEC server to V n , since the amount of data as compared to input is very little [33]. The cost is evaluated by the utilization of the processor. ...
... In addition, the radius of V2V communication C limit is set to 150m [28]. Similarly, the White Gaussian noise power N0 = 3 × 10 −13 , V2I and V2V communication bandwidth B V 2I = B V 2V = 1MHz, the V2I path loss exponent σ = 2, and the transmit power of onboard unit P t = 1.3W [33]. As the qualified vehicle (RRV) acts as a mini server for the requested vehicle (RHV). ...
Full-text available
Abstract Vehicular edge computing (VEC) is a promising paradigm to offload resource-intensive tasks at the network edge. Owing to time-sensitive and computation-intensive vehicular applications and high mobility scenarios, cost-efficient task offloading in the vehicular environment is still a challenging problem. In this paper, we study the partial task offloading problem in vehicular edge computing in an urban scenario. Where the vehicle computes some part of a task locally, and offload the remaining task to a nearby vehicle and to VEC server subject to the maximum tolerable delay and vehicle’s stay time. To make it cost-efficient, including the cost of the required communication and computing resources, we consider to fully exploit the vehicular available resources. We estimate the transmission rates for the vehicle to vehicle and vehicle to infrastructure communication based on practical assumptions. Moreover, we present a mobility-aware partial task offloading algorithm, taking into account the task allocation ratio among the three parts given by the communication environment conditions. Simulation results validate the efficient performance of the proposed scheme that not only enhances the exploitation of vehicular computation resources but also minimizes the overall system cost in comparison to baseline schemes.
... For the sake of simplicity, these cells along roads are marked and represented by a set U = U 1 , U 2 , ⋯ , U i , ⋯ , where U i represents the i cell. The communication range of cells discussed in the scenario in this paper is relatively small, and its coverage is about 50 to 100 m or less (Mazza et al. 2014). It is precisely because the cell area is small that a large task offload may need to pass through multiple cells. ...
Full-text available
Mobile edge computing has been deeply integrated with internet of vehicles (IoV) due to its efficient computing capabilities close to devices. However, the inefficiency of storage and computing capabilities for vehicle terminals is in conflict with the diversification of network application services, which poses a huge challenge to the high-performance computing in IoV. In response to the high-performance computing requirements, a mobile edge computing task distribution and offloading algorithm based on deep reinforcement learning is proposed in order to solve low terminal storage capacity and diversified network service problems. Firstly, taking the energy consumption and transmission bandwidth of vehicle terminals as constraints, this paper establishes a task offloading and resource allocation model based on mathematical model using in-vehicle communication network. Besides, the model takes the maximum task processing rate as the objective function. Secondly, the AHP-DQN framework is used to solve the model, and the optimization variables are allocated according to the real-time state of the network to ensure the better performance of the task allocation algorithm in the multi-user scenario of IoV. Finally, simulation experiments show that the proposed algorithm can effectively realize the effective distribution of computing tasks in IoV.
... However, a particular mobile device is at the centre which initiates the computation and various other mobile devices are well connected in a network to perform the execution of a task that is logically split amongst them. In case the computation task is very huge and cannot be taken care by a network of SMD's, then the offloading approach is followed, wherein the task is executed completely in the cloud and only the final results are sent back to the mobile device for the user [21]. WSNs are very important in today's world as they are used widely to monitor various parameters in the environment and based on these observation values corrective action can be taken for the betterment of the society. ...
Full-text available
Abstract In today's fast advancing world, sensors are used in various applications to provide complete data about different objects present around us. The sensor data when integrated with Mobile Cloud Computing (MCC) is used for further computation to provide useful results advanced warning, forecasting, planning, and plethora of other applications in this uncertain world. Few works have been proposed where sensor nodes are integrated with Wireless Sensor Networks (WSNs) which give a new direction in modern research for developing new technologies which help the users for fast access of sensory data over mobile devices. Herein, a systematic literature review of both MCC and WSNs are conducted and subsequently assimilated them via an architecture which is efficient and achievable within short time‐span for a variety of applications. The MCC applications are reviewed from two main perspectives that are energy and information management. The communication aspect in MCC and WSNs has been discussed and the need for integration between the two has been justified in this work. The work done on mobility management in MCC and WSNs is reviewed. Also, the challenges in MCC‐WSNs integration along with the comprehensive analysis and findings of the review are identified. Finally, the conclusion and specific future direction of research in this area are provided.
... In [23], Kiani and Ansari proposed a task scheduling scheme designed for code partitioning over time and the hierarchical cloudlets in a mobile edge network. Similar work includes [24], which proposed a partial offloading technique for wireless mobile cloud computing. In [7], Wang et al. also divided the whole task into several small task units, taking into account the divisibility of task, and proposed a dynamic offloading in MEC-enabled vehicular networks, which is similar to our work. ...
Full-text available
Taking the mobile edge computing paradigm as an effective supplement to the vehicular networks can enable vehicles to obtain network resources and computing capability nearby, and meet the current large-scale increase in vehicular service requirements. However, the congestion of wireless networks and insufficient computing resources of edge servers caused by the strong mobility of vehicles and the offloading of a large number of tasks make it difficult to provide users with good quality of service. In existing work, the influence of network access point selection on task execution latency was often not considered. In this paper, a pre-allocation algorithm for vehicle tasks is proposed to solve the problem of service interruption caused by vehicle movement and the limited edge coverage. Then, a system model is utilized to comprehensively consider the vehicle movement characteristics, access point resource utilization, and edge server workloads, so as to characterize the overall latency of vehicle task offloading execution. Furthermore, an adaptive task offloading strategy for automatic and efficient network selection, task offloading decisions in vehicular edge computing is implemented. Experimental results show that the proposed method significantly improves the overall task execution performance and reduces the time overhead of task offloading.
... The average number of CPU cycles consumed by each vehicle to compute its own task is λ1 = 45 cycles, λ2 = 60 cycles, λ3 = 100 cycles, λ4 = 20 cycles, and λ5 = 80 cycles, respectively. The data transmission power of all PMs is p_tr = 0.1 W [39,40], the computation power is p_c = 0.5 W [41], and the idle power is p_i = 0.001 W. The distance between adjacent vehicles is the same. Each simulation time interval is set to 0.1 s, and the total simulation time is 30 s. ...
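The power figures quoted above (p_tr = 0.1 W, p_c = 0.5 W, p_i = 0.001 W) imply a standard energy model: compute energy is power times CPU time locally, while offloading costs transmit energy plus idle energy while waiting. A minimal sketch follows; the CPU frequency, uplink rate, and the 10x server speedup are illustrative assumptions, not values from the paper.

```python
# Sketch of the per-task energy model implied by the quoted parameters.
# f (local CPU Hz), r (uplink bit/s) and the 10x remote speedup are assumed.

def task_energy(cycles, data_bits, offload, f=1e9, r=1e6,
                p_tr=0.1, p_c=0.5, p_i=0.001):
    """Energy (J) a vehicle spends on one task.

    Local:    CPU runs for cycles / f seconds at p_c.
    Offload:  radio transmits for data_bits / r seconds at p_tr,
              then the vehicle idles at p_i while the server computes.
    """
    if not offload:
        return (cycles / f) * p_c
    t_tx = data_bits / r            # upload time
    t_wait = cycles / (10 * f)      # assumed: remote server 10x faster
    return t_tx * p_tr + t_wait * p_i

e_local = task_energy(45e6, 0, offload=False)     # scaled-up first task type
e_off = task_energy(45e6, 1e5, offload=True)
```

Under these assumed numbers offloading is cheaper, mainly because the idle power is three orders of magnitude below the computation power.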
Due to the limited computation resources of a vehicle terminal, it is impossible to meet the demands of some applications and services, especially computation-intensive ones, which not only imposes a computation burden and delay but also consumes more energy. Mobile edge computing (MEC) is an emerging architecture in which computation and storage services are extended to the edge of the network; it is an advanced technology for supporting multiple applications and services that require ultra-low latency. In this paper, a task offloading approach for MEC-assisted vehicle platooning is proposed, where the Lyapunov optimization algorithm is employed to solve the optimization problem under the condition that the task queues remain stable. The proposed approach dynamically adjusts the offloading decision for each task according to the data parameters of the current task, judging whether it is executed locally, at another platoon member, or at an MEC server. The simulation results show that the proposed algorithm can effectively reduce the energy consumption of task execution and greatly improve offloading efficiency compared with a shortest-queue-waiting-time algorithm and a full-offloading-to-MEC algorithm.
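The Lyapunov-based decision described above can be sketched with the standard drift-plus-penalty rule: each slot, pick the execution target minimizing V times energy plus queue backlog times delay. The per-target costs and the weight V below are illustrative placeholders, not the paper's model.

```python
# Minimal drift-plus-penalty sketch in the spirit of Lyapunov optimization:
# per slot, choose the target minimizing V * energy + Q * delay, where Q is
# the current task-queue backlog. Cost numbers are illustrative assumptions.

def choose_target(Q, V, costs):
    """costs: dict target -> (energy_J, delay_s); returns cost-minimizing target."""
    return min(costs, key=lambda t: V * costs[t][0] + Q * costs[t][1])

costs = {
    "local":   (0.05, 0.9),   # no radio energy, but slow on-board CPU
    "platoon": (0.12, 0.5),   # short-range link to another platoon member
    "mec":     (0.20, 0.2),   # fastest, but highest transmit energy
}
# Empty queue, large V: energy dominates, so stay local.
# Long queue, small V: backlog dominates, so offload to the MEC server.
a = choose_target(Q=0, V=10, costs=costs)
b = choose_target(Q=20, V=1, costs=costs)
```

The dynamic flavor of the paper's approach comes exactly from this dependence on the backlog Q: the same task may be kept local when queues are empty and offloaded when they build up.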
... The waiting time for the task to be processed, plus the computing time at the F-AP, corresponds to the FN idle time while the FN waits for the result. The concept behind partial offloading is to delegate only a portion of the computational load to another device in order to optimize energy and time [13]. We define α_l as the portion of the l-th task that is offloaded. ...
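The fraction α_l above admits a simple closed form under the (assumed, simplifying) hypothesis that the local and offloaded portions execute in parallel: the makespan max((1-α)·T_l, α·T_o) is minimized where the two sides are equal, giving α* = T_l / (T_l + T_o). A small sketch:

```python
# Sketch of choosing the offloaded fraction alpha of a divisible task,
# assuming local and offloaded parts run in parallel (a simplification;
# the paper's F-AP/outage model is richer). T_l = full local execution
# time, T_o = full upload-plus-remote execution time.

def best_fraction(T_l, T_o):
    """alpha* that equalizes the two branches of the makespan."""
    return T_l / (T_l + T_o)

def makespan(alpha, T_l, T_o):
    return max((1 - alpha) * T_l, alpha * T_o)

T_l, T_o = 4.0, 1.0            # the remote path is 4x faster here
a = best_fraction(T_l, T_o)    # offload 80% of the task
m = makespan(a, T_l, T_o)      # balanced makespan of 0.8 s
```

Note that the balanced split beats both extremes (all-local and all-remote), which is the core argument for partial rather than binary offloading.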
Conference Paper
Edge Computing refers to a recently introduced approach aiming to bring the storage and computational capabilities of the cloud to the proximity of the edge devices. Edge Computing is one of the main techniques enabling Fog Computing and Networking. Among several application scenarios, the urban scenario seems one of the most attractive for exploiting edge computing approaches. However, in an urban scenario, mobility becomes a challenge to be addressed, affecting edge computing. By exploiting the presence of two types of devices, Fog Nodes (FNs) and Fog Access Points (F-APs), the idea in this paper is to use Device-to-Device (D2D) communications between FNs to assist computation offloading requests between FNs and F-APs by exchanging status information related to the F-APs. With this knowledge, this paper proposes a partial offloading approach in which the optimal amount of tasks to be offloaded is estimated so as to minimize the outage probability due to the mobility of the devices. To further reduce the outage probability, we have also considered a relaying approach among F-APs. Moreover, the impact of the number of tasks that each F-AP can manage is shown in terms of task processing delay. Numerical results show that the proposed approaches achieve performance close to the lower bound, reducing both the outage probability and the task processing delay.
Conference Paper
Future vehicular applications require sufficient computing and storage resources. Actual on-board resources are limited and inadequate to deal with steadily increasing requirements. In order to overcome the computational limitations in vehicles, computing-intensive functions can be offloaded to a cloud. These so-called cloud-based vehicle functions run in the cloud and utilize cloud capability instead of the vehicle's on-board resources. The suitability of vehicle functions for outsourcing to the cloud should be ensured in an early stage of development; otherwise, shifting improper functions can lead to serious consequences for the vehicle control system. In the suitability analysis, several criteria such as functional safety, data dependency, and response time should be considered. In this paper, we focus on the response-time criterion and present a framework to predict the response time along a sample route.
Conference Paper
The cloud seems to be an excellent companion of mobile systems, able to alleviate battery consumption on smartphones and to back up users' data on-the-fly. Indeed, many recent works focus on frameworks that enable mobile computation offloading to software clones of smartphones on the cloud, and on designing cloud-based backup systems for the data stored in our devices. Both mobile computation offloading and data backup involve communication between the real devices and the cloud. This communication certainly does not come for free: it costs in terms of bandwidth (the traffic overhead of communicating with the cloud) and in terms of energy (computation and use of network interfaces on the device). In this work we study the feasibility of both mobile computation offloading and mobile software/data backups in real-life scenarios. In our study we assume an architecture where each real device is associated with a software clone on the cloud. We consider two types of clones: the off-clone, whose purpose is to support computation offloading, and the back-clone, which comes into use when a restore of a user's data and apps is needed. We give a precise evaluation of the feasibility and costs of both off-clones and back-clones in terms of bandwidth and energy consumption on the real device. We achieve this through measurements done on a real testbed of 11 Android smartphones and an equal number of software clones running on the Amazon EC2 public cloud. The smartphones were used as the primary mobile devices of the participants for the whole experiment duration.
Conference Paper
The aim of this paper is to propose a computation offloading strategy to be used in mobile cloud computing in order to minimize the energy expenditure at the mobile handset necessary to run an application under a delay constraint. The main novelty of the proposed strategy is a wide cross-layer optimization that encompasses the application, MAC, and physical layers within a joint framework. We consider a wireless channel affected by fading, with statistics depending on the number of antennas, and we incorporate packet retransmission strategies. The result of the optimization is the joint dynamic allocation of radio resources and offload scheduling that guarantees the stability of the queue of instructions to be executed, in order to minimize the energy consumption at the mobile handset under a constraint on the average delay. We provide theoretical results proving the existence of an optimal solution of the problem, and we then corroborate the theoretical findings with simulation results. The results show for which classes of applications, and under what kinds of channel conditions, computation offloading can provide a significant performance gain.
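A toy version of the energy/delay tradeoff underlying such cross-layer schemes can be shown with a Shannon-type power cost: the power needed to sustain rate r grows exponentially in r, so the transmit energy for a fixed payload grows with rate, and the energy-optimal choice under a deadline is the slowest admissible rate. The bandwidth and noise figures below are illustrative assumptions, and this sketch ignores fading and retransmissions, which the paper's model includes.

```python
# Sketch of the energy/delay tradeoff: with power cost
# p(r) = N0 * B * (2**(r/B) - 1) (inverted Shannon capacity formula),
# the energy to send D bits at rate r is E(r) = (D/r) * p(r), which is
# increasing in r. Under a delay budget T the optimum is r = D/T.
# B and N0 are illustrative values, not the paper's parameters.

def tx_energy(D, r, B=1e6, N0=1e-9):
    p = N0 * B * (2 ** (r / B) - 1)   # power needed to sustain rate r
    return (D / r) * p

def min_energy_rate(D, T):
    return D / T                      # transmit as slowly as the deadline allows

D, T = 2e6, 4.0                       # 2 Mbit payload, 4 s delay budget
r_star = min_energy_rate(D, T)        # 0.5 Mbit/s
e_slow = tx_energy(D, r_star)
e_fast = tx_energy(D, 2 * r_star)     # finishing early wastes energy
```

This "transmit as lazily as the delay constraint allows" behavior is the basic intuition that richer stochastic formulations like the paper's build on.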
Mobile cloud computing (MCC) is an appealing paradigm enabling users to enjoy vast computation power and abundant network services ubiquitously with the support of a remote cloud. However, wireless networks and mobile devices face many challenges due to limited radio resources, battery power, and communications capabilities, which may significantly impede the improvement of service quality. A Heterogeneous Network (HetNet), which has multiple types of low-power radio access nodes in addition to the traditional macrocell nodes of a wireless network, is widely accepted as a promising way to satisfy the unrelenting traffic demand. In this article, we first introduce the framework of HetNets for MCC, identifying the main functional blocks. Then, the current state-of-the-art techniques for each functional block are briefly surveyed, and the challenges of supporting MCC applications in HetNets under our proposed framework are discussed. We also envision the future of MCC in HetNets before drawing conclusions.
In the era of the Internet of Things, all components in intelligent transportation systems will be connected to improve transport safety, relieve traffic congestion, reduce air pollution, and enhance the comfort of driving. The vision of all vehicles being connected poses a significant challenge to the collection and storage of large amounts of traffic-related data. In this article, we propose to integrate cloud computing into vehicular networks so that vehicles can share computation resources, storage resources, and bandwidth resources. The proposed architecture includes a vehicular cloud, a roadside cloud, and a central cloud. We then study cloud resource allocation and virtual machine migration for effective resource management in this cloud-based vehicular network. A game-theoretical approach is presented to optimally allocate cloud resources. Virtual machine migration due to vehicle mobility is handled based on a resource reservation scheme.
Conference Paper
Cloud computation offloading is a promising method in which heavy computation is sent to resourceful servers in the cloud and the results are then received from them. In this paper, we study offloading techniques and further explore the tradeoff between shortening execution time and extending the battery life of mobile devices. A novel adaptive offloading scheme is proposed and analyzed based on this tradeoff analysis, and it can be realized thanks to the elasticity of cloud computing, where resources can be bought on demand. We seek a cloud server with a critical value of the speedup F for a specified mobile device: as long as the system's requirements, such as performance improvement, are satisfied, it is worth sacrificing a large F once economic factors are taken into consideration.
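The critical speedup F in such analyses has a well-known closed form: for a task of C instructions and D bytes of transfer data, offloading shortens execution time when D/B + C/(F·s) < C/s, i.e. F > 1 / (1 - (D/B)/(C/s)), provided the transfer alone takes less than local execution. The sketch below follows this common model, not necessarily the paper's exact notation.

```python
# Sketch of the critical speedup F for time-reducing offloading:
# local time C/s must exceed transfer time D/B plus remote time C/(F*s).
# Solving for F gives the break-even speedup. Symbols: C instructions,
# s local instr/s, D bytes of data, B bytes/s link bandwidth.

def critical_speedup(C, s, D, B):
    t_local, t_tx = C / s, D / B
    if t_tx >= t_local:
        return float("inf")   # transfer alone already too slow: never offload
    return 1.0 / (1.0 - t_tx / t_local)

# t_local = 10 s, t_tx = 1 s -> any server faster than 10/9 x already helps
F = critical_speedup(C=1e9, s=1e8, D=1e6, B=1e6)
```

The paper's economic argument then reads naturally in this model: once F comfortably exceeds the critical value, paying for an even faster (larger-F) server buys little extra time and may not be worth the cost.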
Conference Paper
Mobile cloud computing is a new, rapidly growing field. In addition to the conventional fashion in which mobile clients access cloud services, as in the well-known client/server model, existing work has proposed to explore cloud functionalities from another perspective: offloading part of the mobile code to the cloud for remote execution, in order to optimize the application performance and energy efficiency of the mobile device. In this position paper, we investigate the state of the art of code offloading for mobile devices, highlight the significant challenges towards a more efficient cloud-based offloading framework, and also point out how existing technologies provide opportunities to facilitate the framework's implementation.
Cloud computing heralds a new era of computing where application services are provided through the Internet. Cloud computing can enhance the computing capability of mobile systems. Is cloud computing the ultimate solution for extending battery lifetimes of mobile systems?
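The battery-lifetime question this article poses is usually answered with an energy break-even analysis: offloading C instructions saves energy when the energy of computing locally exceeds the energy of idling during remote execution plus the energy of moving D bytes over the network. The sketch below uses this widely cited style of model with illustrative parameter values; it is not a reproduction of the article's own numbers.

```python
# Hedged sketch of the energy break-even condition for offloading:
# E_local = p_comp * C/M           (run C instructions at M instr/s)
# E_cloud = p_idle * C/(F*M)       (wait while an F-times-faster server runs it)
#         + p_tr * D/B             (ship D bytes over a B bytes/s link)
# Offloading saves energy when E_local - E_cloud > 0.

def energy_saving(C, M, F, D, B, p_comp, p_idle, p_tr):
    e_local = p_comp * (C / M)
    e_cloud = p_idle * (C / (F * M)) + p_tr * (D / B)
    return e_local - e_cloud        # > 0 -> offloading extends battery life

# Big computation, small data: offloading wins.
win = energy_saving(1e10, 1e8, 10, 1e5, 1e6, 0.9, 0.3, 1.3)
# Tiny computation, big data: it does not.
lose = energy_saving(1e7, 1e8, 10, 1e7, 1e6, 0.9, 0.3, 1.3)
```

The qualitative answer to the article's title question falls out of the model: the cloud helps battery life for computation-heavy, data-light workloads, and hurts it in the opposite regime.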
Running sophisticated software on smartphones can result in poor performance and shortened battery lifetime because of their limited resources. Recently, offloading computation workload to the cloud has become a promising solution for enhancing both the performance and battery life of smartphones. However, uploading data or programs to the cloud and retrieving the results also consumes both time and energy. In this paper, we develop an offloading framework, named Ternary Decision Maker (TDM), which aims to shorten response time and reduce energy consumption at the same time. Unlike previous works, our targets of execution include an on-board CPU, an on-board GPU, and a cloud, which combined provide a more flexible execution environment for mobile applications. We evaluated the performance of TDM on a real-world application, i.e., matrix multiplication. According to our experimental results, TDM has a lower false offloading decision rate than existing methods. In addition, by offloading modules, our method can achieve at most 75% savings in execution time and 56% in battery usage.
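A ternary decision of this kind can be sketched as a minimum over three execution targets of a weighted time-plus-energy cost. The weight and the per-target estimates below are illustrative assumptions, not TDM's actual decision logic or measurements.

```python
# Sketch of a three-way (CPU / GPU / cloud) offloading decision in the
# spirit of a ternary decision maker: score each target by a weighted sum
# of estimated response time and device energy, pick the minimum.

def ternary_decision(estimates, w=0.5):
    """estimates: dict target -> (time_s, energy_J); w trades time vs energy."""
    return min(estimates,
               key=lambda t: w * estimates[t][0] + (1 - w) * estimates[t][1])

# e.g. a large matrix multiplication: the GPU is fast but power-hungry,
# the cloud adds transfer time but costs the device almost no energy.
estimates = {
    "cpu":   (8.0, 6.0),
    "gpu":   (1.5, 3.0),
    "cloud": (3.0, 0.4),
}
fast = ternary_decision(estimates, w=0.9)   # latency-critical -> GPU
lean = ternary_decision(estimates, w=0.1)   # battery-critical -> cloud
```

Having three targets rather than two is what gives the extra flexibility the abstract describes: the GPU can absorb latency-critical work that would otherwise be wrongly offloaded.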
Conference Paper
Reducing power consumption has become a crucial design tenet for mobile and other small computing devices that are not constantly connected to a power source. However, unlike devices that have a limited and predefined set of functionality, recent smartphones have a very rich set of components and can handle multiple general-purpose programs that are not known or profiled a priori. In this paper, we present a general methodology for collecting measurements and modelling power usage on smartphones. Our goal is to characterize the device subsystems and perform accurate power measurements. We implemented a system that effectively accounts for the power usage of all of the primary hardware subsystems on the phone: CPU, display, graphics, GPS, audio, microphone, and Wi-Fi. To achieve this, we make use of the per-subsystem time shares reported by the operating system's power-management module. We present the model's capability to further calculate the power consumption of individual applications from these measurements, as well as the feasibility of operating our model in real time without a significant impact on the power footprint of the devices we monitor.
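The per-subsystem accounting described above amounts to a weighted sum: total power is each subsystem's nominal draw weighted by the active-time share the OS power-management module reports. A minimal sketch, with made-up nominal power values (the paper derives these from measurements):

```python
# Sketch of time-share-based power modelling: P_total = sum over
# subsystems of (nominal active power) * (fraction of interval active).
# The milliwatt figures are illustrative, not measured values.

SUBSYSTEM_POWER_MW = {
    "cpu": 300, "display": 400, "gps": 350, "wifi": 250, "audio": 80,
}

def device_power_mw(time_shares):
    """time_shares: dict subsystem -> fraction of the interval it was active."""
    return sum(SUBSYSTEM_POWER_MW[s] * share for s, share in time_shares.items())

# A navigation-style workload: screen always on, GPS and CPU busy half
# the time, occasional Wi-Fi activity.
p = device_power_mw({"display": 1.0, "gps": 0.5, "cpu": 0.5, "wifi": 0.1})
```

Per-application power then follows by feeding in the time shares attributed to one application instead of the whole device, which is how such models extend from device-level to app-level accounting.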
The emergence of cloud computing has been dramatically changing the landscape of services for modern computer applications. Offloading computation to the cloud effectively expands the usability of mobile terminals beyond their physical limits, and also greatly extends their battery charging intervals through potential energy savings. In this article, we present an overview of computation offloading in mobile cloud computing. We identify the key issues in developing new applications that effectively leverage cloud resources for computation-intensive modules, or in migrating such modules in existing applications to the mobile cloud. We then analyze two representative applications in detail from both the macro and micro perspectives, cloud-assisted distributed interactive mobile applications and cloud-assisted motion estimation for mobile video compression, to illustrate the unique challenges, benefits, and implementation of computation offloading in mobile cloud computing. We finally summarize the lessons learned and present potential future avenues.