International Journal of Modern Research in Electrical and Electronic Engineering
Vol. 1, No. 1, 47-52, 2017
http://www.asianonlinejournals.com/index.php/IJMREER
An Approach to Improve the Performance of Mobile
Computation Technology Using Data Offloading
Md. Ashraful Islam1
Md. Mostofa Kamal Tareq2
Ishrat Khan Mohona3
Ifthekhar Ahammad4
Mohammad Shamim Kaiser5 (Corresponding Author)
1,4Leading University, Sylhet, Bangladesh
2Incepta Vaccine Ltd, Bangladesh
3University of Dhaka, Bangladesh
5Jahangirnagar University, Bangladesh
Abstract
Modern mobile phones have powerful processing units that can perform multiple operations
simultaneously, but the main constraints are the processing power and the energy required to drive them. Cloud
computing has come as a blessing for mobile computing, as it offers vast resources, high storage
capacity and high processing power. In Mobile Cloud Computing (MCC), computation is offloaded to
the cloud; the cloud processes the data, generates the information and sends it back to the mobile device.
The probability of offloading depends on the line bandwidth, line length, size of the data, processing speed of
the mobile device, processing speed of the cloud, and the storage and energy capacity of the mobile device. This paper presents a
model for an efficient data offloading decision based on the total execution time of a task when the
data is computed on the mobile device and when it is offloaded to the cloud. An algorithm for the data
offloading decision is proposed, and an equation is also proposed to observe the
probability of the offloading process.
Keywords: Cloud computing, Offloading, Data computation, Bandwidth, Processing speed.
Contents
1. Introduction
2. Offloading
3. Methodology
4. Result and Discussion
5. Conclusion
References
Citation | Md. Ashraful Islam; Md. Mostofa Kamal Tareq; Ishrat Khan Mohona; Ifthekhar Ahammad; Mohammad Shamim Kaiser (2017). An Approach to Improve the Performance of Mobile Computation Technology Using Data Offloading. International Journal of Modern Research in Electrical and Electronic Engineering, 1(1): 47-52.
DOI: 10.20448/journal.526.2017.11.47.52
Licensed: This work is licensed under a Creative Commons Attribution 3.0 License.
Contribution/Acknowledgement: All authors contributed to the conception and design of the study.
Funding: This study received no specific financial support.
Competing Interests: The authors declare that they have no conflict of interests.
Transparency: The authors confirm that the manuscript is an honest, accurate, and transparent account of the study being reported; that no vital features of the study have been omitted; and that any discrepancies from the study as planned have been explained.
History: Received: 7 March 2017 / Revised: 28 March 2017 / Accepted: 31 March 2017 / Published: 5 April 2017
Ethical: This study follows all ethical practices during writing.
Publisher: Asian Online Journal Publishing Group
1. Introduction
Cloud computing, as a whole, refers to the sharing of applications, platforms and infrastructure. Remote servers are
connected to form a broad network that shares software, hardware and storage [1]. Users with access to this network can
use these resources from anywhere, at any time. This provides every user with cost effectiveness,
mobility, high computation power and so on. In cloud-based computation, end users have different devices, e.g. smartphones,
computers, tablets, video game consoles, smart TVs, printers and security cameras, but all of them can utilize the same
centrally arranged services, such as data processing and storage. End users offload data to the cloud, where the data is
analyzed according to the user's requirements, and the resulting data is sent back to the sender. As the amount of
computational data increases, the probability of using cloud computation increases. This probability also depends on
the bandwidth of the communication line, the end user's processing speed and the cloud's processing speed. With the advancement of
mobile applications, processing large amounts of data in a short time has become a major concern. Large-scale
computation demands complex hardware, large storage, reliable software and substantial power sources,
which make a mobile device costly and inconvenient. Hardware-free computation is therefore the most important factor
that makes cloud computing unique [2].
Data offloading is a basic and important element of cloud computing, and a considerable amount of research has been done on it in
the recent past; offloading policy in particular has been studied extensively. Cardellini et al. [3] consider a three-tier
architecture and propose a system model that captures the users' interaction with
external servers and investigates the effects of computation offloading. A data offloading algorithm is
proposed by Hatami et al. [4] to assign offloaded mobile stations to access points, based on mapping the
problem to a perfect matching problem. In another study [5], a technique is introduced to automatically
generate accurate and efficient method-wise performance predictors for mobile applications in order to enhance the
performance of offloading. Liu et al. [6] consider collaborative task execution between mobile devices and the
cloud and provide a scheduling algorithm to solve the resulting optimization problem. Both
Kumar et al. [7] and Xia et al. [8] investigate the offloading policy, i.e. whether a data/task should
be offloaded to the cloud or processed on the mobile phone, in terms of computation execution time and energy
consumption.
In this paper, we analyze the data computation time in two scenarios. In the first scenario, the data is
processed on the mobile device; in the second scenario, the data is offloaded to the cloud, processed in the cloud,
and the processed result is sent back to the mobile device. We observed that previous studies barely consider the
propagation (path) delay involved in offloading, so in this paper we include the path delay in the cloud computing
process. An algorithm is proposed to determine whether the data needs to be offloaded, in terms of data
execution time. The offloading probability depends heavily on the total size of the data to be computed and the offered
bandwidth, and in the latter part of this paper an equation is also proposed to observe how this probability changes
with respect to data size and bandwidth.
2. Offloading
With the progressive advancement of mobile technology, computer users are extending their workstations
from desktops to a wider range of mobile devices, such as mobile phones, surveillance systems and environmental sensors.
However, mobile devices are battery powered and have limited memory and slower processors, and the low bandwidth of wireless
communication is one of the main challenges for data offloading. These limitations create a gap
between the demands of complex programs and the available resources [9].
By offloading computation (Figure 1), these limitations can be overcome. Offloading increases the
capabilities of a mobile system by shifting work, i.e. computation, to a more resourceful cloud or server [7].
Figure-1. Offloading
Since offloading shifts computation to a more resourceful server, decisions must be made about when to offload and
what computation to offload. These decisions have to be taken considering both the performance of the mobile
device and its energy efficiency.
3. Methodology
To analyze the scenario, we consider a mobile phone connected to a cloud server through several routers or access points
(the black dots in Figure 2). The bandwidth and length of the links connecting the routers are denoted by B
and D respectively.
Assuming,
Total data to be computed, W = 50 MB
Processing speed of the mobile device, Sm = 50 MB/s
Sending data, ds = 20 MB
Processing speed in the cloud, Sc = 200 MB/s
Receiving data, dr = 5 MB
Figure-2. Considered scenario for analyzing the offloading decision (mobile device connected to the cloud through different paths)
The candidate paths are:
1. D1-D2-D3-D4-D5-D6
2. D1-D11-D13-D14-D6
3. D1-D11-D12-D4-D5-D6
4. D1-D2-D7-D8-D9-D10-D6
And,
B1 = B5 = B10 = B14 = 500 MHz
B2 = B4 = B9 = B13 = 400 MHz
B3 = B6 = B8 = B11 = B12 = 450 MHz and
D1 = D6 = D11 = 10 km
D2 = D10 = D12 = D14 = 8 km
D3 = D5 = D13 = 15 km
D4 = D8 = D9 = 12 km
Here, the smallest bandwidth along the considered paths is B = 400 MHz.
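For illustration only (not part of the original methodology), the minimal Python sketch below encodes the link bandwidths and distances listed above and derives the total distance and bottleneck bandwidth of each candidate path. The dictionary and variable names are our own, and link 7 of the 4th path is marked as unspecified because its bandwidth and distance are not listed above.

# Sketch: per-path total distance and bottleneck bandwidth from the stated link values.
LINK_BANDWIDTH_MHZ = {
    1: 500, 5: 500, 10: 500, 14: 500,
    2: 400, 4: 400, 9: 400, 13: 400,
    3: 450, 6: 450, 8: 450, 11: 450, 12: 450,
    7: None,  # B7 is not specified in the text
}

LINK_DISTANCE_KM = {
    1: 10, 6: 10, 11: 10,
    2: 8, 10: 8, 12: 8, 14: 8,
    3: 15, 5: 15, 13: 15,
    4: 12, 8: 12, 9: 12,
    7: None,  # D7 is not specified in the text
}

PATHS = {
    1: [1, 2, 3, 4, 5, 6],
    2: [1, 11, 13, 14, 6],
    3: [1, 11, 12, 4, 5, 6],
    4: [1, 2, 7, 8, 9, 10, 6],
}

for name, links in PATHS.items():
    if any(LINK_DISTANCE_KM[l] is None or LINK_BANDWIDTH_MHZ[l] is None for l in links):
        print(f"Path {name}: incomplete data (link 7 unspecified)")
        continue
    total_km = sum(LINK_DISTANCE_KM[l] for l in links)
    min_bw = min(LINK_BANDWIDTH_MHZ[l] for l in links)
    print(f"Path {name}: total distance = {total_km} km, bottleneck bandwidth = {min_bw} MHz")

Running this gives total distances of 70 km, 53 km and 65 km for the 1st, 2nd and 3rd paths respectively, each with a 400 MHz bottleneck.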
4. Result and Discussion
A. Equations and Algorithm
To get the data computation time in a mobile device, only the processing time in that device needs to be computed.
The data computation time in the mobile device is [7]; [10]:

$T_{mobile} = \frac{W}{S_m}$   (1)
And to get the data computation time while offloading, we have to consider the queue delay, the path delay and the processing
time in the cloud [7]; [10].
Here,
Sending queue delay = $d_s / B$
Receiving queue delay = $d_r / B$
Path delay = $D / v$, where $v$ is the velocity of light and $D$ is the total distance of the respective path (for the 1st path, $D = D_1 + D_2 + D_3 + D_4 + D_5 + D_6$).
Processing time in cloud = $W / S_c$
So, the data computation time while offloading is

$T_{offload} = \frac{d_s}{B} + \frac{D}{v} + \frac{W}{S_c} + \frac{d_r}{B} + \frac{D}{v} = \frac{d_s + d_r}{B} + \frac{W}{S_c} + \frac{2D}{v}$   (2)
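As a quick numerical check, the following minimal Python sketch (our own illustration, not from the original work) evaluates equations (1) and (2) for the assumed parameters. It treats the quoted bandwidth values in MHz as effective transfer rates in MB/s, which is what the worked numbers that follow imply; the function names are our own.

# Sketch: evaluate equations (1) and (2) for the assumed scenario.
C_LIGHT_KM_S = 3e5  # propagation speed, km/s (approximate speed of light)

def t_mobile(w_mb, s_m_mb_s):
    """Equation (1): processing time when the task stays on the mobile device."""
    return w_mb / s_m_mb_s

def t_offload(w_mb, d_s_mb, d_r_mb, b_mb_s, s_c_mb_s, dist_km):
    """Equation (2): sending/receiving queue delay, round-trip path delay,
    and processing time in the cloud."""
    queue_delay = (d_s_mb + d_r_mb) / b_mb_s
    path_delay = 2 * dist_km / C_LIGHT_KM_S
    processing = w_mb / s_c_mb_s
    return queue_delay + path_delay + processing

print(t_mobile(50, 50))                    # 1.0 s
print(t_offload(50, 20, 5, 400, 200, 53))  # ~0.312853 s for the 2nd path (53 km)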
According to equation (1), the computation time in the mobile device for the considered scenario is $T_{mobile} = 50/50 = 1$ second.
To compute in the cloud, we have four different possible paths. Applying equation (2) with B = 400 MHz:
For the 1st path (D = 70 km), $T_{offload} \approx 0.3129667$ s
For the 2nd path (D = 53 km), $T_{offload} \approx 0.3128533$ s
For the 3rd path (D = 65 km), $T_{offload} \approx 0.3129333$ s
For the 4th path, $T_{offload}$ follows in the same way from its total link distance.
The 2nd path clearly has the shortest delay. This calculation shows that the overall data computation
time in the mobile device (1 s) is considerably longer than that of the 2nd offloading path (about 0.3129 s). For the considered scenario,
the total execution time of offloading and receiving the result back is much less than the computation time in the mobile device.
So, an algorithm has been proposed for the offloading decision (Figure 3).
Figure-3. Algorithm for offloading decision
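The decision algorithm itself is given in Figure 3 as an image. The short Python sketch below offers one plausible reading of that decision logic, assuming it simply compares equation (1) against the minimum of equation (2) over the candidate paths; the function name and path representation are our own, not the authors'.

# Sketch: offload only if the best offloading path beats local execution.
def should_offload(w_mb, d_s_mb, d_r_mb, s_m, s_c, paths):
    """paths: iterable of (bottleneck_bandwidth_mb_s, total_distance_km) tuples.
    Returns (offload_decision, index_of_best_path)."""
    c_km_s = 3e5
    t_local = w_mb / s_m                                             # equation (1)
    best_idx, best_t = None, float("inf")
    for i, (b, dist_km) in enumerate(paths, start=1):
        t = (d_s_mb + d_r_mb) / b + 2 * dist_km / c_km_s + w_mb / s_c  # equation (2)
        if t < best_t:
            best_idx, best_t = i, t
    return (best_t < t_local), best_idx

# Example with the three fully specified paths from Section 3:
decision, path = should_offload(50, 20, 5, 50, 200,
                                [(400, 70), (400, 53), (400, 65)])
print(decision, path)  # True, 2 -> offload via the 2nd path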
B. Graphical Representation:
Mobile computation:
Considering the data computation W = 50 MB and using equation (1), we get the plot in Figure 4.
Figure-4. Plot of computation time in mobile (sec) vs. processing speed in mobile (MB/sec)
Figure-4 shows that if the processing speed in mobile device increases, the data processing time decreases.
Cloud computation:
Considering ds = 20 MB, W = 50 MB, D = 53 km, Sc = 200 MB/s and dr = 5 MB, equation (2) gives the plot in
Figure 5, which shows the data computation time with respect to the line bandwidth.
Figure-5. Data computation time while offloading (sec) vs. Bandwidth (MHz)
Figure-5 shows that the higher the line bandwidth is, the less time it takes to compute the data. This figure
clearly shows why the line bandwidth is so important in cloud computing.
Considering ds = 20 MB, B = 400 MHz, W = 50 MB, D = 53 km and dr = 5 MB, equation (2) gives the plot in
Figure 6, which shows the relation between the data computation time and the processing speed in the cloud.
Figure-6. Data computation time while offloading (sec) vs. processing
speed in cloud (MB/sec)
We can understand from Figure-6 that the higher the processing speed of the cloud server, the less time it takes to
compute the data.
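To reproduce the general trends of Figures 4-6, the following sketch (ours, with assumed axis ranges) plots equations (1) and (2) using numpy and matplotlib under the stated parameter values; as above, bandwidths in MHz are treated as effective transfer rates in MB/s.

# Sketch: regenerate curves in the spirit of Figures 4-6 from equations (1) and (2).
import numpy as np
import matplotlib.pyplot as plt

W, DS, DR, D_KM, C = 50.0, 20.0, 5.0, 53.0, 3e5  # MB, MB, MB, km, km/s

def t_offload(b, s_c):
    return (DS + DR) / b + 2 * D_KM / C + W / s_c  # equation (2)

s_m = np.linspace(10, 200, 200)   # mobile processing speed, MB/s
b = np.linspace(50, 1000, 200)    # bottleneck bandwidth (treated as MB/s)
s_c = np.linspace(20, 1000, 200)  # cloud processing speed, MB/s

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].plot(s_m, W / s_m)                # Figure 4: local time vs mobile speed
axes[0].set(xlabel="S_m (MB/s)", ylabel="T_mobile (s)")
axes[1].plot(b, t_offload(b, 200.0))      # Figure 5: offload time vs bandwidth
axes[1].set(xlabel="B (MHz)", ylabel="T_offload (s)")
axes[2].plot(s_c, t_offload(400.0, s_c))  # Figure 6: offload time vs cloud speed
axes[2].set(xlabel="S_c (MB/s)", ylabel="T_offload (s)")
plt.tight_layout()
plt.show()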
C. Probability of offloading
Offloading probability (OP) depends on two parameters: "Data computation" and "Offered bandwidth".
Consider a maximum data computation of 100 GB and a maximum offered bandwidth of $10^6$ Hz [11]. For this scenario,
we have chosen not to offload until 10% of the maximum data is to be computed or 10% of the maximum bandwidth is offered.
The probability follows the product form $OP = P_W \times P_B$, where $P_W$ is the probability component due to the amount of
data to be computed and $P_B$ is the component due to the offered bandwidth; each component is zero below 10% of its
respective maximum and increases toward 1 as its parameter approaches that maximum.
Figure-7. Probability of offloading with respect to Data
computation and bandwidth
OP increases exponentially with the amount of data to be computed and the offered bandwidth. The proposed equation
shows that if the amount of data to be computed is large and the offered bandwidth is high, then the probability of offloading is
also high. The 3D plot (Figure-7) covers every possible scenario as the product of two probabilities: the probability due
to the data computation and the probability due to the offered bandwidth.
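The exact closed form of OP is not reproduced here, so the following Python sketch is only one possible concrete realization of the described model. The threshold behaviour (no offloading below 10% of either maximum) and the product structure follow the description above, while the exponential form, the growth constant K and the function names are our own assumptions.

# Sketch: one assumed exponential realization of OP = P_W * P_B.
import math

MAX_DATA_GB = 100.0
MAX_BW_HZ = 1e6
THRESHOLD = 0.10  # no offloading below 10% of either maximum
K = 3.0           # assumed growth-rate constant (not from the paper)

def component(x, x_max, k=K, thr=THRESHOLD):
    """0 below thr*x_max, then an exponential rise to 1 at x_max."""
    if x < thr * x_max:
        return 0.0
    r = x / x_max
    return (math.exp(k * r) - math.exp(k * thr)) / (math.exp(k) - math.exp(k * thr))

def offload_probability(data_gb, bw_hz):
    return component(data_gb, MAX_DATA_GB) * component(bw_hz, MAX_BW_HZ)

print(offload_probability(5, 5e5))    # 0.0 (data below the 10% threshold)
print(offload_probability(50, 5e5))   # intermediate probability
print(offload_probability(100, 1e6))  # 1.0 (both parameters at their maxima)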
5. Conclusion
Mobile cloud computing is a mobile technology that combines the advantages of both mobile
computing and cloud computing, providing optimal services for mobile users. This technology offers much more
efficient computing by centralizing data storage, processing and bandwidth. Since mobile systems have limited
resources, offloading data to the cloud may alleviate these limitations. It is important to consider the path delay along
with the queue delay when taking the offloading decision. Following the algorithm stated in this paper
saves both time and energy and increases the performance of mobile computation technology.
References
[1] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, and M. Zaharia, "A view of cloud computing," Communications of the ACM, vol. 53, pp. 50-58, 2010.
[2] V. Namboodiri, "Towards sustainability in portable computing through cloud computing and cognitive radios," presented at the 39th International Conference on Parallel Processing Workshops, 2010.
[3] V. Cardellini, V. D. Personé, V. D. Valerio, F. Facchinei, V. Grassi, F. L. Presti, and V. Piccialli, "A game-theoretic approach to computation offloading in mobile cloud computing," Mathematical Programming, vol. 157, pp. 421-449, 2015.
[4] A. M. Hatami, M. Mirmohseni, and F. Ashtiani, "A new data offloading algorithm by considering interactive preferences," presented at the 2016 IEEE 27th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), 2016.
[5] Y. Kwon, H. Yi, D. Kwon, S. Yang, Y. Cho, and Y. Paek, "Precise execution offloading for applications with dynamic behavior in mobile cloud computing," Pervasive and Mobile Computing, vol. 27, pp. 58-74, 2016.
[6] X. Liu, C. Yuan, Y. Li, Z. Yang, and B. Cao, "A lightweight algorithm for collaborative task execution in mobile cloud computing," Wireless Personal Communications, vol. 86, pp. 579-599, 2015.
[7] K. Kumar, J. Liu, Y. Lu, and B. Bhargava, "A survey of computation offloading for mobile systems," Mobile Networks and Applications, vol. 18, pp. 129-140, 2012.
[8] F. Xia, F. Ding, J. Li, X. Kong, L. T. Yang, and J. Ma, "Phone2Cloud: Exploiting computation offloading for energy saving on smartphones in mobile cloud computing," Information Systems Frontiers, vol. 16, pp. 95-111, 2013.
[9] R. K. Balan, Simplifying cyber foraging. Pittsburgh, PA: School of Computer Science, Carnegie Mellon University, 2006.
[10] R. Roostaei and Z. Movahedi, "Mobility and context-aware offloading in mobile cloud computing," presented at the 2016 Intl IEEE Conferences on Ubiquitous Intelligence & Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People, and Smart World Congress (UIC/ATC/ScalCom/CBDCom/IoP/SmartWorld), 2016.
[11] X. Chen, "Decentralized computation offloading game for mobile cloud computing," IEEE Transactions on Parallel and Distributed Systems, vol. 26, pp. 974-983, 2015.
Asian Online Journal Publishing Group is not responsible or answerable for any loss, damage or liability, etc. caused in relation to/arising out of
the use of the content. Any queries should be directed to the corresponding author of the article.