Fundamental Tradeoffs in Resource Provisioning for
IoT Services over Cellular Networks
Amin Azari and Guowang Miao
KTH Royal Institute of Technology
Email: {aazari, guowang}@kth.se
Abstract—Performance tradeoffs in resource provision-
ing for mixed internet-of-things (IoT) and human-oriented-
communications (HoC) services over cellular networks are in-
vestigated. First, we present a low-complexity model of cellu-
lar connectivity in the uplink direction in which both access
reservation and scheduled data transmission procedures are
included. This model is employed subsequently in deriving
analytical expressions for energy efficiency, spectral efficiency,
and experienced delay in data transmission of connected devices
as well as energy consumption of base stations. The derived
expressions indicate that the choice of uplink resource provi-
sioning strategy introduces tradeoffs between battery lifetime for
IoT communications, quality of service (QoS) for HoC, spectral
efficiency and energy consumption for the access network. Then,
the impacts of system and traffic parameters on the introduced
tradeoffs are investigated. Performance analysis illustrates that
improper resource provisioning for IoT traffic not only degrades
QoS of high-priority services and decreases battery lifetime of
IoT devices, but also increases energy consumption of the access
network. The presented analytical and simulation results show
how spectral/energy efficiency for the access
network and QoS for high-priority services could be traded to
prolong battery lifetimes of connected devices by compromising
on the level of provisioned radio resources.
Index Terms—Internet of things, Machine-type communica-
tions, Resource provisioning, Energy efficiency, Green cellular
network.
I. INTRODUCTION
Mobile network operators are increasingly targeting internet
of things (IoT) services in order to close the revenue
gap. Motivated by the fact that cellular networks have already
penetrated almost everywhere, cellular IoT, also known as machine-
type communications, is expected to play an important role
in the realization of IoT [1]. Cellular IoT is generally charac-
terized by the massive number of concurrent active devices,
small payload size, and vastly diverse quality-of-service (QoS)
requirements [2]. Also, most IoT devices are battery driven,
and once deployed, their batteries will not be replaced. Hence,
long battery lifetime is of paramount importance for them
[3]. Serving the coexistence of ordinary cellular services like
mobile broadband for human-oriented communications, and
the IoT services with ultra-long battery lifetime and ultra-
high reliability requirements within one network constitutes
an interesting research problem which has attracted lots of
interests in recent years [4, 5]. The capacity limits of random
access channel (RACH) of LTE-A for serving IoT services
and a survey of improved solutions are studied in [6]. Among
the alternatives, access class barring (ACB), which reduces the
contention among nodes at the cost of introducing delay, and
cluster-based communications are promising solutions [7]. The
impact of massive IoT on the access probability of human-
type users to the random access channel (RACH) of LTE
has been investigated in [8]. In [9], separation of random
access resources between IoT and HoC has been investigated.
A thorough survey on LTE scheduling algorithms for mixed
IoT/HoC traffic is presented in [10], which indicates that the
existing IoT scheduling schemes are mainly overlay to HoC
scheduling, i.e. in each radio frame the unused resources for
HoC are allocated to IoT [10]. A time-controlled scheduling
framework for IoT traffic has been proposed in [11], which
divides the traffic into classes based on the QoS requirements
in order to preserve QoS for HoC while satisfying delay
requirements of IoT. Energy-efficient uplink scheduler design
for IoT traffic has been investigated in [12, 13].
The literature study reveals that while the impact of IoT
traffic on the access reservation and scheduled data trans-
mission of cellular networks has been partially investigated,
performance tradeoffs in resource provisioning for upcoming
massive IoT services are missing. The present work is devoted
to deriving a low-complexity model for cellular connectiv-
ity, analytically formulating the key performance indicators
(KPIs), and figuring out performance tradeoffs between intro-
duced KPIs. Towards this end, the main contributions of this
paper include:
- Develop a tractable framework to model the energy consumption
of IoT-type devices deployed over cellular networks, the
experienced delay and spectral efficiency of IoT/HoC
traffic in uplink transmissions, and the energy consumption
of the access network.
- Introduce battery lifetime, spectral efficiency, energy
efficiency, and delay tradeoffs in serving mixed IoT/HoC
traffic.
- Explore the impact of resource provisioning, traffic, and
medium access parameters on the introduced tradeoffs.
The rest of this paper is organized as follows. System model
is presented in the next section. Performance indicators are for-
mulated in section III. Performance tradeoffs are investigated
in section IV. Numerical results are presented in section V.
The concluding remarks are given in section VI.
II. SYSTEM MODEL
Consider a single cell with one base station at the center,
and a massive number of devices, including IoT-type devices
(c)IEEE copyright applies to this document.
[Figure: time-frequency grid of uplink resources: PRACH resources for HoC and IoT (M_H and M_I preambles, spanning τ_r sec) and PUSCH resources for HoC and IoT (B_H and B_I Hz, spanning τ_p sec), recurring with period T_RAO.]
Fig. 1: Uplink resource provisioning for mixed traffic
and user equipments (UEs), which are uniformly distributed
in the cell. The links between devices and the BS experience
Rayleigh fading. Here, we focus on the coexistence of two
distinct types of traffic, named P_I and P_H, in the uplink
direction. P_I stands for traffic generated by a massive number
of low-cost devices with constrained energy and radio front-ends
and small data payload sizes, while P_H stands for traffic generated
by conventional user equipments like smart-phones. Motivations
for considering only the uplink direction include the proposed
uplink/downlink decoupling architecture for 5G [14], and
the proposed multi radio access technology (RAT) architecture
[15], in which cellular users can be served with LTE and WiFi
for uplink and downlink transmissions respectively. Splitting
radio resources between IoT and HoC traffic has been frequently
used in the literature [9–11] and standards [16], as their
characteristics and QoS requirements are fundamentally different.
In split resource provisioning, two sets of RACH and physical uplink
shared channel (PUSCH) resources are reserved to occur
periodically with period T_RAO for serving the mixed traffic.
In each period, the RACH and PUSCH resources span τ_r and
τ_p seconds respectively, as depicted in Fig. 1. P_I and P_H are
generated according to two Poisson processes with arrival rates
λ_I and λ_H respectively. The numbers of available RACH
preambles in each random access opportunity (RAO) for P_I
and P_H traffic are denoted M_I and M_H respectively. Also,
the allocated bandwidths for uplink transmission of P_I and P_H
traffic over PUSCH are denoted B_I and B_H respectively.
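A minimal sketch of the arrival side of this model: with per-device Poisson rate λ_I, each currently idle device becomes active between two consecutive RAOs with probability 1 - e^{-λ_I T_RAO} (the T_RAO value below is an assumed example, not a parameter from the paper):

```python
import math
import random

def new_contenders(n_idle, lam, t_rao, rng):
    """Number of devices generating a packet between two RAOs: each
    idle device becomes active with probability 1 - exp(-lam * t_rao)."""
    p = 1.0 - math.exp(-lam * t_rao)
    return sum(1 for _ in range(n_idle) if rng.random() < p)

rng = random.Random(1)
lam_i, t_rao, n_idle = 1.0 / 450, 0.018, 24000  # t_rao = tau_r + tau_p (assumed)
mean = sum(new_contenders(n_idle, lam_i, t_rao, rng) for _ in range(200)) / 200
# mean should hover near n_idle * (1 - exp(-lam_i * t_rao))
```

This binomial thinning of the device population is exactly the arrival term used later in the Markov-chain transition probabilities.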
III. FORMULATING THE PERFORMANCE METRICS
A. Energy Efficiency for IoT Communications
For most reporting IoT applications, packet generation at
each device can be modeled as a Poisson process [17], and
hence, energy consumption of a device can be seen as a semi-
regenerative process where the regeneration point is at the end
of each successful data transmission. In each period of activity,
IoT devices wake up, gather and process data, establish uplink
connection with the BS, reserve uplink resources, and transmit
data over granted physical uplink resources. The energy
efficiency (EE) metric for IoT communications, denoted U_{E,I},
is defined as the ratio between the amount of useful transmitted
bits per reporting period and the total energy consumed in the
respective period for communications. Then, U_{E,I} for node i
is formulated as:

U^i_{E,I} = \tilde{D}_I / (E^i_{CS} + E^i_{DT}),   (1)
h ^
Ğůů/ŶĨŽ
WZ,͗ZĂŶĚŽŵĐĐĞƐƐ
ZĞƋƵĞƐƚ;ZE͕^Z͕ĂƵƐĞ͕
W,Ϳ
W,͗hƉůŝŶŬƐƐŝŐŶŵĞŶƚ
;Z,ƌĞĨĞƌĞŶĐĞ͕Wh^,
ĂůůŽĐĂƚŝŽŶ͕^sZсϬ͕Ͳ
ZEd/ĂƐƐŝŐŶŵĞŶƚͿ
Wh^,͗ĂƚĂƚƌĂŶƐĨĞƌ
;d>>/ͬ^ͲdD^/͕D^s^сϬ͕
ůĂƐƚсƚƌƵĞ͕ĚĂƚĂͿ
W,͗hƉůŝŶŬĐŬ
;d>>/ͬ^ͲdD^/͕ͲZEd/
ĐŽŶĨŝƌŵĂƚŝŽŶ͕^sZсϭͿ
ܶ
௦௬௡
൅ܶ
௣௦௜
ܶ
௧௫ǡ௥
ܶ
௧௫ǡ௣
ܶ
௔௖௞
;dƵƌŶƌĂĚŝŽŽŶͿ
;^ůĞĞƉͿ
ƵƚLJLJĐůĞ
ZĞƉŽƌƚŝŶŐWĞƌŝŽĚ
;tĂŬĞƵƉͿ
ĂƚĂŐĂƚŚĞƌŝŶŐ
;tĂŬĞƵƉͿ
ܶ
௔௦
Fig. 2: Connection establishment and data transmission pro-
cedure for cellular IoT
where \tilde{D}_I denotes the average size of useful data to be
transmitted; E^i_{CS} the average energy consumed in connection
establishment; and E^i_{DT} the average energy consumed in
scheduled data transmission. The presented energy efficiency
model can be used along with any medium access control
(MAC) solution for IoT, including cellular-based solutions. Let
us focus on LTE Release 13 for narrowband IoT [18, chapter
7] as an example, whose connection establishment and data
transmission procedures are depicted in Fig. 2. In this
case, E^i_{CS} consists of: (i) the energy consumed in receiving
primary system information (PSI) and performing synchronization,
(T_syn + T_psi) P_c, where P_c denotes the power consumed
in the electronic circuits and T_syn + T_psi is specified
in [18]; and (ii) the energy consumed in random access:

\frac{1}{F_{r,I}} [T_{tx,r} (P_c + \xi P_{i,r}) + P_c T_{as}],   (2)
where T_{tx,r} and T_{as} denote the average times spent in sending
a preamble and receiving the response, as specified in [18]; ξ is
the inverse power amplifier efficiency; and F_{r,I} is the average
probability of successful resource reservation over RACH. The
transmit power over RACH can be modeled as

P_{i,r} = P_{o,r} / g_i,

where P_{o,r} is the preamble received target power, which is
broadcast by the BS [18]. Also, the energy consumed in data
transmission over the physical uplink shared channel (PUSCH) is
modeled as

E^i_{DT} = \frac{1}{F_{p,I}} \Big[ P_c T_{ack} + (P_c + \xi P_{i,p}) \frac{\tau_p}{\bar{S}_I} \Big],
where the average probability of successful transmission over
PUSCH is denoted F_{p,I}, and the average time spent in
receiving the acknowledgment is denoted T_{ack}. Denote the channel
gain between device i and the BS as g_i = h d_i^{-σ}, where h ~
exp(1), d_i denotes the communications distance, and σ the
pathloss exponent. Then, the transmit power over PUSCH is
modeled as:

P_{i,p} = [2^{D_I / (\tau_p B_I / [\bar{S}_I + \bar{W}_I])} - 1] \Phi_I \gamma_0 d_i^{σ} / h,   (3)

in which the noise power spectral density (PSD) at the receiver
is modeled as N_0; the size of data plus overhead to be transmitted
as D_I; the average numbers of present and newly arriving devices
to be scheduled as \bar{W}_I and \bar{S}_I respectively; the SINR gap
between channel capacity and a practical coding and modulation
scheme as γ_0; \Phi_I = (\Theta_I + N_0) B_I; and the upperbound on
the PSD of out-of-cell interference as \Theta_I. Regarding the uniform distribution
of devices in the cell, one can find the average required
transmit powers over PUSCH and RACH as:

\bar{P}_p = \int_0^R P_{i,p} \frac{2y}{R^2} dy = \frac{2 R^{σ}}{σ + 2} [2^{D_I / (\tau_p B_I / [\bar{S}_I + \bar{W}_I])} - 1] \Phi_I \gamma_0,   (4)

\bar{P}_r = \int_0^R P_{i,r} \frac{2y}{R^2} dy = \frac{2 R^{σ}}{σ + 2} P_{o,r}.   (5)
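The distance averaging used in (4) and (5), i.e. averaging y^σ against the radial density f(y) = 2y/R^2 yields 2R^σ/(σ+2), can be checked numerically; a quick sketch with example values:

```python
def avg_path_gain_factor(R, sigma, n=10_000):
    """Midpoint-rule average of y**sigma under the pdf f(y) = 2y / R**2
    (devices placed uniformly in a disc of radius R)."""
    dy = R / n
    total = 0.0
    for i in range(n):
        y = (i + 0.5) * dy
        total += (y ** sigma) * (2.0 * y / R ** 2) * dy
    return total

R, sigma = 500.0, 4
closed_form = 2.0 * R ** sigma / (sigma + 2)  # = R^4 / 3 for sigma = 4
# the numeric average and the closed form agree closely
```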
One sees that all parameters that contribute in formulating
UE,I have been specified, unless ¯
SI,¯
WI,Fp,I , and Fr,I ,
which are derived using steady state analysis in the following
subsections.
1) Markov Chain for Modeling Uplink IoT Communica-
tions: The number of deployed IoT devices in future cellular
networks is expected to become much higher than the existing
UEs. Due to the surge in the number of IoT devices, access
class barring has been standardized to prevent network con-
gestion [6]. Based on the ACB, in case collision occurs in
preambles allocated to IoT, the colliding nodes will contend
after Q T_RAO seconds, where Q is geometrically distributed
with parameter q. Denote the total number of P_I devices in
the cell as N_I, the total number of available preambles for P_I as
M_I, the number of devices attempting RACH access in the
k-th RAO as U_k, and the number of connected devices which have
requested uplink service before the k-th RAO but have not been
scheduled yet as W_k. By considering (U_k, W_k) as the state of
the system, one can form a Markov chain in order to evaluate
the performance. We have:

U_{k+1} = U_k - S_k + A_{k+1},
W_{k+1} = W_k + S_k - Z_k,
where S_k denotes the number of devices which successfully pass
the RACH procedure in the k-th RAO; A_{k+1} the number of newly
arriving devices to the system between the k-th and (k+1)-th RAOs;
and Z_k the number of nodes which successfully receive uplink
service and leave the system. The probability of transition from
state (i_1, j_1) to (i_2, j_2) is derived as:

p_{(i_1,j_1);(i_2,j_2)} = \sum_{s=0}^{\min\{M_I, i_1\}} \sum_{z=0}^{j_1+s} pr(U_{k+1}=i_2, W_{k+1}=j_2 | U_k=i_1, W_k=j_1, S_k=s, Z_k=z)
\times pr(S_k=s, Z_k=z | U_k=i_1, W_k=j_1),   (6)
where,

pr(U_{k+1}=i_2, W_{k+1}=j_2 | U_k=i_1, W_k=j_1, S_k=s, Z_k=z) =
pr(U_{k+1}=i_2 | U_k=i_1, W_k=j_1, S_k=s, Z_k=z)
\times pr(W_{k+1}=j_2 | U_k=i_1, W_k=j_1, S_k=s, Z_k=z, U_{k+1}=i_2),

pr(W_{k+1}=j_2 | U_k=i_1, W_k=j_1, S_k=s, Z_k=z, U_{k+1}=i_2) = 1 if j_2 = j_1 + s - z, and 0 otherwise.

Also,

pr(U_{k+1}=i_2 | U_k=i_1, W_k=j_1, S_k=s, Z_k=z)
= pr(A_{k+1} = i_2 - (i_1 - s) | U_k=i_1, W_k=j_1, S_k=s, Z_k=z)
= B(N_I - i_1 - j_1 + z, i_2 - i_1 + s, 1 - e^{-λ_I T_RAO}),

for i_1 - s ≤ i_2 ≤ N_I - s - j_1 + z, and 0 otherwise. In this expression,

B(x, y, z) = \binom{x}{y} z^y (1 - z)^{x-y}.

Furthermore,

pr(S_k=s, Z_k=z | U_k=i_1, W_k=j_1) =
pr(S_k=s | U_k=i_1, W_k=j_1) pr(Z_k=z | S_k=s, U_k=i_1, W_k=j_1),

pr(S_k=s | U_k=i_1, W_k=j_1) = pr(S_k=s | U_k=i_1) =
\sum_{v=1}^{i_1} pr(S_k=s | V_k=v, U_k=i_1) pr(V_k=v | U_k=i_1),

where V_k denotes the number of nodes that, under the ACB
scheme, decide to contend for channel access out of the U_k
attempting nodes, and we have:

pr(V_k=v | U_k=i_1) = B(i_1, v, q).   (7)

In the following subsections, we investigate the probabilities
of failure in RACH and PUSCH transmissions, i.e.
pr(S_k=s | V_k=v, U_k=i_1) and pr(Z_k=z | S_k=s, U_k=i_1, W_k=j_1).
2) Probability of Failure in RACH Transmission: If several
devices select a preamble simultaneously in a RAO, collision
in RACH occurs. Considering the problem of distributing v
distinct objects into M_I distinct boxes,

pr(S_k=s | V_k=v, U_k=i_1) = pr(S_k=s | V_k=v)

is the probability of having s boxes occupied with exactly one
object, and is derived as [19]:

pr(S_k=s | V_k=v) = \frac{(-1)^s M_I! \, v!}{M_I^v \, s!} \sum_{l=s}^{\min(M_I, v)} \frac{(-1)^l (M_I - l)^{v-l}}{(l-s)! \, (M_I - l)! \, (v-l)!}.   (8)
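Equation (8) can be evaluated exactly with rational arithmetic, which sidesteps the cancellation issues of its alternating sum; a sketch (the parameter values below are illustrative):

```python
from fractions import Fraction
from math import factorial

def pmf_success(s, v, m):
    """P(exactly s of m preambles are picked by exactly one of the v
    contending devices): inclusion-exclusion form of (8) from [19],
    evaluated exactly with rationals."""
    if s > min(m, v):
        return Fraction(0)
    inner = sum(Fraction((-1) ** l * (m - l) ** (v - l),
                         factorial(l - s) * factorial(m - l) * factorial(v - l))
                for l in range(s, min(m, v) + 1))
    return Fraction((-1) ** s * factorial(m) * factorial(v),
                    m ** v * factorial(s)) * inner

# e.g. the LTE-like case of 54 preambles and 30 contenders
m, v = 54, 30
dist = [pmf_success(s, v, m) for s in range(min(m, v) + 1)]
```

Two useful sanity checks: the pmf sums to one, and its mean equals the expected number of singleton preambles, v (1 - 1/m)^{v-1}.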
3) Probability of Failure in PUSCH Transmission: If the
required transmit power for successful transmission of D_I
bits over the assigned PUSCH resources is higher than the
maximum allowed transmit power, the uplink transmission will be
unsuccessful. Then, one can derive the successful transmission
probability over PUSCH as:

pr(P_{i,p} < P_{max,I}) \overset{(a)}{=} \exp\Big(- \frac{d_i^{σ} \gamma_0 \Phi_I \gamma_I}{P_{max,I}}\Big),   (9)

where γ_I = 2^{D_I / (\tau_p B_I / [\bar{S}_I + \bar{W}_I])} - 1, P_{max,I} is the
maximum allowed transmit power, and (a) follows from the fact
that h is exponentially distributed. Regarding
the uniform distribution of devices in the cell, the probability
density function (PDF) of the distance between a device
and the BS is f(y) = 2y / R^2, where R is the cell radius and y
is the communications distance. Then, the average probability
of successful transmission over PUSCH is derived as:

F_{p,I} = \int_0^R \exp\Big(- \frac{y^{σ} \gamma_0 \Phi_I \gamma_I}{P_{max,I}}\Big) \frac{2y}{R^2} dy.   (10)

The integral in (10) can be evaluated using the integral tables
in [20]. For example, when σ = 4:

F_{p,I} = \sqrt{\pi} \, \mathrm{Erf}(\sqrt{A} R^2) / (2 \sqrt{A} R^2),

where A = \gamma_0 \Phi_I \gamma_I / P_{max,I}, and Erf(·) is the error
function [20]. Now, we have:

pr(Z_k=z | S_k=s, U_k=i_1, W_k=j_1) = B(s + j_1, z, F_{p,I}).
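The σ = 4 closed form of (10) can be sanity-checked against direct numerical integration; a sketch with purely illustrative values of A and R:

```python
import math

def fp_numeric(A, R, n=20_000):
    """Midpoint-rule evaluation of (10) for sigma = 4:
    F = int_0^R exp(-A y^4) (2y / R^2) dy."""
    dy = R / n
    total = 0.0
    for i in range(n):
        y = (i + 0.5) * dy
        total += math.exp(-A * y ** 4) * (2.0 * y / R ** 2) * dy
    return total

def fp_closed(A, R):
    """Closed form: sqrt(pi) * Erf(sqrt(A) R^2) / (2 sqrt(A) R^2)."""
    return (math.sqrt(math.pi) * math.erf(math.sqrt(A) * R ** 2)
            / (2.0 * math.sqrt(A) * R ** 2))

A, R = 1e-6, 30.0  # illustrative values, not the paper's parameters
```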
4) Deriving Steady-State Probabilities: To finalize our
analysis, the steady state probabilities are derived here. As
U_k and W_k take their values from {0, 1, ..., N_I}, the total
number of states is (N_I + 1)^2. In order to simplify the analysis,
one can reduce the number of states to N_u N_w, where:

N_u = \arg\min_{i_2} \{ p_{(i_1,j_1);(i_2,j_2)} ≤ ε, \ \forall i_1, j_1, j_2 \},
N_w = \arg\min_{j_2} \{ p_{(i_1,j_1);(i_2,j_2)} ≤ ε, \ \forall i_1, i_2, j_1 \},

and ε is a very small value. Now, the steady state probabilities
of the system, i.e. π = [π_{(0,0)}, ..., π_{(N_u,N_w)}], are calculated
by solving:

π = π P,   \sum_{i=0}^{N_u} \sum_{j=0}^{N_w} π_{(i,j)} = 1,   (11)
where P is the transition matrix, whose entries have been
derived in (6). Also, the average probability of successful
RACH transmission can be derived as:

F_{r,I} = \sum_{i=0}^{N_u} \sum_{j=0}^{N_w} π_{(i,j)} \sum_{l=2}^{i} B(i, l, q) f_{r,I}(l, M_I),   (12)

where f_{r,I}(l, M_I) is the success probability when l devices
contend for M_I preambles, and is derived as
[(M_I - 1)/M_I]^{l-1}. Furthermore, the average number of P_I
devices that successfully pass the RACH procedure in each
RAO is derived as:

\bar{S}_I = \sum_{i=0}^{N_u} \sum_{j=0}^{N_w} π_{(i,j)} \sum_{l=1}^{i} B(i, l, q) \sum_{x=1}^{l} x B(l, x, f_{r,I}(l, M_I)),   (13)

while the average number of devices that are to be scheduled
per RAO is:

\bar{S}_I + \bar{W}_I = \bar{S}_I + \sum_{i=0}^{N_u} \sum_{j=0}^{N_w} π_{(i,j)} j.
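The stationary distribution in (11) can be obtained by fixed-point (power) iteration; a minimal sketch on a toy two-state chain (the matrix values are arbitrary illustrations, not the paper's transition probabilities):

```python
def stationary(P, iters=10_000, tol=1e-12):
    """Fixed-point iteration for pi = pi P on a row-stochastic matrix
    P (list of rows), starting from the uniform distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        nxt = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(pi, nxt)) < tol:
            return nxt
        pi = nxt
    return pi

# Toy two-state chain; its stationary distribution is (5/6, 1/6).
P = [[0.9, 0.1], [0.5, 0.5]]
```

In the paper's setting the same loop would be wrapped in an outer iteration, since the entries of P themselves depend on S̄_I, W̄_I, and F_{r,I}.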
One sees that solving (11) requires knowledge of \bar{S}_I, \bar{W}_I, and
F_{r,I}, which are in turn functions of π_{(i,j)}. Then, this problem
can be solved in an iterative way. Now, one can assemble the
expressions derived in the previous subsection, and formulate
the energy efficiency metric as:

U_{E,I} = \tilde{D}_I \Big/ \Big\{ P_c (T_{syn} + T_{psi}) + \frac{T_{tx,r} (P_c + \xi \bar{P}_r) + P_c T_{as}}{F_{r,I}}
+ \frac{1}{F_{p,I}} \Big[ P_c T_{ack} + \Big( P_c + \xi \frac{2 R^{σ}}{σ + 2} [2^{D_I / (\tau_p B_I / [\bar{S}_I + \bar{W}_I])} - 1] \Phi_I \gamma_0 \Big) \frac{\tau_p}{\bar{S}_I} \Big] \Big\}.   (14)

The couplings between energy efficiency and individual and
network battery lifetime for P_I traffic have been investigated in
[21], to which the interested reader may refer to see how network
lifetime can be derived from the EE expression.
B. Experienced Delay in Data Transmission
The second KPI which is investigated in this work is the
experienced delay (ED) in uplink communications, i.e. from
first access reservation transmission to service completion
epoch. The average experienced delay by P_H traffic in uplink
communications can be modeled as:

U_{D,H} = \Big[\frac{1}{F_{r,H}} - 1\Big]^{+} T_RAO + \frac{1}{F_{p,H}} T_RAO,   (15)

where [x]^+ = max{0, x}, F_{r,H} is the probability of successful
reservation over the RACH resources allocated to P_H traffic, and
F_{p,H} = D_tx / D_H is defined as the ratio between the average
size of data that can be transmitted over the allocated PUSCH
resources per device per RAO and the average size of queued data
to be transmitted per P_H device, denoted D_H. D_tx is derived
as:

D_tx = \int_0^R \frac{B_H \tau_p}{\bar{S}_H + \bar{W}_H} \log_2\Big(1 + \frac{P_{max,H}}{\Phi_H \gamma_0} r^{-σ}\Big) \frac{2r}{R^2} dr   (16)

\overset{σ=4}{=} \frac{B_H \tau_p / \ln 2}{\bar{S}_H + \bar{W}_H} \Big[ \frac{2\sqrt{a}}{R^2} \arctan\Big(\frac{R^2}{\sqrt{a}}\Big) + \ln\Big(1 + \frac{a}{R^4}\Big) \Big],
where a = P_{max,H} / (\Phi_H \gamma_0). Also, \bar{S}_H, \bar{W}_H, and F_{r,H} are derived
following the same procedure as presented in subsections
III-A1 - III-A4 for \bar{S}_I, \bar{W}_I, and F_{r,I}. Furthermore, the average
experienced delay by P_I traffic can be modeled as:

U_{D,I} = \Big[\frac{1}{F_{r,I}} - 1\Big]^{+} T_RAO + \frac{1}{F_{p,I}} T_RAO,   (17)

where F_{p,I} and F_{r,I} have been found in (10) and (12)
respectively.
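The σ = 4 closed form of D_tx can be sanity-checked against direct numerical integration of (16); a sketch with purely illustrative values of a and R, with the B_H τ_p / (S̄_H + W̄_H) prefactor normalised out:

```python
import math

def dtx_numeric(a, R, n=200_000):
    """Midpoint-rule evaluation of the integral in (16) for sigma = 4,
    after the substitution u = r^2:
    (1 / (R^2 ln 2)) * int_0^{R^2} ln(1 + a / u^2) du."""
    du = R ** 2 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        total += math.log(1.0 + a / u ** 2) * du
    return total / (R ** 2 * math.log(2.0))

def dtx_closed(a, R):
    """Closed form: (1/ln 2)[(2 sqrt(a)/R^2) atan(R^2/sqrt(a)) + ln(1 + a/R^4)]."""
    sa = math.sqrt(a)
    return ((2.0 * sa / R ** 2) * math.atan(R ** 2 / sa)
            + math.log(1.0 + a / R ** 4)) / math.log(2.0)

a, R = 1e4, 10.0  # illustrative values only
```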
C. Uplink Spectral Efficiency
Spectral efficiency (SE) in uplink communications is an-
other important KPI for cellular networks which is investigated
in this section. Spectral efficiency, in terms of bit/sec/Hz, can
be modeled as:
U_S = \Big[ \frac{M_I}{M_I + M_H} \frac{\tau_r}{\tau_r + \tau_p} + \frac{B_I}{B_I + B_H} \frac{\tau_p}{\tau_r + \tau_p} \Big] U_{S,I}
+ \Big[ \frac{M_H}{M_I + M_H} \frac{\tau_r}{\tau_r + \tau_p} + \frac{B_H}{B_I + B_H} \frac{\tau_p}{\tau_r + \tau_p} \Big] U_{S,H},   (18)

where,

U_{S,I} = \frac{F_{p,I} (\bar{S}_I + \bar{W}_I) D_I}{\frac{M_I}{M_I + M_H} (B_I + B_H) \frac{\tau_r}{\tau_r + \tau_p} + B_I \frac{\tau_p}{\tau_r + \tau_p}},

U_{S,H} = \frac{(\bar{S}_H + \bar{W}_H) D_tx}{\frac{M_H}{M_I + M_H} (B_I + B_H) \frac{\tau_r}{\tau_r + \tau_p} + B_H \frac{\tau_p}{\tau_r + \tau_p}}.
D. Energy Consumption Modeling for the Uplink Module
Energy consumption (EC) of the uplink module of the BS
per unit of time can be modeled as:
U_BS = T_s P_s + (1 - T_s) P_{es},   (19)

where P_s and P_{es} denote the power consumption in service and
energy-saving modes respectively. Also, T_s indicates the
percentage of time the BS spends in the service mode. As P_I
and P_H traffic are served separately, T_s for P_I traffic can be
modeled using an M/D/1 queuing model as:

T_{s,I} = \frac{\tau_r}{\tau_r + \tau_p} + \lambda_I \frac{1}{F_{p,I}} \frac{\tau_p}{\bar{S}_I + \bar{W}_I}.

Also, T_s for P_H traffic is modeled as:

T_{s,H} = \frac{\tau_r}{\tau_r + \tau_p} + \lambda_H \frac{1}{F_{p,H}} \frac{\tau_p}{\bar{S}_H + \bar{W}_H}.

Then, one may derive the average percentage of time the BS
spends in the service mode as:

T_s = \frac{\tau_r}{\tau_r + \tau_p} + \max\Big\{ \lambda_I \frac{1}{F_{p,I}} \frac{\tau_p}{\bar{S}_I + \bar{W}_I}, \ \lambda_H \frac{1}{F_{p,H}} \frac{\tau_p}{\bar{S}_H + \bar{W}_H} \Big\}.
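The BS energy model in (19) can be sketched as follows, using the service-mode powers from Table I; the traffic-side inputs are illustrative placeholders rather than results from the steady-state analysis:

```python
def service_fraction(tau_r, tau_p, lam, f_p, s_bar, w_bar):
    """T_s for one traffic type per the T_{s,I} expression: the RACH
    share of the frame plus the PUSCH busy fraction (a sketch; f_p,
    s_bar, w_bar would come from the steady-state analysis)."""
    return tau_r / (tau_r + tau_p) + lam * (1.0 / f_p) * tau_p / (s_bar + w_bar)

def bs_energy_per_unit_time(t_s, p_s=130.0, p_es=10.0):
    """U_BS from (19): T_s P_s + (1 - T_s) P_es, powers from Table I."""
    return t_s * p_s + (1.0 - t_s) * p_es

# Illustrative placeholder inputs:
t_s = service_fraction(tau_r=0.002, tau_p=0.016, lam=2.0, f_p=0.9,
                       s_bar=3.0, w_bar=5.0)
```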
IV. TRADEOFFS IN RESOURCE PROVISIONING
From the derived expressions in the previous section, one
sees that the energy/spectral efficiency, delay, and energy
consumption performance metrics can be controlled by: (i)
frequency of occurrence of RAOs, i.e. 1/TRAO; (ii) RACH
resource partitioning, i.e. MIand MH; and (iii) PUSCH
resource partitioning, i.e. BIand BH.
From (2), one sees that the average energy efficiency for P_I
traffic decreases when the success probability in RACH reser-
vation decreases, i.e. when the amount of RACH resources
allocated to P_I decreases, which in turn increases the collision
probability in the respective resources. Also, one sees in (3)
that an increase in the amount of PUSCH resources allocated
to P_I traffic increases energy efficiency, as it decreases the
transmit power required for reliable data transmission to the
BS. In other words, with more radio resources, each device can
decrease its data transmission rate and send data with a lower
transmit power.
Furthermore, a decrease in T_RAO or τ_p results in a lower
collision probability, because it decreases the number of devices
that contend at each RAO.

TABLE I: Parameters for numerical analysis

Parameters                      Value
Cell inner and outer radius     50 m, 500 m
Pathloss (dB)                   128.1 + 37.6 log10(d/1000)
Bandwidth                       10 MHz
N_I, λ_I, D_I                   24000, 1/450, 62 Bytes
N_H, λ_H, D_H                   6000, 1/150, 30 KBytes
RACH configuration              Config. 0 in LTE-A
Number of preambles per RAO     54
τ_r, τ_p                        2, 16 ms
P_c, P_s, P_es                  0.05, 130, 10 W
P_max,I, P_max,H                0.2, 2 W
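The bandwidth-versus-power relation behind this tradeoff follows the structure of (3); a small sketch with the fading term set to one and all numbers purely illustrative:

```python
def required_tx_power(d_bits, tau_p, b_alloc, n_sharing, phi_gamma0, dist_sigma):
    """Required PUSCH transmit power following the structure of (3),
    with h = 1: the per-device rate target over its share of
    tau_p * b_alloc drives the exponential 2^x - 1 term."""
    snr = 2.0 ** (d_bits / (tau_p * b_alloc / n_sharing)) - 1.0
    return snr * phi_gamma0 * dist_sigma

# More provisioned bandwidth -> lower rate per Hz -> superlinearly lower power.
powers = [required_tx_power(496, 0.016, b, 10, 1e-9, 500.0 ** 4)
          for b in (90e3, 180e3, 360e3)]
```

Because the required SNR grows exponentially with the per-Hz rate, doubling the provisioned bandwidth reduces the required transmit power by more than half, which is the source of the battery-lifetime gain discussed above.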
From (15), one sees that the average experienced delay by P_H
traffic decreases when the success probability over RACH
increases, i.e. when the amount of RACH resources allocated
to P_H traffic increases. Also, as the average data payload size
for P_H traffic is much larger than for P_I traffic, the experienced
delay for P_H traffic decreases when D_tx increases. D_tx increases
when either the PUSCH resources allocated to P_H traffic, i.e.
B_H, or the uplink resources allocated to PUSCH, i.e. τ_p,
increase. From (19), one sees that a non-optimal allocation
of RACH and PUSCH resources to P_I and P_H traffic not
only degrades the QoS of connected devices, by decreasing the
energy efficiency and increasing the experienced delay, but
also increases the energy consumption of the BS, because in
this case the BS will be in the service mode for a longer
period of time. Also, from (19) one sees that a change in
the frequency of occurrence of RACH resources can either
increase or decrease the energy consumption of the BS. The
optimal τ_p, which minimizes the BS's energy consumption for
a given set of system and traffic parameters, can be derived
from (19). Finally, comparing (14) with (18) indicates that
while further RACH and PUSCH resource allocation to P_I
traffic can improve their energy efficiency and battery lifetimes,
it may reduce the spectral efficiency of the network. This is due
to the fact that P_I traffic usually consists of a large number of
short-lived sessions, each of which delivers only a small
amount of data to the network.
From an overall system perspective, we aim at minimizing
the consumed energy and radio resources in the access net-
work, maximizing the energy efficiency of communications,
and minimizing the experienced delay in data transmission.
Unfortunately, as discussed above, these objectives
cannot be treated separately because they are coupled in
conflicting ways such that improvements in one objective may
lead to deterioration of the other objectives. In the next section,
we investigate these tradeoffs by simulations.
[Figure: surface plots of the four KPIs versus α (fraction of RACH resources allocated to IoT) and β (fraction of PUSCH resources allocated to IoT), with the individually optimal operating points marked, e.g. maximum spectral efficiency of 1.311 bits/sec/Hz at (α, β) = (0.92, 0.004) and minimum BS energy consumption of 112 J at (α, β) = (0.582, 0.07).]
Fig. 3: Performance analysis: (a) energy efficiency for IoT traffic; (b) experienced delay for HoC traffic; (c) uplink spectral efficiency; (d) energy consumption for the BS.
[Figure: (a) BS energy consumption per unit time versus β (at α = 0.582) and versus α (at β = 0.07), with the minimum-energy points marked; (b) CDF of individual battery lifetimes for combinations of α ∈ {28%, 37%, 74%} and β ∈ {0.5%, 5%}; (c) 2D map of BS energy consumption over (α, β) annotating the individually optimal operating points and the directions in which EE, EC, ED, and SE increase or decrease.]
Fig. 4: Detailed performance analysis: (a) energy consumption for the BS versus α and β; (b) CDF of individual battery lifetimes (battery capacity: 500 J, static energy consumption per reporting period: 50 μJ, and λ_I = 1/150); (c) performance tradeoffs.
V. PERFORMANCE EVALUATIONS
The implemented system model in this section is based on
the uplink of a single cell with coexistence of IoT and HoC
traffic, in which devices are randomly distributed according
to a spatial Poisson point process. The simulation parameters
can be found in Table I. Fig. 3a illustrates energy efficiency
in uplink communications for IoT traffic versus αand β, the
fraction of allocated RACH and PUSCH resources to IoT and
HoC traffic respectively. It is evident that by increasing the
amount of provisioned resources, the energy efficiency in data
transmission, and hence the battery lifetime, can be improved
significantly. Fig. 3b represents the experienced delay in data
transmission for HoC traffic. One sees that an increase in the
amount of RACH and PUSCH resources allocated to IoT
traffic increases the delay experienced by HoC traffic. Fig.
3c depicts network spectral efficiency (SE) versus αand β.
We see that the maximum SE is achieved when 92% of
RACH resources and 0.4% of PUSCH resources are allocated
to IoT traffic. This is due to the fact that the payload size for
IoT traffic is much smaller than that of HoC traffic,
while the expected number of concurrent access requests from
IoT traffic is higher than that of HoC traffic. Fig. 3d
illustrates average energy consumption of the BS in serving
mixed IoT/HoC traffic. It is interesting to see that EC is
jointly quasi-convex over αand β. One sees that the minimum
EC for the BS is achieved when 58% of RACH and 0.7%
of PUSCH resources are allocated to IoT traffic. Detailed
analysis of EC as a function of α and β is depicted in
Fig. 4a, in which one sees how uplink resource provisioning
affects the energy consumption of the BS, and where the
optimized operating points are. Fig. 4b represents the cumulative
distribution function (CDF) of individual lifetimes of IoT-type
devices. From this figure, it is evident that an increase in the
amount of RACH/PUSCH resources allocated to IoT traffic
can substantially prolong the battery lifetime. For example,
by considering the first energy drain (FED) as the network
lifetime, increasing βfrom 0.5% to 5% can prolong the
FED network lifetime by 30% when α = 37%. To ease
understanding of the coupling among battery lifetime for IoT,
ED for HoC, EC for the BS, and network SE, in Fig. 4c the
individually optimized operating points derived above have
been depicted together. The background of this figure is a 2D
view of Fig. 3d, in which different energy consumption levels
are depicted by different colors. Considering the optimized
EC operating point as a reference, one sees in Fig. 4c that the
average energy consumption of the BS, the energy efficiency of
IoT communications, and the delay experienced by HoC all
increase when extra radio resources are allocated to IoT traffic.
The increase in energy efficiency of IoT communications is due
to the fact that the access probability over RACH and the success
probability over PUSCH grow with the amount of allocated
resources, which in turn results in decreasing QoS for HoC.
VI. CONCLUSION
In this work, we have investigated battery lifetime, spectral
efficiency, energy efficiency, and delay tradeoffs in green
cellular network design with coexistence of IoT and HoC
traffic. For a cellular network in which two types of distinct
traffic are served, analytical expressions for energy consump-
tion of the BS in serving the mixed traffic, as well as
energy/spectral efficiency and experienced delay of connected
devices in uplink data transmission have been derived as a
function of system, traffic, and resource provisioning param-
eters. Then, the performance impacts of control parameters
on the introduced tradeoffs have been studied. Significant
impacts of uplink resource provisioning on the battery lifetime
of energy-limited devices, energy/spectral efficiency of the
network, and experienced delay in uplink communications
have been presented using analytical and simulation results.
The derived results show how scarce radio
and energy resources for the BS and QoS for human-oriented
communications could be preserved while coping with the
ever increasing number of energy-limited IoT-type devices in
cellular networks.