On the Performance of Piecewise Linear
Approximation Techniques in WSNs
Samia Al Fallah
National School of Applied
Sciences, Tangier, Morocco
samia.alfallah@gmail.com
Mounir Arioua
National School of Applied
Sciences, Tetuan, Morocco
m.arioua@ieee.org
Ahmed El Oualkadi
National School of Applied
Sciences, Tangier, Morocco
eloualkadi@gmail.com
Jihane El Asri
National School of Applied
Sciences, Tetuan, Morocco
jihaneelaasri@gmail.com
Abstract—Energy consumption is the major constraint in
the design and the deployment of Wireless Sensor Networks
(WSNs). Since the transmission of data induces high energy
costs in WSN devices, many research efforts focus on reducing
the transmission of the raw data by using lossy compression
methods in order to improve energy efficiency with an acceptable
data reconstruction tolerance. Thus, an intricate trade-off exists
between the energy saved by compression and the distortion
of the reconstructed data samples. In this paper, we
present a survey on Piecewise Linear Approximation methods.
A comparative analysis aims to evaluate the performance of the
selected techniques in terms of energy consumption, compression
ratio and distortion.
Index Terms—WSNs, Lossy compression, PLA, Energy efficiency, Distortion.
I. INTRODUCTION
Wireless Sensor Networks (WSNs) have recently received
considerable attention due to a wide range of applications such as
animal monitoring, agriculture monitoring, health care, IoT,
indoor surveillance and smart buildings [1]. A typical WSN
consists of a large number of sensors deployed in regions of
interest in order to observe specific phenomena or track objects.
Energy consumption is one of the most important design factors
in WSNs due to the resource-constrained transmission devices [2].
Hence, since the major part of the energy budget is consumed in
data transmission [3]-[6], various approaches such as data
compression have been proposed to prolong sensor node
lifetime. Some compression algorithms
are designed to support exact reconstruction of the original
data after decompression (lossless compression) [4]. In other
cases, the reconstructed data is only an approximation of the
original information (lossy compression) [5]. The use of lossy
algorithms may lead to loss of information (distortion), but
generally ensures some additional gains in terms of compres-
sion ratio and most importantly in terms of energy saving. In
this paper, we focus on the performance offered by Piecewise
Linear Approximation (PLA) techniques in terms of energy
saving, compression ratio and reliability of reconstructed data.
With lossy algorithms, the original data is compressed by
eliminating some of the original information in it so that,
at the receiver side, the decompressor can reconstruct the
original signal up to a certain accuracy [5]. Depending on the
application, a small inaccuracy in the reconstructed data can be
acceptable.
The rest of this paper is organized as follows: Section II
reviews research efforts on the performance of lossy
compression methods. PLA techniques are introduced in
Section III, especially LTC, PLAMLiS and Enhanced PLAMLiS.
Section IV provides an overview of compression metrics,
followed by a comparative study of the selected techniques in
Section V. Finally, the conclusion is given in Section VI.
II. RELATED WORKS
Several works have addressed lossy compression schemes
[5] [6] [8] [10] [11]. On the one
hand, some of these approaches are based on transforming
the input signal into coefficients in order to facilitate signal
representation [7]. As an example, FFT [8], DCT [9] and
Wavelet transform [10] represent time series in the frequency
domain, but differ in how transformation coefficients are
picked. Specifically, as mentioned in [11], transformation
methods achieve a good performance in terms of compression
ratio, but unfortunately, incur high energy expenditure due to
their computational cost. On the other hand, adaptive modeling
techniques aim to represent the input signal through linear [12]
[13], polynomial [14] or autoregressive methods [15]. Hence,
the input time series is collected according to N samples for
each time window transmission. Then the selected compres-
sion method is applied obtaining a set of model parameters
that will be transmitted in the place of the original data. In
the case of linear techniques, PLA represents a time series of
environmental measures with a sequence of line segments up
to a desired approximation accuracy [12] [13]. In the case
of polynomial approaches, the input signal is approximated
through polynomial coefficients, instead of transmitting the
original data samples [14]. In the case of autoregressive
methods, a model of basic coefficients is built using the history
of data samples exploiting the correlation of the signal. As
mentioned in [11], PLA approaches ensure a better energy
cost with a very low computational complexity, contrary to
Polynomial Regression (PR) which induces a high complexity
cost but performs well in terms of accuracy while increasing
the polynomial order. In addition, increasing the length of the
correlation signal increases the length of the autoregressive
model that may lead to high energy consumption [5].
978-1-5386-4609-0/18/$31.00 © 2018 IEEE
In this paper, we perform a comparative study of lossy
compression methods, especially PLA approaches. The main
goal is to characterize the trade-off between the energy
consumed for compression and the reliability of the
reconstructed data at the receiver side.
III. PLA COMPRESSION METHODS
PLA is a family of linear approximation techniques that
represent data samples with a sequence of line segments while
preserving the original samples within a desired approximation
tolerance. The objective of PLA methods is to approximate the
time series with a sequence of lines in order to reduce the
energy consumed on data transmission. Since each line segment
is determined by only two end points, PLA leads to an efficient
representation of time series in terms of transmission
requirements and memory [12] [13].
Fig. 1: Approximation of a time series x(n) by a segment.
At the receiver side, the n observations are approximated
through the vertical projection of the actual samples onto the
corresponding line segment (Figure 1). The approximated signal
is referred to in what follows as x̂(n). The error introduced is
the distance from each actual sample to the segment along the
vertical projection, i.e. |x(n) − x̂(n)|. Following this simple
idea, several methods have been proposed in the literature.
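At the receiver, reconstruction is then simple linear interpolation between the received segment endpoints. A minimal Python sketch (the function name and data layout are our own, not from the paper) illustrates this:

```python
def pla_reconstruct(endpoints, n):
    """Rebuild the approximated series x-hat of length n by linear
    interpolation between consecutive PLA endpoints (index, value)."""
    x_hat = [0.0] * n
    for (i0, v0), (i1, v1) in zip(endpoints, endpoints[1:]):
        slope = (v1 - v0) / (i1 - i0)
        for k in range(i0, i1 + 1):
            x_hat[k] = v0 + slope * (k - i0)
    return x_hat
```

Only the endpoints travel over the radio; the sink recovers every intermediate sample from them.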
Lightweight Temporal Compression (LTC) is a lightweight
technique to compress environmental measurements [12]. It
is a simple method that represents a time series by a small
number of line segments. Algorithm 1 shows the pseudo-code
of this technique.
Given a time series x(n) and an error tolerance ε, the
algorithm fixes the first measurement x(1) as the beginning
of a line. The second measurement x(2) is transformed into a
vertical segment whose extremities are x(2)+ε and x(2)−ε.
The sensor stores a Highline connecting x(1) and x(2)+ε,
and a Lowline connecting x(1) and x(2)−ε, as shown in Figure
2(a). With the third measurement x(3), the node tightens these
bounds to ensure that a single line can still represent the third
measurement within ε (Figure 2(b)). The process is repeated
until a sample x(s) cannot be represented by any line segment
within the bounds (Figure 2(c)). Once this occurs, the node
transmits a packet containing the first endpoint and computes
the midpoint of the upper and lower bounds as the next starting
point. Then the algorithm starts over, looking for a new line
segment.
Thereby, the LTC algorithm encodes the time series
incrementally, which makes the number of operations
Algorithm 1 LTC Algorithm
Inputs: x, ε // Time series, Error tolerance
for i = 1 to length(x) do
  j = i + 1
  Highline = Line-function[x(i), x(j)+ε]
  Lowline = Line-function[x(i), x(j)−ε]
  while j < length(x) do
    if Highline below x(j+1)−ε or Lowline above x(j+1)+ε then
      Save x(j)
      i = j
      Break
    else
      if Highline above x(j+1)+ε then
        Highline = Line-function[x(i), x(j+1)+ε]
      end if
      if Lowline below x(j+1)−ε then
        Lowline = Line-function[x(i), x(j+1)−ε]
      end if
    end if
    j = j + 1
  end while
end for
Fig. 2: Steps of the Lightweight Temporal Compression Technique
(complexity) independent of the correlation of the original
signal [5]. In addition, LTC may be less efficient in terms of
compression ratio when the data values change significantly
over time [11].
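As an illustration, a compact Python sketch of Algorithm 1 follows (function name and data layout are our own, not part of the original algorithm). It tracks the slopes of the Highline and Lowline from the current segment origin and, when a sample falls outside both bounds, emits the midpoint of the bounds as the segment endpoint and restarts from there:

```python
def ltc_compress(x, eps):
    """Sketch of LTC: return the (index, value) endpoints of the
    line segments that approximate x within +/- eps."""
    n = len(x)
    if n == 0:
        return []
    endpoints = [(0, float(x[0]))]
    start, base = 0, float(x[0])   # origin of the current segment
    hi = lo = None                 # slopes of the bounding lines
    i = 1
    while i < n:
        dt = i - start
        new_hi = (x[i] + eps - base) / dt  # slope to x(i)+eps
        new_lo = (x[i] - eps - base) / dt  # slope to x(i)-eps
        if hi is None:                     # second point of the segment
            hi, lo = new_hi, new_lo
        elif new_lo > hi or new_hi < lo:
            # x(i) cannot be covered: close the segment at i-1 with the
            # midpoint of the bounds, then restart from that endpoint.
            mid = base + (hi + lo) / 2 * (i - 1 - start)
            endpoints.append((i - 1, mid))
            start, base = i - 1, mid
            hi = lo = None
            continue                       # re-examine x(i) from the new origin
        else:                              # tighten the bounds
            hi, lo = min(hi, new_hi), max(lo, new_lo)
        i += 1
    if n > 1:                              # close the last open segment
        endpoints.append((n - 1, base + (hi + lo) / 2 * (n - 1 - start)))
    return endpoints
```

Decompression is then plain linear interpolation between consecutive endpoints; note that, unlike PLAMLiS, the emitted endpoint values need not coincide with actual samples.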
Another significant PLA algorithm is Piecewise Linear
Approximation with Minimum number of Line Segments
(PLAMLiS). This approach represents the time series through a
sequence of line segments [16]. Given a time series and an
error tolerance ε, the goal of this algorithm (Algorithm 2) is
to find a minimum number of segments approximating the
time series such that the difference between any approximated
value and its actual value is less than ε. The endpoints of the
line segments must be points of the time series.
Algorithm 2 PLAMLiS Algorithm
Inputs: x, ε // Time series, Error tolerance
for i = 1 to length(x)−1 do
  j = i + 2
  while j < length(x) do
    Line = Line-function(x(i), x(j))
    for k = i to j do
      if Calculate-error(Line, x(k)) < ε then
        k = k + 1
      else
        Segment = [x(i), x(j−1)]
        Break
      end if
    end for
    j = j + 1
  end while
end for
For each data sample x(i), segments are built associating
x(i) with x(j) (j > i) if the line segment [x(i), x(j)] meets
the error bound ε. Specifically, the difference between the
approximated value x̂(k) (i < k < j) and the actual value
x(k) is computed by the Calculate-error function, in order to
verify that the distance |x(k) − x̂(k)| is not larger than ε. This
procedure is iterated over all points of the time series. After
obtaining the set of candidate segments, the algorithm picks
the minimum number of line segments that covers all the
points of the time series [16].
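The full algorithm formulates this selection as a set-cover problem. As a simplified illustration only (our own greedy shortcut, not the paper's exact procedure), the sketch below repeatedly extends each segment to the farthest sample it can reach within ε, which also yields a chain of segments whose endpoints are actual samples:

```python
def _fits(x, i, j, eps):
    """True if the line through (i, x[i]) and (j, x[j]) stays within
    eps of every sample between them."""
    slope = (x[j] - x[i]) / (j - i)
    return all(abs(x[k] - (x[i] + slope * (k - i))) <= eps
               for k in range(i, j + 1))

def plamlis_greedy(x, eps):
    """Greedy sketch in the spirit of PLAMLiS: cover the series with
    segments whose endpoints are actual samples, extending each
    segment as far as it can reach within eps."""
    n = len(x)
    endpoints = [0]
    i = 0
    while i < n - 1:
        j = n - 1
        # shrink from the end until the segment [i, j] is feasible
        while j > i + 1 and not _fits(x, i, j, eps):
            j -= 1
        endpoints.append(j)
        i = j
    return [(k, x[k]) for k in endpoints]
```

Because every endpoint is a real sample, the reconstruction passes exactly through those points, at the cost of a more expensive search than LTC.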
In order to reduce the computational cost of the PLAMLiS
algorithm, Enhanced PLAMLiS (EPLAMLiS) has been proposed
in the literature [13]. It is based on a recursive segmentation,
as shown in Algorithm 3. The algorithm starts with the single
segment [x(1), x(N)]. If this segment approximates all points
within the maximum allowed tolerance ε, the two endpoints
are transmitted and the algorithm ends. Otherwise, the segment
is split in two at the point x(i), 1 < i < N, where the
approximation error is maximum, yielding two segments
[x(1), x(i)] and [x(i), x(N)]. This procedure is applied
recursively to each part until all sub-segments meet the error
bound, as shown in Figure 3.
Algorithm 3 Enhanced PLAMLiS Algorithm
Inputs: x, ε // Time series, Error tolerance
procedure Approximate(x(a), x(b))
  Segment = Line-function(x(a), x(b))
  if Max-error(Segment) > ε then
    i = index of the maximum error, a < i < b
    Approximate(x(a), x(i))
    Approximate(x(i), x(b))
  end if
end procedure
Approximate(x(1), x(N))
Fig. 3: Steps of the Enhanced PLAMLiS Compression technique
The EPLAMLiS algorithm is intended for sensed data in
sensor networks that exhibit significant temporal correlation
[11]. Owing to this correlation, successive values in the data
series are quite similar; approximating them by line segments
therefore yields benefits in terms of compression ratio, and the
number of line segments obtained is likely to be small.
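The recursive splitting of Algorithm 3 can be sketched in a few lines of Python (function names are our own). Each call measures the worst-approximated sample under the current segment and, if it exceeds ε, records it as a new breakpoint and recurses on both halves:

```python
def eplamlis_compress(x, eps):
    """Top-down sketch of Enhanced PLAMLiS: start from one segment
    over the whole series and split at the worst-approximated
    sample until every sub-segment meets the bound eps."""
    bps = {0, len(x) - 1}              # breakpoint indices (endpoints kept)

    def split(a, b):
        if b - a < 2:                  # adjacent samples: always exact
            return
        slope = (x[b] - x[a]) / (b - a)
        worst, werr = a, 0.0
        for k in range(a + 1, b):      # locate the largest deviation
            err = abs(x[k] - (x[a] + slope * (k - a)))
            if err > werr:
                worst, werr = k, err
        if werr > eps:                 # split at the worst sample
            bps.add(worst)
            split(a, worst)
            split(worst, b)

    split(0, len(x) - 1)
    return sorted((k, x[k]) for k in bps)
```

Compared to the bottom-up segment construction of PLAMLiS, this top-down recursion examines far fewer candidate segments, which is where its energy advantage comes from.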
IV. COMPRESSION METRICS
Before getting into the comparative study of PLA
techniques, we introduce in the following the compression
metrics that assess the overall performance of the selected
methods.
The compression ratio is one of the major evaluation
parameters in data compression [17]. It characterizes the
compression effect of a technique, and is defined as the ratio
between the volume of the compressed data and the volume of
the raw data:

CR = Volume of compressed data / Volume of raw data (1)
In WSNs, the compression ratio also indicates the reduction
in communication energy costs; hence, several works focus on
selecting well-performing compression algorithms with a view
to lowering the energy consumed in data communication [5]
[6].
Another important metric adopted in appraising compression
techniques is the energy consumed for compression. It can be
defined as the energy needed to accomplish the compression
task. For each compression algorithm, we count the number
of operations, accounting for additions, subtractions,
multiplications, divisions and comparisons. Depending on the
micro-controller used in the study, we map this count to the
corresponding number of clock cycles and subsequently
calculate the energy consumed for the processing of each
algorithm.
Total energy consumption is the sum of the energies for
compression and transmission. For assessing the performance
of compression algorithms in WSNs, a proper criterion is
needed which focuses on the energy efficiency of each
algorithm. The Energy Saving Benefit (ESB) exposes the
energy saving introduced by compression algorithms [17]. The
ESB expression is formulated as:

η = (Euncomp − Ecomp) / Euncomp × 100 (2)

where Euncomp is the total energy cost without compression,
and Ecomp is the total energy cost with compression.
The energy consumption without compression is expressed
as follows:

Euncomp = Ptran × L × Ttran (3)

where Ptran is the transmit power, L is the volume of raw
data and Ttran is the time overhead of transmitting one byte.
The energy consumption with compression can be formulated
as:

Ecomp = PMCU × L × TMCU + Ptran × L × Ttran × CR (4)

where PMCU is the computation power of the compression
algorithm and TMCU is the time overhead of compressing one
byte. Using equations (3) and (4), η becomes:

η = (1 − CR − (PMCU × TMCU) / (Ptran × Ttran)) × 100 (5)

Thus, this evaluation criterion includes almost all the main
metrics used to evaluate compression, and indicates whether
data compression can bring energy savings or not.
In the case of lossy compression, the reconstructed data
at the receiver side is only an approximation of the original
information. This loss of information can be measured by a
distortion parameter defined as follows:

D = (1/N) Σ_{i=1}^{N} |x̂(i) − x(i)| (6)

where x(i) is an element of a given time series and x̂(i)
represents its reconstructed version.
The prescribed signal representation accuracy necessarily
depends on the WSN application. Hence, the selected data
compression methods exploit signal correlation in order to
minimize energy expenditure and ensure a high reliability of
the reconstructed data [11].
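These definitions translate directly into code. The short sketch below (the helper names are ours) mirrors equations (1), (5) and (6):

```python
def compression_ratio(compressed_size, raw_size):
    # Eq. (1): CR = volume of compressed data / volume of raw data
    return compressed_size / raw_size

def energy_saving_benefit(cr, p_mcu, t_mcu, p_tran, t_tran):
    # Eq. (5): eta = (1 - CR - (P_MCU*T_MCU)/(P_tran*T_tran)) * 100
    return (1 - cr - (p_mcu * t_mcu) / (p_tran * t_tran)) * 100

def distortion(x, x_hat):
    # Eq. (6): mean absolute error between the series and its
    # reconstruction
    return sum(abs(a - b) for a, b in zip(x_hat, x)) / len(x)
```

A positive η means the combined processing-plus-transmission cost with compression is lower than transmitting the raw data.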
V. TESTS AND RESULTS
This section provides a comparison of the performance of
the compression algorithms described in the previous section.
We have selected the TI MSP430 micro-controller, using the
16-bit floating-point package for the calculations [18]. The TI
MSP430 draws a current of I = 330 μA at a voltage of
V = 2.2 V with a clock rate of C = 1 MHz. Hence, the energy
consumed per clock cycle is given by:

E0 = (V × I) / C = 0.726 nJ
Table I lists the CPU cycles needed for each type of
calculation. The energy for compression is thus computed by
recording the number of clock cycles needed for each type of
operation while executing a data compression algorithm.
TABLE I: CPU cycles needed for processing
Operation            Clock cycles
Addition X+Y         184
Subtraction X−Y      177
Multiplication X*Y   395
Division X/Y         405
Comparison X<=>Y     37
For this analysis, we have selected the TI CC2420 RF
transceiver [19], which follows the IEEE 802.15.4 standard [20].
The energy cost of transmitting one bit can be expressed as
follows:

ET = (U × I) / D = 0.23 μJ

where I = 17.4 mA is the current consumption for transmission
at a voltage of U = 3.3 V, for an effective data rate of
D = 250 kbps.
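As an illustration of how these figures combine, the sketch below (the naming is ours) maps operation counts to processing energy via the Table I cycle costs. Note that V × I / C with the stated MSP430 figures gives 0.726 nJ per clock cycle, which is the per-cycle value assumed here:

```python
# Energy model sketch using the MSP430 cycle counts of Table I
# together with the per-cycle and per-bit energies derived above.
CYCLES = {"add": 184, "sub": 177, "mul": 395, "div": 405, "cmp": 37}
E_CYCLE = 0.726e-9   # J per MSP430 clock cycle (2.2 V, 330 uA, 1 MHz)
E_TX_BIT = 0.23e-6   # J per CC2420 transmitted bit (3.3 V, 17.4 mA, 250 kbps)

def processing_energy(op_counts):
    """Energy to run an algorithm given its operation counts,
    e.g. {'add': 120, 'cmp': 300}."""
    return sum(CYCLES[op] * n for op, n in op_counts.items()) * E_CYCLE

def transmission_energy(n_bits):
    """Energy to transmit n_bits over the CC2420 radio."""
    return n_bits * E_TX_BIT
```

Counting the operations executed by each PLA algorithm and the bits it leaves to transmit then yields the total-energy comparison of the next subsections.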
In this study, we have considered a time series of N=24
temperature samples collected during one day (one sample per
hour) in Tetuan city as shown in Figure 4.
Fig. 4: Collected temperature samples of Tetuan City
The analysis aims to evaluate the performance of the lossy
compression methods presented in Section III, in terms of
compression effectiveness and energy saving. For each
compression method, we vary the compression ratio by tuning
the error tolerance.
A. Compression Ratio vs Energy for compression
The performance in terms of energy for compression as a
function of the compression ratio is studied.
Fig. 5: Energy for compression vs Compression Ratio.
Figure 5 shows the energy for compression as a function
of the compression ratio for each compression method. For
increasing values of the error tolerance ε, the compression ratio
becomes systematically smaller for all schemes, but the energy
consumed for compression differs. In fact, the PLAMLiS
algorithm requires a large amount of energy, contrary to
EPLAMLiS and LTC, which require a small energy expenditure.
The energy for compression is strongly related to the
complexity of the algorithm. LTC encodes the time series
incrementally, sample by sample, regardless of the error
tolerance value; thus, its number of operations depends only
weakly on the compression ratio. EPLAMLiS does less work
for increasing values of ε, since the number of operations
(split and recurse) becomes smaller, and consequently its
energy consumption is reduced. In the PLAMLiS case, for
each point of the time series the algorithm finds the longest
segment that meets the error tolerance. For high values of ε
these segments become longer, so the algorithm performs more
work as the error bound is increased; as a result, the energy
for compression increases.
B. Compression Ratio vs Total Energy
The total energy consumption is the sum of the computational
and transmission energies. Figure 6 shows the influence of the
compression ratio on the energy saving. The three curves have
almost the same shape as in Figure 5, with a small difference
in slope. This difference is due to the transmission energy,
which decreases as the compression ratio becomes smaller. As
a result, only LTC and EPLAMLiS achieve some energy
saving, contrary to PLAMLiS, which requires more energy
expenditure. Compared to EPLAMLiS, LTC presents the most
significant energy saving.
Fig. 6: Total Energy Consumption vs Compression Ratio.
C. Compression Ratio vs Distortion
Distortion is the major parameter for measuring the
reliability of a compression method. Figure 7 shows the
variation of the distortion as a function of the compression
ratio.
For all compression methods, the distortion increases with
increasing values of ε, and is therefore inversely related to
the compression ratio. Compared to LTC, EPLAMLiS has the
lowest distortion for a given compression ratio, due to the fact
that the endpoints of all its line segments must be points of
the data series, which is not the case for LTC. For this reason,
EPLAMLiS shows a high level of signal representation
accuracy.
Fig. 7: Distortion vs Compression Ratio.
VI. CONCLUSION
In this paper, we have compared the performance of PLA
techniques in terms of compression ratio, energy saving and
accuracy of reconstructed data for wireless sensor network
devices. The obtained results reveal a trade-off between energy
saving and the reliability of the reconstructed data. LTC is a
lightweight compression method that incurs the smallest
energy expenditure; however, compared to the EPLAMLiS
algorithm, it suffers a drawback in terms of the distortion of
the reconstructed data samples. In the EPLAMLiS case, the
signal is decompressed with a high level of accuracy, but at
the cost of some extra energy expenditure. Future work aims
to propose a combined algorithm based on the compelling
features of the LTC and EPLAMLiS algorithms, in order to
optimize the trade-off between energy cost and data accuracy.
REFERENCES
[1] Murtadha M. N. Aldeer.(December 2013). A Summary Survey on
Recent Applications of Wireless Sensor Networks. In IEEE Student
Conference on Research and Development, Putrajaya, Malaysia,
doi:10.1109/SCOReD.2013.7002637.
[2] A. Damaso, D. Freitas, N. Rosa , B. Silva and P. Maciel. (2013).
Evaluating the Power Consumption of Wireless Sensor Networks
Applications Using Models. In Sensors 2013, 13, 3473-3500,
doi:10.3390/s130303473
[3] T. Sheltami, M. Musaddiq, E. Shakshuki. (November
2016). Data compression techniques in Wireless Sensor
Networks. In Future Generation Computer Systems, 64, 151-162,
doi:10.1016/j.future.2016.01.015.
[4] Yao Liang and Yimei Li. (March 2014). An Efficient and Robust
Data Compression Algorithm in Wireless Sensor Networks. In IEEE
COMMUNICATIONS LETTERS, VOL. 18, NO. 3.
[5] D.Zordan , B.Martinez , I.Vilajosana and M.Rossi. (November 2014).
On the Performance of Lossy Compression Schemes for Energy
Constrained Sensor Networking, in ACM Transactions on Sensor
Networks. 11(1), 34 pages, doi: 10.1145/2629660.
[6] M. A.Razzaque , C.Bleakley , and S. Dobson. (2013). Compression
in wireless sensor networks: A survey and comparative evaluation, in
ACM Transactions on Sensor Networks. 11(1), Article 5 , 44 pages,
doi:10.1145/2528948.
[7] Wallace, G. K. (February 1992). The JPEG still picture compression
standard. In IEEE Transactions on Consumer Electronics 38, 1, xviii
- xxxiv.
[8] B. Narang, A Kaur, D. Singh. (July 2015). Comparison of DWT
and DFT for energy efficiency in underwater WSNs. In International
Journal of Computer Science and Mobile Computing, Vol. 4, Issue.
7, pp.128-137.
[9] G. Quer, R. Masiero, D. Munaretto, M. Rossi, J. Widmer, and
M. Zorzi. (2009). On the interplay between routing and signal
representation for compressive sensing in wireless sensor networks,
in Proceedings of the IEEE International Conference on Information
Theory and Applications in San Diego, USA. IEEE, pp. 206-215,
doi : 10.1109/ITA.2009.5044947.
[10] G.Shen , A.Ortega. (2010). Transform-based distributed data gather-
ing. In IEEE Transactions on Signal Processing, 58, 3802-3815, doi
: 10.1109/TSP.2010.2047640.
[11] D.Zordan, B. Martinez, I.Vilajosana and Rossi. (2012). To Com-
press or Not To Compress: Processing vs Transmission Tradeoffs
for Energy Constrained Sensor Networking. In Computer Science,
Networking and Internet Architecture, arXiv:1206.2129.
[12] T.Schoellhammer, B.Greenstein , E.Osterweil , M.Wimbro, and
D.Estri. (2004). Lightweight temporal compression of microcli-
mate datasets. Paper presented at IEEE International Conference
on Local Computer Networks (LCN). Tampa, FL, US, doi :
10.1109/LCN.2004.72.
[13] N. D. Pham, T. D. Le, H. Choo. (2008). Enhance exploring tem-
poral correlation for data collection in WSNs. Paper presented at
IEEE International Conference on Research, Innovation and Vi-
sion for the Future (RIVF), Ho Chi Minh City, Vietnam, doi :
10.1109/RIVF.2008.4586356.
[14] S. Ozdemir and Y. Xiao. (2011). Polynomial Regression Based
Secure Data Aggregation for Wireless Sensor Networks. Paper
presented at Global Telecommunications Conference (GLOBECOM
2011) in Kathmandu, Nepal, doi :10.1109/GLOCOM.2011.6133924.
[15] J.-L.LU, F.Valois and M.Dohler, (2010). Optimized Data Aggregation
in WSNs Using Adaptive ARMA. In International Conference on
Sensor Technologies and Applications (SENSORCOMM). Venice,
Italy, doi : 10.1109/SENSORCOMM.2010.25.
[16] Liu, C., Wu, K., and Pei, J.(June 2007). An energy-efficient data col-
lection framework for wireless sensor networks by exploiting spatio-
temporal correlation. IEEE Transactions on Parallel and
Distributed Systems, 18(7), 1010-1023, doi:10.1109/TPDS.2007.1046.
[17] B. Ying, Y. Liu, H. Yang, and H. Wang. (2013) Evaluation of Tunable
Data Compression in Energy-Aware Wireless Sensor Networks. In
Sensors, 10, 3195-3217; doi:10.3390/s100403195.
[18] L. Bierl. MSP430 Family Mixed-Signal Microcontroller Application
Reports, Technical Report, Texas Instruments Incorporated, 2000.
[19] Chipcon. (2007). SmartRF CC2420: 2.4 GHz
IEEE 802.15.4/ZigBee-ready RF Transceiver, Technical Report,
Texas Instruments Incorporated.
[20] IEEE P802.15 Working Group, Std. IEEE 802.15.4-2003:Wireless
Medium Access Control (MAC) and Physical Layer(PHY) Specifica-
tions for Low-Rate Wireless Personal Area Networks (LR-WPANs),
in IEEE Standard, pp. 1-89.
[21] L. Yin , Ch.Liu , X. Lu , J. Chen and C. Liu. (November 2016).
Efficient Compression Algorithm with Limited Resource for Contin-
uous Surveillance. KSII TRANSACTIONS ON INTERNET AND
INFORMATION SYSTEMS VOL. 10, NO. 11, Nov. 2016.
[22] E. Fasolo, M. Rossi, J. Widmer, and M. Zorzi. (2007). In-
network aggregation techniques for wireless sensor networks:
a survey. Wireless Communications, IEEE, 14(2):70-87, doi :
10.1109/MWC.2007.358967.
[23] M. Razzaque and S Dobson.(2014). Energy-Efficient Sensing in
Wireless Sensor Networks Using Compressed Sensing. In Sensors,
14, 2822-2859, doi : 10.3390/s140202822.
[24] Alsheikh, Mohammad Abu, LIN, Shaowei, Niyato, Dusit, and Hwee-
Pink TAN. (2016). Rate Distortion Balanced Data Compression in
Wireless Sensor Networks. In IEEE Sensors Journal. 16, (12), 5072-
5083, doi : 10.1109/JSEN.2016.2550599.
[25] E. Berlin and K. Van Laerhoven. (2010). An on-line piecewise
linear approximation technique for wireless sensor networks. In
IEEE Local Computer Network Conference in Denver, USA, doi
: 10.1109/LCN.2010.5735832.
[26] I. Ez-zazi, M. Arioua, A. El Oualkadi, P. Lorenz. (2017). Hybrid
Adaptive Coding and Decoding Scheme for Multi-hop Wireless
Sensor Networks. In Wireless Personal Communications, 94(4),
pp. 3017-3033.
[27] N. Li, Y. Liu, F. Wu, B. Tang, (December 2010). WSN Data Distor-
tion Analysis and Correlation Model Based on Spatial Locations. In
Journal of Networks, 5(12):1442-1449, doi : 10.4304/jnw.5.12.1442-
1449.
[28] S.Pradhan, J.Kusuma, and K.Ramchandran.(2002). Distributed com-
pression in a dense microsensor network. IEEE Signal Processing
Magazine, vol 19, no. 2, pp 51-60, doi : 10.1109/79.985684.
... 18 In this work, we propose a hybrid and adaptive spatiotemporal compression approach Distributed Temporal Source Coding (DTSC), typically applied in a clustered WSN. The proposed method presents a combination of Distributed Source Coding (DSC) as an energy efficient and accurate spatial compression 19,20 and Lightweight Temporal Compression (LTC) as a performed temporal compression on the first hand 21,22 and temporal correlation measuring tool on the other hand. DTSC is used to compress the network data in double dimension to alleviate the communication burden which can significantly reduce the overall power consumption. ...
... 15 Notably, LTC is a Piecewise Linear Approximation (PLA) method that has been shown a significant performance in terms of energy saving, owing to its lightweight algorithm. 22 In addition, LTC takes advantage of the incurred temporal correlation to thoroughly compress the collected data with low processing energy for different network topologies (multi-hop 33 and cluster-based WSNs 34 ). Therefore, designing an adequate compression scheme that exploits the data correlation in both space and time is an appealing solution to reduce the energy expenditure and extend the network lifetime with high data accuracy. ...
... LTC is a simple linear approximation technique, which has proven its performance in terms of resources saving. 21,22 It is designed to approximate a given time series x(n) with a sequence of line segments through a low approximation tolerance ε. ...
Article
Full-text available
Energy efficiency and data reliability are the crucial issues to be explored in wireless sensor networks (WSNs). A large amount of power consumption is depleted by each sensor node to transmit its sensed data which typically induces high temporal and spatial dependencies. Therefore, efficient compression approaches are designed to exploit the data correlation in both space and time so as to discard information redundancies with accurate recovery. In this paper, we propose a hybrid and adaptive spatiotemporal compression approach named Distributed Temporal Source Coding, designed to extend the lifespan of cluster based WSNs. The proposed scheme takes advantage of the performed Distributed Source Coding as an effective spatial compression algorithm and adaptively exploits the Lightweight Temporal Compression as a promising solution to reduce the temporally correlated information. The obtained results have revealed that the proposed algorithm outperforms other compression methods, promotes the network battery life, and ensures compelling data accuracy. The proposed scheme has succeeded to reduce the gap between the energy efficiency and data reliability in WSNs.
... Unlike traditional compression methods that treat each data point independently, LTC leverages the temporal relationships between successive data points, exploiting patterns and redundancies within the time series to achieve high compression ratios [15]. LTC's algorithm is particularly well-suited for sensor nodes with limited computational capabilities owing to its streamlined implementation [16]. Moreover, it offers adjustable error tolerance, allowing users to strike a balance between achieving a desired compression ratio and maintaining acceptable data accuracy. ...
Conference Paper
Full-text available
The Internet of Things (IoT) is a revolutionary paradigm that has gained significant prominence in recent years. It represents the interconnection of everyday objects, devices and machines to the internet, allowing them to collect, exchange and analyze data. As the number of connected objects grows exponentially, energy consumption and data security become essential challenges to ensure a sustainable development of green IoT. Data compression theory can be used as an enticing key to downsize the large amount of circulated information, since wireless communication is the prime energy consuming component in IoT devices. In addition, data encryption algorithms are crucial to protect sensitive information and ensure the integrity and confidentiality of IoT systems. Therefore, combining compression and encryption can lead to improved overall IoT system performance in terms of both power cost and data privacy. In this paper, a lightweight and secure compression approach is introduced to efficiently manage the energy expenditure and provide end-to-end security for energy constrained devices. This proposed algorithm combines the performance of LTC compression and AES encryption to securely encode collected data in IoT devices. The energy model analysis has shown that the suggested method is effective in terms of processing energy, reduces the volume of transmitted data and guarantees end-to-end information privacy
... In this work, the analysis of sensor datasets is performed vertically; this compression technique is referred to in the rest of this paper as Vertical Compression (VC). In our previous work [12][13], a comparison of different compression algorithms was carried out, revealing that Piecewise Linear Approximation (PLA) techniques offer noteworthy improvements in terms of energy consumption and data reliability [13]. These algorithms treat the collected information as time series and discard the redundant signal using line segments; PLA is referred to in the rest of this paper as Horizontal Compression (HC). ...
Article
Full-text available
Energy efficiency is an essential issue to be reckoned with in wireless sensor network development. Since low-powered sensor nodes deplete their energy transmitting the collected information, several strategies have been proposed to rein in communication power consumption by reducing the amount of transmitted data without affecting information reliability. Lossy compression is a promising solution recently adopted to address this energy consumption challenge, exploiting data correlation and discarding redundant information. In this paper, we propose a hybrid compression approach based on two dimensions, specified as horizontal (HC) and vertical compression (VC), typically implemented in a cluster-based routing architecture. The proposed scheme considers two key performance metrics, energy expenditure and data accuracy, to decide the adequate compression approach, based on an HC-VC or VC-HC configuration, according to each WSN application's requirements. Simulation results exhibit the performance of both proposed approaches in terms of extending the clustering network lifetime.
... where P is the power, U and I are the voltage and current respectively, and D is the data rate [13]. ...
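The snippet above points to a simple radio energy model: the transmit power is P = U · I, and sending n bits at rate D takes t = n/D seconds, so E = (U · I) · (n/D). A hedged sketch of this relation follows; the function name and the CC2420-like example figures are illustrative assumptions, and the exact model of [13] is not reproduced here.

```python
def transmission_energy(u_volts, i_amps, n_bits, rate_bps):
    """Energy in joules to transmit n_bits: E = (U * I) * (n / D)."""
    power_w = u_volts * i_amps      # P = U * I
    airtime_s = n_bits / rate_bps   # t = n / D
    return power_w * airtime_s

# Illustrative CC2420-like figures: 3.0 V supply, 17.4 mA TX current,
# 250 kbit/s radio; a 1 kB (8192-bit) payload then costs about 1.7 mJ.
e = transmission_energy(3.0, 0.0174, 8192, 250e3)
```

The point the model makes is direct: at fixed supply voltage, current draw and data rate, transmission energy scales linearly with the number of bits sent, which is exactly why compressing before transmitting saves energy.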
Conference Paper
Today, we are witnessing an overpopulation of connected objects due to ubiquitous (pervasive) computing, known as the Internet of Things. Recent developments in industry have heightened the need for the Internet of Things, giving rise to the Industrial Internet of Things. To support its evolution, it is necessary to ensure ease of deployment as well as economic gains for industrial companies. The main challenges of IoT are computing speed, energy savings, bandwidth savings and low latency. These parameters have a serious effect on the use of the Internet of Things, so a solution is needed to optimize its use in industry. This article seeks to address these issues by analyzing the IoT literature, theoretically modeling a fog computing architecture and comparing its performance with the traditional cloud computing model. We suggest a global model and architecture on which industrial companies can rely in order to optimize Internet of Things resources and obtain better results.
... In this paper, we consider a cluster-based sensor network as an energy-efficient routing architecture [3], where each sensor monitors the given phenomenon and periodically sends its collected data to its Cluster-Head (CH). At the first level, a horizontal compression is applied in the sensor node devices, exploiting the performance offered by Piecewise Linear Approximation (PLA) techniques in terms of energy saving, compression ratio and reliability of reconstructed data, as revealed in the literature [4] [5] [6] [7]. Afterwards, a vertical compression is performed at the CH level by means of a similarity function that looks for redundancies across the datasets generated by neighboring sensor nodes belonging to the same cluster, in order to construct groups of nodes with similar data evolution, each represented by only one node. ...
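The vertical compression step described above hinges on a similarity function applied at the CH, which the snippet does not specify. The following is a hypothetical sketch of such a grouping step, using mean absolute difference as the similarity measure and greedy assignment to a group representative; the function names, the measure and the threshold semantics are our own illustrative assumptions.

```python
def group_similar_nodes(readings, threshold):
    """Greedily group nodes whose time series differ from a group
    representative by at most `threshold` mean absolute difference.

    `readings` maps node id -> list of samples (equal lengths).
    Returns a list of (representative_id, member_ids) groups; only the
    representative's series would then need to be forwarded per group.
    """
    def mad(a, b):
        # Mean absolute difference between two equal-length series.
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

    groups = []  # each entry: (rep_id, rep_series, member_ids)
    for node, series in readings.items():
        for g in groups:
            if mad(series, g[1]) <= threshold:
                g[2].append(node)  # close enough: join this group
                break
        else:
            groups.append((node, series, [node]))  # start a new group
    return [(rep, members) for rep, _, members in groups]
```

With two nodes reporting nearly identical temperature traces and a third reporting a distinct one, the first two collapse into one group whose representative alone carries their shared trend, which is the redundancy the vertical compression exploits.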
Article
We consider the remote estimation of a stochastic piecewise linear signal, observed by a sensor, at a monitor. The sensor transmits a packet whenever the observed signal's slope changes. The packets are transmitted from the sensor to the monitor through an unreliable channel which randomly loses packets. The monitor sequentially estimates the signal using the information obtained from successfully received packets. The sensor does not receive any feedback from the monitor. We derive an analytical expression for the average age of incorrect information, a recently proposed information freshness metric. The average age of incorrect information is shown to be a function of the success probability of transmission and of signal parameters representing the rate and clustering of slope changes. We obtain an upper bound on the mean absolute error of the remote estimate using the slope-weighted age of incorrect information. The age of incorrect information is also studied for a homogeneous multisensor scenario, where sensors use slotted ALOHA and the links between the sensors and the monitor are unreliable due to contention.
Chapter
Edge computing is currently one of the main research topics in the field of the Internet of Things. It requires lightweight and computationally simple algorithms for sensor data analytics. Sensing edge devices are often battery-powered and wirelessly connected, so energy efficiency must be taken into account in their design. Pre-processing the data locally in the edge device reduces the amount of data and thus the energy consumed by wireless transmission. The sensor data compression algorithms presented in this paper are mainly based on data linearity: microclimate data is near-linear over short time windows, so simple linear-approximation-based compression algorithms can achieve rather good compression ratios with low computational complexity. Using this kind of simple compression algorithm can significantly improve the battery and thus the edge device lifetime. In this paper, linear-approximation-based compression algorithms are tested on microclimate data.
Conference Paper
Lightweight Temporal Compression (LTC) is an energy-efficient lossy compression algorithm with O(1) memory usage and per-sample computational cost. The method provides a trade-off between compression ratio and accuracy using an error bound. In this paper, we present the Refined LTC (RLTC) algorithm, which uses a binning approach to widen the search space, increase LTC's compression ratio and reduce its dynamic energy consumption — characterized by CPU computations and radio transmissions — without compromising the error bound. The proposed RLTC algorithm adds negligible overhead to the memory usage and latency of LTC. Experimental results on an environmental sensor dataset show that the LTC compressed byte stream can be further reduced in size by up to 18%, while dynamic energy consumption is reduced by 9.5% on average.
Article
Full-text available
In this paper, we propose a hybrid adaptive coding and decoding scheme for multi-hop wireless sensor networks (WSNs). Energy consumption and transmission reliability are used as performance metrics for multi-hop communications in WSNs. The presented scheme takes into account distance, channel conditions and correction code performance to decide the coding and decoding procedure, and considers Reed-Solomon and Low-Density Parity-Check codes to provide error protection for the transmitted data. The proposed approach aims to reduce decoding power consumption, prolong the lifetime of the network and improve transmission reliability. Simulation results show that the proposed scheme enhances both the energy efficiency and the communication reliability of multi-hop sensor networks.
Article
Full-text available
This paper presents a data compression algorithm with error bound guarantee for wireless sensor networks (WSNs) using compressing neural networks. The proposed algorithm minimizes data congestion and reduces energy consumption by exploring spatio-temporal correlations among data samples. The adaptive rate-distortion feature balances the compressed data size (data rate) with the required error bound guarantee (distortion level). This compression relieves the strain on energy and bandwidth resources while collecting WSN data within tolerable error margins, thereby increasing the scale of WSNs. The algorithm is evaluated using real-world datasets and compared with conventional methods for temporal and spatial data compression. The experimental validation reveals that the proposed algorithm outperforms several existing WSN data compression methods in terms of compression efficiency and signal reconstruction. Moreover, an energy analysis shows that compressing the data can reduce the energy expenditure, and hence expand the service lifespan by several folds.
Article
Full-text available
The advancement of wireless technologies and digital integrated circuits led to the development of Wireless Sensor Networks (WSNs). A WSN consists of various sensor nodes and relays capable of computing, sensing and communicating wirelessly. Nodes in WSNs have very limited resources such as memory, energy and processing capability. Many image compression techniques have been proposed to address these limitations; however, most of them are not applicable on sensor nodes due to memory limitations, energy consumption and processing speed. To overcome this problem, we selected the Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques, as they can be implemented on sensor nodes. Both DCT and DWT allow an efficient trade-off between compression ratio and energy consumption. In this paper, both DCT and DWT are analyzed and implemented using TinyOS on the TelosB hardware platform. The metrics used for performance evaluation are peak signal-to-noise ratio (PSNR), compression ratio (CR), throughput, end-to-end (ETE) delay and battery lifetime. Moreover, we also evaluated DCT and DWT in single-hop and multi-hop networks. Experimental results show that DWT outperforms DCT in terms of PSNR, throughput, ETE delay and battery lifetime, while DCT provides a better compression ratio than DWT. The average medium access control (MAC) layer delay for both DCT and DWT is also calculated and experimentally demonstrated.
Conference Paper
Full-text available
Wireless Sensor Networks (WSNs) are networks composed of a number of sensor nodes that communicate wirelessly. WSNs are utilized over a wide range of applications. This paper looks at WSNs from the applications point of view: a survey of some of the key applications that utilize WSNs nowadays, along with their specifications and capabilities, is briefly presented.
Article
Full-text available
Wireless sensor networks (WSNs) are highly resource constrained in terms of power supply, memory capacity, communication bandwidth, and processor performance. Compression of sampling, sensor data, and communications can significantly improve the efficiency of utilization of three of these resources, namely, power supply, memory and bandwidth. Recently, there have been a large number of proposals describing compression algorithms for WSNs. These proposals are diverse and involve different compression approaches. It is high time that these individual efforts are put into perspective and a more holistic view taken. In this article, we take a step in that direction by presenting a survey of the literature in the area of compression and compression frameworks in WSNs. A comparative study of the various approaches is also provided. In addition, open research issues, challenges and future research directions are highlighted.
Article
Full-text available
Sensing of the application environment is the main purpose of a wireless sensor network. Most existing energy management strategies and compression techniques assume that the sensing operation consumes significantly less energy than radio transmission and reception. This assumption does not hold in a number of practical applications. Sensing energy consumption in these applications may be comparable to, or even greater than, that of the radio. In this work, we support this claim by a quantitative analysis of the main operational energy costs of popular sensors, radios and sensor motes. In light of the importance of sensing level energy costs, especially for power hungry sensors, we consider compressed sensing and distributed compressed sensing as potential approaches to provide energy efficient sensing in wireless sensor networks. Numerical experiments investigating the effectiveness of compressed sensing and distributed compressed sensing using real datasets show their potential for efficient utilization of sensing and overall energy costs in wireless sensor networks. It is shown that, for some applications, compressed sensing and distributed compressed sensing can provide greater energy efficiency than transform coding and model-based adaptive sensing in wireless sensor networks.
Article
Full-text available
Data compression is a useful technique in the deployments of resource-constrained wireless sensor networks (WSNs) for energy conservation. In this letter, we present a new lossless data compression algorithm in WSNs. Compared to existing WSN data compression algorithms, our proposed algorithm is not only efficient but also highly robust for diverse WSN data sets with very different characteristics. Using various real-world WSN data sets, we show that the proposed algorithm significantly outperforms existing popular lossless compression algorithms for WSNs such as LEC and S-LZW. The robustness of our algorithm has been demonstrated, and the insight is provided. The energy consumption of our devised algorithm is also analyzed.
Article
Energy efficiency of resource-constrained wireless sensor networks is critical in applications such as real-time monitoring/surveillance. To improve energy efficiency and reduce energy consumption, time series data can be compressed before transmission. However, most compression algorithms for time series data were developed only for univariate scenarios, while in practice there are often multiple sensor nodes in one application and the collected data is actually a multivariate time series. In this paper, we propose to compress the time series data by the Lasso (least absolute shrinkage and selection operator) approximation. We show that our approach can be naturally extended to compressing multivariate time series data. Our extension is novel in that it constructs an optimal projection of the original multivariates where the best energy efficiency can be realized. The two algorithms are named ULasso (Univariate Lasso) and MLasso (Multivariate Lasso), for which we also provide practical guidance on parameter selection. Finally, an empirical evaluation is carried out with several publicly available real-world data sets from different application domains. We quantify the algorithm performance by measuring the approximation error, compression ratio and computational complexity. The results show that ULasso and MLasso are superior to, or at least equivalent to, the compression performance of LTC and PLAMlis. In particular, MLasso can significantly compress smooth multivariate time series data without breaking the major trends and important changes of the sensor network system.
Article
Lossy temporal compression is key for energy-constrained wireless sensor networks (WSNs), where the imperfect reconstruction of the signal is often acceptable at the data collector, subject to some maximum error tolerance. In this article, we evaluate a number of selected lossy compression methods from the literature and extensively analyze their performance in terms of compression efficiency, computational complexity, and energy consumption. Specifically, we first carry out a performance evaluation of existing and new compression schemes, considering linear, autoregressive, FFT-/DCT- and wavelet-based models, by looking at their performance as a function of relevant signal statistics. Second, we obtain formulas through numerical fittings to gauge their overall energy consumption and signal representation accuracy. Third, we evaluate the benefits that lossy compression methods bring about in interference-limited multihop networks, where the channel access is a source of inefficiency due to collisions and transmission scheduling. Our results reveal that the DCT-based schemes are the best option in terms of compression efficiency but are inefficient in terms of energy consumption. Instead, linear methods lead to substantial savings in terms of energy expenditure by, at the same time, leading to satisfactory compression ratios, reduced network delay, and increased reliability performance.
Article
This paper is a revised version of an article by the same title and author which appeared in the April 1991 issue of Communications of the ACM. For the past few years, a joint ISO/CCITT committee known as JPEG (Joint Photographic Experts Group) has been working to establish the first international compression standard for continuous-tone still images, both grayscale and color. JPEG’s proposed standard aims to be generic, to support a wide variety of applications for continuous-tone images. To meet the differing needs of many applications, the JPEG standard includes two basic compression methods, each with various modes of operation. A DCT-based method is specified for “lossy’ ’ compression, and a predictive method for “lossless’ ’ compression. JPEG features a simple lossy technique known as the Baseline method, a subset of the other DCT-based modes of operation. The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications. This article provides an overview of the JPEG standard, and focuses in detail on the Baseline method. 1