Resilient Image Sensor Networks in Lossy
Channels Using Compressed Sensing
Scott Pudlewski, Arvind Prasanna, Tommaso Melodia
Wireless Networks and Embedded Systems Laboratory
Department of Electrical Engineering
State University of New York (SUNY) at Buffalo
e-mail: {smp25, ap92, tmelodia}@buffalo.edu
Abstract—Data loss in wireless communications greatly affects
the reconstruction quality of a signal. In the case of images,
data loss results in a reduction in quality of the received image.
Conventionally, channel coding is performed at the encoder to
enhance recovery of the signal by adding known redundancy.
While channel coding is effective, it can be very computationally
expensive. For this reason, a new mechanism of handling data
losses in Wireless Multimedia Sensor Networks (WMSN) using
Compressed Sensing (CS) is introduced in this paper. This system
uses compressed sensing to detect and compensate for data loss
within a wireless network. A combination of oversampling and an
adaptive parity (AP) scheme is used to determine which CS samples contain bit errors, remove those samples, and transmit additional samples to maintain a target image quality.
A study was done to test the combined use of adaptive parity
and compressive oversampling to transmit and correctly recover
image data in a lossy channel to maintain Quality of Information
(QoI) of the resulting images. It is shown that by using the
two components, an image can be correctly recovered even in
a channel with very high loss rates of 10%. The AP portion of
the system was also tested on a software defined radio testbed. It
is shown that by transmitting images using a CS compression
scheme with AP error detection, images can be successfully
transmitted and received even in channels with very high bit
error rates.
I. INTRODUCTION
Wireless Multimedia Sensor Networks (WMSN) [1] are
self-organizing wireless systems of embedded devices de-
ployed to retrieve, distributively process in real-time, store,
correlate, and fuse multimedia streams originated from het-
erogeneous sources. Even though multimedia content can be transmitted successfully despite some losses, it is still important to ensure that the quality of the received content (i.e., its Quality of Information (QoI)) is maintained at an acceptable level for the end user.
Wireless transmissions are notoriously prone to losses [5].
Two main causes of data loss are bit errors due to noisy
channels and missing packets due to transmitter or receiver
errors. To combat this, forward error correction (FEC) is often
used to add known redundancy into the data stream and allow
the receiving node to detect and correct a fixed number of bit
errors. The two types of FEC coding commonly used for this purpose are block codes, such as Reed-Solomon codes [14], [18], and convolutional codes [15], [16], [17]. Although FEC
coding is effective, it can be very computationally expensive.
The advantage of using FEC in sensor networks has been
demonstrated in recent work. For example, in [6] the MicaZ
platform is used to evaluate the performance of turbo codes.
It is shown that using turbo codes consumes less energy than retransmitting lost data; however, this holds only for low data rate applications. Automatic Repeat reQuest (ARQ) is another way of handling losses, based on timeouts and retransmissions of missing or incorrect data. However, for real-time data streams (e.g., video, VoIP), retransmitting old packets may result in the media being reconstructed out of order at the receiver.
Another challenge in WMSNs is the need for compression.
The amount of data needed for many applications (such as
images) requires that redundant information be removed from
the data stream before transmission, thereby reducing the
amount of data transmitted. One negative effect of this is that the “importance” of each transmitted bit increases. In the case of multimedia transmission, the loss of a small amount of data can have a dramatic effect on the quality of the received content.
In this paper, as in [19] and our previous work [13], we
use Compressed Sensing (CS) [7], [3], [8], [4] for both com-
pression and channel coding of images. Compressed sensing
(aka “compressive sampling”) is a new paradigm that allows the faithful recovery of signals from M << N measurements, where N is the number of samples required by Nyquist sampling. Hence, CS can offer an alternative to traditional video encoders by enabling imaging systems that sense and compress data simultaneously. One major advantage of CS-encoded data is that the number of unique samples received is the only factor in determining the successful recovery of the image. In other words, no sample is more important than any other sample [12]. Because of this, any lost sample can be replaced by a different sample from the same image. The authors of [19] use this property to introduce a method for oversampling a signal to increase the chance of recovering a signal that has been subjected to losses. We extend this concept for use with real image signals.
In a real channel, errors within an image transmission will
manifest as the inversion of one or more bits within the image
signal. These errors can be detected using an Adaptive Parity
(AP) scheme [13], which uses a simple parity check to determine which samples contain errors. Oversampling and adaptive parity are then used together to find both missing samples and incorrect samples, and to compensate for both types through oversampling in a joint CSEC-AP system. With this joint system, the transmitting node can both detect and correct bit errors at very little cost in terms of complexity and overhead.
In this paper, we propose a system for detecting bit errors in a CS data stream and compensating for those errors. Specifically:
Integrated Adaptive Loss Detection and Compensa-
tion. In our previous work [13], we introduced an AP
scheme to find bit errors in a CS data stream. In this
work, we combine this concept with oversampling and
evaluate an integrated system which can both detect and
compensate for errored samples.
Experimental Evaluation. We have implemented the AP
portion of the protocol on a USRP2 [9] testbed, and are
able to show that the results are comparable to those
obtained through simulation.
The remainder of this paper is structured as follows. In Sec-
tion II we give a concise introduction to compressed sensing.
In Section III, the joint error detection and oversampling sys-
tem is presented. In Section IV, we introduce the Compressed Sensing Erasure Coding (CSEC) scheme. Section V presents the AP
error detection scheme. Finally, the performance results are
presented in Section VI, while in Section VII we draw the
main conclusions and discuss future work.
II. COMPRESSED SENSING PRELIMINARIES
We consider an image signal represented through a vector $x \in \mathbb{R}^N$, where $N$ is the vector length. We assume that there exists an invertible $N \times N$ transform matrix $\Psi$ such that

$$x = \Psi s, \qquad (1)$$

where $s$ is a $K$-sparse vector, i.e., $\|s\|_0 = K$ with $K < N$, and where $\|\cdot\|_p$ represents the $p$-norm. This means that the image has a sparse representation in some transform domain, e.g., wavelet [10]. The signal is measured by taking $M < N$ measurements as linear combinations of the signal elements through a linear measurement operator $\Phi$. Hence,

$$y = \Phi x = \Phi \Psi s = \tilde{\Psi} s. \qquad (2)$$
We would like to recover $x$ from the measurements in $y$. However, since $M < N$ the system is underdetermined. Hence, given a solution $s_0$ to (2), any vector $s$ such that $s = s_0 + n$, with $n \in \mathcal{N}(\tilde{\Psi})$ (where $\mathcal{N}(\tilde{\Psi})$ represents the null space of $\tilde{\Psi}$), is also a solution to (2). However, it was proven in [3] that if the measurement matrix $\Phi$ is sufficiently incoherent with respect to the sparsifying matrix $\Psi$, and $K$ is smaller than a given threshold (i.e., the sparse representation $s$ of the original signal $x$ is “sparse enough”), then the original $s$ can be recovered by finding the sparsest solution that satisfies (2), i.e., the sparsest solution that “matches” the measurements in $y$. This problem is in general NP-hard [2]. For matrices $\tilde{\Psi}$ with sufficiently incoherent columns, whenever this problem has a sufficiently sparse solution, the solution is unique, and it is
equal to the solution of the following problem:

$$\mathrm{P1:} \quad \text{minimize } \|s\|_1 \quad \text{subject to: } \|y - \tilde{\Psi} s\|_2^2 < \epsilon, \qquad (3)$$

where $\epsilon$ is a small tolerance.

Fig. 1. System Architecture for CSEC-AP System: the estimated sample loss rate drives the CSEC stage and the estimated BER drives the AP stage, turning the compressed samples into an oversampled stream of compressed and encoded samples, with C the estimated number of correctly received samples.
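To make the recovery step concrete, the following is a minimal sketch (our illustration, not part of the original paper) of solving a problem of the form (3) in Python with numpy and cvxpy; the dimensions, the identity sparsifying basis, and the tolerance are arbitrary choices.

import numpy as np
import cvxpy as cp

# Toy dimensions -- arbitrary illustrative choices.
N, M, K = 256, 100, 10          # signal length, measurements, sparsity
rng = np.random.default_rng(0)

s_true = np.zeros(N)            # K-sparse coefficient vector s
s_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

Psi = np.eye(N)                 # sparsifying basis (identity for simplicity)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # incoherent Gaussian measurements
x = Psi @ s_true                # signal, as in (1)
y = Phi @ x                     # measurements, as in (2)

# Problem P1 in (3): the sparsest s consistent with the measurements y.
s = cp.Variable(N)
constraints = [cp.sum_squares(y - (Phi @ Psi) @ s) <= 1e-8]
cp.Problem(cp.Minimize(cp.norm(s, 1)), constraints).solve()
x_hat = Psi @ s.value           # reconstructed signal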
III. IMAGE ENCODING AND RECOVERY USING
COMPRESSED SENSING WITH CSEC AND AP
We propose a system for image transmission using both CSEC and AP. The architecture for this system is shown in Fig. 1. There are two main goals for this system.
Maintain Target Image Quality. The CSEC portion of
the system is charged with maintaining the image quality
given a lossy channel. This system takes as input both the
number of packets expected to be lost due to collision or
transmitter errors and the number of samples expected to
be lost due to bit errors that would be detected by the AP
system. Oversampling is then used to make up for these
errors and allow the receiver to recover the image as if
the original number of samples were sent. For example,
assume that the transmitter intended to transmit 10,000
samples to the receiver to recover some image. Also
assume that 5% of the packets will be lost due to collision
or transmission errors, and 3% of the remaining samples
will be lost due to bit errors, which results in a total error
rate of 7.85%. By oversampling the signal to compensate for the expected loss (as in [19]), the total number of samples K can be found to be 10,852. This tells the transmitter that, based on the loss estimate of 7.85%, if 10,852 samples are transmitted, roughly 10,000 samples will eventually be received correctly. The details of the CSEC oversampling rate are explained in Section IV.
Minimize the Number of Transmitted Samples for
a Target Desired Quality. The AP portion of this
system uses the estimated bit error rate of the channel to
determine the optimal number of samples to include for
each parity bit. This system will then use this information
to determine the expected number of correctly received
samples. This is done by analytically determining the
optimal number of parity bits needed to maximize the
number of correctly received samples at the receiver. The
details of the AP calculation are given in Section V.
The basis for both of these systems is that the compressed
samples which are created using the CS paradigm are all
equally important and losing a single sample does not affect
the receiver's ability to recover any other sample. Also, the specific samples chosen for use in the recovery of the image are arbitrary. This means that, if a sample is lost, a different sample can be transmitted in its place with no effect on the quality of the recovered image.
IV. ERASURE CHANNEL CODING USING COMPRESSED
SENSING (CSEC)
CSEC has the ability to recreate the signal with some degra-
dation even if the errors exceed the threshold for recovery. This
is possible by oversampling the signal to compensate for the
losses. The total number of samples to transmit, K, depends on the channel loss probability p and is given by

$$K = \frac{m}{1 - p}, \qquad (4)$$

where m is the number of correctly received samples needed to achieve a desired image quality, which is a function of the sparsity of the signal. Basically, the coding is done such that the number of correctly received samples for a given loss probability p equals the number of samples needed in the absence of errors, i.e., $(1 - p) \cdot K = m$.
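As a numerical check of (4), a short sketch (our code; the helper name is hypothetical) reproducing the example from Section III, where m = 10,000 correctly received samples are needed and the combined loss rate is 7.85%:

import math

def samples_to_transmit(m: int, p: float) -> int:
    """Eq. (4): number of samples K to send so that, at loss rate p,
    about m samples are received correctly, i.e., (1 - p) * K = m."""
    return math.ceil(m / (1.0 - p))

p = 1.0 - (1.0 - 0.05) * (1.0 - 0.03)   # 5% packet loss + 3% bit-error loss = 0.0785
K = samples_to_transmit(10_000, p)      # -> 10852, as in the Section III example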
To demonstrate the effectiveness of oversampling, a Monte Carlo simulation of 1000 iterations is performed for a signal of length 256 bytes and of varying sparsity in a channel
with a sample loss probability of 0.2. Since the number of
samples is the determining factor in the reconstruction of the
original signal, there should be no difference between the
lossless reconstruction and the oversampled reconstruction.
The sampling matrix is incoherent Gaussian. As Fig. 2 shows,
as the sparsity increases, the probability of exact recovery
of the signal goes down for any channel condition, which
corresponds to the results obtained in [19]. Sparsity here is
defined as the number of non-zero elements in a signal. This
is because as sparsity increases, the information content in
the signal increases. If a sufficient number of samples are not
generated to compensate for this, not all of the information conveyed by the signal is captured and exact reconstruction is not possible. We see that CSEC is able to recover the signal as
well as in the case of no loss. This shows that oversampling
compensates for the losses in the channel.
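The Monte Carlo experiment can be reproduced along the following lines (our sketch; `recover` stands in for an l1 solver such as the cvxpy listing in Section II and is hypothetical here):

import numpy as np

def trial(N: int, K: int, m: int, p_loss: float, oversample: bool, rng) -> bool:
    """One Monte Carlo trial: generate a K-sparse signal, erase each CS
    sample with probability p_loss, recover, and test exact recovery."""
    s = np.zeros(N)
    s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    M = int(np.ceil(m / (1 - p_loss))) if oversample else m   # eq. (4)
    A = rng.standard_normal((M, N)) / np.sqrt(M)              # incoherent Gaussian
    keep = rng.random(M) > p_loss                             # sample erasure channel
    s_hat = recover(A[keep], (A @ s)[keep])                   # hypothetical l1 solver
    return float(np.linalg.norm(s_hat - s)) < 1e-4 * max(np.linalg.norm(s), 1.0)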
Though this shows that oversampling is effective for “ideal”
sparse signals, using CS to compress and reconstruct an image
could behave differently. This is because an image is not
inherently sparse, but is only sparse in the frequency domain
after a wavelet or DCT transform.

Fig. 2. Probability of exact recovery vs. sparsity, recreating Fig. 8 in [19]; curves: no loss, 20% loss, and 20% loss with oversampling.

Any image reconstructed
this way will always be different from the original image,
and the more samples transmitted, the closer the reconstructed
image will be to the original.
To see how the recreation of an image is affected by oversampling, we simulated the recovery of a 32 × 32 image under three conditions: no loss, 20% sample loss, and CSEC with 20% oversampling. The sampling matrix is assumed to be Gaussian with mean zero and variance 1/1024. The number of measurements in the lossless case (m) is taken to be 800. We choose PSNR as the reconstructed image quality indicator, which is defined as
$$\mathrm{PSNR} = 10 \cdot \log_{10} \frac{MAX_I^2}{\mathrm{MSE}}, \qquad (5)$$

where $MAX_I$ is the maximum possible pixel value for each frame and MSE is the mean squared error, which is defined as

$$\mathrm{MSE} = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \| I(i,j) - K(i,j) \|^2. \qquad (6)$$
We use the Discrete Cosine Transform (DCT) as the sparsi-
fying transform and CVX to solve the reconstruction problem
(3).
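For reference, (5) and (6) translate directly into a few lines of numpy (our sketch; MAX_I = 255 assumes 8-bit pixels):

import numpy as np

def psnr(original: np.ndarray, recovered: np.ndarray, max_i: float = 255.0) -> float:
    """Eqs. (5)-(6): PSNR in dB between an m x n image and its reconstruction."""
    mse = np.mean((original.astype(float) - recovered.astype(float)) ** 2)
    return 10.0 * np.log10(max_i ** 2 / mse)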
In the lossless case, the PSNR is found to be 21.40 dB.
With a sample loss rate of 20% and no oversampling, the
PSNR drops to 16.78 dB. Finally, with 20% loss and 20%
oversampling, the PSNR value is 20.10 dB. Comparing PSNR
values of the lossless and oversampled recovery cases, we can
see that the images in both cases have similar reconstruction
quality. The differences between the errorless case and the
oversampling case can be accounted for by variations in the
sampling matrix, which was different for each image.
V. ADAPTIVE PARITY-BASED TRANSMISSION
For a fixed number of bits per frame, the perceptual quality
of images can be improved by dropping errored samples
that would contribute to image reconstruction with incorrect
information. This is demonstrated in Fig. 3 which shows
the image quality both with and without including samples
containing errors. Though the plots in Fig. 3 assume that the
receiver knows which samples have errors, it does demonstrate
that there is a very large possible gain in received image
quality if those samples containing errors can be identified without adding too much overhead.

Fig. 3. SSIM vs. bit error rate when keeping vs. discarding samples containing errors.
We studied this for images in [12]. It was shown that in
CS, the transmitted samples constitute a random, incoherent
combination of the original image pixels. This means that,
unlike traditional wireless imaging systems, no individual sam-
ple is more important for image reconstruction than any other
sample. Instead, the number of correctly received samples is
the main factor in determining the quality of the received
image. Because of this, a sample containing an error can
simply be discarded and the impact on the video quality,
as shown in Fig. 3, is negligible as long as the number of errors is small. This can be realized by using even parity
on a predefined number of samples, which are all dropped
at the receiver or at an intermediate node if the parity check
fails. This is particularly beneficial in situations when the BER
is still low, but too high to simply ignore errors. To determine the number of samples to be jointly encoded, the fraction of useful, correctly received samples is modeled as

$$C = \frac{Q \cdot b}{Q \cdot b + 1} \, (1 - \mathrm{BER})^{Q \cdot b}, \qquad (7)$$

where $C$ is the estimated fraction of correctly received samples, $b$ is the number of jointly encoded samples, and $Q$ is the quantization rate per sample. To determine the optimal value of $b$ for a given BER, (7) can be differentiated, set equal to zero, and solved for $b$, resulting in

$$b = \frac{-1 + \sqrt{1 - \frac{4}{\log(1 - \mathrm{BER})}}}{2Q}. \qquad (8)$$
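As an illustration (our sketch; function names are hypothetical), (7) and (8) can be evaluated as follows. Note that log(1 − BER) is negative, so the square-root argument exceeds one and b is real and positive:

import math

def correct_fraction(b: int, Q: int, ber: float) -> float:
    """Eq. (7): fraction of useful, correctly received samples when b
    samples of Q bits each share one parity bit."""
    n = Q * b
    return (n / (n + 1)) * (1.0 - ber) ** n

def optimal_group_size(Q: int, ber: float) -> int:
    """Eq. (8): optimal number of samples b to encode per parity bit;
    rounding to a positive integer is our practical choice."""
    b = (-1.0 + math.sqrt(1.0 - 4.0 / math.log(1.0 - ber))) / (2.0 * Q)
    return max(1, round(b))

b_star = optimal_group_size(Q=8, ber=1e-3)   # roughly 4 samples per parity bit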
The optimal channel encoding rate can then be found from
the measured/estimated value for the end-to-end BER and
used to encode the samples based on (7). The received video
quality using the parity scheme described was compared to
different levels of channel protection using rate-compatible punctured convolutional (RCPC) codes. Specifically, we use the 1/4 mother codes discussed in [11]. Briefly, a 1/4 convolutional code is punctured to decrease the amount of redundancy added during the encoding process. These codes are punctured progressively, so that every higher-rate code is a subset of the lower-rate codes. For example, any bits that are punctured in the 4/15 code must also be punctured in the 1/3 code, the 4/9 code, and so on down to the highest-rate code.
Fig. 4. Adaptive Parity vs RCPC Encoding for Variable Bit Error rates: SSIM vs. BER for uncoded, parity, RCPC 8/9, 2/3, 1/2, 2/5, and the 1/3 mother code.

Fig. 5. Performance of CSEC-AP System: SSIM vs. BER with oversampling, without oversampling, and over a perfect channel.

Because of this setup, the
receiver can decode the entire family of codes with the same
decoder. This allows the transmitter to choose the most suitable
code for the given data. Clearly, as these codes are punctured to
reduce the redundancy, their ability to correct bit errors decreases. Therefore, we are
trading BER for transmission rate.
Figure 4 shows the adaptive parity scheme compared to
RCPC codes. For all reasonable bit error rates, the adaptive
parity scheme outperforms all levels of RCPC codes. The
parity scheme is also much simpler to implement than more
powerful forward error correction (FEC) schemes. This is
because, even though the FEC schemes have stronger error correction capabilities, the resulting gain in video quality does not make up for the additional overhead, compared to simply dropping the samples that contain errors.
VI. PERFORMANCE EVALUATION
We performed two sets of experiments to assess the perfor-
mance of the proposed error correction architecture. First, a set
of images was transmitted using CSEC with AP for different
sample loss rates. The image quality is shown for different
bit error rates, along with the increase in size necessary to
maintain a constant image quality. Second, the adaptive parity scheme is tested on a USRP2 software defined radio testbed to determine how many errors are correctly detected and the image quality obtained from the retained samples.
A. Simulations
Fig. 5 shows the reconstruction quality of images encoded using the full CSEC-AP system, using only the AP system (detecting and simply removing bad samples), and under an ideal lossless channel.

Fig. 6. Overhead of CSEC-AP System: total transmitted data (bits in / bits out) vs. bit error rate.

Clearly, the proposed system
results in image quality very near to the ideal no-loss case at
reasonable BERs. In all cases, it is assumed that a sample is 8 bits and that even a single bit error within a sample results in the entire sample being discarded. The overhead cost of this error correction scheme is shown in Fig. 6, which plots the additional transmitted information as a function of the bit error rate. We can see that even in the worst case of 1 error for every 10 bits (resulting in a sample error rate of 0.5695), the CSEC-AP scheme only requires the transmitter to send twice the number of bits required for an error-free channel. As the error rate drops to more reasonable values, the overhead decreases very quickly.
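As a quick check of the worst-case figure quoted above (our arithmetic): with 8-bit samples, a BER of 0.1 gives the stated sample error rate.

Q = 8                                    # bits per sample
ber = 0.1                                # worst case: one error every 10 bits
sample_error_rate = 1 - (1 - ber) ** Q   # = 0.5695..., matching the text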
B. Testbed Experiments
The adaptive parity portion of the scheme was also tested on a USRP2 software defined radio platform. A two-hop network of USRP2s was set up to evaluate the performance of the algorithm.
The medium access control (MAC) layer protocol selected
was 802.11b and differential quadrature phase shift keying
(DQPSK) was used as the modulation scheme to achieve a
physical layer data rate of 2 Mbit/s. The maximum size of
each packet was 2100 bytes. The packets were transmitted in
burst mode with each burst consisting of at most six packets.
At the transmitter, a parity bit was appended for a certain
number of samples, determined from the current bit error rate of the channel using (8). A 100 byte header for bit error estimation preceded
the data in the packet. Transmissions were made on a selected
frequency in the 2.4 GHz ISM band. A CSMA/CA scheme
with random backoff time was implemented at the MAC
layer to alleviate the effects of packet collisions. The relay
node has a queue structure that simply forwards the received
packets to the destination node. The receiver decodes the received data and determines, based on the parity bits, which samples were corrupted during transmission.
If there is a parity bit inversion, all the samples that were
included in that parity bit calculation are dropped. Also, the
receiver estimates the bit error rate of the channel through the
100 byte header. The transmitter, after obtaining the estimate of the bit error rate, calculates the new number of samples per parity bit, and the cycle continues. We assume the initial bit error rate of the channel to be zero, so the samples transmitted during the first burst have no parity bits appended.

Fig. 7. Results of Testbed Implementation of CSEC-AP system: SSIM per video frame transmitted over the USRP2 testbed, with and without adaptive parity.

Fig. 8. Measured BER per frame in Testbed Experiments.
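The receiver-side drop rule described above can be sketched as follows (our illustration; the names and bit-level layout are hypothetical):

from typing import List, Tuple

def parity_ok(samples: List[int], parity_bit: int, Q: int = 8) -> bool:
    """Even parity over the Q-bit samples in one group plus its parity bit."""
    ones = sum(bin(s & ((1 << Q) - 1)).count("1") for s in samples)
    return (ones + parity_bit) % 2 == 0

def filter_received(groups: List[Tuple[List[int], int]], Q: int = 8) -> List[int]:
    """Keep only samples from groups that pass the parity check; a single
    bit inversion anywhere in a group drops the whole group."""
    kept: List[int] = []
    for samples, parity_bit in groups:
        if parity_ok(samples, parity_bit, Q):
            kept.extend(samples)
    return kept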
This system was used to transmit and decode 50 frames of
a security video. The results of this experiment are shown in
Fig. 7, while the measured BER is shown in Fig. 8. Even with
very high bit error rates, the algorithm was still able to recover
the images nearly as well as predicted by the simulations.
Whenever there were sample errors, the results using AP were
better than those using no error correction at all.
VII. CONCLUSION AND FUTURE WORK
In this paper, we have presented a system that uses compressed sensing to compress an image and to protect it from channel errors and packet losses. We have expanded on the work in [19] and [13] to present a complete system that detects bit errors and compensates for them in such a way as to maintain image quality at the receiver. We
also presented a testbed setup using USRP2 software defined
radios. We implemented a portion of the system on the testbed
and demonstrated that the performance is very close to the
simulation results.
Future plans for this work include expanding the use of CS
encoded images to video encoding. We are also exploiting the properties of CS-encoded images at other networking layers, such as the transport and MAC layers, to create a system able to transmit video from very simple, low-cost image sensors. This system will be tested on the USRP2 testbed.
REFERENCES
[1] I. F. Akyildiz, T. Melodia, and K. R. Chowdhury. A Survey on
Wireless Multimedia Sensor Networks. Computer Networks (Elsevier),
51(4):921–960, March 2007.
[2] A. Bruckstein, D. Donoho, and M. Elad. From Sparse Solutions of
Systems of Equations to Sparse Modeling of Signals and Images. SIAM
Review, 51(1):34–81, February 2009.
[3] E.J. Candes, J. Romberg, and T. Tao. Robust uncertainty principles: ex-
act signal reconstruction from highly incomplete frequency information.
IEEE Transactions on Information Theory, 52(2):489–509, February
2006.
[4] E.J. Candes and T. Tao. Near-optimal Signal Recovery from Random
Projections and Universal Encoding Strategies? IEEE Transactions on
Information Theory, 52(12):5406–5425, December 2006.
[5] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory.
Wiley, July 2006.
[6] D. Schmidt, M. Berning, and N. Wehn. Error Correction in Single-
Hop Wireless Sensor Networks-A Case Study. In Proc. of Design,
Automation and Test in Europe (DATE), Nice, France, April 2009.
[7] David Donoho. Compressed Sensing. IEEE Transactions on Information
Theory, 52(4):1289–1306, April 2006.
[8] E.J. Candes and J. Romberg and T. Tao. Stable Signal Recovery from
Incomplete and Inaccurate Measurements. Communications on Pure and
Applied Mathematics, 59(8):1207–1223, August 2006.
[9] Ettus Research LLC. http://www.ettus.com/.
[10] A. Graps. An Introduction to Wavelets. IEEE Computational Science
and Engineering, 2:50–61, 1995.
[11] J. Hagenauer. Rate-compatible punctured convolutional codes (RCPC
codes) and their applications. IEEE Transactions on Communications,
36(4):389–400, Apr 1988.
[12] T. Melodia and S. Pudlewski. A Case for Compressive Video Streaming
in Wireless Multimedia Sensor Networks. IEEE COMSOC MMTC E-
Letter, 4(9), October 2009.
[13] S. Pudlewski and T. Melodia. On the Performance of Compressive Video
Streaming for Wireless Multimedia Sensor Networks. In Proc. of IEEE
Int Conf on Communications (ICC), Cape Town, South Africa, May
2010.
[14] I. S. Reed and G. Solomon. Polynomial Codes Over Certain Finite
Fields. Journal of the Society for Industrial and Applied Mathematics,
8(2):300–304, 1960.
[15] A. Viterbi. Convolutional Codes and Their Performance in Commu-
nication Systems. IEEE Transactions on Communication Technology,
19(5):751–772, October 1971.
[16] Viterbi, A. Error Bounds for Convolutional Codes and an Asymptotically
Optimum Decoding Algorithm. IEEE Transactions on Information
Theory, 13(2):260–269, Apr 1967.
[17] Viterbi, A.J. and Wolf, J.K. and Zehavi, E. and Padovani, R. A Prag-
matic Approach to Trellis-Coded Modulation. IEEE Communications
Magazine, 27(7):11–19, Jul 1989.
[18] Stephen B. Wicker. Error Control Systems for Digital Communications
and Storage. Prentice-Hall, Inc., New Jersey, 1995.
[19] Z. Charbiwala and S. Chakraborty and S. Zahedi and Y. Kim and M. B.
Srivastava and T. He and C. Bisdikian. Compressive Oversampling for
Robust Data Transmission in Sensor Networks. In Proc. of IEEE Conf.
on Computer Communications (INFOCOM), San Diego, CA, March
2010.