Received March 5, 2017, accepted March 30, 2017, date of publication April 6, 2017, date of current version May 17, 2017.
Digital Object Identifier 10.1109/ACCESS.2017.2691581
A Robust and Secure Video Steganography
Method in DWT-DCT Domains Based on Multiple
Object Tracking and ECC
1Department of Computer Science and Engineering, University of Bridgeport, Bridgeport, CT 06604, USA
2School of Computing, Sacred Heart University, Fairfield, CT 06825, USA
Corresponding author: Ramadhan J. Mstafa
ABSTRACT Over the past few decades, the art of secretly embedding and communicating digital data has
gained enormous attention because of the technological development in both digital contents and commu-
nication. The imperceptibility, hiding capacity, and robustness against attacks are three main requirements
that any video steganography method should take into consideration. In this paper, a robust and secure video
steganographic algorithm in discrete wavelet transform (DWT) and discrete cosine transform (DCT) domains
based on the multiple object tracking (MOT) algorithm and error correcting codes is proposed. The secret
message is preprocessed by applying both Hamming and Bose, Chaudhuri, and Hocquenghem codes for
encoding the secret data. First, motion-based MOT algorithm is implemented on host videos to distinguish
the regions of interest in the moving objects. Then, the data hiding process is performed by concealing
the secret message into the DWT and DCT coefficients of all motion regions in the video depending on
foreground masks. Our experimental results illustrate that the suggested algorithm not only improves the
embedding capacity and imperceptibility but also enhances its security and robustness by encoding the secret
message and withstanding against various attacks.
INDEX TERMS Video steganography, multimedia security, data hiding techniques, multiple object tracking,
DWT, DCT, ECC, imperceptibility, embedding capacity, robustness.
In spite of the fact that the Internet is utilized as a medium
to access desired information, it has also opened a new door
for attackers to obtain precious information of other users
with little effort [1]. Steganography functions in a complementary capacity, offering a protection mechanism that hides communication between an authorized transmitter and
its recipient. Steganography is defined as the art of concealing
secret information in specific carrier data, establishing covert
communication channels between official parties [2], [3].
Subsequently, a stego object (steganogram) should appear the same as the original data, with only a slight change in its statistical features. The primary objective of steganography is to eliminate any suspicion about the transmission of hidden messages and to provide security and anonymity for legitimate parties. The simplest way to observe the steganogram's
visual quality is to determine its accuracy, which is achieved
through the Human Visual System (HVS). The HVS cannot
identify slight distortions in the steganogram, thus avoid-
ing suspiciousness [4]. However, if the size of the hidden
message in proportion with the size of the carrier object
is large, then the steganogram’s degradation will be visi-
ble to the human eye resulting in a failed steganographic
method [5].
Embedding efficiency, hiding capacity, and robustness are
the three major requirements incorporated in any successful
steganographic method [6]. First, embedding efficiency can
be determined by answering the following questions [7], [8]:
1) how safe is the steganographic method to conceal the
hidden information inside the carrier object? 2) how precise
are the steganograms’ qualities after the hiding procedure
occurs? and 3) is the secret message undetectable from the
steganogram? In other words, the steganography method is
highly efficient if it includes encryption, imperceptibility,
and undetectability characteristics. A highly efficient algorithm conceals the covert information into the carrier data
2169-3536 2017 IEEE. Translations and content mining are permitted for academic research only.
Personal use is also permitted, but republication/redistribution requires IEEE permission.
VOLUME 5, 2017
R. J. Mstafa et al.: Robust and Secure Video Steganography Method in DWT-DCT Domains Based on MOT and ECC
FIGURE 1. General diagram of the steganography method.
by utilizing some of the encoding and encryption techniques
prior to embedding stage for improving the security of the
underlying algorithm [9]. Fig. 1 represents the general model of a steganographic method.
Steganograms with low alteration rate and high quality
do not draw the hacker’s attention, and thus will avoid any
suspicion for the covert information [10]. If the steganogra-
phy method is more effective, then the steganalytical detec-
tors will find it more challenging to detect the hidden
message [11].
The hiding capacity is the second fundamental requirement
which permits any steganography method to increase the size
of hidden message taking into account the visual quality
of the steganograms. The hiding capacity is the quantity of
the covert messages inserted inside the carrier object [12].
In ordinary steganographic methods, hiding capacity and embedding efficiency are in conflict [13]: if the hiding capacity is increased, then the quality of the steganograms is diminished, decreasing the efficiency of the underlying method. The embedding efficiency is thus affected by the embedding capacity [14].
with the minimum alteration rate of the carrier object, many
steganographic methods have been presented using differ-
ent strategies. These methods utilize linear block codes and
matrix encoding fundamentals which include BCH codes,
Hamming codes, Cyclic codes, Reed-Solomon codes, and
Reed-Muller codes [15], [16].
Robustness is the third requirement which measures the
steganographic method’s strength against attacks and signal
processing operations [17]. These operations contain geomet-
rical transformation, compression, cropping, and filtering.
A steganographic method is robust whenever the recipient
obtains the secret message accurately, without bit errors.
An efficient steganography method withstands against both
adaptive noises and signal processing operations [18], [19].
Chang et al. [20] presented a data concealing algorithm
using a High Efficiency Video Coding (HEVC) utilizing both
DCT and Discrete Sine Transform (DST) methods. In this
scheme, HEVC intra frames are used to conceal the hidden
message without propagating the error of the distortion drift
to the adjacent blocks. Blocks of quantized DCT (QDCT)
and DST coefficients are selected for embedding the secret
data by using a specific intra prediction mode. The combina-
tion modes of adjacent blocks will produce three directional
patterns of error propagation for data hiding, consisting of
vertical, horizontal, and diagonal. Each of the error propa-
gation patterns has a range of intra prediction modes that
protect a group of pixels in any particular direction. The
range of the modes begins at 0 and ends at 34. Chang et al.'s algorithm has a low embedding payload because the selection of blocks for the embedding process must meet certain conditions.

Ma et al. [21] presented a video data hiding method for H.264
coding without having an error accumulation in the intra
video frames. In the intra frame coding, the current block
predicts its data from the encoded adjacent blocks, specif-
ically from the boundary pixels of upper and left blocks.
Thus, any embedding process that occurs in these blocks
will propagate the distortion, negatively, to the current block.
In addition, the distortion drift will be increased toward the
lower right intra frame blocks. To prevent this distortion
drift, authors have developed three conditions to determine
the directions of intra frame prediction modes. To select
4×4 QDCT coefficients of the luminance component for
data embedding, the three raised conditions must be satisfied together. However, this method has a low embedding capacity because only the luminance of the intra frame blocks that meet the three conditions is selected for hiding data.

Shahid et al. [22] proposed a reconstruction loop for
embedding information of intra and inter frames for
H.264/AVC video codec. This method embeds the secret
message into the LSB of QDCT coefficients. Only non-
zero QDCT coefficients are chosen for data hiding process,
utilizing the predefined threshold, which directly depends on
the size of secret information. Edges, texture, and motion
regions of intra and inter frames are utilized in the concealing process. Shahid et al.'s algorithm extracts the hidden message easily and maintains the efficiency of compression.

Wang et al. [23] presented a real-time watermarking
method in the H.264/AVC codec based on the context
adaptive binary arithmetic coding (CABAC) features. The
CABAC encoder uses a unary binarization, which is a pro-
cess of concatenating all binary values of syntax elements.
A certain number of motion vectors for both predicted and bi-
directionally predicted frames are utilized for the data hiding
process using the CABAC properties. The secret watermark
is concealed by displacing the binary sequence of the selected
syntax elements in order. This method achieves low degradation of the video quality because the difference between the original code and the replacement code is very small (at most 1 bit is altered out of the 8 bits of the selected motion vector). This small difference also results in only a small bit-rate increase. The percentage of bit-rate increase, µ, is calculated as follows:

   µ = (m − u) / u × 100%   (1)

where u and m indicate the bit-rates of the original and the watermarked videos, respectively.

Liu et al. [24] presented a robust data hiding method using the H.264/AVC codec without deformation accumulation in the intra frames, based on BCH codes. By using the directions of the intra frame
prediction, the deformation accumulation of the intra frame
can be prevented. Some blocks will be chosen as carrier
object for concealing the covert message. This procedure will
rely on the prediction of the intra frame modes of adjacent
blocks to prevent the deformation that proliferates from the
neighboring blocks. The authors applied BCH encoding to
the hidden message before the embedding phase to enhance
the method performance. Then, the encoded information is
concealed into the 4 ×4 QDCT coefficients using only a
luminance plane of the intra frame. Liu et al. defined N as a positive integer and Ỹij as the selected QDCT coefficients (i, j = 0, 1, 2, 3). The embedding process of this method is carried out by the following steps:

1) If |Ỹij| = N + 1, then Ỹij is shifted as follows:

   Ỹ′ij = Ỹij + 1,  if Ỹij ≥ 0 and |Ỹij| = N + 1
   Ỹ′ij = Ỹij − 1,  if Ỹij < 0 and |Ỹij| = N + 1   (2)
   Ỹ′ij = Ỹij,      if |Ỹij| ≠ N + 1 and |Ỹij| ≠ N

2) If the secret bit is 1 and |Ỹij| = N, then Ỹij is changed as follows:

   Ỹ′ij = Ỹij + 1,  if Ỹij ≥ 0 and |Ỹij| = N
   Ỹ′ij = Ỹij − 1,  if Ỹij < 0 and |Ỹij| = N   (3)

3) If the secret bit is 0 and |Ỹij| = N, then Ỹij is not modified.
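Read this way, the rule is a small magnitude-shifting scheme. The Python sketch below illustrates it; the absolute-value reading of the conditions and the function names are our interpretation, not the paper's code:

```python
def embed_bit(coeff, bit, N):
    """Liu et al.-style rule (sketch): hide one bit in a QDCT coefficient.

    Coefficients with |coeff| = N carry data; coefficients with
    |coeff| >= N + 1 are shifted outward by 1 so the two cases stay
    distinguishable at the decoder. Values are assumed integers.
    """
    sign = 1 if coeff >= 0 else -1
    if abs(coeff) >= N + 1:          # step 1: shift non-carrier values away
        return coeff + sign
    if abs(coeff) == N:              # steps 2-3: carrier position
        return coeff + sign if bit == 1 else coeff
    return coeff                     # |coeff| < N: untouched

def extract_bit(coeff, N):
    """Recover the bit: |coeff| = N means 0, |coeff| = N + 1 means 1."""
    if abs(coeff) == N:
        return 0
    if abs(coeff) == N + 1:
        return 1
    return None                      # position carried no data
```

For example, with N = 2 a coefficient of 2 becomes 3 when embedding a 1 and stays 2 for a 0, while an original 3 is pushed to 4 so it can never be confused with a data carrier.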
Ke et al. [25] presented a video steganography method that relies on replacing bits in the H.264 stream. In this algorithm,
context adaptive variable length coding (CAVLC) entropy
coding has been applied in the data concealing process. Dur-
ing the video coding and after the quantization stage, authors
used non-zero coefficients of high frequency regions for the
luminance component of the embedding process. Here, non-
zero coefficients in high frequency bands are almost ‘‘+1’’
or ‘‘-1’’. The embedding phase can be completed based on
the trailing ones sign flag and the level of the codeword
parity flag. The sign flag of the trailing ones changes if the
embedding bit equals ‘‘0’’ and the parity of the codeword
is even. Also, the sign flag changes if the embedding bit
equals ‘‘1’’ and the parity of the codeword is odd. Otherwise,
the sign flag of the trailing ones does not change. The trailing
ones are modified as follows:

   Trailing Ones = even codeword,  if secret bit = 0
   Trailing Ones = odd codeword,   if secret bit = 1   (4)

The modification of high frequency coefficients does not have an impact on the video quality. However, the embedding capacity is low because Ke et al.'s method relies on the non-zero coefficients of the high-frequency bands, which consist mostly of zeros.
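The parity rule of Eq. (4) can be sketched as follows, treating the codeword as a bit list whose last bit stands in for the trailing-ones sign flag (this layout is an illustrative assumption):

```python
def embed_in_trailing_ones(codeword_bits, secret_bit):
    """Sketch of CAVLC trailing-ones embedding following Eq. (4):
    after embedding, an even-parity codeword encodes bit 0 and an
    odd-parity codeword encodes bit 1. The sign flag (taken here as
    the last bit) is flipped only when the parity disagrees."""
    bits = list(codeword_bits)
    if sum(bits) % 2 != secret_bit:   # parity mismatch -> flip the sign flag
        bits[-1] ^= 1
    return bits

def extract_from_trailing_ones(codeword_bits):
    """Decoder side: the hidden bit is simply the codeword parity."""
    return sum(codeword_bits) % 2
```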
Alavianmehr et al. [26] presented a robust uncompressed
video steganography by utilizing the histogram distribution
constrained (HDC). In this method, the Y component of every frame is segmented into non-overlapping blocks (C) of size m×n. Then, the secret message is concealed into these blocks based on a shifting process. The selected blocks are changed only when the secret message bits are ''1''. Alavianmehr et al.'s method withstands compression attacks. However, it utilizes only the Y plane for data embedding.
Video steganography is getting the attention of researchers
in the area of video processing due to substantial growth in
video data. The recent literature reports a significant amount
of video steganography algorithms. Unfortunately, many of
these algorithms lack the preprocessing stages. Particularly,
there is no video steganography algorithm that includes pre-
processing stages for both secret messages and cover videos.
Moreover, existing steganography techniques suffer major weaknesses in several aspects, including security, embedding capacity, imperceptibility, and robustness against attacks.
This paper is motivated by the limitations of the existing
video steganography algorithms, and is based on the following reasons to improve the performance of these algorithms:

- Utilizing preprocessing stages to manipulate both the secret messages and the cover videos prior to the embedding stage enhances the security and robustness of the steganographic method.

- Using only a portion of each video frame as regions of interest for the concealing process improves the imperceptibility of stego videos. Accordingly, we track multiple moving objects in the video. Because the hidden message is concealed only in the moving objects, whose positions change over time from one frame to another, it is very challenging for attackers to locate the hidden message in the video frames, which preserves the security and robustness of the embedded data.

- Applying encryption methods and ECC such as Hamming codes and BCH codes to encode the hidden message prior to the concealing stage produces a secure and robust steganographic algorithm.

- Transforming video frames into a frequency domain such as the DWT and DCT domains improves the robustness of the steganographic method against attacks while preserving the imperceptibility of stego videos.
The remaining parts of the paper are organized as follows:
Section 2 explains DWT and DCT transformations. Hamming
and BCH ECC are given in Section 3. Section 4 presents
the motion-based MOT. The proposed video steganography
methodology is illustrated in Section 5. Section 6 provides
experimental results and discussion. Finally, Section 7 con-
cludes the paper and suggests future directions.
DWT and DCT are well-known methods which convert digi-
tal data from the spatial domain to the transform domain [27].
First, the two-dimensional DWT is a multi-resolution process
that decomposes the video frame into approximation, hori-
zontal, vertical, and diagonal sub-bands using low and high
pass decomposition filters. Fig. 2 illustrates the first level of
a two-dimensional DWT decomposition showing each of LL,
LH, HL, and HH sub-bands. In order to perform a perfect reconstruction process, the following wavelet equations must be satisfied:

   Lo_D(z) Hi_D(z) + Lo_R(z) Hi_R(z) = 2   (5)

   Hi_R(z) = z^−k Lo_D(z)   (7)

where Lo_D(z) and Hi_D(z) indicate the decomposition wavelet filters, and Lo_R(z) and Hi_R(z) represent the reconstruction wavelet filters. The Haar wavelet filters are given in the following equations:

   Lo_D(z) = (1/√2)(1 + z^−1)   (8)

   Hi_D(z) = (1/√2)(1 − z^−1)   (9)
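As an illustration of the first-level decomposition in Fig. 2, the following sketch applies the Haar analysis filters along rows and then columns of a small frame. It is a toy stand-in for one colour channel; the paper's implementation is in MATLAB and operates on full RGB frames:

```python
import math

S = 1 / math.sqrt(2)  # Haar filter coefficient 1/sqrt(2)

def haar_1d(x):
    """One level of the 1-D Haar analysis filter bank:
    approximation a[k] = (x[2k] + x[2k+1])/sqrt(2),
    detail        d[k] = (x[2k] - x[2k+1])/sqrt(2)."""
    a = [(x[i] + x[i + 1]) * S for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) * S for i in range(0, len(x), 2)]
    return a, d

def haar_2d(frame):
    """First-level 2-D DWT: filter rows, then columns, producing the
    LL, LH, HL and HH sub-bands of Fig. 2. `frame` is an even-sized
    list of lists of pixel values."""
    lo_rows, hi_rows = [], []
    for row in frame:
        a, d = haar_1d(row)
        lo_rows.append(a)
        hi_rows.append(d)
    def cols(mat):  # apply the 1-D transform down each column
        pairs = [haar_1d(list(col)) for col in zip(*mat)]
        lo = [list(r) for r in zip(*[p[0] for p in pairs])]
        hi = [list(r) for r in zip(*[p[1] for p in pairs])]
        return lo, hi
    LL, LH = cols(lo_rows)
    HL, HH = cols(hi_rows)
    return LL, LH, HL, HH
```

A constant 2×2 block of ones yields LL = 2 (twice the mean, per the orthonormal scaling) and zero detail sub-bands, as expected for a flat region.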
FIGURE 2. First level of a two-dimensional DWT decomposition [28].
Second, the DCT is mainly applied in video and image compression. A two-dimensional DCT corresponds to a one-dimensional DCT applied along the first dimension followed by a one-dimensional DCT along the second dimension [29], [30]. For a video frame A of size M×N, the two-dimensional DCT and the inverse two-dimensional DCT are calculated as follows, respectively [31]:

   B_pq = α_p α_q Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} A_mn cos[π(2m+1)p / 2M] cos[π(2n+1)q / 2N]   (10)

   A_mn = Σ_{p=0}^{M−1} Σ_{q=0}^{N−1} α_p α_q B_pq cos[π(2m+1)p / 2M] cos[π(2n+1)q / 2N]   (11)

where

   α_p = 1/√M for p = 0, and √(2/M) for 1 ≤ p ≤ M−1   (12)

   α_q = 1/√N for q = 0, and √(2/N) for 1 ≤ q ≤ N−1   (13)

p and q are the transform-domain counterparts of m and n, and they have the resolution of M×N.
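The forward transform can be computed directly from this definition. The following sketch is a naive O(M²N²) implementation for small blocks, illustrative only (production codecs use fast factorizations):

```python
import math

def alpha(p, M):
    """Normalisation factor: 1/sqrt(M) for p = 0, sqrt(2/M) otherwise."""
    return math.sqrt(1.0 / M) if p == 0 else math.sqrt(2.0 / M)

def dct2(A):
    """Two-dimensional DCT-II computed directly from the definition.
    `A` is an M x N list of lists of pixel values."""
    M, N = len(A), len(A[0])
    B = [[0.0] * N for _ in range(M)]
    for p in range(M):
        for q in range(N):
            s = 0.0
            for m in range(M):
                for n in range(N):
                    s += (A[m][n]
                          * math.cos(math.pi * (2 * m + 1) * p / (2 * M))
                          * math.cos(math.pi * (2 * n + 1) * q / (2 * N)))
            B[p][q] = alpha(p, M) * alpha(q, N) * s
    return B
```

For a flat 2×2 block of ones, all the energy collects in the DC coefficient B[0][0] and the AC coefficients vanish.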
In our proposed work, the Hamming (7, 4) code is utilized (n = 7, k = 4, and p = 3), where a single bit error can be corrected. A message of k bits (m1, m2, ..., mk) is encoded by adding p additional bits (p1, p2, p3) as parity, converting it into a codeword of 7-bit length [32].

FIGURE 3. Venn diagram of the Hamming codes (7, 4).

The usual arrangement of message and parity bits places the parity bits at positions 2^i (i = 0, 1, ..., n − k − 1), giving the order p1, p2, m1, p3, m2, m3, m4. A Venn diagram of the Hamming codes (7, 4) is illustrated in Fig. 3.
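The parity relations visualized by the Venn diagram in Fig. 3 can be sketched in Python. The bit layout p1, p2, m1, p3, m2, m3, m4 follows the arrangement above; the specific parity equations below are the conventional ones for this code and are our assumption:

```python
def hamming74_encode(m):
    """Encode 4 message bits [m1, m2, m3, m4] into the 7-bit codeword
    [p1, p2, m1, p3, m2, m3, m4], parity bits at positions 1, 2 and 4."""
    m1, m2, m3, m4 = m
    p1 = m1 ^ m2 ^ m4          # covers codeword positions 3, 5, 7
    p2 = m1 ^ m3 ^ m4          # covers codeword positions 3, 6, 7
    p3 = m2 ^ m3 ^ m4          # covers codeword positions 5, 6, 7
    return [p1, p2, m1, p3, m2, m3, m4]

def hamming74_decode(c):
    """Correct up to one bit error and return the 4 message bits.
    The syndrome (s1, s2, s3) read as a binary number gives the
    1-based position of the flipped bit (0 means no error)."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c[pos - 1] ^= 1              # flip the erroneous bit
    return [c[2], c[4], c[5], c[6]]
```

Flipping any single bit of an encoded word still decodes to the original message, which is exactly the single-error-correcting property used for the secret data.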
In addition to Hamming codes, BCH (7, 4) codes are also used over the Galois field GF(2^m), where m = 3, k = 4, and n = 2^3 − 1 = 7. BCH codes are strong random cyclic codes which are utilized to detect and correct errors. The generator polynomial g(x) is the polynomial of the lowest degree over GF(2) with α, α^2, α^3, ..., α^{2t} as roots, on the condition that α is a primitive element of GF(2^m). When Mi(x) is the minimal polynomial of α^i, where 1 ≤ i ≤ 2t, then the least common multiple (LCM) of the 2t minimal polynomials is the generator polynomial g(x). The parity-check matrix H and the g(x) function of the BCH codes [19], [33] are given as follows:

   H = | 1   α         α^2           α^3           ...  α^{n−1}           |
       | 1   α^3       (α^3)^2       (α^3)^3       ...  (α^3)^{n−1}       |
       | 1   α^5       (α^5)^2       (α^5)^3       ...  (α^5)^{n−1}       |
       | .   .         .             .                  .                 |
       | 1   α^{2t−1}  (α^{2t−1})^2  (α^{2t−1})^3  ...  (α^{2t−1})^{n−1}  |   (15)

   g(x) = M1(x) M3(x) M5(x) ... M_{2t−1}(x)   (16)
A binary BCH (n, k, t) code can fix t-bit errors in a codeword W = {w0, w1, w2, ..., w_{n−1}} of size n for a secret message A = {a0, a1, a2, ..., a_{k−1}} of length k [34]. An embedded codeword C = {c0, c1, c2, ..., c_{n−1}} is calculated accordingly. At the recipient end, the code R = {r0, r1, r2, ..., r_{n−1}} is acquired. Both the original and the obtained codewords are described as polynomials, where C(x) = c0 + c1 x + ... + c_{n−1} x^{n−1} and R(x) = r0 + r1 x + ... + r_{n−1} x^{n−1}. E represents the error between C and R; E and the syndrome Y are calculated from these polynomials.
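For the (7, 4) case with t = 1, the generator polynomial reduces to g(x) = x^3 + x + 1, the minimal polynomial of a primitive element of GF(2^3). A minimal sketch of systematic encoding and syndrome computation over GF(2), using bit-level long division (helper names are ours):

```python
G = 0b1011  # generator g(x) = x^3 + x + 1 for the binary BCH (7, 4, 1) code

def poly_mod(value, nbits):
    """Remainder of `value` divided by g(x) over GF(2) (bitwise long division)."""
    for shift in range(nbits - 1, 2, -1):
        if value & (1 << shift):
            value ^= G << (shift - 3)
    return value  # 3-bit remainder

def bch74_encode(msg4):
    """Systematic encoding: codeword = msg * x^3 + remainder(msg * x^3, g)."""
    shifted = msg4 << 3
    return shifted | poly_mod(shifted, 7)

def bch74_syndrome(word7):
    """Zero syndrome <=> valid codeword; for t = 1, a single-bit error
    gives a nonzero remainder identifying the error pattern."""
    return poly_mod(word7, 7)
```

Every valid codeword has a zero syndrome, and any single flipped bit makes the syndrome nonzero, which is the basis of the error detection and correction described above.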
Due to its various applications, computer vision is one of
the fastest emerging fields in computer science. The detec-
tion and tracking of moving objects within the computer
vision field has recently gained significant attention [35].
Lin et al. [36] proposed a tube-and-droplet-based approach for representing and analyzing motion trajectories. That work addressed the main issues of handling motion trajectories in an informative manner. First, a 3D tube is constructed to represent the
trajectories. Then a droplet vector is derived from the con-
structed 3D tube, which has the following properties: 1) the
motion information of a trajectory is maintained, 2) the entire
contextual pattern throughout a trajectory is embedded, and
3) information about a trajectory in an obvious and unified
manner is visualized.
Another related work, presented by Ma et al. [37], addresses long-term correlation tracking under visual tracking issues caused by abrupt motion, heavy occlusion, deformation, and out-of-view targets. The method decomposes the task of tracking into translation and scale estimation of objects.
The accuracy and reliability of the translation estimation is
improved by considering the correlation between temporal
contexts, resulting in better efficiency, accuracy, and robust-
ness compared to existing methods of literature.
Ma et al. [38] proposed hierarchical convolutional features
for visual tracking by improving the tracking accuracy and
robustness using deep features of convolutional neural net-
works. In order to encode the target appearance, correlation filters are learned on each convolutional layer. The experimental results show that Ma's algorithm outperformed related works.
The tracking of moving objects is commonly divided into
two major phases: 1) detection of moving objects in an
individual video frame, and 2) association of these detected
objects throughout all video frames in order to construct
complete tracks [39], [40].
In the first phase, the background subtraction technique
is utilized to detect the regions of interest such as moving
objects. This technique is based on the Gaussian mixture model (GMM), which models the probability density function as a weighted sum of component Gaussian densities. The background subtraction method computes the
differences between consecutive frames that generate the
foreground mask. Then, the noises will be eliminated from
the foreground mask by using morphological operations.
As a result, the corresponding moving objects are detected
from groups of connected pixels.
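The background-subtraction idea can be illustrated with a deliberately simplified sketch, a per-pixel thresholded difference only; the actual method uses a GMM background model and morphological noise removal:

```python
def foreground_mask(background, frame, threshold=25):
    """Toy background-subtraction step: mark a pixel as foreground when
    it differs from the background model by more than `threshold`.
    Inputs are 2-D lists of grayscale values; the threshold value is an
    arbitrary illustrative choice."""
    return [[1 if abs(f - b) > threshold else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
```

Groups of connected 1-pixels in the resulting mask correspond to the detected moving objects.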
The second phase is called data association. It is based
on the motion of the detected object. A Kalman filter is employed to predict the motion of each trajectory. In each
FIGURE 4. The proposed video steganography framework.
FIGURE 5. Process of encrypting and encoding input messages.
video frame, the location of each trajectory is predicted by
the Kalman filter. Moreover, the Kalman filter is utilized to
determine the probability of a specific detection that belongs
to each trajectory [39].
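The prediction step used in data association can be sketched for a single coordinate with a constant-velocity model; the state layout, the noise values q and r, and the function name are our own illustrative choices:

```python
def kalman_cv_step(x, v, P, z, dt=1.0, q=1e-2, r=1.0):
    """One predict/update cycle of a constant-velocity Kalman filter for
    one coordinate of a tracked object. State: position x and velocity v;
    P is the 2x2 covariance [[p00, p01], [p10, p11]]; z is the measured
    position of the detection assigned to this trajectory."""
    # --- predict: x' = x + v*dt, v' = v;  P' = F P F^T + Q
    xp, vp = x + v * dt, v
    p00, p01, p10, p11 = P[0][0], P[0][1], P[1][0], P[1][1]
    p00 = p00 + dt * (p10 + p01) + dt * dt * p11 + q
    p01 = p01 + dt * p11
    p10 = p10 + dt * p11
    p11 = p11 + q
    # --- update with position measurement z (H = [1, 0])
    s = p00 + r                      # innovation covariance
    k0, k1 = p00 / s, p10 / s        # Kalman gain
    y = z - xp                       # innovation
    xn, vn = xp + k0 * y, vp + k1 * y
    Pn = [[(1 - k0) * p00, (1 - k0) * p01],
          [p10 - k1 * p00, p11 - k1 * p01]]
    return xn, vn, Pn
```

Fed measurements from an object moving at a constant 2 pixels per frame, the filter locks onto both the position and the velocity after a handful of frames, which is what allows detections to be associated with the right trajectory.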
A robust and secure video steganography method in DWT-
DCT domains based on MOT and ECC is presented. The
major stages of the proposed video steganography framework
are illustrated in Fig. 4. A sizeable text file of 15.91 MB is utilized as the secret message, and it is preprocessed prior to the data embedding stage by being encrypted and then encoded with Hamming and BCH (7, 4) codes. Fig. 5 illustrates the process of securing input messages prior to the embedding stage. The proposed steganographic algorithm is structured into three stages.
The motion-based MOT algorithm has been previously
explained in Section 4. The process of identifying the
moving objects in the video frames must be carried out when
FIGURE 6. Left column: four video frames from the S2L1 PETS2009 dataset [41]; middle column: detection of multiple moving objects in the corresponding frames; right column: foreground masks for the corresponding frames.
motion object regions are utilized as host data. This process is
achieved by detecting each moving object within an individ-
ual frame, and then associating these detections throughout
all of the video frames. The background subtraction method
is applied to detect the moving objects based on the GMM.
It also computes the differences between consecutive frames
that generate the foreground mask. Then, the Kalman filter
is employed to predict estimation trajectory of each moving
region. Fig. 6 shows a number of video frames that contain
multiple objects and their foreground masks.
Across all video frames, the host data of our proposed method are the motion objects, which are considered as regions of interest. By using the motion-based MOT algorithm, the motion regions are detected and tracked over all video frames. The regions of interest vary from frame to frame depending on the number and the size of the moving objects. In every frame, the 2D-DWT is applied to the RGB channels of each motion region, resulting in the LL, LH, HL, and HH subbands.
In addition, 2D-DCT is also applied on the same motion
regions generating DC and AC coefficients. Thereafter, the
secret messages are concealed into LL, LH, HL, and HH of
DWT coefficients, and into DC and AC of DCT coefficients
of each motion object separately based on its foreground
mask. Furthermore, both secret keys are transmitted to the
receiver side by embedding them into the non-motion area of
the first frame. Upon completion, the stego video frames are rebuilt in order to construct the stego video, which can be transmitted through the unsecured medium to the receiver. Algorithm 1 clarifies the major steps of our embedding stage.
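The coefficient-level embedding idea can be illustrated with a simplified LSB-substitution sketch over integer-rounded transform coefficients; keys, foreground masks, and ECC are handled elsewhere in the pipeline, and the function names are ours:

```python
def embed_bits_in_coeffs(coeffs, bits, n_lsb=1):
    """Hide `bits` in the `n_lsb` least significant bits of the
    magnitudes of integer transform coefficients, preserving signs.
    A simplified stand-in for the DWT/DCT coefficient embedding."""
    out, i = [], 0
    for c in coeffs:
        if i >= len(bits):
            out.append(c)            # payload exhausted: copy through
            continue
        chunk = 0
        for _ in range(n_lsb):       # pack the next n_lsb payload bits
            chunk = (chunk << 1) | (bits[i] if i < len(bits) else 0)
            i += 1
        mag = (abs(int(c)) & ~((1 << n_lsb) - 1)) | chunk
        out.append(-mag if c < 0 else mag)
    return out

def extract_bits_from_coeffs(coeffs, n_bits, n_lsb=1):
    """Read the hidden bits back from the coefficient magnitudes."""
    bits = []
    for c in coeffs:
        v = abs(int(c))
        for k in range(n_lsb - 1, -1, -1):
            bits.append((v >> k) & 1)
    return bits[:n_bits]
```

Raising `n_lsb` from one to three trades imperceptibility for capacity, which is exactly the trade-off reported in the PSNR and capacity experiments below.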
In order to recover hidden messages accurately, the embed-
ded video is separated into a number of frames through
the receiver side, and then two secret keys are obtained
from the non-motion region of the first video frame. To
predict trajectories of motion objects, the motion-based MOT
algorithm is applied again by the receiver. Then, 2D-DWT
Algorithm 1 Data Embedding Stage
and 2D-DCT are employed on the RGB channels of each
motion object in order to create LL, LH, HL, and HH
subbands, and DC and AC coefficients, respectively. Next,
the extracting process of the embedded data is achieved by
obtaining the secret messages from LL, LH, HL, HH, DC, and
AC coefficients of each motion region over all video frames
based on the same foreground masks used in the embedding
stage. The extracted secret message is decoded by Hamming
and BCH (7, 4), and then decrypted to obtain the original
message. The essential steps of data extracting algorithm are
shown in the Algorithm 2.
An S2L1 video sequence was used from the well-known PETS2009 dataset [41]. The results are obtained using a MATLAB implementation of the proposed algorithm.
Algorithm 2 Data Extracting Stage
FIGURE 7. Visual quality assessment: The first line illustrates the original
574th frame of the tested video along with histograms of its RGB
channels. The 2nd line shows the stego 574th frame of the tested video
and histograms of RGB channels after embedding stage.
The cover video has a 768 × 576 frame dimension at 30 frames/sec and a 12684 kbps data rate. The video
FIGURE 8. The PSNR comparison of the experiment video in DWT domain.
FIGURE 9. The PSNR comparison of the tested video in DCT domain.
TABLE 1. Average PSNR of each of the R, G, and B components of the experiment video after applying the DWT and DCT transform domains.
sequence also includes 795 frames; each frame contains multiple moving objects. Across the video frames, the text message is a sizeable file divided among the motion regions based on the number and size of the moving objects.
The imperceptibility of our proposed scheme is measured by utilizing the PSNR measurement, which is a well-known metric and can be calculated as follows [42]:

   PSNR = 10 log10 (MAX_A^2 / MSE)  (dB)   (20)

TABLE 2. Performance comparison of the proposed method with other existing methods.

where A and B indicate the original and embedded frames, respectively, a and b refer to the video frame dimensions, and c refers to the RGB color components (c = 1, 2, 3). MAX_A is the highest pixel value of the frame A.
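Eq. (20) translates directly into code. A minimal sketch for one colour channel (the helper name and list-of-lists representation are ours; averaging the squared error over all three channels works the same way):

```python
import math

def psnr(A, B, max_val=255):
    """PSNR of Eq. (20) between an original frame A and a stego frame B,
    given as same-sized 2-D lists of pixel values."""
    a, b = len(A), len(A[0])
    mse = sum((A[i][j] - B[i][j]) ** 2
              for i in range(a) for j in range(b)) / (a * b)
    if mse == 0:
        return float("inf")          # identical frames
    return 10 * math.log10(max_val ** 2 / mse)
```

A uniform error of 10 grey levels, for instance, gives an MSE of 100 and a PSNR of about 28.13 dB, well below the 48-49 dB range reported for the proposed method.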
Fig. 7 illustrates the original and stego 574th frame of the
tested video along with histograms of their RGB components.
The histograms show no obvious alteration in the video qual-
ity. Fig. 8 illustrates the PSNR comparison of the experiment
video when using one LSB, two LSBs, and three LSBs of
each motion object’s DWT coefficients, including each of LL,
LH, HL, and HH subbands. The PSNR values equal 49.01,
42.70, and 36.41 dBs when using one LSB, two LSBs, and
three LSBs of each coefficient, respectively. Fig. 9 illustrates
the PSNR comparison of the tested video when using one
LSB, two LSBs, and three LSBs of each motion object’s DCT
coefficients, including both DCs and ACs. Here, the PSNR
values equal 48.67, 41.45, and 35.95 dBs for each one LSB,
two LSBs, and three LSBs, respectively. Table 1 summarizes the average visual qualities in the DWT and DCT domains. Overall, the embedded videos' qualities are close to the host videos' qualities because of the high PSNR values of our proposed algorithm.
According to [43], our suggested method has a high embed-
ding capacity. Here, the average of the gained hiding ratio is
3.40% when our algorithm operates in DWT domain. This
FIGURE 10. The embedding capacity comparison of the experiment in
each of DWT-DCT domains.
average has increased to 3.46% when the proposed algorithm
operates in DCT domain. The average sizes of secret mes-
sages in both domains are 31.38, 62.77 and 94.15 Megabits
when using one LSB, two LSBs, and three LSBs of DWT
and DCT coefficients, respectively. The hiding ratio (HR) is calculated as follows [44]:

   HR = (Size of embedded message / Video size) × 100%   (22)
Fig. 10 illustrates the payload capacity of algorithm while
using DWT and DCT domains. The figure has shown the
comparison of the embedding capacity of the tested video
when one LSB, two LSBs, and three LSBs of the moving
objects’ DWT and DCT coefficients are utilized separately.
Table II shows that our suggested method outperforms other
related methods.
TABLE 3. Sim and BER values of our method under various attacks.
Similarity (Sim) and Bit Error Rate (BER) metrics have been utilized [7]. The Sim (0 ≤ Sim ≤ 1) and BER are calculated by the following equations [45], [46]:

   Sim = Σ_i Σ_j M(i, j) · M̂(i, j) / Σ_i Σ_j M(i, j)^2   (23)

   BER = (Σ_i Σ_j |M(i, j) − M̂(i, j)| / (a × b)) × 100%   (24)

where M and M̂ are the original and obtained messages, respectively, and a × b is the size of the hidden messages. The
algorithm was tested under different attacks such as Gaussian noise, salt & pepper noise, and median filtering. The highest robustness
of our method can be achieved when the maximum Sim
and minimum BER values are gained. Table III shows the
robustness of the suggested method against different attacks.
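Both metrics are straightforward to compute. The sketch below assumes binary messages stored as 2-D lists and, since the extracted form of the Sim equation is incomplete, uses a normalised correlation for Sim as an assumption:

```python
def ber(M, M_hat):
    """Bit error rate of Eq. (24): percentage of mismatching bits between
    the original message M and the extracted message M_hat, both given
    as a x b lists of bits."""
    a, b = len(M), len(M[0])
    errors = sum(M[i][j] != M_hat[i][j] for i in range(a) for j in range(b))
    return errors / (a * b) * 100.0

def sim(M, M_hat):
    """Normalised correlation similarity (assumed form): 1.0 means the
    messages are identical; values toward 0 mean little correlation."""
    num = sum(M[i][j] * M_hat[i][j]
              for i in range(len(M)) for j in range(len(M[0])))
    den = sum(M[i][j] ** 2
              for i in range(len(M)) for j in range(len(M[0])))
    return num / den if den else 1.0
```

As the table suggests, a perfectly recovered message yields Sim = 1 and BER = 0, and each additional flipped bit raises the BER while lowering the Sim.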
A robust and secure video steganography method in DWT-
DCT domains based on MOT and ECC is proposed in this
paper. The proposed algorithm consists of three stages: 1) the motion-based MOT algorithm, 2) data embedding, and 3) data extraction. The performance of our suggested method is verified via
extensive experiments, demonstrating the high embedding
capacity with an average HR of 3.40% and 3.46% for the DWT and DCT domains, respectively. Average PSNRs of 49.01 and 48.67 dB for the DWT and DCT domains are achieved, yielding better visual quality for the proposed algorithm than existing methods in the literature. The proposed algorithm utilizes MOT and ECC as preprocessing stages, which in turn provide better confidentiality for the secret message prior to the embedding phase. Moreover,
through experiments from different perspectives, the security
and robustness of the method against various attacks have
been confirmed. In our future work, we will apply our algorithm to other frequency domains, such as the curvelet transform, to further improve the efficiency, visual quality, and robustness.
[1] A. Cheddad, J. Condell, K. Curran, and P. Mc Kevitt, ‘‘A secure and
improved self-embedding algorithm to combat digital document forgery,’’
Signal Process., vol. 89, no. 12, pp. 2324–2332, Dec. 2009.
[2] X.-Y. Wang, C.-P. Wang, H.-Y. Yang, and P.-P. Niu, ‘‘A robust blind color
image watermarking in quaternion Fourier transform domain,’’ J. Syst.
Softw., vol. 86, no. 2, pp. 255–277, Feb. 2013.
[3] M. Sajjad et al., ‘‘Mobile-cloud assisted framework for selective encryp-
tion of medical images with steganography for resource-constrained
devices,’’ Multimedia Tools Appl., vol. 76, no. 3, pp. 3519–3536,
Feb. 2017.
[4] M. S. Subhedar and V. H. Mankar, ‘‘Current status and key issues in image
steganography: A survey,’’ Comput. Sci. Rev., vols. 13–14, pp. 95–113,
Nov. 2014.
[5] S. Islam, M. R. Modi, and P. Gupta, ‘‘Edge-based image steganography,’’
EURASIP J. Inf. Secur., vol. 2014, p. 8, Dec. 2014, doi: 10.1186/1687-
[6] M. Hasnaoui and M. Mitrea, ‘‘Multi-symbol QIM video watermark-
ing,’’ Signal Process., Image Commun., vol. 29, no. 1, pp. 107–127,
Jan. 2014.
[7] R. J. Mstafa and K. M. Elleithy, ‘‘A novel video steganography algo-
rithm in the wavelet domain based on the KLT tracking algorithm and
BCH codes,’’ in Proc. Long Island Syst., Appl. Technol., May 2015,
pp. 1–7.
[8] K. Qazanfari and R. Safabakhsh, ‘‘A new steganography method which
preserves histogram: Generalization of LSB++,’’ Inf. Sci., vol. 277,
pp. 90–101, Sep. 2014.
[9] L. Guangjie, L. Weiwei, D. Yuewei, and L. Shiguo, ‘‘Adaptive steganog-
raphy based on syndrome-trellis codes and local complexity,’’ in Proc.
4th Int. Conf. Multimedia Inf. Netw. Secur. (MINES), Nov. 2012,
pp. 323–327.
[10] A. K. Singh, B. Kumar, S. K. Singh, S. P.Ghrera, and A. Mohan, ‘‘Multiple
watermarking technique for securing online social network contents using
Back Propagation Neural Network,’’ Future Generat. Comput. Syst., to be
published. doi: 10.1016/j.future.2016.11.023
[11] C. Rupa, ‘‘A digital image steganography using sierpinski gasket frac-
tal and PLSB,’’ J. Inst. Eng. (India), B, vol. 94, no. 3, pp. 147–151,
Sep. 2013.
[12] R. J. Mstafa and K. M. Elleithy, ‘‘Compressed and raw video steganogra-
phy techniques: A comprehensive survey and analysis,’’ Multimedia Tools
Appl., pp. 1–38, 2016, doi: 10.1007/s11042-016-4055-1.
[13] K. Muhammad et al., ‘‘A secure method for color image steganography
using gray-level modification and multi-level encryption,’’ KSII Trans.
Internet Inf. Syst., vol. 9, no. 5, pp. 1938–1962, 2015.
[14] A. K. Singh, M. Dave, and A. Mohan, ‘‘Hybrid technique for robust and
imperceptible multiple watermarking using medical images,’’ Multimedia
Tools Appl., vol. 75, no. 14, pp. 8381–8401, Jul. 2016.
[15] R. Zhang, V. Sachnev, and H. J. Kim, ‘‘Fast BCH syndrome coding
for steganography,’’ in Information Hiding (Lecture Notes in Computer
Science), vol. 5806, S. Katzenbeisser and A.-R. Sadeghi, Eds. Berlin,
Germany: Springer, 2009, pp. 48–58, doi: 10.1007/978-3-642-04431-1_4.
[16] C. Fontaine and F. Galand, ‘‘How can Reed–Solomon codes improve
steganographic schemes?’’ in Information Hiding (Lecture Notes in Com-
puter Science), vol. 4567, T. Furon, F. Cayre, G. Doërr, and P. Bas, Eds.
Berlin, Germany: Springer, 2007, pp. 130–144, doi: 10.1007/978-3-540-
[17] A. Khan, A. Siddiqa, S. Munib, and S. A. Malik, ‘‘A recent survey of
reversible watermarking techniques,’’ Inf. Sci., vol. 279, pp. 251–272,
Sep. 2014.
[18] Y. Tew and K. Wong, ‘‘An overview of information hiding in H.264/AVC
compressed video,’’ IEEE Trans. Circuits Syst. Video Technol., vol. 24,
no. 2, pp. 305–319, Feb. 2014.
[19] R. J. Mstafa and K. M. Elleithy, ‘‘A high payload video steganography
algorithm in DWT domain based on BCH codes (15, 11),’’ in Proc. Wireless
Telecommun. Symp. (WTS), Apr. 2015, pp. 1–8.
[20] P.-C. Chang, K.-L. Chung, J.-J. Chen, C.-H. Lin, and T.-J. Lin,
‘‘A DCT/DST-based error propagation-free data hiding algorithm for
HEVC intra-coded frames,’’ J. Vis. Commun. Image Represent., vol. 25,
no. 2, pp. 239–253, Feb. 2014.
[21] X. Ma, Z. Li, H. Tu, and B. Zhang, ‘‘A data hiding algorithm for
H.264/AVC video streams without intra-frame distortion drift,’’ IEEE
Trans. Circuits Syst. Video Technol., vol. 20, no. 10, pp. 1320–1330,
Oct. 2010.
[22] Z. Shahid, M. Chaumont, and W. Puech, ‘‘Considering the reconstruction
loop for data hiding of intra- and inter-frames of H.264/AVC,’’ Signal,
Image Video Process., vol. 7, no. 1, pp. 75–93, Jan. 2013.
[23] R. Wang, L. Hu, and D. Xu, ‘‘A watermarking algorithm based on the
CABAC entropy coding for H.264/AVC,’’ J. Comput. Inf. Syst., vol. 7,
no. 6, pp. 2132–2141, 2011.
[24] Y. Liu, Z. Li, X. Ma, and J. Liu, ‘‘A robust data hiding algorithm for
H.264/AVC video streams,’’ J. Syst. Softw., vol. 86, no. 8, pp. 2174–2183,
Aug. 2013.
[25] N. Ke and Z. Weidong, ‘‘A video steganography scheme based on H.264
bitstreams replaced,’’ in Proc. 4th IEEE Int. Conf. Softw. Eng. Service
Sci. (ICSESS), May 2013, pp. 447–450.
[26] M. A. Alavianmehr, M. Rezaei, M. S. Helfroush, and A. Tashk, ‘‘A lossless
data hiding scheme on video raw data robust against H.264/AVC compres-
sion,’’ in Proc. 2nd Int. eConf. Comput. Knowl. Eng. (ICCKE), Oct. 2012,
pp. 194–198.
[27] B. G. Vani and E. V. Prasad, ‘‘High secure image steganography based on
hopfield chaotic neural network and wavelet transforms,’’ Int. J. Comput.
Sci. Netw. Secur., vol. 13, no. 3, pp. 1–6, 2013.
[28] S. G. Mallat, ‘‘A theory for multiresolution signal decomposition:
The wavelet representation,’’ IEEE Trans. Pattern Anal. Mach. Intell.,
vol. 11, no. 7, pp. 674–693, Jul. 1989.
[29] A. K. Jain, Fundamentals of Digital Image Processing. Englewood Cliffs,
NJ, USA: Prentice-Hall, 1989.
[30] R. J. Mstafa and K. M. Elleithy, ‘‘A novel video steganography algorithm
in DCT domain based on Hamming and BCH codes,’’ in Proc. IEEE 37th
Sarnoff Symp., Sep. 2016, pp. 208–213.
[31] R. J. Mstafa and K. M. Elleithy, ‘‘A DCT-based robust video
steganographic method using BCH error correcting codes,’’ in Proc.
IEEE Long Island Syst., Appl. Technol. Conf. (LISAT), Apr. 2016,
pp. 1–6.
[32] K. Muhammad, M. Sajjad, and S. W. Baik, ‘‘Dual-level security based
cyclic18 steganographic method and its application for secure transmission
of keyframes during wireless capsule endoscopy,’’ J. Med. Syst., vol. 40,
no. 5, pp. 1–16, 2016.
[33] H. Yoo, J. Jung, J. Jo, and I.-C. Park, ‘‘Area-efficient multimode encoding
architecture for long BCH codes,’’ IEEE Trans. Circuits Syst. II, Express
Briefs, vol. 60, no. 12, pp. 872–876, Dec. 2013.
[34] R. J. Mstafa and K. M. Elleithy, ‘‘An efficient video steganography algo-
rithm based on BCH codes,’’ in Proc. Northeast Section Conf. Amer. Soc.
Eng. Edu. (ASEE), 2015, pp. 1–10.
[35] K. Muhammad, J. Ahmad, M. Sajjad, and S. W. Baik, ‘‘Visual saliency
models for summarization of diagnostic hysteroscopy videos in healthcare
systems,’’ SpringerPlus, vol. 5, no. 1, p. 1495, 2016.
[36] W. Lin et al., ‘‘A tube-and-droplet-based approach for representing and
analyzing motion trajectories,’’ IEEE Trans. Pattern Anal. Mach. Intell., to
be published.
[37] C. Ma, X. Yang, Z. Chongyang, and M.-H. Yang, ‘‘Long-term correlation
tracking,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR),
Jun. 2015, pp. 5388–5396.
[38] C. Ma, J.-B. Huang, X. Yang, and M.-H. Yang, ‘‘Hierarchical convolutional
features for visual tracking,’’ in Proc. IEEE Int. Conf. Comput. Vis. (ICCV),
Dec. 2015, pp. 3074–3082.
[39] A. Yilmaz, O. Javed, and M. Shah, ‘‘Object tracking: A survey,’’ ACM
Comput. Surv., vol. 38, no. 4, pp. 1–45, 2006.
[40] R. J. Mstafa and K. M. Elleithy, ‘‘A new video steganography algorithm
based on the multiple object tracking and Hamming codes,’’ in Proc. IEEE
14th Int. Conf. Mach. Learn. Appl. (ICMLA), Dec. 2015, pp. 335–340.
[41] J. Ferryman and A. Shahrokni, ‘‘PETS2009: Dataset and challenge,’’ in
Proc. 20th IEEE Int. Workshop Perform. Eval. Tracking Surveill., Dec.
2009, pp. 1–6, doi: 10.1109/PETS-WINTER.2009.5399556.
[42] R. J. Mstafa and K. M. Elleithy, ‘‘A video steganography algorithm based
on Kanade–Lucas–Tomasi tracking algorithm and error correcting codes,’’
Multimedia Tools Appl., vol. 75, no. 17, pp. 10311–10333, Sep. 2016.
[43] T.-H. Lan and A. H. Tewfik, ‘‘A novel high-capacity data-embedding
system,’IEEE Trans. Image Process., vol. 15, no. 8, pp. 2431–2440,
Aug. 2006.
[44] K. Muhammad, M. Sajjad, I. Mehmood, S. Rho, and S. W. Baik, ‘‘A novel
magic LSB substitution method (M-LSB-SM) using multi-level encryption
and achromatic component of an image,’’ Multimedia Tools Appl., vol. 75,
no. 22, pp. 14867–14893, Nov. 2016.
[45] Y. He, G. Yang, and N. Zhu, ‘‘A real-time dual watermarking algorithm of
H.264/AVC video stream for video-on-demand service,’’ AEU-Int. J. Elec-
tron. Commun., vol. 66, no. 4, pp. 305–312, Apr. 2012.
[46] A. K. Singh, B. Kumar, M. Dave, and A. Mohan, ‘‘Robust and imper-
ceptible dual watermarking for telemedicine applications,’’ Wireless Pers.
Commun., vol. 80, no. 4, pp. 1415–1433, Feb. 2015.
RAMADHAN J. MSTAFA (M’14) was born
in Duhok, Kurdistan Region, Iraq. He received
the bachelor’s degree in computer science from
Salahaddin University-Erbil, Erbil, Iraq, and the
master’s degree in computer science from the Uni-
versity of Duhok, Duhok. He is currently pursuing
the Ph.D. degree in computer science and engi-
neering with the University of Bridgeport, Bridge-
port, CT, USA. His research areas of interest
include image processing, mobile communication,
security, watermarking, and steganography. He is an ACM Student Member.
KHALED M. ELLEITHY is currently the Associate
Vice President of Graduate Studies and Research
with the University of Bridgeport. He is also a
Professor of Computer Science and Engineering.
He has over 25 years of teaching experience.
His teaching evaluations are distinguished in all
the universities he joined. He supervised hundreds
of senior projects, M.S. theses, and Ph.D. dis-
sertations. He supervised several Ph.D. students.
He developed and introduced many new under-
graduate/graduate courses. He also developed new teaching/research labo-
ratories in his area of expertise. He has authored over 350 research papers in
international journals and conferences in his areas of expertise. He is an Edi-
tor or a Co-Editor for 12 books by Springer. He has research interests in the
areas of wireless sensor networks, mobile communications, network security,
quantum computing, and formal approaches for design and verification.
He is a member of the technical program committees of many international
conferences, in recognition of his research qualifications. He was the
Chairman of the International Conference on Industrial Electronics,
Technology and Automation (IETA 2001), Cairo, Egypt. He is also the General Chair
of the 2005–2013 International Joint Conferences on Computer, Information,
and Systems Sciences, and Engineering virtual conferences. He served as a
Guest Editor for several International Journals.
EMAN ABDELFATTAH received the M.S. degree
in computer science and the Ph.D. degree in com-
puter science and engineering from the University
of Bridgeport in 2002 and 2011, respectively.
She was a Professional Assistant Professor of
Computer Science with the School of Engineering
and Computing Sciences, Texas A&M University-
Corpus Christi. She was also an Adjunct Professor
with the Department of Computer Science and
Engineering and the Department of Mathematics,
University of Bridgeport. She is currently a Lecturer with the School of
Computing, Sacred Heart University, and also an Adjunct Assistant Professor
with American Intercontinental University Online. Her research results were
published in prestigious international conferences. She has research interests
in the areas of network security, networking, and mobile communications.
She actively participated as a Committee Member of the International Con-
ferences on Engineering Education, Instructional Technology, Assessment,
and E-learning from 2005 to 2014.