Securing Information Content using
New Encryption Method and Steganography
Abbas Cheddad, Joan Condell, Kevin Curran and Paul Mc Kevitt
School of Computing and Intelligent Systems, Faculty of Computing and Engineering
Londonderry, BT48 7JL, Northern Ireland, United Kingdom
Emails: {cheddad-a, j.condell, kj.curran, p.mckevitt}@ulster.ac.uk
Abstract
This paper proposes a novel encryption method with password protection, based on an extended version of SHA-1 (Secure Hash Algorithm), that is able to encrypt 2D bulk data such as images. There has been only modest research in the literature on the encryption of digital images. The algorithm also benefits from the conjugate symmetry exhibited in what is termed, herein, an Irreversible Fast Fourier Transform (IrFFT). The proposed encryption method is a preprocessing phase which aims at increasing the robustness of image steganography against hackers. This scenario lays down multiple layers of security which together form a strong shield against eavesdroppers that is extremely difficult to break. Both of Shannon's requirements (confusion and diffusion) are met and the results are promising.
1. Introduction
Steganography, the science of concealing the very existence of data in another transmission medium, does not replace cryptography but rather boosts security through its obscurity features. Steganography has various useful applications, such as for human rights organisations (as encryption is prohibited in some countries), smart IDs where individuals' details are embedded in their photographs (content authentication), data integrity by embedding a checksum, medical imaging and secure transmission of medical data, and bank transactions, to name a few. Various algorithms have been proposed to implement steganography in digital images. They can be categorised into three major clusters: algorithms in the spatial domain, such as S-Tools [1]; algorithms in the transform domain, for instance F5 [2]; and algorithms taking an adaptive approach combined with one of the former two methods, for example ABCDE (A Block-based Complexity Data Embedding) [3]. Most existing steganographic methods rely on two factors: the secrecy of the key and the robustness of the steganographic algorithm.
This work aims at adding another layer of security which encrypts the secret image before the embedding process. Various one-way hash algorithms are available, such as MD5 (Message Digest 5) and SHA-1 (Secure Hash Algorithm 1) (Blowfish, by contrast, is a block cipher), which map data strings from their natural state to a seemingly random one. A hash function is more formally defined as a mapping of bit strings of arbitrary finite length to strings of fixed length [4]. Here, we attempt to extend SHA-1 (the terminology and functions used as building blocks to form SHA-1 are described in the US Secure Hash Algorithm 1 specification [5]) to encrypt 2D data. The introduction of the Fast Fourier Transform (FFT), together with the output of SHA-1, forms a strong image encryption property.
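The fixed-length property of a hash can be illustrated with Python's standard hashlib (a sketch for illustration only; the paper's implementation was written in MATLAB):

```python
import hashlib

def sha1_hex(message: str) -> str:
    """Return the SHA-1 digest of a text string as 40 hex characters."""
    return hashlib.sha1(message.encode("utf-8")).hexdigest()

# Inputs of arbitrary length map to a fixed 160-bit (40 hex digit) output.
short_digest = sha1_hex("abc")
long_digest = sha1_hex("a" * 10_000)

print(len(short_digest), len(long_digest))  # both 40
```

The 160-bit digest is what the proposed method reshapes into a 2D key matrix.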
This paper is organised as follows: Section 2 discusses related work, followed by our proposal in Section 3 and its application to steganography in Section 3.1. Section 4 analyses and compares our results. Finally, we conclude this work in Section 5.
2. Related Work
Transferring images into chaotic maps was the obvious route to the design of secure encrypted images. Chaos theory, which essentially emerged from mathematics and physics, deals with the behaviour of certain nonlinear dynamic systems that, under certain conditions, exhibit a phenomenon known as chaos, which satisfies the Shannon requirements on diffusion and confusion [6]. Due to its attractive features, such as sensitivity to initial conditions and random-like spreading behaviour, chaotic maps have become popular for data protection [4]. In the realm of 2D data, Shih [6] gave the following system in order to spread neighbouring pixels into widely dispersed locations:
$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ l & l+1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \bmod N \qquad (1)$$
978-1-4244-2917-2/08/$25.00 ©2008 IEEE
where $\det\begin{pmatrix} 1 & 1 \\ l & l+1 \end{pmatrix} = 1$ (we refer to the determinant here as 'det'), and $l$ and $N$ denote an arbitrary integer and the width of a square image, respectively. After exactly 17 iterations the chaotic map converges back to the original image, as can be seen in Fig. 1.
Figure 1. Different iterative results of applying Eq. 1 with l = 2 on the Lena image of size 101x101 [6].
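The periodicity of the pixel-shuffling map in Eq. 1 can be reproduced with a short sketch (an illustration assuming the generalised cat-map matrix [[1, 1], [l, l+1]], which is our reading of the garbled equation; function names are ours):

```python
import numpy as np

def catmap_period(l: int, n: int, max_iter: int = 10_000) -> int:
    """Number of iterations after which the map of Eq. 1 returns
    every pixel of an n x n image to its original position."""
    m = np.array([[1, 1], [l, l + 1]], dtype=np.int64)  # assumed matrix form
    p = np.eye(2, dtype=np.int64)
    for k in range(1, max_iter + 1):
        p = (m @ p) % n
        if np.array_equal(p, np.eye(2, dtype=np.int64)):
            return k
    raise RuntimeError("period not found within max_iter")

# Because det = 1, the map is a permutation of pixel positions and is
# therefore periodic: after some number of iterations the scrambled
# image converges back to the original, which is the weakness noted below.
print(catmap_period(2, 8))
```

For the 101x101 Lena image with l = 2 the paper reports a period of 17 iterations.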
Some remarks are worth noting regarding this method:
A) The algorithm uses a determinant in its process; thus, the input matrix can only be square. A workaround might be to apply the algorithm repeatedly on square blocks of a given image. However, this would generate noticeable periodic blocking artifacts given the nature of the process, which conflicts with the aim of generating chaotic maps.
B) As far as security systems are concerned, the convergence of the translated pixels to their initial locations, i.e., image reconstruction after some number of iterations, is also not an appealing property. Given that one of the iterations is used, if attackers have prior knowledge of the algorithm and obtain the parameter l, which is actually not difficult to crack using brute force, they can simply invest some time running further iterations to reveal the original image.
In a detailed and concise attempt to introduce image encryption, Pisarchik et al. [7] demonstrated that any image can be represented as a lattice of pixels, each of which has a particular colour. The pixel colour is the combination of three components: red, green, and blue, each of which takes an integer value C = (Cr, Cg, Cb) between 0 and 255. Thus, they create three parallel CMLs (chaotic map lattices) by converting each of these three colour components to the corresponding value of the map variable, $x_c = (x_c^r, x_c^g, x_c^b)$, and use these values as the initial conditions, $x_c = x_0$. Starting from different initial conditions, each chaotic map in the CMLs, after a small number of iterations, yields a different value, and hence the image becomes indistinguishable because of the exponential divergence of chaotic trajectories [7].
They introduced seven steps for encrypting images and seven for decryption. Their algorithm does not encompass any conventional hash algorithm, i.e., the MD family, the SHA family or Blowfish. Moreover, four parameters were used, of which one was set constant and two others were regulated. Their settings can have a tremendous impact on the chaotic map quality, as can be concluded from Fig. 2 and Fig. 3. Therefore, the receiver must know the decryption algorithm and the parameters beforehand. The algorithm is well formulated and adequately presented; it yields good results, as proclaimed by the authors. The authors in [7] used a rounding operator which was applied recursively across the different iterations. An immediate concern is recovering the exact intensity values of the input image, as the recovered image shown in their paper might be only an approximation because of the aforementioned operator. This matters especially in the proposed application to steganography, where one wants to recover the exact embedded file rather than an approximation of it.
Figure 2. Colour sensitivity to number of
cycles (a=3.9 and n=75). (a) Image encoded
with j=1 and (b) j=2 [7].
Figure 3. Colour sensitivity to number of
iterations (a=3.9 and j=3). (a) Original image,
(b) image encoded with n=1, (c) n=30 and (d)
n=75 [7].
3. Proposed Method
The proposal exploits the strength of a 1D hash algorithm, namely SHA-1, and extends it to handle 2D data such as images. The FFT is incorporated into the process to increase the disguise level and thus generate a random-like output that does not leave any distinguishable patterns of the original image.
A step-by-step description of the algorithm is illustrated in Fig. 4 (Appendix). The proposal starts with a password phrase P supplied by the user, from which a SHA-1 based hash string H(P) is generated. The bit-stream vector of H is then transformed into a matrix of fixed dimension 8x35. In parallel, the original image A is converted to a bit stream and reshaped to dimension 8x(M*N). The key, herein 8x35, is too short to accommodate the image bit stream. Therefore, the algorithm resizes the key to the needed dimension, herein 8x(M*N). Obviously, this step would result in repetitive patterns which would leave the ciphered image prone to attacks. To cope with this, a modified DCT (Discrete Cosine Transform) followed by an FFT is applied to provide the confusion and diffusion requirements and to tighten the security.
Let the resized key bit stream be $\lambda_{k,l}$, where the subscripts k and l denote the width and height of the key after resizing, respectively, i.e., 8 and M*N. The FFT operates on the DCT transform of $\lambda_{k,l}$ subject to Eq. 3:

$$f(u,v) = \frac{1}{N} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} F(x,y)\, e^{-2\pi i (ux + vy)/N} \qquad (2)$$

where $F(x,y) = \mathrm{DCT}(\lambda_{k,l})$, satisfying Eq. 3.
Note that for the transformation at the FFT and DCT levels the algorithm does not utilise all of the coefficients. Rather, it imposes the following rule, which ultimately generates a binary random-like map. Given the output of Eq. 2, the binary map is derived straightforwardly such that:

$$\mathrm{Map}(x,y) = \begin{cases} 1 & \text{if } \mathrm{Im}(f(u,v)) > 0 \\ 0 & \text{otherwise} \end{cases} \qquad (3)$$
This map takes the positive coefficients of the imaginary part to form the ON pixels. Since the coefficients are discarded, the reconstruction of the password phrase is impossible; hence the name Irreversible Fast Fourier Transform (IrFFT). In other words, it is a one-way hash function seeded by a user password. This map is finally XORed with the bit-stream version of the image. The result is then converted back into grayscale values and reshaped to form the ciphered image.
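The pipeline can be sketched as follows (an illustrative reimplementation with assumed details: the key resize strategy, the 8x35 key shape and the exact DCT variant are not fully specified in the paper, so we simply tile the 160 hash bits and use a standard DCT-II built with numpy):

```python
import hashlib
import numpy as np

def dct2(a: np.ndarray) -> np.ndarray:
    """Unnormalised 2-D DCT-II built from cosine matrices (numpy only)."""
    def c(k):
        i = np.arange(k)
        # C[j, l] = cos(pi * j * (2l + 1) / (2k)): DCT-II basis matrix.
        return np.cos(np.pi * np.outer(i, 2 * i + 1) / (2 * k))
    n, m = a.shape
    return c(n) @ a @ c(m).T

def binary_map(password: str, shape: tuple) -> np.ndarray:
    """Password -> SHA-1 bits -> tiled key -> DCT -> FFT -> binary map
    from the sign of the imaginary part (Eq. 3)."""
    digest = hashlib.sha1(password.encode("utf-8")).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    key = np.resize(bits, shape).astype(float)  # tiling: our assumed resize
    coeffs = np.fft.fft2(dct2(key))
    return (coeffs.imag > 0).astype(np.uint8)

def encrypt(image: np.ndarray, password: str) -> np.ndarray:
    """XOR the uint8 image bit stream with the one-way binary map."""
    bits = np.unpackbits(image, axis=1)
    return np.packbits(bits ^ binary_map(password, bits.shape), axis=1)

# Decryption regenerates the same map from the same password and XORs again,
# i.e., it runs in the same direction rather than inverting any transform.
img = np.arange(64, dtype=np.uint8).reshape(8, 8)
enc = encrypt(img, "Steganography")
dec = encrypt(enc, "Steganography")
```

Because XOR is its own inverse, recovery is exact, matching the paper's claim of an infinite PSNR between the input and recovered images.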
Another phenomenon worth exploiting is the sensitivity of the spread of the FFT coefficients to any kind of change in the spatial domain. If this is coupled with the sensitivity of the SHA-1 algorithm to changes in the initial condition, i.e., the password phrase, the algorithm easily meets Shannon's requirements. For instance, a small change in the password phrase will, with overwhelming probability, result in a completely different hash. The following exemplifies this assertion:
Input password: ‘Steganography’
The corresponding Hash Function:
‘40662a5f1e7349123c4012d827be8688d9fe013b’
Input password: ‘Steganographie’
The corresponding Hash Function:
‘c703bbc5b91736d8daa72fd5d620536d0dfbfe01’
So, the core idea here is to transform these
changes into spatial domain where 2D-DCT and 2D-
FFT are applied that introduce the aforementioned
sensitivity to the 2D space. As such, images can be
easily encoded securely with password protection.
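This avalanche behaviour can be checked directly (a small sketch; it only verifies that the two digests differ at the bit level, without asserting the paper's exact hex strings):

```python
import hashlib

def sha1_bits(message: str) -> str:
    """SHA-1 digest of a string, rendered as a 160-character bit string."""
    digest = hashlib.sha1(message.encode("utf-8")).digest()
    return "".join(f"{byte:08b}" for byte in digest)

a = sha1_bits("Steganography")
b = sha1_bits("Steganographie")

# Hamming distance between the two 160-bit digests: a one-character
# change in the password flips roughly half of the output bits.
distance = sum(x != y for x, y in zip(a, b))
print(distance, "of 160 bits differ")
```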
3.1 Application to Steganography
After generating the chaotic map (encrypted image), the algorithm applies the RGB to YCbCr colour transformation to the cover image which will carry the encrypted data. The use of this transformation is twofold: first, to segment homogeneous objects in the cover image, namely human skin regions, and second, to embed our data in the chrominance-red (Cr) channel [8]. The YCbCr space removes the strong correlation among the R, G and B matrices of a given image. This property is what interests the authors, as less correlation between colours means less noticeable distortion if any alteration happens at this level. In this approach, the concentration on skin tone is motivated by some interesting applications of the final product. The algorithm starts with the segmentation of probable human skin regions such that:
$$C(i,j) = Bck \cup \bigcup_{i=1}^{n} S_i, \quad \text{where } S = S(i,j) \qquad (4)$$
In Eq. (4), C denotes the cover image, Bck the background regions, and (S1, S2, ..., Sn) are connected subsets that correspond to skin regions.
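A minimal sketch of such skin segmentation in YCbCr follows (the Cb/Cr threshold ranges below are common rule-of-thumb values, not the paper's; the actual detector is the one described in [8] and [9]):

```python
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """ITU-R BT.601 conversion; rgb is a float array in [0, 255], shape (..., 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb: np.ndarray) -> np.ndarray:
    """Binary mask of probable skin pixels (illustrative Cb/Cr ranges)."""
    ycbcr = rgb_to_ycbcr(rgb.astype(float))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)
```

The mask plays the role of the union of skin subsets S1, ..., Sn in Eq. (4).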
Based on the experiments carried out, it was found that embedding into these regions produces less distortion of the carrier image than embedding in a sequential order or in other areas. In addition, the algorithm yields output that is robust against reasonable noise attacks and translation. Resistance to geometric distortion is feasible, unlike with S-Tools and F5, since by selecting skin-tone blobs the process can detect eye coordinates [9], which act as reference points to recover the initial position and orientation, thus making the proposed method invariant to both rotation and translation. As shown in Fig. 5, the proposed algorithm was applied to digital image steganography for two reasons: first, embedding random-like data into the Least Significant Bits (LSBs) performs better than embedding the natural data directly; second, for security and fidelity reasons, the embedded data must undergo strong encryption so that even if it is accidentally discovered, which is unlikely, the actual embedded data would not be revealed.
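A simplified LSB round trip can be sketched as follows (a sketch only: the paper embeds in the Cr channel of detected skin regions, whereas here the bits go into a flat uint8 channel for brevity):

```python
import numpy as np

def embed_lsb(channel: np.ndarray, payload_bits: np.ndarray) -> np.ndarray:
    """Write payload bits (0/1) into the least significant bits of a
    uint8 channel; the payload must fit within the channel."""
    flat = channel.flatten()  # flatten() copies, so the input is untouched
    assert payload_bits.size <= flat.size, "payload too large"
    flat[: payload_bits.size] = (flat[: payload_bits.size] & 0xFE) | payload_bits
    return flat.reshape(channel.shape)

def extract_lsb(channel: np.ndarray, n_bits: int) -> np.ndarray:
    """Read back the first n_bits least significant bits."""
    return channel.flatten()[:n_bits] & 1

cr = np.full((4, 4), 155, dtype=np.uint8)  # toy chrominance channel
secret = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = embed_lsb(cr, secret)
```

Each carrier value changes by at most 1, which is why embedding random-like encrypted bits in the LSB plane is visually imperceptible.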
More specifically, the study targets identification cards (IDs), which are prone to forgery through biodata alteration or photo replacement. To protect photos, government bodies apply a physical watermark to the photograph using an iron stamp which is half visible, or sometimes a normal stamp. This fragile shield of security can easily be defeated by mimicking the same stamp. Biometric security measures rely heavily on facial feature extraction, and it is essential to have the system integrated with an external database with a real-time connection to double-check identities. On the other hand, systems-on-chip are extremely expensive to roll out and require dedicated hardware. In addition, some chip circuits can be reverse engineered through their Radio Frequency Identification (RFID) technology. This occurred recently¹ in the Netherlands, where two students from the University of Amsterdam broke the Dutch public transit card, an event that brought a storm of questions about security implementations in the Netherlands' media. This study proposes a cost-effective yet highly secure system in which individuals' details are embedded into their photographs using a multi-layer security channel reliant on steganography.
Steganography is the science of hiding the very existence of secret data in an innocuous carrier, and the authors believe that this specific application of steganography can overcome the difficulties mentioned earlier. Recently there have been large-scale losses of personal sensitive data in the UK, e.g., the loss of 25 million child benefit records after HMRC sent two unregistered, unencrypted discs to the National Audit Office, and the theft of a laptop from a Navy officer containing personal details of 600,000 people. These incidents inspired another application of steganography, currently under investigation, which aims at developing a highly secure large-scale database. To evaluate the performance of the proposed system, a set of RGB images was used. Fig. 5 shows an example of the test data, whose PSNR values (discussed in the next section) are shown in Table 1. It is worth noting that the frame appearing in Fig. 5 (top) is grabbed from a 16-second video of size 2.76 MB comprising 485 frames. The approximate embedding capacity would be about 34% of the total file size, or 0.9407 MB.

¹ Dutch Public Transit Card Broken [online]. Available from: <http://www.cs.vu.nl/~ast/ov-chip-card/>, accessed on 02-03-2008 at 15:37.
Figure 5. The proposed steganography system (embedding in Cr) versus the S-Tools and F5 algorithms.
Table 1. Performance comparison of the proposed method against S-Tools and F5 (Lady vs. UU).

Method   | PSNR (dB) | Size original/stego (KB)
---------|-----------|-------------------------
Proposed | 70.3956   | 315/222
S-Tools  | 69.5629   | 660/660
F5       | 48.9067   | 101/99.8
The authors believe that there will be numerous interesting applications for such an extended 2D-SHA-1 algorithm; however, the focus here is solely on steganography.
4. Results and Discussions
This section reports the results which do better
than the algorithm proposed by Pisarchik et al., [7] in
terms of algorithm complexity and parameters
requirement. Moreover, the algorithm unlike [7] is
securely backed up by a 1D strong hash function. In
[7], the desired outcome converges after some
iterations which need to be visually controlled to flag
the termination of the program; however, in the
proposed case the algorithm is run only once for each
colour component (R, G and B). The procedure
needs only one input from the user which is the
password and it will handle the rest of the process,
while in [7] three parameters namely the reported a,
j, and n need to be specified. The proposal obviously
can be applied to gray scale images as well as binary
images; these extensions are not feasible in [7] as
566
they incorporate into their process relationship
between the three primary colours (R, G and B).
Finally, time complexity which is a problem
admittedly stated in [7] would be reduced greatly by
adopting our method; however, since MATLAB was
used which is an interpreted language while [7] used
C# for their application, this contrast was omitted
here.
The algorithm was tested on the same test image described in [7] to establish a fair judgement. To demonstrate visually that the diffusion requirement is met, Fig. 6 illustrates the output with 'Steganography' and with 'Steganographie' as passwords. Even though only a small change has occurred, the two final chaotic maps differ dramatically, as can be seen in Fig. 6 (d). This, combined with the hash sensitivity shown in Section 3, gives the proposed algorithm excellent properties. From Fig. 6 it can be concluded that the proposed 2D encryption meets the diffusion requirement.
Pisarchik et al. [7] altered the test image by adding a black box at the lower right corner and tried to visualise the difference by means of image histograms. Even though image histograms are a useful tool, they unfortunately do not tell much about the structure of an image, in this case about the displacement of colour values. Histograms accumulate similar colours into distinct bins regardless of their spatial arrangement. A better alternative is to use a similarity metric such as the popular Peak Signal-to-Noise Ratio (PSNR). PSNR runs to infinity if the two examined sets are identical. PSNR is defined by the following system:
$$PSNR = 10 \log_{10}\left(\frac{C_{\max}^2}{MSE}\right) \qquad (5)$$

where MSE denotes the Mean Square Error, which is given by:

$$MSE = \frac{1}{MN} \sum_{x=1}^{M} \sum_{y=1}^{N} (S_{xy} - C_{xy})^2 \qquad (6)$$

and $C_{\max}$ holds the maximum value in the examined image, for example:

$$C_{\max} = \begin{cases} 1 & \text{in double-precision intensity images} \\ 255 & \text{in 8-bit unsigned integer intensity images} \end{cases}$$

Here x and y are the image coordinates, M and N are the dimensions of the image, $S_{xy}$ is the original data and $C_{xy}$ is the modified data.
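Eqs. 5 and 6 translate directly into code (a sketch; the function name is ours):

```python
import math
import numpy as np

def psnr(original: np.ndarray, modified: np.ndarray, c_max: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB (Eqs. 5 and 6);
    returns infinity for identical images."""
    mse = np.mean((original.astype(float) - modified.astype(float)) ** 2)
    if mse == 0.0:
        return math.inf  # identical images, as noted in the text
    return 10.0 * math.log10(c_max ** 2 / mse)

a = np.zeros((1, 1))
b = np.full((1, 1), 10.0)
print(psnr(a, a))  # inf
print(psnr(a, b))  # about 28.13 dB on an 8-bit scale (MSE = 100)
```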
Table 2 compares the PSNR values, which further inform us about the diffusion aspect.
Table 2. PSNR values of the different generated chaotic maps (PSNR is measured in decibels (dB)).

          | Fig 6 (b) | Fig 6 (c)
----------|-----------|----------
Fig 6 (b) | Inf       | 7.7765 dB
Fig 6 (c) | 7.7765 dB | Inf
It was mentioned in Section 2 that Pisarchik et al.'s algorithm involves a rounding operator applied each time the program is invoked across the iterations. This study does not adopt that feature, as it is believed to cause a loss of information when the embedded data is reconstructed. In this proposal the algorithm goes in one direction, and recovery is initiated by the same password and proceeds in parallel, i.e., not in reverse order. Fig. 8 depicts the input image and the recovered image; the PSNR reaches infinity, which means the two images are identical. Fig. 9 shows the output of the algorithm applied to a binary image.
5. Conclusion
This paper presents a new encryption algorithm for two-dimensional data such as images, which serves as a pre-processing step in the ongoing research project named "Steganoflage" [10]. The proposal is initiated by a password supplied by the user. The process then applies the introduced extension of the SHA-1 algorithm to handle 2D data. An Irreversible Fast Fourier Transform (IrFFT) is applied to generate more scattered data. It was shown that the method outperforms that of Pisarchik et al. [7] in many ways.
References
[1] Brown, A. 1996. S-Tools [online]. [Accessed 04th
April 2008]. Available from World Wide Web:
<http://www.jjtc.com/Security/stegtools.htm>
[2] Westfeld, A. 2001. F5 [online]. [Accessed 04th April
2008]. Available from World Wide Web:
<http://wwwrn.inf.tu-dresden.de/~westfeld/f5.html>
[3] Hioki H. (2002). A Data Embedding Method Using
BPCS Principle with New Complexity Measures.
Proceedings of the Pacific Rim Workshop on Digital
Steganography, pp.30-47.
[4] Wang, Y., Liao, X., Xiao, D., and Wong, K.-W. (2008). One-way hash function construction based on 2D coupled map lattices. Information Sciences 178 (2008): 1391-1406.
[5] US Secure Hash Algorithm 1 [online]. [Accessed 05th
April 2008]. Available from World Wide Web:
<http://www.faqs.org/rfcs/rfc3174>.
[6] Shih, F. (2008). Digital Watermarking and
Steganography, Fundamentals and Techniques. CRC
Press. USA, pp: 22-24.
[7] Pisarchik, A. N., Flores-Carmona, N. J., and Carpio-Valadez, M. (2006). Encryption and decryption of images with chaotic map lattices. Chaos 16, 033118 (2006). American Institute of Physics.
[8] Cheddad, A., Condell, J., Curran, K. and Mc Kevitt, P.
(2008). Biometric Inspired Digital Image
Steganography. In the proceedings of the 15th Annual
IEEE International Conference and Workshops on the
Engineering of Computer-Based Systems (ECBS'08).
pp. 159-168. IEEE Computer Society.
[9] Cheddad, A., Mohamad, D. and Abd Manaf, A. (2008). Exploiting Voronoi Diagram Properties in Face Segmentation and Features Extraction. Pattern Recognition 41 (2008): 3842-3859, Elsevier Science.
[10] Steganoflage, available from:
http://www.infm.ulst.ac.uk/~abbasc/
Appendix:
Figure 4. Block diagram of the steps used in the proposed algorithm for image encryption.
Figure 6. Our 2D-SHA-1 algorithm: (a) test image (Mother of the Nature), (b) chaotic map using 'Steganography' as password, (c) chaotic map using 'Steganographie' as password and (d) the difference between (b) and (c).
568
... Mean Square Error (MSE) is the cumulative squared error between two digital images and can be used to check the avalanche effect. Let C 1 and C 2 be two ciphertext images whose corresponding keys are differ by one bit, then MSE can be calculated as [40], [41]: ...
... Unified Average Change Intensity (U ACI) determines the average intensity of differences between the two images. Mathematically U ACI can defined as [40], [41]: ...
... Both P andP are encrypted to give C andC, respectively. A good diffusion algorithm is guaranteed if C andC differ from each other in half of their bits [40], [41]. If the changes are small, this might provide a way to reduce the size of the key space to be searched [40], [41]. ...
Article
Full-text available
In recent years, there has been significant development in multimedia technologies. Transmission of multimedia data such as audio, video and images over the Internet is now very common. The Internet, however, is a very insecure channel and this possess a number of security issues. To achieve confidentiality and security of multimedia data over an insecure channel like the Internet, a number of encryption schemes have been proposed. The need to develop new encryption schemes comes from the fact that traditional encryption schemes for textual data are not suitable for multimedia data stream. This paper presents a framework to evaluate image encryption schemes proposed in the literature. Instead of visual inspection, a number of parameters, for example, correlation coefficient, information entropy, compression friendliness, number of pixel change rate and unified average change intensity etc., are used, to quantify the quality of encrypted images. Encryption efficiency analysis and security evaluation of some conventional schemes like the Advanced Encryption Standard (AES) and Compression Friendly Encryption Scheme (CFES) is also presented. The security estimations of AES and CFES for digital images against brute-force, statistical, and differential attacks are explored. Experiments results are presented to test the security of these algorithms for digital images. After analysis of AES and CFES, some weaknesses have been discovered in CFES. These weaknesses were mainly related to low entropy and horizontal correlation in encrypted images.
... 1 xi N is calculated as [8], [9]: ...
Article
Full-text available
security and memory management are the major demands for electronics devices like cell phones, PMPS, iphones and digital cameras. This paper suggested a high level of security mechanism by considering the concept of steganography along with the principles of image compression and cryptography. The chaos based on image encryption using the Aizawa attractor map is applied. Secured stego-image creator and secured image are constructed to provide high level of security in addition to achieve less memory space for storage. The encrypted image security is evaluated via statistical measure of applying correlation function between two adjacent pixels, then differential attack and key space analysis were achieved. This method aims to recover the problem of encryption inability such as level of security and small key space. Keywords: Chaotic function, chaotic trigonometric map, Integer Wavelet and DCT Transforms, Image encryption, Aizawa attractor map RESUMEN / la seguridad y la gestión de la memoria son las principales demandas de dispositivos electrónicos como teléfonos celulares, PMPS, iphones y cámaras digitales. Este documento sugirió un alto nivel de mecanismo de seguridad al considerar el concepto de esteganografía junto con los principios de compresión de imágenes y criptografía. Se aplica el caos basado en el cifrado de imágenes utilizando el mapa de atracción de Aizawa. El creador seguro de imágenes de stego y la imagen segura están construidas para proporcionar un alto nivel de seguridad además de lograr menos espacio de memoria para el almacenamiento. La seguridad de la imagen encriptada se evalúa mediante la medición estadística de la aplicación de la función de correlación entre dos píxeles adyacentes, luego se lograron el ataque diferencial y el análisis del espacio clave. Este método tiene como objetivo recuperar el problema de la incapacidad de cifrado, como el nivel de seguridad y el pequeño espacio clave. 
Palabras clave: función caótica, mapa trigonométrico caótico, transformaciones de onda entera y DCT, cifrado de imagen, mapa de atracción de Aizawa.
... Mean Square Error (MSE) is the cumulative squared error between two digital images and can be used to check the avalanche effect. MSE can be calculated using (2.15) [20], [21]. ...
Article
Full-text available
In recent years, a significant improvement in multimedia technologies has been observed and therefore, transmission of these data over the network is very common. The network (mainly internet) is not a secure channel and it has a number of security related problems. To achieve security of multimedia data over an insecure channel, a number of encryption methods have been developed. This paper gives a study on different methodology to evaluate image encryption techniques proposed in the literature. Only the visual inspection is not enough and therefore a number of parameters like, correlation coefficient, NPCR. UACI, information entropy, compression friendliness, PSNR, histogram analysis etc., are used, to judge the quality of encrypted images.
... The encryption of the payload prior to embedding is discussed in [29,30], and the basic notes that should be observed by a steganographer are defined. First, in order to eliminate the attack of comparing the original image file with the stego-image, it is advised that a completely new image is created and destroyed after generating the stego-image. ...
Article
Full-text available
In modern communication systems, one of the most challenging tasks involves the implementation of adequate methods for successful and secure transfer of confidential digital information over an unsecured communication channel. Many encryption algorithms have been developed for protection of confidential information. However, over time, flaws have been discovered even with the most sophisticated encryption algorithms. Each encryption algorithm can be decrypted within sufficient time and with sufficient resources. The possibility of decryption has increased with the development of computer technology since available computer speeds enable the decryption process based on the exhaustive data search. This has led to the development of steganography, a science which attempts to hide the very existence of confidential information. However, the stenography also has its disadvantages, listed in the paper. Hence, a new method which combines the favourable properties of cryptography based on substitution encryption and stenography is analysed in the paper. The ability of hiding the existence of confidential information comes from steganography and its encryption using a coding table makes its content undecipherable. This synergy greatly improves protection of confidential information.
Research
Full-text available
Steganography is the science that involves communicating secret data in an appropriate multimedia carrier like image, video and audio files. it does not replace cryptography but rather boots the security using its obscurity features. When we work on a network the security requirements of a user as well as a network increases. There are number of available ways over the network to achieve the information security. Till now the available methods hide the secret data over the image on a fixed pattern that makes a user identify the pattern easily. We are providing a dynamic pattern extraction approach using biometric. It advocates an object-oriented approach in which skin tone detected areas in the image are selected for embedding where possible. After detecting the skin area, edge detection is performed by using the canny edge detection method. As the edges will be detected we will use this area as the pattern to hide the data over the image. The secret data is compressed using the DWT technique and then further compressed secret information is encrypted using RSA algorithm with bit shift method. This proposed technique provides more security to the data as data is embedded not in the whole image but only at some specified location. Due to this image distortion of the carrier image is less and it is difficult to identify the Secret image.
Article
Full-text available
Digital images have become part of everyday life by demonstrating its usability in a variety of fields from education to space research. Confidentiality and security of digital images have grown significantly with increasing trend of information interchange over the public channel. Cryptography can be used as a successful technique to prevent image data from unauthorized access. Keeping the nature of image data in mind, several encryption techniques are presented specifically for digital images, in literature during past few years. These cryptographic algorithms lack a benchmark for evaluation of their performance, cryptographic security and quality analysis of recovered images. In this study, we have designed and developed a benchmark based on all the parameters necessary for a good image encryption scheme. Extensive studies have been made to categories all the parameters used by different researchers to evaluate their algorithms and an optimum benchmark for evaluation is formulated. This benchmark is used to evaluate three image encryption schemes. The results of evaluation have highlighted the specific application areas for these image encryption schemes.
Conference Paper
The technique used to hide copyright information (a watermark) in a digital audio signal is termed audio watermarking. Audio watermarking is an excellent approach to mitigating the challenges that arise from the easy copying and distribution of audio files downloaded or uploaded through the web. Earlier audio watermarking algorithms used an image, a binary logo or a unique pattern as the watermark. To tackle online music piracy, deliberate tampering with copyrighted audio data, and unauthorized distribution of audio files, an efficient audio watermarking method is suggested here. In this method the copyright information (watermark), which is an audio signal of shorter length, is imperceptibly added into the original audio signal of greater duration. Two audio watermarking methods are proposed, using DCT (Discrete Cosine Transform) and DWT-SVD (Discrete Wavelet Transform with Singular Value Decomposition), with an audio signal encrypted by a secure hash algorithm as the watermark. A comparative study of the performance of the DCT and DWT-SVD methods is made in terms of PSNR (Peak Signal-to-Noise Ratio) and MSE (Mean Square Error). The results show that transparency is good for both methods and that the DWT-SVD method is more robust against signal attacks. The basic aim of this work is to provide a beneficial and satisfactory method for secure audio file transmission with copyright protection.
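The two quality metrics named above have standard definitions, which can be sketched in a few lines; `peak` here is the maximum sample value (255 for 8-bit samples), and the function names are illustrative:

```python
import math

def mse(original, processed):
    # Mean squared error between two equal-length sample sequences.
    n = len(original)
    return sum((a - b) ** 2 for a, b in zip(original, processed)) / n

def psnr(original, processed, peak=255.0):
    # Peak signal-to-noise ratio in dB; higher means the watermarked
    # signal is closer to the original (better transparency).
    e = mse(original, processed)
    if e == 0:
        return float("inf")  # identical signals
    return 10.0 * math.log10(peak * peak / e)
```

A watermarking scheme is typically judged transparent when the PSNR between the original and watermarked signal stays high (often quoted above ~30 dB) while MSE stays correspondingly low.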
Article
We propose a secure algorithm for direct encryption and decryption of digital images with chaotic map lattices. The basic idea is to convert, pixel by pixel, the image colour to chaotic logistic maps one-way coupled by initial conditions. After a small number of iterations and cycles, the image becomes indistinguishable due to the inherent properties of chaotic systems. Since the maps are coupled, the image can be completely recovered by the decryption algorithm if the map parameters, number of iterations, number of cycles, and the image size are exactly known.
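The core idea of chaos-based image ciphers can be illustrated with a much simpler variant than the coupled-lattice scheme above: a single logistic map generates a keystream from its initial condition and parameter (the key), and pixels are XORed with it. This is a hedged, simplified sketch, not the cited algorithm; the parameter values and function names are assumptions for illustration.

```python
def logistic_keystream(x0, r, n, burn_in=100):
    # Iterate the logistic map x -> r*x*(1-x) and quantise each state to
    # a byte. x0 (0 < x0 < 1) and r (~3.99, chaotic regime) act as the key.
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1.0 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)
    return stream

def xor_cipher(pixels, x0=0.3567, r=3.99):
    # Encrypt or decrypt a flat list of byte values; XOR with the same
    # keystream is its own inverse, so one function serves both roles.
    ks = logistic_keystream(x0, r, len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]
```

As in the cited scheme, decryption succeeds only when the exact key (here `x0` and `r`) is known; tiny deviations in either value produce a completely different keystream because of the map's sensitivity to initial conditions.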
Conference Paper
Steganography is defined as the science of hiding or embedding "data" in a transmission medium. Its ultimate objectives, which are undetectability, robustness (i.e., against image processing and other attacks) and capacity of the hidden data (i.e., how much data we can hide in the carrier file), are the main factors that distinguish it from other "sister" sciences, namely watermarking and cryptography. This paper provides an overview of well-known steganography methods. It identifies current research problems in this area and discusses how our current research approach could solve some of these problems. We propose using human skin-tone detection in colour images to form an adaptive context for an edge operator, which will provide an excellent secure location for data hiding.
Book
Every day millions of people capture, store, transmit, and manipulate digital data. Unfortunately, free-access digital multimedia communication also provides virtually unprecedented opportunities to pirate copyrighted material. Providing the theoretical background needed to develop and implement advanced techniques and algorithms, Digital Watermarking and Steganography:
• Demonstrates how to develop and implement methods to guarantee the authenticity of digital media
• Explains the categorization of digital watermarking techniques based on characteristics as well as applications
• Presents cutting-edge techniques such as the GA-based breaking algorithm on the frequency-domain steganalytic system
The popularity of digital media continues to soar. The theoretical foundation presented within this valuable reference will facilitate the creation of new techniques and algorithms to combat present and potential threats against information security.
Article
This paper presents a new method for embedding data into an image called "A Block Complexity based Data Embedding" (ABCDE). The principle of the method is the same as that of BPCS-Steganography. Embedding is performed by replacing pixel data of noisy regions in an image with other noisy data obtained by converting the data to be embedded. A complexity measure is defined in BPCS-Steganography for discriminating noisy regions in an image. It is a suitable measure, but is not always applicable. Two new complexity measures, called run-length irregularity and border noisiness, are proposed in this paper to properly discriminate noisy regions. An M-sequence is employed for converting the data to be embedded into noisy data. ABCDE can be successfully applied to various images. We can expect that nearly 50% of the pixels of an image can be used for embedding without degrading its quality.
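The BPCS-style complexity measure that ABCDE builds on counts how often adjacent bits differ within a bit-plane block. A minimal sketch of that baseline measure (not the paper's two new measures, and with a hypothetical function name) might look like this:

```python
def border_complexity(block):
    # BPCS-style complexity: the fraction of horizontally and vertically
    # adjacent bit pairs whose values differ, out of all such pairs.
    # 0.0 = perfectly flat block, 1.0 = perfect checkerboard (maximally noisy).
    n = len(block)                      # square n x n binary block
    changes = total = 0
    for y in range(n):
        for x in range(n):
            if x + 1 < n:
                total += 1
                changes += block[y][x] != block[y][x + 1]
            if y + 1 < n:
                total += 1
                changes += block[y][x] != block[y + 1][x]
    return changes / total
```

Blocks whose complexity exceeds a chosen threshold are treated as noisy and eligible for replacement with (conjugated) secret data; the paper's run-length irregularity and border noisiness refine this test for blocks where the simple measure misclassifies structured patterns as noise.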
Article
Segmentation of human faces from still images is a research field of rapidly increasing interest. Although the field encounters several challenges, this paper seeks to present a novel face segmentation and facial feature extraction algorithm for gray intensity images (each containing a single face object). Face location and extraction must first be performed to obtain the approximate, if not exact, representation of a given face in an image. The proposed approach is based on the Voronoi diagram (VD), a well-known technique in computational geometry, which generates clusters of intensity values using information from the vertices of the external boundary of Delaunay triangulation (DT). In this way, it is possible to produce segmented image regions. A greedy search algorithm looks for a particular face candidate by focusing its action in elliptical-like regions. VD is presently employed in many fields, but researchers primarily focus on its use in skeletonization and for generating Euclidean distances; this work exploits the triangulations (i.e., Delaunay) generated by the VD for use in this field. A distance transformation is applied to segment face features. We used the BioID face database to test our algorithm. We obtained promising results: 95.14% of faces were correctly segmented; 90.2% of eyes were detected and a 98.03% detection rate was obtained for mouth and nose.
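The distance transformation mentioned at the end of the abstract above has a compact classical form. As an illustrative sketch (a two-pass city-block chamfer transform, not necessarily the exact variant the cited paper uses), each background cell receives its L1 distance to the nearest foreground cell:

```python
def distance_transform(binary):
    # Two-pass chamfer (city-block) distance transform over a binary grid:
    # each cell ends up holding its L1 distance to the nearest 1-cell.
    h, w = len(binary), len(binary[0])
    INF = h + w                         # upper bound on any L1 distance
    d = [[0 if binary[y][x] else INF for x in range(w)] for y in range(h)]
    for y in range(h):                  # forward pass: top-left neighbours
        for x in range(w):
            if y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    for y in range(h - 1, -1, -1):      # backward pass: bottom-right neighbours
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d
```

In a face-segmentation pipeline, such a transform applied to a binary feature map (e.g., detected facial feature candidates) lets ridges and peaks of the distance map delineate feature regions.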
Article
An algorithm for constructing a one-way hash function based on spatiotemporal chaos is proposed. A two-dimensional coupled map lattice (2D CML) with parameters leading to the largest Lyapunov exponent is employed. The state of the 2D CML is dynamically determined by its previous state and the message bit at the corresponding positions. The hash value is obtained by a linear transform on the final state of the 2D CML. Theoretical analysis and computer simulation indicate that our algorithm has good statistical properties, strong collision resistance and high flexibility. It is practical and reliable, with high potential to be adopted as a strong hash function for providing data integrity.
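To make the CML-hash construction concrete, here is a deliberately simplified toy: a one-dimensional coupled logistic-map lattice (not the paper's 2D lattice, and with assumed parameter values and function name) that injects message bits into the site updates and reads the digest off the final state.

```python
def cml_hash(message, r=3.99, eps=0.5, width=8, rounds=4):
    # Toy one-way hash driven by a 1-D coupled logistic-map lattice with
    # periodic boundary; a simplified stand-in for the 2D CML construction.
    lattice = [(i + 1) / (width + 1) for i in range(width)]  # fixed initial state
    f = lambda x: r * x * (1.0 - x)                          # chaotic local map
    for _ in range(rounds):
        for byte in message:
            for i in range(width):
                bit = (byte >> (i % 8)) & 1
                # couple each site to its left neighbour, then perturb the
                # state with the message bit (the "message injection" step)
                coupled = (1 - eps) * f(lattice[i]) + eps * f(lattice[i - 1])
                lattice[i] = (coupled + bit * 1e-3) % 1.0
    # linear transform of the final state into a fixed-size digest
    return bytes(int(x * 256) % 256 for x in lattice)
```

Because the lattice operates in the chaotic regime, the tiny per-bit perturbations are amplified exponentially over the rounds, so nearby messages diverge to unrelated digests; a real construction would of course need the 2D lattice, careful parameter choice and the security analysis the paper provides.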
  • A. Brown. S-Tools [online]. Available from World Wide Web.
  • A. Westfeld. F5 [online]. Available from World Wide Web.