Feature base fusion for splicing forgery detection
based on neuro fuzzy
Habib Ghaffari Hadigheh · Ghazali bin Sulong
Received: date / Accepted: date
Abstract Most research on image forensics has focused on detecting artifacts introduced by a single processing tool. This has led to the development of many specialized algorithms that look for one or more particular footprints under specific settings. Naturally, the performance of such algorithms is not perfect, so the output they provide might be noisy, inaccurate, and only partially correct. Furthermore, in practical scenarios a forged image is often the result of applying several tools available in image-processing software. Reliable tamper detection therefore requires more powerful tools that can deal with a variety of tampering scenarios. Fusion of forgery detection tools based on a Fuzzy Inference System has previously been used to address this problem. However, adjusting the membership functions and defining proper fuzzy rules to attain better results are time-consuming processes, which can be considered the main disadvantage of fuzzy inference systems. In this paper, a Neuro-Fuzzy inference system for the fusion of forgery detection tools is developed. The neural-network character of such systems provides an appropriate mechanism for automatically adjusting the membership functions. Moreover, the initial fuzzy inference system is generated using fuzzy clustering techniques. The proposed framework is implemented and validated on a benchmark image splicing data set, in which three forgery detection tools are fused with an adaptive Neuro-Fuzzy inference system. The results show that Neuro-Fuzzy inference systems can be a better approach for the fusion of forgery detection tools.
Keywords Machine Learning · Splicing Forgery Detection · Image Processing
Habib Ghaffari-Hadigheh
Universiti Tecknologi Malaysia (UTM)
Tel.: +98-914-3091405
E-mail: ghhabib2@live.utm.my
Ghazali bin Sulong
Universiti Malaysia Terengganu (UMT)
Tel.: +60-177-467128
E-mail: ghazali.s@umt.edu.my
arXiv:1701.08374v1 [cs.CV] 29 Jan 2017
1 Introduction
Nowadays, the use of digital images and photos taken by various smart recording devices is becoming ever more prevalent, along with the extensive use of social media such as Facebook, Twitter, and Instagram. Individuals tend to spend a considerable amount of time surfing the Internet and share almost every moment of their lives, as well as events in their districts such as festivals, concerts, or dreadful incidents, e.g., terrorist attacks, armed robberies, or human-rights violations. A great many devices now produce digital images, and almost every communication device has Internet access and is equipped with a digital camera that produces very high-resolution pictures. These snapshots and professionally taken images are of great value, since they form an inevitable part of the accompanying context in the media and help elucidate different aspects of events; at the same time, they can be a source of troublesome consequences on both local and global scales.
Images can be the means of deliberate and cynical distortion of reality, aimed at conveying particular messages without other corroborating documents. A manipulated image taken at a protest may be intended to exaggerate the crowd gathered in support of an idea that might otherwise be ignored by the authorities. Distorted shots claimed to have been taken from the private moments of celebrities might be used for extortion, or to force them to act in favor of third parties. This innate and potential quality of pictures in general, and digital images in particular, makes them susceptible to forgery by digital image editing and post-processing.
The issues above assert the necessity of providing reliable tools to authenticate the originality of such images, which has attracted many researchers to work on detecting possible traces of forgery. Forgery detection methods are categorized into two general branches, active and passive [Farid (2009)]. The former is based on the idea of inserting information inside images and using it to authenticate them and prove their integrity. This information is usually inserted either at the time the photo is taken or during post-processing operations. This approach requires special hardware, while the embedded information can easily be altered [Katzenbeisser et al (2000); Cox et al (2003)]. The latter exploits only the image data to detect possible clues of forgery. Since most devices on the market are not equipped with such complicated tools to accomplish the first method, and the photos they take are consequently free of embedded information, passive methods are more practical [Farid (2009)].
There are different kinds of passive methods for detecting possible forgeries, while none of them claims 100 percent accuracy, since forgers sometimes manipulate images so professionally that detecting the forgery becomes very hard or even impossible [Kirchner and Böhme (2007); Kirchner and Böhme (2008)]. Image splicing, copy-move attacks, and image retouching are three common types of forgery. Most studies focus on only one of these types, because the corresponding detection methods usually do not rely on features that are common to all of them [Avcibas et al (2004); Hsiao and Pei (2005)].
In this research, we focus only on image splicing. Splicing occurs when a forged image is produced by combining parts of more than one image. Splicing detection is among the most challenging problems, and there is still no ultimate, guaranteed solution [Ms. Sushama (2014)]. To deal with this uncertainty,
we use a Neuro-Fuzzy Inference System (NFIS) for the fusion of features extracted by three different splicing forgery detection methods: Discrete Wavelet Transform (DWT) decomposition [Fu et al (2006)], edge images based on the Gray Level Co-occurrence Matrix (GLCM) [Wang et al (2009)], and N-Run Length [Dong et al (2009)]. To the best of our knowledge, this is the first study to use an NFIS to combine these methods.
The remainder of this paper is organized as follows. Section 2 reviews the literature of the problem. The methodology of the study is described in Section 3. Section 4 presents the experimental results, and the final section discusses the results and potential directions for future work.
2 Problem Background
Image splicing is a kind of forgery attack implemented by combining fragments of the same image or of different images, without further post-processing such as smoothing the boundaries between adjacent fragments [Zhang et al (2008)]. Several studies have been conducted on splicing forgery detection. One group has focused on detecting possible forgery through statistical analysis of image pixel information; others have concentrated on inconsistencies in lighting directions to discover potentially existing traces of splicing [Redi et al (2011)].
Any splicing operation, even when the forger tries to mask it with blending techniques, leaves some traces in the image statistics. These statistical inconsistencies can subsequently be used to locate the manipulated region. The primary application of this approach traces back to forgery detection in digital signals [Farid (1999)]. The main idea was that, when digital data are deformed, spectral analysis can be used as a detection tool. The technique was later adapted to two-dimensional signals in image processing [Ng et al (2004)]. For more details we refer the interested reader to [Birajdar and Mankar (2013)].
Illumination analysis is another trend in splicing forgery detection, in which illumination directions are estimated by processing the color intensities of neighboring pixels. The majority of objects in an authentic image are expected to have consistent lighting directions [Redi et al (2011)]. Although modern editing tools allow the traces of splicing to be concealed in a deceptive manner, it is not always possible to match the lighting conditions of the fragments, even when the forgery has been performed by professionals. One of the pioneering attempts aimed to estimate the incident light direction of different objects in order to highlight possible mismatches [Johnson and Farid (2005)]. Further findings are reported in [Johnson and Farid (2007); Zhang et al (2009)].
In spite of numerous practical findings in this area, all of the proposed methods suffer from a tangible degree of indeterminacy. This imperfection is due to post-processing operations performed to hide the traces of forgery, or to the use of lossy compression formats. The problem is referred to as uncertainty, and it can be addressed in several ways. One is to fuse more than one detection tool: either the features extracted by each tool are integrated before decision making [He et al (2006); Dong et al (2009); Chetty and Singh (2010)], or each tool individually produces a decision and a combination rule yields the final decision [Barni and Costanzo (2012)].
Fig. 1 Methodology used for fusion of forgery detection tools
Using fuzzy logic as a combination rule in the fusion of forgery detection tools had been overlooked until it was studied in [Chetty and Singh (2010); Barni and Costanzo (2012)]. The main disadvantage of their method is that adjusting the membership functions in order to acquire accurate results is a time-consuming process. In this paper, we develop the idea presented in [Barni and Costanzo (2012)] and introduce a methodology for detecting splicing forgery automatically using Neuro-Fuzzy fusion. The next section describes the steps of the proposed methodology.
3 Proposed Methodology
The suggested method aims to combine three existing tools: DWT decomposition, edge images based on GLCM, and N-Run Length. In the sequel, the three main steps are described in detail. Preprocessing is the first step; it is responsible for extracting the data required by the subsequent steps. These data consist of feature vectors computed from the images for each tool, which are converted into decision values by a Support Vector Machine (SVM). In the second step, these decision values are fused by an NFIS. Analysis of the results and making the final decision are carried out in the last step. Fig. 1 illustrates the framework for the fusion of forgery detection tools.
3.1 Preprocessing Step
The first step is to make a decision based on each forgery detection tool. Feature extraction is the most important part of this step. The size of the feature vector for the edge images based on GLCM is reasonable [Wang et al (2009)], while a Boosting Feature Selection (BFS) algorithm is used to reduce the size of the feature vectors for DWT decomposition [Fu et al (2006)] and N-Run Length [Dong et al (2009)]. Using BFS with an acceptable number of iterations makes it possible to generate effective feature vectors of smaller dimension, and, as a result, the main classification step can be accomplished faster. An AdaBoost-based feature selection system maintains a probability distribution over the training samples. These probabilities are assumed to be identical at the beginning of the learning stage. AdaBoost then adjusts them over a series of rounds via a weak learning algorithm. It associates a weight with each training sample, and these weights are updated by a multiplicative rule based on the errors of the previous learning round, giving priority to the samples that were not classified correctly by the previous weak learner. Thus, the samples that the weak learners find harder to classify receive greater weights [Majid Valiollahzadeh et al (2008)]. Subsequently, a BFS system based on the basic scheme introduced in [Tieu and Viola (2004)] is designed. Finally, an SVM classifier with a Radial Basis Function (RBF) kernel is used for stable classification, which provides a decision value for each tool on each individual image. Fig. 2 depicts the process of generating decision values with the SVM classifier.
Fig. 2 Process of training/testing for SVM
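To make this step concrete, the following Python sketch ranks features with an AdaBoost ensemble and then trains an RBF-kernel SVM on the selected features, returning one decision value per image. It is only an illustration: the use of scikit-learn's AdaBoostClassifier feature importances as a stand-in for the BFS scheme of [Tieu and Viola (2004)], the function names, and the default hyper-parameters are our assumptions, not the paper's exact implementation (which is in Matlab/LIBSVM).

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

def select_features_by_boosting(X_train, y_train, n_features=30, n_rounds=100):
    """Rank features with an AdaBoost ensemble of decision stumps and keep the
    top-scoring ones (a stand-in for the BFS scheme of Tieu and Viola, 2004)."""
    booster = AdaBoostClassifier(n_estimators=n_rounds, random_state=0)
    booster.fit(X_train, y_train)
    ranked = np.argsort(booster.feature_importances_)[::-1]
    return ranked[:n_features]

def svm_decision_values(X_train, y_train, X_test, selected, C=1.0, gamma='scale'):
    """Train an RBF-kernel SVM on the selected features and return its raw
    decision value for each test image."""
    clf = SVC(kernel='rbf', C=C, gamma=gamma)
    clf.fit(X_train[:, selected], y_train)
    return clf.decision_function(X_test[:, selected])
```

Applied once per tool (DWT, edge images, N-Run Length), this yields three vectors of decision values that are scaled and fused in the following steps.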
3.2 Rule Generation and Fusion Step
Let us discuss the process of converting the decision values generated in the previous step into input values for the fusion of forgery detection tools. All decision values represent detection scores and should be standardized, as required by an NFIS. For this purpose, we convert them to numbers in [0, 1] [Platt et al (1999)]. The main idea is to extract the probability P(class | input) from the decision values of the SVM classifier. To this end, a sigmoid-based function is fitted to the output of the classifier in order to find the two parameters A and B of Eq. 1:

p(f) = \frac{1}{1 + \exp(Af + B)},    (1)

where f is the decision value of the tool. Fig. 3 depicts a sample of the fitted sigmoid function. The plus signs stand for the converted decision values, which are fitted to an appropriate sigmoidal function.
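As a concrete illustration of this scaling, the sketch below fits A and B of Eq. 1 by maximum likelihood. It is a simplified version of the procedure in [Platt et al (1999)]: the full algorithm (and its numerically robust variant in [Lin et al (2007)]) also smooths the 0/1 targets and uses a tailored Newton optimization, which we omit here; the function names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def fit_platt_sigmoid(decision_values, labels):
    """Fit p(f) = 1 / (1 + exp(A*f + B)) to 0/1 labels by minimizing the
    cross-entropy (simplified Platt scaling without target smoothing)."""
    f = np.asarray(decision_values, dtype=float)
    t = np.asarray(labels, dtype=float)    # 1 = authentic, 0 = forged (paper's convention)

    def neg_log_likelihood(params):
        A, B = params
        p = 1.0 / (1.0 + np.exp(A * f + B))
        eps = 1e-12                         # guard against log(0)
        return -np.sum(t * np.log(p + eps) + (1.0 - t) * np.log(1.0 - p + eps))

    result = minimize(neg_log_likelihood, x0=[0.0, 0.0], method='Nelder-Mead')
    A, B = result.x
    return A, B

def scale_decision_values(decision_values, A, B):
    """Map raw SVM decision values into [0, 1] with the fitted sigmoid of Eq. 1."""
    return 1.0 / (1.0 + np.exp(A * np.asarray(decision_values) + B))
```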
Now, these scaled values are used as inputs for training/testing the NFIS, which integrates them into a single decision value. Implementation details are presented in Section 4, and the process is schematically depicted in Fig. 4.
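Since the paper relies on Matlab's ANFIS, the following Python sketch only illustrates the fusion idea: a Sugeno-type fuzzy system over the three scaled decision values, with two fixed Gaussian membership functions per input and linear rule consequents fitted by least squares. This is merely the consequent-estimation half of ANFIS; the membership-function tuning that ANFIS performs, and the cluster-based initialization mentioned in the abstract, are omitted. The class name and the membership-function parameters (centers at 0.25/0.75, width 0.3) are illustrative assumptions.

```python
import numpy as np
from itertools import product

def gaussian(x, c, sigma):
    """Gaussian membership value of x for a fuzzy set with center c and width sigma."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

class SugenoFusion:
    """Stripped-down Sugeno-type fuzzy fusion of the scaled decision values."""

    def __init__(self, n_inputs=3):
        self.n_inputs = n_inputs
        # two fuzzy sets per input ("low", "high"); inputs live in [0, 1]
        self.centers = np.tile([0.25, 0.75], (n_inputs, 1))
        self.sigmas = np.full((n_inputs, 2), 0.3)
        self.rules = list(product(range(2), repeat=n_inputs))  # 2**n_inputs rules
        self.coeffs = None

    def _normalized_firing(self, X):
        W = np.ones((X.shape[0], len(self.rules)))
        for r, combo in enumerate(self.rules):
            for i, k in enumerate(combo):
                W[:, r] *= gaussian(X[:, i], self.centers[i, k], self.sigmas[i, k])
        return W / W.sum(axis=1, keepdims=True)

    def _design_matrix(self, X):
        Wbar = self._normalized_firing(X)
        Xa = np.hstack([X, np.ones((X.shape[0], 1))])   # inputs plus bias term
        # block r holds wbar_r * [x, 1], so the output is linear in the consequents
        return np.hstack([Wbar[:, [r]] * Xa for r in range(len(self.rules))])

    def fit(self, X, y):
        """Least-squares fit of the linear consequent parameters."""
        self.coeffs, *_ = np.linalg.lstsq(self._design_matrix(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._design_matrix(X) @ self.coeffs

# Example: fuse the three scaled SVM outputs and threshold at 0.5,
# following Barni and Costanzo (2012):
#   fused = SugenoFusion().fit(train_inputs, train_labels).predict(test_inputs)
#   predicted_authentic = fused > 0.5
```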
Fig. 3 Sample sigmoid curve generated from the decision values of an SVM.
Fig. 4 ANFIS training/testing process: the decision values of DWT, N-Run Length, and the edge images are converted into standardized inputs, fused by the ANFIS, and thresholded at 0.5 to label the image as original or forged.
3.3 Evaluation and Comparison
Sensitivity and specificity are used to evaluate the final results. Let us define the following notation.
– True Positive (TP). The image is detected as forged, and it really is a forged image.
– False Positive (FP). The image is detected as forged, while it is actually an authentic image.
– True Negative (TN). The image is detected as authentic, and it really is an authentic image.
– False Negative (FN). The image is detected as authentic, while it is actually a forged image.
Here, the sensitivity of the test is the proportion of forged images that are correctly identified as forged. Mathematically,

\text{sensitivity} = \frac{\text{number of TP}}{\text{number of TP} + \text{number of FN}} = \frac{\text{number of TP}}{\text{total number of forged images in the test samples}}.
Further, the specificity is the proportion of authentic images that are correctly identified as not forged. In mathematical notation, it is calculated as

\text{specificity} = \frac{\text{number of TN}}{\text{number of TN} + \text{number of FP}} = \frac{\text{number of TN}}{\text{total number of authentic images in the test samples}}.
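For completeness, a short Python sketch of the two measures is given below. It treats "forged" as the positive class, in line with the definitions above; note that the training labels in Section 4 follow the opposite convention (authentic images labeled 1), so the labels would have to be flipped before calling this helper, whose name is ours.

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity for forged/authentic labelling,
    with 1 = forged (positive class) and 0 = authentic."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    sensitivity = tp / (tp + fn)   # fraction of forged images caught
    specificity = tn / (tn + fp)   # fraction of authentic images cleared
    return sensitivity, specificity
```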
4 Experimental Results
The proposed methodology was implemented on the benchmark Columbia Image Splicing Detection Evaluation Dataset, provided by Columbia University's Computer Graphics and User Interfaces Lab [Yu-Feng Hsu (2006)]. It consists of 933 authentic and 912 spliced image blocks of size 128×128. All images are in grayscale, and the blocks were extracted from the CalPhotos image set. The splicing operation was done by cutting parts of original image blocks and pasting them inside other original blocks, without any post-processing operation. Fig. 5 shows two images from this data set, one authentic and the other its forged version. LIBSVM, a library for support vector machines, was used to implement the SVM classifier in Matlab [Chang and Lin (2011)]. Its RBF kernel has two parameters, C and γ, that must be specified before the training phase starts. Empirical results show that the RBF kernel with default parameter values may lead to misclassification or overfitting of the classifier. Therefore, a method based on grid search was applied, as proposed in [Dong et al (2009)].
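Such a grid search can be sketched as follows; the exponentially growing grid and five-fold cross-validation are the common defaults suggested in the LIBSVM practical guide, not necessarily the exact settings of [Dong et al (2009)].

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def tune_rbf_svm(X_train, y_train):
    """Choose C and gamma for the RBF kernel by cross-validated grid search."""
    param_grid = {
        'C': [2.0 ** k for k in range(-5, 16, 2)],       # 2^-5 ... 2^15
        'gamma': [2.0 ** k for k in range(-15, 4, 2)],   # 2^-15 ... 2^3
    }
    search = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5, scoring='accuracy')
    search.fit(X_train, y_train)
    return search.best_estimator_, search.best_params_
```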
Fig. 5 Image (a) is an original image and (b) is its spliced version, produced using a fragment of another original image.
All calculations and implementations were performed on a desktop computer with an Intel(R) Core(TM) i7-2630QM CPU (2.00 GHz × 8) and 8 GB of RAM, running the Ubuntu 14.04 LTS operating system. The Image Processing Toolbox of Matlab 8.1 was used to convert the images into matrices of size 128 × 128. To obtain the results, we carried out five runs, each covering all images of the data set. In each run, 90% of the images were randomly selected for training and the rest were left for testing, for both the SVMs and the NFIS. To generate scaled values from the results of each SVM classifier, a method based on the one introduced in [Platt et al (1999)] was used. An image is considered more likely to be authentic as the associated output value approaches one. Accordingly, authentic images of the data set were labeled 1, and the others 0, in both the training and testing steps. The complete algorithm and details on the performance of this method are presented thoroughly in [Platt et al (1999)], and a Matlab implementation has been developed by [Lin et al (2007)].
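The five-run evaluation protocol can be expressed, for instance, as the following split generator; the fixed random seed is our addition, only to make the sketch reproducible.

```python
import numpy as np

def five_run_split(n_images, train_fraction=0.9, n_runs=5, seed=0):
    """Yield (train_idx, test_idx) index pairs for five random 90/10 splits."""
    rng = np.random.default_rng(seed)
    n_train = int(train_fraction * n_images)
    for _ in range(n_runs):
        order = rng.permutation(n_images)
        yield order[:n_train], order[n_train:]

# for train_idx, test_idx in five_run_split(n_images=1845):
#     ...train the SVMs, scale their decision values, train/test the ANFIS...
```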
We examined different kinds of Neuro-Fuzzy systems to find out which of them gives a better accuracy rate. The input membership functions are Gaussian, while the output membership function is either constant or linear, depending on the limitations of the Matlab Fuzzy Logic Toolbox, which we used to implement the NFIS. Owing to these limitations, we were only able to implement the Adaptive Neuro-Fuzzy Inference System (ANFIS) [Turevskiy (2014); Castillo et al (2007)]. The final results were classified using the threshold adopted by [Barni and Costanzo (2012)]: when the final decision value of the NFIS is greater than 0.5, the image is considered authentic, and otherwise it is considered forged. Fig. 4 also depicts the flowchart of this evaluation step.
The results are reported in Tables 1 and 2. The former gives the best obtained sensitivity results, and the latter the corresponding specificity results. Both tables contain five columns: the first denotes the number of features selected by BFS, which varies from 30 to 100; the case in which all features are used appears in the last row.
Table 1 Results of Fusion in terms of Sensitivity

Number of features   DWT      Edge Images   N-Run Length   NFIS
30                   82.32%   50.61%        59.76%         85.37%
50                   80.00%   50.30%        55.76%         83.03%
75                   83.54%   51.22%        59.76%         86.59%
100                  82.32%   50.00%        59.76%         82.93%
All                  89.47%   32.26%        67.76%         72.37%
Table 2 Results of Fusion in terms of Specificity

Number of features   DWT      Edge Images   N-Run Length   NFIS
30                   72.54%   74.19%        67.08%         71.61%
50                   70.55%   74.85%        56.44%         85.28%
75                   76.13%   79.73%        63.23%         70.32%
100                  76.77%   80.65%        63.23%         81.29%
All                  74.00%   86.67%        66.67%         91.33%
The subsequent three columns respectively denote the results of DWT, the edge images based on GLCM, and N-Run Length, and the final column reports the results of the fusion based on the NFIS.
As shown in Table 1, the best sensitivity for DWT and N-Run Length is achieved when all of the features are used, while this is not the case for the other methods. The sensitivity achieved by the NFIS shows that there is no need to consider all features. Moreover, the results of the NFIS are at least as promising as those of the individual tools. As Table 2 shows, the behavior of the specificity is the opposite: the best results for the edge images based on GLCM and for the NFIS are obtained when all features are used. The NFIS competes with the GLCM-based edge images in specificity and is generally better than the other two tools.
5 Conclusion
In this paper, we presented a methodology for decision-level fusion of features, with the aim of automatically finding possible traces of forgery in grayscale images. For this purpose, we integrated the scaled decision values of three different tools using an NFIS. To achieve a lower run time, we applied a BFS algorithm to select the most effective features.
Based on the experimental results, the proposed method yields promising results in comparison with the individual tools. It seems that a trade-off between the two objectives, sensitivity and specificity, should be considered in further investigations. Owing to the limitations of our benchmark data set, we experimented only on grayscale images; adapting the methodology to color images could be challenging. We restate that reducing the run time was not our main concern. Nevertheless, we observed that decreasing the number of features has a positive effect on the run time. Further studies could employ recently advanced machine learning methods such as deep learning.
Acknowledgements We sincerely thank the Columbia University Computer Graphics and User Interfaces Lab for providing the splicing forgery detection data set. We also acknowledge UTM for providing facilities and a fruitful environment for this research.
References
Avcibas I, Bayram S, Memon N, Ramkumar M, Sankur B (2004) A classifier design
for detecting image manipulations. In: Image Processing, 2004. ICIP’04. 2004
International Conference on, IEEE, vol 4, pp 2645–2648
Barni M, Costanzo A (2012) A fuzzy approach to deal with uncertainty in image
forensics. Signal Processing: Image Communication
Birajdar GK, Mankar VH (2013) Digital image forgery detection using passive
techniques: A survey. Digital Investigation
Castillo O, Melin P, Kacprzyk J, Pedrycz W (2007) Type-2 fuzzy logic: Theory
and applications. In: Granular Computing, 2007. GRC 2007. IEEE International
Conference on, pp 145–145, DOI 10.1109/GrC.2007.118
Chang CC, Lin CJ (2011) Libsvm: a library for support vector machines. ACM
Transactions on Intelligent Systems and Technology (TIST) 2(3):27
Chetty G, Singh M (2010) Nonintrusive image tamper detection based on fuzzy
fusion. International Journal of Computer Science and Network Security 10:86–
90
Cox I, Miller M, Bloom J (2003) Digital watermarking. Morgan Kaufmann Publishers, San Francisco, CA
Dong J, Wang W, Tan T, Shi YQ (2009) Run-length and edge statistics based
approach for image splicing detection. In: Digital Watermarking, Springer, pp
76–87
Farid H (1999) Detecting digital forgeries using bispectral analysis
Farid H (2009) Exposing digital forgeries from jpeg ghosts. Information Forensics
and Security, IEEE Transactions on 4(1):154–160
Fu D, Shi YQ, Su W (2006) Detection of image splicing based on hilbert-huang
transform and moments of characteristic functions with wavelet decomposition.
In: Digital Watermarking, Springer, pp 177–187
He J, Lin Z, Wang L, Tang X (2006) Detecting doctored jpeg images via dct
coefficient analysis. In: Computer Vision–ECCV 2006, Springer, pp 423–435
Hsiao DY, Pei SC (2005) Detecting digital tampering by blur estimation. In: Sys-
tematic Approaches to Digital Forensic Engineering, 2005. First International
Workshop on, IEEE, pp 264–278
Johnson MK, Farid H (2005) Exposing digital forgeries by detecting inconsistencies
in lighting. In: Proceedings of the 7th workshop on Multimedia and security,
ACM, pp 1–10
Johnson MK, Farid H (2007) Exposing digital forgeries through specular highlights
on the eye. In: Information Hiding, Springer, pp 311–325
Katzenbeisser S, Petitcolas FA, et al (2000) Information hiding techniques for
steganography and digital watermarking, vol 316. Artech house Norwood
Kirchner M, Böhme R (2007) Tamper hiding: Defeating image forensics. In: Information Hiding, Springer, pp 326–341
Kirchner M, Böhme R (2008) Hiding traces of resampling in digital images. Information Forensics and Security, IEEE Transactions on 3(4):582–592
Lin HT, Lin CJ, Weng RC (2007) A note on Platt's probabilistic outputs for support vector machines. Machine Learning 68(3):267–276
Majid Valiollahzadeh S, Sayadiyan A, Nazari M (2008) Face detection using ad-
aboosted svm-based component classifier
Ms Sushama GR (2014) Review of detection of digital image splicing forgeries
with illumination color estimation. International Journal of Emerging Research
in Management & Technology 3(3)
Ng TT, Chang SF, Sun Q (2004) A data set of authentic and spliced image blocks.
Columbia University, ADVENT Technical Report pp 203–2004
Platt J, et al (1999) Probabilistic outputs for support vector machines and com-
parisons to regularized likelihood methods. Advances in large margin classifiers
10(3):61–74
Redi JA, Taktak W, Dugelay JL (2011) Digital image forensics: a booklet for
beginners. Multimedia Tools and Applications 51(1):133–162
Tieu K, Viola P (2004) Boosting image retrieval. International Journal of Com-
puter Vision 56(1-2):17–36
Turevskiy A (2014) Design and simulate fuzzy logic systems. URL http://www.
mathworks.com/products/fuzzy-logic/
Wang W, Dong J, Tan T (2009) Effective image splicing detection based on image
chroma. In: Image Processing (ICIP), 2009 16th IEEE International Conference
on, IEEE, pp 1257–1260
Yu-Feng Hsu SF (2006) Columbia image splicing detection evaluation dataset.
Zhang W, Cao X, Zhang J, Zhu J, Wang P (2009) Detecting photographic com-
posites using shadows. In: Multimedia and Expo, 2009. ICME 2009. IEEE In-
ternational Conference on, IEEE, pp 1042–1045
Zhang Z, Zhou Y, Kang J, Ren Y (2008) Study of image splicing detection. In:
Advanced Intelligent Computing Theories and Applications. With Aspects of
Theoretical and Methodological Issues, Springer, pp 1103–1110