Ismail Avcibas

PhD

About

42
Publications
18,087
Reads
3,066
Citations
Citations since 2017
0 Research Items
892 Citations
[Chart: citations per year, 2017–2023]
Additional affiliations
September 2006 - February 2011
Baskent University
Position
  • Professor (Associate)
April 1999 - August 2000
Polytechnic Institute of New York University
Position
  • Researcher

Publications

Publications (42)
Article
Full-text available
In this paper we conduct a statistical analysis of the sensitivity and consistency behavior of objective image quality measures. We categorize the quality measures and compare them for still image compression applications. The measures have been categorized into pixel difference-based, correlation-based, edge-based, spectral-based, context-based an...
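As an illustration of the pixel difference-based category mentioned above, here is a minimal sketch of two classic measures (MSE and PSNR); these are standard textbook examples, not necessarily the exact measures evaluated in the paper:

```python
import numpy as np

def mse(original, distorted):
    # Mean squared error: the simplest pixel difference-based measure.
    a = original.astype(np.float64)
    b = distorted.astype(np.float64)
    return float(np.mean((a - b) ** 2))

def psnr(original, distorted, peak=255.0):
    # Peak signal-to-noise ratio in dB, derived from MSE.
    err = mse(original, distorted)
    return float("inf") if err == 0 else 10.0 * np.log10(peak * peak / err)

# Toy 8-bit "image" and a mildly distorted copy
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(img.astype(np.int16) + rng.integers(-5, 6, size=img.shape),
                0, 255).astype(np.uint8)
print("MSE:", mse(img, noisy), "PSNR:", psnr(img, noisy))
```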
Article
A novel technique is presented to identify the codec of a coded audio. The technique does not perform decoding, utilize any coding metadata, or assume information about the structure describing the bit stream format of a codec. The underlying idea of the technique is that the design choices governing the compression level, audio quality and complex...
Article
Full-text available
We propose a fast, simple and new method for identification of audio codecs that does not require decoding of coded audio data. The method uses chaotic and randomness features of coded audio to build models associated with different codecs. The most important features of the method are operating on just a few kilobytes of data sampled randomly from...
Article
In this paper, we use the contourlet transform for digital image manipulation detection. We extract contourlet and wavelet features and test them on a controlled image data set. Results show that contourlet-based features give better success rates than wavelet-based ones.
Article
Full-text available
We propose statistical image models for wavelet-based transforms, investigate their use, and compare their relative merits within the context of digital image forensics. We consider the problems of 1) differentiating computer graphics images from photographic images, 2) source camera and source scanner identification, and 3) source artist identific...
Article
Full-text available
We present a new method for audio codec identification that does not require decoding of coded audio data. The method utilizes randomness and chaotic characteristics of coded audio to build statistical models that represent the encoding process associated with different codecs. The method is simple, as it does not assume knowledge on encoding struct...
Conference Paper
Full-text available
Technologies that make it possible to alter even the content of digital media are developing fast. This has created a demand for mathematical and computational algorithms that can detect modifications to digital media, and work in this area is gaining pace by the day. In this paper, we propose new statistical models for w...
Article
...shows. In this study, statistical models are constructed over wavelet-based transforms, and two groups of features are obtained from the transform coefficients using these statistical models...
Conference Paper
In this paper we present a novel method based on singular value decomposition (SVD) for forensic analysis of digital images. We show that image tampering distorts linear dependencies of image rows/columns and derived features can be accurate enough to detect image manipulations and digital forgeries. Extensive experiments show that the proposed app...
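The row/column linear-dependence idea can be sketched as follows; the rank-based feature and the crude copy-move simulation here are illustrative assumptions, not the paper's actual feature set:

```python
import numpy as np

def svd_dependence_feature(img, tol=1e-8):
    # Fraction of near-zero singular values of the image matrix.
    # More near-zero values mean stronger linear dependence among rows/columns.
    s = np.linalg.svd(img.astype(np.float64), compute_uv=False)
    return float(np.mean(s < tol * s[0]))

rng = np.random.default_rng(1)
natural = rng.standard_normal((32, 32))   # full-rank stand-in for an untouched block
tampered = natural.copy()
tampered[16:] = tampered[:16]             # crude copy-move: duplicated rows lower the rank
print(svd_dependence_feature(natural), svd_dependence_feature(tampered))
```

Duplicated rows make the matrix rank-deficient, so the tampered block produces a visibly larger fraction of near-zero singular values than the untouched one.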
Conference Paper
Full-text available
Micro and macro statistical features based on Singular Value Decomposition (SVD) have been proposed for source cell-phone identification. The performance of the proposed features is evaluated with naïve and informed classifiers for the identification of original images as well as images under several manipulations. The results have been compared...
Article
With the use of advanced computer graphics rendering software, very successful photorealistic images can be generated. Therefore, it may be hard to discriminate the photographic images from the photorealistic ones. In this work, we propose to use the statistics obtained from the ridgelet transform to distinguish the photographic and photorealistic...
Article
Full-text available
We investigate the use of chaotic-type features for recorded speech steganalysis. Considering that data hiding within a speech signal distorts the chaotic properties of the original speech signal, we design a steganalyzer that uses Lyapunov exponents and a fraction of false neighbors as chaotic features to detect the existence of a stegosignal. We...
Article
The various image-processing stages in a digital camera pipeline leave telltale footprints, which can be exploited as forensic signatures. These footprints consist of pixel defects, unevenness of the responses in the charge-coupled device sensor, and black current noise, and may originate from proprietary interpolation algorithms involved in color f...
Article
Full-text available
We investigate the use of chaotic-type features for recorded speech steganalysis. Considering that data hiding within a speech signal distorts the chaotic properties of the original speech signal, we design a steganalyzer that uses Lyapunov exponents and a fraction of false neighbors as chaotic features to detect the existence of a stegosignal. We...
Conference Paper
We propose a method for the identification of the source cell-phone from a given image. The sensors, digital image formation pipeline and color filter interpolation algorithms used in different brands of cell-phones make the image unique to the camera. Our method is based on the analysis of strict and relative linear dependence in image rows and col...
Article
In this work, we focus on blind identification of digital cameras used in cellular phones. Since cellular phones are so widely deployed, this indirectly provides a solution to their authentication. Digital cameras possess a chain of processing operations such as signal quantization, white balance, demosaicking, color correction, gamma correction, f...
Article
In this paper, we focus on the blind source cell-phone identification problem. It is known that various artifacts in the image-processing pipeline, such as pixel defects, unevenness of the responses in the CCD sensor, black current noise, and proprietary interpolation algorithms involved in the color filter array (CFA), leave telltale footprints. These artifacts,...
Article
Full-text available
Techniques and methodologies for validating the authenticity of digital images and testing for the presence of doctoring and manipulation operations on them has recently attracted attention. We review three categories of forensic features and discuss the design of classifiers between doctored and original images. The performance of classifiers with...
Article
We address the problem of detecting the presence of hidden messages in audio. The detector is based on the characteristics of the denoised residuals of the audio file, which may consist of a mixture of speech and music data. A set of generalized moments of the audio signal is measured in terms of objective and perceptual quality measures. The detec...
Conference Paper
In this paper, we focus on the blind source cell-phone identification problem. The main idea is that the proprietary interpolation algorithm (involved due to the structure of the color filter array, CFA) leaves footprints in the form of correlations across adjacent bit planes of images. For this purpose, we explore a set of binary similarity measures, image q...
Article
Full-text available
We first propose a novel content-independent distortion measurement method and use this methodology for digital audio steganalysis. Content-independent distortion measures are utilized as features for the classifier (steganalyzer) design. Experimental results show that the removal of content dependency from features enhances their discriminatory po...
Conference Paper
Recently proposed perturbed quantization (PQ) data hiding is a novel steganographic scheme which is undetectable with current steganalysis methods. In this paper the PQ steganography scheme is described briefly and a novel singular value decomposition (SVD) based steganalysis method is proposed to detect PQ embedding. The proposed SVD-based stegana...
Article
Full-text available
We present a novel technique for steganalysis of images that have been subjected to embedding by steganographic algorithms. The seventh and eighth bit planes in an image are used for the computation of several binary similarity measures. The basic idea is that the correlation between the bit planes as well as the binary texture characteristics with...
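One simple binary similarity measure between the two lowest bit planes can be sketched as below (taking plane 1 as the most significant bit, so planes 7 and 8 are the two least significant); the agreement rate used here is one illustrative choice, not the paper's full set of measures:

```python
import numpy as np

def bit_plane(img, k):
    # Extract the k-th bit plane, with k=0 the least significant bit.
    return (img >> k) & 1

def bitplane_agreement(img):
    # Fraction of pixels where bit planes 7 and 8 (bits 1 and 0) agree:
    # one simple binary similarity measure between the two lowest planes.
    p7 = bit_plane(img, 1)  # seventh plane
    p8 = bit_plane(img, 0)  # eighth plane (LSB)
    return float(np.mean(p7 == p8))

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print("agreement:", bitplane_agreement(img))
```

LSB embedding randomizes the eighth plane, so a steganalyzer looks for how such measures shift between clean and stego images.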
Conference Paper
Full-text available
In this work, we focus our interest on blind source camera identification problem by extending our results in the direction of M. Kharrazi et al. (2004). The interpolation in the color surface of an image due to the use of a color filter array (CFA) forms the basis of the paper. We propose to identify the source camera of an image based on traces o...
Article
Full-text available
We present a compression technique that provides progressive transmission as well as lossless and near-lossless compression in a single framework. The proposed technique produces a bit stream that results in a progressive, and ultimately lossless, reconstruction of an image similar to what one can obtain with a reversible wavelet codec. In addition...
Article
Full-text available
Since extremely powerful technologies are now available to generate and process digital images, there is a concomitant need for developing techniques to distinguish the original images from the altered ones, the genuine ones from the doctored ones. In this paper we focus on this problem and propose a method based on the neighbor bit planes of the i...
Conference Paper
Full-text available
In this paper we present a framework for digital image forensics. Based on the assumptions that some processing operations must be done on the image before it is doctored, and that this processing introduces a measurable distortion, we design classifiers that discriminate between original and processed images. We propose a novel way of measurin...
Article
Full-text available
We present techniques for steganalysis of images that have been potentially subjected to steganographic algorithms, both within the passive warden and active warden frameworks. Our hypothesis is that steganographic schemes leave statistical evidence that can be exploited for detection with the aid of image quality features and multivariate regressi...
Article
Full-text available
A novel image compression technique is presented that incorporates progressive transmission and near-lossless compression in a single framework. Experimental performance of the proposed coder proves to be competitive with the state-of-the-art compression schemes.
Conference Paper
Full-text available
We present a novel technique for steganalysis of images that have been subjected to least significant bit (LSB) type steganographic algorithms. The seventh and eighth bit planes in an image are used for the computation of several binary similarity measures. The basic idea is that the correlation between the bit planes as well as the binary texture cha...
Article
Full-text available
In this work we comprehensively categorize image quality measures, extend measures defined for gray scale images to their multispectral case, and propose novel image quality measures. They are categorized into pixel difference-based, correlation-based, edge-based, spectral-based, context-based and human visual system {(HVS)-based} measures. Further...
Article
In this work we comprehensively categorize image quality measures, extend measures defined for gray scale images to their multispectral case, and propose novel image quality measures. They are categorized into pixel difference-based, correlation-based, edge-based, spectral-based, context-based and human visual system (HVS)-based measures. Furthermo...
Conference Paper
Full-text available
We present techniques for steganalysis of images that have been potentially subjected to a watermarking algorithm. We show that watermarking schemes leave statistical evidence or structure that can be exploited for detection with the aid of proper selection of image features and multivariate regression analysis. We use some image quality metrics as...
Conference Paper
Full-text available
We present a technique that provides progressive transmission and near-lossless compression in one single framework. The proposed technique produces a bitstream that results in progressive reconstruction of the image just like what one can obtain with a reversible wavelet codec. In addition, the proposed scheme provides near-lossless reco...
Article
We present a technique that provides progressive transmission and near-lossless compression in one single framework. The proposed technique produces a bitstream that results in progressive reconstruction of the image just like what one can obtain with a reversible wavelet codec. In addition, the proposed scheme provides near-lossless reconstruction...
Article
This paper presents a new distortion measure for multi-band image vector quantization. The distortion measure penalizes the deviation in the ratios of the components. We design a VQ coder for the proposed ratio distortion measure. We then give experimental results that demonstrate that the new VQ coder yields better component ratio preservation tha...
