Conference Paper

Image-adaptive Watermarking Using the Improved Signal to Noise Ratio

DOI: 10.1007/978-3-540-74377-4_64 Conference: Computational Intelligence and Security, International Conference, CIS 2006, Guangzhou, China, November 3-6, 2006, Revised Selected Papers
Source: DBLP

ABSTRACT Most applications require both watermark invisibility and robustness, two conflicting requirements. The solution is to apply a suitable perceptual quality metric (PQM) correctly during watermarking. This paper develops a new quality metric, the improved signal-to-noise ratio (iSNR). The improvement covers two aspects: 1) SNR performs much better on an image block of small size than on a whole image; 2) average luminance and gradient information are incorporated into SNR. Next, we propose a new adaptive watermarking framework based on localized quality evaluation, which divides the cover data into nonoverlapping blocks and assigns an independent distortion constraint to each block to control its quality. Compared with frameworks based on global quality evaluation, the new framework fully exploits localized signal characteristics while guaranteeing localized watermark invisibility. Then, a specific implementation of this framework is developed for images, applying iSNR as the quality metric in the sense of maximizing the detection value. Experimental results demonstrate that the proposed watermarking performs very well in both robustness and invisibility.
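The abstract's core mechanism, block-wise quality evaluation with an independent distortion constraint per block, can be sketched as follows. Plain per-block SNR is used because the exact luminance/gradient weighting of iSNR is not given in the abstract; the 8x8 block size and the 35 dB constraint are illustrative assumptions, not values from the paper.

```python
import numpy as np

def block_snr(cover, stego, block=8):
    """Per-block SNR in dB -- the localized quality evaluation the
    paper advocates over a single whole-image SNR."""
    h, w = cover.shape
    snrs = np.full((h // block, w // block), np.inf)
    for i in range(h // block):
        for j in range(w // block):
            sl = (slice(i * block, (i + 1) * block),
                  slice(j * block, (j + 1) * block))
            c = cover[sl].astype(float)
            s = stego[sl].astype(float)
            noise = np.sum((c - s) ** 2)
            if noise > 0:
                snrs[i, j] = 10 * np.log10(np.sum(c ** 2) / noise)
    return snrs

def scale_to_constraint(cover, watermark, min_snr_db=35.0, block=8):
    """Shrink the watermark in each block until that block meets its
    own distortion constraint -- per-block quality control."""
    out = watermark.astype(float).copy()
    h, w = cover.shape
    for i in range(h // block):
        for j in range(w // block):
            sl = (slice(i * block, (i + 1) * block),
                  slice(j * block, (j + 1) * block))
            c = cover[sl].astype(float)
            wm = out[sl]
            e_w = np.sum(wm ** 2)
            if e_w == 0:
                continue
            snr = 10 * np.log10(np.sum(c ** 2) / e_w)
            if snr < min_snr_db:
                # choose a so that 10*log10(E_c / (a^2 * E_w)) == min_snr_db
                a = np.sqrt(np.sum(c ** 2) / (e_w * 10 ** (min_snr_db / 10)))
                out[sl] = wm * a
    return out
```

Because each block is scaled independently, smooth blocks receive a weaker watermark while busy blocks can absorb more energy, which is the point of localized evaluation.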

    ABSTRACT: Digital image watermarking techniques for copyright protection have become increasingly robust. The best algorithms perform well against the now-standard benchmark tests included in the Stirmark package. However, the Stirmark tests are limited since, in general, they do not properly model the watermarking process and consequently are limited in their potential to remove the best watermarks. Here we propose a stochastic formulation of watermarking attacks using an estimation-based concept. The proposed attacks consist of two main stages: (a) watermark or cover data estimation; (b) modification of the stego data aiming at disrupting watermark detection and copyright resolution, taking into account the statistics of the embedded watermark and exploiting features of the human visual system. In the second part of the paper we propose a “second generation benchmark”. We follow the model of the Stirmark benchmark and propose the following six categories of tests: denoising attacks and wavelet compression, watermark copy attack, synchronization removal, denoising/compression followed by perceptual remodulation, denoising and random bending. Our results indicate that even though some algorithms perform well against the Stirmark benchmark, almost all algorithms perform poorly against our benchmark. This indicates that much work remains to be done before claims about “robust” watermarks can be made. We also propose a new method of evaluating image quality based on the Watson metric which overcomes the limitations of the PSNR.
    Signal Processing 81(6):1177–1214, June 2001. DOI:10.1016/S0165-1684(01)00039-1
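The two-stage estimation-based attack described above can be sketched in a few lines: denoise the stego image to estimate the cover, take the residual as a watermark estimate, then remodulate by subtracting an amplified copy of that estimate. The 3x3 mean filter and the `strength` parameter are illustrative simplifications, not the paper's (which uses statistical priors and HVS features).

```python
import numpy as np

def mean_filter3(img):
    """3x3 mean filter used as a crude denoiser / cover estimator."""
    p = np.pad(img, 1, mode='edge')
    acc = np.zeros_like(img, dtype=float)
    for di in range(3):
        for dj in range(3):
            acc += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return acc / 9.0

def estimation_attack(stego, strength=1.5):
    """Stage (a): estimate the cover by denoising, so the residual
    approximates the (high-frequency) watermark.  Stage (b):
    remodulate -- subtract an amplified watermark estimate to push a
    correlation detector's response down."""
    cover_est = mean_filter3(stego)
    wm_est = stego - cover_est
    return stego - strength * wm_est
```

With `strength > 1` the attack overshoots, flipping the sign of the residual correlation rather than merely reducing it, which is why simple additive watermarks fare poorly against such attacks.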
    ABSTRACT: A watermarking algorithm operating in the wavelet domain is presented. Performance improvement with respect to existing algorithms is obtained by means of a new approach to mask the watermark according to the characteristics of the human visual system (HVS). In contrast to conventional methods operating in the wavelet domain, masking is accomplished pixel by pixel by taking into account the texture and the luminance content of all the image subbands. The watermark consists of a pseudorandom sequence which is adaptively added to the largest detail bands. As usual, the watermark is detected by computing the correlation between the watermarked coefficients and the watermarking code, and the detection threshold is chosen in such a way that the knowledge of the watermark energy used in the embedding phase is not needed, thus permitting one to adapt it to the image at hand. Experimental results and comparisons with other techniques operating in the wavelet domain prove the effectiveness of the new algorithm.
    IEEE Transactions on Image Processing 10(5):783–791, February 2001. DOI:10.1109/83.918570
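The embedding and blind correlation detection described above can be sketched with a one-level Haar transform. Scaling the pseudorandom code by coefficient magnitude is a crude stand-in for the paper's pixel-wise HVS mask, and the threshold multiplier is an assumption; the key property shown is that the threshold is computed from the received coefficients alone, so the embedding energy need not be known at detection time.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform; returns the LL approximation and
    the (LH, HL, HH) detail subbands."""
    a = img.astype(float)
    L = (a[:, 0::2] + a[:, 1::2]) / 2.0
    H = (a[:, 0::2] - a[:, 1::2]) / 2.0
    LL = (L[0::2, :] + L[1::2, :]) / 2.0
    LH = (L[0::2, :] - L[1::2, :]) / 2.0
    HL = (H[0::2, :] + H[1::2, :]) / 2.0
    HH = (H[0::2, :] - H[1::2, :]) / 2.0
    return LL, (LH, HL, HH)

def embed(detail, code, alpha=0.35):
    """Add a +/-1 pseudorandom code to a detail subband, scaled by the
    local coefficient magnitude (simplified masking)."""
    return detail + alpha * np.abs(detail) * code

def detect(detail, code, k=3.97):
    """Blind correlation detector: the threshold is a multiple of the
    correlation's standard deviation under the no-watermark hypothesis,
    estimated from the received coefficients only."""
    z = np.mean(detail * code)
    T = k * np.sqrt(np.mean(detail ** 2) / detail.size)
    return z, T
```

Detection then reduces to comparing `z` against `T`: watermarked subbands should exceed the threshold while unmarked ones stay below it.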
    ABSTRACT: This paper suggests a new image compression scheme, using the discrete wavelet transform (DWT), that attempts to preserve the texturally important image characteristics. The main point of the proposed methodology is that the image is divided into regions of textural significance, employing textural descriptors as criteria together with fuzzy clustering methodologies. These textural descriptors include co-occurrence-matrix-based measures and features derived from coherence analysis. While rival image compression methodologies utilizing the DWT apply it to the whole original image, the novel approach presented here involves a more sophisticated scheme: the DWT is applied separately to each region into which the original image is partitioned and, depending on how the region has been texturally clustered, the relative number of wavelet coefficients to keep is then determined. Therefore, different compression ratios are applied to the specified image regions. Reconstruction of the original image involves the linear combination of its corresponding reconstructed regions. An experimental study is conducted to qualitatively assess the proposed compression approach. Moreover, this experimental study aims at comparing different textural measures in terms of the quality of the reconstructed image they yield.
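The region-adaptive coefficient budget described above can be sketched as follows. Local variance stands in for the paper's co-occurrence and coherence descriptors, and a linear variance-to-fraction mapping replaces fuzzy clustering; both substitutions and all thresholds are illustrative, not the authors' method. Only the per-region selection step is shown.

```python
import numpy as np

def texture_score(region):
    """Local variance as a cheap texture proxy (simplification of the
    paper's co-occurrence / coherence descriptors)."""
    return float(np.var(region))

def keep_fraction(score, lo=100.0, hi=2000.0):
    """Map a texture score to the fraction of DWT coefficients kept:
    smooth regions are compressed harder.  Thresholds are illustrative."""
    if score <= lo:
        return 0.05
    if score >= hi:
        return 0.5
    return 0.05 + 0.45 * (score - lo) / (hi - lo)

def compress_region(coeffs, frac):
    """Zero all but the `frac` largest-magnitude coefficients of one
    region, giving each region its own compression ratio."""
    flat = np.abs(coeffs).ravel()
    k = max(1, int(round(frac * flat.size)))
    thresh = np.partition(flat, flat.size - k)[flat.size - k]
    return np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
```

Running `compress_region` with `keep_fraction(texture_score(region))` applies the key idea: textured regions retain up to ten times more coefficients than smooth ones.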