Image-Adaptive Spread Transform Dither Modulation Using Human Visual Model.

Conference Paper · November 2006
DOI: 10.1109/ICCIAS.2006.295326 · Source: DBLP
Conference: Computational Intelligence and Security, International Conference, CIS 2006, Guangzhou, China, November 3-6, 2006, Revised Selected Papers

    Abstract

    This paper presents a new approach to image-adaptive spread-transform dither modulation (STDM). The approach operates in the discrete cosine transform (DCT) domain and modifies the original STDM so that the spread vector is weighted by a set of just-noticeable differences (JNDs) derived from Watson's model before it is added to the cover work. An adaptive quantization step size is then determined according to two constraints: 1) the watermarked work is perceptually acceptable, as measured by a global perceptual distance; 2) the watermarked work lies within the detection region. We derive a strategy for choosing the quantization step. Furthermore, an effective solution is proposed against the amplitude scaling attack, in which a scaled quantization step is produced from an extracted signal proportional to the amplitudes of the cover work. Experimental results demonstrate that the proposed approach achieves improved robustness and fidelity.
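
    To make the embedding step described above concrete, the Python sketch below shows a JND-weighted STDM embedder and detector operating on a vector of DCT coefficients. It is a minimal illustration under stated assumptions: the function names, the binary dither lattice, the unit-norm renormalization of the weighted spread vector, and the interface to the Watson-model JNDs are illustrative choices rather than the paper's exact formulation, and the adaptive step-size selection and amplitude-scaling compensation are omitted.

    import numpy as np

    def stdm_embed(x, u, jnd, bit, delta):
        # x    : 1-D array of DCT coefficients of one block (cover work)
        # u    : spread (projection) vector, same length as x
        # jnd  : just-noticeable differences, assumed precomputed from Watson's model
        # bit  : message bit in {0, 1}
        # delta: quantization step size
        #
        # Weight the spread vector by the JND slacks (perceptual shaping),
        # then renormalize so the projection geometry is preserved.
        w = u * jnd
        w = w / np.linalg.norm(w)
        # Project the cover work onto the weighted spread direction.
        p = np.dot(x, w)
        # Dither modulation: quantize the projection onto the lattice of the bit.
        dither = 0.0 if bit == 0 else delta / 2.0
        q = delta * np.round((p - dither) / delta) + dither
        # Move the cover work along the weighted direction to the quantized point.
        return x + (q - p) * w

    def stdm_detect(y, u, jnd, delta):
        # Decode one bit by minimum distance to the two dithered lattices.
        w = u * jnd
        w = w / np.linalg.norm(w)
        p = np.dot(y, w)
        err0 = abs(p - delta * np.round(p / delta))
        err1 = abs(p - (delta * np.round((p - delta / 2.0) / delta) + delta / 2.0))
        return 0 if err0 <= err1 else 1

    In this sketch the receiver is assumed to know the spread vector, the JND weights, and the quantization step; the paper's amplitude-scaling remedy would rescale delta at the detector using a signal extracted in proportion to the received work's amplitudes.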