Image-Adaptive Spread Transform Dither Modulation Using Human Visual Model.
This paper presents a new approach to image-adaptive spread-transform dither modulation (STDM). The approach operates in the discrete cosine transform (DCT) domain and modifies the original STDM so that the spread vector is weighted by a set of just-noticeable differences (JNDs) derived from Watson's model before it is added to the cover work. An adaptive quantization step size is then determined under two constraints: 1) the watermarked work is perceptually acceptable, as measured by a global perceptual distance; and 2) the watermarked work lies within the detection region. We derive a strategy for choosing the quantization step. Furthermore, an effective solution is proposed against the amplitude-scaling attack, in which the scaled quantization step is produced from an extracted signal proportional to the amplitudes of the cover work. Experimental results demonstrate that the proposed approach achieves improved robustness and fidelity.
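To make the mechanism concrete, the following is a minimal sketch of JND-weighted STDM embedding and minimum-distance detection. It is an illustration of the general STDM principle only, not the paper's exact algorithm: the function names, the dither convention (offsets of ±Δ/4), and the way the JND vector weights the spread direction are all assumptions for this sketch.

```python
import numpy as np

def stdm_embed(x, u, delta, bit, jnd=None):
    """Embed one bit into cover vector x via spread-transform dither
    modulation.  If a JND vector is given, the spread direction is
    weighted by the perceptual slacks before projection (illustrative
    choice; the paper's weighting may differ)."""
    if jnd is not None:
        u = u * jnd                           # weight spread vector by JNDs
    u = u / np.linalg.norm(u)                 # unit-norm projection direction
    d = delta / 4 if bit else -delta / 4      # dither offsets for bits 1 / 0
    p = x @ u                                 # project cover onto spread vector
    q = delta * np.round((p - d) / delta) + d # quantize onto the bit's lattice
    return x + (q - p) * u                    # perturb only along u

def stdm_detect(y, u, delta, jnd=None):
    """Decode the bit whose dither lattice is closest to the projection."""
    if jnd is not None:
        u = u * jnd
    u = u / np.linalg.norm(u)
    p = y @ u
    dists = []
    for bit in (0, 1):
        d = delta / 4 if bit else -delta / 4
        q = delta * np.round((p - d) / delta) + d
        dists.append(abs(p - q))
    return int(np.argmin(dists))
```

The amplitude-scaling countermeasure described in the abstract can be mimicked here by deriving `delta` from a signal proportional to the cover amplitudes (e.g. a multiple of the mean absolute coefficient), so that an attacker's global scaling of the work scales the quantization step by the same factor.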