Image-Adaptive Spread Transform Dither Modulation Using Human Visual Model.
ABSTRACT This paper presents a new approach to image-adaptive spread-transform dither modulation (STDM). The approach operates in the discrete cosine transform (DCT) domain and modifies the original STDM so that the spread vector is weighted by a set of just-noticeable differences (JNDs) derived from Watson's model before it is added to the cover work. An adaptive quantization step size is then determined subject to two constraints: 1) the watermarked work is perceptually acceptable, as measured by a global perceptual distance; 2) the watermarked work lies within the detection region. We derive a strategy for choosing the quantization step. Furthermore, an effective solution is proposed against the amplitude-scaling attack, in which a scaled quantization step is produced from an extracted signal proportional to the amplitudes of the cover work. Experimental results demonstrate that the proposed approach achieves improved robustness and fidelity.
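To make the embedding step concrete, the following is a minimal Python sketch of STDM with a JND-weighted spread vector. It is written under assumed details: the function names, the fixed dithers of ±delta/4, and the externally supplied `delta` are illustration choices (the paper instead derives the step size from the perceptual-distance and detection-region constraints, and `jnd` would come from Watson's model rather than being passed in directly).

```python
import numpy as np

def stdm_embed_jnd(x, u, jnd, bit, delta):
    """Embed one bit into a vector x of DCT coefficients via STDM,
    with the spread vector weighted by JND slacks before addition."""
    uw = jnd * u                                # JND-weighted spread vector
    v = np.dot(u, x)                            # projection of the cover work
    d = delta / 4 if bit else -delta / 4        # dither encoding the bit
    q = delta * np.round((v - d) / delta) + d   # dithered uniform quantizer
    # Shift the projection onto the quantizer lattice along the weighted
    # direction; the scaling keeps np.dot(u, y) exactly equal to q.
    return x + (q - v) * uw / np.dot(u, uw)

def stdm_detect(y, u, delta):
    """Decode the bit by minimum-distance decoding of the projection."""
    v = np.dot(u, y)
    d1, d0 = delta / 4, -delta / 4
    q1 = delta * np.round((v - d1) / delta) + d1
    q0 = delta * np.round((v - d0) / delta) + d0
    return int(abs(v - q1) < abs(v - q0))

# Round-trip check with placeholder data (flat JNDs for illustration only).
rng = np.random.default_rng(0)
x = rng.normal(size=64)                         # stand-in DCT block
u = rng.choice([-1.0, 1.0], size=64)            # spread vector
y = stdm_embed_jnd(x, u, np.ones(64), bit=1, delta=2.0)
assert stdm_detect(y, u, delta=2.0) == 1
```

Because the correction term is added along the JND-weighted direction rather than along the spread vector itself, the distortion is pushed into coefficients with larger perceptual slack while the detector's projection still lands exactly on the quantizer lattice.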
Available from: academypublisher.com
ABSTRACT: This paper proposes a novel self-adaptive differential-energy watermarking scheme based on the Watson visual model, which embeds a robust watermark into video streams according to differential-energy theory. The algorithm adaptively controls the embedding strength of the watermark in the sub-low-frequency AC coefficients of the video stream based on the Watson visual model. It also adaptively chooses the regions in which watermarks are embedded, according to the relationship between an adjustable energy threshold and the regions' differential energy. The watermark is therefore imperceptible while also achieving good robustness. Experiments show that the algorithm has strong robustness and security against common video attacks such as noise, filtering, and compression, with low computational complexity for the energy calculation and high capacity. Journal of Multimedia 06/2009; 4(3).
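As a rough illustration of the region-selection and embedding rule described above, this Python sketch encodes one bit in the sign of the energy difference between two groups of sub-low-frequency AC coefficients. The helper names, the symmetric rescaling, and the single `threshold` parameter (standing in for both the adjustable energy threshold and the Watson-model intensity control) are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def dew_embed_bit(block_a, block_b, bit, threshold):
    """Force the sign of E(A) - E(B) to encode one bit, where block_a and
    block_b hold the sub-low-frequency AC coefficients of a region pair.
    Returns the (possibly rescaled) blocks and whether the region was used."""
    e_a, e_b = float(np.sum(block_a ** 2)), float(np.sum(block_b ** 2))
    if e_a == 0.0 or e_b == 0.0:
        return block_a, block_b, False          # degenerate region: skip it
    target = threshold if bit else -threshold   # desired energy difference
    if (e_a - e_b) * np.sign(target) >= threshold:
        return block_a, block_b, True           # difference already sufficient
    total = e_a + e_b
    if total <= abs(target):
        return block_a, block_b, False          # too little energy: reject region
    # Rescale the pair so the difference becomes `target` while the total
    # energy e_a + e_b is preserved, limiting the perceptual impact.
    a = np.sqrt((total + target) / (2 * e_a))
    b = np.sqrt((total - target) / (2 * e_b))
    return block_a * a, block_b * b, True

def dew_detect_bit(block_a, block_b):
    """Decode the bit from the sign of the energy difference."""
    return int(np.sum(block_a ** 2) > np.sum(block_b ** 2))
```

Rejecting a region pair whose energy cannot support the required difference mirrors the abstract's adaptive choice of embedding regions against the energy threshold: only regions with enough differential-energy headroom carry payload bits.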