•
    ABSTRACT: This research proposes a novel method to extract image regions of products from an advertisement video by analyzing features that are completely independent of the target object. Specifically, we focus on how each product is emphasized during video production, and propose the use of low-level visual features that leverage the technical know-how of video producers. With such features, our method achieves highly accurate detection of the temporal and spatial locations of the advertised products, regardless of the product domain. Evaluation on an actual advertisement video achieved an F-measure of 79.4%.
    Proceedings of the 2011 IEEE International Conference on Multimedia and Expo, ICME 2011, 11-15 July, 2011, Barcelona, Catalonia, Spain; 01/2011
•
    ABSTRACT: Searching for video clips in TV streams has significant commercial value, for example in tracking advertisements across different TV channels. In this paper, a segment-based advertisement search method is proposed. Robust visual features are first extracted to withstand common video transformations, and two search strategies are then presented, one for long AD clips and one for short ones. Experimental results indicate that the average time to search for one query clip in a 24-hour TV stream is about 1 second, and that the mean recall over 9 channels exceeds 98% at 100% precision. Evaluation results on the copy-detection task at TRECVID confirm the method's robustness and effectiveness.
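    A common way to make such segment search fast enough for a 24-hour stream is an inverted index from per-frame fingerprints to stream positions, with offset voting to align the query clip. The sketch below illustrates that general idea only; it is not the authors' exact method, and the integer fingerprints stand in for whatever robust visual features are actually hashed.

    ```python
    from collections import defaultdict

    def build_index(stream_fps):
        """Map each frame fingerprint to the stream positions where it occurs."""
        index = defaultdict(list)
        for pos, fp in enumerate(stream_fps):
            index[fp].append(pos)
        return index

    def search(index, query_fps):
        """Vote on candidate alignment offsets; the best-supported offset
        locates the query clip inside the stream."""
        votes = defaultdict(int)
        for q_pos, fp in enumerate(query_fps):
            for s_pos in index.get(fp, []):
                votes[s_pos - q_pos] += 1
        if not votes:
            return None, 0
        offset, support = max(votes.items(), key=lambda kv: kv[1])
        return offset, support

    # Toy example: small ints stand in for hashed frame features.
    stream = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
    query = [5, 9, 2]  # occurs in the stream starting at position 4
    index = build_index(stream)
    offset, support = search(index, query)
    ```

    Because each query frame only touches the index buckets it hashes into, lookup cost is nearly independent of the stream length, which is consistent with the sub-second search times the abstract reports.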
•
    ABSTRACT: Content-based copy detection (CBCD) has recently emerged as a promising technique for video monitoring and copyright protection. In this paper, a novel framework for CBCD is proposed. Robust global features and local Speeded-Up Robust Features (SURF) are first combined to describe video content, and a density-sampling method is proposed to improve the generation of the visual codebook. Secondly, the Smith-Waterman algorithm is introduced to find similar video segments, and a video-matching method based on the visual codebook is proposed to calculate the similarity of copied videos. Finally, a hierarchical fusion scheme is used to refine the detection results. Experiments on the TRECVID dataset show that the proposed framework outperforms the average results of the CBCD task at TRECVID 2008.
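    The Smith-Waterman step can be pictured as local sequence alignment over per-frame visual-codebook IDs: a high-scoring local alignment marks a copied segment even when it is embedded in unrelated footage. The sketch below is a generic Smith-Waterman scorer under assumed match/mismatch/gap costs, not the paper's tuned configuration.

    ```python
    def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
        """Local alignment score between two symbol sequences (here, per-frame
        codebook IDs); the highest-scoring cell marks the most similar
        segment pair shared by the two videos."""
        rows, cols = len(a) + 1, len(b) + 1
        H = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                # Local alignment floors every cell at zero, so an alignment
                # can start anywhere in either sequence.
                H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
                best = max(best, H[i][j])
        return best

    # Toy codeword sequences: b contains a copied run [7, 7, 2, 9] from a.
    a = [1, 7, 7, 2, 9, 4]
    b = [5, 7, 7, 2, 9, 8, 8]
    score = smith_waterman(a, b)
    ```

    The zero floor is what distinguishes local from global alignment: unrelated lead-in and trailing frames cost nothing, so only the shared segment contributes to the score.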

