A new parallel video understanding and retrieval system
ABSTRACT In this paper, a hybrid parallel computing framework is proposed for video understanding and retrieval. It is a unified computing architecture based on the Map-Reduce programming model that supports both multi-core CPU and GPU architectures. A key task scheduler is designed to parallelize computation tasks. SVM models are trained for video understanding, and the hybrid computing framework is used to train large-scale SVM models, effectively shortening training and processing time. The TRECVID database serves as the experimental content for video understanding and retrieval. Experiments were conducted on two 8-core servers, each equipped with an NVIDIA Quadro FX 4600 graphics card. The results show that the proposed parallel computing framework works well for the video understanding and retrieval system, speeding up system development and delivering better performance.
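The abstract gives no implementation details, so as a rough illustration only, the following minimal Python sketch mimics a Map-Reduce-style pipeline over video shots: a map step emits a per-shot feature statistic and a reduce step groups results by label, the kind of grouped input that SVM training could consume. The shot structure, the toy "feature" (a pixel mean), and all names are hypothetical, not taken from the paper.

```python
from multiprocessing import Pool

def map_features(shot):
    # Map step: compute one feature statistic per video shot.
    # (A real system would extract visual descriptors here.)
    return (shot["label"], sum(shot["pixels"]) / len(shot["pixels"]))

def reduce_features(mapped):
    # Reduce step: group per-label statistics into training input.
    grouped = {}
    for label, value in mapped:
        grouped.setdefault(label, []).append(value)
    return grouped

if __name__ == "__main__":
    shots = [{"label": "sports", "pixels": [10, 20, 30]},
             {"label": "news", "pixels": [5, 15]},
             {"label": "sports", "pixels": [40, 50]}]
    # Multi-core map phase, then a sequential reduce phase.
    with Pool(2) as pool:
        mapped = pool.map(map_features, shots)
    print(reduce_features(mapped))
```

On a real hybrid framework the map phase would be dispatched by the task scheduler to CPU cores or the GPU; here `multiprocessing.Pool` stands in for that scheduler.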
- "Emmanuel et al. presented a parallel approach for a content-based medical image retrieval system. Liu et al. proposed a hybrid parallel computing framework for video understanding and retrieval. The distributed parallel approaches have greatly raised the retrieval system's efficiency, but these approaches cannot fully exploit parallelism because of the speed limit of network communication."
ABSTRACT: Content-based video copy detection (CBCD) is very important for video copyright protection given the growing popularity of video-sharing websites; it deals not only with whether a copy occurs in a query video stream but also with where the copy is located and where it originated. In this paper, we present a video copy detection scheme based on local features which can deal with very large databases in terms of both quality and speed. First, we propose a new clustering algorithm to build "visual words" efficiently. Since our CBCD framework performs indexing using tree structures of cluster centers, the proposed clustering algorithm improves the quality of retrieval and greatly shrinks the off-line training time. Then, we introduce a TF-IDF (term frequency-inverse document frequency) weighted BOF (bag-of-features) voting retrieval method for matching video frames, which is robust to significant video distortion and efficient in terms of memory usage and computation time. Furthermore, we present a verification step that robustly inspects the temporal consistency between the query video and the corresponding candidate videos to further improve the accuracy of retrieval. The experimental results show that the proposed video copy detection scheme achieves high localization accuracy while performing comparably to state-of-the-art copy detection methods.
Journal of Intelligent Information Systems 02/2014; 44(1):133-158. DOI:10.1007/s10844-014-0332-5 · 0.63 Impact Factor
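As a hedged illustration of the general TF-IDF weighted bag-of-features idea mentioned above (not the authors' exact method, which also involves tree-structured indexing and temporal verification), the following Python sketch weights visual-word histograms by TF-IDF and ranks database frames against a query frame by cosine similarity. Frames are represented as lists of visual-word IDs; all function names are illustrative.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # docs: list of visual-word ID lists, one list per video frame.
    n = len(docs)
    df = Counter(w for doc in docs for w in set(doc))  # document frequency
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        # Weight each word by term frequency times inverse document frequency.
        vecs.append({w: (tf[w] / len(doc)) * math.log(n / df[w]) for w in tf})
    return vecs

def cosine(a, b):
    # Cosine similarity between two sparse TF-IDF vectors (dicts).
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def vote(query_vec, db_vecs):
    # Rank database frames by similarity to the query; return the best match.
    scores = [(i, cosine(query_vec, v)) for i, v in enumerate(db_vecs)]
    return max(scores, key=lambda s: s[1])
```

The IDF term down-weights visual words that occur in many frames, so voting is dominated by distinctive words, which is what makes the scheme robust to common background content.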
ABSTRACT: Cloud computing has recently attracted great attention, both commercially and academically. MapReduce is a popular programming model for distributed storage and computation in the cloud. In this paper, we survey cloud-based multimedia applications, identifying the open issues and challenges which arise when MapReduce is used for cloud computing. Copyright © 2011 John Wiley & Sons, Ltd.
Concurrency and Computation: Practice and Experience 12/2012; 24(17). DOI:10.1002/cpe.1846 · 0.78 Impact Factor
Conference Paper: Distributed Multimedia Content Analysis with MapReduce
ABSTRACT: This paper introduces a scalable solution for distributing content-based video analysis tasks using the emerging MapReduce programming model. Scalable and efficient solutions are needed for this type of task, as the amount of multimedia content is growing at an increasing rate. We present a novel implementation utilizing the popular Apache Hadoop MapReduce framework for both analysis job scheduling and video data distribution. We employ face detection as a case example because it represents a popular visual content analysis task. The main contribution of this paper is the performance evaluation of distribution models for video content processing in various configurations. In our experiments, we compared the performance of our video data distribution method against two alternative solutions on a seven-node cluster. Hadoop's performance overhead in video content analysis was also evaluated. We found Hadoop to be a data-efficient solution with minimal computational overhead for the face detection task.
24th Annual IEEE International Symposium on Personal, Indoor and Mobile Radio Communications; 01/2013
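As a loose illustration of one ingredient of such video data distribution models (hypothetical, not the paper's Hadoop implementation), the sketch below splits a video into fixed-length frame segments and assigns them round-robin to worker nodes, so each node can run an analysis task such as face detection independently on its segments.

```python
def split_segments(num_frames, seg_len):
    # Divide a video of num_frames frames into fixed-length segments,
    # returned as (start, end) frame ranges; the last may be shorter.
    return [(s, min(s + seg_len, num_frames)) for s in range(0, num_frames, seg_len)]

def assign_round_robin(segments, nodes):
    # Distribute segments across worker nodes for independent analysis.
    plan = {node: [] for node in nodes}
    for i, seg in enumerate(segments):
        plan[nodes[i % len(nodes)]].append(seg)
    return plan

if __name__ == "__main__":
    segs = split_segments(10, 4)          # [(0, 4), (4, 8), (8, 10)]
    print(assign_round_robin(segs, ["node1", "node2"]))
```

In a real Hadoop deployment the framework's input splits and data locality would replace this explicit round-robin step; the sketch only shows the partitioning idea being evaluated.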