Conference Paper

Video2Cartoon: generating 3D cartoon from broadcast soccer video.

DOI: 10.1145/1101149.1101184 Conference: Proceedings of the 13th ACM International Conference on Multimedia, Singapore, November 6-11, 2005
Source: DBLP

ABSTRACT In this demonstration, we present a prototype system for generating a 3D cartoon from broadcast soccer video. The system combines computer vision (CV) and computer graphics (CG) techniques to give users an experience that cannot be obtained from the original video. First, CV techniques are used to obtain the 3D positions of the players and the ball. Then, CG techniques are applied to model the playfield, players, and ball. Finally, the 3D cartoon is generated. The system allows users to watch the game from any viewpoint using an OpenGL-based 3D viewer.
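The CV step can be pictured with a minimal sketch: assuming known pitch landmarks, a planar homography maps detected player positions from image pixels to playfield coordinates, which a renderer (such as the OpenGL viewer mentioned above) could then use to place 3D models. The landmark values, function names, and overall structure below are illustrative assumptions, not the authors' implementation; the ball, which leaves the ground plane, would need additional height estimation.

```python
# Minimal sketch (assumed, not the paper's code): map image-space player
# detections onto the playfield plane with a homography, then hand the
# recovered positions to a CG renderer.
import numpy as np
import cv2

# Four pitch landmarks in metres and their pixel positions in one frame
# (values invented for illustration).
pitch_pts = np.array([[0, 0], [105, 0], [105, 68], [0, 68]], dtype=np.float32)
image_pts = np.array([[120, 600], [1800, 620], [1500, 200], [300, 190]], dtype=np.float32)

# Homography from the image plane to the (planar) playfield.
H, _ = cv2.findHomography(image_pts, pitch_pts)

def image_to_pitch(points_px: np.ndarray) -> np.ndarray:
    """Map Nx2 image points (e.g. player foot positions) to pitch coordinates."""
    pts = points_px.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# Two detected player foot points in pixels -> positions on the pitch.
feet_px = np.array([[900.0, 500.0], [1100.0, 450.0]])
for x, y in image_to_pitch(feet_px):
    # A renderer would place a player model at (x, y, 0); the ball would need
    # a separately estimated height z.
    print(f"place player model at ({x:.1f} m, {y:.1f} m, 0.0 m)")
```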

Related publications:

  • ABSTRACT: Commercial applications of video analysis are becoming increasingly valuable with the development of digital television. People can easily record all kinds of programs and enjoy the videos in their leisure time. Among these programs, broadcast sports videos are usually more tedious than others, since they contain not only the main game but also breaks and commercials; even the main game includes periods that are not exciting enough for the audience. Therefore, a considerable amount of research focuses on automatically annotating semantic concepts in sports videos and providing an engaging way to browse them. In this chapter, we briefly introduce related work on video analysis for different kinds of sports and propose a generic framework for sports video annotation. We elaborate on state-of-the-art techniques for sports video analysis: visual and audio information is used to extract mid-level features, and different models for semantic annotation are explained with practical examples. We also discuss applications of sports video analysis from the viewpoints of the audience, professional athletes, and advertisers.
    11/2011: pages 413-441.
  • ABSTRACT: In this work, we develop an augmented-reality sports broadcasting application for an enhanced end-user experience. The proposed system consists of three major steps. In the first step, each player is detected using the AdaBoost algorithm. In the second step, the same algorithm is used to detect the face in each player image. In the third step, a robust face recognition algorithm matches each player's face against an online database of player face images, which also stores each player's statistics. The application can show users the statistics of players captured in a still image taken with a camera or smartphone; useful statistics include the player's name, height, age, and record in the specific game. For player detection and the subsequent face detection, we use Haar-like features with the AdaBoost algorithm for both feature selection and classification. The face recognition system uses AdaBoost with Linear Discriminant Analysis (LDA) as a weak learner for feature selection in the LDA subspace, while classification is performed with a classic nearest-center classifier. Detailed experimental results are reported on a general player face database as well as on real baseball game images containing different numbers of players under various poses and lighting conditions. (A simplified sketch of this detect-then-recognize chain appears after this list.)
  • ABSTRACT: With advances in broadcasting technologies, people are now able to watch videos on devices such as televisions, computers, and mobile phones. Scalable video provides bitstreams of different sizes for different transmission bandwidths. In this paper, a semantic scalability scheme with four levels is proposed, and tennis videos are used as examples in experiments to test the scheme. Rather than detecting shot categories to determine suitable scaling options for Scalable Video Coding (SVC), as in previous studies, the proposed method analyzes a video, transmits video content according to semantic priority, and reintegrates the extracted content at the receiver. The smaller bitstream is achieved by discarding video content of low semantic importance instead of decreasing the video quality. The experimental results show that visual quality is maintained despite the reduced bitstream size. Further, in a user study, evaluators rated the visual quality as more acceptable and the video information as clearer than with SVC. Finally, we suggest that the proposed scalability scheme in the semantic domain, which provides a new dimension for scaling videos, can be extended to various video categories. (A priority-based selection sketch appears after this list.) Keywords: content adaptive; scalable video; video rendering; video analysis; scalable video coding.
    Multimedia Tools and Applications 07/2012; DOI: 10.1007/s11042-010-0685-x
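For the augmented-reality broadcasting item above, a simplified stand-in for the detect-then-recognize chain is sketched below: OpenCV's Haar cascade (an AdaBoost classifier over Haar-like features) detects faces, a plain LDA projection stands in for the boosted LDA feature selection described in the abstract, and a nearest-centroid classifier assigns identities. The database, image size, and function names are assumptions for illustration, not the authors' implementation.

```python
# Simplified sketch (assumed): Haar-cascade face detection + LDA-subspace
# nearest-centroid recognition against an enrolled player database.
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import NearestCentroid

# Haar cascade shipped with OpenCV (AdaBoost over Haar-like features).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(image_bgr: np.ndarray):
    """Return face bounding boxes (x, y, w, h) found in a broadcast frame."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def train_recognizer(X: np.ndarray, y: np.ndarray):
    """Fit LDA + nearest-centroid on flattened, aligned face crops X with labels y."""
    lda = LinearDiscriminantAnalysis()
    features = lda.fit_transform(X, y)        # project into the LDA subspace
    clf = NearestCentroid().fit(features, y)  # classic nearest-center classifier
    return lda, clf

def identify(face_crop_gray: np.ndarray, lda, clf, size=(64, 64)):
    """Match one detected face against the enrolled players."""
    vec = cv2.resize(face_crop_gray, size).flatten()[np.newaxis, :]
    return clf.predict(lda.transform(vec))[0]
```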
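For the semantic scalability item, one illustrative reading of "transmit video content according to semantic priority" is a greedy selection that keeps the semantically most important content within a bandwidth budget and drops the rest, rather than lowering overall quality. The segment names, priorities, and sizes below are invented; the paper's four-level scheme is not reproduced here.

```python
# Illustrative sketch (assumed, not the paper's scheme): keep the highest-priority
# content that fits the bandwidth budget; drop low-priority content instead of
# reducing quality.
from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    name: str
    priority: int   # 1 = most important semantically (e.g. players, ball)
    bits: int       # encoded size of this content

def select_for_budget(segments: List[Segment], budget_bits: int) -> List[Segment]:
    """Greedily keep segments in priority order until the budget is spent."""
    chosen, used = [], 0
    for seg in sorted(segments, key=lambda s: s.priority):
        if used + seg.bits <= budget_bits:
            chosen.append(seg)
            used += seg.bits
    return chosen

frame = [
    Segment("players_and_ball", 1, 300_000),
    Segment("court_lines", 2, 100_000),
    Segment("scoreboard_overlay", 3, 50_000),
    Segment("background_crowd", 4, 600_000),
]

# At a low bandwidth the low-priority background is dropped; the receiver
# reintegrates the transmitted content over a synthesized background.
kept = select_for_budget(frame, budget_bits=500_000)
print([s.name for s in kept])  # ['players_and_ball', 'court_lines', 'scoreboard_overlay']
```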