Conference Paper

Video2Cartoon: generating 3D cartoon from broadcast soccer video.

DOI: 10.1145/1101149.1101184. In: Proceedings of the 13th ACM International Conference on Multimedia, Singapore, November 6-11, 2005
Source: DBLP

ABSTRACT In this demonstration, a prototype system for generating 3D cartoon from broadcast soccer video is proposed. The system combines computer vision (CV) and computer graphics (CG) techniques to give users an experience that cannot be obtained from the original video. First, CV techniques are used to recover the 3D positions of the players and the ball. Then, CG techniques are applied to model the playfield, players, and ball. Finally, the 3D cartoon is generated. The system allows users to watch the game from any point of view using an OpenGL-based 3D viewer.
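A common way to realise the first step (recovering 3D positions from a single broadcast camera) is to estimate a homography between the image and a pitch model, then assume the players stand on the ground plane. The sketch below illustrates that idea only; the function name and the NumPy formulation are my assumptions, not details taken from the paper.

```python
import numpy as np

def image_to_pitch(H, points_px):
    """Back-project image points onto the pitch ground plane.

    H: 3x3 homography mapping homogeneous image coordinates to pitch
    coordinates (e.g. metres). Assumes the points lie on the ground
    plane (z = 0), which is reasonable for players' feet but only an
    approximation for the ball. Illustrative sketch, not the paper's
    actual method.
    """
    pts = np.asarray(points_px, dtype=float)        # (N, 2) pixel coords
    ones = np.ones((len(pts), 1))
    homog = np.hstack([pts, ones])                  # homogeneous (N, 3)
    mapped = homog @ H.T                            # apply homography
    xy = mapped[:, :2] / mapped[:, 2:3]             # dehomogenise
    z = np.zeros((len(pts), 1))                     # ground-plane assumption
    return np.hstack([xy, z])                       # (N, 3) pitch coords
```

With an identity homography the pixel coordinates pass through unchanged and simply gain z = 0; in practice H would be estimated from playfield line correspondences.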

  • ABSTRACT: Cartoons play important roles in many areas, but producing new cartoon clips requires a great deal of labor. In this paper, we propose a gesture recognition method for cartoon character images with two applications, namely content-based cartoon image retrieval and cartoon clip synthesis. We first define Edge Features (EF) and Motion Direction Features (MDF) for cartoon character images. The features are classified into two groups, namely intra-features and inter-features. An Unsupervised Bi-Distance Metric Learning (UBDML) algorithm is proposed to recognize the gestures of cartoon character images. Unlike previous work on distance metric learning, UBDML learns the optimal distance metric from the heterogeneous distance metrics derived from intra-features and inter-features. Content-based cartoon character image retrieval and cartoon clip synthesis can then be carried out using the distance metric learned by UBDML. Experiments show that the cartoon character image retrieval achieves high precision and that cartoon clip synthesis can be carried out efficiently.
    Proceedings of the 17th International Conference on Multimedia, Vancouver, British Columbia, Canada, October 19-24, 2009.
  • ABSTRACT: In this paper, a novel method called fuzzy diffusion maps (FDM) is proposed to evaluate cartoon similarity, which is critical to applications such as cartoon recognition, cartoon clustering, and cartoon reusing. We find that features from heterogeneous sources have different influence on cartoon similarity estimation. To take all the features into consideration, a fuzzy consistent relation is presented to convert the preference order of the features into preference degrees, from which the weights are calculated. Based on the features and weights, the sum of squared differences (L2) can be calculated between any two cartoon data. However, research has demonstrated that cartoon datasets lie on a low-dimensional manifold, on which the L2 distance cannot evaluate similarity directly. Unlike the global geodesic distance preserved in Isomap, the local neighboring relationship preserved in Locally Linear Embedding, and the local similarities of neighboring points preserved in Laplacian Eigenmaps, the diffusion maps we adopt preserve a diffusion distance that sums over all paths of a given length connecting two data points. As a consequence, this diffusion distance is very robust to noise perturbation. Our experiment in cartoon classification using Receiver Operating Characteristic curves shows the fuzzy consistent relation's excellent performance on weight assignment. FDM's performance on cartoon similarity evaluation is tested in experiments on cartoon recognition and clustering. The results show that FDM can evaluate cartoon similarity more precisely and stably than other methods.
    Journal of Computer Science and Technology, 26:203-216, 2011.
  • ABSTRACT: Commercial applications of video analysis are becoming increasingly valuable with the development of digital television. People can easily record all kinds of programs and enjoy the videos in their leisure time. Among these programs, broadcast sports videos are usually more tedious than others, since they involve not only the main games but also break time and commercials. Even the main games contain periods that are not exciting enough for the audience. Therefore, a considerable amount of research focuses on automatically annotating semantic concepts in sports videos and providing an engaging way to browse them. In this chapter, we briefly introduce related work on video analysis for different kinds of sports and propose a generic framework for sports video annotation. We elaborate on state-of-the-art techniques for sports video analysis. Visual and audio information is utilized to extract mid-level features, and different models for semantic annotation are expounded with practical examples. We also expand on applications of sports video analysis from the viewpoints of the audience, professional athletes, and advertisers.
    Book chapter, 11/2011, pages 413-441.
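The diffusion distance that the FDM abstract above builds on can be illustrated in isolation. This is a minimal sketch of plain diffusion maps only (not the fuzzy weighting scheme, which the abstract does not specify): build a Gaussian affinity matrix, row-normalise it into a Markov transition matrix, and compare t-step transition profiles weighted by the stationary distribution. All names and parameter choices here are illustrative assumptions.

```python
import numpy as np

def diffusion_distances(X, sigma=1.0, t=1):
    """Pairwise diffusion distances after t steps of a random walk.

    Standard diffusion-maps construction: Gaussian affinities W,
    Markov matrix P = D^-1 W, and
    D_t(i, j)^2 = sum_k (P^t[i,k] - P^t[j,k])^2 / pi[k],
    where pi is the walk's stationary distribution.
    """
    X = np.asarray(X, dtype=float)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    W = np.exp(-sq / (2 * sigma ** 2))                   # Gaussian affinity
    d = W.sum(axis=1)
    P = W / d[:, None]                                   # row-stochastic Markov matrix
    Pt = np.linalg.matrix_power(P, t)                    # t-step transitions
    pi = d / d.sum()                                     # stationary distribution
    n = len(X)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            D[i, j] = np.sqrt((((Pt[i] - Pt[j]) ** 2) / pi).sum())
    return D
```

Because the distance compares whole transition profiles rather than a single shortest path, two points connected by many short paths come out close, which is the robustness-to-noise property the abstract refers to.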
