JPEG2000 Based Scalable Summary for Remote Video Content Browsing and Efficient Semantic Structure Understanding.
ABSTRACT This paper presents a new method for remote and interactive browsing of long video sequences. The solution is based on interactive navigation within a scalable mega image built from a JPEG2000-coded, keyframe-based video summary. The presented system is compliant with the new JPEG2000 Part 9 "JPIP – JPEG2000 Interactivity, API and Protocol," which lends itself to working under varying channel conditions such as wireless networks. The flexibility offered by JPEG2000 allows the application to interactively highlight keyframes corresponding to the desired content, first within a low-quality, low-resolution version of the full video summary. It then offers fine-grained scalability that lets a user navigate and zoom in to particular scenes or events represented by the keyframes. The ability to visualise keyframes of interest and play back the corresponding video shots within the context of the whole sequence enables the user to understand the temporal relations between semantically similar events, i.e. it offers a new way to analyse long video sequences.
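The mega-image idea above can be sketched in a few lines: keyframes are tiled into one large image, and JPEG2000 resolution levels expose that image at progressively halved dimensions, so a client can fetch a coarse overview first and then refine only the keyframes of interest. The function names and grid layout below are illustrative assumptions, not the authors' actual implementation, and the JPIP transport itself is omitted.

```python
# Sketch (assumed layout): keyframes tiled row-major into a mega image,
# with regions mapped to JPEG2000 resolution levels (each level halves
# both dimensions, i.e. an overall factor of 2**level).

def tile_origin(index, tile_w, tile_h, cols):
    """Top-left corner of keyframe `index` in the full-resolution mega image."""
    row, col = divmod(index, cols)
    return col * tile_w, row * tile_h

def region_at_level(x, y, w, h, level):
    """Map a full-resolution region to its size at resolution level `level`."""
    f = 2 ** level
    return x // f, y // f, max(1, w // f), max(1, h // f)

# Example: keyframe 7 in a 4-column grid of 320x240 tiles, viewed 2 levels down.
x, y = tile_origin(7, 320, 240, 4)          # (960, 240)
print(region_at_level(x, y, 320, 240, 2))   # (240, 60, 80, 60)
```

A JPIP client would request exactly such a reduced region (via resolution level and region-of-interest parameters) rather than the whole codestream, which is what makes the summary usable over constrained links.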
- Available from: Sean Marlow
ABSTRACT: In this paper we present a variety of browsing interfaces for digital video information. The six interfaces are implemented on top of Físchlár, an operational recording, indexing, browsing and playback system for broadcast TV programmes. In developing the six browsing interfaces, we have been informed by the various dimensions which can be used to distinguish one interface from another. These include layeredness (the number of “layers” of abstraction which can be used in browsing a programme), the provision or omission of temporal information (varying from full timestamp information to no time information at all), and the visualisation of spatial vs. temporal aspects of the video. After introducing and defining these dimensions, we locate some common browsing interfaces from the literature in this 3-dimensional “space”, and then locate our own six interfaces in the same space. We then present an outline of the interfaces and include some user feedback.
12/1999: pages 206-218.
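The three dimensions this abstract defines (layeredness, temporal information, spatial vs. temporal emphasis) can be treated as coordinates, so interfaces become comparable points in a 3-D space. The axis encodings and example placements below are illustrative assumptions, not the paper's actual coordinates.

```python
# Sketch (assumed encoding) of the 3-D browsing-interface space.
from typing import NamedTuple

class InterfacePoint(NamedTuple):
    name: str
    layeredness: int         # number of abstraction layers offered
    temporal_info: float     # 0.0 = no time info, 1.0 = full timestamps
    spatial_emphasis: float  # 0.0 = purely temporal view, 1.0 = purely spatial

interfaces = [
    InterfacePoint("flat storyboard", 1, 0.0, 1.0),
    InterfacePoint("timeline scrubber", 1, 1.0, 0.0),
    InterfacePoint("layered keyframe browser", 3, 0.5, 0.7),
]

def distance(a: InterfacePoint, b: InterfacePoint) -> float:
    """Euclidean distance between two interfaces in the 3-D space."""
    return ((a.layeredness - b.layeredness) ** 2
            + (a.temporal_info - b.temporal_info) ** 2
            + (a.spatial_emphasis - b.spatial_emphasis) ** 2) ** 0.5
```

Placing both the literature's interfaces and one's own designs in such a space makes gaps and near-duplicates among interface designs easy to spot.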
Conference Paper: Generation of interactive multi-level video summaries.
ABSTRACT: In this paper, we describe how a detail-on-demand representation for interactive video is used in video summarization. Our approach automatically generates a hypervideo composed of multiple video summary levels and navigational links between these summaries and the original video. Viewers may interactively select the amount of detail they see, access more detailed summaries, and navigate to the source video through the summary. We created a representation for interactive video that supports a wide range of interactive video applications, and Hyper-Hitchcock, an editor and player for this type of interactive video. Hyper-Hitchcock employs methods to determine (1) the number and length of levels in the hypervideo summary, (2) the video clips for each level in the hypervideo, (3) the grouping of clips into composites, and (4) the links between elements in the summary. These decisions are based on an inferred quality of video segments and the temporal relations between those segments.
Proceedings of the Eleventh ACM International Conference on Multimedia, Berkeley, CA, USA, November 2-8, 2003; 01/2003.
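The multi-level structure this abstract describes can be sketched as a small data model: several summary levels of increasing length, each clip carrying a link back to its source interval. The greedy quality-based selection below is a stand-in for Hyper-Hitchcock's actual level-construction method, which the abstract does not fully specify; all names are assumptions.

```python
# Sketch (assumed model) of a detail-on-demand hypervideo summary.
from dataclasses import dataclass, field

@dataclass
class Clip:
    start: float    # seconds into the source video
    end: float
    quality: float  # inferred segment quality used to build the levels

@dataclass
class Hypervideo:
    source_length: float
    levels: list = field(default_factory=list)  # levels[0] = shortest summary

    def build_levels(self, clips, level_budgets):
        """Fill each level with the highest-quality clips fitting its budget."""
        ranked = sorted(clips, key=lambda c: c.quality, reverse=True)
        self.levels = []
        for budget in level_budgets:
            chosen, used = [], 0.0
            for c in ranked:
                d = c.end - c.start
                if used + d <= budget:
                    chosen.append(c)
                    used += d
            # restore playback order within the level
            self.levels.append(sorted(chosen, key=lambda c: c.start))

    def navigate_to_source(self, level, idx):
        """Follow the link from a summary clip back to its source interval."""
        c = self.levels[level][idx]
        return (c.start, c.end)
```

A viewer wanting more detail simply switches to a higher-indexed (longer) level, and `navigate_to_source` models the navigational link from any summary clip into the original video.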
ABSTRACT: Recent advances in digital video compression and networks have made video more accessible than ever. However, existing content-based video retrieval systems still suffer from the following problems: 1) the semantics-sensitive video classification problem, caused by the semantic gap between low-level visual features and high-level semantic visual concepts; and 2) the integrated video access problem, caused by the lack of efficient video database indexing, automatic video annotation, and concept-oriented summary organization techniques. In this paper, we propose a novel framework, called ClassView, to make some advances toward more efficient video database indexing and access. 1) A hierarchical semantics-sensitive video classifier is proposed to shorten the semantic gap. The hierarchical tree structure of the semantics-sensitive video classifier is derived from the domain-dependent concept hierarchy of video contents in a database. Relevance analysis is used to select the discriminating visual features with suitable importance weights. The Expectation-Maximization (EM) algorithm is also used to determine the classification rule for each visual concept node in the classifier. 2) A hierarchical video database indexing and summary presentation technique is proposed to support more effective video access over a large-scale database. The hierarchical tree structure of our video database indexing scheme is determined by the domain-dependent concept hierarchy, which is also used for video classification. The presentation of the visual summary is also integrated with the inherent hierarchical video database indexing tree structure. Integrating video access with an efficient database indexing tree structure provides great opportunities for supporting more powerful video search engines.
IEEE Transactions on Multimedia, 03/2004.
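The core indexing idea here, a domain concept hierarchy that doubles as the database index, can be sketched as a tree whose nodes both route queries and hold indexed videos. The tree contents and routing rule below are illustrative assumptions; they do not reproduce ClassView's EM-based classifier or feature relevance analysis.

```python
# Sketch (assumed structure) of a concept hierarchy used as a video index.
class ConceptNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.videos = []  # video ids indexed directly under this concept

    def insert(self, video_id, concept_path):
        """Index a video under the concept named by a path of concept names."""
        if not concept_path:
            self.videos.append(video_id)
            return
        for child in self.children:
            if child.name == concept_path[0]:
                child.insert(video_id, concept_path[1:])
                return
        raise KeyError(f"unknown concept: {concept_path[0]}")

    def lookup(self, concept_path):
        """Return all videos indexed at or below the named concept."""
        if concept_path:
            for child in self.children:
                if child.name == concept_path[0]:
                    return child.lookup(concept_path[1:])
            raise KeyError(f"unknown concept: {concept_path[0]}")
        found = list(self.videos)
        for child in self.children:
            found.extend(child.lookup([]))
        return found
```

Because the same tree drives classification and indexing, a query at an inner concept node naturally returns everything its sub-concepts contain, which is what lets summary presentation follow the hierarchy too.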