Model-Aided Coding: Using 3-D Scene Models in Motion-Compensated Video Coding
ABSTRACT We show that traditional waveform coding and 3-D model-based coding are not competing alternatives but should be combined to support and complement each other. Both approaches are combined such that the generality of waveform coding and the efficiency of 3-D model-based coding are available where needed. The combination is achieved by providing the block-based video coder with a second reference frame for prediction, which is synthesized by the model-based coder. Since the coding gain of this approach is directly related to the quality of the synthetic frame, we have extended the model-aided coder to exploit knowledge about illumination changes and multiple objects. Remaining model failures and objects that are not known at the decoder are handled by standard block-based motion-compensated prediction. A Lagrangian approach is employed to control the coder. Experimental results show that bit-rate savings of about 35% are achieved at equal average PSNR when comparing the model-aided codec to TMN-10, the state-of-the-art test model of the H.263 standard.
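The Lagrangian coder control mentioned above selects, per block, the prediction reference that minimizes a cost J = D + λ·R. A minimal sketch of that decision between the two reference frames; the function names, block data, and rate values are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of Lagrangian mode selection between two
# reference frames: the previous decoded frame and the frame
# synthesized by the model-based coder. All numbers are illustrative.

def sad(block_a, block_b):
    """Sum of absolute differences, used here as the distortion D."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def select_reference(block, prev_ref, synth_ref, rate_prev, rate_synth, lam):
    """Pick the reference minimizing the Lagrangian cost J = D + lam * R."""
    j_prev = sad(block, prev_ref) + lam * rate_prev
    j_synth = sad(block, synth_ref) + lam * rate_synth
    return ("previous", j_prev) if j_prev <= j_synth else ("synthesized", j_synth)

block = [10, 12, 14, 16]
prev_ref = [9, 12, 15, 18]    # prediction from the previous decoded frame
synth_ref = [10, 12, 14, 17]  # prediction from the model-synthesized frame
choice, cost = select_reference(block, prev_ref, synth_ref,
                                rate_prev=4, rate_synth=6, lam=0.5)
```

Here the synthetic reference wins despite its higher signaling rate because its much lower distortion dominates the Lagrangian cost, which is exactly the trade-off the coder control exploits.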
ABSTRACT: Known coding techniques for transmitting moving images at very low bit rates are explained by the source models on which these coding techniques are based. It is shown that with motion-compensated hybrid coding, object-based analysis-synthesis coding, knowledge-based coding, and semantic coding, there is a consistent development of source models. In consequence, these coding techniques can be combined in a layered coding system. From experimental results obtained for object-based analysis-synthesis coding, estimates for the coding efficiency of such a layered coding system are derived using head-and-shoulder video telephone test sequences. It is shown that an additional compression factor of about 3 can be expected with such a complex layered coding system when compared to block-based hybrid coding.
Signal Processing: Image Communication, 11/1995; DOI:10.1016/0923-5965(95)00010-5
Conference Paper: Rate-distortion-efficient video compression using a 3-D head model
ABSTRACT: In this paper we combine model-based video synthesis with block-based motion-compensated prediction (MCP). Two frames are utilized for prediction: one is the previous decoded frame and the other is provided by a model-based coder. The approach is integrated into an H.263-based video codec. Rate-distortion optimization is employed for the coding control. Hence, the coding efficiency does not drop below that of H.263 even if the model-based coder cannot describe the current scene. On the other hand, if the objects in the scene correspond to the model-based coder, significant gains in coding efficiency can be obtained compared to TMN-10, the test model of the H.263 standard. This is verified by experiments with natural head-and-shoulder sequences. Bit-rate savings of about 35% are achieved at equal average PSNR. When encoding at equal bit-rate, significant improvements in terms of subjective quality are visible.
Proceedings of the 1999 International Conference on Image Processing (ICIP 99); 02/1999
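The equal-PSNR comparisons reported above use the standard peak signal-to-noise ratio for 8-bit video, PSNR = 10·log10(255² / MSE). A small self-contained sketch; the sample values are illustrative, not from the paper:

```python
import math

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two sample sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)

# Illustrative 8-bit samples with an MSE of 1.0, giving about 48.13 dB.
val = psnr([100, 110, 120, 130], [101, 109, 121, 129])
```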
ABSTRACT: IMDSP Workshop '98, pp. 119-122, Alpbach, July 1998. In this paper we describe a model-based algorithm for the estimation of photometric properties in a scene recorded with a video camera. We focus on the coding of head-and-shoulder scenes at very low data rates of about 1 kbit/s. Facial animation parameters specifying facial expressions are estimated from video sequences and transmitted to the decoder. There, the sequence is reconstructed by rendering a 3-D head model that is animated according to the facial parameters. We show in this paper that the quality of the decoded images and the robustness of the motion estimation can be improved by considering photometric effects. An illumination model based on Lambert reflection of directional colored light is added to the virtual scene and adapted to the current illumination condition. Experimental results show an improvement of about 1.4 dB in PSNR in comparison to simple ambient illumination models.
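The illumination model described above combines ambient light with Lambert reflection of one directional colored light, i.e. per channel I = albedo·(ambient + directional·max(0, n·l)). A minimal shading sketch under that assumption; all vectors, light values, and names are illustrative, not the paper's estimated parameters:

```python
import math

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambert_shade(albedo, normal, light_dir, ambient, directional):
    """Per-channel intensity: albedo * (ambient + directional * max(0, n.l))."""
    n = normalize(normal)
    l = normalize(light_dir)
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(albedo[c] * (ambient[c] + directional[c] * ndotl)
                 for c in range(3))

# Surface facing the light head-on (n.l = 1), illustrative RGB values.
color = lambert_shade(albedo=(0.8, 0.6, 0.5),
                      normal=(0.0, 0.0, 1.0),
                      light_dir=(0.0, 0.0, 1.0),
                      ambient=(0.2, 0.2, 0.2),
                      directional=(0.7, 0.7, 0.7))
```

In the paper's setting, the ambient and directional terms would be adapted to the current illumination condition of the video rather than fixed as here.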