Article

Multiview Video Coding Using View Interpolation and Color Correction

Universal Media Res. Center, Tokyo
IEEE Transactions on Circuits and Systems for Video Technology, 12/2007; DOI: 10.1109/TCSVT.2007.903802
Source: IEEE Xplore

ABSTRACT In multiview video systems, neighboring views are highly correlated, so they should be exploited to compress the video efficiently. There are many approaches to doing this; however, most of them treat pictures of other views in the same way as pictures of the current view, i.e., pictures of other views are simply used as reference pictures (inter-view prediction). In this paper we introduce two approaches to improving compression efficiency. The first synthesizes pictures at a given time and position by view interpolation and uses them as reference pictures (view-interpolation prediction); that is, geometry is compensated to obtain more precise predictions. The second corrects the luminance and chrominance of other views with lookup tables to compensate for photoelectric variations among individual cameras. We implemented both ideas on top of H.264/AVC with inter-view prediction and confirmed that they work well. The experimental results show that they reduce the number of generated bits by approximately 15% with no loss in PSNR.
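
Below is a minimal sketch in Python of the lookup-table color correction described above. It assumes 8-bit YCbCr frames held in NumPy arrays; the histogram-matching strategy for building the tables and all function names are illustrative assumptions, since the abstract only says that lookup tables are used to compensate for per-camera variation.

import numpy as np

def build_lut(src_channel, ref_channel):
    """Build a 256-entry table mapping the source camera's levels onto the
    reference camera's levels (assumed strategy: histogram matching)."""
    src_hist, _ = np.histogram(src_channel, bins=256, range=(0, 256))
    ref_hist, _ = np.histogram(ref_channel, bins=256, range=(0, 256))
    src_cdf = np.cumsum(src_hist) / src_channel.size
    ref_cdf = np.cumsum(ref_hist) / ref_channel.size
    # For each source level, take the first reference level whose CDF reaches
    # the source CDF, which aligns the two channel distributions.
    return np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)

def correct_view(frame_ycbcr, luts):
    """Apply one table per component (Y, Cb, Cr) to an (H, W, 3) uint8 frame,
    compensating for photoelectric variation between individual cameras."""
    corrected = np.empty_like(frame_ycbcr)
    for c in range(3):
        corrected[..., c] = luts[c][frame_ycbcr[..., c]]
    return corrected

The corrected pictures of a neighboring view would then be placed in the reference picture list of an H.264/AVC-style coder for inter-view or view-interpolation prediction.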

  • ABSTRACT: In this paper, we first develop improved projective rectification-based view interpolation and extrapolation methods and apply them to view synthesis prediction-based multiview video coding (MVC). A geometric model for these view synthesis methods is then developed. We also propose an improved model to study the rate-distortion (R-D) performance of various practical MVC schemes, including the current joint multiview video coding standard. Experimental results show that our schemes achieve superior view synthesis results and can lead to better R-D performance in MVC. Simulation results with the theoretical models help explain the experimental results.
    IEEE Transactions on Circuits and Systems for Video Technology, 07/2011
  • ABSTRACT: View synthesis using depth maps is a well-known technique for exploiting the redundancy between multi-view videos. In this paper, we deal with the bitrates of view synthesis at the decoder side of FTV, which would use compressed depth maps and views. Both inherent depth-estimation error and coding distortion degrade synthesis quality. The focus is on reducing the bitrates required to generate a high-quality virtual view. We employ a reliable view synthesis method and compare it with the standard MPEG view synthesis software. The experimental results show that the bitrates required for synthesizing a high-quality virtual view can be reduced by using our enhanced view synthesis technique, which improves PSNR at medium bitrates. A basic depth-map warping step of this kind is sketched after this list.
    Picture Coding Symposium (PCS), 2010; 01/2011
  • ABSTRACT: To improve rendered-view quality in a 3-D video system, we propose to encode and transmit depth transition data, which represents, for each pixel in a frame, the location between two existing views where the depth corresponding to that pixel changes. Given the highly localized and nonlinear characteristics of rendered-view distortion, better coding performance can be achieved by providing this depth transition data only for subjectively important regions. In this paper, a method for applying the depth transition data in the view rendering procedure is proposed. Experimental results verify that the proposed method achieves improvements in subjective quality.
    2011 IEEE International Conference on Multimedia and Expo (ICME), 08/2011
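
The depth-map warping step that the depth-based view synthesis and rendering work above relies on can be sketched as follows (Python). Assumptions: rectified, parallel cameras so that synthesis reduces to a horizontal disparity shift, depth and baseline expressed in the same units, and a simple far-to-near painting order as a z-buffer; the function and parameter names are illustrative, not taken from the cited papers.

import numpy as np

def synthesize_view(ref_view, depth, focal_px, baseline):
    """Forward-warp a reference view to a virtual camera position.
    ref_view: (H, W, 3) image; depth: (H, W) per-pixel depth;
    focal_px: focal length in pixels; baseline: signed distance from the
    reference camera to the virtual camera."""
    h, w, _ = ref_view.shape
    virtual = np.zeros_like(ref_view)
    disparity = focal_px * baseline / np.maximum(depth, 1e-6)  # in pixels
    xs = np.arange(w)
    for y in range(h):
        target_x = np.round(xs - disparity[y]).astype(int)
        valid = (target_x >= 0) & (target_x < w)
        # Paint far pixels first so nearer pixels overwrite them (z-buffering).
        for i in np.argsort(depth[y])[::-1]:
            if valid[i]:
                virtual[y, target_x[i]] = ref_view[y, i]
    # Disoccluded (never-painted) pixels remain black; practical systems
    # inpaint them or blend a second reference view.
    return virtual

In view-synthesis prediction the resulting picture would serve as an additional reference picture, so only the residual with respect to it needs to be coded.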

K. Yamamoto