Article

Inferring Changes in Intrinsic Parameters From Motion Blur


Abstract

Estimating changes in camera parameters, such as motion, focal length and exposure time, over a single frame or a sequence of frames is an integral part of many computer vision applications. Rapid changes in these parameters often introduce motion blur into an image, which can make traditional methods of feature identification and tracking difficult. In this work we describe a method for tracking changes in two camera intrinsic parameters – the shutter angle and the scale changes brought about by changes in focal length. We also provide a method for estimating the expected accuracy of the results obtained using these methods, and evaluate how the technique performs on images with a low depth of field, which are therefore likely to contain blur other than that caused by motion.
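As a rough illustration of the scale-change idea (not the authors' algorithm), a zoom during the exposure moves a point at radius r from the principal point to roughly s*r, so the radial blur streak it leaves has length about r(s-1); measuring streak lengths therefore yields an estimate of the scale factor s. A minimal sketch in Python, assuming the streak measurements are already available:

import numpy as np

def scale_from_radial_streaks(radii, streak_lengths):
    """Estimate the scale change s of a zoom that occurred during the exposure.

    Assumes each blur streak is purely radial: a point at radius r from the
    principal point ends the exposure at radius s*r, leaving a streak of length
    approximately r*(s - 1). A least-squares fit over all measured streaks is
    used. Illustrative only; streak detection itself is not shown.
    """
    radii = np.asarray(radii, dtype=float)
    streak_lengths = np.asarray(streak_lengths, dtype=float)
    # least-squares solution of streak = r*(s - 1)  =>  s = 1 + sum(r*streak)/sum(r^2)
    return 1.0 + np.dot(radii, streak_lengths) / np.dot(radii, radii)

# toy example: points at radii 100, 200, 300 px with streaks of 5, 10, 15 px
print(scale_from_radial_streaks([100, 200, 300], [5, 10, 15]))  # ~1.05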


... However, unlike still images, video frames are vulnerable to camera motion, resulting in motion blur or rolling-shutter effects (Fig. 2). As these effects can be handled by transformations in image-plane coordinates [3,31,33], FastSurf is designed to correct the camera intrinsic matrix, which effectively changes the scale and translation of the projected image plane. The level of distortion varies from frame to frame, and thus having learnable features that can quickly correct the distortion per frame is critical. Figure 2: Samples from the ScanNet V2 [11] dataset demonstrate the negative impact of motion blur [3] and rolling-shutter effects [33]. The RGB frames (a, b) are blurry and distorted. ...
Preprint
Full-text available
We introduce FastSurf, an accelerated neural radiance field (NeRF) framework that incorporates depth information for 3D reconstruction. A dense feature grid and a shallow multi-layer perceptron are used for fast and accurate surface optimization of the entire scene. Our per-frame intrinsic refinement scheme corrects the frame-specific errors that cannot be handled by global optimization. Furthermore, FastSurf utilizes a classical real-time 3D surface reconstruction method, truncated signed distance field (TSDF) Fusion, as prior knowledge to pretrain the feature grid and accelerate the training. Quantitative and qualitative experiments comparing the performance of FastSurf with prior work indicate that our method is capable of quickly and accurately reconstructing a scene with high-frequency details. We also demonstrate the effectiveness of our per-frame intrinsic refinement and TSDF Fusion prior learning techniques via an ablation study.
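The per-frame intrinsic refinement mentioned above amounts to adjusting the scale and translation terms of the camera intrinsic matrix for each frame. A minimal sketch of what such a correction could look like; the multiplicative/additive form and the parameter names are assumptions for illustration, not FastSurf's actual implementation:

import numpy as np

def refine_intrinsics(K, d_scale, d_trans):
    """Apply a per-frame correction to a 3x3 camera intrinsic matrix K.

    d_scale = (dsx, dsy) scales the focal lengths fx, fy;
    d_trans = (dcx, dcy) shifts the principal point cx, cy.
    Both would be per-frame learnable parameters in a refinement scheme.
    """
    K_ref = np.array(K, dtype=float).copy()
    K_ref[0, 0] *= (1.0 + d_scale[0])   # fx
    K_ref[1, 1] *= (1.0 + d_scale[1])   # fy
    K_ref[0, 2] += d_trans[0]           # cx
    K_ref[1, 2] += d_trans[1]           # cy
    return K_ref

K = [[525.0, 0.0, 319.5], [0.0, 525.0, 239.5], [0.0, 0.0, 1.0]]
print(refine_intrinsics(K, d_scale=(0.01, 0.01), d_trans=(1.5, -0.8)))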
Article
Peter Hall leads the Vision and Graphics group at the University of Bath. His research interests centre on Computer Graphics applications of Computer Vision. Previous work focussed on non-photorealistic rendering from photographs and from video. Recent projects include Outdoor Asset Capture (OAK), which seeks to acquire editable three-dimensional dynamic models of outdoor objects from video and photographs, and the recognition of objects in images regardless of whether they are photographed, drawn, or painted.
Conference Paper
Full-text available
Optical flow estimation is a difficult task given real-world video footage with camera and object blur. In this paper, we combine a 3D pose-and-position tracker with an RGB sensor, allowing us to capture video footage together with 3D camera motion. We show that the additional camera motion information can be embedded into a hybrid optical flow framework by interleaving an iterative blind deconvolution and warping-based minimization scheme. Such a hybrid framework significantly improves the accuracy of optical flow estimation in scenes with strong blur. Our approach yields improved overall performance against three state-of-the-art baseline methods applied to our proposed ground truth sequences, as well as in several other real-world sequences captured by our novel imaging system.
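A minimal sketch of the interleaving described above, using off-the-shelf routines (Richardson-Lucy deconvolution and Farneback optical flow) purely as stand-ins for the paper's own blind deconvolution and warping-based minimisation, with the blur kernel assumed to be derivable from the tracked camera motion:

import cv2
import numpy as np
from skimage.restoration import richardson_lucy

def deblur_then_flow(frame0, frame1, psf):
    """One deconvolution/flow step of an interleaved scheme.

    frame0, frame1: grayscale float images in [0, 1];
    psf: blur kernel, here assumed to come from the tracked camera motion.
    The full method would alternate these two steps, re-estimating the
    (spatially varying) blur from the flow after each pass.
    """
    sharp0 = richardson_lucy(frame0, psf, num_iter=10)
    sharp1 = richardson_lucy(frame1, psf, num_iter=10)
    flow = cv2.calcOpticalFlowFarneback(
        (sharp0 * 255).astype(np.uint8), (sharp1 * 255).astype(np.uint8),
        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return sharp0, sharp1, flow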
Conference Paper
Full-text available
It is still a difficult problem to establish correspondences of feature points and to estimate view relations for multiple images of a static scene if the images have large disparities. In this paper we explore the possibility of applying a cheap, general-purpose 3D orientation sensor to improve the robustness of matching two such images. We attach a 3D orientation sensor to a camera and use the system to acquire the images. The camera orientation is obtained from the sensor. Assuming known intrinsic parameters of the camera, we need to estimate only the camera translation between the two views. Owing to the small number of parameters to be estimated, it becomes possible to apply a voting method. We show that the voting method is more robust than methods based on random sampling, especially for pairs of images for which correspondences are difficult to establish. In addition, using the known camera orientation, the images can be rectified, as if they had been taken by parallel cameras, before the candidate matches are searched for. This helps find as many correct matches as possible for pairs of images that include rotation around the camera axis. Experimental results for synthetic images as well as real images are shown.
Article
Full-text available
Due to the sequential-readout structure of a complementary metal-oxide-semiconductor (CMOS) image sensor array, each scanline of the acquired image is exposed at a different time, resulting in the so-called electronic rolling shutter that induces geometric image distortion when the object or the video camera moves during image capture. In this paper, we propose an image processing technique using a planar motion model to address the problem. Unlike previous methods that involve complex 3-D feature correspondences, a simple approach to the analysis of inter- and intraframe distortions is presented. The high-resolution velocity estimates used for restoring the image are obtained by global motion estimation, Bezier curve fitting, and local motion estimation, without resort to correspondence identification. Experimental results demonstrate the effectiveness of the algorithm.
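As a rough illustration of the kind of correction involved (not the paper's planar-motion algorithm), each image row can be shifted back by the displacement the camera accumulated between the exposure of the first row and that row. A minimal sketch, assuming a per-row horizontal velocity estimate is already available:

import numpy as np

def correct_rolling_shutter(image, vx_per_row, line_delay):
    """Undo a simple horizontal rolling-shutter skew.

    image: H x W array; vx_per_row: horizontal image velocity (px/s) for each row;
    line_delay: readout delay between consecutive rows, in seconds.
    Row r is shifted back by vx_per_row[r] * line_delay * r (integer pixels here).
    """
    corrected = np.zeros_like(image)
    for r in range(image.shape[0]):
        shift = int(round(vx_per_row[r] * line_delay * r))
        corrected[r] = np.roll(image[r], -shift)
    return corrected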
Article
Full-text available
This paper considers the explicit use of motion blur to compute the Optical Flow. In the past, many algorithms have been proposed for estimating the relative velocity from one or more images. The motion blur is generally considered an extra source of noise and is eliminated, or is assumed nonexistent. Unlike most of these approaches, it is feasible to estimate the Optical Flow map using only the information encoded in the motion blur. An algorithm that estimates the velocity vector of an image patch using the motion blur only is presented; all the required information comes from the frequency domain. The first step consists of using the response of a family of steerable filters applied on the log of the Power Spectrum in order to calculate the orientation of the velocity vector. The second step uses a technique called Cepstral Analysis. More precisely, the log power spectrum is treated as another signal and we examine the Inverse Fourier Transform of it in order to estimate the magnitude of the velocity vector. Experiments have been conducted on artificially blurred images and with real world data.
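As a rough illustration of the magnitude step described above (not the authors' full pipeline, which first finds the orientation with steerable filters), a linear motion blur of length L places periodic dips in the log power spectrum, which appear as a pronounced negative peak at quefrency L in the cepstrum. A minimal sketch for a horizontal blur; the function name and toy example are illustrative assumptions:

import numpy as np

def blur_length_from_cepstrum(image, max_len=100):
    """Estimate the length (in pixels) of a horizontal linear motion blur.

    Computes the cepstrum (inverse FFT of the log power spectrum) and returns
    the location of the deepest negative peak along the horizontal axis, which
    for an ideal box-filter blur sits at the blur length.
    """
    power = np.abs(np.fft.fft2(image)) ** 2
    cepstrum = np.real(np.fft.ifft2(np.log(power + 1e-12)))
    profile = cepstrum[0, 1:max_len]      # horizontal slice, skipping the origin
    return int(np.argmin(profile)) + 1

# toy example: blur a noise image horizontally with a 15-pixel box filter
rng = np.random.default_rng(0)
img = rng.random((256, 256))
kernel = np.full((1, 15), 1.0 / 15)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, img.shape)))
print(blur_length_from_cepstrum(blurred))  # approximately 15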
Article
Full-text available
No feature-based vision system can work unless good features can be identified and tracked from frame to frame. Although tracking itself is by and large a solved problem, selecting features that can be tracked well and correspond to physical points in the world is still hard. We propose a feature selection criterion that is optimal by construction because it is based on how the tracker works, and a feature monitoring method that can detect occlusions, disocclusions, and features that do not correspond to points in the world. These methods are based on a new tracking algorithm that extends previous Newton-Raphson style search methods to work under affine image transformations. We test performance with several simulations and experiments. (IEEE Conference on Computer Vision and Pattern Recognition, CVPR 1994, Seattle.)
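The selection criterion proposed in this paper (widely known as the Shi-Tomasi "good features to track" test) accepts an image window when the smaller eigenvalue of its 2x2 gradient structure matrix is large. A minimal sketch of that test; the 7x7 window and quality threshold are arbitrary illustrative choices, and OpenCV's cv2.goodFeaturesToTrack implements the same criterion in production form:

import numpy as np
from scipy.ndimage import convolve

def min_eigenvalue_map(gray, win_size=7):
    """Smaller eigenvalue of the 2x2 gradient structure matrix at every pixel.

    gray: 2-D float image. Gradients use central differences and the structure
    matrix is summed over a win_size x win_size window, so this is only an
    illustration of the criterion, not a tuned detector.
    """
    gy, gx = np.gradient(gray)
    win = np.ones((win_size, win_size))
    sxx = convolve(gx * gx, win)
    syy = convolve(gy * gy, win)
    sxy = convolve(gx * gy, win)
    trace, det = sxx + syy, sxx * syy - sxy * sxy
    # smaller eigenvalue of [[sxx, sxy], [sxy, syy]]
    return trace / 2.0 - np.sqrt(np.maximum(trace ** 2 / 4.0 - det, 0.0))

def good_features(gray, quality=0.05):
    """Return (row, col) pixels whose minimum eigenvalue exceeds quality * max."""
    lam = min_eigenvalue_map(gray)
    rows, cols = np.nonzero(lam > quality * lam.max())
    return list(zip(rows, cols))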
Article
Modern blockbuster movies seamlessly introduce impossible characters and action into real-world settings using digital visual effects. These effects are made possible by research from the field of computer vision, the study of how to automatically understand images. Computer Vision for Visual Effects will educate students, engineers and researchers about the fundamental computer vision principles and state-of-the-art algorithms used to create cutting-edge visual effects for movies and television. The author describes classical computer vision algorithms used on a regular basis in Hollywood (such as blue screen matting, structure from motion, optical flow and feature tracking) and exciting recent developments that form the basis for future effects (such as natural image matting, multi-image compositing, image retargeting and view synthesis). He also discusses the technologies behind motion capture and three-dimensional data acquisition. More than 200 original images demonstrating principles, algorithms and results, along with in-depth interviews with Hollywood visual effects artists, tie the mathematical concepts to real-world filmmaking.
Conference Paper
Estimating changes in camera parameters, such as motion, focal length and exposure time, over a single frame or a sequence of frames is an integral part of many computer vision applications. Rapid changes in these parameters often cause motion blur to be present in an image, which can make traditional methods of feature identification and tracking difficult. Here we present a method for estimating the scale changes brought about by a change in focal length from a single motion-blurred frame. We also use the results from two separate methods for determining the rotation of a pair of motion-blurred frames to estimate the exposure time of a frame (i.e. the shutter angle).
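The shutter-angle estimate rests on a simple ratio: if the camera rotates at a roughly constant rate, the rotation accumulated during one frame's exposure (recoverable from the blur) divided by the rotation between consecutive frames gives the fraction of the frame interval during which the shutter was open, and the shutter angle is 360 degrees times that fraction. A small sketch of that arithmetic, illustrative rather than the authors' estimator:

def shutter_angle_deg(intra_frame_rotation, inter_frame_rotation):
    """Shutter angle implied by two rotation estimates (any consistent unit).

    intra_frame_rotation: camera rotation during one frame's exposure,
                          e.g. recovered from the extent of the motion blur;
    inter_frame_rotation: rotation between two consecutive frames,
                          e.g. recovered from frame-to-frame registration.
    """
    exposure_fraction = intra_frame_rotation / inter_frame_rotation
    return 360.0 * exposure_fraction

# e.g. 0.5 deg of blur rotation within a frame, 1.5 deg between frames -> 120 deg
print(shutter_angle_deg(0.5, 1.5))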
Conference Paper
This paper extends the classical warping-based optical flow method to achieve accurate flow in the presence of spatially-varying motion blur. Our idea is to parameterize the appearance of each frame as a function of both the pixel motion and the motion-induced blur. We search for the flows that best match two consecutive frames, which amounts to finding the derivative of a blurred frame with respect to both the motion and the blur, where the blur itself is a function of the motion. We propose an efficient technique to calculate the derivatives using prefiltering. Our technique avoids performing spatially-varying filtering (which can be computationally expensive) during the optimization iterations. In the end, our derivative calculation technique can be easily incorporated with classical flow code to handle video with non-uniform motion blur with little performance penalty. Our method is evaluated on both synthetic and real videos and outperforms conventional flow methods in the presence of motion blur.
Article
High-frequency energy distributions are important characteristics of blurry images. In this paper, directional high-pass filters are proposed to analyze blurry images. Firstly, we show that the proposed directional high-pass filters can effectively estimate the motion direction of motion blurred images. A closed-form solution for motion direction estimation is derived. It achieves a higher estimation accuracy and is also faster than previous methods. Secondly, the paper suggests two important applications of the directional high-frequency energy analysis. It can be employed to identify out-of-focus blur and motion blur, and to detect motion blurred regions in observed images. Experiments on both synthetic and real blurred images are conducted. Encouraging results demonstrate the efficacy of the proposed methods.
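As a rough illustration of the direction-estimation idea (the paper derives a closed-form solution rather than this brute-force search), a linear blur acts as a low-pass filter along the motion direction, so the candidate orientation with the lowest high-frequency spectral energy indicates the blur direction. A minimal sketch, with the frequency band and angular tolerance chosen arbitrarily:

import numpy as np

def blur_direction_deg(image, n_angles=180):
    """Estimate the orientation of a linear motion blur, in degrees in [0, 180).

    A blur along spatial direction theta attenuates the high frequencies whose
    frequency vectors point along theta, so the candidate direction with the
    lowest high-frequency energy is taken as the blur direction.
    """
    h, w = image.shape
    power = np.fft.fftshift(np.abs(np.fft.fft2(image)) ** 2)
    fy, fx = np.meshgrid(np.arange(h) - h // 2, np.arange(w) - w // 2, indexing="ij")
    high = np.hypot(fx, fy) > 0.25 * min(h, w)     # high-frequency band only
    freq_angle = np.arctan2(fy, fx) % np.pi        # orientation of each frequency bin
    energies = []
    for a in np.deg2rad(np.arange(n_angles) * 180.0 / n_angles):
        diff = np.abs(((freq_angle - a) + np.pi / 2) % np.pi - np.pi / 2)
        band = high & (diff < np.deg2rad(2.0))     # bins (anti)parallel to direction a
        energies.append(power[band].sum() / max(int(band.sum()), 1))
    return float(np.argmin(energies) * 180.0 / n_angles)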
Conference Paper
We propose a novel approach to reduce spatially varying motion blur using a hybrid camera system that simultaneously captures high-resolution video at a low frame rate together with low-resolution video at a high frame rate. Our work is inspired by Ben-Ezra and Nayar (3), who introduced the hybrid camera idea for correcting global motion blur for a single still image. We broaden the scope of the problem to address spatially varying blur as well as video imagery. We also reformulate the correction process to use more information available in the hybrid camera system, as well as iteratively refine spatially varying motion extracted from the low-resolution high-speed camera. We demonstrate that our approach achieves superior results over existing work and can be extended to deblurring of moving objects.
Conference Paper
Rapid camera rotations (e.g. camera shake) are a significant problem when real-time computer vision algorithms are applied to video from a handheld or head-mounted camera. Such camera motions cause image features to move large distances in the image and cause significant motion blur. Here we propose a very fast method of estimating the camera rotation from a single frame which does not require any detection, matching or extraction of feature points and can be used as a motion estimator to reduce the search range for feature matching algorithms that may be subsequently applied to the image. This method exploits the motion blur in the frame, using features which remain sharp to rapidly compute the axis of rotation of the camera, and using blurred features to estimate the magnitude of the camera's rotation.
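As a rough back-of-the-envelope for the magnitude step (not the paper's estimator), a rotation of theta radians about an axis perpendicular to the optical axis displaces image points near the centre by roughly f*theta pixels, where f is the focal length in pixels, so blur streak lengths give the rotation accumulated during the exposure. A minimal sketch:

import numpy as np

def rotation_from_blur(streak_lengths_px, focal_length_px):
    """Approximate in-exposure rotation (radians) about an axis perpendicular
    to the optical axis, from the lengths of motion-blur streaks.

    Uses the small-angle approximation: a rotation theta moves image points
    near the centre by about focal_length_px * theta pixels.
    """
    return float(np.mean(streak_lengths_px)) / focal_length_px

# e.g. 12-pixel streaks with an 800-pixel focal length -> ~0.015 rad (~0.86 deg)
print(np.degrees(rotation_from_blur([11, 12, 13], 800.0)))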
Article
Motion blur due to camera motion can significantly degrade the quality of an image. Since the path of the camera motion can be arbitrary, deblurring of motion-blurred images is a hard problem. Previous methods to deal with this problem have included blind restoration of motion-blurred images, optical correction using stabilized lenses, and special CMOS sensors that limit the exposure time in the presence of motion. In this paper, we exploit the fundamental trade-off between spatial resolution and temporal resolution to construct a hybrid camera that can measure its own motion during image integration. The acquired motion information is used to compute a point spread function (PSF) that represents the path of the camera during integration. This PSF is then used to deblur the image. To verify the feasibility of hybrid imaging for motion deblurring, we have implemented a prototype hybrid camera. This prototype system was evaluated in different indoor and outdoor scenes using long exposures and complex camera motion paths. The results show that, with minimal resources, hybrid imaging outperforms previous approaches to the motion blur problem. We conclude with a brief discussion on how our ideas can be extended beyond the case of global camera motion to the case where individual objects in the scene move with different velocities.
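As an illustration of the PSF construction described above (Richardson-Lucy deconvolution is used here as a stand-in for the paper's restoration step), the measured motion path can be rasterised into a kernel and the image deconvolved with it. The path is assumed to be already expressed as image-plane displacements in pixels:

import numpy as np
from skimage.restoration import richardson_lucy

def psf_from_path(displacements_px, size=31):
    """Rasterise a camera motion path (list of (dx, dy) image displacements in
    pixels, sampled during the exposure) into a normalised blur kernel."""
    psf = np.zeros((size, size))
    c = size // 2
    for dx, dy in displacements_px:
        x, y = int(round(c + dx)), int(round(c + dy))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] += 1.0
    return psf / psf.sum()

def deblur(blurred, displacements_px):
    """Deblur a grayscale float image in [0, 1] with the path-derived PSF."""
    return richardson_lucy(blurred, psf_from_path(displacements_px), num_iter=30)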
Postvis
In: The VES handbook of visual effects
  • Goulekas Karen
Fast motion deblurring
  • Cho Sunghyun
  • Lee Seungyong
Digital video stabilization and rolling shutter correction using gyroscopes
  • Karpenko Alexandre
  • Jacobs David
  • Baek Jongmin
  • Levoy Marc