Conference Paper

High-Speed Periodic Motion Reconstruction Using an Off-the-shelf Camera with Compensation for Rolling Shutter Effect


Abstract

In recent years, high-speed signal reconstruction with sub-Nyquist sampling has attracted the attention of researchers in the signal processing field. Nonetheless, such methods have been limited either by the need for multiple cameras or by a reliance on newly designed imaging hardware. In this paper, we propose a method for reconstructing high-speed periodic motion by randomly delaying the camera exposure, which allows a conventional off-the-shelf camera to be used. In addition, while reconstructing the high-speed periodic motion, the proposed method compensates for the rolling shutter effect, which is unavoidable when the camera's image sensor is a complementary metal-oxide semiconductor (CMOS) sensor. Extensive comparative experiments validate the proposed method, which shows promising performance in terms of reconstruction error and effective compensation of the rolling shutter effect.
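To make the acquisition model concrete, the sketch below is a rough, hypothetical illustration rather than the authors' algorithm: it simulates randomly delayed exposures at a low frame rate and records the per-scanline capture times that a rolling-shutter readout adds. All parameter values are invented for the example.

```python
import numpy as np

# All numbers here are illustrative assumptions, not values from the paper.
f_signal = 240.0               # frequency of the periodic motion (Hz)
fps = 30.0                     # nominal frame rate of the off-the-shelf camera
n_frames = 64                  # number of captured frames
n_rows = 480                   # sensor rows, read out sequentially (rolling shutter)
t_row = 1.0 / (fps * n_rows)   # assumed per-row readout interval

rng = np.random.default_rng(0)

# Randomly delay each exposure within its frame period (irregular, sub-Nyquist sampling).
frame_starts = np.arange(n_frames) / fps
delays = rng.uniform(0.0, 1.0 / fps, size=n_frames)
exposure_times = frame_starts + delays

# Rolling shutter: row r of every frame is captured t_row * r later than row 0.
rows = np.arange(n_rows)
sample_times = exposure_times[:, None] + t_row * rows[None, :]   # shape (n_frames, n_rows)

# Toy 1-D periodic signal standing in for the pixel intensity of the moving object.
observations = np.sin(2 * np.pi * f_signal * sample_times)
```

Because the delays are random, the effective sample times spread over the motion's period even though the frame rate is far below the Nyquist rate, which is what makes a sparse reconstruction of the periodic motion feasible once the per-row timing offsets are accounted for.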

References
Article
Full-text available
We show that, via temporal modulation, one can observe and capture a high-speed periodic video well beyond the abilities of a low-frame-rate camera. By strobing the exposure with unique sequences within the integration time of each frame, we take coded projections of dynamic events. From a sequence of such frames, we reconstruct a high-speed video of the high-frequency periodic process. Strobing is used in entertainment, medical imaging, and industrial inspection to generate lower beat frequencies. But this is limited to scenes with a detectable single dominant frequency and requires high-intensity lighting. In this paper, we address the problem of sub-Nyquist sampling of periodic signals and show designs to capture and reconstruct such signals. The key result is that for such signals, the Nyquist rate constraint can be imposed on the strobe rate rather than the sensor rate. The technique is based on intentional aliasing of the frequency components of the periodic signal while the reconstruction algorithm exploits recent advances in sparse representations and compressive sensing. We exploit the sparsity of periodic signals in the Fourier domain to develop reconstruction algorithms that are inspired by compressive sensing.
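The core idea here, intentional aliasing combined with Fourier-domain sparsity, can be illustrated with a minimal sketch. The orthogonal matching pursuit routine below is a stand-in for the paper's compressive-sensing reconstruction, and all sizes and frequencies are made up for the example.

```python
import numpy as np

def omp(A, y, k):
    """Plain orthogonal matching pursuit: greedily pick k columns of A to explain y."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(1)
N = 256                                   # reconstruction grid over one period
m = 48                                    # number of sub-Nyquist measurements
spectrum = np.zeros(N, dtype=complex)
spectrum[[3, 17, 40]] = [1.0, 0.6, 0.3]   # sparse Fourier content of the periodic signal

t = rng.uniform(0.0, 1.0, size=m)         # irregular sample times emulate intentional aliasing
A = np.exp(2j * np.pi * t[:, None] * np.arange(N)[None, :]) / np.sqrt(N)
y = A @ spectrum                          # observed samples

recovered = omp(A, y, k=3)
print(np.flatnonzero(np.abs(recovered) > 1e-6))   # typically reports the active frequencies
```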
Conference Paper
Full-text available
We describe an imaging architecture for compressive video sensing termed programmable pixel compressive camera (P2C2). P2C2 allows us to capture fast phenomena at frame rates higher than that of the camera sensor. In P2C2, each pixel has an independent shutter that is modulated at a rate higher than the camera frame rate. The observed intensity at a pixel is an integration of the incoming light modulated by its specific shutter. We propose a reconstruction algorithm that uses the data from P2C2 along with additional priors about videos to perform temporal superresolution. We model the spatial redundancy of videos using sparse representations and the temporal redundancy using brightness constancy constraints inferred via optical flow. We show that by modeling such spatio-temporal redundancies in a video volume, one can faithfully recover the underlying high-speed video frames from the observed low-speed coded video. The imaging architecture and the reconstruction algorithm allow us to achieve temporal superresolution without loss in spatial resolution. We implement a prototype of P2C2 using an LCOS modulator and recover several videos at 200 fps using a 25 fps camera.
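A hedged sketch of just the forward model described here, per-pixel coded integration over sub-frames, is shown below; the sizes are toy values, and the reconstruction with sparse representations and optical flow is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
T, H, W = 8, 64, 64                  # temporal superresolution factor and toy frame size
high_speed = rng.random((T, H, W))   # stand-in for the unknown high-speed sub-frames

# Each pixel gets its own binary shutter code over the T sub-frames.
codes = rng.integers(0, 2, size=(T, H, W)).astype(float)

# One observed low-frame-rate image = per-pixel coded integration of the sub-frames.
observed = (codes * high_speed).sum(axis=0)
```

Recovering the T sub-frames from such coded frames is the underdetermined inverse problem that the paper's spatio-temporal priors are designed to regularize.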
Conference Paper
Full-text available
We propose a method for constructing a video sequence of high space-time resolution by combining information from multiple low-resolution video sequences of the same dynamic scene. Super-resolution is performed simultaneously in time and in space. By "temporal super-resolution" we mean recovering rapid dynamic events that occur faster than the regular frame rate. Such dynamic events are not visible (or else observed incorrectly) in any of the input sequences, even if these are played in "slow motion". The spatial and temporal dimensions are very different in nature, yet are inter-related. This leads to interesting visual tradeoffs in time and space, and to new video applications. These include: (i) treatment of spatial artifacts (e.g., motion blur) by increasing the temporal resolution, and (ii) combination of input sequences of different space-time resolutions (e.g., NTSC, PAL, and even high-quality still images) to generate a high-quality video sequence.
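The temporal half of the idea can be seen in one line of arithmetic: low-frame-rate sequences with sub-frame offsets interleave into a denser set of sample times. A tiny, hypothetical illustration, with offsets and rates invented for the example:

```python
import numpy as np

fps = 25.0
n = 10
times_cam_a = np.arange(n) / fps               # first low-frame-rate camera
times_cam_b = np.arange(n) / fps + 0.5 / fps   # second camera, offset by half a frame

# Merging the two sample sets doubles the effective temporal sampling density.
merged = np.sort(np.concatenate([times_cam_a, times_cam_b]))
print(np.diff(merged))   # uniform spacing of 1 / (2 * fps) seconds
```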
Article
Full-text available
The advent of inexpensive digital image sensors and the ability to create photographs that combine information from a number of sensed images are changing the way we think about photography. In this paper, we describe a unique array of 100 custom video cameras that we have built, and we summarize our experiences using this array in a range of imaging applications. Our goal was to explore the capabilities of a system that would be inexpensive to produce in the future. With this in mind, we used simple cameras, lenses, and mountings, and we assumed that processing large numbers of images would eventually be easy and cheap. The applications we have explored include approximating a conventional single center of projection video camera with high performance along one or more axes, such as resolution, dynamic range, frame rate, and/or large aperture, and using multiple cameras to approximate a video camera with a large synthetic aperture. This permits us to capture a video light field, to which we can apply spatiotemporal view interpolation algorithms in order to digitally simulate time dilation and camera motion. It also permits us to create video sequences using custom non-uniform synthetic apertures.
Article
Full-text available
Direct observations of nonstationary asymmetric vocal-fold oscillations are reported. Complex time series of the left and the right vocal-fold vibrations are extracted from digital high-speed image sequences separately. The dynamics of the corresponding high-speed glottograms reveals transitions between low-dimensional attractors such as subharmonic and quasiperiodic oscillations. The spectral components of either oscillation are given by positive linear combinations of two fundamental frequencies. Their ratio is determined from the high-speed sequences and is used as a parameter of laryngeal asymmetry in model calculations. The parameters of a simplified asymmetric two-mass model of the larynx are preset by using experimental data. Its bifurcation structure is explored in order to fit simulations to the observed time series. Appropriate parameter settings allow the reproduction of time series and differentiated amplitude contours with quantitative agreement. In particular, several phase-locked episodes ranging from 4:5 to 2:3 rhythms are generated realistically with the model.
Article
Full-text available
Due to the sequential-readout structure of the complementary metal-oxide semiconductor (CMOS) image sensor array, each scanline of the acquired image is exposed at a different time, resulting in the so-called electronic rolling shutter that induces geometric image distortion when the object or the video camera moves during image capture. In this paper, we propose an image processing technique using a planar motion model to address the problem. Unlike previous methods that involve complex 3-D feature correspondences, a simple approach to the analysis of inter- and intraframe distortions is presented. The high-resolution velocity estimates used for restoring the image are obtained by global motion estimation, Bezier curve fitting, and local motion estimation without resorting to correspondence identification. Experimental results demonstrate the effectiveness of the algorithm.
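The per-scanline timing model behind this distortion is simple to state. The toy routine below is a simplification of my own, assuming purely horizontal motion at a constant, known velocity (the paper instead estimates a planar motion model with Bezier-fitted velocities); it shifts each row back by the displacement accumulated during that row's readout delay.

```python
import numpy as np

def correct_constant_skew(img, vx, t_row):
    """Undo rolling-shutter skew for purely horizontal motion at constant speed.

    img   : (H, W) array; row r is read out t_row * r seconds after row 0.
    vx    : assumed horizontal image velocity in pixels per second.
    t_row : per-row readout interval in seconds.
    """
    corrected = np.empty_like(img)
    for r in range(img.shape[0]):
        shift = int(round(vx * t_row * r))       # displacement accumulated by row r's readout time
        corrected[r] = np.roll(img[r], -shift)   # move the row back to its row-0 position
    return corrected
```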
Article
Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix-vector multiplies with the sampling matrix. For compressible signals, the running time is just O(N log^2 N), where N is the length of the signal.
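As a concrete illustration of the recovery procedure described here, the following is a minimal NumPy sketch of CoSaMP (a simplification for illustration only; the halting rule, matrix, and test data are invented, and a production version would use the efficient least-squares updates discussed in the paper):

```python
import numpy as np

def cosamp(A, y, s, max_iter=20, tol=1e-6):
    """Sketch of CoSaMP: recover an s-sparse x from y ~= A @ x."""
    m, n = A.shape
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(max_iter):
        proxy = A.T @ residual                          # signal proxy A^T r
        omega = np.argsort(np.abs(proxy))[-2 * s:]      # 2s largest proxy entries
        support = np.union1d(omega, np.flatnonzero(x))  # merge with current support
        b = np.zeros(n)
        b[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        keep = np.argsort(np.abs(b))[-s:]               # prune to the s largest entries
        x = np.zeros(n)
        x[keep] = b[keep]
        residual = y - A @ x
        if np.linalg.norm(residual) <= tol * np.linalg.norm(y):
            break
    return x

# Synthetic usage example: should typically recover the planted sparse vector.
rng = np.random.default_rng(3)
m, n, s = 60, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
print(np.allclose(cosamp(A, A @ x_true, s), x_true, atol=1e-4))
```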
Article
The ten articles in this special section provide the reader with specific insights into the basic theory, capabilities, and limitations of compressed sensing (CS). The papers are summarized here.
Baraniuk, R.G., Candes, E., Nowak, R., Vetterli, M.: Compressive Sampling. IEEE Signal Processing Magazine 25(2), 12-13 (2008)