## No full-text available

To read the full text of this research, you can request a copy directly from the authors.

Total variation (TV) minimisation algorithms have been successfully applied in compressive sensing (CS) recovery of natural images owing to their edge-preserving properties. However, traditional TV is no longer appropriate for omni-directional image processing because of the distortions introduced by catadioptric imaging systems. This study proposes an omni-gradient computation method that accounts for the characteristics of omni-directional imaging. To reconstruct an image from its compressive samples, omni-total variation (omni-TV) regularisation based on the omni-gradient is used in place of traditional TV during restoration. Experimental results show that omni-directional images can be reconstructed effectively and accurately: compared with the classical TV minimisation model, images recovered with the omni-TV model achieve higher quality in both subjective and objective evaluation.
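As a toy illustration of the objective, the sketch below computes a discrete isotropic TV and a radially weighted variant in plain Python. The `weight` function standing in for the omni-gradient is purely hypothetical: the actual omni-gradient in the paper follows from the catadioptric geometry, not from a simple radial weighting.

```python
import math

def tv(img):
    """Isotropic total variation of a 2-D image (list of rows),
    using forward differences with zero padding at the borders."""
    h, w = len(img), len(img[0])
    total = 0.0
    for i in range(h):
        for j in range(w):
            dx = img[i][j + 1] - img[i][j] if j + 1 < w else 0.0
            dy = img[i + 1][j] - img[i][j] if i + 1 < h else 0.0
            total += math.hypot(dx, dy)
    return total

def omni_tv(img, center, weight):
    """Radially weighted TV as a stand-in for omni-TV: each pixel's
    gradient magnitude is scaled by weight(r), r being the distance to
    the mirror centre.  This weighting is only illustrative."""
    h, w = len(img), len(img[0])
    ci, cj = center
    total = 0.0
    for i in range(h):
        for j in range(w):
            dx = img[i][j + 1] - img[i][j] if j + 1 < w else 0.0
            dy = img[i + 1][j] - img[i][j] if i + 1 < h else 0.0
            total += weight(math.hypot(i - ci, j - cj)) * math.hypot(dx, dy)
    return total

edge = [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]
print(tv(edge))  # 3.0: one unit jump per row
```

With `weight(r) = 1` the weighted variant reduces to ordinary TV, which is a convenient sanity check.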

... In Lou et al. (2014), the authors describe catadioptric image formation. Consequently, the acquired image contains significant distortions, or anamorphosis, due to the geometry of the mirror (Daniilidis et al., 2002; Geyer and Daniilidis, 2001). ...

This paper discusses extending the conventional two-dimensional total variation regularisation method for recovering images from blurry and noisy observations to catadioptric images. The method rests on a stabilised dual alternating minimisation, which introduces two auxiliary half-quadratic variables to move the system away from the ill-posed term. The main contribution of this paper is the use of the inverse stereographic projection and spherical harmonics to adapt the proposed deconvolution to catadioptric omnidirectional images. Projecting the omnidirectional image onto the unit sphere is one way to alleviate the heterogeneous resolution and the negative effects of anamorphosis. For both anisotropic and isotropic deconvolution, experimental results on synthetic as well as captured catadioptric omnidirectional images, subject to various degradations, confirm the ability of the proposed method to restore images impaired by blur and noise. Compared with several state-of-the-art approaches, the restored images reach a higher level of deconvolution quality.
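The projection onto the unit sphere mentioned above can be sketched with the standard inverse stereographic map; how the paper centres and scales the catadioptric image plane before projecting is an assumption not reproduced here.

```python
import math

def inverse_stereographic(u, v):
    """Map an image-plane point (u, v) to the unit sphere by inverse
    stereographic projection from the north pole (0, 0, 1)."""
    d = 1.0 + u * u + v * v
    return (2.0 * u / d, 2.0 * v / d, (u * u + v * v - 1.0) / d)

x, y, z = inverse_stereographic(1.0, 0.0)
print(x, y, z)  # 1.0 0.0 0.0
# Every image point lands exactly on the unit sphere:
assert abs(math.hypot(x, math.hypot(y, z)) - 1.0) < 1e-12
```

The plane origin maps to the south pole and points far from the origin approach the north pole, which is why a single projection covers the whole omnidirectional field of view.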

Recent compressive sensing results show that it is possible to accurately reconstruct certain compressible signals from relatively few linear measurements via solving nonsmooth convex optimization problems. In this paper, we propose the use of the alternating direction method - a classic approach for optimization problems with separable variables (D. Gabay and B. Mercier, "A dual algorithm for the solution of nonlinear variational problems via finite-element approximations," Computers and Mathematics with Applications, vol. 2, pp. 17-40, 1976; R. Glowinski and A. Marrocco, "Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires," Rev. Française d'Aut. Inf. Rech. Opér., vol. R-2, pp. 41-76, 1975) - for signal reconstruction from partial Fourier (i.e., incomplete frequency) measurements. Signals are reconstructed as minimizers of the sum of three terms corresponding to total variation, the ℓ<sub>1</sub>-norm of a certain transform, and least squares data fitting. Our algorithm, called RecPF and published online, runs very fast (typically in a few seconds on a laptop) because it requires a small number of iterations, each involving simple shrinkages and two fast Fourier transforms (or alternatively discrete cosine transforms when measurements are in the corresponding domain). RecPF was compared with two state-of-the-art algorithms on recovering magnetic resonance images, and the results show that it is highly efficient, stable, and robust.
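The "simple shrinkages" inside each alternating-direction iteration are the scalar soft-thresholding operator; a minimal sketch:

```python
def shrink(x, tau):
    """Scalar soft-thresholding: the closed-form minimiser of
    tau*|y| + 0.5*(y - x)**2, applied component-wise inside each
    alternating-direction iteration."""
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0

print([shrink(v, 1.0) for v in [-2.5, -0.5, 0.0, 0.3, 3.0]])
# [-1.5, 0.0, 0.0, 0.0, 2.0]
```

Because each subproblem separates over components, the whole update costs one pass over the data, which is why the per-iteration cost is dominated by the two FFTs.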

A constrained optimization type of numerical algorithm for removing noise from images is presented. The total variation of the image is minimized subject to constraints involving the statistics of the noise. The constraints are imposed using Lagrange multipliers. The solution is obtained using the gradient-projection method. This amounts to solving a time dependent partial differential equation on a manifold determined by the constraints. As t → ∞ the solution converges to a steady state which is the denoised image. The numerical algorithm is simple and relatively fast. The results appear to be state-of-the-art for very noisy images. The method is noninvasive, yielding sharp edges in the image. The technique could be interpreted as a first step of moving each level set of the image normal to itself with velocity equal to the curvature of the level set divided by the magnitude of the gradient of the image, and a second step which projects the image back onto the constraint set.
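A toy 1-D analogue of this model can be minimised by plain gradient descent on a smoothed TV term. The paper works in 2-D with gradient projection and Lagrange multipliers; the weight `lam`, smoothing `eps`, step size, and iteration count below are illustrative choices, not the paper's.

```python
import math

def tv_denoise_1d(y, lam=0.5, eps=1e-2, step=0.02, iters=2000):
    """Gradient descent on the smoothed 1-D analogue of the ROF model:
    lam * sum_i sqrt((x[i+1]-x[i])**2 + eps) + 0.5 * ||x - y||**2."""
    x = list(y)
    n = len(x)
    for _ in range(iters):
        g = [x[i] - y[i] for i in range(n)]          # fidelity gradient
        for i in range(n - 1):
            d = x[i + 1] - x[i]
            gd = lam * d / math.sqrt(d * d + eps)    # smoothed TV gradient
            g[i] -= gd
            g[i + 1] += gd
        x = [x[i] - step * g[i] for i in range(n)]
    return x

noisy = [0.0, 0.1, -0.1, 1.1, 0.9, 1.0]
print(tv_denoise_1d(noisy))  # plateaus flatten while the main edge survives
```

The qualitative behaviour matches the abstract's claim: small oscillations within a flat region are suppressed, while the large jump between the two plateaus is preserved rather than blurred.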

The use of omnidirectional vision has increased in recent years. It provides a very large field of view. Nevertheless, omnidirectional images contain significant radial distortions, and conventional image processing is not adapted to these specific images. This paper presents an edge detector adapted to the image geometry. Fuzzy sets are used to take into account the imprecision introduced by the sampling process. The Prewitt filter applied to omnidirectional images is studied as an illustration.

Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed sensing is already quite immense. Inspired by recent breakthroughs in the development of novel first-order methods in convex optimization, most notably Nesterov's smoothing technique, this paper introduces a fast and accurate algorithm for solving common recovery problems in signal processing. In the spirit of Nesterov's work, one of the key ideas of this algorithm is a subtle averaging of sequences of iterates, which has been shown to improve the convergence properties of standard gradient-descent algorithms. This paper demonstrates that this approach is ideally suited for solving large-scale compressed sensing reconstruction problems as 1) it is computationally efficient, 2) it is accurate and returns solutions with several correct digits, 3) it is flexible and amenable to many kinds of reconstruction problems, and 4) it is robust in the sense that its excellent performance across a wide range of problems does not depend on the fine tuning of several parameters. Comprehensive numerical experiments on realistic signals exhibiting a large dynamic range show that this algorithm compares favorably with recently proposed state-of-the-art methods. We also apply the algorithm to solve other problems for which there are fewer alternatives, such as total-variation minimization, and convex programs seeking to minimize the ℓ1 norm of Wx under constraints, in which W is not diagonal.

Iterative shrinkage/thresholding (IST) algorithms have been recently proposed to handle a class of convex unconstrained optimization problems arising in image restoration and other linear inverse problems. This class of problems results from combining a linear observation model with a nonquadratic regularizer (e.g., total variation or wavelet-based regularization). The convergence rate of these IST algorithms depends heavily on the linear observation operator, becoming very slow when this operator is ill-conditioned or ill-posed. In this paper, we introduce two-step IST (TwIST) algorithms, exhibiting much faster convergence than IST for ill-conditioned problems. For a vast class of nonquadratic convex regularizers (ℓp norms, some Besov norms, and total variation), we show that TwIST converges to a minimizer of the objective function for a given range of values of its parameters. For noninvertible observation operators, we introduce a monotonic version of TwIST (MTwIST); although the convergence proof does not apply to this scenario, we give experimental evidence that MTwIST exhibits similar speed gains over IST. The effectiveness of the new methods is experimentally confirmed on problems of image deconvolution and of restoration with missing samples.
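The two-step recursion can be sketched for a diagonal (and therefore trivially separable) observation operator with an ℓ1 regularizer; the `alpha`/`beta` values below are illustrative stand-ins, not the paper's parameter rule.

```python
def shrink(v, t):
    """Component-wise soft-thresholding (the IST denoising step for l1)."""
    return [max(abs(u) - t, 0.0) * (1.0 if u >= 0 else -1.0) for u in v]

def ist_step(x, a, y, tau, step):
    """One IST step for 0.5*||A x - y||^2 + tau*||x||_1 with diagonal A."""
    grad = [a[i] * (a[i] * x[i] - y[i]) for i in range(len(x))]
    return shrink([x[i] - step * grad[i] for i in range(len(x))], step * tau)

def twist(a, y, tau, alpha=1.8, beta=1.0, step=1.0, iters=100):
    """Two-step IST: each new iterate mixes the previous two iterates with
    the IST update, which damps the slow modes of an ill-conditioned A."""
    x_prev = [0.0] * len(y)
    x = ist_step(x_prev, a, y, tau, step)
    for _ in range(iters):
        g = ist_step(x, a, y, tau, step)
        x_prev, x = x, [(1.0 - alpha) * x_prev[i] + (alpha - beta) * x[i]
                        + beta * g[i] for i in range(len(y))]
    return x

# Ill-conditioned diagonal operator: entries span a 10:1 range.
x_hat = twist([1.0, 0.5, 0.1], [1.0, 0.0, 0.2], tau=0.01)
print(x_hat)  # close to [0.99, 0.0, 1.0]
```

Plain IST contracts the weakest component (scaled by 0.1²) extremely slowly; the two-term memory is what accelerates exactly those slow directions.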

Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/∼lcv/ssim/.
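A single-window version of the index is easy to state. The published SSIM averages this quantity over local Gaussian-weighted 11x11 windows, so the global variant below is only a simplification for illustration.

```python
def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM between two equal-length signals with dynamic
    range L; the published index averages this over local windows."""
    n = len(x)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

sig = [10.0, 50.0, 90.0, 130.0]
print(ssim_global(sig, sig))  # 1.0: identical signals
```

The product structure separates luminance (means), contrast (variances), and structure (covariance), which is the degradation-of-structure idea the abstract describes.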

The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).
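The greedy selection loop itself is compact; a pure-Python sketch over a tiny redundant dictionary of unit-norm atoms:

```python
import math

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy matching pursuit: repeatedly pick the unit-norm atom with
    the largest inner product against the residual, record the
    coefficient, and subtract the projection."""
    residual = list(signal)
    chosen = []
    for _ in range(n_iter):
        best, best_dot = None, 0.0
        for k, atom in enumerate(dictionary):
            d = sum(r * a for r, a in zip(residual, atom))
            if abs(d) > abs(best_dot):
                best, best_dot = k, d
        if best is None:
            break  # residual is orthogonal to every atom
        chosen.append((best, best_dot))
        residual = [r - best_dot * a
                    for r, a in zip(residual, dictionary[best])]
    return chosen, residual

# Redundant dictionary over R^2: two axis atoms plus a diagonal atom.
s = 1.0 / math.sqrt(2.0)
dico = [[1.0, 0.0], [0.0, 1.0], [s, s]]
chosen, res = matching_pursuit([3.0, 3.0], dico, n_iter=5)
print(chosen[0][0])  # 2: the diagonal atom matches [3, 3] best
```

Note that the redundancy is what makes the greedy choice meaningful: an orthonormal basis would force two atoms here, while the redundant dictionary captures the signal with one.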

Suppose x is an unknown vector in ℝ^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ_p ball for 0 < p ≤ 1. The N most important coefficients in that expansion allow reconstruction with ℓ2 error O(N^{1/2−1/p}). It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients. Moreover, a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program - Basis Pursuit in signal processing. The nonadaptive measurements have the character of "random" linear combinations of basis/frame elements. Our results use the notions of optimal recovery, of n-widths, and information-based complexity. We estimate the Gel'fand n-widths of ℓ_p balls in high-dimensional Euclidean space in the case 0 < p ≤ 1, and give a criterion identifying near-optimal subspaces for Gel'fand n-widths. We show that "most" subspaces are near-optimal, and show that convex optimization (Basis Pursuit) is a near-optimal way to extract information derived from these near-optimal subspaces.

Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system. Considering an ideal underlying signal that has a sufficiently sparse representation, it is assumed that only a noisy version of it can be observed. Assuming further that the overcomplete system is incoherent, it is shown that the optimally sparse approximation to the noisy data differs from the optimally sparse decomposition of the ideal noiseless signal by at most a constant multiple of the noise level. As this optimal-sparsity method requires heavy (combinatorial) computational effort, approximation algorithms are considered. It is shown that similar stability is also available using the basis and the matching pursuit algorithms. Furthermore, it is shown that these methods result in sparse approximation of the noisy data that contains only terms also appearing in the unique sparsest representation of the ideal noiseless sparse signal.

This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ ℂ^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes, f(t) = Σ_{τ∈T} f(τ)δ(t − τ), obeying |T| ≤ C_M·(log N)^{−1}·|Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^{−M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T|·log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^{−M}) would in general require a number of frequency samples at least proportional to |T|·log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples, provided that the number of jumps (discontinuities) obeys the condition above, by minimizing other convex functionals such as the total variation of f.
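On a tiny example, the preference of ℓ1 minimisation for sparse explanations can be reproduced with iterative soft-thresholding. This is a simple proxy: the paper's program is typically solved as a linear or second-order cone program, not by ISTA.

```python
def soft(v, t):
    """Component-wise soft-thresholding."""
    return [max(abs(u) - t, 0.0) * (1.0 if u >= 0 else -1.0) for u in v]

def ista(A, y, tau, step, iters=1000):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + tau*||x||_1.
    The step must not exceed the reciprocal of the largest eigenvalue
    of A^T A."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = soft([x[j] - step * g[j] for j in range(n)], step * tau)
    return x

# y = A x* is explained both by the 1-sparse x* = [0, 0, 1] and by the
# denser [1, 1, 0]; the l1 bias selects the sparse explanation.
A = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
x = ista(A, [1.0, 1.0], tau=0.02, step=1.0 / 3.0)
print(x)  # close to [0.0, 0.0, 0.99]
```

Both candidate solutions fit the measurements exactly; the ℓ1 penalty breaks the tie in favour of the one-spike explanation, which mirrors the recovery phenomenon the abstract quantifies.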

The problem of catadioptric omnidirectional imaging defocus blur, which is caused by lens aperture and mirror curvature, becomes more severe when high resolution sensors and large apertures are applied. In order to overcome this problem, a novel method based on computational photography is proposed. Firstly, the defocus blur of catadioptric omnidirectional imaging is analyzed to calculate the point spread function for different scene points. Then, the defocus blur kernel of omnidirectional image is confirmed to be spatially invariant when rotating the focus ring of camera lens during an image’s integration time. Lastly, the deconvolution algorithm using prior sparse derivatives is applied to obtain all-focused/sharp omnidirectional images. Experimental results demonstrate that the proposed method is effective for omnidirectional image deblurring and can be applied to most existing catadioptric omnidirectional imaging systems.

Simultaneously acquired omni-directional images contain rays of 360 degree viewing directions. To take advantage of this unique characteristic, we have been developing several methods for constructing virtual cities. In this paper, we first describe a system to generate the appearance of a virtual city; the system, which is based on image-based rendering (IBR) techniques, utilizes the characteristics of omni-directional images to reduce the number of samplings required to construct such IBR images. We then describe a method to add geometric information to the IBR images; this method is based on the analysis of a sequence of omni-directional images. Then, we describe a method to seamlessly superimpose a new building model onto a previously created virtual city image; the method enables us to estimate illumination distributions by using an omni-directional camera. Finally, to demonstrate the methods' effectiveness, we describe how we implemented and applied them to urban scenes.

In this paper, we present a new compressed-sensing (CS) setup together with a new scalable CS model, which allows the tradeoff between system complexity (number of detectors) and time (number of measurements). We describe the calibration of the system with respect to model parameters and show the reconstruction of compressed measurements according to the new model, which are acquired with the proposed setup. The proposed model and its parameter are evaluated with the established measures, i.e., restricted isometry property and coherence. The resulting consequences for usable sparsifying basis are derived on this evaluation. With the proposed setup, it is possible to acquire high-resolution images with a low-resolution camera.

Perimeter security generally requires watching areas that afford trespassers reasonable cover and concealment. By definition, such 'interesting' areas have limited visibility distance. Furthermore, targets of interest generally attempt to conceal themselves within the cover, sometimes adding camouflage to further reduce their visibility. Such targets are only visible while in motion. The combination of limited visibility distance and low target visibility severely reduces the usefulness of any approach using a standard Pan/Tilt/Zoom (PTZ) camera. As a result, these situations call for a very sensitive system with a wide field of view, and are a natural application for omni-directional video surveillance and monitoring. This paper describes a frame-rate, low-power, omni-directional tracking system (LOTS). The paper discusses related background work, including resolution issues in omni-directional imaging. A novel system component is quasi-connected components (QCC), which combines gap filling, thresholding with hysteresis (TWH), and a novel region merging/cleaning approach. The multi-background modeling and dynamic thresholding make this an ideal approach for difficult situations such as outdoor tracking in high clutter. The paper also describes target geolocation and issues in the system user interface. The single-viewpoint property of the omni-directional imaging system used simplifies the backprojection and unwarping. We end with a summary of an external evaluation of an early form of the system and comments about recent work and field tests.

We consider linear equations y = Φx, where y is a given vector in ℝ^n and Φ is a given n × m matrix with n < m ≤ τn, and we wish to solve for x ∈ ℝ^m. We suppose that the columns of Φ are normalized to unit ℓ2 norm, and we place uniform measure on such Φ. We prove the existence of ρ = ρ(τ) > 0 such that for large n and for all Φ except a negligible fraction, the following property holds: for every y having a representation y = Φx₀ by a coefficient vector x₀ ∈ ℝ^m with fewer than ρ·n nonzeros, the solution x₁ of the ℓ1-minimization problem is unique and equal to x₀. In contrast, heuristic attempts to sparsely solve such systems - greedy algorithms and thresholding - perform poorly in this challenging setting. The techniques include the use of random proportional embeddings and almost-spherical sections in Banach space theory, and deviation bounds for the eigenvalues of random Wishart matrices.

A stylized compressed sensing radar is proposed in which the time-frequency plane is discretized into an N by N grid. Assuming the number of targets K is small (i.e., K much less than N^2), then we can transmit a sufficiently "incoherent" pulse and employ the techniques of compressed sensing to reconstruct the target scene. A theoretical upper bound on the sparsity K is presented. Numerical simulations verify that even better performance can be achieved in practice. This novel compressed sensing approach offers great potential for better resolution over classical radar.

To implement high-speed panoramic unwrapping of high-resolution catadioptric omnidirectional images on field-programmable gate array (FPGA), a novel design technique of pipeline architecture is proposed, named a `series-parallel pipeline'. Based on the strategy of `block prefetching', catadioptric omnidirectional images are divided into image blocks before loading into the pipeline. Multiple functional sub-modules are copied to carry out the relatively time-consuming steps in parallel whereas fewer sub-modules or only one sub-module is created to carry out those less time-consuming steps. The number of copies is determined by the proportion of execution time of each step, and several neighbouring basic units are combined to form a `unit package' before loading into the pipeline. The basic units in one `unit package' are processed in series while carrying out the less time-consuming steps, but processed in parallel while carrying out the more time-consuming steps. A hardware pipeline design of series-parallel architecture is implemented on Xilinx Spartan-3 FPGA, which is able to unwrap one catadioptric omnidirectional image with size 1024 × 1024 into one cylindrical panorama with size 3200 × 768 at 12.480 ms per frame when the system clock is 100 MHz.
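The unwrapping itself is a fixed coordinate mapping from panorama pixels to positions in the omnidirectional image, which is what the FPGA pipeline streams block by block. The grid size, image centre, and circle radii below are illustrative values, not the paper's calibration.

```python
import math

def unwrap_map(pw, ph, cx, cy, r_in, r_out):
    """Lookup table: panorama pixel (u, v) -> source position in the
    omnidirectional image.  Columns map to the angle around the mirror
    centre (cx, cy); rows interpolate the radius between the inner and
    outer mirror circles."""
    table = []
    for v in range(ph):
        r = r_in + (r_out - r_in) * v / (ph - 1)
        row = []
        for u in range(pw):
            theta = 2.0 * math.pi * u / pw
            row.append((cx + r * math.cos(theta),
                        cy + r * math.sin(theta)))
        table.append(row)
    return table

# Small grid for illustration; the paper unwraps 1024 x 1024 inputs into
# 3200 x 768 panoramas with the same kind of mapping.
table = unwrap_map(pw=320, ph=96, cx=512.0, cy=512.0,
                   r_in=100.0, r_out=500.0)
print(table[0][0])  # (612.0, 512.0)
```

Because every output pixel depends only on this precomputed mapping (plus interpolation of a few neighbouring source pixels), the work parallelises naturally across the pipeline's sub-modules.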

In this correspondence, we introduce a new imaging method to obtain high-resolution (HR) images. The image acquisition is performed in two stages, compressive measurement and optimization reconstruction. In order to reconstruct HR images by a small number of sensors, compressive measurements are made. Specifically, compressive measurements are made by a low-resolution (LR) camera with randomly fluttering shutter, which can be viewed as a moving random exposure pattern. In the optimization reconstruction stage, the HR image is computed by different models according to the prior knowledge of scenes. The proposed imaging method offers a new way of acquiring HR images of essentially static scenes when the camera resolution is limited by severe constraints such as cost, battery capacity, memory space, transmission bandwidth, etc. and when the prior knowledge of scenes is available. The simulation results demonstrate the effectiveness of the proposed imaging method.

A vision-based navigation system is presented for determining a mobile robot's position and orientation using panoramic imagery. Omni-directional sensors are useful in obtaining a 360° field of view, permitting various objects in the vicinity of a robot to be imaged simultaneously. Recognizing landmarks in a panoramic image from an a priori model of distinct features in an environment allows a robot's location information to be updated. A system is shown for tracking vertex and line features for omni-directional cameras constructed with catadioptric (containing both mirrors and lenses) optics. With the aid of the panoramic Hough transform, line features can be tracked without restricting the mirror geometry so that it satisfies the single viewpoint criteria. This allows the use of rectangular scene features to be used as landmarks. Two paradigms for localization are explored, with experiments conducted with synthetic and real images. A working implementation on a mobile robot is also shown.

Images obtained with catadioptric sensors contain significant deformations which prevent the direct use of classical image treatments. Thus, Markov random fields (MRF) whose usefulness is now obvious for projective image processing, cannot be used directly on catadioptric images because of the inadequacy of the neighborhood. In this paper, we propose to define a new neighborhood for MRF by using the equivalence theorem developed for central catadioptric sensors. We show the importance of this adaptation for segmentation, image restoration and motion detection.

Because of the distortions produced by the insertion of a mirror, catadioptric images cannot be processed in the same way as classical perspective images. Although the equivalence between such images and spherical images is well known, the use of spherical harmonic analysis often leads to image processing methods that are difficult to implement. In this paper, we propose to define catadioptric image processing from the geodesic metric on the unit sphere. We show that this definition allows classical image processing methods to be adapted very simply. We focus in particular on image gradient estimation, interest point detection, and matching. More generally, the proposed approach extends traditional image processing techniques based on the Euclidean metric to central catadioptric images. We show the efficiency of the approach through different experimental results and quantitative evaluations.
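The underlying metric change is simple to state: a sketch of the great-circle distance used in place of the Euclidean pixel distance (the gradient and interest-point operators built on top of it in the paper are not reproduced here).

```python
import math

def geodesic_distance(p, q):
    """Great-circle (geodesic) distance between two unit vectors on the
    sphere.  The dot product is clamped to [-1, 1] to guard against
    floating-point drift before acos."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
    return math.acos(dot)

north = (0.0, 0.0, 1.0)
equator = (1.0, 0.0, 0.0)
print(geodesic_distance(north, equator))  # pi/2
```

Two pixels that look far apart in the distorted catadioptric image can be geodesically close on the sphere, which is why neighbourhoods defined with this metric behave consistently across the whole field of view.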

This lecture note presents a new method to capture and represent compressible signals at a rate significantly below the Nyquist rate. This method, called compressive sensing, employs nonadaptive linear projections that preserve the structure of the signal; the signal is then reconstructed from these projections using an optimization process.

Suppose we are given a vector f in a class F ⊆ ℝ^N, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|_(n) ≤ R·n^{−1/p}, where R > 0 and p > 0. Suppose that we take measurements y_k = ⟨f, X_k⟩, k = 1, …, K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0 < p < 1 and with overwhelming probability, our reconstruction f♯, defined as the solution to the constraints y_k = ⟨f♯, X_k⟩ with minimal ℓ1 norm, obeys ‖f − f♯‖_{ℓ2} ≤ C_p·R·(K/log N)^{−r}, r = 1/p − 1/2. There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of K measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of f. In fact, the results are quite general and require only two hypotheses on the measurement ensemble, which are detailed.

This paper considers a natural error correcting problem with real-valued input/output. We wish to recover an input vector f ∈ ℝ^n from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem min_{g∈ℝ^n} ‖y − Ag‖_{ℓ1} (where ‖x‖_{ℓ1} := Σ_i |x_i|), provided that the support of the vector of errors is not too large: ‖e‖_{ℓ0} := |{i : e_i ≠ 0}| ≤ ρ·m for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work. Finally, underlying the success of ℓ1 is a crucial property we call the uniform uncertainty principle, which we shall describe in detail.

The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries: stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and super-resolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation de-noising, and multiscale edge de-noising. Basis Pursuit in highly ...

Compressive coded apertures for high resolution imaging

- R F Marcia
- Z T Harmany
- R M Willett

Marcia, R.F., Harmany, Z.T., Willett, R.M.: 'Compressive coded apertures for high resolution imaging'. Proc. SPIE Photonics Europe, Brussels, Belgium, 2010

Stable recovery of sparse overcomplete representations in the presence of noise

- D L Donoho
- M Elad
- V Temlyakov

Donoho, D.L., Elad, M., Temlyakov, V.: 'Stable recovery of sparse overcomplete representations in the presence of noise', IEEE Trans. Inf. Theory, 2006, 52, (1), pp. 6–18

Constructing virtual cities by using panoramic images

- Ikeuchi K.