Ron Rubinstein's research while affiliated with Technion - Israel Institute of Technology and other places

Publications (9)

Article
Thresholding is a classical technique for signal denoising. In this process, a noisy signal is decomposed over an orthogonal or overcomplete dictionary, the smallest coefficients are nullified, and the transform pseudo-inverse is applied to produce an estimate of the noiseless signal. The dictionaries used in this process are typically fixed dictio...
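The pipeline this abstract describes (decompose, nullify small coefficients, invert) can be sketched in a few lines. This is an illustrative sketch, not the paper's code; it assumes an explicitly built orthonormal DCT-II dictionary, and the function names are our own:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II analysis matrix; rows are the dictionary atoms."""
    k = np.arange(n)[:, None]
    t = np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * t + 1) / (2 * n))
    M[0] /= np.sqrt(2.0)
    return M

def threshold_denoise(noisy, D, thresh):
    coeffs = D @ noisy                      # decompose over the dictionary
    coeffs[np.abs(coeffs) < thresh] = 0.0   # nullify the smallest coefficients
    return D.T @ coeffs                     # orthonormal case: pseudo-inverse = D.T
```

For a signal that is truly sparse over the dictionary, the residual error after thresholding is essentially the noise that leaks through the few surviving coefficients, which is why the scheme works.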
Article
The synthesis-based sparse representation model for signals has drawn considerable interest in the past decade. Such a model assumes that the signal of interest can be decomposed as a linear combination of a few atoms from a given dictionary. In this paper we concentrate on an alternative, analysis-based model, where an analysis operator-hereafter...
Conference Paper
The synthesis-based sparse representation model for signals has drawn considerable interest in the past decade. Such a model assumes that the signal of interest can be decomposed as a linear combination of a few atoms from a given dictionary. In this paper we concentrate on an alternative, analysis-based model, where an Analysis Dictionary multip...
Article
Full-text available
Sparse and redundant representation modeling of data assumes an ability to describe signals as linear combinations of a few atoms from a pre-specified dictionary. As such, the choice of the dictionary that sparsifies the signals is crucial for the success of this model. In general, the choice of a proper dictionary can be made in one of two ways...
Article
Full-text available
An efficient and flexible dictionary structure is proposed for sparse and redundant signal representation. The proposed sparse dictionary is based on a sparsity model of the dictionary atoms over a base dictionary, and takes the form D = ΦA, where Φ is a fixed base dictionary and A is sparse. The sparse dictionary provides efficient forward and...
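The efficiency of this structure comes from factoring the dictionary application: D x = Φ(A x), where the sparse multiply A x is cheap and Φ is a fast transform. A minimal sketch under illustrative dimensions, with a dense random matrix standing in for the base dictionary Φ (all names here are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 64, 128                              # signal dimension, number of atoms
Phi = rng.standard_normal((n, n))           # stand-in for a fast base dictionary
A = np.zeros((n, m))
for j in range(m):                          # each atom is sparse over the base:
    rows = rng.choice(n, size=4, replace=False)
    A[rows, j] = rng.standard_normal(4)     # 4 nonzeros per column

x = rng.standard_normal(m)
direct = (Phi @ A) @ x                      # explicit dictionary: O(n*m) per signal
factored = Phi @ (A @ x)                    # factored form: sparse multiply plus
                                            # one base-dictionary application
assert np.allclose(direct, factored)
```

With a structured Φ (e.g. a fast DCT) and a genuinely sparse A, the factored form replaces a dense matrix-vector product with a handful of additions and one fast transform.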
Conference Paper
Full-text available
This paper is concerned with the problem of capturing higher dynamic range video with a mobile device. We assume the mobile device has a standard (or low) dynamic range image sensor, and that the device is constrained by power and processing capability. To address these issues, we develop a system that captures a video sequence containing time vary...
Article
Full-text available
The K-SVD algorithm is a highly effective method of training overcomplete dictionaries for sparse signal representation. In this report we discuss an efficient implementation of this algorithm, which both accelerates it and reduces its memory consumption. The two basic components of our implementation are the replacement of the exact SVD computat...
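The SVD replacement the report alludes to can be illustrated on a single atom update. Given the restricted residual matrix E for the signals using that atom, K-SVD takes its best rank-1 approximation via an exact SVD, while the cheaper variant performs a single alternating pass. This is a hedged sketch, not the report's code (function names are ours):

```python
import numpy as np

def atom_update_exact(E):
    """K-SVD atom update: optimal rank-1 factorisation of E via the exact SVD."""
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    return U[:, 0], s[0] * Vt[0]            # new atom, new coefficient row

def atom_update_approx(E, g):
    """Approximate update: one alternating pass instead of a full SVD."""
    d = E @ g
    d /= np.linalg.norm(d)                  # refine atom from current coefficients
    g = E.T @ d                             # refine coefficients from new atom
    return d, g
```

The approximate pass is not optimal, but it strictly reduces the residual and costs only two matrix-vector products, which is where most of the speed-up comes from.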
Article
The concept of prior probability for signals plays a key role in the successful solution of many inverse problems. Much of the literature on this topic can be divided between analysis-based and synthesis-based priors. Analysis-based priors assign probability to a signal through various forward measurements of it, while synthesis-based priors seek a...
Article
Full-text available
The K-SVD algorithm is a highly effective method of training overcomplete dictionaries for sparse signal representation. In this report we discuss an efficient implementation of this algorithm, which both accelerates it and reduces its memory consumption. The two basic components of our implementation are the replacement of the exact SVD compu...

Citations

... Under the assumption of the sparse generative model, if the convolution kernels {k_c}_{c=1}^C match well with the "transpose" or "inverse" of the above sparsifying dictionaries D = [D_1, ..., D_k], also known as the analysis filters (Nam et al., 2013; Rubinstein and Elad, 2014), signals in one class will only have high responses to a small subset of those filters and low responses to the others (due to the incoherence assumption). Nevertheless, in practice, a sufficient number of, say C, random filters {k_c}_{c=1}^C often suffices for the purpose of ensuring that the extracted C-channel features: ...
... The training set is assigned in advance, and a sparse dictionary [22] suited to the training set is then learned. Such a sparse dictionary can extract the inherent characteristics of the target signal more accurately, and its common algorithms are K-SVD dictionary learning [23], structure-based dictionary learning [24], and online dictionary learning [25]. ...
... Three well-known solution methods for the DL problem in Equation (2) are the method of optimum directions (MOD) [24,25], K-singular value decomposition (KSVD) [8,26], and sequential generalisation of K-means (SGK) [27,28]. The MOD algorithm obtains the unconstrained minimum of Equation (2) as a closed-form formula, and then projects it to the feasible set D, by normalising its columns. ...
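The MOD update this snippet describes fits in two lines: the unconstrained least-squares minimiser of ||Y - DX||_F over D has the closed form Y X⁺, which is then projected onto the feasible set by normalising columns. A sketch (function name ours):

```python
import numpy as np

def mod_update(Y, X):
    """MOD dictionary update: closed-form least squares, then column normalisation."""
    D = Y @ np.linalg.pinv(X)               # minimiser of ||Y - D X||_F over D
    D /= np.linalg.norm(D, axis=0)          # project: unit-norm atoms (columns)
    return D
```

When X has full row rank (more training signals than atoms, generically), Y X⁺ recovers the exact least-squares dictionary in one step, which is MOD's appeal over per-atom updates.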
... This least squares problem can be solved using the most popular DL method, K-SVD [73]. A faster version named Approximate K-SVD (AK-SVD) [74] is adopted to solve the optimization problem, as it approximately calculates the singular vectors instead of performing a full singular value decomposition. After the dictionary update, normalization is performed for each atom, and this procedure is iterated until convergence. ...
... Typical greedy algorithms include Matching Pursuit (MP) (Mallat and Zhang, 1993), Orthogonal Matching Pursuit (OMP) (Yi and Song, 2015), which is developed on the basis of MP, Regularized Orthogonal Matching Pursuit (ROMP) (Sajjad et al., 2015), Sparsity Adaptive Matching Pursuit (SAMP) (Wang et al., 2020), Compressive Sampling Matching Pursuit (CoSaMP) (Huang et al., 2017), Subspace Pursuit (SP) (Li et al., 2015), and other methods, all of which achieve good sparse signal reconstruction. Matching-Pursuit-type algorithms are commonly used for sparse image representation, and Rubinstein's team (Rubinstein et al., 2008) used Batch Orthogonal Matching Pursuit (Batch-OMP) for fast denoising and sparse representation of image signals. Greedy algorithms, with their mature theory, low complexity, and fast running speed, are widely used for sparse signal decomposition. ...
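For reference, a plain OMP iteration looks as follows: pick the atom most correlated with the residual, then re-fit all selected atoms by least squares. Batch-OMP accelerates this same scheme by precomputing the Gram matrix DᵀD and DᵀY across many signals, which this hedged sketch omits:

```python
import numpy as np

def omp(D, y, k):
    """Plain Orthogonal Matching Pursuit: greedily select the atom most
    correlated with the residual, then re-fit all selected atoms to y."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef          # orthogonalised residual
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

Because the re-fit makes the residual orthogonal to every selected atom, no atom is selected twice, and for an orthonormal dictionary the k-sparse signal is recovered exactly in k steps.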
... This estimator is also known as the analysis estimator, in the terminology of [10]; see [13], [38], [23] and [11] for results on prediction error bounds for the estimator (7) and its constrained form when Γβ * is sparse. The graph considered in the trend filtering problem is usually a chain or grid graph due to applications such as image denoising, but results for other types of graphs such as trees and star graphs are also available in the literature. ...
... In Equation (1), once the dictionary matrix D is determined, the sparse matrix X is solved for. This process is defined as sparse coding [8]. Sparsity is reflected in the fact that each vector x_i has only a few non-zero elements. ...
... Approximations can also be performed by superposing sigmoidal functions [14], using data-adaptive normalized Gaussian functions [43] or a gradient descent 'boosting' paradigm [24]. We also cite the recent sparse approximations [36], for which many solvers have been proposed [21,27,46,49,55]. In this study, we put more emphasis on piecewise smooth approximations, namely piecewise constant and affine approximations, which have found important applications in signal/image processing and related fields. ...