The synthesis-based sparse representation model for signals has drawn considerable interest in the past decade. Such a model assumes that the signal of interest can be decomposed as a linear combination of a few atoms from a given dictionary. In this paper we concentrate on an alternative, analysis-based model, where an analysis operator (hereafter referred to as the analysis dictionary) multiplies the signal, leading to a sparse outcome. Our goal is to learn the analysis dictionary from a set of examples. The approach taken parallels the one adopted by the K-SVD algorithm, which serves the corresponding problem in the synthesis model. We present the development of the algorithm's steps, including tailored pursuit algorithms (the Backward Greedy and the Optimized Backward Greedy algorithms) and a penalty function that defines the objective for the dictionary-update stage. We demonstrate the effectiveness of the proposed dictionary learning in several experiments on synthetic data and real images, showing a successful and meaningful recovery of the analysis dictionary.
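The Backward Greedy pursuit described above can be sketched in a few lines of NumPy: starting from the observed signal, the co-support is grown one row at a time by picking the analysis row with the smallest response, and the estimate is re-projected onto the null space of the selected rows. This is an illustrative sketch of the idea, not the authors' implementation; the operator `omega` and the fixed target cosparsity are assumptions of the sketch.

```python
import numpy as np

def backward_greedy(omega, y, cosparsity):
    """Backward Greedy pursuit sketch for the cosparse analysis model:
    grow the co-support Lambda row by row, projecting y onto the
    null space of the selected analysis rows at each step."""
    lam = []                      # co-support (row indices of omega)
    x_hat = y.copy()
    for _ in range(cosparsity):
        scores = np.abs(omega @ x_hat)
        scores[lam] = np.inf      # never re-pick a selected row
        i = int(np.argmin(scores))
        lam.append(i)
        sub = omega[lam, :]
        # project y onto null(omega_Lambda)
        x_hat = y - np.linalg.pinv(sub) @ (sub @ y)
    return x_hat, sorted(lam)
```

With a signal that truly lies in the null space of two analysis rows, the sketch recovers both the co-support and the signal exactly.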

... where s is the sparse signal of f, and ψ is the sparse basis of f. Previous studies have shown that various types of signals have good sparsity under a redundant dictionary that is an adaptive sparse basis representation [29,30]. Therefore, we use a redundant dictionary based on the K-singular value decomposition (K-SVD) method [30] for the sparse representation of the rainfall field f. ...

... Previous studies have shown that various types of signals have good sparsity under a redundant dictionary that is an adaptive sparse basis representation [29,30]. Therefore, we use a redundant dictionary based on the K-singular value decomposition (K-SVD) method [30] for the sparse representation of the rainfall field f. Solving for the redundant dictionary is a process of establishing the prior relationship between the rainfall field f and the sparse signal s, and the mathematical model is given by ... The core problem in reconstructing the rainfall field using the ESL network is how to recover the complete rainfall field in the area from sparsely distributed rainfall information measured by the ESL network, and the process can be represented as ...

High-precision rainfall information is of great importance for improving the accuracy of numerical weather prediction and for monitoring floods and mudslides that affect human life. With the rapid development of satellite constellation networks, there is great potential for reconstructing high-precision rainfall fields over large areas by using widely distributed Earth–space link (ESL) networks. In this paper, we have carried out research on reconstructing high-precision rainfall fields using an ESL network with the compressed sensing (CS) method in the case of a sparse distribution of the ESLs. Firstly, ESL networks with different densities are designed using the K-means clustering algorithm. The real rainfall fields are then reconstructed using the designed ESL networks with CS, and the reconstructed results are compared with those of the inverse distance weighting (IDW) algorithm. The results show that the root mean square error (RMSE) and correlation coefficient (CC) of the reconstructed rainfall fields using the ESL network with CS are lower than 0.15 mm/h and higher than 0.999, respectively, when the density is 0.05 links per square kilometer, indicating that the ESL network with CS is capable of reconstructing high-precision rainfall fields under sparse sampling. Additionally, the performance of reconstructing the rainfall fields using the ESL networks with CS is superior to that of the IDW algorithm.
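The IDW baseline used for comparison above is simple to state: each unmeasured grid point is a weighted average of the measured values, with weights decaying as an inverse power of distance. A minimal NumPy sketch follows; the `power` exponent and the `eps` guard against division by zero are illustrative assumptions, not values from the paper.

```python
import numpy as np

def idw(points, values, targets, power=2.0, eps=1e-12):
    """Inverse distance weighting: each target is a weighted average
    of the observed values, with weight = 1 / distance**power."""
    # pairwise distances, shape (num_targets, num_points)
    d = np.linalg.norm(targets[:, None, :] - points[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)
    return (w @ values) / w.sum(axis=1)
```

A target equidistant from two gauges gets their mean; a target on top of a gauge essentially reproduces its value.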

... In all of the above methods, the transform matrix W is known beforehand and hence the form of the regularizer is known. There has been much interest in the past decade in adapting the regularizer to data [40][41][42], [46]. The regularization parameters, such as the sparsifying transform, could be learned from a dataset of high-quality images or even directly from measurements [40] (during image reconstruction). ...

... 3. We address the drawbacks of the existing methods for solving (1.6) which are discussed in detail in Section 2.5 and we discuss how our method overcomes these drawbacks. 4. We perform experiments on image denoising on some images from Urban-100 Dataset and compare with methods like BM3D and unsupervised sparsifying operator learning methods like Analysis K-SVD [46]. To the best of our knowledge, no previous work on bilevel optimization problems performed experiments on image denoising problems. ...

... The study is meant to provide some understanding of the behavior of supervised learning via BLORC and not an exhaustive demonstration of image denoising performance. We compare the results of image denoising by BLORC with the methods BM3D [15] and Analysis KSVD [46]. Both these algorithms are well-known to work on images with blocks, structures and orientations and hence comparing with these denoising algorithms gives us a standard baseline for the performance of BLORC. ...

We present a method for supervised learning of sparsity-promoting regularizers for denoising signals and images. Sparsity-promoting regularization is a key ingredient in solving modern signal reconstruction problems; however, the operators underlying these regularizers are usually either designed by hand or learned from data in an unsupervised way. The recent success of supervised learning (mainly convolutional neural networks) in solving image reconstruction problems suggests that it could be a fruitful approach to designing regularizers. Towards this end, we propose to denoise signals using a variational formulation with a parametric, sparsity-promoting regularizer, where the parameters of the regularizer are learned to minimize the mean squared error of reconstructions on a training set of ground-truth image and measurement pairs. Training involves solving a challenging bilevel optimization problem; we derive an expression for the gradient of the training loss using the closed-form solution of the denoising problem and provide an accompanying gradient descent algorithm to minimize it. Our experiments with structured 1D signals and natural images show that the proposed method can learn an operator that outperforms well-known regularizers (total variation, DCT-sparsity, and unsupervised dictionary learning) and collaborative filtering for denoising. While the approach we present is specific to denoising, we believe that it could be adapted to the larger class of inverse problems with linear measurement models, giving it applicability in a wide range of signal reconstruction settings.
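The inner denoising problem in such variational formulations, min_x 0.5‖x − y‖² + λ‖Wx‖₁, can be approximated with plain gradient descent once the ℓ1 term is smoothed (here via a Huber-style clipping of its derivative). This is a generic sketch of analysis-sparsity denoising, not the paper's bilevel method; `lam`, `delta`, and the iteration count are illustrative choices.

```python
import numpy as np

def denoise(y, W, lam=0.5, delta=1e-3, iters=2000):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum huber_delta(W x),
    a smooth stand-in for the analysis-sparsity (l1) regularizer."""
    # Lipschitz bound of the gradient -> guaranteed-stable step size
    L = 1.0 + lam * np.linalg.norm(W, 2) ** 2 / delta
    x = y.astype(float).copy()
    for _ in range(iters):
        # Huber derivative is clip(t/delta, -1, 1)
        grad = (x - y) + lam * W.T @ np.clip(W @ x / delta, -1.0, 1.0)
        x -= grad / L
    return x
```

With W a finite-difference operator this behaves like smoothed total-variation denoising: constant signals are fixed points, and noisy signals come out with reduced total variation.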

... Among them, the total-variation (TV) regularization [13] assumes that images and signals are piecewise smooth, usually resulting in patchy effects. Dictionary learning based methods [14] can exploit more sparsity by learning an overcomplete dictionary. Moreover, the discrete cosine transform (DCT) [15], tight frames [16], and wavelets [17] are also popular sparsity models. ...

... According to Remark 3.1, if we let G = G G , then the group-based CT denoising problem (14) is equivalent to the following low-rank minimization problem ...

... After substituting G G with G, the optimization problem (14) can be equivalently converted into problem (17); for convenience, we rewrite problem (17) as ...

Compressive sensing (CS) based computed tomography (CT) image reconstruction aims at reducing the radiation risk through sparse-view projection data. It is usually challenging to achieve satisfying image quality from incomplete projections. Recently, the nonconvex $L_{1/2}$-norm has achieved promising performance in sparse recovery, while its applications in imaging have been unsatisfactory due to its nonconvexity. In this paper, we develop an $L_{1/2}$-regularized nonlocal self-similarity (NSS) denoiser for the CT reconstruction problem, which integrates low-rank approximation with the group sparse coding (GSC) framework. Concretely, we first split the CT reconstruction problem into two subproblems, and then further improve the CT image quality using our $L_{1/2}$-regularized NSS denoiser. Instead of optimizing the nonconvex problem from the perspective of GSC, we reconstruct the CT image via low-rank minimization based on two simple yet essential schemes, which build the equivalent relationship between the GSC based denoiser and low-rank minimization. Furthermore, the weighted singular value thresholding (WSVT) operator is utilized to optimize the resulting nonconvex $L_{1/2}$ minimization problem. Following this, our proposed denoiser is integrated into the CT reconstruction problem via the alternating direction method of multipliers (ADMM) framework. Extensive experimental results on typical clinical CT images demonstrate that our approach achieves better performance than popular approaches.
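The weighted singular value thresholding (WSVT) operator mentioned above admits a compact sketch: take an SVD, soft-shrink each singular value by its own weight, and reconstruct. The sketch assumes the weights are supplied in the same descending order as the singular values; how the weights are chosen is specific to the paper and not reproduced here.

```python
import numpy as np

def wsvt(X, weights):
    """Weighted singular value thresholding: soft-shrink each
    singular value by its own weight, then reconstruct the matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shr = np.maximum(s - weights, 0.0)   # per-value soft threshold
    return U @ np.diag(s_shr) @ Vt
```

Shrinking a singular value all the way to zero reduces the rank of the output, which is how the operator promotes low-rank group estimates.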

... Extended versions of these frames for higher-dimensional signals, such as videos, have also been proposed [18]-[21]. In addition to directional frames, more general systems such as dictionaries [22], [23] and graph WTs/FBs [24] that capture highly complex structures and non-local similarity have been proposed. In this paper, an atom refers to an element d_n (n = 0, ..., N-1) of a frame D = [d_0 ··· d_{N-1}] ∈ R^{M×N} [17]. ...

... Although these existing frames and dictionaries have been successfully applied to image processing, problems and limitations have been recognized in practical situations. First, the computational complexity of calculating sparse coefficients is typically high due to the complicated algorithms involved, such as 2D filtering [6], sparse coding with various iterative schemes [22], [23], and eigenvalue decomposition of a large-scale graph Laplacian [24]. Second, they typically require high transform redundancy, which leads to a large amount of memory usage to store the coefficients. ...

... The DADCF requires two block transforms, additions and subtractions between the two transforms, and scaling operations. Its computational cost is slightly higher than conventional block transforms due to the SAP operations but much lower than other overlapped frames and dictionaries, as mentioned in Section I. Its redundancy ratio is 2: the same as the DFT and the DTCWTs [12], [14], [15], [32], and thus it can reduce memory usage compared with highly redundant frames and dictionaries like those in [22], [23]. ...

Block frames called directional analytic discrete cosine frames (DADCFs) are proposed for sparse image representation. In contrast to conventional overlapped frames, the proposed DADCFs require a reduced amount of 1) computational complexity, 2) memory usage, and 3) global memory access. These characteristics are strongly required for current high-resolution image processing. Specifically, we propose two DADCFs based on the discrete cosine transform (DCT) and discrete sine transform (DST). The first DADCF is constructed from parallel separable transforms of the DCT and DST, where the DST is permuted by row. The second DADCF is also designed based on the DCT and DST, while the DST is customized to have no DC leakage, a desirable property for image processing. Both DADCFs have rich directional selectivity with slightly different characteristics from each other, and they can be implemented as non-overlapping block-based transforms; in particular, they form Parseval block frames with low transform redundancy. We perform experiments to evaluate our DADCFs and compare them with conventional directional block transforms in image recovery.
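The "Parseval block frame with redundancy 2" property can be illustrated directly: stacking an orthonormal DCT-II matrix on an orthonormal DST-II matrix and scaling by 1/√2 yields an analysis operator A with AᵀA = I, so analysis preserves energy. This sketch deliberately omits the row permutation and DC-leakage customization of the actual DADCF designs.

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix (rows are basis vectors)."""
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * np.outer(n, 2 * n + 1) / (2 * N))
    C[0] /= np.sqrt(2.0)           # DC row rescaled for orthonormality
    return C

def dst_matrix(N):
    """Orthonormal DST-II matrix (rows are basis vectors)."""
    n = np.arange(N)
    S = np.sqrt(2.0 / N) * np.sin(np.pi * np.outer(n + 1, 2 * n + 1) / (2 * N))
    S[-1] /= np.sqrt(2.0)          # last row rescaled for orthonormality
    return S

def parseval_frame(N):
    """Stack DCT-II on DST-II: a redundancy-2 Parseval analysis operator."""
    return np.vstack([dct_matrix(N), dst_matrix(N)]) / np.sqrt(2.0)
```

Because each block is an orthonormal matrix, AᵀA = (CᵀC + SᵀS)/2 = I, which is exactly the Parseval (tight frame) condition.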

... In recent years, sparse representation has been widely applied in many data mining areas across signal processing and pattern recognition, whereby signals can be represented as a linear combination of a relatively small number of atoms of an over-complete dictionary [1]. Sparse representation models can be classified into two categories by learning method: synthesis dictionary learning (SDL) and analysis dictionary learning (ADL) [1,9]. A typical SDL model learns the over-complete dictionary by minimizing the reconstruction errors such that it can linearly represent the original signals. ...

... However, current LRR models are commonly combined with SDL models for spanning the multiple subspaces, and learning representations for test samples is a quite time-consuming procedure [2]. Furthermore, the traditional ADL model views the signal transformation problem as a pure approximation task, which may overlook intrinsic signal attributes such as structural and discriminative capability [9]. Thus, the discriminative ability of the analysis dictionary is not fully exploited with only the label matrix of the training samples [3,17]. ...

... The analysis dictionary learning (ADL) model aims to learn a projective matrix (i.e., analysis dictionary) Ω ∈ R^{k×m} with k > m to implement the approximately sparse representation Z ∈ R^{k×n} in the transformed domain [8,9]. Specifically, it assumes that the product of Ω and x_i is sparse, i.e., z_i = Ωx_i with ∥z_i∥_0 = k − l, where 0 ≤ l ≤ k is the number of zeros in z_i ∈ R^k. ...

Sparse representation of images by analysis dictionary learning (ADL) has been an active topic in pattern classification, as samples can be transformed into sparse representations efficiently. However, learning a discriminative and compact analysis dictionary has not been well addressed when the samples are corrupted with noise. In this paper, we propose a low-rank orthonormal analysis dictionary learning (LR-OADL) model. Specifically, a low-rank constraint is first imposed on the analysis representation to handle possible noise in the samples. With an orthonormal constraint and an off-block-diagonal suppressing term, the analysis dictionary atoms from different classes are incoherent with each other, leading to discriminative block-diagonal representations. Furthermore, a novel locality constraint is exploited to promote discriminative within-class representation similarity. Finally, we employ an alternating minimization algorithm to solve this problem. Experiments on benchmark image datasets demonstrate the efficacy of the proposed LR-OADL model.
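The hard-thresholding view of analysis sparse coding used in these ADL models, z_i = Ωx_i with the l smallest-magnitude entries set to zero (cosparsity l), is a one-liner in NumPy. The function below is an illustrative sketch of that coding step, not any specific model's full algorithm.

```python
import numpy as np

def analysis_code(omega, x, l):
    """Analysis sparse coding sketch: z = Omega @ x with the l
    smallest-magnitude entries zeroed (i.e., cosparsity l)."""
    z = omega @ x
    idx = np.argsort(np.abs(z))[:l]   # positions of the l smallest responses
    z[idx] = 0.0
    return z
```

The result has exactly l zeros (for generic inputs), matching the ∥z_i∥_0 = k − l convention in the snippet above.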

... Three well-known solution methods for the DL problem Equation (2) are the method of optimum directions (MOD) [24,25], K-singular value decomposition (KSVD) [8,26], and sequential generalisation of K-means (SGK) [27,28]. The MOD algorithm obtains the unconstrained minimum of Equation (2) as a closed-form formula, and then projects it to the feasible set D, by normalising its columns. ...

... The structure of the first proposed algorithm for solving problem Equation (11) is based on the generalisation of the KSVD algorithm for 1DDL [26] to the MD case. Unlike the 1D KSVD algorithm, which uses OMP in the SC step, here the MRe-HDSL0 technique, which has a high convergence speed, is used [13]. ...

Generalisation of one-dimensional dictionary learning (1DDL) algorithms to the multidimensional (MD) mode, and its utilisation in MD data applications, increases speed and reduces computational complexity. An example of such an application is 3D inverse synthetic aperture radar (ISAR) image reconstruction and noise reduction. In this study, in addition to the MD mode generalisation, the formulation structure of the multidimensional dictionary learning (MDDL) problem is discussed, followed by two novel algorithms to solve it. The first one is based on the K-singular value decomposition algorithm for 1DDL, which uses alternating minimisation and singular value decomposition. The second algorithm is the extension of the sequential generalisation of K-means 1DDL algorithm to the MD mode. Moreover, an MD tensor denoising method based on the MDDL algorithm (MDDL-ALG) is proposed. As an application, the proposed method is used to denoise the 3D ISAR image. The numerical simulations reveal that the proposed methods, in addition to reducing memory consumption and computational complexity, also enjoy a higher convergence rate in comparison to 1D algorithms. Specifically, the convergence speed of the MD algorithms, depending on the training data size, is at least 10 times faster than that of the equivalent 1D counterparts. As revealed through the simulations, the signal-to-noise ratio recovered by the proposed methods is almost 2 dB higher than when a pre-designed dictionary is used for denoising. Moreover, it outperforms the conventional 3D-IFFT method for image construction by about 10 dB.
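The K-SVD dictionary-update stage that the first algorithm generalises can be sketched for the 1D case: for one atom, gather the signals that currently use it, form the residual with that atom's contribution removed, and refit the atom and its coefficients from the leading singular pair of the residual. The variable layout (`Y` with signals as columns, `X` holding the sparse codes) is an assumption of this sketch.

```python
import numpy as np

def ksvd_atom_update(Y, D, X, k):
    """One K-SVD dictionary-update step: rank-1 refit of atom k
    (and its coefficients) on the signals that currently use it."""
    users = np.nonzero(X[k])[0]            # signals whose code uses atom k
    if users.size == 0:
        return D, X
    # residual with atom k's contribution added back in
    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]                      # best rank-1 atom (unit norm)
    X[k, users] = s[0] * Vt[0]             # matching coefficients
    return D, X
```

When the restricted residual is exactly rank 1, a single update reproduces the data perfectly, which is the intuition behind the alternating scheme.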

... To obtain reliable classification models, sparse representation has attracted much attention due to its discriminative and compact characteristics, with which an input sample can be coded as a linear combination of a few atoms in an over-complete dictionary [17]. Dictionary learning (DL) methods play a vital role in sparse representation and can be classified into two categories by the way samples are encoded, i.e., the synthesis dictionary learning (SDL) model and the analysis dictionary learning (ADL) model [1,10]. An SDL model learns an over-complete dictionary by minimizing the reconstruction errors such that it can exactly represent the samples. ...

... The ADL model aims to learn an analysis dictionary Ω ∈ R^{k×m} with k > m to implement the approximately sparse representation of the signal y ∈ R^m in the transformed domain [9,10]. Specifically, it assumes that the product of Ω and y is sparse, i.e., x = Ωy with ∥x∥_0 = k − t, where 0 ≤ t ≤ k is the number of zeros in x ∈ R^k. ...

The analysis dictionary learning (ADL) model has attracted much interest from researchers in representation-based classification due to its scalability and efficiency in out-of-sample classification. However, the discrimination of the analysis representation is not fully explored when the supervised information is only roughly considered in the presence of redundant and noisy samples. In this paper, we propose a discriminative and robust analysis dictionary learning model (DR-ADL), which explores the underlying structural information of data samples. Firstly, a supervised latent structural term is implicitly considered to generate a roughly block-diagonal representation for intra-class samples. However, this discriminative structure is fragile and weak in the presence of noisy and redundant samples. Concentrating on both intra-class and inter-class information, we then explicitly incorporate an off-block suppressing term into the ADL model for discriminative structure representation. Moreover, a non-negativity constraint is incorporated on the representations to ensure a reasonable explanation of the contribution of each atom. Finally, the DR-ADL model is solved alternately and efficiently by the K-SVD method, an iterative re-weighted method, and a gradient method. Experimental results on classification over four benchmark face datasets validate the performance superiority of our DR-ADL model.

Keywords: Analysis dictionary learning; Off-block suppression; K-SVD method; Latent space classification

... Dictionary Learning can be decomposed into two principal themes of learning: Synthesis Dictionary Learning (SDL) and Analysis Dictionary Learning (ADL) / Transform Learning. While SDL has been greatly investigated and widely used (Mairal, Ponce, Sapiro, Zisserman and Bach, 2009; Aharon, Elad and Bruckstein, 2006; Ramirez, Sprechmann and Sapiro, 2010a; Yang, Zhang, Feng and Zhang, 2014; Wang, Yang, Nasrabadi and Huang, 2013), ADL / Transform Learning, as a dual problem, has been getting greater attention for its robustness property, among others (Rubinstein, Peleg and Elad, 2013; Bian, Krim, Bronstein and Dai, 2016; Tang, Otero, Krim and Dai, 2016). Conventional DL based methods have primarily focused on learning a single-layer dictionary and its associated sparse representation. ...

... Conventional DL based methods have primarily focused on learning a single-layer dictionary and its associated sparse representation. The conventional single-layer DL methods, with their associated sparse representations, present significant computational challenges addressed by different techniques, including K-SVD (Aharon et al., 2006; Rubinstein et al., 2013), SNS-ADL (Bian et al., 2016) and the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) (Beck and Teboulle, 2009). Although meant to provide a practically faster solution, the alternating minimization of FISTA still exhibited limitations and a relatively high computational cost. ...

On account of its many successes in inference tasks and imaging applications, Dictionary Learning (DL) and its related sparse optimization problems have garnered a lot of research interest. In the DL area, most solutions focus on single-layer dictionaries, whose reliance on handcrafted features achieves somewhat limited performance. With the rapid development of deep learning, improved DL methods, called Deep DL (DDL), have recently been proposed as an end-to-end flexible inference solution with much higher performance. The proposed DDL techniques have, however, also fallen short on a number of issues, namely computational cost and difficulties in gradient updating and initialization. While a few differential programming solutions have been proposed to speed up single-layer DL, none of them could ensure an efficient, scalable, and robust solution for DDL methods. To that end, we propose herein a novel differentiable programming approach, which yields an efficient, competitive and reliable DDL solution. The novel DDL method jointly learns deep transforms and deep metrics, where each DL layer is theoretically reformulated as a combination of one linear layer and a Recurrent Neural Network (RNN). The RNN is also shown to flexibly account for the layer-associated approximation together with a learnable metric. Additionally, our proposed work unveils new insights into Neural Networks (NNs) and DDL, bridging the combinations of linear and RNN layers with DDL methods. Extensive experiments on image classification problems are carried out to demonstrate that the proposed method can not only outperform existing DDL methods on several counts, including efficiency, scaling and discrimination, but also achieve better accuracy and increased robustness against adversarial perturbations compared to CNNs.
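The "DL layer = linear layer + recurrence" view invoked above can be illustrated with plain ISTA, whose iteration is exactly a linear step followed by a soft-threshold nonlinearity, i.e., the recurrent cell that LISTA-style networks learn. This is a generic sketch of the connection, not the paper's architecture; the step size is derived from the Lipschitz constant rather than learned.

```python
import numpy as np

def soft(v, t):
    """Soft-threshold: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_layer(W, y, lam, iters=200):
    """Sparse coding via ISTA: each iteration is a linear update
    followed by soft-thresholding -- an RNN-cell-like recurrence."""
    eta = 1.0 / np.linalg.norm(W, 2) ** 2   # step from the Lipschitz constant
    z = np.zeros(W.shape[1])
    for _ in range(iters):
        z = soft(z - eta * W.T @ (W @ z - y), eta * lam)
    return z
```

With an orthonormal `W` the recurrence collapses to a single soft-threshold of the input, which makes the fixed point easy to check by hand.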

... Information labelling based on discriminative dictionary learning algorithms for pattern recognition, sparse model analysis, and sparse coding has been introduced in [4][5][6][7]. Then, considering a small number of noisy measurements, [8] proposed the recovery of high-dimensional sparse signals based on OMP algorithms under the condition of mutual incoherence and non-zero components. ...

... unprocessed input data like generator active/reactive power, bus voltage phasors, branch active/reactive power, etc. The studies [4]-[12] report TSA results which are generally satisfactory in terms of accuracy, but they all use highly black-boxed learning machines, which make it impossible to explain why the stability or instability decision is taken. Also, while some methods report a very short decision time, such as predicting instability 30 s ahead of time using 1-cycle post-fault data, it is unclear if they considered all types of instability that may occur in this extended time frame, such as rotor angle, oscillatory, transient voltage and frequency instability, which together can be termed dynamic instability. ...

This paper develops a supervised learning of overcomplete dictionaries (SLOD) to efficiently perform online analysis and dynamic stability prediction (DSP) in large power systems. To this end, multiple post-contingency signals with varying faults, load and generator switching are acquired by phasor measurement unit (PMU) based monitoring of all generators to form the training dataset. The SLOD approach then jointly learns a sparse feature dictionary and a stability status classifier using dynamic singular value decomposition (D-KSVD) and orthogonal matching pursuit (OMP). The aim is to learn features of the signals while adding a stability status term to the optimization problem. This enables the stable/unstable classes to be represented. The proposed SLOD manages to improve online DSP shortly after a contingency. It does this without any prior assumptions or signal preprocessing except for dynamic state estimation to get rotor speed from generator terminal PMU data. Test results on the IEEE 2-area 4-machine, 39-bus, and 68-bus test systems demonstrate that the SLOD gives more than satisfactory performance in all cases (i.e., ~99.99% accuracy, ~99.99% reliability, and 100% security) compared to MATLAB's Statistics and Machine Learning Toolbox, which uses time-series-based high-dimensional stability indices as input for DSP.
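The OMP step referenced above follows the standard recipe: repeatedly pick the atom most correlated with the residual, then refit all selected coefficients by least squares. The following is a generic sketch of that recipe, not the paper's implementation; `D` is assumed to have atoms as columns.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: select the atom most correlated
    with the residual, then least-squares refit on the support."""
    support = []
    x = np.zeros(D.shape[1])
    r = y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))   # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef                     # refit all active coefficients
        r = y - D @ x                         # update residual
    return x
```

On an orthonormal dictionary, k iterations exactly recover any k-sparse signal, which makes a convenient sanity check.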

... Other algorithms formulate the prior explicitly. For instance, dictionary-based methods [17] assume that images can be well represented by a fixed set of elements; we discuss these in the next section. Other examples are shrinkage methods [18], [19]. ...

... We also describe the relation to dictionary-based methods. Dictionary-based methods [17], [43] generally follow the formulation x̂ = arg min ...

Classic image-restoration algorithms use a variety of priors, either implicitly or explicitly. Their priors are hand-designed and their corresponding weights are heuristically assigned. Hence, deep learning methods often produce superior image restoration quality. Deep networks are, however, capable of inducing strong and hardly predictable hallucinations. Networks implicitly learn to be jointly faithful to the observed data while learning an image prior; and the separation of original data and hallucinated data downstream is then not possible. This limits their widespread adoption in image restoration. Furthermore, it is often the hallucinated part that falls victim to degradation-model overfitting. We present an approach with decoupled network-prior based hallucination and data fidelity terms. We refer to our framework as the Bayesian Integration of a Generative Prior (BIGPrior). Our method is rooted in a Bayesian framework and tightly connected to classic restoration methods. In fact, it can be viewed as a generalization of a large family of classic restoration algorithms. We use network inversion to extract image prior information from a generative network. We show that, on image colorization, inpainting and denoising, our framework consistently improves the inversion results. Our method, though partly reliant on the quality of the generative network inversion, is competitive with state-of-the-art supervised and task-specific restoration methods. It also provides an additional metric that sets forth the degree of prior reliance per pixel relative to data fidelity.
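The per-pixel trade-off between the generative prior and data fidelity described above can be sketched as a convex combination, with the mixing map doubling as the reported "degree of prior reliance per pixel". This is a schematic reading of the fusion idea only, not the paper's full Bayesian machinery; the names `x_prior`, `x_data`, and `phi` are this sketch's, not the paper's.

```python
import numpy as np

def bigprior_fusion(x_prior, x_data, phi):
    """Per-pixel convex combination of a generative-prior estimate
    and a data-fidelity estimate; phi in [0,1] is the map of
    prior reliance per pixel."""
    phi = np.clip(phi, 0.0, 1.0)
    return phi * x_prior + (1.0 - phi) * x_data
```

At phi = 0 the output is purely data-driven, at phi = 1 purely prior-driven, and intermediate maps interpolate pixelwise.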

... Based on this model, several analysis dictionary learning (ADL) algorithms have been proposed, e.g. Analysis K-SVD [19], constrained overcomplete analysis operator learning [23], geometric analysis operator learning [9], and Analysis SimCO [6]. ...

... where {Ω̂_{j,:}} for j = 1, ..., p are the atoms of the learned dictionary and ϵ is the threshold value used to determine the similarity of the learned atoms and the reference atoms. The value of ϵ is set to 0.01 [6,19]. The average cosparsity evaluates the learned dictionaries directly, based on the cosparsities of the original signals with respect to the learned dictionaries. ...
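The atom-recovery criterion quoted above, where a learned atom matches a reference atom when 1 − |⟨ω̂, ω⟩| falls below the 0.01 threshold for unit-norm atoms, can be evaluated with a cosine-similarity matrix. The function below is an illustrative sketch assuming atoms are stored as rows of the dictionary matrices.

```python
import numpy as np

def recovered_ratio(learned, reference, tol=0.01):
    """Fraction of reference atoms matched by some learned atom:
    a match means 1 - |<w_hat, w>| < tol for unit-norm rows."""
    L = learned / np.linalg.norm(learned, axis=1, keepdims=True)
    R = reference / np.linalg.norm(reference, axis=1, keepdims=True)
    sim = np.abs(L @ R.T)                    # |cosine| between all row pairs
    return float(np.mean(sim.max(axis=0) > 1.0 - tol))
```

The absolute value makes the match sign-invariant, which matters because a learned atom is only determined up to sign.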

We consider the problem of distributed dictionary learning, which aims to learn a global dictionary from data geographically distributed on the nodes of a network. Existing works are based on the sparse synthesis model, while this paper is based on the sparse analysis model. Two novel distributed analysis dictionary learning (ADL) algorithms are proposed by adapting the centralized ADL algorithms Analysis SimCO (ASimCO) and Incoherent Analysis SimCO (INASimCO) to distributed settings. In particular, local representation vectors and local dictionaries are introduced, and they can be updated independently on each node by distributing the sparse coding and dictionary update stages of ASimCO. A diffusion strategy is then applied to estimate a global dictionary from the local dictionaries by exchanging local information. Experimental results with synthetic data and for image denoising demonstrate that the proposed distributed ADL algorithms can obtain similar results to the corresponding centralized algorithms.
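The diffusion strategy for combining local dictionaries can be sketched as one averaging step: each node mixes its dictionary with those of its neighbours and renormalizes the atoms. The uniform combination weights and row-atom layout are assumptions of this sketch, not details from the paper.

```python
import numpy as np

def diffuse(local_dicts, adjacency):
    """One diffusion step: each node replaces its analysis dictionary
    by the row-renormalized average over itself and its neighbours."""
    A = adjacency + np.eye(adjacency.shape[0])   # include self-loop
    W = A / A.sum(axis=1, keepdims=True)         # uniform combination weights
    stacked = np.stack(local_dicts)              # shape (nodes, p, m)
    mixed = np.tensordot(W, stacked, axes=1)     # weighted neighbour average
    norms = np.linalg.norm(mixed, axis=2, keepdims=True)
    return list(mixed / norms)                   # unit-norm atoms per node
```

On a fully connected two-node network, one step already drives both nodes to the same (normalized) consensus dictionary.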

... The novel method of projection matrix optimization for CAMB-CS is proposed by taking into account the relative amplified CSRE and the amplified CSRE energy as in Equation (16). Z can be obtained from X by using cosparse coding [87], and E_cs = X − Z is the CSRE matrix. The projection matrix optimization for CAMB-CS in [86] adopted the method in [74] to solve Equation (16) with ℓ1 ...

... The K-SVD algorithm [22] and the algorithm in [36] were used to build the synthesis dictionary Ψ and the operator Ω, respectively, with P = 320000, N = 64, K = 96, S = 4, and C = 64 − 4 = 60, using the training signals X. The backward greedy algorithm [87] ... The SSMB-CS uses the random Gaussian optimization algorithms in [53], [63], and [74], [78] ...
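The backward greedy pursuit cited here [87] greedily grows a co-support: at each step it selects the analysis row least correlated with the current estimate and re-projects the signal onto the orthogonal complement of the selected rows. A minimal sketch, with illustrative names and no claim to match the authors' implementation:

```python
import numpy as np

def backward_greedy(omega, y, cosupport_size):
    """Backward Greedy cosparse coding (sketch).

    omega: (p, n) analysis dictionary; y: length-n observed signal.
    Picks the row with the smallest |<row, x_hat>|, adds it to the
    co-support, and projects y onto the null space of the selected rows.
    """
    x_hat = y.copy()
    cosupport = []
    for _ in range(cosupport_size):
        corr = np.abs(omega @ x_hat)
        corr[cosupport] = np.inf            # do not re-pick selected rows
        j = int(np.argmin(corr))
        cosupport.append(j)
        # Orthogonal projection of y onto null(A), A = selected rows.
        A = omega[cosupport]
        x_hat = y - A.T @ np.linalg.lstsq(A @ A.T, A @ y, rcond=None)[0]
    return x_hat, sorted(cosupport)
```

With omega equal to the identity, the pursuit simply zeroes out the smallest-magnitude entries of y, which is a convenient sanity check.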

... Since noise is the most disturbing effect on images, image denoising is one of the most studied subfields of image processing. Numerous image denoising methods have been proposed in the literature (Aharon et al., 2006; Rubinstein et al., 2013; Eksioglu and Bayir, 2014a; Milanfar, 2013; Colak and Eksioglu, 2019). Patch ordering is one of the remarkable approaches to non-local processing (Ram et al., 2013b). ...

... The problem for the construction of the dictionary is known as dictionary learning. Plenty of methods have been suggested to learn overcomplete dictionaries (Rubinstein et al., 2013;Eksioglu and Bayir, 2014b,a;Aharon et al., 2006;Ravishankar and Bresler, 2013;Yaghoobi et al., 2013). The core aim of dictionary learning is to specifically train atoms which work well for sparse representation of a given data set. ...

We introduce an image denoising algorithm which utilizes a novel online dictionary learning procedure together with patch ordering. The developed algorithm employs both the non-local image processing power of patch ordering and the sequential patch-based update of online dictionary learning. The patch ordering process exploits the similarities between patches of a given image which are extracted from different locations. Joint processing of the ordered set of image patches facilitates the non-local image processing ability of the algorithm. The algorithm starts with the extraction of a maximally overlapped set of patches from the given noisy image. Then, the extracted patches are reordered by using a distance measure, and the 3D ordered patch cube is formed. The ordered patch cube is used sequentially to update an overcomplete dictionary. In each iteration, firstly the present patch is denoised using sparse coding over the current overcomplete dictionary. Secondly, the overcomplete dictionary is updated using the current image patch, and the dictionary is passed to the next iteration. We call this process “on-the-fly denoising” because each patch is individually denoised using an instantaneously updated overcomplete dictionary. Patch ordering together with online dictionary learning ensures that the dictionary is adapted to different neighborhoods of patches in the patch cube. This adaptation of the dictionary to specialized local patch structures in the patch cube promises improved denoising performance when compared to dictionary learning algorithms devoid of such adaptation. Simulation results indicate that the introduced online method offers improved denoising performance in comparison to both online and batch dictionary learning algorithms from the literature while maintaining similar computational complexity.
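The patch extraction and reordering steps described above can be sketched as follows. The greedy nearest-neighbor ordering by Euclidean distance is a simple stand-in; the paper's exact distance measure may differ.

```python
import numpy as np

def extract_patches(image, size):
    """All maximally overlapped size x size patches, one per row."""
    H, W = image.shape
    return np.array([image[i:i + size, j:j + size].ravel()
                     for i in range(H - size + 1)
                     for j in range(W - size + 1)])

def order_patches(patches):
    """Greedy nearest-neighbor ordering: start from the first patch and
    repeatedly append the closest remaining patch, so similar patches
    end up adjacent in the resulting 'patch cube' ordering.
    """
    remaining = list(range(len(patches)))
    order = [remaining.pop(0)]
    while remaining:
        last = patches[order[-1]]
        d = np.linalg.norm(patches[remaining] - last, axis=1)
        order.append(remaining.pop(int(np.argmin(d))))
    return order
```

The returned permutation is what makes subsequent sequential dictionary updates see similar patches consecutively, which is the non-local ingredient of the scheme.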

... K-SVD is a classical dictionary learning algorithm [31][32][33][34][35]. The optimization problem solved by K-SVD, one of the best-known dictionary learning algorithms, is: ...
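The K-SVD update referenced above replaces one atom and its coefficients with the leading singular pair of a restricted residual. A minimal sketch of a single atom update (illustrative, not the reference implementation):

```python
import numpy as np

def ksvd_atom_update(D, X, Y, k):
    """One K-SVD atom update (sketch).

    Updates atom k of dictionary D (atoms in columns) and the matching
    row of the coefficient matrix X so that D @ X better approximates
    the data Y, via a rank-1 SVD of the restricted residual.
    """
    users = np.nonzero(X[k])[0]          # signals that use atom k
    if users.size == 0:
        return D, X
    # Residual without atom k's contribution, restricted to its users.
    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D = D.copy(); X = X.copy()
    D[:, k] = U[:, 0]                    # new unit-norm atom
    X[k, users] = s[0] * Vt[0]           # matching coefficients
    return D, X
```

Restricting the residual to the signals that actually use atom k is what preserves the sparsity pattern of X, which is the key difference from a plain rank-1 fit.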

In view of the fact that noise in the same frequency band as the useful signal in MEMS acceleration sensor observation data cannot be effectively removed by traditional filtering methods, this paper proposes a denoising method for strong earthquake signals based on the theory of sparse representation and compressive sensing. The method separates strong earthquake signals from noise by adopting a fixed dictionary and exploiting sparsity. Furthermore, considering the weakness of fixed-dictionary sparse denoising at high signal-to-noise ratios, a sparse denoising method based on learning an over-complete dictionary is proposed: from the initially given seismic data, an over-complete dictionary is trained to denoise the seismic data. In addition, for the interference waves of non-seismic events, this paper proposes a sparse representation classification approach to remove such non-seismic interference directly. Combining the ideas of noise reduction and non-seismic event elimination yields a standard sparse anti-interference denoising model for earthquake early warning, which innovatively brings sparse representation theory into this field. According to the experimental results, in the case of heavy noise, the denoising model based on sparse representation reaches an average SNR of 8.73 and an average MSE of 29.53, the model based on compressive sensing reaches an average SNR of 7.29 and an average MSE of 41.34, and the model based on a learned dictionary reaches an average SNR of 11.07 and an average MSE of 17.32. These models outperform the traditional FIR filtering method (average SNR −0.73, average MSE 260.37) and the IIR filtering method (average SNR 4.73, average MSE 73.95).
On the other hand, the anti-interference method based on sparse classification proposed in this paper can accurately distinguish non-seismic interference events from natural earthquakes: the classification accuracy reaches 100% on the selected test data set.

... We use two metrics to quantify how close the learned filters are to the handcrafted filters, both from Rubinstein, Peleg, and Elad [244]. The first metric, ...

Signals and systems (S&S) concepts are the theoretical foundation of machine learning and signal processing, cutting-edge fields with real-world applications in many domains. This dissertation combines two projects on S&S in the fields of engineering education research and image reconstruction. Within the field of engineering education research, this dissertation discusses which S&S concepts students understand and what factors—such as motivation, choice of upper-level electives, or use of evidence-based instructional practices like active learning—influence their understanding. This research project involved three phases. The first phase used quantitative methods to measure conceptual understanding (CU) and investigated factors that predict CU of students at the end of their S&S course. This phase found that measures of ability and motivation are significantly predictive of CU. Phase one also served as a pilot project for the following two phases that concentrate on CU of senior undergraduate students. The second phase used think-aloud interviews and a concept inventory to measure CU of S&S. The results show that many seniors understand some topics, such as filtering and time invariance, but struggle with other S&S concepts, such as linearity and convolution. The third phase used interviews and qualitative data analysis methods to investigate what factors impact CU over the course of an undergraduate degree. The results provide recommendations for how instructors and curriculum designers can improve students’ CU of S&S, such as emphasizing the purpose of concepts, using contrasting examples in lectures, translating mathematics, and repeating concepts across multiple courses. The second part of this dissertation applies concepts from S&S to image reconstruction. Image reconstruction is the process of taking input data from one signal space and producing an interpretable image. 
In medical image reconstruction, state-of-the-art methods use advances in machine learning and training datasets to learn parameters that can be used to reconstruct high-quality images with fewer measurements, thus decreasing radiation exposure for patients while providing doctors with high-quality images to properly diagnose and treat many diseases. The image reconstruction project in this dissertation motivates and reviews bilevel methods for learning image reconstruction parameters. Bilevel methods are task-based, so that learned parameters are expected to perform best at reconstructing; are explainable and interpretable, thus improving the likelihood that doctors will trust and adopt them; and allow for different measures of image quality, including traditional mean square error metrics that are easy to use and metrics that more accurately capture human perception. The results demonstrate that parameters learned in a common non-bilevel formulation under-perform handcrafted parameters due to the structure of the learning problem and that bilevel methods help to address this gap.

... Rubinstein R's research reaches the same main conclusions. The original virtual-reality ecological music is mainly original and unique folk music produced in different natural settings, reflecting the life of local residents [3]. Slizovskaia O's many related achievements on “Music Art Teaching”, and the attention to “Soundscape Art”, are closely related to the artistic diversity of ethnic regions [4]. ...

Music education in our country has a long history, but modern teaching started relatively late. In recent years, our country has continuously accelerated the pace of learning in the field of music education. The use of advanced Internet of Things virtual reality has become an important way for the development of music art education in our country. This paper studies the practical effects of music art teaching combined with the aid of Internet of Things virtual reality technology. This article takes Internet of Things virtual reality technology-assisted music art teaching as the research object and analyzes the effects and advantages of virtual reality technology in music art teaching. We searched for topics such as “music” and “VR” through Google to find literature related to music and art, obtained the latest rankings of different types of electronic music on YouTube, and, after audition and BPM screening, downloaded music that fits the classroom practice in this article. Research results show that a teaching system that combines VR and augmented reality technology can shorten teaching time by 65%. The virtual reconstruction of music art using VR technology allows students to visually observe the form of music art. The combination of VR technology and traditional teaching methods significantly increases the initiative, interest, effectiveness, and participation of students in learning and can achieve better teaching results. VR systems are used in music teaching: the DentSim system, Moog Simodont, and dental trainer system can effectively help students master the basic methods of note preparation and actively improve the teaching quality of music professional courses.

... The ADL model has attracted much attention due to its flexibility and interpretability. 21,22 In recent years, researchers have focused on the dictionary pair learning (DPL) model as another mainstream direction, which leverages the merits of the conventional SDL models and flexible ADL models. [23][24][25] In this paper, we focus on exploring the discrimination of SDL models, and some of the novel ideas in the ADL and DPL models can be further applied to our proposed model. ...

During the last decade, graph-regularized dictionary learning (DL) models have attracted considerable attention due to their flexible and discriminative ability in nonlinear pattern classification. However, the conventional graph-regularized methods construct a fixed affinity matrix for nearby samples in high-dimensional data space, which is vulnerable to noisy and redundant sample features. Furthermore, the discrimination of the graph-regularized representation is not fully explored within the supervised classifier learning framework. To remedy these limitations, we propose an adaptive graph-regularized and label-embedded DL model for pattern classification. In particular, the affinity graph construction in low-dimensional representation space and the discriminative sparse representation are learned simultaneously in a unified framework for mutual promotion. More concretely, we iteratively update the sample similarity weight matrix in representation space to enhance the model robustness, and further impose a supervised label-embedding term on the sparse representation to enhance its discriminative capability for classification. A dictionary orthonormal constraint is also considered to eliminate redundant atoms and enhance the model discrimination. An efficient alternating direction solution with guaranteed convergence is proposed for the nonconvex and nonsmooth model. Experimental results on five benchmark datasets verify the effectiveness of the proposed model.

... During the sparse coding stage, the dictionary is fixed and the sparse coefficients of the signal are estimated. During the dictionary update step, the dictionary is updated iteratively, given fixed sparse coefficients, until the approximation to the signal converges [23]. Various types of algorithms have been proposed to estimate the sparse coefficients, e.g., greedy approaches (matching pursuit (MP) and orthogonal matching pursuit (OMP)), thresholding-based approaches (hard and soft thresholding), and homotopy approaches (least angle regression, least absolute shrinkage and selection operator) [24]. ...
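Of the greedy approaches mentioned above, OMP is the most widely used; a minimal sketch (assuming a column-atom dictionary and a fixed sparsity budget) follows:

```python
import numpy as np

def omp(D, y, sparsity):
    """Orthogonal Matching Pursuit (sketch): greedily pick the atom most
    correlated with the residual, then re-fit all selected coefficients
    by least squares -- the re-fit is what distinguishes OMP from MP.
    Assumes sparsity >= 1.
    """
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x
```

With an orthonormal dictionary the method simply keeps the largest-magnitude coefficients, which makes for an easy correctness check.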

A simplified (i.e., single-shot) method is demonstrated to generate a Fourier hologram from multiple two-dimensional (2D) perspective images (PIs) under low-light-level imaging conditions. It is shown that orthographic projection images (OPIs) can be synthesized using PIs and then, after incorporation of the corresponding phase values, a digital hologram can be generated. In this work, a fast dictionary learning (DL) technique, known as the Sequential Generalised K-means (SGK) algorithm, is used to perform Integral Fourier hologram reconstruction from fewer samples. The SGK method transforms the generated Fourier hologram into its sparse form, representing it as a linear combination of basis functions, also known as atoms. These atoms are arranged in the form of a matrix called a dictionary. In this work, the dictionary is updated using an arithmetic average method, while the Orthogonal Matching Pursuit algorithm is adopted to update the sparse coefficients. It is shown that the proposed DL method provides good hologram quality (in terms of peak signal-to-noise ratio) even for cases of ~90% sparsity.


... The sparse coding step is NP-hard, and algorithms that learn synthesis or analysis dictionaries, such as K-SVD [AEB06], Analysis K-SVD [RPE13], and NAAOL [MSRM13], are computationally expensive. The sparsifying transform model offers an advantage in terms of the low computational cost of sparse coding [RB15b]. ...

The recently proposed sparsifying transform models incur low computational cost and have been applied to medical imaging. Meanwhile, deep models with nested network structure reveal great potential for learning features in different layers. In this study, we propose a network-structured sparsifying transform learning approach for X-ray computed tomography (CT), which we refer to as multi-layer clustering-based residual sparsifying transform (MCST) learning. The proposed MCST scheme learns multiple different unitary transforms in each layer by dividing each layer's input into several classes. We apply the MCST model to low-dose CT (LDCT) reconstruction by deploying the learned MCST model into the regularizer in penalized weighted least squares (PWLS) reconstruction. We conducted LDCT reconstruction experiments on XCAT phantom data and Mayo Clinic data and trained the MCST model with 2 (or 3) layers and with 5 clusters in each layer. The learned transforms in the same layer show rich features, while additional information is extracted from representation residuals. Our simulation results demonstrate that PWLS-MCST achieves better image reconstruction quality than the conventional FBP method and PWLS with an edge-preserving (EP) regularizer. It also outperforms recent advanced methods like PWLS with a learned multi-layer residual sparsifying transform prior (MARS) and PWLS with a union of learned transforms (ULTRA), especially for displaying clear edges and preserving subtle details.
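The low sparse-coding cost of the transform model mentioned above comes from a closed-form solution: under an ℓ0 budget, the optimal sparse code is simply a hard-thresholded version of the transformed signal. A minimal sketch (illustrative names):

```python
import numpy as np

def transform_sparse_code(W, x, s):
    """Sparse coding in the transform model (sketch): the exact solution
    of  min_z ||W x - z||_2  s.t.  ||z||_0 <= s  keeps the s largest-
    magnitude entries of W x and zeros the rest. This closed form is why
    transform-model sparse coding is cheap compared with synthesis or
    analysis pursuit.
    """
    wx = W @ x
    z = np.zeros_like(wx)
    keep = np.argsort(np.abs(wx))[-s:]   # indices of s largest entries
    z[keep] = wx[keep]
    return z
```

By contrast, synthesis sparse coding requires an NP-hard pursuit (approximated by OMP and relatives), so this one-line thresholding is a genuine computational win.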

... The sparse vector α indicates which components the target signals contain, corresponding to the atoms d_k in the dictionary D. Therefore, for a qualified dictionary learning model, the learned dictionary D should cope well with the sparsity of α. Based on this idea, alternating optimization is applied to determine the dictionary D in the training stage, and the K-SVD algorithm is used to solve the problem [37]. ...
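The alternating optimization mentioned above can be sketched as a two-stage loop. For brevity this sketch uses simple per-signal thresholding as a stand-in for a proper pursuit, and a closed-form Method-of-Optimal-Directions update in place of K-SVD's per-atom SVD updates; the structure of the loop is the point, not the exact updates.

```python
import numpy as np

def alternating_dl(Y, n_atoms, sparsity, iters=10, seed=0):
    """Alternating dictionary learning loop (sketch).

    Stage 1 (sparse coding): fix D, keep the s largest correlations
    D^T y per signal (a cheap stand-in for OMP).
    Stage 2 (dictionary update): fix the codes, solve for D in closed
    form as in MOD: D = Y X^T (X X^T)^+, then renormalize atoms.
    """
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(iters):
        C = D.T @ Y
        # Per-column threshold at the s-th largest magnitude.
        thresh = -np.sort(-np.abs(C), axis=0)[sparsity - 1]
        X = np.where(np.abs(C) >= thresh, C, 0.0)
        D = Y @ X.T @ np.linalg.pinv(X @ X.T)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, X
```

K-SVD replaces stage 2 with sequential rank-1 updates of each atom together with its coefficients, which usually converges in fewer iterations than the plain MOD step shown here.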

As a pivotal technological foundation for smart home implementation, non-intrusive load monitoring (NILM) is emerging as a widely recognized technology to replace sensor or socket networks for detailed household appliance monitoring. In this paper, a probability-model-framed ensemble method is proposed with the target of robust appliance monitoring. Firstly, the non-intrusive load disaggregation-oriented ensemble architecture is presented. Then, a dictionary learning model is utilized to formulate the individual classifier, while the sparse coding-based approach is capable of providing multiple solutions under a greedy mechanism. Furthermore, a fully probabilistic model is established for the combined classifier, where the candidate solutions are all labelled with probability scores and evaluated via two-stage decision-making. The proposed method is tested on both a low-voltage network simulator platform and field measurement datasets, and the results show that the proposed ensemble method consistently improves the performance of non-intrusive load disaggregation. In addition, the proposed approach shows high flexibility and scalability in classification model selection. By initializing the architecture and approach of ensemble-method-based NILM, this work plays a pioneering role in using ensemble methods to improve the robustness and reliability of non-intrusive appliance monitoring.

... K-SVD uses ℓ2 distortion as a measure of data fidelity. The dictionary learning problem has also been solved from the analysis perspective [43][44][45]. In addition to denoising, dictionary-based techniques have also been applied in inpainting [23][24][25] and classification [26][27][28]. ...

Quick Response (QR) codes are designed for information storage and high-speed reading applications. To store additional information, Two-Level QR (2LQR) codes replace black modules in standard QR codes with specific texture patterns. When the 2LQR code is printed, texture patterns are blurred and their sizes are smaller than 0.5 cm². Recognizing small blurred texture patterns is challenging. In the original 2LQR literature, recognition of texture patterns is based on maximizing the correlation between print-and-scanned texture patterns and the original digital ones. When desktop printers with large pixel extensions and low-resolution capture devices are employed, the recognition accuracy of texture patterns drops greatly. To improve the recognition accuracy in this situation, our work presents a dictionary learning based scheme to recognize printed texture patterns. To the best of our knowledge, it is the first attempt to use dictionary learning to improve the recognition accuracy of printed texture patterns. In our scheme, dictionaries for all kinds of texture patterns are learned from print-and-scanned texture modules in the training stage. These learned dictionaries are then employed to represent each texture module in the testing stage (extraction process) to recognize its texture pattern. Experimental results show that our proposed algorithm significantly reduces the recognition error of small printed texture patterns.
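The recognition step described above (representing a module over each class's learned dictionary and picking the class that reconstructs it best) can be sketched as follows. A plain least-squares fit is used here as a stand-in for the sparse coding in the actual scheme; names are illustrative.

```python
import numpy as np

def classify_by_residual(class_dicts, y):
    """Classify a (texture) module by reconstruction residual: fit y
    over each class's dictionary and return the index of the class
    whose dictionary reconstructs y with the smallest error.
    """
    residuals = []
    for D in class_dicts:
        c, *_ = np.linalg.lstsq(D, y, rcond=None)
        residuals.append(np.linalg.norm(y - D @ c))
    return int(np.argmin(residuals))
```

When the class dictionaries span well-separated subspaces, the residual gap is large and the decision is robust to moderate print-and-scan blur, which is the intuition the scheme relies on.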

... In recent years, the theory and application of sparse and redundant representation have developed continuously [16][17]. Sparse representation has achieved good results in areas such as image restoration [18], denoising [15], super-resolution [19], and face recognition [20]. ...

Halftone images are widely used in printing and scanning equipment, so their preservation and processing are of great significance. However, because of the differing resolutions of display devices, the processing and display of halftone images face great challenges, such as moiré patterns and image blurring. Therefore, inverse halftoning is required to remove the halftone screen. In this paper, we propose a sparse-representation-based inverse halftoning algorithm via learning a clean dictionary, realized in two steps: deconvolution, and sparse optimization in the transform domain to remove the noise. The main contributions of this paper include three aspects: first, we analyze the denoising effects of different training sets and the redundancy of the dictionary; second, we propose an improved sparse-representation-based denoising algorithm that adaptively learns the dictionary, iteratively removing the noise of the training set and upgrading the quality of the dictionary; third, an inverse halftoning algorithm for error-diffusion halftone images is proposed. Finally, we verify that the noise level in the error-diffusion linear model is fixed and related only to the diffusion operator. Experimental results show that the proposed algorithm achieves better PSNR and visual performance than state-of-the-art methods.

... Various methods have been proposed in the literature for the dictionary update to obtain approximate solutions, including the method of optimal directions (MOD) [44], K-singular value decomposition (K-SVD) [45], [46], the stochastic gradient descent method [47], the Lagrange dual method [48], and the ODL method [49]. Among these, MOD, K-SVD, and ODL are commonly used for seismic denoising. ...

In seismic data processing, denoising is one of the important steps to get the earth subsurface layers' information accurately. The dictionary learning (DL) method is one of the prominent methods to denoise the seismic data. In the DL method, there are various parameters involved for denoising such as patch size, dictionary size, number of training patches, choice of threshold, sparsity level, computational cost, and number of iterations for DL. In this work, we study each parameter and its effects on seismic denoising in terms of signal-to-noise ratio and mean square error between the true and denoised seismic data. We examined the performance of the DL method on synthetic and field seismic data for various choices of parameters.
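The two performance measures used in the study above can be written down directly. These are the common definitions of output SNR (in dB, against the clean reference) and MSE; the paper's exact normalization conventions may differ.

```python
import numpy as np

def snr_db(clean, denoised):
    """Output signal-to-noise ratio in dB against the clean reference:
    10 * log10(||clean||^2 / ||clean - denoised||^2)."""
    noise = clean - denoised
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

def mse(clean, denoised):
    """Mean square error between the true and denoised data."""
    return float(np.mean((clean - denoised) ** 2))
```

Reporting both is useful because SNR is scale-relative while MSE is absolute, so the two can rank denoisers differently on data with varying amplitude.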

... The simultaneous codeword optimization (SimCO) method [11] can simultaneously update an arbitrary subset of codewords and the corresponding sparse coefficients by viewing the coefficients as a function of the dictionary. In contrast to the dictionary learning described above, the sparse analysis model analyzes signals by employing an analysis operator or analysis dictionary [12,13]. ...

Underdetermined blind source separation of speech mixtures is a challenging issue in the classical “cocktail party” problem. Recently, attention has turned to dictionary learning to solve this problem. In this paper, we build a novel framework that solves the underdetermined blind separation of speech mixtures as a sparse signal recovery problem using a compressed sensing model. First, to eliminate the influence of additive white Gaussian noise, a wavelet transform with a tunable Q-factor is used as a noise-reduction pre-processing step. Second, to obtain an accurate mixing matrix estimate, a blind identification method is designed by identifying single-source data. Third, to find the best dictionary to represent the training signals, an arbitrary subset of codewords and the corresponding coefficients are updated simultaneously. In the source signal recovery stage, the mixed signals are processed in blocks, and the source components are separated from each block using sparse representation. The whole source signals are then reconstructed by concatenating the separated source components from all the blocks, which reduces the computational complexity. Finally, experimental results on separating underdetermined speech mixtures demonstrate the superiority of the proposed algorithm.

Dictionary learning (DL) methods have been successfully applied to seismic data denoising. However, when the noise is strong, weak signals are not well preserved. Considering the characteristics of seismic data in the spatial-time domain, we propose a dictionary learning method with total variation regularization (DLTV). The DLTV method consists of two steps. The first step learns the dictionary; it exploits the sparsity of the data in the dictionary domain, and the learned dictionary captures data features. The second step restores the data using the augmented Lagrange multiplier method, with total variation regularization preserving weak signals by drawing on the data information around them. An example with seismic data shows that the proposed method retains weak features better than the traditional F-X deconvolution (FX-Decon) method and the data-driven tight frame method (DDTF).
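The total variation term used above penalizes the gradient magnitude so that smooth regions are flattened while edges (and laterally coherent weak events) survive. The following tiny denoiser runs gradient descent on a smoothed isotropic TV objective; it is only a stand-in for the TV ingredient of DLTV, which couples TV with a learned dictionary and solves the full problem with augmented Lagrange multipliers.

```python
import numpy as np

def tv_denoise(y, lam=0.1, step=0.1, iters=200, eps=1e-2):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum sqrt(dx^2+dy^2+eps),
    a smoothed isotropic total-variation objective (sketch)."""
    x = y.copy()
    for _ in range(iters):
        # Forward differences with replicated (Neumann) boundary.
        dx = np.diff(x, axis=0, append=x[-1:])
        dy = np.diff(x, axis=1, append=x[:, -1:])
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
        # Adjoint of the forward difference applied to the scaled field.
        grad_tv = (-np.diff(dx / mag, axis=0, prepend=0)
                   - np.diff(dy / mag, axis=1, prepend=0))
        x = x - step * ((x - y) + lam * grad_tv)
    return x
```

The smoothing constant eps keeps the objective differentiable; as eps shrinks toward zero the penalty approaches true TV but the step size must shrink with it to keep the descent stable.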

Electromagnetic waves at millimetre-wave (mmW) frequencies have found applications in a variety of imaging systems, from security screening to defence and automotive radars, with the research and development of mmW imaging systems gaining interest in recent years. Despite their significant advantages, mmW imaging systems suffer from poor resolution as compared to higher frequency reconstructions, such as optical images. To improve the resolution of mmW images, various super-resolution (SR) techniques have been introduced. One such technique is the use of machine learning algorithms in the signal processing layer of the imaging system without altering any of the system’s parameters. This paper focuses on the use of a convolutional neural network (CNN) architecture to achieve SR when applied to three-dimensional mmW input images. To exploit the phase information content of the input images along with the magnitude, a complex-valued CNN is designed that can accommodate complex-valued data. To simplify the learning process, the resolution difference between the input and output images is divided into smaller parts by using sub-networks in the CNN architecture. The trained model is tested on simulated as well as experimental targets. The average mean-squared-error score and structural similarity index obtained on a test dataset of 460 samples are 0.0127 and 0.9225, respectively. It can be inferred that the model has the capability to improve the resolution of input mmW images to a high degree of fidelity, hence paving the way for an end-to-end SR imaging system.

Seismic denoising is an essential step in seismic data processing. Conventionally, dictionary learning methods for seismic denoising assume the representation coefficients to be sparse and the dictionary to be normalized or a tight frame. Current dictionary learning methods update the dictionary and the coefficients in an alternating iterative process; however, the dictionary obtained often needs to be recalculated for each input data set. Moreover, the performance of dictionary learning for seismic noise removal depends on the parameter selection and the prior constraints on the dictionary and representation coefficients. Recently, deep learning has demonstrated promising performance in data prediction and classification. Strictly following the architecture of dictionary learning algorithms, we propose a novel and interpretable deep unfolding dictionary learning method for seismic denoising that unfolds the iterative dictionary learning algorithm into a deep neural network. The proposed architecture contains two main parts: the first updates the dictionary and representation coefficients using least-squares inversion, and the second applies a deep neural network to learn the prior representations of the dictionary and the representation coefficients, respectively. Numerical synthetic and field examples show the effectiveness of the proposed method. More importantly, the proposed method obtains the dictionary for each seismic data set adaptively and is suitable for seismic data with different noise levels.

Discriminative dictionary learning (DDL) aims to address pattern classification problems via learning dictionaries from training samples. Dictionary pair learning (DPL) based DDL has shown superiority as compared with most existing algorithms which only learn synthesis dictionaries or analysis dictionaries. However, in the original DPL algorithm, the discrimination capability is only promoted via the reconstruction error and the structures of the learned dictionaries, while the discrimination of coding coefficients is not considered in the process of dictionary learning. To address this issue, we propose a new DDL algorithm by introducing an additional discriminative term associated with coding coefficients. Specifically, a support vector machine (SVM) based term is employed to enhance the discrimination of coding coefficients. In this model, a structured dictionary pair and SVM classifiers are jointly learned, and an optimization method is developed to address the formulated optimization problem. A classification scheme based on both the reconstruction error and SVMs is also proposed. Simulation results on several widely used databases demonstrate that the proposed method can achieve competitive performance as compared with some state-of-the-art DDL algorithms.

Segmentation of brain MR images can help clinicians extract regions of interest. At present, there are many studies on brain MR image segmentation, especially using deep learning methods. However, deep learning methods require very large datasets. With the popularity of deep learning, dictionary learning has been revived. As a traditional machine learning segmentation method, dictionary learning requires a smaller sample size, has high segmentation efficiency, and can describe images well. This paper proposes an MR image segmentation method based on posterior probability. The process of medical image acquisition is accompanied by noise, influenced by factors such as the equipment and environment. We use the powerful classification ability of the naive Bayes model to classify the noise data and sample label data in the images, and then provide the classification results to the dictionary learning sparse matrix for sparse representation. We achieve noise reduction through these mathematical methods and improve the signal-to-noise ratio of the medical image data. After obtaining the denoised image, we adopt region growing combined with the softmax function to classify the pixels. The results of this paper provide technical support for subsequent image segmentation, detection, and other computer-assisted clinical diagnosis, so as to improve the efficiency of automated clinical diagnosis.

Purpose: The technique of fusing or integrating medical images collected from a single modality or from multiple modalities is known as medical image fusion. This is done to improve the quality of the images and combine information from several medical images. The whole procedure aids medical practitioners in gaining correct information from single images. Image fusion is one of the fastest-growing research topics in the medical imaging field. Sparse modeling is a popular signal representation technique used for image fusion with different dictionary learning approaches. We propose a medical image fusion with sparse representation (SR) and block total least-square (BLOTLESS) update dictionary learning. Approach: The domain of dictionary learning is the most significant research domain related to SR. An efficient dictionary increases the effectiveness of sparse modeling. Because SR remains an active research area, the medical image fusion process is carried out with a modified image fusion framework using the recently developed BLOTLESS update dictionary learning. Results: The experimental results are compared for the image fusion process using other state-of-the-art dictionary learning algorithms, such as simultaneous codeword optimization, method of optimal directions, and K-singular value decomposition. The effectiveness of the algorithm is evaluated based on image fusion quantitative parameters. Results show that the BLOTLESS update dictionary algorithm is a promising modification for sparse-based image fusion, with applicability to the fusion of images related to different diseases. Conclusions: The experiments and results show that the dictionary learning algorithm plays an important role in the sparse-based image fusion general framework. The fusion results also show that the proposed improved image fusion framework for medical images is promising compared with frameworks using other dictionary learning algorithms. As an application, it is also used as a tool for the fusion of different modalities of images related to brain tumor and glioma.

As a powerful statistical signal modeling technique, sparse representation has been widely used in various image restoration (IR) applications. The sparsity-based methods have achieved leading performance in the past few decades. However, in recent years they have been surpassed by other methods, especially the recent deep learning based methods. In this paper, we address the question of whether sparse representation can be competitive again. The way we answer this question is to redesign it with a deep architecture. To be specific, we propose an end-to-end deep architecture that follows the process of sparse representation based IR. In particular, we learn a sparse convolutional dictionary to replace the traditional dictionary, and a convolutional neural network (CNN) denoising prior to replace the image prior. Through end-to-end training, the parameters in the convolutional dictionary and the CNN denoiser can be jointly optimized. Experimental results on several representative IR tasks, including image denoising, deblurring and super-resolution, demonstrate that the proposed deep network can achieve superior performance against state-of-the-art model-based and learning-based methods.

For seismic random noise attenuation, deep learning has attracted much attention and achieved promising performance. However, compared with conventional methods, the denoising performance of supervised learning-based methods heavily depends on massive training samples with high-quality labeled data, which makes their generalization capabilities limited. Even though deep neural networks (DNNs) usually outperform the conventional denoising methods, their performance is not guaranteed since neural networks still lack good mathematical interpretability at present. To alleviate the dependency on labeled data and explore insights into the denoising system, we proposed an unsupervised denoising method based on model-based deep learning, which combined domain knowledge and a data-driven method. We designed a network based on the modified iterative soft threshold algorithm (ISTA), which omitted the soft threshold to alleviate uncertainties introduced by empirically selected thresholds. In this network, we set the dictionary and code as trainable parameters. A loss function with a smooth penalty was designed to ensure that the network training can be implemented in an unsupervised manner. In the proposed method, we set the denoised result by $f-x$ deconvolution as the input for our network, and the further denoised data can be obtained after each epoch of the training, which means that our method does not need the testing procedure. Experiments on synthetic and field seismic data demonstrate that our method exhibits competitive performance compared to the conventional, supervised, and unsupervised methods, including $f-x$ deconvolution, curvelet, the Denoising Convolutional Neural Network (DnCNN), and the integration of neural network and Block-matching and 3-D filtering method (NN + BM3D).
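For reference, the vanilla ISTA iteration that the modified network above builds on can be sketched in a few lines of NumPy. The Lasso formulation, dictionary sizes, and step size below are illustrative assumptions, not the authors' exact network:

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding (the proximal operator of t*||.||_1)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(D, y, lam, n_iter=500):
    """Vanilla ISTA for the Lasso: min_x 0.5*||y - Dx||^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((30, 60))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]     # a 3-sparse code
y = D @ x_true                             # noiseless observation
x_hat = ista(D, y, lam=0.05)
```

The unsupervised network in the paper replaces the fixed soft threshold with learned parameters; the skeleton of the iteration is unchanged.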

The supervised dictionary learning methods have made considerable achievements in the field of pattern recognition. In order to make the learned coefficients have intraclass similarity and interclass incoherence, many dictionary learning methods have been proposed using different structured constraints. However, these constraints often have the disadvantages of being complex, time-consuming, and poorly interpretable. More importantly, existing dictionary learning often ignores the inherent structure of the label matrix. In this paper, we propose a dictionary pair learning method based on a structured classifier and perform classifier learning and structured coefficient learning simultaneously. Our main idea is to use the extended label matrix and invertibility constraints on the classification transformation matrix to exploit the structure of the label matrix and impose structured constraints on the coefficients. Moreover, the $\ell_{2,1}$-norm is added to force the analysis dictionary to focus on more important features. Experimental results on multiple databases prove that our proposed method has better classification performance than several state-of-the-art dictionary learning methods.

Deep learning (DL) has recently achieved state-of-the-art results in pansharpening. However, most of the existing pansharpening neural networks are purely data-driven, without taking account of the characteristics of the pansharpening task. To address this issue, we propose a novel deep unfolding network for pansharpening that combines the insight of the variational optimization (VO) model and the capability of deep neural networks (DNNs). We first develop a panchromatic side sparsity (PASS) prior-based VO model for pansharpened image reconstruction, which is formulated as $\ell _{1}-\ell _{1}$ minimization. In particular, the PASS prior is defined using transform sparsity, which can alleviate the influence of the irregular outliers between the multispectral (MS) and panchromatic (PAN) images. The iterations of the half-quadratic splitting algorithm for solving the $\ell _{1}-\ell _{1}$ minimization are then deeply unfolded into a DNN, referred to as PASS-Net. To capture the nonlinear relationship between MS and PAN images, the linear transforms used in the PASS prior are extended into subnetworks in PASS-Net. Moreover, a pair of learnable downsampling and upsampling modules are designed to realize the downsampling and upsampling operations, which improves flexibility. The experimental results on different satellite datasets confirm that PASS-Net is superior to some representative traditional methods and state-of-the-art DL-based methods.

This paper proposes an improved double-dictionary K-singular value decomposition (IDDK-SVD) algorithm for the compound fault diagnosis of rolling element bearings under complex industrial environments. In the framework, a double dictionary is first designed for respectively identifying and distinguishing compound-fault features. In addition, an atom selection strategy based on fault information is constructed by combining the Gini index with envelope periodic modulation intensity, which can simultaneously take periodicity and impulsivity into consideration to find the set of atoms most related to the fault for the update of the double dictionary. Finally, an estimation method for calculating the residual error in the sparse coding stage is also presented to achieve a better result. Benefitting from these improvements, the proposed IDDK-SVD is effective in compound-fault diagnosis, which is verified by both simulation signals and real signals collected from a locomotive bearing test rig.

Purpose:
Signal models based on sparse representations have received considerable attention in recent years. On the other hand, deep models consisting of a cascade of functional layers, commonly known as deep neural networks, have been highly successful for the task of object classification and have been recently introduced to image reconstruction. In this work, we develop a new image reconstruction approach based on a novel multilayer model learned in an unsupervised manner by combining both sparse representations and deep models. The proposed framework extends the classical sparsifying transform model for images to a Multilayer residual sparsifying transform (MARS) model, wherein the transform domain data are jointly sparsified over layers. We investigate the application of MARS models learned from limited regular-dose images for low-dose CT reconstruction using penalized weighted least squares (PWLS) optimization.
Methods:
We propose new formulations for multilayer transform learning and image reconstruction. We derive an efficient block coordinate descent algorithm to learn the transforms across layers, in an unsupervised manner from limited regular-dose images. The learned model is then incorporated into the low-dose image reconstruction phase.
Results:
Low-dose CT experimental results with both the XCAT phantom and Mayo Clinic data show that the MARS model outperforms conventional methods such as filtered back-projection and PWLS methods based on the edge-preserving (EP) regularizer in terms of two numerical metrics (RMSE and SSIM) and noise suppression. Compared with the single-layer learned transform (ST) model, the MARS model performs better in maintaining some subtle details.
Conclusions:
This work presents a novel data-driven regularization framework for CT image reconstruction that exploits learned multilayer or cascaded residual sparsifying transforms. The image model is learned in an unsupervised manner from limited images. Our experimental results demonstrate the promising performance of the proposed multilayer scheme over single-layer learned sparsifying transforms. Learned MARS models also offer better image quality than typical nonadaptive PWLS methods.
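A minimal sketch of the sparsifying-transform idea underlying the MARS model, with a second layer that sparsifies the transform-domain residual. The operator sizes and sparsity levels are illustrative assumptions, and the learned transforms are replaced here by random matrices:

```python
import numpy as np

def keep_top_s(z, s):
    """Hard-threshold z: keep its s largest-magnitude entries, zero the rest."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-s:]
    out[idx] = z[idx]
    return out

rng = np.random.default_rng(0)
W1 = rng.standard_normal((64, 64))       # layer-1 sparsifying transform (stand-in)
W2 = rng.standard_normal((64, 64))       # layer-2 transform for the residual
x = rng.standard_normal(64)              # a vectorized image patch

z1 = keep_top_s(W1 @ x, s=10)            # layer-1 sparse code
r1 = W1 @ x - z1                         # transform-domain residual
z2 = keep_top_s(W2 @ r1, s=10)           # the residual is sparsified in turn
```

In the actual method the transforms are learned by block coordinate descent and the layered sparsification is used as a regularizer inside PWLS reconstruction; the two thresholding steps above show only the forward model.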

Dictionary learning has been widely applied in the field of pattern classification. Although many analysis-synthesis dictionary pair learning methods have been proposed in recent years, most of them ignore the influence of noises and training their model is usually time-consuming. To overcome these issues, we propose an efficient and robust discriminant analysis-synthesis dictionary pair learning (ERDDPL) method for pattern classification, which can efficiently learn a structured analysis-synthesis dictionary pair with powerful discrimination and representation capabilities. Specifically, we design a coding coefficient discriminant term to ensure that the coding coefficient matrix has an approximate block diagonal structure, which can enhance the discrimination capability of the structured analysis dictionary. To weaken the influence of noises on the structured synthesis dictionary, we impose a low-rank constraint on each synthesis sub-dictionary. Besides, we develop a fast and efficient iterative algorithm to solve the optimization problem of ERDDPL. Extensive experiments on five image datasets sufficiently verify that our method has higher classification accuracy and efficiency than the state-of-the-art dictionary learning methods.

Dictionary learning is one of the classical data-driven approaches to linear feature extraction, and it finds wide applications in image recovery and classification, audio processing, biomedical signal processing, and data fusion. As its natural extension to multidimensional data, tensor dictionary learning can extract multilinear features. The optimization of tensor dictionary learning can generally be divided into sparse tensor coding and tensor dictionary updating. Taking advantage of various tensor decomposition forms, such as the Tucker form, the CP form, and the tensor singular value decomposition form, tensor dictionary learning methods admit different kinds of sparse tensor dictionaries. These dictionaries can be either synthesis dictionaries based on sparse tensor representation or analysis dictionaries based on cosparse tensor representation. To reduce the computational cost of tensor dictionary learning on large-scale data, an online learning strategy is briefly discussed. Generally, tensor dictionary learning is more efficient at handling multidimensional data than its matrix counterparts. Experimental results on hyperspectral image denoising and on data fusion of hyperspectral and multispectral images demonstrate its outstanding performance in applications.

Rolling bearings are an essential core component in the power transmission systems of rotating machinery. Many previous dictionary learning (DL) methods have proven to be powerful for rolling bearing fault diagnosis. Existing diagnostic methods based on DL usually involve analyzing training and test data drawn from the same distribution. However, an unbalanced data distribution is a common phenomenon in practice. In addition, the signals in the source and target domains are often mixed with noise and irrelevant information in complex environments, which greatly reduces the performance of DL. Therefore, a new DL method based on a mixed noise model for transfer dictionary learning (TDL-MN) is presented. Specifically, the TDL-MN model is constructed to overcome noise and irrelevant signal interference in the DL process under variable conditions. Moreover, rolling bearing health status is diagnosed through sparse representation classification by calculating the target domain samples corresponding to the redundancy error in the sparse representation vector. The results for two cases verify the ability of the TDL-MN method to accurately identify bearing fault types under complex and variable working conditions. Comparisons with other methods verify the applicability and superiority of the proposed method.

The coronavirus disease 2019 (COVID-19) is a fast transmitting virus spreading throughout the world and causing a pandemic. Early detection of the disease is crucial in preventing the rapid propagation of the virus. Although Computed Tomography (CT) technology is not considered to be a reliable first-line diagnostic tool, it does have the potential to detect the disease. While several high performing deep learning networks have been proposed for the automated detection of the virus using CT images, deep networks lack the explainability, clearness, and simplicity of other machine learning methods. Sparse representation is an effective tool in image processing tasks with an efficient algorithm for implementation. In addition, the output sparse domain can be easily mapped to the original input signal domain, thus the features provide information about the signal in the original domain. This work utilizes two sparse coding algorithms, frozen dictionary learning and label-consistent K-singular value decomposition (LC-KSVD), to help classify COVID-19 CT lung images. A framework for image sparse coding, dictionary learning, and classifier learning is proposed, and an accuracy of 89% is achieved on the cleaned CC-CCII CT lung image dataset.

Recently, a cosparse analysis model was introduced as an alternative to the standard sparse synthesis model. This model was shown to yield uniqueness guarantees in the context of linear inverse problems, and a new reconstruction algorithm was provided, showing improved performance compared to analysis l1 optimization. In this work we pursue the parallel between the two models and propose a new family of algorithms mimicking the family of Iterative Hard Thresholding algorithms, but for the cosparse analysis model. We provide performance guarantees for algorithms from this family under a Restricted Isometry Property adapted to the context of analysis models, and we demonstrate the performance of the algorithms on simulations.

This paper describes new algorithms and related software for total variation (TV) image reconstruction, more specifically: denoising, inpainting, and deblurring. The algorithms are based on one of Nesterov’s first-order methods, tailored to the image processing applications in such a way that, except for the mandatory regularization parameter, the user need not specify any parameters in the algorithms. The software is written in C with an interface to Matlab (version 7.5 or later), and we demonstrate its performance and use with examples.

Exploiting a priori known structural information lies at the core of many image reconstruction methods that can be stated as inverse problems. The synthesis model, which assumes that images can be decomposed into a linear combination of very few atoms of some dictionary, is now a well established tool for the design of image reconstruction algorithms. An interesting alternative is the analysis model, where the signal is multiplied by an analysis operator and the outcome is assumed to be sparse. This approach has only recently gained increasing interest. The quality of reconstruction methods based on an analysis model severely depends on the right choice of the suitable operator. In this work, we present an algorithm for learning an analysis operator from training images. Our method is based on $\ell_p$-norm minimization on the set of full rank matrices with normalized columns. We carefully introduce the employed conjugate gradient method on manifolds, and explain the underlying geometry of the constraints. Moreover, we compare our approach to state-of-the-art methods for image denoising, inpainting, and single image super-resolution. Our numerical results show competitive performance of our general approach in all presented applications compared to the specialized state-of-the-art techniques.

We develop a framework for learning generic, expressive image priors that capture the statistics of natural scenes and can be used for a variety of machine vision tasks. The approach provides a practical method for learning high-order Markov random field (MRF) models with potential functions that extend over large pixel neighborhoods. These clique potentials are modeled using the Product-of-Experts framework that uses non-linear functions of many linear filter responses. In contrast to previous MRF approaches, all parameters, including the linear filters themselves, are learned from training data. We demonstrate the capabilities of this Field-of-Experts model with two example applications, image denoising and image inpainting, which are implemented using a simple, approximate inference scheme. While the model is trained on a generic image database and is not tuned toward a specific application, we obtain results that compete with specialized techniques.

Sparse and redundant representation modeling of data assumes an ability to describe signals as linear combinations of a few atoms from a pre-specified dictionary. As such, the choice of the dictionary that sparsifies the signals is crucial for the success of this model. In general, the choice of a proper dictionary can be done using one of two ways: i) building a sparsifying dictionary based on a mathematical model of the data, or ii) learning a dictionary to perform best on a training set. In this paper we describe the evolution of these two paradigms. As manifestations of the first approach, we cover topics such as wavelets, wavelet packets, contourlets, and curvelets, all aiming to exploit 1-D and 2-D mathematical models for constructing effective dictionaries for signals and images. Dictionary learning takes a different route, attaching the dictionary to a set of examples it is supposed to serve. From the seminal work of Field and Olshausen, through the MOD, the K-SVD, the Generalized PCA and others, this paper surveys the various options such training has to offer, up to the most recent contributions and structures.

The co-sparse analysis model for signals assumes that the signal of interest can be multiplied by an analysis dictionary \Omega, leading to a sparse outcome. This model stands as an interesting alternative to the more classical synthesis based sparse representation model. In this work we propose a theoretical study of the performance guarantee of the thresholding algorithm for the pursuit problem in the presence of noise. Our analysis reveals two significant properties of \Omega, which govern the pursuit performance: the first is the degree of linear dependencies between sets of rows in \Omega, depicted by the co-sparsity level. The second property, termed the Restricted Orthogonal Projection Property (ROPP), is the level of independence between such dependent sets and other rows in \Omega. We show how these dictionary properties are meaningful and useful, both in the theoretical bounds derived, and in a series of experiments that are shown to align well with the theoretical prediction.
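The thresholding pursuit analyzed above can be sketched directly: estimate the cosupport from the smallest analysis coefficients of the noisy signal, then project onto the corresponding null space. The operator, dimensions, and noise level below are illustrative assumptions:

```python
import numpy as np

def analysis_thresholding(Omega, y, l):
    """One-shot thresholding pursuit for the cosparse analysis model:
    take the l rows of Omega with smallest |Omega y| as the estimated
    cosupport, then project y onto their common null space."""
    cosupport = np.argsort(np.abs(Omega @ y))[:l]
    Omega_L = Omega[cosupport]
    x_hat = y - np.linalg.pinv(Omega_L) @ (Omega_L @ y)
    return x_hat, np.sort(cosupport)

rng = np.random.default_rng(1)
Omega = rng.standard_normal((40, 20))            # analysis operator (p=40, d=20)
L = np.arange(10)                                # ground-truth cosupport, l = 10
null_basis = np.linalg.svd(Omega[L])[2][10:].T   # basis of null(Omega[L])
x = null_basis @ rng.standard_normal(10)         # exactly 10-cosparse signal
y = x + 1e-6 * rng.standard_normal(20)           # lightly noisy observation
x_hat, cosup = analysis_thresholding(Omega, y, l=10)
```

The paper's bounds quantify when this one-shot cosupport estimate succeeds as a function of the co-sparsity level and the ROPP of \Omega; the sketch above only demonstrates the mechanics at very low noise.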

We develop a framework for learning generic, expressive image priors that capture the statistics of natural scenes and can be used for a variety of machine vision tasks. The approach extends traditional Markov random field (MRF) models by learning potential functions over extended pixel neighborhoods. Field potentials are modeled using a Products-of-Experts framework that exploits nonlinear functions of many linear filter responses. In contrast to previous MRF approaches all parameters, including the linear filters themselves, are learned from training data. We demonstrate the capabilities of this Field of Experts model with two example applications, image denoising and image inpainting, which are implemented using a simple, approximate inference scheme. While the model is trained on a generic image database and is not tuned toward a specific application, we obtain results that compete with and even outperform specialized techniques.

A full-rank matrix $A \in \mathbb{R}^{n \times m}$, with $n < m$, generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena; in particular, the existence of easily-verifiable conditions under which optimally-sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable. There is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems energize research on such signal and image processing problems, to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical results on sparse modeling of signals and images, and recent applications in inverse problems and compression in image processing. This work lies at the intersection between signal processing and applied mathematics, and arose initially from the wavelets/harmonic analysis research communities. The aim of this paper is to introduce a few key notions and applications connected to sparsity, targeting newcomers interested in either the mathematical aspects of this area or the applications. Keywords: Underdetermined, Linear system of equations, Redundant, Overcomplete, Sparse representation

This paper investigates the theoretical guarantees of L1-analysis regularization when solving linear inverse problems. Most previous works in the literature have mainly focused on the sparse synthesis prior, where the sparsity is measured as the L1 norm of the coefficients that synthesize the signal from a given dictionary. In contrast, the more general analysis regularization minimizes the L1 norm of the correlations between the signal and the atoms in the dictionary, where these correlations define the analysis support. The corresponding variational problem encompasses several well-known regularizations such as the discrete total variation and the Fused Lasso. Our main contributions consist in deriving sufficient conditions that guarantee exact or partial analysis support recovery of the true signal in the presence of noise. More precisely, we give a sufficient condition to ensure that a signal is the unique solution of the L1-analysis regularization in the noiseless case. The same condition also guarantees exact analysis support recovery and L2-robustness of the L1-analysis minimizer with respect to a sufficiently small noise in the measurements. This condition turns out to be sharp for the robustness of the analysis support. To show partial support recovery and L2-robustness to an arbitrary bounded noise, we introduce a stronger sufficient condition. When specialized to the L1-synthesis regularization, our results recover some corresponding recovery and robustness guarantees previously known in the literature. From this perspective, our work is a generalization of these results. We finally illustrate these theoretical findings on several examples to study the robustness of the 1-D total variation and Fused Lasso regularizations.

In the past decade there has been a great interest in a synthesis-based model for signals, based on sparse and redundant representations. Such a model assumes that the signal of interest can be composed as a linear combination of a few columns from a given matrix (the dictionary). An alternative analysis-based model can be envisioned, where an analysis operator multiplies the signal, leading to a cosparse outcome. In this paper, we consider this analysis model in the context of a generic missing data problem (e.g., compressed sensing, inpainting, source separation, etc.). Our work proposes a uniqueness result for the solution of this problem, based on properties of the analysis operator and the measurement matrix. This paper also considers two pursuit algorithms for solving the missing data problem, an L1-based method and a new greedy method. Our simulations demonstrate the appeal of the analysis model, and the success of the pursuit techniques presented.

We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2-D image fragments (e.g., blocks) into 3-D data arrays which we call "groups." Collaborative filtering is a special procedure developed to deal with these 3-D groups. We realize it using the three successive steps: 3-D transformation of a group, shrinkage of the transform spectrum, and inverse 3-D transformation. The result is a 3-D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.

A frame design technique for use with vector selection algorithms, for example matching pursuits (MP), is presented. The design algorithm is iterative and requires a training set of signal vectors. The algorithm, called the method of optimal directions (MOD), is an improvement of the algorithm presented by Engan, Aase and Husoy (Proc. ICASSP '98, Seattle, USA, pp. 1817-1820, 1998). The MOD is applied to speech and electrocardiogram (ECG) signals, and the designed frames are tested on signals outside the training sets. Experiments demonstrate that the approximation capabilities, in terms of mean squared error (MSE), of the optimized frames are significantly better than those obtained using frames designed by the algorithm of Engan et al. Experiments show a typical reduction in MSE of 20-50%.
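The MOD dictionary update itself is a single least-squares fit. A minimal NumPy sketch, where oracle sparse codes stand in for the vector-selection step (an illustrative assumption; in practice the codes come from MP or a similar pursuit):

```python
import numpy as np

def mod_update(Y, W, eps=1e-10):
    """MOD dictionary update: the least-squares fit D = Y W^T (W W^T)^-1,
    followed by normalization of the atoms (columns)."""
    D = Y @ W.T @ np.linalg.inv(W @ W.T + eps * np.eye(W.shape[0]))
    return D / np.linalg.norm(D, axis=0)

rng = np.random.default_rng(2)
D_true = rng.standard_normal((20, 40))
D_true /= np.linalg.norm(D_true, axis=0)
# Oracle sparse codes (~5% density) stand in for the pursuit step.
W = np.where(rng.random((40, 500)) < 0.05, rng.standard_normal((40, 500)), 0.0)
Y = D_true @ W                            # training signals
D = mod_update(Y, W)                      # one MOD step fits the frame exactly
```

With the true codes the single least-squares step recovers the generating frame; in the full MOD algorithm this update alternates with a pursuit that re-estimates W.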

In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method-the K-SVD algorithm-generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data.
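The characteristic K-SVD step, updating one atom together with its row of coefficients via a rank-1 SVD of the restricted residual, can be sketched as follows. The random data and 10% code density are illustrative assumptions:

```python
import numpy as np

def ksvd_atom_update(Y, D, X, k):
    """K-SVD update of atom k and its coefficient row: best rank-1 fit
    (via SVD) of the residual restricted to the signals using atom k."""
    users = np.flatnonzero(X[k])
    if users.size == 0:
        return D, X
    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]
    X[k, users] = s[0] * Vt[0]
    return D, X

rng = np.random.default_rng(3)
D = rng.standard_normal((20, 30))
D /= np.linalg.norm(D, axis=0)
X = np.where(rng.random((30, 100)) < 0.1, rng.standard_normal((30, 100)), 0.0)
Y = D @ X + 0.1 * rng.standard_normal((20, 100))  # noisy training signals
err_before = np.linalg.norm(Y - D @ X)
for k in range(D.shape[1]):                       # one sweep of atom updates
    D, X = ksvd_atom_update(Y, D, X, k)
err_after = np.linalg.norm(Y - D @ X)
```

Updating only the coefficients of the signals that already use atom k is what preserves the sparsity pattern while guaranteeing that each update cannot increase the representation error.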

The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. The authors derive a signal energy distribution in the time-frequency plane which, unlike Wigner and Cohen class distributions, does not include interference terms. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. The authors compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).
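The greedy selection in this abstract amounts to repeatedly subtracting the dictionary atom most correlated with the current residual. A minimal numpy sketch, assuming unit-norm atoms (the function name and dimensions are hypothetical):

```python
import numpy as np

def matching_pursuit(D, y, n_steps):
    """Toy matching pursuit: at each step, subtract the projection of the
    residual onto its best-matching atom. Assumes columns of D have unit norm."""
    residual = y.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_steps):
        j = int(np.argmax(np.abs(D.T @ residual)))  # atom most correlated with residual
        c = D[:, j] @ residual                       # projection coefficient
        coeffs[j] += c
        residual -= c * D[:, j]
    return coeffs, residual
```

By construction, y is exactly the sum of the selected-atom expansion and the residual at every step, and the residual energy is nonincreasing.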

The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries: stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
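The ℓ1 principle behind BP becomes a linear program once x is split into its positive and negative parts, which is what makes LP machinery applicable in the first place. A small sketch using scipy's linprog (the helper name is illustrative, and this is the generic LP reformulation, not the paper's barrier/conjugate-gradient solver):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(D, y):
    """Solve min ||x||_1 subject to D x = y as a linear program.

    Split x = u - v with u, v >= 0, so that ||x||_1 = sum(u) + sum(v)
    at the optimum; the equality constraint becomes [D, -D] [u; v] = y."""
    n = D.shape[1]
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([D, -D]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v
```

Any feasible decomposition upper-bounds the optimum, so the returned x is at least as ℓ1-sparse as whatever coefficients generated y.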

This paper investigates analysis operator learning for the recently introduced cosparse signal model, which is a natural analysis complement to the more traditional sparse signal model. Previous work on such analysis operator learning has relied on access to a set of clean training samples. Here we introduce a new learning framework which can use training data that is corrupted by noise and/or is only approximately cosparse. The new model assumes that a p-cosparse signal exists in an ε-neighborhood of each data point. The operator is assumed to be a uniformly normalized tight frame (UNTF), which excludes certain trivial operators. In this setting, an alternating optimization algorithm is introduced to learn a suitable analysis operator.

One of the fundamental assumptions in traditional sampling theorems is that the signals to be sampled come from a single vector space (e.g., bandlimited functions). However, in many cases of practical interest the sampled signals actually live in a union of subspaces. Examples include piecewise polynomials, sparse representations, nonuniform splines, signals with unknown spectral support, overlapping echoes with unknown delay and amplitude, and so on. For these signals, traditional sampling schemes based on the single subspace assumption can be either inapplicable or highly inefficient. In this paper, we study a general sampling framework where sampled signals come from a known union of subspaces and the sampling operator is linear. Geometrically, the sampling operator can be viewed as projecting sampled signals into a lower dimensional space, while still preserving all the information. We derive necessary and sufficient conditions for invertible and stable sampling operators in this framework and show that these conditions are applicable in many cases. Furthermore, we find the minimum sampling requirements for several classes of signals, which indicates the power of the framework. The results in this paper can serve as a guideline for designing new algorithms for various applications in signal processing and inverse problems.

Recently, a cosparse analysis model was introduced as an alternative to the standard sparse synthesis model. This model was shown to yield uniqueness guarantees in the context of linear inverse problems, and a new reconstruction algorithm was provided, showing improved performance compared to analysis l1 optimization. In this work we pursue the parallel between the two models and propose a new family of algorithms mimicking the family of Iterative Hard Thresholding algorithms, but for the cosparse analysis model. We provide performance guarantees for algorithms from this family under a Restricted Isometry Property adapted to the context of analysis models, and we demonstrate the performance of the algorithms on simulations.

Some high-dimensional data sets can be modelled by assuming that there are many different linear constraints, each of which is Frequently Approximately Satisfied (FAS) by the data. The probability of a data vector under the model is then proportional to the product of the probabilities of its constraint violations. We describe three methods of learning products of constraints using a heavy-tailed probability distribution for the violations.

The concept of prior probability for signals plays a key role in the successful solution of many inverse problems. Much of the literature on this topic can be divided between analysis-based and synthesis-based priors. Analysis-based priors assign probability to a signal through various forward measurements of it, while synthesis-based priors seek a reconstruction of the signal as a combination of atom signals. The algebraic similarity between the two suggests that they could be strongly related; however, in the absence of a detailed study, contradicting approaches have emerged. While the computationally intensive synthesis approach is receiving ever-increasing attention and is notably preferred, other works hypothesize that the two might actually be much closer, going as far as to suggest that one can approximate the other. In this paper we describe the two prior classes in detail, focusing on the distinction between them. We show that although in the simpler complete and undercomplete formulations the two approaches are equivalent, in their overcomplete formulation they depart. Focusing on the ℓ1 case, we present a novel approach for comparing the two types of priors based on high-dimensional polytopal geometry. We arrive at a series of theoretical and numerical results establishing the existence of an unbridgeable gap between the two.
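The complete-case equivalence mentioned above can be written out explicitly. The two prior classes lead to the following reconstruction problems (notation assumed here: dictionary D, analysis operator Ω, measurement operator A, noise level ε):

```latex
% Synthesis-based prior: reconstruct x as a sparse combination of atoms of D
\hat{x}_{\mathrm{S}} = D\hat{\gamma}, \qquad
\hat{\gamma} = \arg\min_{\gamma}\ \|\gamma\|_1
\quad \text{s.t.} \quad \|y - AD\gamma\|_2 \le \epsilon .

% Analysis-based prior: ask forward measurements \Omega x of the signal to be sparse
\hat{x}_{\mathrm{A}} = \arg\min_{x}\ \|\Omega x\|_1
\quad \text{s.t.} \quad \|y - Ax\|_2 \le \epsilon .
```

When D is square and invertible, the change of variables γ = Ωx with Ω = D⁻¹ turns one problem into the other; in the overcomplete formulation no such substitution exists, which is where the gap established in this paper opens.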

After a decade of extensive study of the sparse representation synthesis model, we can safely say that this is a mature and stable field, with clear theoretical foundations and appealing applications. Alongside this approach, there is an analysis counterpart model, which, despite its similarity to the synthesis alternative, is markedly different. Surprisingly, the analysis model did not get a similar attention, and its understanding today is shallow and partial. In this paper we take a closer look at the analysis approach, better define it as a generative model for signals, and contrast it with the synthesis one. This work proposes effective pursuit methods that aim to solve inverse problems regularized with the analysis-model prior, accompanied by a preliminary theoretical study of their performance. We demonstrate the effectiveness of the analysis model in several experiments.

We describe a recursive algorithm to compute representations of functions with respect to nonorthogonal and possibly overcomplete dictionaries of elementary building blocks, e.g., affine (wavelet) frames. We propose a modification to the matching pursuit algorithm of Mallat and Zhang (1992) that maintains full backward orthogonality of the residual (error) at every step and thereby leads to improved convergence. We refer to this modified algorithm as orthogonal matching pursuit (OMP). It is shown that all additional computation required for the OMP algorithm may be performed recursively.
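The backward orthogonality mentioned here is the defining difference from plain matching pursuit: after each atom is selected, all coefficients on the current support are re-fitted by least squares, so the residual stays orthogonal to every chosen atom. A minimal numpy sketch (a batch least-squares refit rather than the recursive update the paper derives; names are illustrative):

```python
import numpy as np

def orthogonal_matching_pursuit(D, y, k):
    """Toy OMP: greedy atom selection with a full least-squares refit per step."""
    residual, support = y.astype(float).copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # refit ALL selected coefficients; residual becomes orthogonal to them
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x, residual
```

Because the residual is orthogonal to the selected atoms, no atom is ever picked twice, unlike plain matching pursuit where the same atom can recur.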

This article presents novel results concerning the recovery of signals from undersampled data in the common situation where such signals are not sparse in an orthonormal basis or incoherent dictionary, but in a truly redundant dictionary. This work thus bridges a gap in the literature and shows not only that compressed sensing is viable in this context, but also that accurate recovery is possible via an ℓ1-analysis optimization problem. We introduce a condition on the measurement/sensing matrix, which is a natural generalization of the now well-known restricted isometry property, and which guarantees accurate recovery of signals that are nearly sparse in (possibly) highly overcomplete and coherent dictionaries. This condition imposes no incoherence restriction on the dictionary and our results may be the first of this kind. We discuss practical examples and the implications of our results on those applications, and complement our study by demonstrating the potential of ℓ1-analysis for such problems.

We propose a model for natural images in which the probability of an image is proportional to the product of the probabilities of some filter outputs. We encourage the system to find sparse features by using a Student-t distribution to model each filter output. If the t-distribution is used to model the combined outputs of sets of neurally adjacent filters, the system learns a topographic map in which the orientation, spatial frequency and location of the filters change smoothly across the map. Even though maximum likelihood learning is intractable in our model, the product form allows a relatively efficient learning procedure that works well even for highly overcomplete sets of filters. Once the model has been learned it can be used as a prior to derive the "iterated Wiener filter" for the purpose of denoising images.

The field of sparse and redundant representation modeling has gone through a major revolution in the past two decades. This started with a series of algorithms for approximating the sparsest solutions of linear systems of equations, later to be followed by surprising theoretical results that guarantee these algorithms' performance. With these contributions in place, major barriers in making this model practical and applicable were removed, and sparsity and redundancy became central, leading to state-of-the-art results in various disciplines. One of the main beneficiaries of this progress is the field of image processing, where this model has been shown to lead to unprecedented performance in various applications. This book provides a comprehensive view of the topic of sparse and redundant representation modeling, and its use in signal and image processing. It offers a systematic and ordered exposure to the theoretical foundations of this data model, the numerical aspects of the involved algorithms, and the signal and image processing applications that benefit from these advancements. The book is well-written, presenting clearly the flow of the ideas that brought this field of research to its current achievements. It avoids a succession of theorems and proofs by providing an informal description of the analysis goals and building this way the path to the proofs. The applications described help the reader to better understand advanced and up-to-date concepts in signal and image processing. Written as a text-book for a graduate course for engineering students, this book can also be used as an easy entry point for readers interested in stepping into this field, and for others already active in this area that are interested in expanding their understanding and knowledge. The book is accompanied by a Matlab software package that reproduces most of the results demonstrated in the book. A link to the free software is available on springer.com. 
© Springer Science+Business Media, LLC 2010. All rights reserved.

This paper introduces a novel approach to learning a dictionary in a sparsity-promoting analysis-type prior. The dictionary is optimized so as to optimally restore a set of exemplars from their degraded noisy versions. Towards this goal, we cast our problem as a bilevel programming problem, for which we propose a gradient descent algorithm to reach a stationary point that might be a local minimizer. When the dictionary analysis operator specializes to a convolution, our method turns out to be a way of learning a generalized total variation-type prior. Applications to 1-D signal denoising are reported, and potential applicability and extensions are discussed.

We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality images. Since the K-SVD is limited to handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such Bayesian treatment leads to a simple and effective denoising algorithm, with state-of-the-art performance equivalent to and sometimes surpassing recently published leading alternative denoising methods.

An adaptive procedure for signal representation is proposed. The representation is built up through functions (atoms) selected from a redundant family (dictionary). At each iteration, the algorithm gives rise to an approximation of a given signal, which is guaranteed (1) to be the orthogonal projection of a signal onto the subspace generated by the selected atoms and (2) to minimize the norm of the corresponding residual error. The approach is termed optimized orthogonal matching pursuit because it improves upon the earlier proposed matching pursuit and orthogonal matching pursuit approaches.

We present a nonparametric algorithm for finding localized energy solutions from limited data. The problem we address is underdetermined, and no prior knowledge of the shape of the region on which the solution is nonzero is assumed. Termed the FOcal Underdetermined System Solver (FOCUSS), the algorithm has two integral parts: a low-resolution initial estimate of the real signal and the iteration process that refines the initial estimate to the final localized energy solution. The iterations are based on weighted norm minimization of the dependent variable, with the weights being a function of the preceding iterative solutions. The algorithm is presented as a general estimation tool usable across different applications. A detailed analysis laying the theoretical foundation for the algorithm is given and includes proofs of global and local convergence and a derivation of the rate of convergence. A view of the algorithm as a novel optimization method which combines desirable characteristics of both classical optimization and learning-based algorithms is provided. Mathematical results on conditions for uniqueness of sparse solutions are also given. Applications of the algorithm are illustrated on problems in direction-of-arrival (DOA) estimation and neuromagnetic imaging.
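The two-part structure described here, a minimum-norm initial estimate followed by iteratively reweighted minimum-norm refinements, can be sketched directly. This is the basic FOCUSS recursion x_{k+1} = W_k (A W_k)^+ b with W_k built from |x_k|, in a hypothetical toy form:

```python
import numpy as np

def focuss(A, b, n_iter=30, eps=1e-12):
    """Toy basic FOCUSS: iteratively reweighted minimum-norm solutions of A x = b.

    Weights derived from the previous estimate progressively shrink
    small entries, focusing the energy onto a few components."""
    x = np.linalg.pinv(A) @ b                  # low-resolution initial estimate
    for _ in range(n_iter):
        W = np.diag(np.abs(x) + eps)           # weights from the preceding solution
        x = W @ (np.linalg.pinv(A @ W) @ b)    # weighted minimum-norm step
    return x
```

Each step still satisfies the data constraint (as long as A W retains full row rank), so the iterations trade the minimum-norm solution's spread-out energy for a localized, sparse one.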

S. Vaiter, G. Peyré, C. Dossal, and J. Fadili, Robust sparse analysis regularization, arXiv:1109