Science topic

# Compressed Sensing - Science topic

Beyond Nyquist-Shannon? - applications and discussions
Questions related to Compressed Sensing
Question
Compressed sensing (also known as compressive sampling or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. It is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than the Nyquist–Shannon sampling theorem requires. Recovery is possible under two conditions. The first is sparsity, which requires the signal to be sparse in some domain. The second is incoherence, which is applied through the restricted isometry property and is sufficient for sparse signals.
Image processing can benefit from the use of discrete wavelet transforms. As image resolution grows, so does the storage space required; the DWT is used to reduce the size of an image without sacrificing quality, so high-resolution images can be stored more compactly.
I also recommend that you read the following articles:
Question
For an underdetermined system, compressed sensing can solve the equation:
Y = AX,
where matrix A has dimensions N × M (N > M).
When N < M, the system becomes an overdetermined one. Can this overdetermined problem be solved by LASSO, sparse Bayesian learning or other compressive sensing methods?
Thank you!
Dear Yuhan Liu,
For the system of equations Y=AX to be underdetermined, and if matrix A has dimensions NxM, then N must be less than M, and not the opposite.
In line with Vikas’ answer, compressed sensing (CS) solves underdetermined problems (minimum norm solution). The overdetermined solution via least squares applies to compressive covariance sensing (CCS).
Kind regards.
Question
2D compressed sensing
Thanks. Through reading the literature, I have completed the program of performing 2-d compressive sensing on images. Finally, thanks again!
Question
I am trying to minimize the following objective function:
(A-f(B)).^2 + d*TV(B)
A: known 2D image
B: unknown 2D image
f: a non-injective function (e.g. sine function)
d: constant
TV: total variation: sum of the image gradient magnitudes
I am not an expert on these sorts of problems and looking for some hints to start from. Thank you.
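As a starting point, one simple option is plain gradient descent on a smoothed version of the objective. The sketch below is my own illustration (the smoothing parameter `eps`, the step size, and the toy data are assumptions, not from the question); it uses an anisotropic smoothed TV so the gradient is easy to write down:

```python
import numpy as np

def tv_smooth(B, eps=1e-2):
    """Smoothed anisotropic TV: sum of sqrt(g^2 + eps) over both gradient directions."""
    gx = np.diff(B, axis=1)
    gy = np.diff(B, axis=0)
    return np.sum(np.sqrt(gx**2 + eps)) + np.sum(np.sqrt(gy**2 + eps))

def tv_grad(B, eps=1e-2):
    """Gradient of the smoothed TV term via back-propagating through the differences."""
    g = np.zeros_like(B)
    gx = np.diff(B, axis=1)
    gy = np.diff(B, axis=0)
    sx = gx / np.sqrt(gx**2 + eps)
    sy = gy / np.sqrt(gy**2 + eps)
    g[:, 1:] += sx
    g[:, :-1] -= sx
    g[1:, :] += sy
    g[:-1, :] -= sy
    return g

def objective(A, B, d):
    return np.sum((A - np.sin(B))**2) + d * tv_smooth(B)

def solve(A, d=0.1, step=0.05, iters=300):
    B = np.zeros_like(A)                 # initial guess
    history = []
    for _ in range(iters):
        # d/dB of the data term (A - sin(B))^2 is -2 (A - sin B) cos B
        data_grad = -2.0 * (A - np.sin(B)) * np.cos(B)
        B = B - step * (data_grad + d * tv_grad(B))
        history.append(objective(A, B, d))
    return B, history

rng = np.random.default_rng(0)
A = rng.uniform(-0.8, 0.8, size=(16, 16))   # toy stand-in for the known image
B, history = solve(A)
```

Because f = sin is non-injective, the problem is non-convex and gradient descent only finds a local minimum; a multi-start or continuation strategy (decreasing `eps`) may be needed in practice.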
Om Prakash Yadav Thank you, will take a look.
Question
Is there any alternative topic/theory/mathematical foundation to compressed sensing (CS) theory?
CS theory succeeded the Nyquist criterion; is there any theory that surpasses CS theory?
Dear Vishwaraj B Manur,
First of all, we should separate the concept of Sampling from the concept of Sensing. These two are not interchangeable!
1. Compressed Sensing theory states that it can recover a set of coefficients (which represent, in a specific transform domain, the useful information of the analyzed signal) from fewer samples than the Nyquist sampling criterion requires, in order to reconstruct the signal (just as it could be reconstructed from uniform samples by classical Shannon theory).
2. Compressive Sampling theory states that a signal can be sampled by a protocol (non-uniform sampling, random sampling, modulation and sampling, etc.) which later allows reconstruction by means of a Compressed Sensing algorithm that knows about the sampling protocol used.
3. There are at least 4 sampling ways (according to Figure 2 from https://core.ac.uk/download/pdf/34645298.pdf ) to acquire the information from a signal. Take into account that practical CS is a lossy compression, due to the non-ideal process which happens when the sampling takes place.
Question
* The formal definition of CS is y = Φx = ΦΨα, where x is the input signal, Φ is the sensing matrix, and y is the compressed vector; α is a sparse vector and Ψ is a sparse basis.
* In the case of DCT-based compressed sensing, Ψ is the DCT basis.
* For a wireless body area network, I think it is not practical to apply the DCT transform of the signal x at the sensor node, right?
If so, how can we apply DCT-based compressed sensing for wireless body area networks?
Yes, you can, but I prefer to use wavelets or contourlets to get more sparsity, better information compression, and higher accuracy.
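To make the notation y = Φx = ΦΨα concrete: the key point is that Ψ appears only on the reconstruction side, so the sensor node only ever computes y = Φx and never needs to run a DCT. The numpy sketch below (my own illustration, with assumed sizes) builds the orthonormal DCT-II basis explicitly and checks that both factorizations give the same measurements:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II analysis matrix D, so that alpha = D @ x."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    D[0, :] /= np.sqrt(2.0)          # first row scaled for orthonormality
    return D

n, m = 64, 24
rng = np.random.default_rng(1)

D = dct_matrix(n)                    # analysis: alpha = D @ x
Psi = D.T                            # synthesis (the sparse basis): x = Psi @ alpha
x = rng.standard_normal(n)
alpha = D @ x

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # sensing matrix
y = Phi @ x                                       # what the sensor node computes
y_via_basis = Phi @ Psi @ alpha                   # same y, written via the DCT basis
```

Since the two expressions for y are identical, the DCT basis only needs to exist at the decoder, which is what makes DCT-based CS practical on resource-limited WBAN nodes.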
Question
Hi,
I want to reduce the number of rows of my data set. Until now I have used some clustering algorithms (k-means, k-medoids, SOM), but recently I discovered some papers:
• lp row sampling with Lewis weights ( Cohen, Peng),
• Iterative row sampling (Li, Miller, Peng),
• Compressive sampling (Candes).
I would like to know what is the best method taking into account the density of the variables of the data set?
Is my question meaningful? Or does it make no sense?
I mean, I want a true representation of my data set.
Thanks,
Robin
Question
How can one implement adaptive dictionary reconstruction for compressed sensing of ECG signals, and how can one analyze the overall power consumption of the proposed ECG compression framework as it would be used in a WBAN?
This is covered in a machine learning class on YouTube.
Question
Wavelet-based compression can be achieved by suppressing coefficients below a threshold (or preserving a predetermined number of significant coefficients). Compressed sensing, on the other hand, projects the data to a lower dimension (assuming sparsity in a domain such as wavelets) and reconstructs by solving (a relaxed version of) an optimization problem. What is the advantage of one over the other? CS seems like a roundabout way of doing things that can be achieved straightforwardly by the DWT. Although one needs to keep track of the indices of the significant coefficients in wavelet compression, I am ending up with DWT-based compression outperforming CS! Is there a distinct advantage of CS for compression?
CS can be considered as an enhancement tool for wavelet video coding. Moreover, it can also be applied as an error-concealment tool (in the case of video streaming). For example, my experiments show that 3-D SPIHT compression can be improved by 1.5 dB by applying CS at the decoder side, and in the case of losses the CS-based error concealment gives a 4.9 dB improvement. For more details please have a look at my paper: E. Belyaev et al., "Error concealment for 3-D DWT based video codec using iterative thresholding", IEEE Communications Letters, 2017.
Question
I have tried a code to compress a signal using compressed sensing (CS). The input signal is x and the compressed signal y is given by y = Φ·x, where Φ is the sensing matrix. I have used the following MATLAB code to compute the energy of x and the energy of y:
• energy_x= sum(x(1:384,:).^2);
• energy_y = sum(y(1:192,:).^2);
The length of x is 384, and the length of y is 192. I found that the energy of x is 446910 and the energy of y is 77651282. Is it reasonable to obtain higher energy for the output than for the input?
Assuming you are using random sensing matrices (such as Gaussian or Bernoulli), the energy of the output (computed with the code you mention) can indeed be higher than that of the input, even after compression.
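This can be checked numerically. For a Φ with unnormalized i.i.d. N(0,1) entries, the expected output energy is M times the input energy (E‖Φx‖² = M‖x‖²), which matches the numbers in the question: 192 × 446910 ≈ 8.6 × 10⁷, close to the observed 7.8 × 10⁷. A sketch with the question's sizes (the averaging over trials is my own addition):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, trials = 384, 192, 200

x = rng.standard_normal(N)
energy_x = np.sum(x**2)

# Average output/input energy ratio over many i.i.d. N(0,1) sensing matrices.
ratios = []
for _ in range(trials):
    Phi = rng.standard_normal((M, N))
    y = Phi @ x
    ratios.append(np.sum(y**2) / energy_x)
mean_ratio = np.mean(ratios)        # ~ M, since E||Phi x||^2 = M ||x||^2

# Scaling Phi by 1/sqrt(M) preserves energy on average.
Phi_n = rng.standard_normal((M, N)) / np.sqrt(M)
ratio_n = np.sum((Phi_n @ x)**2) / energy_x   # ~ 1
```

So the observation is expected; if energy preservation matters, normalize the sensing matrix by 1/√M.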
Question
Dear all, currently I am trying to implement a TGV + shearlet compressed sensing data processing routine. I have learned that ShearLab and FFST are two efficient shearlet toolboxes, but they are all written for MATLAB. Does anyone know of such packages for C++? Thanks!
There is already work on shearlets and total generalized variation in compressive sensing. The source code is available on the author's website.
Question
Hi
I have a question regarding the construction of high-rate LDPC codes. My research field is not coding, but somewhere in my research I need an explicit way of constructing high-rate LDPC codes with girth 6 or 8. I think high rate is not an important factor in coding, but in compressed sensing it is of great importance, since it provides the main assumption of compressed sensing.
I would like to know whether Cyclic or Quasi-cyclic LDPC codes of girth 6 or 8 can provide high-rate or not? Any suggestion is appreciated!
Thanks
Mahsa
Question
What are the advantages and disadvantages of matching pursuit algorithms for sparse approximation? And are there alternative methods better than matching pursuit?
The advantages of OMP and MP algorithms for direction-of-arrival (DOA) estimation: applying BS algorithms to a DOA problem enhances resolution and decreases complexity. Moreover, these algorithms do not require knowledge of the number of signal sources. In addition, they do not need any post-processing to converge to the ML solution, since their output is directly the DOAs. The ML algorithm compares all feasible directions and then selects the most likely one; BS algorithms, on the other hand, compare only some of the angles and select among them in a smarter way. Hence, BS algorithms are much more computationally efficient at approaching the ML solution than other DOA estimation algorithms such as MUSIC and ESPRIT. Moreover, BS algorithms converge to the ML solution even when the SNR is low, whereas other approaches converge only at high SNRs. In addition, in other DOA estimation methods the number of estimated DOAs is limited by the number of antennas, while BS-based methods can estimate more DOAs than the number of antennas. Among BS methods, the OMP algorithm provides slightly higher performance than MP, with moderately higher computational complexity.
Question
The old question has been removed.
Dear Xia
I have been working on lattice thermal conductivity in elementary and compound semiconductors for the last 25 years as a group leader. We measured and calculated these properties mostly for ternary compounds. For elementary and binary compounds we calculated them, but for nanowires. We extended the property due to effects such as the size effect, electron concentration, lattice defects, etc. If you are interested in our program you can see our publications (the main author is M. S. Omar) and the PowerPoint presentation:
Solid Surface and Nanoscale materials Structure
Presentation, July 2016. DOI: 10.13140/RG.2.2.31947.59684
by Omar M S at Salahaddin University - Hawler, Arbil, Kurdistan, Iraq
Question
I want to do multichannel ECG data compression using multiscale PCA. Are the transformed coefficients in the eigenspace?
Hi Sushant,
Hope these help.
Thanks.
Question
I want to obtain a matrix C in matlab which is the n-by-n DCT (discrete curvelet transform) matrix such that for a given set of signals X and given set of coefficients A (I supposedly think which will better represent edges of X ) we can get a representation X=C*A. C will be a universal transform matrix like an n-Haar transform matrix. Can I obtain such a matrix since curvelet is linear? I have curvelab 2.1.3 installed and a function fdct_usfft.m returns curvelet transform of a given input. But I need the curvelet transform to be a transform operator matrix rather than as an operator on a signal.
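Since the transform is linear, the matrix can always be materialized by applying the operator to the columns of the identity: column j of C is the transform of the j-th canonical basis vector. In MATLAB this would mean applying fdct_usfft columnwise to eye(n) (beware the O(n²) memory cost). The Python sketch below illustrates the idea with a stand-in Haar-like operator, which is my own substitution for the curvelet transform:

```python
import numpy as np

def haar_step(x):
    """Stand-in linear operator: one level of an orthonormal Haar transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # averages
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # details
    return np.concatenate([a, d])

def operator_to_matrix(op, n):
    """Build the n-by-n matrix C with C @ x == op(x) for every x."""
    I = np.eye(n)
    return np.column_stack([op(I[:, j]) for j in range(n)])

n = 8
C = operator_to_matrix(haar_step, n)

rng = np.random.default_rng(2)
x = rng.standard_normal(n)
```

The same recipe works for any linear operator handle, including the curvelet functions in CurveLab, as long as n is small enough that an explicit n×n matrix fits in memory.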
Question
I would like to calculate the CNR for compressed sensing MRI. However, as it incorporates parallel imaging, the calculation of the background noise might be spurious if the ROI is placed outside the image. Please suggest a way to do this.
The contrast (or contrast modulation) defined by (S1 - S2)/(S1 + S2) is an established metric, but it does not take into account the image noise. I.e., you may obtain high contrasts although the actual CNR is low (or identical contrasts from acquisitions with differing noise levels).
To estimate also the noise, you could try to work with the standard deviation of the signals in a region of interest (ROI), if the non-noise contributions to this standard deviation are sufficiently low (i.e. you would need a ROI with very homogeneous signal, without any intensity gradients etc.)
Question
I have used the structural similarity index metric (ssim) as one of the metrics when dealing with compressed sensing signals. There is a parameter in the code called Sliding window length. Increasing the window length always yields higher ssim values. How can I choose this value? Shall I increase it until no further increase in ssim? Thanks
I am working on transmitting Biomedical signals over wireless body area network.
The signals are  affected by noise and small scale fading. The metrics used for evaluating the quality of the reconstructed signal is the normalized mean square error (NMSE) and the structural similarity index metric (SSIM).  @Michael Wirtzfeld
Question
How we can acquire random undersampling in MRI for CS?
Thanks Dr. Waggoner for your kind response. Currently I am using a Bruker 7.05 T magnet with micro-imaging accessories, i.e. a non-clinical scanner, but we have the clinical software Bruker ParaVision 6.0.1 associated with the scanner. Can you suggest some software that can convert 2dseq or fid files into MATLAB format? What software can we use to remove the high-frequency region from our k-space data and then Fourier-transform it back into an image? This is just to check whether the undersampled data will give us the same images or not.
Question
Is it possible to use particle filters for parameter identification? If yes, what could be the cost related to the identified parameter that needs to be minimized? E.g. in the case of the EKF, we had the covariance matrix and we could minimize the covariance of the identified parameter as the cost. Please suggest a suitable MATLAB implementation to start my work on PF. Thanks
Hello. Your question relates to whether a filter (i.e. a state estimator) can be used as a parameter estimator. Whether the filter is Kalman based or Monte Carlo (particle filter) based, it does not matter. You need to set the problem up so that you have a constant state equation, i.e. x(k+1) = x(k) or perhaps x(k+1) = f(x(k)), essentially without any process noise term. The Kalman filter will compute a recursive weighted least squares estimate of x (the parameter vector) based on the noisy measurements y(1), ..., y(k), where y(k)=Hx(k)+w(k), w(k) ~ N(0,R).  It will be the recursive equivalent of minimising a sum of squares cost function of the form sum (1..k) (y(j)-Hx(j))^T R^(-1) (y(j)-Hx(j)) + (x(j)-x(0))^TQ^(-1)(x(j)-x(0)), where Q is the process noise covariance (zero in your case).
If the system functions are nonlinear, you need an approximate estimator, like EKF, UKF, etc. The particle filter should handle the nonlinearity without problems, but the question is do you really need a recursive estimator? If not, use a batch estimator like iteratively reweighted least squares (I call it ILS), which is OK for any nonlinearity of the form y=h(x)+w. This is a really nice, simple technique that you can code up very easily in Matlab, having calculated the derivative of your measurement function. Have a look at the references in the paper below for ILS.
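As a concrete sketch of the constant-state setup described above (x(k+1) = x(k), y(k) = H x(k) + w(k)): with no process noise, the Kalman filter reduces to recursive least squares on the parameter vector. The Python example below uses illustrative values of my own choosing (true parameters, noise variance, number of steps):

```python
import numpy as np

rng = np.random.default_rng(3)
x_true = np.array([2.0, -1.0, 0.5])    # unknown parameter vector to identify
R = 0.1                                 # measurement noise variance

# Kalman filter with x(k+1) = x(k) and zero process noise = recursive LS.
x_hat = np.zeros(3)
P = 100.0 * np.eye(3)                   # large initial parameter uncertainty
for k in range(500):
    H = rng.standard_normal((1, 3))     # known regressor at step k
    y = H @ x_true + np.sqrt(R) * rng.standard_normal(1)   # noisy measurement
    S = H @ P @ H.T + R                 # innovation variance
    K = P @ H.T / S                     # Kalman gain
    x_hat = x_hat + (K * (y - H @ x_hat)).ravel()
    P = (np.eye(3) - K @ H) @ P         # posterior covariance update
```

After enough informative measurements, x_hat converges to the true parameters and the diagonal of P reports their remaining uncertainty; a particle filter would replace the Gaussian posterior with a weighted sample cloud but follow the same constant-state construction.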
Question
How can we determine the number of measurements required by a compressive sensing technique? Or, what is the number of measurements required by a specific sampling matrix used to sample the signal?
I am also looking for some references that can answer my question,
Thanks,
As you are working on circulant matrices, go through the link below: it provides complete details about different types of partial circulant matrices and also shows that some partial circulant matrices perform on par with Gaussian measurement matrices.
There is one more paper that discusses circulant matrices:
Bajwa, Waheed U., et al. "Toeplitz-structured compressed sensing matrices." 2007 IEEE/SP 14th Workshop on Statistical Signal Processing. IEEE, 2007.
Question
Dear All,
In compressive sensing for cognitive radio networks, when is reconstruction of the original sparse signal not needed, and how can we prove it?
Thanks,
@Valerio: thank you for your response, the article you shared is very interesting, thanks a lot.
Question
I am working on a research project on transmitting ECG signals, compressed by compressed sensing, over a wireless body area network. The signal is affected by noise and small-scale fading. The normalized mean square error is used to estimate the quality of the reconstructed signal at the receiver. When I run the same m-file in MATLAB several times, I obtain different values for the mean square error, although the same channel signal-to-noise ratio is used. I think this is reasonable, do you agree? If so, how can I obtain a single value to represent the quality of the reconstructed signal at the receiver?
Thanks a lot Mohammed Mobien, Bo Li and Яков Аронович Рейзенкинд.
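Yes, the variation is expected: the noise and fading realizations are random, so the NMSE is itself a random variable. The standard remedy is Monte Carlo averaging over many independent runs, reporting the mean (and possibly the standard deviation or a confidence interval). A hypothetical sketch, where `one_run` stands in for the real transmit/reconstruct chain:

```python
import numpy as np

def one_run(rng, snr_db=20.0, n=256):
    """One hypothetical transmit/receive run; stands in for the real CS chain."""
    x = rng.standard_normal(n)                       # original signal
    noise_power = np.mean(x**2) / (10 ** (snr_db / 10))
    x_hat = x + np.sqrt(noise_power) * rng.standard_normal(n)  # toy "reconstruction"
    return np.sum((x - x_hat) ** 2) / np.sum(x ** 2)           # NMSE of this run

rng = np.random.default_rng(4)
nmse_runs = [one_run(rng) for _ in range(500)]
nmse_mean = np.mean(nmse_runs)   # the single representative value to report
nmse_std = np.std(nmse_runs)     # spread across runs, useful as an error bar
```

At 20 dB SNR the per-run NMSE fluctuates around 0.01, and the averaged value is stable across re-executions, which is exactly the "single value" the question asks for.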
Question
While taking measurements of the original signal (sparse in some basis) using a Gaussian random measurement matrix with the l1-magic package, the authors perform orthogonalization of the measurement matrix. Why should we do orthogonalization, given that we are able to reconstruct the signal without that step?
Doing orthogonalization will save you running time, since Phi * Phi^T = identity matrix.
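Concretely, orthonormalizing the rows of Φ (for example via a QR factorization of Φᵀ) makes Φ·Φᵀ exactly the identity, so that product never needs to be formed or inverted inside the solver. A small numpy illustration with assumed sizes:

```python
import numpy as np

rng = np.random.default_rng(5)
M, N = 64, 256
Phi = rng.standard_normal((M, N))     # raw Gaussian sensing matrix

# QR of Phi^T gives Q (N x M) with orthonormal columns;
# its transpose is a sensing matrix with orthonormal rows
# spanning the same row space as Phi.
Q, _ = np.linalg.qr(Phi.T)
Phi_orth = Q.T

gram = Phi_orth @ Phi_orth.T          # identity after orthogonalization
```

With the raw Φ, projections onto the measurement subspace involve (ΦΦᵀ)⁻¹; with Φ_orth that inverse is free, which is where the running-time saving comes from.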
Question
Compressive sensing says that while capturing a signal we take random measurements of it using random matrices (Gaussian, etc.), and while reconstructing the signal at the receiver we use a sparsifying basis. But some researchers perform sparsification after capturing the signal and then take random measurements of this sparsified signal using random matrices. Is performing sparsification at the encoder/transmitter a correct way?
Both are correct. The first approach is standard compressive sensing. The second gives you a chance to exploit the structure of the signal in the sparsifying domain. For example, the low frequencies are often more important than the high frequencies (the human visual system is more sensitive to low frequencies in image processing applications). The second approach is often referred to as hybrid or multi-scale compressive sensing, which was also discussed in the very early days of CS.
Question
Dear Researchers,
Can anyone help in defining 1-bit compressive sensing? I have some difficulties understanding how it works exactly. Can the algorithm be applied to analog/digital signals?
Thanks,
Recovering a sparse signal from only the signs of its linear measurements.
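To expand slightly: in 1-bit CS the measurements are quantized to a single bit, y = sign(Φx), so all amplitude information is lost and only the direction of x (up to scale) can be recovered, typically with iterative algorithms such as BIHT. The toy numpy sketch below (my own illustrative parameters) shows the measurement model and the simple non-iterative back-projection estimate Φᵀy:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, k = 50, 2000, 3

# k-sparse, unit-norm signal (only its direction is recoverable)
x = np.zeros(n)
x[:k] = rng.standard_normal(k)
x /= np.linalg.norm(x)

Phi = rng.standard_normal((m, n))
y = np.sign(Phi @ x)                  # 1-bit measurements: only the signs survive

# Simple estimate: back-project the signs, then normalize.
x_hat = Phi.T @ y
x_hat /= np.linalg.norm(x_hat)

corr = float(x @ x_hat)               # alignment with the true direction
```

With many sign measurements, x_hat aligns closely with x; algorithms such as binary iterative hard thresholding then additionally enforce the k-sparsity to sharpen the estimate. The one-bit quantizer is precisely what makes the scheme attractive for very coarse, low-power analog-to-digital conversion.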
Question
Hello Sir,
I am working on compressive sensing of an image. I decomposed the image (256*256) using the DWT and obtained four sub-bands of 128*128 each. Now I am taking the three high-frequency sub-bands, each column (128*1) to be measured with a measurement matrix (100*128). Therefore I would get three resultant measured vectors. This is the compression. Reconstruction: I would use the low-frequency band (128*128) and the three high-frequency band measurements (128*1) with OMP to reconstruct the image.
Clarification: How to convert the one high frequency sub band (128 *128) into one measured vectors (128*1) ?
Please correct me if my understanding is wrong.
Dear Poornima Prabhakar, take each line of the frequency sub-band matrix as a vector of 128x1, and make the measurement to obtain y_{100x1}. In the reconstruction, use each measurement to reconstruct the respective line, and afterwards rebuild the matrix.
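The shapes in this scheme can be sanity-checked with a small sketch (a Python stand-in with random data): measuring each 128×1 column of a 128×128 sub-band with a 100×128 matrix yields a 100×128 block of measurements, one 100×1 vector per column, which is then reconstructed column by column as described.

```python
import numpy as np

rng = np.random.default_rng(7)
subband = rng.standard_normal((128, 128))            # one high-frequency sub-band
Phi = rng.standard_normal((100, 128)) / np.sqrt(100) # measurement matrix

# Measure every column at once: Y[:, j] = Phi @ subband[:, j]
Y = Phi @ subband

col = 5
y_col = Phi @ subband[:, col]                        # single-column measurement
```

So the "one measured vector" per sub-band only arises if each sub-band is vectorized column-by-column first; measuring the whole 128×128 sub-band gives 128 such vectors, one per column, exactly as the answer above describes.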
Question
Random demodulation and the concept of Xampling are well-known compressed sensing techniques to perform sub-Nyquist sampling of analog signals. Are there any new techniques available?
Question
I have a technical question about Massive MIMO to discuss with you. If each user feeds back its compressed low-dimensional channel \hat h, the compression method can employ compressive sensing (PCA, 2D-DCT) as in the attached paper. When all the users feed back their individual \hat h, the BS performs multi-user precoding such as ZF/RZF. In general, the BS should recover the channel h and then apply precoding. I am wondering whether I may use the low-dimensional \hat h directly for precoding. I have read your paper about beamforming; if all users' orthogonal spaces are obtained, the performance will be the same as using the true h for precoding. If I use the feedback \hat h with a pre-known matrix and prove that ZF precoding depends only on \hat h, it may be right. However, I have no idea how to prove that. What do you think about this question? Thanks!
I am not sure who this message is directed to. Anyway, I think the approach that you describe makes sense for MU-MIMO in FDD mode. Each user feeds back an estimate of its channel, which might be compressed if there is sparsity in the channel (but that is not always the case in reality). Then these estimates are utilized to compute a precoding matrix, using any of the conventional techniques. The precoder needs to be computed in the uncompressed domain, I think, because if the user channels are compressible then they will probably use different subspaces.
I don't think there is anything that you need to "prove", since what you describe is a heuristic algorithm. There is no optimality or anything like that, but you can hopefully get a reasonably good communication performance.
Question
While reading the theory of the OMP algorithm, it is stated that the support of the original signal is also considered in reconstructing the original signal. But when I downloaded the SparseLab toolbox to use the OMP algorithm, the solver (SolveOMP) requires only A (measurement matrix), N (original signal length), and Y (measurement vector/output vector). How then do they obtain the reconstructed signal? Does OMP require the support of the original signal or not?
Actually, SparseLab also needs the sparsity level as an input. The third input parameter, "maxIters", defined as the "number of atoms in the decomposition", is in effect the sparsity level.
BR,
Amir
Question
Since compressed sensing is developing area, then, there is growing number of algorithms and I would not want to miss the best one (as of today). Relevant references and links on the MATLAB code are welcomed. Thank you in advance.
Greedy algorithms are usually the fastest
Question
I have raw data for a synthetic aperture radar, which consists of an 800-by-702 matrix, and I want to use compressive sensing for reconstruction of the targets. What is the best number of measurements to choose?
I think you should be looking at the coherence between columns of the sensing matrix.
In other words look at the maximum correlation between columns, i.e. max_{i,j} (a_i^H a_j) where a_i is the i^th column of the sensing matrix A. The lower this metric is, the better you could resolve.
BR,
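For reference, the mutual coherence mentioned above can be computed directly; a small Python sketch (columns are unit-normalized first, so the metric lies in [0, 1], with lower values indicating better resolving ability):

```python
import numpy as np

def mutual_coherence(A):
    """max over i != j of |a_i^H a_j| for unit-normalized columns of A."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(An.conj().T @ An)       # Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)           # ignore the trivial i == j terms
    return G.max()

rng = np.random.default_rng(8)
A = rng.standard_normal((64, 128))     # example random sensing matrix
mu = mutual_coherence(A)

# Sanity references: orthonormal columns give coherence 0;
# a duplicated column pushes it to 1.
I = np.eye(8)
B = np.column_stack([I, I[:, 0]])
```

Comparing µ across candidate sensing matrices (or across choices of the number of rows) is a cheap proxy for how well closely spaced targets can be separated.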
Question
Measurement matrix with low coherence
There are two kinds of methods: probabilistic and deterministic.
Deterministic methods work well when the length of the unknown vector becomes very large.
For a probabilistic method, you generate a matrix randomly and then normalize it.
Question
For example, in IR spectroscopy we can identify unknown compounds by analyzing their IR Spectrum, then how can we use DSP or Compressed Sensing in this field?
Not sure if you have considered pulse-shape filtering. We use relatively slow 96 kHz PC sound card sampling for gamma spectrometry; the software creates a normalised and zeroised mean pulse shape from 16 samples, and while the spectrum is being recorded the pulses are compared to the mean pulse shape and filtered according to a set threshold. PRA is free software. Links on my site: http://www.gammaspectacular.com/software-downloads
Question
Basically, I wanted to know what motivated people to do parallel imaging and compressed sensing (random undersampling). I guess the answer lies in MRI physics.
Slow? Compared to the images in this Mansfield paper from 38 years ago and other images from that era, MRI today is very fast. Fig. 4a in that paper is a single-"slice" cross-sectional image of a human finger that took 23 min to acquire. That said, depending on the contrast and spatial coverage that you want, it still might take several minutes to acquire an MRI data set, especially if it is 3D. Remaining motionless for several minutes can be challenging for healthy subjects and almost impossible for many patients. In addition, some MRI methods currently being developed can take a very long time to acquire, both due to the type of contrast desired (as others mentioned) and the large amount of data needed. Some state-of-the-art DTI techniques could require close to an hour to acquire if no type of acceleration is used. That is far too long to be considered for clinical use. Methods such as compressed sensing, parallel imaging, and multi-band imaging can be used to reduce the time required even for these types of scans to clinically feasible times.
Another important issue is cost. MRI systems are expensive to buy and expensive to maintain. If you can reduce the amount of time needed for a typical clinical MRI imaging session, you reduce the cost of the procedure.
Question
Compressed sensing (CS) aims to reconstruct signals and images from significantly fewer measurements than were traditionally thought necessary. MRI is an essential medical imaging tool with an inherently slow data acquisition process. Applying CS to MRI can offer scan-time reduction, but how?
You can get resources from site : www.eecs.berkeley.edu/~mlustig/CS.html
Also refer the attachment
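The mechanism, in brief: the scanner measures samples of k-space (the Fourier domain), and acquisition time scales with the number of samples, so acquiring only a random subset directly shortens the scan; the CS reconstruction then fills in the missing information by enforcing sparsity (e.g. in a wavelet domain). The numpy sketch below (illustrative sizes and sampling fraction of my own choosing) shows the undersampling mask and the zero-filled reconstruction that a CS solver would start from:

```python
import numpy as np

rng = np.random.default_rng(9)
img = rng.random((64, 64))             # stand-in for a fully sampled image

kspace = np.fft.fft2(img)              # what the scanner would measure in full
mask = rng.random(kspace.shape) < 0.3  # keep ~30% of k-space, chosen at random
kspace_under = kspace * mask           # only these samples are actually acquired

# Zero-filled reconstruction: inverse FFT with the missing samples set to zero.
# This shows incoherent (noise-like) aliasing, which the CS solver then removes
# by enforcing sparsity of the image in a transform domain.
img_zf = np.real(np.fft.ifft2(kspace_under))

sampled_fraction = mask.mean()         # ~0.3, i.e. roughly a 3x faster scan
```

Random (rather than regular) undersampling is essential: it turns the aliasing into incoherent noise-like artifacts that sparsity-promoting reconstruction can suppress, which is the core of Lustig's Sparse MRI approach referenced above.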
Question
I want to simulate a WSN which uses compressive sensing in its transmissions. Unfortunately, I do not know how I should implement compressive sensing in this simulation.
thanks alot.
Question
Let me first apologize for putting this collaboration request in the question-answer section; however, I could not think of any other way to reach the relevant researchers faster.
I am an MRI scientist working at the dept. of Neuroradiology, University Medical Center, Hamburg.
I have been working towards making T2-relaxometry-based myelin imaging a feasible clinical (MRI) marker. For that, I am using the gold-standard CPMG sequence and exploring ways to cut the scan time for whole-brain coverage to under 15-20 minutes using various regular and random undersampling schemes (compressed sensing).
I am planning to apply for an international collaboration grant between India-EU. This grant aims to make collaboration between India and following EU countries: Belgium, Estonia, Portugal, France and Norway. More information here: http://indigoprojects.eu/funding/indigo-calls/call-2015
I am looking for collaborators (from India/ Belgium, Estonia, Portugal, France and Norway):
1)    who is interested in white matter disorders: It could be any white matter disease.
2)     who may have following backgrounds:
a.      Sequence development
b.      Compressed sensing
c.       Medical background
d.       Histological background
Any interested MRI/ “white matter disorder” researcher can contact me:
Thanks, Ashutosh. However, the collaboration must be with researchers from:  India & Belgium, Estonia, Portugal, France and Norway.
None the less, I would really be interested in collaborating with him/ his PhD supervisor, since I am interested in myelination of central nervous system; but, from imaging point of view. Possible collaborating and joint funding proposal is very much possible.
Could you please introduce me to him?
My email id is: (dushyantkumar1) (AT)(gmail.com)
Question
In my work I mainly use two of these concepts. Do you know other approaches? Relevant references are welcome. Thank you in advance!
There is no Nyquist frequency limitation there. But the spectral signal is much more noisy.
Question
Can any one suggest the best algorithm and if possible the MATLAB code to reconstruct a group structured vector?
Not sure if it's "the best" for group sparsity problems, but for regular sparsity, SPGL1 is among the best solvers out there; the code can handle group sparsity as well, see https://www.math.ucdavis.edu/~mpf/spgl1/
Question
I am using pseudopolar fft to implement shearlet transform using Shearlab 1.1.
Hi Nija
I think the best way to do it is by having a look at the Sparse MRI code from Lustig at the link below; you probably need to implement the forward/inverse shearlet projector as a function handle and use it. I can help you with it if you need.
Ala'
Question
I would like to expand knowledge in this field. Mainly I'm interested in the applications of the Compressed Sensing in the optics. Thank you in advance!
You can have a look at the following, useful especially for beginners:
All the best
Question
Compressed sensing has been employed in medical image reconstruction, mainly tomography. It has been shown that faster imaging with equivalent resolution is achievable with fewer measurements. But is it the same scenario when measurement is on one side only, as in reflection mode (e.g. ultrasound)?
What domain used for sparse representation of signal ?
Thanks Jerome,
What they have done is interesting. As far as I understood, their technique is more or less inpainting rather than compressed sensing; plus, they don't mention in what basis domain they do the processing. This makes me wonder whether they can optimally figure out the best positions of the measurements, or whether it is always going to be random.
Question
Can someone please clarify the following questions I have in compressed sensing.
1. Compressed sensing says that we need not acquire the entire signal X of dimension Nx1 , but instead we can take only few measurements y of dimension Mx1 where M<<N... and use them to reconstruct the signal X.
But we write the equation as : y = A x X where A is a measurement matrix which is MxN.
Looking at this equation I feel that in order to get y or M measurements or the compressed signal , we need to have a complete signal X, which implies that we need to have or measure X completely anyway... so how does this make compressed sensing useful in saying that we can directly compress while the data is being acquired?
2. If all we require is very few measurements M, which can be accomplished by correlating only few columns of A with X, why do we need X to be a sparse vector or a compressible vector in some other domain?
3. Does compressed sensing reduce the number of sensors required to sample the data? Or does it only reduce the storage and processing equipment & time required to compress the data after it is sampled from the sensors?
Thanks a lot for clarifying my confusion!
Nalin, the idea behind compressed sensing is that we are sensing at a compressed rate (below the Nyquist rate). It is different from "sense then compress" as in JPEG and others. If one needs 20 sensors in traditional sensing to sense a signal x of length 20, and x is sparse with, say, 2 non-zero elements whose locations are unknown, then only about 3 sensors (on the order of 2·log 20 ≈ 3) will suffice instead. Further, if you know the locations of those non-zero elements (2 in this example), then only 2 sensors will do the job.
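To make the recovery step concrete, here is a compact OMP sketch in numpy matching the 2-sparse, length-20 example above (a toy illustration with parameters of my own choosing, not the l1-magic or SparseLab code):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k columns of A to explain y."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # column most correlated with residual
        if j not in support:
            support.append(j)
        # Least-squares refit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat, residual

rng = np.random.default_rng(10)
n, m, k = 20, 8, 2
x = np.zeros(n)
x[[3, 11]] = [1.5, -2.0]                    # 2-sparse signal of length 20
A = rng.standard_normal((m, n)) / np.sqrt(m)  # 8 "sensors" instead of 20
y = A @ x

x_hat, residual = omp(A, y, k)
```

This also answers the earlier confusion: the sensor never forms x and multiplies by A explicitly; in hardware, each of the m measurements is a physical inner product with one row of A taken while the signal is being acquired.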
Question
I want to solve the following optimization problem:
min_x { f(x) + lambda * ||x||1 }
There are many source codes for solving L1-norm Regularization problems, but their loss functions are limited to least square and logistic regression.
Do you know any source code that f(x) could be defined by the user?
Try this:Orthant-Wise Limited-memory Quasi-Newton Optimizer for L1-regularized Objectives(source code available)
Question
Recall: if we have P non-zero elements in a sparse vector of length N, the minimum required number of samples is on the order of P·C·log(N/P). What if we have Q groups of known or unknown length instead? Do we gain any advantage in the number of measurements from the group structure?
If you already know that you have a particular (group or other) structure, you should be able to reduce the lower bound on the minimum number of measurements. Some results have already been published; see the keyword "model-based compressed sensing", for instance. Simon Foucart also published a paper on nonnegative sparse signal recovery, and recently submitted one on sparsity with separation between the non-zeros.
Hope this helps!
Question
In my research, in MSc, I dealt with compressed sensing approach.
In order to recover the desired signal from the acquired signal, I used a solver that performs minimization of the objective function with a regularization term based on the l1-norm or TV-norm, but I'm sure that there are many other options.
So I will be very glad if someone will recommend the relevant references or links.
attached
Question
How is the Orthogonal Matching Pursuit algorithm implemented in MATLAB?
Hi Rohit,
I wrote a tutorial on OMP for myself when I learned compressive sensing.
You can have a look at my site kore76.wordpress.com, under the TUTORIAL section.
After understanding OMP, I think it is easy to implement it in MATLAB. Let me know if you have question.
Good luck!
Question
Does anyone know an algorithm and simulation example for compressed cooperative sensing?
Dear sir, this is a good article about compressed sensing.
Question
Could anybody suggest an l1-norm solver for complex sparse signals?
Currently I am using the OMP (Orthogonal Matching Pursuit) algorithm for the l1 optimization, but it is not able to detect the non-zero samples localized near the origin. Please suggest an algorithm which will work for complex sparse signals and detect all non-zero samples.
AFAIK the L1 approach is chosen because it is the closest convex relaxation of the L0 pseudo-norm. Why it works is explained in some of the early papers on the topic by Candès et al. or Donoho. A good place to start is http://dsp.rice.edu/cs
Question
I am working on designing multimodal biometric watermarking techniques using compressive sensing theory to improve the security and payload capacity of the techniques. But I am facing a problem: when I reconstruct an image or signal from sparse coefficients, I am not able to reconstruct it because I have not found a proper optimization technique.
The terminology is a bit unclear to me here. Usually in compressed sensing, you reconstruct a signal's sparse coefficients from linear combinations (measurements) of the signal. So I am not quite sure what you mean by reconstructing it from its sparse coefficients? If you mean the former, suitable optimisation techniques include basis pursuit denoising (BPDN), compressive sampling matching pursuit (CoSaMP), and iterative hard thresholding (IHT). There are numerous others, some more specialised.
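Of the options listed, iterative hard thresholding is the simplest to prototype. A minimal numpy sketch (illustrative sizes of my own choosing, with a step size below 1/‖A‖² so the residual is non-increasing, per Blumensath and Davies):

```python
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def iht(A, y, s, iters=100):
    """Iterative hard thresholding: x <- H_s(x + mu * A^T (y - A x))."""
    mu = 0.9 / np.linalg.norm(A, 2) ** 2   # conservative step below 1/||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + mu * A.T @ (y - A @ x), s)
    return x

rng = np.random.default_rng(11)
m, n, s = 40, 100, 4
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                             # measurements of the sparse signal

x_rec = iht(A, y, s)
```

BPDN and CoSaMP generally recover more robustly, but IHT's two-line iteration makes it a good first check that the measurement model itself is set up correctly.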
Question
In regard to denoising with tree-structured wavelets, I am trying to repeat the result of Section 5.3 of the paper "Proximal Methods for Hierarchical Sparse Coding" by R. Jenatton et al., using the mexProximalTree function with their SPAMS toolbox: http://spams-devel.gforge.inria.fr/index.html.
Can you be a bit more specific? I am not sure what you actually want to know?
Question
I want to create this thread to collect and archive knowledge of large-scale compressive sensing, and hopefully give a good overview to other academics of how to do compressed sensing on high-dimensional, high-resolution data such as images. Recently I worked with Hidden Markov Tree related models; therefore, Duarte's work on "Model-based Compressive Sensing" was brought to my attention. However, like many other methods, the size of the input is limited due to the need to create and store "sensing matrices" (randomly generated for incoherence with the input signals). Therefore, I can only process images up to size 64x64, which is simply not enough for me. Trying to push this boundary, I found "noiselets", which could help sense sparse structures of image data. I have done some experiments combining Hidden Markov Trees and noiselets, but there are no positive results so far. That is why I wish to know more about noiselets or other methods which can generate good sensing matrices without creating and storing enormous explicit matrices. I appreciate any hints, directions, comments, etc., and thanks in advance. Some references:
Model-based Compressive Sensing, RG Baraniuk, V Cevher, MF Duarte, C Hedge,
Using Correlated Subset Structure for Compressive Sensing Recovery - A Divekar, D Needell
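One standard way around storing an explicit Φ is a matrix-free operator: a randomly subsampled fast transform (FFT, noiselet, Hadamard) applied as a function in O(n log n) time, with only the chosen row indices stored. A small numpy sketch of the idea using a subsampled unitary FFT (my own illustration; noiselets follow the same pattern with their own fast transform):

```python
import numpy as np

rng = np.random.default_rng(12)
n, m = 4096, 512
rows = rng.choice(n, size=m, replace=False)   # the only thing stored: m row indices

def measure(x):
    """Matrix-free sensing: m rows of the unitary FFT, never formed as a matrix."""
    return np.fft.fft(x, norm="ortho")[rows]

def adjoint(y):
    """Adjoint operator A^H, needed by greedy and iterative CS solvers."""
    full = np.zeros(n, dtype=complex)
    full[rows] = y
    return np.fft.ifft(full, norm="ortho")

x = rng.standard_normal(n)
y = measure(x)        # m measurements in O(n log n) time, O(m) extra memory
```

Solvers like CoSaMP, IHT, or SPGL1 only ever call `measure` and `adjoint`, so the 64x64 limit imposed by explicit matrices disappears; storage drops from O(mn) to O(m).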