Science topic
Compressed Sensing - Science topic
Beyond Nyquist-Shannon? - applications and discussions
Questions related to Compressed Sensing
Compressed sensing (also known as compressive sampling or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. It is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than the Nyquist–Shannon sampling theorem requires. Recovery is possible under two conditions. The first is sparsity, which requires the signal to be sparse in some domain. The second is incoherence, which is applied through the restricted isometry property and is sufficient for sparse signals.
For an underdetermined system, compressed sensing can solve the equation:
Y = AX,
where matrix A has dimensions N × M (N < M), i.e., fewer equations than unknowns.
When N > M, the system becomes an overdetermined one. Can this overdetermined problem be solved by LASSO, sparse Bayesian learning, or other compressive sensing methods?
Thank you!
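A quick sketch of why the overdetermined case is unproblematic: LASSO-type solvers minimize ½‖y − Ax‖² + λ‖x‖₁ and do not care whether A is tall or wide. Below is a minimal ISTA (iterative soft-thresholding) sketch in Python/NumPy with an illustrative tall 50×20 Gaussian A; all names, sizes, and parameter values are assumptions for the example only.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Minimize 0.5*||y - A x||^2 + lam*||x||_1 by iterative
    soft-thresholding; works whether A is tall (overdetermined)
    or wide (underdetermined)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)                # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))            # tall matrix: overdetermined system
x_true = np.zeros(20)
x_true[[3, 7]] = [2.0, -1.5]
y = A @ x_true
x_hat = ista(A, y, lam=0.05)
```

Sparse Bayesian learning would apply equally well here; sparsity of the solution, not the shape of A, is what these methods exploit.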
I am trying to minimize the following objective function:
sum((A - f(B)).^2) + d * TV(B)
A: known 2D image
B: unknown 2D image
f: a non-injective function (e.g. sine function)
d: constant
TV: total variation: sum of the image gradient magnitudes
I am not an expert on these sorts of problems and am looking for some hints to start from. Thank you.
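One simple starting point, since f is non-injective and the problem therefore non-convex: smooth the TV term and run plain gradient descent from a reasonable initial guess (initialization matters because f = sin creates many local minima). The sketch below assumes f = sin, a small 16×16 test image, and hand-picked step size and weight d; it is only meant to show the chain-rule gradient of the data term and the adjoint of the forward-difference TV term.

```python
import numpy as np

def tv_smoothed(B, eps=1e-6):
    # smoothed total variation: sum of sqrt(gx^2 + gy^2 + eps)
    gx = np.diff(B, axis=0, append=B[-1:, :])   # forward differences, last row 0
    gy = np.diff(B, axis=1, append=B[:, -1:])
    return np.sum(np.sqrt(gx**2 + gy**2 + eps))

def tv_grad(B, eps=1e-6):
    gx = np.diff(B, axis=0, append=B[-1:, :])
    gy = np.diff(B, axis=1, append=B[:, -1:])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    px, py = gx / mag, gy / mag
    # adjoint of the forward-difference operator: grad_k = p_{k-1} - p_k
    # (the rolled-in boundary element is exactly zero here)
    return (np.roll(px, 1, axis=0) - px) + (np.roll(py, 1, axis=1) - py)

def objective(B, A, d):
    return np.sum((A - np.sin(B))**2) + d * tv_smoothed(B)

def grad(B, A, d):
    data_g = -2.0 * (A - np.sin(B)) * np.cos(B)   # chain rule through f = sin
    return data_g + d * tv_grad(B)

rng = np.random.default_rng(0)
B_true = 0.5 * rng.standard_normal((16, 16))
A = np.sin(B_true)                                # the known image
B = np.zeros_like(A)                              # initial guess
d, lr = 0.05, 0.05
obj0 = objective(B, A, d)
for _ in range(200):
    B = B - lr * grad(B, A, d)
obj1 = objective(B, A, d)
```

For a more serious attempt, alternating schemes or proximal methods that handle the TV term exactly (e.g., Chambolle-type algorithms) would be the usual next step.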
Is there any alternative topic/theory/mathematical foundation to compressed sensing (CS) theory?
CS theory succeeded the Nyquist criterion; is there any theory that surpasses CS theory?
*The formal definition of CS is
y=Φx=ΦΨα
where x is the input signal, Φ is the sensing matrix, and y is the compressed vector. α is a sparse vector, and Ψ is a sparse basis.
* In the case of DCT-based compressed sensing, Ψ is the DCT basis.
* For a wireless body area network, I think it is not practical to apply the DCT transform of the signal x at the sensor node, right?
If so, how can we apply DCT-based compressed sensing for wireless body area networks?
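One point worth noting: in the usual CS setup the sensor node never computes the DCT. It only applies the (random) sensing matrix Φ to the raw samples, y = Φx; the DCT basis Ψ enters only at the receiver, which solves y = ΦΨα for the sparse α. Below is a small NumPy sketch of that division of labor; the sizes, the Gaussian Φ, and the ISTA recovery loop are illustrative assumptions, not WBAN-specific choices.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(1)
N, M = 256, 100

# the signal is sparse in the DCT domain: x = Psi @ alpha
alpha = np.zeros(N)
alpha[[5, 40, 90]] = [3.0, -2.0, 1.0]
Psi = idct(np.eye(N), axis=0, norm='ortho')   # inverse-DCT basis as a matrix
x = Psi @ alpha

# the sensor node only applies a random Phi -- no DCT is computed there
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

# the receiver, which knows Phi and Psi, recovers alpha from y = (Phi Psi) alpha
Theta = Phi @ Psi
L = np.linalg.norm(Theta, 2) ** 2
a = np.zeros(N)
lam = 0.02
for _ in range(800):                           # plain ISTA iterations
    z = a - Theta.T @ (Theta @ a - y) / L
    a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
x_rec = Psi @ a
```

So the heavy transform and optimization sit entirely on the receiver/gateway side, which is exactly what makes this attractive for low-power sensor nodes.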
Hi,
I want to reduce the number of rows of my data set, and until now I have used some clustering algorithms (k-means, k-medoids, SOM), but recently I discovered some papers:
- lp row sampling with Lewis weights (Cohen, Peng),
- Iterative row sampling (Li, Miller, Peng),
- Compressive sampling (Candès).
I would like to know what is the best method taking into account the density of the variables of the data set?
Is my question meaningful? Or does it make no sense?
I mean, I want a true representation of my data set.
Thanks,
Robin
How can one implement adaptive dictionary reconstruction for compressed sensing of ECG signals, and how can one analyze the overall power consumption of the proposed ECG compression framework as it would be used in a WBAN?
Wavelet-based compression can be achieved by suppressing coefficients below a threshold (or preserving a predetermined number of significant coefficients). Compressed sensing, on the other hand, projects the data to a lower dimension (assuming sparsity in a domain such as wavelets) and reconstructs by solving (a relaxed version of) an optimization problem. What is the advantage of one over the other? CS seems like a roundabout way of doing things that can be achieved straightforwardly by the DWT. Although one needs to keep track of the indices of significant coefficients in wavelet compression, I am ending up with DWT-based compression outperforming CS! Is there a distinct advantage of CS for compression?
I have tried a code to compress a signal using compressed sensing (CS). The input signal is x, and the compressed signal y is given by y = Φx, where Φ is the sensing matrix.
I have used the following MATLAB code to compute the energy of x and the energy of y:
- energy_x= sum(x(1:384,:).^2);
- energy_y = sum(y(1:192,:).^2);
The length of x is 384 , and the length of y is 192.
I found that the energy of x is 446910 and the energy of y is 77651282.
Is it reasonable to obtain higher energy for the output than for the input?
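Yes, this is expected if Φ has i.i.d. N(0,1) entries: for a fixed x, each measurement y_i has variance ‖x‖², so E‖y‖² = M‖x‖², i.e., the output energy is roughly M times the input energy (the ratio 77651282/446910 ≈ 174 is close to M = 192). Scaling Φ by 1/√M makes the energies comparable on average. A small NumPy check, with the dimensions copied from the question and everything else illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 384, 192
x = rng.standard_normal(N)
Phi = rng.standard_normal((M, N))            # unnormalized Gaussian sensing matrix

y = Phi @ x
energy_x = np.sum(x**2)
energy_y = np.sum(y**2)
ratio = energy_y / energy_x                  # concentrates around M = 192

# scaling Phi by 1/sqrt(M) makes the energies comparable on average
y_n = (Phi / np.sqrt(M)) @ x
energy_y_n = np.sum(y_n**2)
```

So the large output energy is a property of the unnormalized sensing matrix, not a bug in the compression code.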
Hi
I have a question regarding high-rate LDPC code constructions. My research field is not coding, but somewhere in my research I need an explicit way of constructing high-rate LDPC codes with girth 6 or 8. I think high rate is not an important factor in coding, but in compressed sensing it is of great importance, since a high-rate parity-check matrix has far fewer rows than columns, which matches the main assumption of compressed sensing.
I would like to know whether Cyclic or Quasi-cyclic LDPC codes of girth 6 or 8 can provide high-rate or not? Any suggestion is appreciated!
Thanks
Mahsa
What are the advantages and disadvantages of matching pursuit algorithms for sparse approximation? And are there alternative methods better than matching pursuit?
I want to do multichannel ECG data compression using multiscale PCA. Are the transformed coefficients in the eigenspace?
I want to obtain a matrix C in MATLAB which is the n-by-n discrete curvelet transform matrix, such that for a given set of signals X and a given set of coefficients A (which I think will better represent the edges of X) we can get a representation X = C*A. C would be a universal transform matrix, like an n-point Haar transform matrix. Can I obtain such a matrix, given that the curvelet transform is linear? I have CurveLab 2.1.3 installed, and the function fdct_usfft.m returns the curvelet transform of a given input. But I need the curvelet transform as a transform operator matrix rather than as an operator acting on a signal.
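Since the transform is linear, one generic way to materialize its matrix is to apply the operator to each canonical basis vector and stack the results as columns. The same trick should work with fdct_usfft.m by vectorizing its coefficient structure, bearing in mind that curvelets are redundant, so the resulting matrix will be tall and inversion uses the adjoint/pseudo-inverse rather than a simple transpose. A NumPy sketch using the orthonormal DCT as a stand-in operator (the curvelet call itself is not reproduced here):

```python
import numpy as np
from scipy.fft import dct

def operator_to_matrix(T, n):
    """Materialize any linear operator T acting on length-n vectors as an
    explicit matrix: column i is T applied to the i-th canonical basis vector."""
    I = np.eye(n)
    return np.column_stack([T(I[:, i]) for i in range(n)])

n = 64
# stand-in: orthonormal DCT in place of the curvelet operator
C = operator_to_matrix(lambda v: dct(v, norm='ortho'), n)

x = np.random.default_rng(2).standard_normal(n)
a = C @ x              # analysis: coefficients A = C x
x_rec = C.T @ a        # synthesis: C is orthonormal here, so C^T = C^{-1}
```

In MATLAB the equivalent loop is `C(:, i) = vectorize(fdct_usfft(ei))` over the columns of `eye(n)`; for large n this matrix becomes huge, which is why operator form is usually preferred.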
I would like to calculate the CNR for compressed sensing MRI. However, as it incorporates parallel imaging, the calculation of background noise might be spurious if the ROI is placed outside the image. Please suggest a suitable way.
I have used the structural similarity index metric (SSIM) as one of the metrics when dealing with compressed sensing signals. There is a parameter in the code called the sliding window length. Increasing the window length always yields higher SSIM values. How can I choose this value? Shall I increase it until there is no further increase in SSIM? Thanks.
How can we acquire random undersampling in MRI for CS?
Is it possible to use particle filters for parameter identification? If yes, what could be the cost related to the identified parameter which needs to be minimized? E.g., in the case of the EKF, we had the covariance matrix and could minimize the covariance of the identified parameter as the cost. Please suggest a suitable MATLAB implementation to start my work on PF. Thanks.
How can we determine the number of measurements required by a compressive sensing technique? Or, what is the number of measurements required by a specific sampling matrix used to sample the signal?
I am also looking for some references that can answer my question,
Thanks,
Dear All,
In compressive sensing for cognitive radio networks, when is the reconstruction of the original sparse signal not needed? And how can we prove it?
Thanks,
I am working on transmitting ECG signals compressed by compressed sensing over a wireless body area network. The signal is affected by noise and small-scale fading. The normalized mean square error is used to estimate the quality of the reconstructed signal at the receiver. When I run the same m-file in MATLAB several times, I obtain different values for the mean square error, although the same channel signal-to-noise ratio is used. I think this is reasonable; do you agree? If so, how can I obtain a single value to represent the quality of the reconstructed signal at the receiver?
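Agreed, the run-to-run variation is expected: a fixed SNR pins down only the noise power, while each run draws a fresh noise (and fading) realization. The standard remedy is to report the NMSE averaged over many Monte Carlo trials, with a fixed random seed for reproducibility. A minimal NumPy sketch of that averaging, using additive white Gaussian noise on a toy sinusoid as a placeholder for the full compress/transmit/reconstruct chain:

```python
import numpy as np

def awgn(sig, snr_db, rng):
    """Add white Gaussian noise at a prescribed SNR (in dB)."""
    p_sig = np.mean(sig**2)
    p_noise = p_sig / (10.0 ** (snr_db / 10.0))
    return sig + rng.standard_normal(sig.shape) * np.sqrt(p_noise)

def nmse(x, x_hat):
    return np.sum((x - x_hat) ** 2) / np.sum(x**2)

rng = np.random.default_rng(42)            # fixed seed -> reproducible result
x = np.sin(2 * np.pi * np.arange(384) / 64)

# placeholder for the full compress/transmit/reconstruct chain:
# here the "reconstruction" is simply the noisy received signal
trials = [nmse(x, awgn(x, snr_db=10, rng=rng)) for _ in range(200)]
avg_nmse = float(np.mean(trials))          # the single value to report
```

In a paper one would also report the standard deviation (or a confidence interval) of the NMSE across trials, not just the mean.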
While taking measurements of the original signal (sparse in some basis) using a Gaussian random measurement matrix with the l1-magic package, the authors perform an orthogonalization of the measurement matrix. Why should we do this orthogonalization, given that we are able to reconstruct the signal even without that step?
Compressive sensing says that, while capturing a signal, we take random measurements of it using random matrices (Gaussian, etc.), and while reconstructing the signal at the receiver we use a sparsifying basis. But some researchers perform sparsification after capturing the signal and then take random measurements of this sparsified signal using random matrices. Is performing sparsification at the encoder/transmitter a correct approach?
Dear Researchers,
Can anyone help in defining 1-bit compressive sensing? I have some difficulties understanding how exactly it works. Can the algorithm be applied to analog/digital signals?
Thanks,
Hello Sir,
I am working on compressive sensing of an image. I decomposed the image (256×256) using the DWT and obtained four sub-bands of 128×128 each. Now I am taking the three high-frequency sub-bands, each column (128×1) of which is to be measured with a measurement matrix (100×128). I would therefore get three resultant measurement vectors. This is the compression step. Reconstruction: I would use the low-frequency band (128×128) and the three high-frequency bands (128×1) with OMP to reconstruct the image.
Clarification: how do I convert one high-frequency sub-band (128×128) into one measurement vector (128×1)?
Please correct me if my understanding is wrong.
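One note on the dimensions: multiplying a 128×1 column by a 100×128 Φ yields a 100×1 vector, so a whole 128×128 sub-band measured column by column gives a 100×128 measurement matrix, not a single 128×1 vector (a single vector would require vectorizing the sub-band to 16384×1 and using a correspondingly wider Φ). A NumPy sketch of the column-wise scheme, with the sizes taken from the question and a random stand-in sub-band:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((128, 128))      # stand-in for one high-frequency sub-band
Phi = rng.standard_normal((100, 128)) / np.sqrt(100)

# measure column by column: each 128x1 column -> a 100x1 measurement vector
Y_cols = np.column_stack([Phi @ S[:, j] for j in range(S.shape[1])])

# exactly the same thing as a single matrix product
Y = Phi @ S                              # shape (100, 128)
```

Reconstruction with OMP then runs once per column (or once on the vectorized sub-band), using the same Φ.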
The random demodulator and the concept of Xampling are well-known compressed sensing techniques for performing sub-Nyquist sampling of analog signals. Are there any new techniques available?
I have a technical question about massive MIMO to discuss with you. Suppose each user feeds back a compressed low-dimensional channel \hat{h}. The compression method can employ compressive sensing (PCA, 2D-DCT), as in the attached paper. When all the users have fed back their individual \hat{h}, the BS performs multi-user precoding such as ZF/RZF, etc. In general, the BS should recover the channel h and then apply precoding. I am wondering whether I may use the low-dimensional \hat{h} to do the precoding directly. I have read your paper about beamforming; if every user's orthogonal space is obtained, the performance will be the same as using the true h to perform precoding. If I use the fed-back \hat{h} with a pre-known matrix and prove that ZF precoding depends only on \hat{h}, it may be right. However, I have no idea how to prove that. What do you think about this question? Thanks!
While reading the theory of the OMP algorithm, it is stated that the support of the original signal is also considered in reconstructing the original signal. But when I downloaded the SparseLab toolbox to use the OMP algorithm, the solver (SolveOMP) requires only A (the measurement matrix), N (the original signal length), and Y (the measurement/output vector). How, then, do they obtain the reconstructed signal? Does OMP require the support of the original signal or not?
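To the last question: OMP does not need the true support; it estimates the support greedily, one index per iteration, which is why SolveOMP only asks for A, y, and a size/sparsity parameter. A minimal NumPy version makes this explicit; the dimensions and the test signal below are illustrative assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily estimates the support
    itself -- only A, y and a sparsity level k are needed."""
    r = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))   # column most correlated with residual
        if j not in support:
            support.append(j)
        # least-squares fit on the currently selected columns
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s           # update the residual
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((80, 200))
A /= np.linalg.norm(A, axis=0)               # unit-norm columns
x_true = np.zeros(200)
x_true[[10, 50, 120]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = omp(A, y, k=3)
```

So the "support" mentioned in the theory is the output of the greedy selection, not an input; only an upper bound on the sparsity (or a residual tolerance) is supplied.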
Since compressed sensing is a developing area, there is a growing number of algorithms, and I would not want to miss the best one (as of today). Relevant references and links to MATLAB code are welcome. Thank you in advance.
I have raw data for synthetic aperture radar consisting of an 800-by-702 matrix, and I want to use compressive sensing for the reconstruction of the targets. What is the best number of measurements to choose?
Measurement matrix with low coherence
For example, in IR spectroscopy we can identify unknown compounds by analyzing their IR spectrum; how can we use DSP or compressed sensing in this field?
Basically, I wanted to know what motivated people to do parallel imaging and compressed sensing (random undersampling). I guess the answer lies in MRI physics.
Compressed sensing (CS) aims to reconstruct signals and images from significantly fewer measurements than were traditionally thought necessary. MRI is an essential medical imaging tool with an inherently slow data acquisition process. Applying CS to MRI can offer a scan-time reduction, but how?
Let me first apologize for putting this collaboration request in the question-answer section. However, I could not think of any other way to reach the relevant researchers faster.
I am an MRI scientist working at the dept. of Neuroradiology, University Medical Center, Hamburg.
I have been working towards making T2-relaxometry-based myelin imaging a feasible clinical (MRI) marker. For that, I am using the gold-standard CPMG sequence and exploring ways to cut down the scan time for whole-brain coverage to under 15-20 minutes using various regular and random undersampling schemes (compressed sensing).
I am planning to apply for an international collaboration grant between India-EU. This grant aims to make collaboration between India and following EU countries: Belgium, Estonia, Portugal, France and Norway. More information here: http://indigoprojects.eu/funding/indigo-calls/call-2015
I am looking for collaborators (from India/ Belgium, Estonia, Portugal, France and Norway):
1) who is interested in white matter disorders: It could be any white matter disease.
2) who may have following backgrounds:
a. Sequence development
b. Compressed sensing
c. Medical background
d. Histological background
Any interested MRI/ “white matter disorder” researcher can contact me:
In my work I mainly use two of these concepts. Do you know other approaches? Relevant references are welcome. Thank you in advance!
Can any one suggest the best algorithm and if possible the MATLAB code to reconstruct a group structured vector?
I am using pseudopolar fft to implement shearlet transform using Shearlab 1.1.
I would like to expand my knowledge in this field. Mainly I am interested in applications of compressed sensing in optics. Thank you in advance!
Compressed sensing has been employed in medical image reconstruction, mainly tomography. It has been shown that faster imaging with equivalent resolution is achievable with a smaller number of measurements. But is it the same scenario when measurement is on one side only, as in reflection mode (e.g., ultrasound)?
What domains are used for the sparse representation of signals?
Can someone please clarify the following questions I have in compressed sensing.
1. Compressed sensing says that we need not acquire the entire signal X of dimension N×1; instead we can take only a few measurements y of dimension M×1, where M << N, and use them to reconstruct the signal X.
But we write the equation as y = A·X, where A is an M×N measurement matrix.
Looking at this equation, I feel that in order to get y (the M measurements, i.e., the compressed signal), we need to have the complete signal X, which implies that we need to measure X completely anyway. So how does this make compressed sensing useful in saying that we can compress directly while the data is being acquired?
2. If all we require is very few measurements M, which can be accomplished by correlating only a few rows of A with X, why do we need X to be a sparse vector or a compressible vector in some other domain?
3. Does compressed sensing reduce the number of sensors required to sample the data? Or does it only reduce the storage and processing equipment & time required to compress the data after it is sampled from the sensors?
Thanks a lot for clarifying my confusion!
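On question 2, a small demonstration of why sparsity is indispensable: with M < N the system y = AX has infinitely many solutions, so the measurements alone cannot single out X; the sparsity prior is what makes the inversion well-posed. (On question 1: in CS hardware such as the single-pixel camera or MRI, the correlations y = AX are formed physically during acquisition, so the full X is never stored.) A NumPy sketch of the ambiguity, with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 30, 100
A = rng.standard_normal((M, N))

x_sparse = np.zeros(N)
x_sparse[[4, 77]] = [1.0, -2.0]
y = A @ x_sparse

# any vector in the null space of A can be added without changing y
_, _, Vt = np.linalg.svd(A)
null_vec = Vt[-1]                    # a null-space direction (exists since M < N)
x_other = x_sparse + 5.0 * null_vec  # dense, yet yields exactly the same y
```

Both x_sparse and x_other explain the measurements perfectly; an l1/sparsity criterion is what lets the reconstruction prefer x_sparse.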
I want to solve the following optimization problem:
min_x { f(x) + lambda * ||x||1 }
There are many source codes for solving l1-norm regularization problems, but their loss functions are limited to least squares and logistic regression.
Do you know any source code where f(x) can be defined by the user?
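If no packaged solver fits, note that any proximal-gradient (ISTA-style) loop already supports a user-defined smooth f: you only need to supply its gradient, since the l1 term is handled in closed form by soft-thresholding. A minimal NumPy sketch; the example loss f(x) = ½‖x − b‖² and all parameter values are just for illustration:

```python
import numpy as np

def prox_grad(grad_f, x0, lam, step, n_iter=300):
    """Proximal gradient (ISTA-style) for min_x f(x) + lam*||x||_1,
    where the user supplies grad_f; the l1 prox is soft-thresholding."""
    x = x0.copy()
    for _ in range(n_iter):
        z = x - step * grad_f(x)                               # gradient step on f
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # l1 prox
    return x

# user-defined smooth loss: f(x) = 0.5*||x - b||^2, so grad_f(x) = x - b
b = np.array([3.0, -0.2, 0.0, 1.5])
x_hat = prox_grad(lambda x: x - b, np.zeros(4), lam=0.5, step=1.0, n_iter=100)
```

The step size must respect the Lipschitz constant of grad_f (or be chosen by backtracking); accelerated variants (FISTA) use the same user-supplied gradient interface.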
Recall: if we have P non-zero elements in a sparse vector of length N, the minimum required number of samples will be C·P·log(N/P). What if we have Q groups of known or unknown lengths instead? Do we gain any advantage in the number of measurements from the group structure?
In my MSc research, I dealt with the compressed sensing approach.
In order to recover the desired signal from the acquired signal, I used a solver that minimizes the objective function with a regularization term based on the l1-norm or the TV-norm, but I am sure that there are many other options.
So I will be very glad if someone will recommend the relevant references or links.
Thank you in advance!
How is the Orthogonal Matching Pursuit algorithm implemented in MATLAB?
Does anyone know an algorithm and a simulation example for cooperative compressed sensing?
Could anybody suggest an l1-norm solver for complex sparse signals?
Currently, I am using the OMP (Orthogonal Matching Pursuit) algorithm for sparse recovery. But it is not able to detect the non-zero samples localized near the origin. Please suggest an algorithm that works for complex sparse signals and detects all non-zero samples.
I am working on designing multimodal biometric watermarking techniques using compressive sensing theory to improve the security and payload capacity of the techniques. But I am facing a problem: when I reconstruct an image or signal from sparse coefficients, I am not able to reconstruct it because I have not found a proper optimization technique.
Regarding denoising with tree-structured wavelets, I am trying to reproduce the result of Section 5.3 of the paper "Proximal Methods for Hierarchical Sparse Coding" by R. Jenatton et al., using the mexProximalTree function of their SPAMS toolbox: http://spams-devel.gforge.inria.fr/index.html.
I want to create this thread to collect and archive knowledge of large-scale compressive sensing, and hopefully give other academics a good overview of how to do compressed sensing on high-dimensional, high-resolution data such as images. Recently I worked with Hidden Markov Tree related models; therefore, Duarte's work on "Model-based Compressive Sensing" was brought to my attention. However, as with many other methods, the size of the input is limited by the need to create and store "sensing matrices" (randomly generated for incoherence with the input signals). Therefore, I can only process images up to size 64×64, which is simply not enough for me. Trying to push this boundary, I found "noiselets", which can help sense sparse structures of image data. I have done some experiments combining Hidden Markov Trees and noiselets, but there are no positive results so far. That is why I wish to know more about noiselets or other methods that can produce good sensing matrices without creating and storing enormous explicit matrices. I appreciate any hints, directions, comments, etc., and thanks in advance. Some references:
Model-based Compressive Sensing - RG Baraniuk, V Cevher, MF Duarte, C Hegde
Using Correlated Subset Structure for Compressive Sensing Recovery - A Divekar, D Needell
For example, if I could do some filtering operation on the original signal, is there a way to do the same filtering operation directly on the compressed measurements, as long as the filter preserves the sparsity of the signal? I.e., say the filter keeps all k-sparse signals k-sparse.
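In general this depends on the measurement operator, but there is at least one clean special case: partial-Fourier measurements combined with an LTI (circular-convolution) filter. Convolution is diagonal in the Fourier domain, so filtering reduces to multiplying each retained measurement by the filter's frequency response at that frequency, with no reconstruction needed. A NumPy check, where the signal, the moving-average filter, and the sizes are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 128, 40
n = np.arange(N)
x = np.sin(2 * np.pi * 5 * n / N) + 0.5 * np.sin(2 * np.pi * 20 * n / N)

h = np.zeros(N)
h[:5] = 1.0 / 5.0                            # simple moving-average filter
H = np.fft.fft(h)                            # its frequency response
omega = rng.choice(N, size=M, replace=False) # random DFT rows = partial Fourier Phi

y = np.fft.fft(x)[omega]                     # compressed measurements of x
# filtering in the compressed domain: scale each measurement by H at its row
y_filt = H[omega] * y

# same as measuring the filtered (circularly convolved) signal directly
x_filt = np.real(np.fft.ifft(np.fft.fft(x) * H))
y_direct = np.fft.fft(x_filt)[omega]
```

For a general random Φ no such commutation exists, so compressed-domain filtering requires ΦF = GΦ for some G, which is a strong structural condition on the pair (Φ, F).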