Science topic
Statistical Signal Processing - Science topic
Explore the latest questions and answers in Statistical Signal Processing, and find Statistical Signal Processing experts.
Questions related to Statistical Signal Processing
I have implemented a recursive least squares (RLS) algorithm. I am testing it using random discrete-time functions and it works well. However, when I try to estimate the parameters of a certain transfer function, it does not estimate them correctly unless I add noise to the system. Is that reasonable? What are the conditions under which an RLS algorithm works well?
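A common cause is lack of persistent excitation: if the input does not excite enough frequencies, the regressor covariance is rank-deficient and the parameters are not identifiable; a broadband (noise-like) input fixes this, which is likely why adding noise helps. A minimal RLS sketch in Python/NumPy (the FIR toy model and function name are illustrative, not the poster's actual setup):

```python
import numpy as np

def rls_identify(u, d, order, lam=0.99, delta=1e3):
    """Recursive least squares identification of an FIR model.
    u: input, d: measured output, order: number of taps,
    lam: forgetting factor, delta: initial inverse-correlation scaling."""
    w = np.zeros(order)                      # parameter estimate
    P = delta * np.eye(order)                # inverse correlation matrix
    for n in range(order - 1, len(u)):
        phi = u[n - order + 1:n + 1][::-1]   # regressor, newest sample first
        k = P @ phi / (lam + phi @ P @ phi)  # gain vector
        e = d[n] - w @ phi                   # a priori error
        w = w + k * e
        P = (P - np.outer(k, phi @ P)) / lam
    return w

rng = np.random.default_rng(0)
u = rng.standard_normal(2000)        # white input: persistently exciting
b_true = np.array([0.5, -0.3, 0.2])
d = np.convolve(u, b_true)[:len(u)]  # noiseless system output
w_hat = rls_identify(u, d, order=3)  # converges to b_true
```

With a narrowband deterministic input instead of white noise, the same code produces a rank-deficient regressor covariance and the estimate drifts.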
I need some guidance on computing the CRLB numerically to estimate the Doppler frequency for the synthetic signal given below.
X = A*sinc(B*(t-τ)).*exp(j*2*pi*F*(t-τ)); where θ = [F, A, τ]
"A" is complex and has amplitude and phase, "F" is the Doppler frequency, and "τ" is the azimuth shift.
I am trying to approximate an AR(500) process by a lower order AR(n) n<10 for example. Is there any efficient technique for this problem?
Many thanks in advance.
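One standard approach is to fit the low-order model to the (sample or theoretical) autocorrelation sequence of the high-order process via the Yule-Walker equations. A sketch in Python (the "high-order" model coefficients here are illustrative):

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def yule_walker(x, order):
    """Fit AR(order) coefficients to data x via the Yule-Walker equations,
    using the convention x[t] = sum_k a[k]*x[t-k] + e[t]."""
    x = np.asarray(x) - np.mean(x)
    N = len(x)
    r = np.array([x[:N - k] @ x[k:] for k in range(order + 1)]) / N
    return solve_toeplitz(r[:order], r[1:order + 1])   # solves R a = r

# Long realization of a "high-order" AR model dominated by a few taps
rng = np.random.default_rng(1)
a_true = np.zeros(50)
a_true[0], a_true[1], a_true[2] = 1.2, -0.5, 0.02
x = lfilter([1.0], np.r_[1.0, -a_true], rng.standard_normal(100_000))
a_low = yule_walker(x, order=2)   # low-order fit captures the dominant dynamics
```

The low-order fit minimizes the one-step prediction error, so it matches the dominant spectral peaks of the AR(500) process; alternatives include balanced model reduction of the state-space form.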
I wonder if you could help me with the end-to-end SNR expression of a wireless communication system with a detect-and-forward relay and no direct link (please see the attached file).
Best regards.
I am working with sensor signals and running into some problems with signal manipulation. (Any idea/hint/suggestion is welcome, as I need something to move forward.)
I have changed the query and added some new details to make the question easier to understand.
I have 2 signals.
⦁ Light pink is the original signal (reference signal), with red dots showing the local maxima.
⦁ Blue is a signal measured in a test of another sensor that resembles the reference sensor but has some faults in it, as it is made by us.
⦁ These signals are plotted against time on x-axis.
I have attached some representative values of the signals here, so if anyone can help me with the logic along with MATLAB code it will be of great help.
⦁ Please open the attached link.
⦁ Copy the MATLAB code and run it after saving it.
⦁ You will see the 2 signals as shown in the figure, and when you zoom in you will clearly see the difference.
(How do I make my original pink signal flat, like the blue signal in the plots?)
Question:
⦁ I want to make my original signal (pink) look like the blue signal in terms of the flat portions only.
Common behaviour to observe for the logic:
⦁ The common behaviour I have seen in my measured signal is that it becomes flat wherever it reaches a local maximum (on either the positive or the negative side).
⦁ At every local maximum I see that my blue signal becomes flat.
Everything I have to do is with the original (pink) signal, to formulate some results.
Is there any way I can make my original signal flat just like the blue signal?
Can someone suggest the best way to do that? If someone could also provide an example of MATLAB code, it would be a great help.
Please have a look at the picture to get a glimpse about my idea.
Thanks a lot in advance for your help.
I have tried a few techniques.
The results of those techniques are as follows, but frankly nothing has worked for me so far.
I found the local maxima and, using nlfilter, applied a neighborhood rule to try to make the peaks and their neighborhoods flat. Unfortunately it does not work: the window size is fixed, whereas in my case the window size varies and depends on the position of the local maximum, and, most importantly, a constant window size changes the shape of the signal.
I have also tried to apply a varying window, but it is not working for me; maybe I do not have a good grasp of how to apply a varying-size window, and I do not know how it would work for my signal.
To cut a long story short, what I have done up to now is not working, so I need help with that.
It would be really nice if someone could show me how to solve this issue, and some MATLAB code would be a great help.
Thanks a lot in advance for your time, expertise and help.
Code for loading and plotting the data in the attached link:
load originalflatenninrough t original_signal_data measured_Signal_data
[p, l] = findpeaks(original_signal_data);     % positive local maxima
[pn, ln] = findpeaks(-original_signal_data);  % negative local maxima (troughs)
figure(1)
hold on
plot(t, original_signal_data, 'm')
plot(t, measured_Signal_data, 'b')
plot(t(l), p, 'ko', 'MarkerFaceColor', 'r');
plot(t(ln), -pn, 'ko', 'MarkerFaceColor', 'r');
legend('original signal', 'measured data signal')
hold off
Test-data example code for the NL filter (which is applied to the original signal):
n = 10;           % number of values to replace in the neighborhood of a local max
t = 0:0.001:10;
A = sin(2*pi*t);
[pks, locs] = findpeaks(A);
% [pks, locs] = findpeaks(-A);   % use this instead for local minima
locations = zeros(size(A));
locations(locs) = true;
locations = conv(locations, ones(1, 2*n+1), 'same') > 0;  % mark the neighborhoods
X = -inf(size(A));            % create temporary
X(locs) = A(locs);            % copy the local maxima
X = nlfilter(X, [1 2*n+1], @(x) max(x));  % replace all values with the local maximum
X(locs) = A(locs);            % ensure local maxima are not changed
A(locations) = X(locations);  % copy filtered temporary to output
figure()
hold on
plot(t, A, 'b')
A = sin(2*pi*t);              % regenerate the original for comparison
plot(t, A, 'g')
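A variable-window alternative to the fixed-window nlfilter approach: starting from each local extremum, grow the window outward for as long as the signal stays within a tolerance of the extremum value, then clamp that region. A sketch in Python/NumPy (the tolerance `tol` is an assumed tuning parameter; the same logic translates directly to MATLAB):

```python
import numpy as np
from scipy.signal import find_peaks

def flatten_around_extrema(x, tol):
    """Clamp the signal to each local extremum value over a variable-size
    window: extend left/right from the extremum while the signal stays
    within `tol` of the extremum value (tol is an assumed tuning knob)."""
    y = x.copy()
    extrema = np.concatenate([find_peaks(x)[0], find_peaks(-x)[0]])
    for p in extrema:
        lo = p
        while lo > 0 and abs(x[lo - 1] - x[p]) < tol:
            lo -= 1
        hi = p
        while hi < len(x) - 1 and abs(x[hi + 1] - x[p]) < tol:
            hi += 1
        y[lo:hi + 1] = x[p]           # flat plateau around the extremum
    return y

t = np.arange(0.0, 2.0, 0.001)
x = np.sin(2 * np.pi * t)
y = flatten_around_extrema(x, tol=0.05)   # flat near each peak/trough
```

Because the window grows until the amplitude condition fails, its size adapts automatically to the local shape of the signal, which is exactly what a fixed nlfilter window cannot do.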
I don't know whether my question is correct or not.
I need to develop an algorithm that will compare two signals (one reference signal, the other the measured signal values from a sensor) and generate some metric(s) to describe the changes between them. I am not good at signal processing and analysis, so I would appreciate any help.
I have attached figures below to give an idea of what both signals look like.
Some of the differences that I am expecting are:
1 - The amount of error between the reference signal and the measured-values signal (I want to calculate the value of the overall error or difference).
2 - The changes that occurred in the measured signal relative to the reference signal, such as amplitude changes in some parts, phase changes, offsets, differences in peaks and troughs, and rise and fall transitions.
(In short, I want an overall idea of all the changes that happened in the measured signal in comparison with the reference signal.) My signal is complex and has a lot of values, so I have been unable to develop an approach on my own.
The algorithm needs to output some generic metrics which can be used to quantify changes in any or all of these parameters. Any guidance on what method(s) I could use to do this would be a great help.
For finding the error I have thought of RMSE; is it a good idea to take this approach, given that my signals have the same length? The reference signal and sensor signal data are each of size 1x1626100 double.
A correlation function also came to mind, but to my knowledge correlation can only measure the similarity between signals, not the total error or the overall changes that occurred.
The generated signal provides information about changes in steering angle over time.
Various measurements are taken over time at the same location, and the final objective is to determine how the signals have changed over time (due to physical/hardware changes).
We have run different tests to find out how physical/hardware changes affect the signal values, and in every test the speed, velocity, or brake conditions of the cars are different. I also need to take these factors into account in my algorithm.
The measurement system may indeed be moving at different speeds, and may have different acceleration profiles during the measurement. This needs to be accounted for in my algorithm.
I am performing this algorithm development in Matlab.
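A sketch of a few generic metrics of the kind described above: RMSE for overall error, mean error for offset, a standard-deviation ratio for amplitude change, and the cross-correlation peak lag for time/phase shift (the metric names and toy signals are illustrative; for measurements taken at different speeds, the signals would first need to be aligned, e.g. by resampling against distance rather than time):

```python
import numpy as np

def compare_signals(ref, meas):
    """A few generic difference metrics (names are illustrative)."""
    ref = np.asarray(ref, dtype=float)
    meas = np.asarray(meas, dtype=float)
    err = meas - ref
    rmse = np.sqrt(np.mean(err ** 2))            # overall error
    offset = np.mean(err)                        # DC offset
    amp_ratio = np.std(meas) / np.std(ref)       # overall amplitude change
    # lag (in samples) of the cross-correlation peak: crude time/phase shift
    r = np.correlate(meas - meas.mean(), ref - ref.mean(), mode='full')
    lag = int(np.argmax(r)) - (len(ref) - 1)
    return {'rmse': rmse, 'offset': offset,
            'amplitude_ratio': amp_ratio, 'lag_samples': lag}

t = np.arange(0.0, 1.0, 0.001)
ref = np.sin(2 * np.pi * 5 * t)
meas = 1.2 * np.sin(2 * np.pi * 5 * (t - 0.01)) + 0.1  # scaled, delayed, offset
m = compare_signals(ref, meas)
```

The same metrics translate directly to MATLAB (`rms`, `mean`, `std`, `xcorr`), and per-peak statistics from `findpeaks` can be added for the rise/fall and peak/trough differences.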
PubPeer: May 29, 2017
Unregistered Submission:
(May 25th, 2017 2:46 am UTC)
In this review the authors attempted to estimate the information generated by neural signals used in different Brain Machine Interface (BMI) studies to compare performances. It seems that the authors have neglected critical assumptions of the estimation technique they used, a mistake that, if confirmed, completely invalidates the results of the main point of their article, compromising their conclusions.
Figure 1 legend states that the bits per trial from 26 BMI studies were estimated using Wolpaw’s information transfer rate method (ITR), an approximation of Shannon’s full mutual information channel theory, with the following expression:
Bits/trial = log2(N) + P·log2(P) + (1−P)·log2[(1−P)/(N−1)]
where N is the number of possible choices (the number of targets in a center-out task as used by the authors) and P is the probability that the desired choice will be selected (used as percent of correct trials by the authors). The estimated bits per trial and bits per second of the 26 studies are shown in Table 1 and represented as histograms in Figure 1C and 1D respectively.
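For reference, the expression above can be evaluated directly (with care at the endpoints P = 0 and P = 1, where the entropy terms vanish):

```python
import math

def wolpaw_bits_per_trial(N, P):
    """Wolpaw's ITR: log2 N + P log2 P + (1-P) log2[(1-P)/(N-1)].
    The P log2 P and (1-P) log2(...) terms are taken as 0 at the endpoints."""
    b = math.log2(N)
    if 0 < P < 1:
        b += P * math.log2(P) + (1 - P) * math.log2((1 - P) / (N - 1))
    elif P == 0 and N > 1:
        b += math.log2(1.0 / (N - 1))
    return b

b_chance = wolpaw_bits_per_trial(2, 0.5)   # chance-level binary task: 0 bits
b_perfect = wolpaw_bits_per_trial(2, 1.0)  # perfect binary task: 1 bit
```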
Wolpaw's approximation used by the authors is valid only if several strict assumptions hold: i) the BMI is a memoryless and stable discrete transmission channel, ii) all output commands are equally likely to be selected, iii) P is the same for all choices, and the error is equally distributed among all remaining choices (Wolpaw et al., 1998; Yuan et al., 2013; Thompson et al., 2014). Violating the assumptions of Wolpaw's approximation leads to incorrect ITR estimates (Yuan et al., 2013). Because BMI systems typically do not fulfill several of these assumptions, particularly those of uniform selection probability and uniform classification-error distribution, researchers are encouraged to be careful in reporting ITR, especially when using it to compare different BMI systems (Thompson et al., 2014). Yet Tehovnik et al. (2013) failed to report whether the assumptions of Wolpaw's approximation held for the 26 studies they used. This omission invalidates their estimations. Additionally, inspection of the original studies reveals that the authors failed at the fundamental task of understanding and interpreting the paradigms used in some of them, which led to incorrect input values for their estimations in at least 2 studies.
The validity of the estimated bits/trial and bits/second presented in Figure 1 and Table 1 is crucial to the credibility of the main conclusions of the review. If these estimations are incorrect, as they seem to be, that would invalidate the main claim of the review, namely the low performance of BMI systems. It would also cast doubt on the remaining points argued by the authors, making their claims substantially weaker. Another review published by the same group (Tehovnik and Chen 2015), which used the estimations from the current one, would also be compromised in its conclusions. In summary, for this review to be considered, the authors must address the ways in which the analyzed BMI studies do or do not violate the ITR assumptions.
References
Tehovnik EJ, Woods LC, Slocum WM (2013) Transfer of information by BMI. Neuroscience 255:134–46.
Shannon C E and Weaver W (1964) The Mathematical Theory of Communication (Urbana, IL: University of Illinois Press).
Wolpaw JR, Ramoser H, McFarland DJ, Pfurtscheller G (1998) EEG-based communication: improved accuracy by response verification. IEEE Trans. Rehabil. Eng. 6:326–33.
Thompson DE, Quitadamo LR, Mainardi L, Laghari KU, Gao S, Kindermans PJ, Simeral JD, Fazel-Rezai R, Matteucci M, Falk TH, Bianchi L, Chestek CA, Huggins JE (2014) Performance measurement for brain-computer or brain-machine interfaces: a tutorial. J. Neural Eng. 11(3):035001.
Yuan P, Gao X, Allison B, Wang Y, Bin G, Gao S (2013) A study of the existing problems of estimating the information transfer rate in online brain–computer interfaces. J. Neural Eng. 10:026014.
Is precoding done due to the strong channel correlation? Couldn't it be thought of as an LTI attenuation channel?
My doubt is about the dimension of the subspace when a signal is being oversampled. I would like to 'visualize' an example of this key idea of blind calibration. The original text follows:
"Assume that the sensor network is slightly oversampling the phenomenon being sensed. Mathematically, this means that the calibrated snapshot x lies in a lower dimensional subspace of n-dimensional Euclidean space.
Let S denote this “signal subspace” and assume that it is r-dimensional, for some integer 0<r<n. For example, if the signal being measured is bandlimited and the sensors are spaced closer than required by the Shannon-Nyquist sampling rate, then x will lie in a lower dimensional subspace spanned by frequency basis vectors. If we oversample (relative to Shannon-Nyquist) by a factor of 2, then r =n/2. "
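The idea can be visualized numerically: build x as a combination of the first r DFT basis vectors and verify that it has no component outside that r-dimensional subspace. A sketch in Python/NumPy (the dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 64, 32                            # 2x oversampling: r = n/2
F = np.fft.ifft(np.eye(n), axis=0)       # columns = DFT (frequency) basis vectors
S = F[:, :r]                             # basis of the r-dimensional signal subspace
coeffs = rng.standard_normal(r) + 1j * rng.standard_normal(r)
x = S @ coeffs                           # a "calibrated snapshot": bandlimited signal
P = S @ np.linalg.pinv(S)                # orthogonal projector onto the subspace
residual = np.linalg.norm(x - P @ x)     # component outside the subspace: ~0
```

Any snapshot that is not bandlimited (e.g. after uncalibrated sensor gains multiply each entry) picks up a nonzero residual, which is precisely what blind calibration exploits.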
K&L actually defined what is effectively the "ultimate" sufficient statistic, which, in signal processing lingo, is called the log likelihood ratio (LLR). The LLR is the "ultimate" sufficient statistic because it is precisely the instantaneous information content available from the data bearing upon the specified binary decision (that is, the data can tell you nothing about the binary decision of interest that the LLR cannot). The generalized form of SNR (the symmetric form of the KL divergence) is then the associated average information content (the structural equivalent of entropy). It turns out this way of measuring information content (using LLRs, which I call "discriminating information") measures the same basic "stuff" that Shannon ("entropic information") does, but using a different measurement scale [like Kelvin rather than Fahrenheit], developed for the context of a binary decision rather than for the context of a discrete communications stream. The former is much more general (due to the generality of the underlying context), while also providing the LLR as an instantaneous measure (i.e., no ensemble averaging), a critical element missing from entropic information. If you are interested in references exploring the structure of discriminating information in more detail, feel free to contact me at jjpolcari@verizon.net.
After gathering a large amount of signal-propagation measurement data, I assumed that computing the standard deviation and variance would help explain how each point deviates from the mean.
What are the optimal ways to detect flat/smooth regions in a noisy image (other than the standard deviation because it is not very effective)?
Recently I came across a case where I tend to believe that adding noise actually helps the CR detect the signal better. Suppose we have an energy detector, which works by averaging the energy of the received signal. If we add noise, the noise energy can be constructive or destructive in nature. If it is constructive, it helps the detector detect the signal, and vice versa. But we are always told that noise is disturbing in nature and affects spectrum sensing negatively. I would be glad if somebody could explain why this happens.
Consider the following expression xHAx, where x is a known vector, i.e. a deterministic parameter, xH is its Hermitian transpose, and A is a Hermitian, positive semi-definite matrix constructed as the summation of D rank-one matrices of the form didiH for i = 1,...,D. The vectors di are distributed as standard complex Gaussians (zero mean and unit variance).
The question is which is the statistical model of xHAx?
Thanks in advance.
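Since A = Σᵢ dᵢdᵢᴴ, the form expands as xᴴAx = Σᵢ |dᵢᴴx|², a sum of D independent exponential variables with mean ‖x‖², i.e. a Gamma(D, ‖x‖²) variable (a scaled chi-square with 2D degrees of freedom). A Monte Carlo sanity check of the implied mean and variance in Python/NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n, D, trials = 4, 3, 100_000
x = np.array([1.0, 2.0, -1.0, 0.5]) + 0j
# standard complex Gaussian entries: real and imaginary parts with variance 1/2
d = (rng.standard_normal((trials, D, n)) +
     1j * rng.standard_normal((trials, D, n))) / np.sqrt(2)
q = np.sum(np.abs(d @ x.conj()) ** 2, axis=1)   # x^H A x, one value per trial
s2 = np.linalg.norm(x) ** 2                      # ||x||^2
mean_err = abs(q.mean() - D * s2) / (D * s2)     # Gamma mean: D * ||x||^2
var_err = abs(q.var() - D * s2 ** 2) / (D * s2 ** 2)  # Gamma variance: D * ||x||^4
```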
Which has better performance in spectrum sensing: 1. a game-theory approach, or 2. a statistical signal processing approach such as MLE or the NP method?
During CSP, when I subject the composite covariance matrix to eigenvalue decomposition, one of the diagonal elements is negative with very low magnitude compared to the others, because of which my whitening matrix [ sqrt(inv(diag(D))) ] becomes complex. Can I use abs(diag(D)) instead of diag(D) to overcome this problem? Will it change my classification result? Thank you in advance.
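A covariance matrix is positive semi-definite in theory, so a small negative eigenvalue is numerical error; a common remedy is to clamp eigenvalues to a small positive floor (rather than taking the absolute value), which keeps the whitening matrix real without inflating a near-null noise direction. A sketch in Python/NumPy (the 3×3 matrix is an illustrative stand-in for the composite covariance):

```python
import numpy as np

# Illustrative covariance with one (numerically) negative eigenvalue
C = np.array([[2.0, 0.5, 0.1],
              [0.5, 1.0, 0.2],
              [0.1, 0.2, 1e-12]])
D, V = np.linalg.eigh(C)                     # D[0] is slightly negative here
D_clamped = np.maximum(D, 1e-10 * D.max())   # floor tiny/negative eigenvalues
W = np.diag(1.0 / np.sqrt(D_clamped)) @ V.T  # real-valued whitening matrix
```

Because the offending eigenvalue is tiny, clamping (or discarding that component entirely) barely perturbs the whitened data, so the effect on classification is usually negligible; `abs()` happens to behave similarly for tiny magnitudes but has no such justification for larger negative values.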
Hi,
In statistical signal processing, a lot of research is based on complex analysis, and many techniques and methods are formulated in the complex domain. Complex information only matters in the form of magnitude and phase. So what is the difference between using magnitude information alone and using the real and imaginary parts of the data? Why is phase important? What is the difference between a signal with phase information and one without?
Appreciate your comments
I know that the Laplacian distribution function is defined as follows:
f(x) = (b/2)*exp(-b*|x-\mu|)
I also know the mean and variance for the ratio of two normal variables.
Can anyone explain what the mean and variance of the Laplacian distribution would be?
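For the density written above, the mean is μ and the variance is 2/b², which can be checked numerically (note that NumPy parameterizes the Laplacian by scale = 1/b):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, b = 1.5, 2.0
# numpy uses f(x) = (1/(2*scale)) * exp(-|x - loc| / scale), so scale = 1/b
samples = rng.laplace(loc=mu, scale=1.0 / b, size=500_000)
mean_est = samples.mean()   # should approach mu
var_est = samples.var()     # should approach 2 / b**2
```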
|h| - |\tilde{h}| = |e_h|
Kindly provide the distribution of |e_h| if both |h| and |\tilde{h}| follow a Nakagami-m distribution,
where |h| is the magnitude of the actual channel fading coefficient and |\tilde{h}| is that of the estimated one.
Hi everyone,
Let A be a full-rank square matrix (its null space is trivial). When does y^T A x = 0 occur? (where T denotes transposition)
It could be that this problem is case-specific, so please find attached a document where x, y, and A take particular forms.
But in any case, is there a condition under which y^T A x = 0 may occur?
Contributors would, indeed, be acknowledged.
Thank you very much
Let R' = R + D be the estimated correlation matrix, where R is the original correlation matrix and D models the estimation error. My question concerns the existence of models for this matrix D.
Dear all,
Assume we have the following vector linear model:
x = Hs + n where x is the received vector, H is a full column rank matrix and s is the vector of signals. The noise is n.
Why do some people take a Bayesian approach to estimating H, s, and the noise variance, whereas others take a deterministic one?
Thanks in advance.
My question is: why do we reuse the orthogonal pilot symbols among the cells in massive MIMO, which leads to pilot contamination? In other words, why don't we have enough orthogonal pilots to serve all the users in the system with different pilots? This may be related to another question: how can we generate orthogonal pilots?
Could anyone please refer to any reference that can be useful for me to answer these questions?
Dear All,
Assume the following system of equations:
Ax = b where b is the vector of data of size Nx1, x is the vector of unknown of size Nx1 and A is the matrix of coefficients of size NxN.
The solution is x = pinv(A)*b where pinv(A) is the pseudo inverse of A.
Now if A is of rank N-1, how do we solve for x? I know that infinitely many solutions exist, but is there another approach to solving for x?
Thank you in advance.
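When A has rank N−1 and b is in the range of A, pinv(A)*b returns the minimum-norm solution, and every other solution differs from it by a multiple of the null-space direction (obtainable from the SVD). A sketch in Python/NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
A = rng.standard_normal((N, N))
A[:, -1] = A[:, 0] + A[:, 1]      # force rank N-1
x_true = rng.standard_normal(N)
b = A @ x_true                    # consistent right-hand side (b in range(A))
x_mn = np.linalg.pinv(A) @ b      # minimum-norm solution
_, _, Vt = np.linalg.svd(A)
null_dir = Vt[-1]                 # basis of the 1-D null space
x_other = x_mn + 3.0 * null_dir   # another exact solution
```

If b is not in the range of A, the same pinv expression instead gives the minimum-norm least-squares solution; regularization or side constraints are needed to pick one particular solution on physical grounds.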
Dear All,
Assume I have a uniform linear array of 3 antennas. Distance uncertainties and other imperfections might perturb the steering vector away from the true one; thus, DoA estimation using ML or subspace techniques would fail.
I would like to know if it is possible to calibrate when the number of received signals is more than 3 (due to severe multipath).
Thank you in advance.
Dear researchers
For a random process, do you think the upper and lower envelopes of that process are independent of the process itself? If they are independent, how can we prove that statistically?
Hope you can point out some references if available.
Best regards
We all know that the Correlation matrix is :
Rxx = E{x.x^H} where E{} denotes expectation and H is the hermitian operator.
In practice, and in most cases, the E{} is replaced by the sample average.
x is an N x 1 column complex vector.
I would like to know how the eigenvalues of Rxx are affected if x is multiplied by a diagonal matrix C that changes every sample and depends only on a scalar, say 'alpha', i.e.
Rx'x' = E{x'.x'^H} where x' = Cx
or
Rx'x' = (1/N) * ( C(1)x(1)[C(1)x(1)]^H + ... + C(N)x(N)[C(N)x(N)]^H )
I am aware that industry nowadays is moving towards fingerprinting techniques rather than purely online algorithms. Could anyone point me to the most recent state-of-the-art paper on the indoor localization topic?
Thanks
First, I took a signal X and performed the DWT on it.
Then I computed Y = AX (where A is a random m×n matrix with m << n).
Later, Y is taken as input to the reconstruction algorithm and the result is stored in another variable.
How can I perform the IDWT of the reconstructed signal without knowing the wavelet coefficients of the reconstructed signal?
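Note that the vector the reconstruction algorithm recovers is itself the vector of wavelet coefficients, so the inverse transform applies to it directly. A self-contained sketch with a single-level orthonormal Haar transform (an illustrative stand-in for whatever wavelet was actually used):

```python
import numpy as np

def haar_dwt(x):
    """Single-level orthonormal Haar DWT: concatenated approximation
    and detail coefficients."""
    x = np.asarray(x, dtype=float)
    cA = (x[0::2] + x[1::2]) / np.sqrt(2)
    cD = (x[0::2] - x[1::2]) / np.sqrt(2)
    return np.concatenate([cA, cD])

def haar_idwt(c):
    """Inverse of haar_dwt: split the coefficient vector and interleave."""
    half = len(c) // 2
    cA, cD = c[:half], c[half:]
    x = np.empty(2 * half)
    x[0::2] = (cA + cD) / np.sqrt(2)
    x[1::2] = (cA - cD) / np.sqrt(2)
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
c = haar_dwt(x)        # this coefficient vector is what CS reconstructs
x_rec = haar_idwt(c)   # apply the inverse transform to it directly
```

The only thing the synthesis step needs is the same wavelet and decomposition structure used at analysis time, not the coefficients themselves.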
Dear All,
In a Deterministic Framework where we have the following Linear Model:
y(t) = H.x(t) + n(t)
where
y(t) is the observed vector of size Nx1 (we have T observations)
H is an NxP matrix (no constraint on P, P could be smaller or larger than N)
x(t) is a Px1 vector
n(t) is random noise.
It is well known that if n(t) is a Gaussian process, you cannot do better than maximum likelihood, i.e. the L2 norm is optimal for estimating the parameters in H and x(t).
My question is : when does ML become sub-optimal ?
Thanks.
During system state estimation, the EKF is a useful method, but the initial state X0, the process noise covariance matrix Q, and the measurement noise covariance matrix R are not easily determined, and their values directly affect the estimated state. How best to choose proper values for them?
If we were able to estimate the noise power blindly for a conventional energy detector (CED), does that shift the CED from semi-blind to fully blind detector?
I have a doubt about plotting an ROC. I have the following parameters:
1) Theoretical probability of detection (Pd_theory)
2) Assumed probability of false alarm (Pfa)
3) Simulated probability of detection (Pd_sim)
4) Simulated probability of false alarm (Pf_sim)
I want to plot Pd_sim with respect to the false alarm probability, but I don't know which false alarm value would be more correct to use: Pfa or Pf_sim, and why?
Are there any standard steps for proving the equivalence between two stationary random processes? Thanks
In spectrum sensing, the threshold is the most important term in estimating the performance measure. To carry out a particular type of CFAR detection, the threshold is determined by back-calculation from the false-alarm probability, but most of the time the result does not match the predefined detection performance. So I want to know whether there is any other method for finding the threshold so that the resulting performance measure matches exactly.
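For a concrete case: with an energy detector over N real Gaussian noise samples, the test statistic is a scaled chi-square under H0, so the threshold follows in closed form from the target false-alarm probability and the achieved Pfa can be checked by simulation. A sketch in Python (parameter values are illustrative):

```python
import numpy as np
from scipy.stats import chi2

# Under H0, T = sum(x[n]^2) over N real Gaussian samples of variance sigma2
# satisfies T / sigma2 ~ chi-square with N degrees of freedom.
N, sigma2, pfa = 100, 1.0, 0.05
thr = sigma2 * chi2.ppf(1.0 - pfa, df=N)   # CFAR threshold for target Pfa

# Empirical check of the achieved false-alarm rate
rng = np.random.default_rng(0)
noise = rng.normal(0.0, np.sqrt(sigma2), size=(50_000, N))
T = np.sum(noise ** 2, axis=1)
pfa_emp = np.mean(T > thr)                 # close to the target 0.05
```

Mismatches in practice usually come from the noise variance being estimated rather than known, or from the H0 statistic not actually following the assumed distribution, rather than from the inversion itself.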
Can someone provide me with information about raw carrier-phase data for GPS?
It is clear that a combination of several objective functions is better than a single objective function, but the problem is how to combine them. Is the choice random or not? Does it follow some rules?
For example, (OBJECTIVE FUNCTION=w1.ISE+w2.ITAE+w3.IAE). How can I determine the weight value (w) for these objective functions? Can I determine it randomly?
Sincerely
I just want to know the difference between static and dynamic state estimation and their applications. Please help me out with this. Thanks in advance.
In cyclostationary signal detection, for a particular type of signal modulation, how does one determine the value of the cyclic frequency alpha so that it gives the required detection?
I have implemented this detector for an AWGN channel in MATLAB, but I am getting vague results: its performance does not vary with changes in SNR, which is a very strange result for me. I don't know what mistake I am making. I am attaching my code; please have a look and comment. Here I am finding the cyclostationary feature of the signal for its detection. First I take the FFT of the signal, then shift its frequency by +alpha for the transform and -alpha for its conjugate. Then I multiply both and take the sum of all terms, which is how the theory explains cyclostationary feature detection. I would be very grateful if somebody could help me with this.
function S = cyclio_stat_TestStatics(x, N)
lx = length(x);
X = zeros(1, 2*N+1);   % spectral components (was zeros(2*N+1): a full matrix)
Y = zeros(1, 2*N+1);   % conjugate spectral components
Ts = 1/N;
for f = -N:N
    d = exp(-1j*2*pi*f*(0:lx-1)*Ts);   % demodulate to frequency f
    xf = x .* d;
    X(f+N+1) = sum(xf);
    Y(f+N+1) = conj(sum(xf));
end
alpha = 10;   % cyclic frequency (example value)
f = 5;
f1 = f + floor(alpha/2) + (floor(-((N-1)/2)):floor((N-1)/2));
f2 = f - floor(alpha/2) + (floor(-((N-1)/2)):floor((N-1)/2));
S = sum(X(f1+N+1) .* Y(f2+N+1)) / N;   % spectral correlation estimate
S = abs(S) / lx;
end
I am simulating a feature detector in a noisy environment consisting of AWGN and impulsive noise, but I am getting a strange result: the signal with impulsive + AWGN noise has a better detection probability than the signal with AWGN alone. I know something is wrong: how is it possible that the signal with more noise (such as impulsive noise) is easier to detect? Please share your experience.
In signal processing, what is the significance of the fourth moment for detecting extrema?
As far as I have found, there is no way to find the actual pdf and its parameters (such as mean and variance) when the primary signal is present in cyclostationary spectrum sensing. So how do I define an analytic expression for the probability of detection, if one exists?
Actually, I am working on multichannel EEG data obtained from scalp electrodes of meditating and non-meditating subjects. We want to quantify the changes that occur in one's brain signals during meditation.
I have preprocessed the signals by bandpass filtering, normalization, and artifact removal by wavelet thresholding. After that I segmented the data set of each channel (we have 64 channels per subject and 64000 samples per channel, the sampling frequency being 256 Hz). I considered 1-second (i.e. 256-sample) segments with 50 percent overlap, so in total we have 499 segments per channel per subject.
Then I decomposed each segment using wavelet decomposition and calculated statistics such as mean, variance, kurtosis, and skewness from each band per segment per channel per subject. But I am unable to form a feature vector that I can input into a classifier. Please help.
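One common arrangement is a matrix whose rows are observations (segments, possibly one block per channel) and whose columns are features (the statistics per band, concatenated). A sketch in Python (random data and even splits stand in for the real EEG segments and wavelet sub-bands):

```python
import numpy as np
from scipy.stats import kurtosis, skew

def segment_features(segment, n_bands=4):
    """Feature vector for one segment: mean, variance, skewness and
    kurtosis per band (even splits stand in for wavelet sub-bands)."""
    feats = []
    for band in np.array_split(segment, n_bands):
        feats.extend([band.mean(), band.var(), skew(band), kurtosis(band)])
    return np.array(feats)                    # length n_bands * 4

rng = np.random.default_rng(0)
segments = rng.standard_normal((499, 256))    # 499 segments of 256 samples each
X = np.vstack([segment_features(s) for s in segments])   # rows = observations
```

To include all 64 channels, concatenate the per-channel feature vectors along the columns (giving 499 rows per subject) and keep a parallel label vector (meditating / non-meditating) for the classifier.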
In order to combat this?
In my structural model the results indicate a significant positive direct effect, a significant negative indirect effect, and an insignificant total effect. How do I interpret this?
Generally the P value for the direct effect is obtained from the general regression output, whereas the significance levels for indirect effects are obtained from bootstrapping (two-tailed significance). Can we use all P values from bootstrapping?
How do these parameters give better results than other parameters?
MATLAB code, or some idea of how to create an environment for component carriers.
I guess time-series analysis can also be studied under the scope of statistical signal processing. Is this correct? Maybe someone could give me a hand in selecting introductory, intermediate, and advanced textbooks for multivariate time-series analysis. Thanks!!
Resolution of Direction of Arrival.
The term "resolution" is often mentioned in connection with DOA estimation; it refers to the accuracy of determining the direction of the received signal and, when there is more than one source, of locating each one in the right direction.
I believe it is possible and hope to be able to share real data soon. Need feedback.
I am working on a hybrid UWB+Zigbee/WiFi platform, and I was wondering whether we can use the same reconfigurable transceiver in MATLAB. I am at the initial stages of the assignment, and according to theory any signal with a fractional bandwidth over 0.2 is considered UWB. That means if we can adjust our transceiver so that it gives a fractional bandwidth > 0.2 it will be UWB, and < 0.2 means narrowband. Please guide.
Check whether it is unbiased and has minimum variance.
If yes, then tell me why it is superior.
How can one estimate the expected value of an unknown signal at the discretization points of a linear dynamic system whose model is not known?
While transmitting the signal in highly noisy environments, increasing the signal power (i.e. increasing PSD) will not affect the signal.
I want to find the registration point of a signal to process it further. Is centroid technique okay? Can you suggest some other technique?
My purpose is to find the similarity and dissimilarity between two signals that look the same by eye. Any statistical method is also appreciated.
Thanks
I'm working on pupil-diameter data caused by emotions, and I get two signals, for positive and negative emotions. I'm looking for a way to find the differences between them using the whole signal rather than a portion of it. I used 1st and 2nd derivatives, but the result is not clear and the differences are not obvious.
Is it OK to say that whenever a delay element is applied to the output (e.g. y(n-1)), the filter itself becomes IIR?