
Statistical Signal Processing - Science topic

Explore the latest questions and answers in Statistical Signal Processing, and find Statistical Signal Processing experts.
Questions related to Statistical Signal Processing
  • asked a question related to Statistical Signal Processing
Question
9 answers
I have implemented a recursive least squares (RLS) algorithm. I am testing it with random discrete-time functions and it works well. However, when I try to estimate the parameters of a certain transfer function, it does not estimate them correctly unless I add noise to the system. Is that reasonable? Under what conditions does an RLS algorithm work well?
Relevant answer
Answer
The recursive least squares (RLS) algorithm is the recursive application of the well-known least squares (LS) regression algorithm, so that each new data point is taken into account to modify (correct) a previous estimate of the parameters from some linear (or linearized) correlation thought to model the observed system. The method allows for the dynamic application of LS to time series acquired in real time. As with LS, there may be several correlation equations and a set of dependent (observed) variables. For the recursive least squares algorithm with forgetting factor (RLS-FF), acquired data is weighted according to its age, with increased weight given to the most recent data.
A particularly clear introduction to RLS is found at: Karl J. Åström, Björn Wittenmark, "Computer-Controlled Systems: Theory and Design", Prentice-Hall, 3rd ed., 1997.
Years ago, while investigating adaptive control and energetic optimization of aerobic fermenters, I applied the RLS-FF algorithm to estimate the parameters of the KLa correlation used to predict O2 gas-liquid mass transfer, thus giving increased weight to the most recent data. Estimates were improved by imposing a sinusoidal disturbance on air flow and agitation speed (the manipulated variables). The power dissipated by agitation was measured with a torque meter (pilot plant). The proposed (adaptive) control algorithm compared favourably with PID. Simulations assessed the effect of numerically generated white Gaussian noise (2-sigma truncated) and of first-order delay. This investigation was reported at (MSc Thesis):
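For readers who want to experiment, below is a minimal RLS-FF sketch in MATLAB (an illustration of the textbook recursion, not the thesis code). Note that RLS generally needs a persistently exciting input; a noiseless, insufficiently rich excitation can leave some parameter directions unidentifiable, which may explain the behaviour described in the question. The model, true parameters and forgetting factor below are arbitrary choices:
    % Minimal RLS-FF sketch; assumed model: y(k) = phi(k)'*theta + noise
    N = 200;  lambda = 0.98;                 % forgetting factor
    u = randn(N,1);                          % persistently exciting input
    y = filter([0.5 -0.3], 1, u) + 0.01*randn(N,1);   % true theta = [0.5; -0.3]
    theta = zeros(2,1);                      % initial parameter estimate
    P = 1e3*eye(2);                          % large P = vague initial knowledge
    for k = 2:N
        phi = [u(k); u(k-1)];                % regressor vector
        K = P*phi / (lambda + phi'*P*phi);   % gain
        theta = theta + K*(y(k) - phi'*theta);   % correct the previous estimate
        P = (P - K*(phi'*P)) / lambda;       % discount old data by 1/lambda
    end
    disp(theta)                              % should approach [0.5; -0.3]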
  • asked a question related to Statistical Signal Processing
Question
3 answers
I need some guidance on computing the CRLB numerically, to estimate the Doppler frequency of the synthetic signal given below:
X = A*sinc(B*(t-τ)).*exp(j*2*pi*F*(t-τ)); where θ = [F, A, τ]
"A" is complex, with amplitude and phase; "F" is the Doppler frequency and "τ" is the azimuth shift.
Relevant answer
  • asked a question related to Statistical Signal Processing
Question
3 answers
I am trying to approximate an AR(500) process by a lower-order AR(n), with n < 10 for example. Is there any efficient technique for this problem?
Many thanks in advance.
Relevant answer
Answer
I would suggest using the first n partial correlation coefficients to derive the low-order model. Look up the equations in the literature for converting AR coefficients into partial correlations and vice versa. Note that the AR coefficients are sometimes called linear prediction coefficients (LPC), and the partial correlation coefficients may be called reflection coefficients. You can convert the AR coefficients into partial correlation coefficients, take the first n of them, and convert those back into AR coefficients of order n.
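As a sketch of this conversion in MATLAB (assuming the Signal Processing Toolbox, whose poly2rc/rc2poly functions convert between an AR polynomial and reflection coefficients; a_full is the AR(500) polynomial in the usual [1 a1 ... a500] convention):
    % Reduce an AR(500) model to AR(n) via reflection coefficients
    k_full = poly2rc(a_full);    % partial correlation (reflection) coefficients
    n = 8;                       % desired low model order
    k_low = k_full(1:n);         % keep only the first n reflection coefficients
    a_low = rc2poly(k_low);      % back to an AR(n) polynomial [1 a1 ... an]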
  • asked a question related to Statistical Signal Processing
Question
10 answers
I wonder if you could help me with the end-to-end SNR expression of a wireless communication system with a detect-and-forward relay and no direct link (please see the attachment).
Best regards.
Relevant answer
Answer
As per my understanding of relaying systems:
The SNR, denoted $\gamma$, of any link (SR, source to relay, or RD, relay to destination) depends on the channel gain $h$, the corresponding transmit power $P$, and the noise power $N_0$ (or $\sigma^2$), according to the formula $\gamma = P|h|^2/N_0$. First you need to find $\gamma_{SR}$ and $\gamma_{RD}$.
Then, for the decode-and-forward case, the end-to-end SNR evaluated at the destination, $\gamma_{D}$, is the minimum of $\gamma_{SR}$ and $\gamma_{RD}$.
For the AF case you need to use this formula:
$\gamma_{D} = \gamma_{SR}\gamma_{RD}/(\gamma_{SR}+\gamma_{RD}+1)$
I hope this answers your question.
Correct me if anything is wrong.
Thanks
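A small numerical sketch of the two formulas above (the powers, noise level and Rayleigh channel draws are placeholder values for illustration):
    P_s = 1;  P_r = 1;  N0 = 0.1;               % transmit powers and noise power
    h_sr = (randn + 1i*randn)/sqrt(2);          % S->R channel coefficient
    h_rd = (randn + 1i*randn)/sqrt(2);          % R->D channel coefficient
    g_sr = P_s*abs(h_sr)^2 / N0;                % per-hop SNRs
    g_rd = P_r*abs(h_rd)^2 / N0;
    g_df = min(g_sr, g_rd);                     % decode-and-forward end-to-end SNR
    g_af = g_sr*g_rd / (g_sr + g_rd + 1);       % amplify-and-forward end-to-end SNR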
  • asked a question related to Statistical Signal Processing
Question
6 answers
I am working with sensor signals and am having some problems with signal manipulation. (Any idea/hint/suggestion is welcome, as I need something to move forward.)
I have changed the query and added some new details to make the question easier to understand.
I have 2 signals.
⦁ Light pink is the original signal (reference signal), with red dots showing local maxima.
⦁ Blue is a signal measured from another sensor that looks like the reference sensor but has some faults in it, as it is made by us.
⦁ These signals are plotted against time on the x-axis.
I have attached some plausible values of the signals here, so if anyone can help me with the logic along with MATLAB code, it will be of great help.
⦁ Please open the attached link.
⦁ Copy the MATLAB code and run it after saving.
⦁ You will see 2 signals as shown in the figure; when you zoom in you will clearly see the difference.
(How do I make my original pink signal flat like the blue signal in the plots?)
Question:
⦁ I want to make my original signal (pink) look like the blue signal in terms of the flat portions only.
Common behaviour from which to deduce the logic:
⦁ The common behaviour I have seen in my measured signal is that it becomes flat wherever there is a local maximum (on either the positive or the negative side).
⦁ At every local maximum I see that my blue signal becomes flat.
Everything I have to do is with the original signal (pink) in order to formulate some results.
Is there any way I can make my original signal flat, just like the blue signal?
Can someone suggest the best way to do that? If someone can provide an example in MATLAB code, it would be a great help.
Please have a look at the picture to get a glimpse of my idea.
Thanks a lot in advance for the help.
I have tried a few techniques. The results are as follows, but truthfully nothing has worked for me up to now.
I found the local-maxima values and, using nlfilter, applied a neighbourhood rule to flatten the peaks and their neighbourhoods. Unfortunately it is not working: the window size is fixed, but in my case the window size varies and also depends on the position of the local maximum; most importantly, a constant window size changes the shape of the signal.
I have also tried to apply a varying window, but it is not working for me; maybe I do not have a good grasp of how to apply a varying-size window, and I do not know how it would work for my signal.
To cut a long story short, what I have done up to now is not working, so I need help with that.
It would be really nice if someone could show me how to solve this issue, and some MATLAB code would be a great help.
Thanks a lot in advance for your time, expertise and help.
Code for loading and plotting the data in the attached link:
    load originalflatenninrough t original_signal_data measured_Signal_data
    [p, l]   = findpeaks(original_signal_data);     % positive local maxima
    [pn, ln] = findpeaks(-original_signal_data);    % negative local maxima
    figure(1)
    hold on
    plot(t, original_signal_data, 'm')
    plot(t, measured_Signal_data, 'b')
    plot(t(l), p, 'ko', 'MarkerFaceColor', 'r');
    plot(t(ln), -pn, 'ko', 'MarkerFaceColor', 'r');
    legend('originalsignal', 'measureddatasignal')
    hold off
Test-data example code for the NL filter (which is applied to the original signal):
    n = 10;                 % number of values to replace around each local max
    t = 0:0.001:10;
    A = sin(2*pi*t);
    [pks, locs] = findpeaks(A);
    % [pks, locs] = findpeaks(-A);                % (for negative peaks)
    locations = zeros(size(A));
    locations(locs) = true;
    locations = conv(locations, ones(1, 2*n+1), 'same') > 0;  % peak neighbourhoods
    X = -inf(size(A));      % temporary array
    X(locs) = A(locs);      % copy the local maxima
    X = nlfilter(X, [1 2*n+1], @(x) max(x));      % spread each local max over its window
    X(locs) = A(locs);      % ensure the local maxima themselves are unchanged
    A(locations) = X(locations);   % copy the filtered temporary into the output
    figure()
    hold on
    plot(t, A, 'b')         % flattened signal
    A = sin(2*pi*t);
    plot(t, A, 'g')         % original signal for comparison
Relevant answer
Answer
Mrinmoy Sandilya, thanks a lot for your kind feedback; I will also try this curve-fitting technique. It is worth mentioning and a valuable addition to my knowledge. Thanks once again.
  • asked a question related to Statistical Signal Processing
Question
12 answers
I don't know whether my question is correct or not.
Relevant answer
Answer
Rayleigh fading only gives you a diversity order of one, while the Nakagami-m fading model provides a diversity order of m. Having said that, neither the Rayleigh nor the Nakagami-m model describes line-of-sight (LOS) transmission environments well. Some authors have proposed using the Nakagami-m model to approximate Rician fading; such an approximation is not recommended, since the two models do not even give the same diversity order, and Nakagami-m cannot be used to describe LOS transmission, as has been empirically verified (read the paper by Molisch). Another reason the Nakagami-m model is popular is that its mathematical form is more analytically tractable.
  • asked a question related to Statistical Signal Processing
Question
10 answers
I need to develop an algorithm that compares two signals (one reference signal and one measured signal from a sensor) and generates some metric(s) to describe the changes between them. I am not good at signal processing and analysis, so I would appreciate any help.
I have attached figures below to provide an idea about how my both signals looks like.
Some of the differences that I am expecting are:
1 - The amount of error between the reference signal and the measured signal (I want to calculate the value of the overall error or difference).
2 - The changes that occurred in the measured signal relative to the reference signal, such as amplitude changes in some parts, phase changes, offsets, differences in peaks and troughs, and rise and fall transitions.
(In short, I want an overall idea of all the changes that happened in the measured signal compared with the reference signal.) My signal is quite complex and has a lot of values, so I have been unable to develop an approach for it on my own.
The algorithm needs to output some generic metrics which can be used to quantify changes in any or all of these parameters. Any guidance on what method(s) I could use to do this would be a great help.
For the case of quantifying errors I have thought of RMSE; is this a good approach, given that my signals have the same length? The reference signal and sensor signal data are each of size 1 x 1626100 (double).
The correlation function also came to mind, but to my knowledge I can only find the similarity between signals using correlation, not the total error or the overall changes that occurred in the signal.
The signal provides information about changes in steering angle over time.
Various measurements are taken over time at the same location and the final objective is to determine how the signals have changed over time (due to physical/Hardware changes).
We have run different tests to find out how physical/hardware changes affect the signal values; in every test the speed, velocity, or brake conditions of the cars are different. I also need to take these into account in my algorithm.
The measurement system may indeed be moving at different speeds, and may have different acceleration profiles during the measurement. This needs to be accounted for in my algorithm.
I am performing this algorithm development in Matlab.
Relevant answer
Answer
Thanks a lot to all for your time and expert opinions. It was a great help for me. I have tried some techniques and am having some issues or errors, so instead of updating this question I am posting a new query.
Thanks a lot once again.
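For later readers, a minimal sketch of the basic metrics discussed in the question (ref and meas are assumed to be equal-length vectors, e.g. 1 x 1626100 doubles; xcorr assumes the Signal Processing Toolbox):
    err  = meas - ref;
    rmse = sqrt(mean(err.^2));        % overall error magnitude
    mae  = mean(abs(err));            % mean absolute error
    offs = mean(err);                 % constant offset between the signals
    rho  = corrcoef(ref(:), meas(:)); % similarity: rho(1,2) is the correlation
    [c, lags] = xcorr(meas - mean(meas), ref - mean(ref));
    [~, imax] = max(abs(c));
    lag = lags(imax);                 % dominant time shift, in samples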
  • asked a question related to Statistical Signal Processing
Question
3 answers
PubPeer:                                                                                    May 29, 2017
Unregistered Submission:
(May 25th, 2017 2:46 am UTC)
In this review the authors attempted to estimate the information generated by neural signals used in different Brain Machine Interface (BMI) studies to compare performances. It seems that the authors have neglected critical assumptions of the estimation technique they used, a mistake that, if confirmed, completely invalidates the results of the main point of their article, compromising their conclusions.
Figure 1 legend states that the bits per trial from 26 BMI studies were estimated using Wolpaw’s information transfer rate method (ITR), an approximation of Shannon’s full mutual information channel theory, with the following expression:
Bits/trial = log2(N) + P·log2(P) + (1 − P)·log2[(1 − P)/(N − 1)]
where N is the number of possible choices (the number of targets in a center-out task as used by the authors) and P is the probability that the desired choice will be selected (used as percent of correct trials by the authors). The estimated bits per trial and bits per second of the 26 studies are shown in Table 1 and represented as histograms in Figure 1C and 1D respectively.
Wolpaw's approximation, as used by the authors, is valid only if several strict assumptions hold: (i) the BMI is a memoryless and stable discrete transmission channel; (ii) all output commands are equally likely to be selected; (iii) P is the same for all choices, and the error is equally distributed among all remaining choices (Wolpaw et al., 1998; Yuan et al., 2013; Thompson et al., 2014). Violating the assumptions of Wolpaw's approximation leads to incorrect ITR estimates (Yuan et al., 2013). Because BMI systems typically do not fulfill several of these assumptions, particularly those of uniform selection probability and uniform classification-error distribution, researchers are encouraged to be careful in reporting ITR, especially when using ITR for comparisons between different BMI systems (Thompson et al., 2014). Yet Tehovnik et al. (2013) failed to report whether the assumptions of Wolpaw's approximation held for the 26 studies they used. Such an omission invalidates their estimates. Additionally, inspection of the original studies reveals that the authors fell short in understanding and interpreting the tasks used in some of them. This failure led to incorrect input values for their estimates in at least 2 studies.
The validity of the estimated bits/trial and bits/second presented in Figure 1 and Table 1 is crucial to the credibility of the main conclusions of the review. If these estimations are incorrect, as they seem to be, it would invalidate the main claim of the review, which is the low performance of BMI systems. It will also raise doubts on the remaining points argued by the authors, making their claims substantially weaker. Another review published by the same group (Tehovnik and Chen 2015), which used the estimations from the current one, would be also compromised in its conclusions. In summary, for this review to be considered, the authors must include the ways in which the analyzed BMI studies violate or not the ITR assumptions.
References
Tehovnik EJ, Woods LC, Slocum WM (2013) Transfer of information by BMI. Neuroscience 255:134–46.
Shannon C E and Weaver W (1964) The Mathematical Theory of Communication (Urbana, IL: University of Illinois Press).
Wolpaw J R, Ramoser H, McFarland DJ, Pfurtscheller G (1998) EEG-based communication: improved accuracy by response verification IEEE Trans. Rehabil. Eng. 6:326–33.
Thompson DE, Quitadamo LR, Mainardi L, Laghari KU, Gao S, Kindermans PJ, Simeral JD, Fazel-Rezai R, Matteucci M, Falk TH, Bianchi L, Chestek CA, Huggins JE (2014) Performance measurement for brain-computer or brain-machine interfaces: a tutorial. J. Neural Eng. 11(3):035001.
Yuan P, Gao X, Allison B, Wang Y, Bin G, Gao S (2013) A study of the existing problems of estimating the information transfer rate in online brain–computer interfaces.  J. Neural Eng. 10:026014.
Relevant answer
Answer
Fitts’ Law and Brain-machine Interfaces according to Willett et al. (2017):
Reaching movements typically obey Fitts' law: MT = a + b·log2(D/R), where MT is movement time, D is target distance, R is target radius, and a and b are parameters. Fitts' law describes two properties that would be ideal for brain-machine interfaces (BMIs): (1) movement time is insensitive to the absolute scale of the task, since the time depends on the ratio D/R, and (2) movements have a large dynamic range of accuracy, since movement time is logarithmically proportional to D/R. Movement times for BMI (based on motor cortex electrophysiological recordings from two tetraplegics performing a center-out task) were better described by the formula MT = a + b·D + c·R^(-2), since movement time increased as the target radius became smaller, independent of target distance. The mismatch between reaching movements and BMI-generated movements was determined to be due to the signal-independent noise of the BMI decoder, which makes targets below a certain size very difficult to acquire in a timely manner. This would reduce the information transfer rate of a BMI when using small targets.
For the complete article see: Willett FR, Murphy BA, Memberg WD, Blabe CH, Pandarinath C, et al. (2017)  Signal-independent noise in intracortical brain-computer interfaces causes movement time properties inconsistent with Fitts’ law.  J. Neural Eng. 14:026010.
  • asked a question related to Statistical Signal Processing
Question
3 answers
Is precoding done because of strong channel correlation? Couldn't the channel be thought of as an LTI attenuation channel?
Relevant answer
Answer
Precoding is essential when we are dealing with multiplexed signals, to eliminate interference among them. Furthermore, in order to ensure successful communication, it is essential to have a specific coherence time during which the channel is LTI; within that time, the channel estimate is approximately constant and uncorrelated.
Examples of great precoding techniques: maximum ratio and zero forcing.
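A minimal zero-forcing sketch in MATLAB (my own illustration, not a specific paper's method; K users, M antennas with M ≥ K, placeholder channel values):
    M = 8;  K = 4;
    H = (randn(K,M) + 1i*randn(K,M))/sqrt(2);   % downlink channel, K x M
    s = (sign(randn(K,1)) + 1i*sign(randn(K,1)))/sqrt(2);   % QPSK symbols
    W = H' / (H*H');              % ZF precoder: H*W = I, no inter-user interference
    W = W / norm(W, 'fro');       % normalize total transmit power
    y = H*(W*s);                  % each user receives its own symbol (scaled)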
  • asked a question related to Statistical Signal Processing
Question
4 answers
My doubt is about the dimension of the subspace when a signal is being oversampled. I would like to 'visualize' an example of this key idea of blind calibration. Here is the original text:
"Assume that the sensor network is slightly oversampling the phenomenon being sensed. Mathematically, this means that the calibrated snapshot x lies in a lower dimensional subspace of n-dimensional Euclidean space.
Let S denote this “signal subspace” and assume that it is r-dimensional, for some integer 0<r<n. For example, if the signal being measured is bandlimited and the sensors are spaced closer than required by the Shannon-Nyquist sampling rate, then x will lie in a lower dimensional subspace spanned by frequency basis vectors. If we oversample (relative to Shannon-Nyquist) by a factor of 2, then r =n/2. "
Relevant answer
Answer
Thanks for your answers. I finally found something that helped me to visualize the general idea. In Blind Drift Calibration of Sensor Networks using Signal Space Projection and Kalman Filter they explain what is a 'signal subspace' and also how to construct the orthogonal projection matrix P. 
  • asked a question related to Statistical Signal Processing
Question
2 answers
K&L actually defined what is effectively the "ultimate" sufficient statistic, which, in signal processing lingo, is called the log likelihood ratio (LLR).  The LLR is the "ultimate" sufficient statistic because it is precisely the instantaneous information content available from the data bearing upon the specified binary decision (that is, the data can tell you nothing about the binary decision of interest that the LLR cannot).  The generalized form of SNR (the symmetric form of the KL divergence) is then the associated average information content (the structural equivalent of entropy).  It turns out this way of measuring information content (using LLRs, which I call "discriminating information") measures the same basic "stuff" that Shannon ("entropic information") does, but using a different measurement scale [like Kelvin rather than Fahrenheit], developed for the context of a binary decision rather than for the context of a discrete communications stream.  The former is much more general (due to the generality of the underlying context), while also providing the LLR as an instantaneous measure (i.e., no ensemble averaging), a critical element missing from entropic information.  If you are interested in references exploring the structure of discriminating information in more detail, feel free to contact me at jjpolcari@verizon.net.
Relevant answer
Answer
See "An Informative Interpretation of Decision Theory: The Information Theoretic Basis for SNR and LLR" for rigorous development of the data components of discriminating information for binary decisions. See "An Informative Interpretation of Decision Theory: Scalar Performance Measures for Binary Decisions" for rigorous development of non-data components (i.e., "prior information") and how information flows through the actual binary decision process - this leads to (what is to me) a much more useful method of measuring decision performance than traditional ROC curves, since it is scalar, and thus can be maximized directly. Cites of both are available on my RG site, which will lead you back to IEEE Access where they were published.
I have a further working paper exploring the specific relationship between discriminating information and entropic information (not yet published, because the first set of reviewers weren't sufficiently familiar with discriminating information to really understand what I was getting at) that is not currently posted. I have not yet written up the application of discriminating information to classification (one of N choices) and estimation (one of a continuous set of choices) problems, but it is quite straightforward (you just analyze the "OR-ing" operation on the underlying set of binary decisions). Right now, I am starting to unravel the extension to inferential decisions (or decisions of action, such as "should I act?" rather than "is something present?").
Most importantly, if you are interested in tail issues, I should write up and pass along some noodling I have done on what I call "LLR generating probabilities". For me, this arose in trying to understand the required structure of the two probabilities associated with any LLR statistic, which is extremely constraining and which I am sure I don't fully understand yet. I will be happy to do so if you are interested - just pass me a link as to where you would like it delivered.
  • asked a question related to Statistical Signal Processing
Question
5 answers
After gathering huge data out of measurements of signal propagation, I assumed that finding the standard deviation and variance will help to explain how each point deviated from the mean.
Relevant answer
Answer
Uche -
Not being familiar with your work, and what you are trying to do, I may be misunderstanding your question, but let's just say that a standard error is a standard deviation of a parameter such as a mean or a regression coefficient.  A standard deviation of a population is a fixed number that we estimate.  But a standard error is reduced with sample size.  Either a standard deviation or a standard error could be said to be the positive square root of a variance.  Just remember that standard errors of parameters, such as means, are based on standard deviations, but are reduced with increased sample size.
I do not know what you are doing, but perhaps you are interested in standard errors to form confidence intervals.  The confidence interval depends also on the form of a distribution.  For an estimated mean with a large sample size of continuous data, the Central Limit Theorem will probably let you use a Gaussian distribution (generally called a "normal" distribution) to construct a confidence interval about that estimated mean.  But do not make the mistake of assuming populations should necessarily be "normally distributed." 
Note that for a regression, the estimated "prediction interval" may take advantage of the Central Limit Theorem with regard to the variance of the "errors" of the "predictions" for y given x, so this involves estimated residuals, but the y data and x data themselves, that is, the dependent variable and independent variable data distributions, can have any distributional form.  (For work I did on energy establishment survey data, for official statistics, these distributions were very highly skewed.) 
 
You may be interested in researching other statistical concepts and terms such as bias, moving averages, time series, autocorrelation, and/or perhaps others. 
Hope some of the above might be useful and not misleading for your purposes.
Cheers - Jim
  • asked a question related to Statistical Signal Processing
Question
12 answers
What are the optimal ways to detect flat/smooth regions in a noisy image (other than the standard deviation because it is not very effective)?
Relevant answer
If you have access, please look at http://opticalengineering.spiedigitallibrary.org/article.aspx?articleid=1098284 and the thesis https://pdfs.semanticscholar.org/9ffb/f156b21c93bae1c6537734d05be7a54ab5ab.pdf . The threshold depends upon the scanning window size and the local parameter that you use.
  • asked a question related to Statistical Signal Processing
Question
8 answers
Recently I came across a case where I tend to believe that adding noise actually helps a CR detect the signal better. Suppose we have an energy detector, and we know that it works by averaging the energy of the received signal. Suppose we add noise, which also has its own energy and can be constructive or destructive in nature. If it is constructive, it helps the detector detect the signal, and vice versa. But we always find that noise is regarded as purely disturbing and as affecting spectrum sensing negatively. I would be glad if somebody could explain why this happens.
Relevant answer
Answer
Dear Imtiyaz Khan,
Dayan's answer is right, and the paper he suggested is very interesting, though stochastic resonance is a very special case.
I will try to clarify his answer figuratively for the case of additive noise (y = x + n; x: measured signal, n: noise). x and n are independent random variables. Now the pdf of y is the convolution of the pdf of x and the pdf of n, and the pdf of y will be "broader/flatter" than the pdf of x if n is not deterministic.
For your energy detection example, there is an x1 for the case "energy present" and an x0 for "no energy present". Accordingly, you obtain y1 and y0 after adding noise. You set a threshold to decide whether a measured signal belongs to x1 or x0.
The false alarm rate is the integral of the pdf of x0 (or y0) above the threshold, and the probability of a miss is the integral of the pdf of x1 (or y1) below the threshold.
Now if you use y instead of x, for a fixed threshold the miss rate will decrease, as you stated in the question. However, your false alarm rate will then increase.
Another strategy to increase the detection rate would be to lower the detector's threshold. With this approach, for the same detection rate, the false alarm rate will be lower.
Figuratively, you can understand this by plotting the pdfs of x0 and x1 and drawing a line (threshold) to separate the classes. Intuitively this works better the less the pdfs of x0 and x1 overlap. But since the pdfs of y0/y1 are "broader/flatter" than the pdfs of x0/x1, due to the convolution with the noise pdf, the overlap is larger and the detection performance is in general worse.
Hope this explanation helps.
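A quick Monte Carlo sketch of this argument (illustrative Gaussian statistics and an arbitrary threshold above the H1 mean, so that the effect described above appears):
    Ntr = 1e5;
    x0 = randn(Ntr,1);           % H0: test statistic, "no energy present"
    x1 = 2 + randn(Ntr,1);       % H1: test statistic, "energy present"
    n  = randn(Ntr,1);           % extra additive noise
    y0 = x0 + n;  y1 = x1 + n;   % broadened pdfs after adding noise
    thr = 3;                                          % fixed threshold
    pfa_x = mean(x0 > thr);  pd_x = mean(x1 > thr);   % ~0.001, ~0.16
    pfa_y = mean(y0 > thr);  pd_y = mean(y1 > thr);   % ~0.017, ~0.24
    % Pd goes up, but Pfa goes up even more: at equal Pfa the noisy
    % statistic detects worse, exactly as argued above.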
  • asked a question related to Statistical Signal Processing
Question
4 answers
Consider the expression x^H A x, where x is a known vector (i.e. a deterministic parameter), x^H is its Hermitian transpose, and A is a Hermitian, positive semi-definite matrix constructed as the sum of D rank-one matrices of the form d_i d_i^H, for i = 1, ..., D. The vectors d_i are distributed as standard complex Gaussians (zero mean and unit variance).
The question is: what is the statistical model of x^H A x?
Thanks in advance.
Relevant answer
Answer
Data presented in the abstract is not well suited to statistical modelling, because the phenomenon itself must be related to the explanation and the model. A bootstrap approach could probably be a better option for your model, but an approximation of the size of the matrices and an empirical explanation of what the data relate to are necessary. Good luck.
  • asked a question related to Statistical Signal Processing
Question
4 answers
Which has better performance in spectrum sensing: (1) a game-theoretic approach, or (2) a statistical signal processing approach such as MLE or the NP method?
Relevant answer
Answer
Machine learning techniques are best for wideband communication.
  • asked a question related to Statistical Signal Processing
Question
4 answers
During CSP, when I subject the composite covariance matrix to eigenvalue decomposition, one of the diagonal elements is negative with very small magnitude compared to the others, because of which my whitening matrix becomes complex [sqrt(inv(diag(D)))]. Can I use abs(diag(D)) instead of diag(D) to overcome this problem? Will it change my classification result? Thank you in advance.
Relevant answer
Answer
In some applications a small positive constant is added to all diagonal elements of the covariance matrix to solve this problem. Another solution may be using the pseudo inverse of covariance matrix in place of its inverse.
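A small sketch of both suggestions in MATLAB (C is the composite covariance matrix, assumed symmetric; the constants are placeholder choices):
    [V, D] = eig(C);
    d = diag(D);
    % (a) diagonal loading: shift all eigenvalues up by a small constant
    epsl = abs(min(d)) + 1e-10;
    W_loaded = diag(1./sqrt(d + epsl)) * V';     % whitening matrix
    % (b) pseudo-inverse style: simply drop the non-positive eigenvalues
    keep = d > 1e-10;
    W_trunc = diag(1./sqrt(d(keep))) * V(:, keep)';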
  • asked a question related to Statistical Signal Processing
Question
7 answers
Hi, 
In statistical signal processing, a lot of research is based on complex analysis. Many techniques and methods are transformed to the complex domain, whereas complex information only matters in the form of magnitude and phase. So what is the difference between using magnitude information and using the real and imaginary parts of the data? Why is phase important? What is the difference between a signal with phase information and one without it?
Appreciate your comments
Relevant answer
Answer
Let us consider two-dimensional problems, where the power of complex analysis can be seen quite directly.
If a function f(x,y) = u(x,y) + i v(x,y) is differentiable at z0 = x0 + i y0, then at this point u_x = v_y and u_y = -v_x are satisfied. These are the Cauchy-Riemann conditions. This immediately has an important consequence for quantum theory: it means that the quantum mechanical wave function (which has to be differentiable) is analytic everywhere.
When z = x + i y and f(x,y) = u(x,y) + i v(x,y), then if f(x,y) is an analytic function it immediately follows that u(x,y) and v(x,y) satisfy Laplace's equation. This is directly related to physics, because then both the real and imaginary parts of an analytic function (such as the wave function) must be harmonic. Example: the free-particle wave function e^{i k·r}.
  • asked a question related to Statistical Signal Processing
Question
3 answers
I know that the Laplacian distribution is defined as follows:
f(x) = (b/2)*exp(-b|x-\mu|)
I also know the mean and variance of the ratio of two normal variables.
Can anyone explain what the mean and variance would be for the Laplacian case?
Relevant answer
Answer
Hi Mohammad,
The formulas of the PNG file are wrong, since the following exemplary fact holds:
If the joint density f(x,y) is bounded as follows:
f(x,y) \ge c  for  |x|, |y| \le 1, where  c > 0,
then the tails of the ratio are bounded from below by  c \cdot t^{-1},  for  t \ge 1
(please, make a drawing of the vicinity of the origin and put there a sector
| x/y | > t within the square [-1 , 1] \times [-1 , 1]  ),
and consequently the expectation of the ratio is bounded from below suitably by
E{ |X/Y| } \ge c \cdot \int_1^\infty   t^{-1}  dt = \infty.
Best wishes
  • asked a question related to Statistical Signal Processing
Question
3 answers
|h| - |\tilde{h}| = |e_h|
Kindly provide the distribution of |e_h| if both |h| and |\tilde{h}| follow a Nakagami-m distribution,
where |h| is the absolute value of the actual channel fading coefficient and |\tilde{h}| is the estimated one.
Relevant answer
Answer
Rephrasing:
Z = X - Y, with X and Y Nakagami distributed; find the distribution of Z.
Filling the gaps in the question:
- assume that X and Y are independent (otherwise anything can happen);
- assume that X and Y follow the same Nakagami distribution (that is, the same m and Omega parameters).
Now
X = (Omega/2m)^{1/2} x with x distributed as chi2(2m)
Y = (Omega/2m)^{1/2} y with y distributed as chi2(2m)
(see the Wikipedia link given by Gregory above).
x and y are independent chi2(2m)-distributed random variables, and
Z = (Omega/2m)^{1/2} (x - y),
and the difference of two independent chi2(2m) random variables is variance-gamma distributed.
So Z follows a rescaled variance-gamma distribution (kind of ugly, but at least it has a name and a Bessel function in its formula!).
If X and Y are independent Nakagami distributed but with different parameters, you are left with a linear combination of independent chi2-distributed random variables for Z: very ugly, but you could contemplate the references therein.
Note that the above leaves you on your own for the distribution of |Z|! (I have no reason to believe that the difference of two independent Nakagami variables should be positive!)
  • asked a question related to Statistical Signal Processing
Question
14 answers
Hi everyone, 
Let A be a full-rank square matrix (A has no null space). When does y^T A x = 0 occur? (T denotes transposition.)
It could be that this problem is case-specific, so please find attached a document where x,y, and A take particular forms. 
But in any case, is there a condition under which y^T A x = 0 may occur?
Contributors would, indeed, be acknowledged.
Thank you very much
Relevant answer
By the singular value decomposition, any matrix A can be written as A = CDF, where C, D, F are unitary, diagonal Hermitian, and unitary, respectively. Now, z^T D u = 0 has a multitude of solutions: take any z and u belonging to subspaces spanned by disjoint subsets of the basis vectors. Then take u = Fx, z = C^T y (C and F are invertible).
  • asked a question related to Statistical Signal Processing
Question
6 answers
Let R' = R + D be the estimated correlation matrix, where R is the original correlation matrix and D is the matrix that models the estimation error. My question concerns the existence of models for this matrix D.
Relevant answer
Answer
Jordi,
If your correlation estimate comes from a multivariate normal (Gaussian) distribution, the estimated correlation matrix (your R') will have a Wishart(R, df) distribution, where df is the number of degrees of freedom (usually n - p + 1, where n is the number of samples you use and p is the size of R). Then the expected value of the estimator R' (the theoretical average of many estimated values of R') is R/(n-1).
Hope this can help as well.
  • asked a question related to Statistical Signal Processing
Question
33 answers
Dear all,
Assume we have the following vector linear model:
x = Hs + n where x is the received vector, H is a full column rank matrix and s is the vector of signals. The noise is n.
Why do some people adopt a Bayesian approach for estimating H, s, and the noise variance, whereas others take a deterministic one?
Thanks in advance.
Relevant answer
Answer
Dear Ahmad,
  The answer contains almost nothing to do with x, H, s, n. Most people stick with what they are comfortable with, so statistical preferences often follow from exposure to mentors and colleagues who have their own history and comfort with various approaches.
   In my case, being almost equally uncomfortable either way, I would choose the technique based on what sort of answer I wish to have. Bayesian methods are about more than just priors; differences in evaluation can lead to different conclusions depending on the relationship between H and s. The way you specify and evaluate a model controls what you can reasonably conclude about the result. This is true for all methods... A frequentist approach has its own pitfalls.
   Depending on what you believe about H, the differences in method may be almost irrelevant (like if H approaches I, or 0). The differences may be large; they may even be irrelevant (like if H has underlying chaotic structure and both methods make different errors). No magic solutions on either side, and sometimes neither approach is the correct one.
Just my opinion (actual Bayesian practitioners will tend to disagree).
  • asked a question related to Statistical Signal Processing
Question
5 answers
My question is: why do we reuse the orthogonal pilot symbols among the cells in massive MIMO, which leads to pilot contamination? In other words, why don't we have enough orthogonal pilots to serve all the users in the system with different pilots? This may be related to another question: how can we generate orthogonal pilots?
Could anyone please refer to any reference that can be useful for me to answer these questions?
Relevant answer
Answer
The channels are time-varying and frequency-selective, so there are limited resources for estimating and using a wireless channel before it changes. If the channel is approximately fixed for 5 ms in time and 100 kHz in frequency, then you have 5 * 100 = 500 samples for transmission. These need to be divided between pilots and data.
Say that there are 1 million cells in the world, then you cannot physically give each of them one pilot sample. This is the case in contemporary cellular networks, and will be true also in massive MIMO.
On the other hand, most cells are sufficiently far apart so that the pilot contamination will be negligible. If we reuse the same pilot in every third, fourth, or seventh cell, then the pilot contamination will not be a big issue.
I would recommend my own paper on this topic and reference therein:
Emil Björnson, Erik G. Larsson, Mérouane Debbah, “Massive MIMO for Maximal Spectral Efficiency: How Many Users and Pilots Should Be Allocated?,” IEEE Transactions on Wireless Communications
  • asked a question related to Statistical Signal Processing
Question
8 answers
Dear All,
Assume the following system of equations:
Ax = b where b is the vector of data of size Nx1, x is the vector of unknown of size Nx1 and A is the matrix of coefficients of size NxN.
The solution is x = pinv(A)*b where pinv(A) is the pseudo inverse of A.
Now if A is of rank N-1, how do we solve for x? I know that infinitely many solutions exist, but is there another approach to solving for x?
Thank you in advance.
Relevant answer
Answer
In your case, if A is rank-deficient you have two problems:
- there might not be an exact solution at all. Since the range of A does not span the entire R^N but only an (N-1)-dimensional subspace, you can solve exactly for x only if b is in this subspace. Anything outside cannot be represented by A*x.
- at the same time there are infinitely many ("equally good") solutions since A has a non-empty null space, i.e., there is a one-dimensional subspace of vectors x that give A*x=0.
Interestingly, if you use the least squares solution pinv(A)*b, it provides a solution for both problems. For the first problem, it gives you the closest x, i.e., the one with the smallest error ||b-A*x||. At the same time, for the second problem, out of all solutions that share the smallest approximation error, it gives you the one with the smallest norm ("minimum-norm solution").
Depending on your application this may or may not be desirable. It depends on what you want to do with that x. You can use the degree of freedom also for other things, like minimizing another norm of x. LS is the simplest so if you just need any solution then it's really fine to use.
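A tiny demonstration of these two properties (arbitrary example matrix; b here happens to lie in the range of A, so both residuals are ~0):
    A = [1 2 3; 4 5 6; 7 8 9];            % rank 2: rank-deficient
    b = [1; 2; 3];
    x_mn = pinv(A)*b;                     % least-squares, minimum-norm solution
    z = null(A);                          % basis of the 1-D null space of A
    x_other = x_mn + 5*z;                 % another solution with the same fit
    [norm(b - A*x_mn), norm(b - A*x_other)]   % identical residuals
    [norm(x_mn), norm(x_other)]               % x_mn has the smaller norm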
  • asked a question related to Statistical Signal Processing
Question
4 answers
Dear All,
Assume I have a uniform linear antenna array of 3 antennas. Distance uncertainties and other imperfections might perturb the steering vector away from the true one; thus, DoA estimation using ML or subspace techniques would fail.
I would like to know whether it is possible to calibrate when the number of received signals is more than 3 (due to severe multipath).
Thank you in advance.
Relevant answer
Hello,
Usually, the number of antennas in an array should be greater than the number of received signals. This problem is hard because you receive coherent signals, due to the multipath effect, and the number of elements is equal to the number of sources. For this case, you have to use techniques based on virtual arrays, and methods based on higher-order moments or cumulants. Another way to obtain better performance is to use spatial smoothing pre-processing.
  • asked a question related to Statistical Signal Processing
Question
3 answers
Dear researchers
For a random process, do you think that the upper and lower envelopes of the process are independent of the process itself? If they are independent, how can we prove that statistically?
Hope you can point out some references if available.
Best regards 
Relevant answer
Answer
If your question is as I think, then follow this link, with all its references and mathematics!
  • asked a question related to Statistical Signal Processing
Question
9 answers
We all know that the Correlation matrix is :
Rxx = E{x.x^H} where E{} denotes expectation and H is the hermitian operator.
In practice, and in most cases, the E{} is replaced by the sample average.
x is an N x 1 column complex vector.
I would like to know how the eigenvalues of Rxx are affected if x is multiplied by a diagonal matrix C that changes every sample and depends only on a scalar, say 'alpha', i.e.
Rx'x' = E{x'.x'^H} where x' = Cx
or
Rx'x' = 1/N * (C(1)x(1)[C(1)x(1)]^H + ......... + C(N)x(N)[C(N)x(N)]^H )
Relevant answer
Answer
Dear Ahmad,
I thought that C was a diagonal matrix with a non-random diagonal. If that is not the case, then, if C is independent of X (which I think holds for your problem, since the diagonal of C is a function of alpha and alpha seems to be independent of X), the correlation matrix is E(CXX^H C^H) = E(E(CXX^H C^H | C)) = E(C R_xx C^H) = E(T), where the (i,j)th element of T is t_ij = C_ii C*_jj r_xx(i,j). So E(t_ij) = E(C_ii C*_jj) r_xx(i,j). From your given information, E(C_ii C*_jj) = E(exp(-2*pi*i(i-j)*alpha)), which I think you can find easily.
  • asked a question related to Statistical Signal Processing
Question
7 answers
I am aware that industry nowadays is moving towards fingerprinting techniques rather than purely online-based algorithms. Could anyone point me to the most recent state-of-the-art paper describing the indoor localization topic?
Thanks
Relevant answer
Answer
Thank you all for your answers
  • asked a question related to Statistical Signal Processing
Question
4 answers
First, I took a signal (X) and performed the DWT on it.
Then I computed Y = AX (where A is a random m×n matrix with m << n).
Y was then used as the input to the reconstruction algorithm, whose output is stored in another variable.
How can I perform the IDWT of the reconstructed signal without knowing the wavelet coefficients of the reconstructed signal?
Relevant answer
Answer
If I understood correctly, you have done:
1. XW = DWT(X) (which is taking the DWT of X)
2. Y = A · XW (which is random sampling)
3. XWest by L1 minimization or other means (GPSR?)
4. Then just do Xest = IDWT(XWest), where you can obtain the IDWT based on the DWT coefficients. For instance Matlab can do this for you.
If you use a filter bank implementation of the DWT, you will have N total wavelet coefficients for a signal of length N. Then, Y will have M samples and XWest and Xest should again be of length N.
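A sketch of steps 1 and 4 using the Wavelet Toolbox filter-bank DWT (wavedec/waverec; the wavelet 'db4', the 5 levels and the signal are arbitrary choices, and perfect recovery stands in for the actual CS reconstruction):
    N = 1024;
    X = cumsum(randn(N,1));                 % example signal
    [XW, L] = wavedec(X, 5, 'db4');         % step 1: DWT coefficients (~length N)
    % ... compressive sampling Y = A*XW and sparse recovery produce XWest ...
    XWest = XW;                             % placeholder: perfect recovery assumed
    Xest = waverec(XWest, L, 'db4');        % step 4: IDWT back to the signal domain
    max(abs(X - Xest))                      % ~1e-12 for this placeholder case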
  • asked a question related to Statistical Signal Processing
Question
27 answers
Dear All,
In a Deterministic Framework where we have the following Linear Model:
y(t) = H.x(t) + n(t)
where
y(t) is the observed vector of size Nx1 (we have T observations)
H is an NxP matrix (no constraint on P, P could be smaller or larger than N)
x(t) is a Px1 vector
n(t) is random noise.
It is well known that if n(t) is a Gaussian process, then you cannot do better than maximum likelihood, i.e. the L2 norm is optimal for estimating the parameters in H and x(t).
My question is: when does ML become sub-optimal?
Thanks.
Relevant answer
Answer
If P is larger than N, I am not sure that ML is still optimal, because the Hessian matrix of the likelihood function may no longer be positive definite; hence some variances of the estimated x() might become infinite.
If P is less than N, ML is definitely optimal with regard to the bias and variance of the estimated parameters; but while that is valid for estimation, it is not for prediction.
  • asked a question related to Statistical Signal Processing
Question
6 answers
For system state estimation, the EKF is a useful method, but the initial state X0, the process noise covariance matrix Q, and the measurement noise covariance matrix R are not easily determined, and their values directly affect the estimated state. How best to choose proper values for them?
Relevant answer
Answer
A good source on how to make this choice rigorously for linear KF is chapter 5 of the book 'Time series analysis by state space models' by Durbin and Koopman. You may use it as a guide for the EKF as well, although EKF does diverge for poor choice of initial values.
  • asked a question related to Statistical Signal Processing
Question
4 answers
If we were able to estimate the noise power blindly for a conventional energy detector (CED), would that shift the CED from a semi-blind to a fully blind detector?
Relevant answer
Answer
As an ED needs information about the noise level, it cannot be considered fully blind.
  • asked a question related to Statistical Signal Processing
Question
19 answers
I have a doubt about plotting an ROC. I have the following parameters:
1) Theoretical probability of detection (Pd_theory)
2) Assumed probability of false alarm (Pfa)
3) Simulated probability of detection (Pd_sim)
4) Simulated probability of false alarm (Pf_sim)
I want to plot Pd_sim against the false alarm probability, but I don't know which false alarm would be more correct to use, Pfa or Pf_sim, and why.
Relevant answer
Answer
Let me make my answer clearer. Take Pfa as a parameter you choose (your requirement), for example Pfa = 0.1. Now find the threshold for this Pfa from the analytical expressions under H0. Then find Pd for this threshold using the analytical expression.
For the simulation, find the threshold for the given Pfa (the 90th-percentile value of the histogram of the test statistic under H0 obtained from Monte Carlo runs). Count the outcomes above this threshold for the distribution (histogram) under H1, which gives your simulated Pd.
Now you have Pd (analytical) and Pd (simulated) for the same Pfa = 0.1. Plot them in the same graph and see whether your simulation and analysis match. This is needed for validation of your work.
If the sets of Pfa in analysis and simulation were chosen differently, then consider my previous answer and compare the whole trend of the graphs. The graphs should definitely coincide if you use a large number of trials (say more than 10000) in the Monte Carlo simulation.
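A Monte Carlo sketch of this procedure for an energy detector (K averaged samples, unit noise power; the SNR, sample counts and target Pfa are illustrative):
    K = 20;  SNR = 1;  Ntr = 1e5;  Pfa = 0.1;
    T0 = mean(randn(Ntr,K).^2, 2);                % test statistic under H0
    T1 = mean((sqrt(SNR)*randn(Ntr,K) + randn(Ntr,K)).^2, 2);   % under H1
    T0s = sort(T0);
    thr = T0s(round((1 - Pfa)*Ntr));              % empirical threshold at target Pfa
    Pd_sim  = mean(T1 > thr);                     % simulated detection probability
    Pfa_sim = mean(T0 > thr);                     % ~0.1, sanity check
    % compare Pd_sim against the analytical Pd evaluated at the same Pfa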
  • asked a question related to Statistical Signal Processing
Question
6 answers
See above
Relevant answer
Answer
Hello Govind. You can see that all the answers point in the same direction: your question is ill-posed. But if my guess is correct, you are not looking for a method, but for a way to extract "information" from a time-frequency representation. There are many things that can be done there. Basically, there are three types of approaches. Since you get an image, all kinds of image processing can help (segmentation, contours, ...); you can even introduce a Markov field model on them. As Struzik stressed, you can use wavelet transforms and analyse detail signals in a multiscale perspective. There are also algorithms to perform statistical analysis in "real" time and obtain information such as instantaneous frequency. As you see, your question is not precise enough, but I hope these ideas will help.
  • asked a question related to Statistical Signal Processing
Question
4 answers
Are there any standard steps for proving the equivalence between two stationary random processes? Thanks.
Relevant answer
Answer
It depends what you mean by "prove" and "equivalent." If you can get enough samples to accurately estimate mean, variance, etc. then it's a hypothesis testing problem for the assumed statistics. If the processes are correlated you need to fit an ARMA model and get the parameters and compare them I suppose. That sounds harder since there is no unique model and some criterion has to be used to limit the number of free parameters. As a previous poster pointed out, the general case is to estimate the joint PDFs, which may not work in practice.
  • asked a question related to Statistical Signal Processing
Question
4 answers
In spectrum sensing, the threshold is the most important quantity for estimating the performance measures. To carry out a particular type of CFAR detection, the threshold is determined by reverse calculation from the false alarm probability, but most of the time the resulting detection does not match the predefined target. So I want to know whether there is any other method for finding the threshold, so that the performance measures will match.
Relevant answer
Answer
Dear sir,
You can determine the threshold by a Monte Carlo approach: generate the test statistic under the H0 hypothesis and adjust the threshold to meet the required false alarm probability.
BR
  • asked a question related to Statistical Signal Processing
Question
9 answers
Can someone provide me with information about carrier phase (raw) data for GPS?
Relevant answer
Answer
The question is a bit weird. You can simply find all observation data (including carrier phase) in RINEX files, which can be easily downloaded through IGS:
  • asked a question related to Statistical Signal Processing
Question
2 answers
It is clear that a combination of several objective functions is better than just one objective function, but the problem is how to combine them. Is it a random process or not? Does it have some rules?
For example, OBJECTIVE FUNCTION = w1*ISE + w2*ITAE + w3*IAE. How can I determine the weight values (w) for these objective functions? Can I determine them randomly?
Sincerely
Relevant answer
Answer
Dear Mr. Younesi,
Thank you a lot for your great help.
Best wishes to you, too.
  • asked a question related to Statistical Signal Processing
Question
8 answers
I just want to know the difference between static and dynamic state estimation and their applications. Please help me out with this; thanks in advance.
Relevant answer
Answer
The difference between static and dynamic state estimation lies in the behavior of the state variable with time. In static state estimation, the state model is built on the assumption that the state variable is in steady state or quasi-steady state, i.e. it remains constant with respect to time, while in dynamic state estimation the model is built on the assumption of changing behavior of the state variable with respect to time.
Real-time state estimation is possible with dynamic state estimation, as no system remains constant forever; its parameters may change with time.
  • asked a question related to Statistical Signal Processing
Question
9 answers
In cyclostationary signal detection, for a particular type of signal modulation, how do we determine the value of the cyclic frequency alpha so that it gives the required detection?
Relevant answer
Answer
Each modulated signal has its own cyclic frequencies; for BPSK, for example, a feature appears at alpha = 2fc (twice the carrier), which means we will have two peaks. So it depends on your modulated signal. That is why we call it feature detection: we must know alpha.
Good luck
  • asked a question related to Statistical Signal Processing
Question
7 answers
I have implemented this detector for an AWGN channel in MATLAB, but I am getting vague results: its performance does not vary with changes in SNR, which is a very strange result for me. I don't know what mistake I am making. I am attaching my code; please have a look and comment. Here I am extracting the cyclostationary feature of a signal for its detection: first I take the FFT of the signal, then shift it in frequency by +alpha and shift its conjugate by -alpha; then I multiply the two and sum over all frequencies. That is how the theory explains cyclostationary feature detection. I would be very grateful if somebody could help me with this.
    function S = cyclio_stat_TestStatics(x, N)
    % Spectral correlation test statistic at cyclic frequency alpha.
    % x is assumed to be a 1 x lx row vector.
    lx = length(x);
    X = zeros(2*N+1, 1);                 % spectrum bins f = -N..N (column vectors,
    Y = zeros(2*N+1, 1);                 % not the (2N+1)x(2N+1) matrices of the original)
    Ts = 1/N;
    for f = -N:N
        d = exp(-1j*2*pi*f*(0:lx-1)*Ts); % complex exponential at frequency f
        xf = x .* d;
        X(f+N+1) = sum(xf);              % DFT bin at frequency f
        Y(f+N+1) = conj(X(f+N+1));       % conjugate spectrum
    end
    alpha = 10;                          % cyclic frequency under test
    f = 5;                               % spectral frequency of interest
    f1 = f + floor(alpha/2) + (floor(-((N-1)/2)):floor((N-1)/2));
    f2 = f - floor(alpha/2) + (floor(-((N-1)/2)):floor((N-1)/2));
    S = sum(X(f1+N+1) .* Y(f2+N+1)) / N; % frequency-smoothed correlation of shifted spectra
    S = abs(S) / lx;
Relevant answer
Answer
Imtiyaz,
It seems to me your loop is too short to catch the mean period, and your algorithm does not iterate to exploit the oversampling of the incoming signal, which is dearly needed to get the mean signal variation around the exact signal period when using a cyclostationary process.
Regards
  • asked a question related to Statistical Signal Processing
Question
17 answers
I am simulating a feature detector in a noisy environment consisting of AWGN and impulsive noise, but I am getting a strange result: the signal with impulsive + AWGN noise has a better detection probability than the signal with AWGN only. I know something is wrong somewhere. How is it possible that the signal with more noise, such as impulsive noise, has a better detection probability? Please share your experience.
Relevant answer
Answer
Two new theorems show how deliberately adding quantizer noise can improve statistical signal detection in array-based nonlinear correlation detection, even in the case of infinite-variance alpha-stable channel noise. The first theorem gives a necessary and sufficient condition for such quantizer noise to increase the detection probability for a fixed false-alarm probability. The second theorem shows that the array must contain more than one quantizer for a stochastic-resonance noise benefit, and that the noise benefit improves in the small quantizer-noise limit as the number of array quantizers increases. It further shows that symmetric uniform quantizer noise gives the optimal noise benefit among all symmetric scale-family noise types.
  • asked a question related to Statistical Signal Processing
Question
2 answers
In signals, what is the significance of the fourth moment for detecting extrema?
Relevant answer
Answer
As said before, the fourth moment is the kurtosis. The kurtosis is the parameter defining the fatness of the tails of the distribution. Kurtosis for the normal distribution is 3, in which case values beyond three standard deviations away from the mean are infrequent. If kurtosis is substantially larger than 3, then such values would be rather frequent. Hence, there is a direct relationship between kurtosis and whether a value ought to be regarded as extreme.
  • asked a question related to Statistical Signal Processing
Question
5 answers
As far as I have found, there is no way to find the actual pdf and its parameters (such as the mean and variance) when the primary signal is present in cyclostationary spectrum sensing. So how do I define an analytic expression for the probability of detection, if one exists?
Relevant answer
Answer
A closed form of the detection probability for a cyclostationary single-cycle detector can be found in the following paper: "Cooperative Cyclostationary Spectrum Sensing in Cognitive Radios at Low SNR Regimes". No a priori knowledge about the PU signal is required in this analysis.
  • asked a question related to Statistical Signal Processing
Question
11 answers
Actually, I am working on multichannel EEG data obtained from scalp electrodes of meditating and non-meditating subjects. We want to quantify the changes that occur in one's brain signals when one meditates.
I have preprocessed the signals by bandpass filtering, normalization, and artifact removal via wavelet thresholding. After that, I segmented the data of each channel (we have 64 channels per subject and 64000 samples per channel, the sampling frequency being 256 Hz). I considered 1-second (i.e., 256-sample) segments with 50 percent overlap, so in total we have 499 segments per channel per subject.
Then I decomposed each segment using wavelet decomposition and calculated statistics such as mean, variance, kurtosis, and skewness for each band per segment per channel per subject. But I am unable to form a feature vector that I can input into a classifier. Please help.
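One minimal way to stack such per-band statistics into a [segments x features] matrix for a classifier (a sketch only; the dimensions follow the description above, and the wavelet coefficients are stubbed with random numbers):
nSeg = 499; nCh = 64; nBand = 5; nStat = 4;   % dimensions as described above
F = zeros(nSeg, nCh*nBand*nStat);             % one row per segment
for seg = 1:nSeg
    feats = zeros(nCh, nBand, nStat);
    for ch = 1:nCh
        for b = 1:nBand
            coef = randn(64,1);               % placeholder for real wavelet coefficients
            m = mean(coef); v = var(coef);
            sk = mean((coef - m).^3) / v^1.5; % skewness
            ku = mean((coef - m).^4) / v^2;   % kurtosis
            feats(ch, b, :) = [m, v, sk, ku];
        end
    end
    F(seg, :) = feats(:).';                   % flatten to one feature row
end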
Relevant answer
Answer
Our experience suggests that cross-power spectral density measures (the Fourier transforms of the cross-correlations between the signals received at two distinct electrodes) are much more indicative of brain state than power spectral density measures derived from a single electrode alone.
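A hedged sketch of this idea on synthetic two-channel data (the Signal Processing Toolbox function cpsd is assumed to be available):
fs = 256;  t = (0:4*fs-1)/fs;
ch1 = sin(2*pi*10*t) + 0.5*randn(size(t));        % synthetic 10 Hz rhythm
ch2 = sin(2*pi*10*t + pi/4) + 0.5*randn(size(t)); % phase-shifted at channel 2
[Pxy, f] = cpsd(ch1, ch2, hamming(256), 128, 512, fs);
plot(f, abs(Pxy)), xlabel('Hz'), ylabel('|CPSD|') % shared rhythm shows as a peak near 10 Hz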
  • asked a question related to Statistical Signal Processing
Question
3 answers
In order to combat this?
Relevant answer
Answer
Youssef is on the right track.
The definition of resistor noise is as follows:
The available noise power P for a resistor is the product
Available noise power = kTB
where k = Boltzmann's constant = 1.38 x 10^-23 joules/kelvin,
T = the physical temperature of the resistor, and
B = the effective noise bandwidth of the measurement.
The mean-square noise voltage produced by a resistor is
Ave(V^2) = 4kTBR,
where R is the resistance in ohms.
I used to design very low noise amplifiers. I could connect a 50-ohm resistor to the amplifier input and measure the output noise power level. If I brought my soldering iron near the resistor, the power meter reading would go up in step with the physical temperature of the resistor.
If you want to be an expert in the electrical voltage or power produced by resistor thermal noise, it is important to learn the meanings of the terms effective noise bandwidth (it is not the 3 dB bandwidth except in very special circumstances) and available power.
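A short worked example of these formulas (numbers chosen for illustration): a 50-ohm resistor at room temperature measured over a 1 MHz effective noise bandwidth:
k = 1.38e-23;              % Boltzmann's constant, J/K
T = 290;                   % physical temperature, K
B = 1e6;                   % effective noise bandwidth, Hz
R = 50;                    % resistance, ohms
P_avail = k*T*B            % available noise power, ~4.0e-15 W
V_rms   = sqrt(4*k*T*B*R)  % open-circuit rms noise voltage, ~0.9 microvolts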
  • asked a question related to Statistical Signal Processing
Question
25 answers
In my structural model, the results indicate a significant positive direct effect, a significant negative indirect effect, and an insignificant total effect. How do I interpret this?
Relevant answer
Answer
Hi Satpal,
Let me try to reply using an example. Take class attendance (x) and grade (y). If we test the direct relationship between these two, we find a positive effect (the more you go to lectures, the higher your course grade).
Now, if we enter test preparation time as a mediator, we may see that the higher the class attendance, the less time you need to prepare for the test (-), while the more time you prepare, the higher your grade (+).
A minus times a plus gives a negative indirect effect. In this case, the direct and indirect effects can neutralize each other, and you can get an insignificant total effect...
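As a worked illustration with made-up numbers: if the direct effect is +0.30 and the mediator paths are a = -0.50 and b = +0.60, the indirect effect is a*b = -0.30, so the total effect is +0.30 + (-0.30) = 0.00 and will typically come out insignificant.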
  • asked a question related to Statistical Signal Processing
Question
4 answers
Generally, the p-value for the direct effect is obtained from the ordinary regression output, whereas the significance levels of indirect effects are obtained from bootstrapping (two-tailed significance). Can we use all p-values from bootstrapping?
Relevant answer
Answer
Hi, usually, when you calculate bootstrap p-values for direct effects, you get values similar (if not identical) to your "regular" p-values. So the answer is yes, you can use bootstrap p-values for direct effects as well. I attach here an answer from Linda Muthen on the Mplus discussion board:
With respect to the p-values
1.) should I use the p-values of the default estimation to decide on significance or the confidence intervals / p-values from the bootstrap?
Linda K. Muthen posted on Thursday, December 15, 2011 - 11:25 am 
It's really up to you to decide on which p-values to use. You would need to investigate how to compute one-sided confidence intervals. Mplus does not compute them.
In addition, when you test and report a mediating effect, you should also report bootstrap confidence intervals, because what really matters in bootstrapping is not significance but the width of your confidence intervals (narrower CIs are better, of course).
Good luck.
  • asked a question related to Statistical Signal Processing
Question
3 answers
How do these parameters give better results than other parameters?
Relevant answer
Answer
Both of these parameters are used to measure the effectiveness of ECG denoising algorithms. CCR measures how closely the denoised (processed) signal is related to the input ECG signal, and MSE measures the difference (error) between the processed and input ECG signals. Here the input ECG signal refers to a simulated ECG that is free of noise. Noise is added to this simulated ECG to produce a noisy ECG, the denoising algorithm is applied to the noisy ECG, and the processed ECG is then compared with the original simulated ECG. These parameters therefore measure the algorithm's ability to remove noise without affecting the crucial ECG segments.
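A minimal sketch of these two metrics on synthetic data (the signals and the toy moving-average "denoiser" are assumptions for illustration; movmean needs MATLAB R2016a+):
t = 0:0.001:1;
clean = sin(2*pi*1.2*t) + 0.25*sin(2*pi*12*t);   % stand-in for a noise-free ECG
noisy = clean + 0.2*randn(size(t));              % simulated noisy ECG
den   = movmean(noisy, 11);                      % toy denoiser
r = corrcoef(clean, den);  CCR = r(1,2)          % closer to 1 is better
MSE = mean((clean - den).^2)                     % smaller is better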
  • asked a question related to Statistical Signal Processing
Question
4 answers
Matlab Code or some Idea to create environment for component carriers. 
Relevant answer
Answer
Dear sir
If you have MATLAB 2013 or 2014, this page will be very useful for you.
Good luck
Best regards
  • asked a question related to Statistical Signal Processing
Question
1 answer
I guess time-series analysis can also be studied under the scope of statistical signal processing. Is this correct? Maybe someone could give me a hand in selecting introductory, intermediate, and advanced textbooks on multivariate time-series analysis. Thanks!
Relevant answer
Answer
Dear Marcelo,
please check the Wikipedia entry:
and some of the references at its end. Furthermore, I suggest looking at amazon.com for books on time-series analysis.
Good luck,
Reiner
  • asked a question related to Statistical Signal Processing
Question
8 answers
Resolution of direction of arrival.
The term resolution is often mentioned in connection with DOA estimation. It refers to the accuracy in determining the direction of the received signal and, when there is more than one source, in locating each one in the right direction.
Relevant answer
Answer
Assume there are two sources in space whose DOAs you want to find. The minimum angular separation between them at which you can still detect and find the DOA of each source without any ambiguity is called the resolution of the DOA algorithm.
Regards
Srinivas
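As a rough rule of thumb (assuming a uniform linear array at broadside): the classical Fourier beamwidth limit is about delta_theta = lambda/(N*d) radians, so N = 10 sensors at half-wavelength spacing (d = lambda/2) give delta_theta = 0.2 rad, roughly 11.5 degrees. Conventional beamforming cannot separate sources closer than this, while subspace methods such as MUSIC can resolve below this limit at sufficiently high SNR.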
  • asked a question related to Statistical Signal Processing
Question
8 answers
I believe it is possible and hope to be able to share real data soon. Need feedback.
Relevant answer
Answer
In our recently funded project, we will observe solar radiation as a natural source of random samples, similar to www.random.org and others. We would like to share our data for your randomness tests. The data will have (almost) exact statistics: mean, variance, and higher-order moments. Best.
  • asked a question related to Statistical Signal Processing
Question
1 answer
I am working on a hybrid UWB + Zigbee/WiFi platform, and I was wondering whether we can use the same reconfigurable transceiver in MATLAB. I am at the initial stage of the assignment. According to theory, any signal with a fractional bandwidth over 0.2 is considered UWB; that means if we can adjust our transceiver so that its fractional bandwidth is > 0.2, it is UWB, and < 0.2 means narrowband. Please guide.
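As a worked check with assumed numbers: fractional bandwidth is 2*(fH - fL)/(fH + fL), so a transceiver spanning fL = 3.1 GHz to fH = 10.6 GHz gives 2*7.5/13.7 = 1.09 > 0.2 (UWB), while a 20 MHz WiFi channel at 2.44 GHz gives roughly 0.02/2.44 = 0.008, clearly narrowband.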
Relevant answer
Answer
Hi, as far as Zigbee and UWB are concerned, I think yes, with minor changes you can use it. But for WiFi, keep in mind that it is narrowband, so you will need many changes in the receiver blocks.
Are you using RAKE or DHTR?
Let me know if you need more detailed help.
  • asked a question related to Statistical Signal Processing
Question
2 answers
Check whether it is unbiased and has minimum variance.
Relevant answer
Answer
A Monte Carlo simulation study can be performed to compare the estimation methods, using mean square errors (MSEs) and mean percentage errors (MPEs) to investigate the performance of the two estimators.
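A minimal sketch of such a Monte Carlo check (a toy comparison of the sample mean and sample median as estimators of a Gaussian mean; all numbers assumed):
mu = 2;  n = 50;  Nmc = 1e4;
est1 = zeros(Nmc,1);  est2 = zeros(Nmc,1);
for m = 1:Nmc
    x = mu + randn(n,1);
    est1(m) = mean(x);                 % sample mean
    est2(m) = median(x);               % sample median
end
bias = [mean(est1), mean(est2)] - mu   % both near 0: unbiased
mse  = [mean((est1-mu).^2), mean((est2-mu).^2)]  % the mean has the smaller MSE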
  • asked a question related to Statistical Signal Processing
Question
1 answer
If yes, then tell me why it is superior.
Relevant answer
Answer
The following paper explains the issues with existing multiresolution techniques and how the contourlet transform improves on them.
Hope it helps.
M. N. Do and M. Vetterli, "The contourlet transform: an efficient directional multiresolution image representation," IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091-2106, December 2005, doi:10.1109/tip.2005.859376.
  • asked a question related to Statistical Signal Processing
Question
16 answers
How can one estimate the expected value of an unknown signal at the discretization points of a linear dynamic system whose model is not known?
Relevant answer
Answer
Antonio, excellent: "The form of the kernel is not crucial." That hits the jackpot.
The candidate Hi = z/(z - e^(-aT)) is a good candidate, but on its own it does not solve the problem.
To solve the problem, one should build from the candidates a functional series H = H1*H2*H3...Hn in which each candidate appears as a sub-function.
Calculating such a functional series is not difficult.
So where is the problem?
The problem is to establish whether the functional series is being computed correctly or not, and that remains to be settled by mathematical proof.
  • asked a question related to Statistical Signal Processing
Question
2 answers
While transmitting a signal in highly noisy environments, increasing the signal power (i.e., increasing the PSD) will not affect the signal.
Relevant answer
Answer
Dear Dileep Md, the power spectral density (PSD) shows the strength of the variations (energy) of a signal as a function of frequency. So it is directly related to frequency.
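A quick sketch of estimating and plotting a PSD (synthetic signal; the Signal Processing Toolbox function pwelch is assumed):
fs = 1e3;  t = (0:fs-1)/fs;
x = sin(2*pi*100*t) + randn(size(t));               % 100 Hz tone in white noise
[Pxx, f] = pwelch(x, hamming(256), 128, 512, fs);   % Welch PSD estimate
plot(f, 10*log10(Pxx)), xlabel('Hz'), ylabel('PSD, dB/Hz')  % peak at 100 Hz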
  • asked a question related to Statistical Signal Processing
Question
12 answers
I want to find the registration point of a signal in order to process it further. Is the centroid technique okay? Can you suggest some other technique?
My purpose is to find the similarity and dissimilarity between two signals that look the same by eye. Any statistical method is also appreciated.
Thanks
Relevant answer
Answer
Hi Rajiv,
you may want to read about the idea of the cepstrum. It might be useful, especially if two signals coming from one device overlap to some degree (the second signal begins before the first has ended).
NB: yes, it's cepstrum, not spectrum.
Hope this helps - Leszek.
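A minimal base-MATLAB sketch of the idea (synthetic overlapped signals; all parameters assumed): a copy delayed by 0.2 s shows up as a peak near quefrency 0.2 s in the real cepstrum:
fs = 1e3;  t = (0:2*fs-1)/fs;
s = sin(2*pi*50*t) .* exp(-3*t);                % toy pulse
x = s + 0.6*[zeros(1,200), s(1:end-200)];       % second copy starts 0.2 s later
c = real(ifft(log(abs(fft(x)) + eps)));         % real cepstrum
[~, idx] = max(c(50:fs));                       % skip the low-quefrency region
delay = (idx + 48)/fs                           % ~0.2 s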
  • asked a question related to Statistical Signal Processing
Question
21 answers
I'm working on pupil diameter data evoked by emotions, and I get two signals, one for positive and one for negative emotions. I'm looking for a way to find the differences between them using the whole signal rather than a portion of it. I have used first and second derivatives, but the result is not clear and the differences are not obvious.
Relevant answer
Answer
You could compute the covariance, which measures how similar one signal is to another. For example, in MATLAB you can do that with the function cov().
See more about this in here:
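A short sketch of this comparison (the two signals are synthetic stand-ins for the positive- and negative-emotion recordings):
t = 0:0.01:10;
pos = sin(t) + 0.1*randn(size(t));       % stand-in for one pupil signal
neg = sin(t + 0.3) + 0.1*randn(size(t)); % stand-in for the other
C = cov(pos, neg)                        % 2x2 covariance matrix of the pair
r = corrcoef(pos, neg);  r(1,2)          % normalized similarity in [-1, 1]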
  • asked a question related to Statistical Signal Processing
Question
2 answers
Is it okay to simplify this as: whenever a delay element is applied to the output (e.g., y(n-1)), the function itself becomes IIR?
Relevant answer
Answer
Yes, it is okay to think of it this way. More generally, if the output y_n is formed only from the inputs x_n, x_{n-1}, ..., x_{n-N+1}, then the filter is FIR. Otherwise, if y_{n-1} and/or other delayed outputs are involved, the filter is IIR. In other words, a direct input-to-output structure gives an FIR filter and a feedback structure gives an IIR filter.
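A small illustration of the distinction (coefficients chosen for the example):
x = [1, zeros(1, 19)];              % unit impulse
y_fir = filter([0.5 0.5], 1, x);    % y(n) = 0.5x(n) + 0.5x(n-1): FIR, dies after 2 taps
y_iir = filter(1, [1 -0.9], x);     % y(n) = x(n) + 0.9y(n-1): IIR, decays forever
stem(0:19, [y_fir; y_iir].')        % finite vs. (truncated) infinite impulse response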