
# Digital Signal Processing - Science topic

All aspects of DSP in Radars, Acoustics, Medical and Physical equipment.
Questions related to Digital Signal Processing
Question
Hi everyone, I am an engineering student and I have started learning signal processing / signals & systems. One problem with self-learning is that there is no teacher to ask about the point where you are stuck.
I don't understand how the CT impulse function is transformed into the discrete-time impulse with amplitude 1. How does this process work?
I have problems with converting CT to DT, with sampling, and with periodization. I have watched several videos about it, but the actual mathematical operation of this "converter" is still not clear to me.
What I mean is: what is the operator that converts
"impulse(t) >> to impulse[n] with amplitude of 1"
or "x(t) . p(t) (impulse train) >> to x[n] as a sequence"?
x(t) . p(t) can be written as the sum over n of x(nT) . impulse(t - nT).
But this is still not equal to the sequence x[n], because it contains scaled impulses of infinite amplitude, right?
To state it again, my question is: how is x(t) transformed to x[n] mathematically? How does this sampling occur?
x(t) -----> x(nTs) (the process of sampling, which gives a discrete signal with infinitely many samples separated by the sampling time Ts)
To take the samples of x(t), you can simply multiply the function by Dirac(t - nTs). (This is one way to carry out this discretization/sampling process.)
PS: x(nTs) is actually a discrete signal, whereas x[n] is a digital signal. x(nTs) can be treated as x[n] once its amplitude is quantized.
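To make the answer above concrete, here is a minimal Python sketch of the sampling operation x[n] = x(n·Ts); the signal and the rates are assumed purely for illustration:

```python
import math

# Sampling a CT signal x(t) = sin(2*pi*f0*t) by direct evaluation at
# t = n*Ts, which is exactly what x[n] = x(n*Ts) means.
f0 = 5.0       # signal frequency in Hz (assumed)
fs = 100.0     # sampling frequency in Hz (assumed)
Ts = 1.0 / fs

def x_ct(t):
    """The continuous-time signal, available here as a formula."""
    return math.sin(2 * math.pi * f0 * t)

# The "operator" is just evaluation at the sample instants: no infinite-
# amplitude impulses appear, because the sequence reads off the weights
# x(n*Ts) of the impulse train rather than the impulses themselves.
x_dt = [x_ct(n * Ts) for n in range(20)]
```

The impulse-train picture and the sequence picture carry the same information: the sequence keeps only the areas (weights) x(nTs), discarding the Dirac impulses that merely locate them on the time axis.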
Question
I want to detect anomalies in streaming data. Using an FFT or DWT history, is it possible to detect anomalies on the fly (online)? It would help a lot if anybody could suggest some related resources.
Thanks.
Why not consider using the S-transform, as it combines the properties of the FFT and the wavelet transform?
Question
Researchers are now employing Wi-Fi sensing and Wi-Fi CSI to design and develop activity-detection, heartbeat-monitoring, and other devices. They train on data from one environment using machine learning or deep learning.
My issue is that, because Wi-Fi CSI is highly sensitive to the environment, how will such a system operate if the environment changes, for example, if I train it at home and then use it in my workplace room? Is it necessary to retrain for each environment before use?
Good answer by Aparna Sathya Murthy
Question
I have seen several formulas for determining the error threshold in different image-denoising algorithms. My question is: is there a systematic way to determine which error threshold works best for your data, a way motivated by the statistics of the data?
Some formulas are as follows:
C: constant
s: sigma (standard deviation of Gaussian noise)
n: signal dimension
sqrt: square root function
ln: natural logarithm
threshold = C*s*sqrt(n)
threshold = sqrt(s^2*n)
threshold = s*sqrt(C*ln(n)) % with C = 2 this is the universal threshold introduced in VisuShrink
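The three formulas above can be evaluated directly; a small Python sketch with assumed values for sigma, n, and C (and C = 2 for the VisuShrink universal threshold, which is a standard choice):

```python
import math

# Assumed values, purely for illustration.
sigma = 0.1      # noise standard deviation (s above)
n = 256 * 256    # signal dimension (number of pixels)
C = 1.5          # arbitrary constant for the first formula

t1 = C * sigma * math.sqrt(n)
t2 = math.sqrt(sigma**2 * n)             # equals sigma*sqrt(n), i.e. t1 with C = 1
t3 = sigma * math.sqrt(2 * math.log(n))  # universal threshold (C = 2, VisuShrink)
```

Note how differently the formulas scale: the first two grow like sqrt(n), while the universal threshold grows only like sqrt(ln n), which is why it stays usable for large images.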
I agree with this opinion
Question
I want to generate a sinusoidally varying impedance transmission line for the following specifications:
min. impedance 50 ohm, max. impedance 170 ohm, period 10 mm
You could do this in stripline or microstrip by changing the width of the track.
This would be called an impedance-modulated grating, and it will cause frequency-dependent reflections. Microstrip filters can be very similar to this, with small variations to the periodicity to improve the performance.
Question
I am curious about what happened to the atomizer software by Buckheit J. (http://statweb.stanford.edu/~wavelab/personnel/) and if it is available somewhere.
Sadly I found only 2 dodgy sites that require a login to download the MATLAB code. Does someone have any information on where to get it from?
Alternatively, if there are other toolkits that have implemented this code please let me know, it does not have to be MATLAB, any language is fine for me :).
Thank you.
Old news but I found it here (http://sparselab.stanford.edu/atomizer/) if anyone finds this post while searching for it.
I've also downloaded the ZIP for posterity. Anyone feel free to get in touch if you need it. I plan on keeping it forever.
Question
Besides positioning solutions, Global Navigation Satellite System (GNSS) receivers can provide a reference clock signal known as Pulse-per-Second (PPS or 1-PPS), a TTL electrical signal that is used in several applications for synchronization purposes.
1. Is the PPS physically generated through a digitally-controlled oscillator (or line driver) whose offset is periodically re-initialized by the estimated clock bias (retrieved by means of PVT algorithms)?
2. Are there any specific filters/estimators devoted to a fine PPS generation and control?
3. Does some colleague know any reference providing technical details on this aspect?
We worked on the design and implementation of a GPS receiver for the sake of extracting the 1-PPS. We realized the signal acquisition and tracking phases, in which we generate a copy of the carrier at reduced frequency and of the PN-code clock. By dividing these signals with appropriate division ratios, one can obtain the 1-PPS. Its stability remains to be evaluated: the last divider stage is not yet realized, and we plan to build it and then evaluate its accuracy and stability. Please see our papers.
Once achieved I will notify you.
Best wishes
Question
While reading a research paper I found that, to detect abnormality in a signal, the authors used a sliding window and, in each window, divided mean^2 by the variance. After searching on the internet I found a term called the Fano factor, but it is defined the other way around. Could anyone give an intuitive idea behind the quantity mean^2/variance?
It is dimensionless; it is a coefficient, so you can compare different signals.
I think it stems from the coefficient of variation (CV): it is the inverse of the square of the CV, and the CV is a standardized index of dispersion.
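To illustrate the point of the answers above, here is a small Python sketch of mean^2/variance (the inverse squared CV) over sliding windows; the data values and window length are made up:

```python
import statistics

# Made-up data: a steady segment followed by a wildly varying one.
signal = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 30.0, 31.0, 5.0, 29.5, 10.0, 9.9]
win = 6

def inv_cv_sq(window):
    """mean^2 / variance, i.e. 1 / CV^2 for the window."""
    m = statistics.mean(window)
    v = statistics.variance(window)  # sample variance
    return m * m / v

# A steady window gives a large value; a strongly varying (anomalous)
# window gives a small one, so dips in this statistic flag abnormal segments.
scores = [inv_cv_sq(signal[i:i + win]) for i in range(len(signal) - win + 1)]
```

The intuition: variance alone depends on the signal's scale, but dividing mean^2 by it gives a scale-free number, which is why the statistic can be compared across windows and across signals.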
Question
I am looking for a digital-image-processing research topic for a Master's thesis. It must not be based on machine learning.
I suggest the following Topics:
1. Image and Video reconstruction.
2. Video depth estimation.
3. Aesthetic assessment and grading of photos.
Regards
Question
Consider a digital control loop which consists of a controller C and Kalman filter K, which is used to fuse and/or filter multiple sensor inputs from the plant and feed the filtered output to the controller. The prediction (or also called time-update) step of the KF, and the analysis and tuning of the control loop utilize a discretized state-space model of the plant, defined as
x_{k+1} = F * x_k + G * u_k,
y_k = H * x_k
where F is a transition matrix, G is the input gain matrix and H the measurement output matrix. For now I will ignore the process and measurement noise variables.
According to many textbooks on digital control and state estimation (for example "Digital Control of Dynamic Systems" by Franklin et al.), u_k = u(t_k), x_k = x(t_k) and y_k = y(t_k) are the control input, state and measurement output of the plant, which seem to be available at the same point in time, t_k. This would mean that the output of u_k from the DAC and the sampling of y_k happen at the same moment in time, t_k. However this does not seem to hold for some classical implementations. Consider a typical pseudocode of a control loop below:
for iteration k = 1:inf
measurement update of KF x_k
compute u_k
output u_k through DAC
time update of KF x_{k+1}
wait for next sampling moment t_{k+1}
end for
Ignoring the time durations of the DAC and ADC processes, the description above will introduce error in the prediction step of the KF, because it is assumed that the value u_k is actuated at the same moment of time that y_k is sampled, t_k. However, due to the time delay introduced by the computations of the update step of the KF and of the controller, this is not the case. If we define s_k to be the time when the value u_k is actuated, then clearly t_{k+1} - s_k < T, where T is the sampling period. The prediction step then no longer computes the predicted state correctly, because it either a) uses the old actuation value u_{k-1} or b) uses the newly actuated u_k, and in both cases the model assumes that the time between actuation and sampling equals the sampling period, which is not the case.
This leads me to believe that the control value u_k should be actuated at time t_{k+1}, to keep consistency with the sampling period and the prediction model.
Also consider the case when the KF's prediction and update steps are executed before the controller during iteration k. Then the prediction step clearly makes use of u_{k-1} to compute a time update x_{k+1} of the state. This also seems to contradict the original definitions.
So with all these assumptions laid forward, I would like to know what are the correct sampling and actuation times, and why such ambiguity exists in modern literature about such hybrid systems.
NOTE: Some of you may say that the definition holds for small sampling periods and when the computations are fast. However I consider the general case where sampling periods may be very large due to the computations involved in the control loop.
nice question.
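One way to see the issue in the question numerically is with a scalar toy model (all numbers assumed, not from any textbook): if the value computed at t_k is only actuated one period late, the plant really obeys x_{k+1} = F·x_k + G·u_{k-1}, and a prediction that assumes u_k acts at t_k is biased, while a delay-aware prediction is exact:

```python
# Assumed scalar plant and input sequence, purely for illustration.
F, G = 0.9, 0.5
u = [1.0, -1.0, 2.0, 0.5, -0.5, 1.5]

# "True" plant with a one-period actuation delay (u before k=0 taken as 0).
x_true = [0.0]
for k in range(len(u)):
    u_applied = u[k - 1] if k > 0 else 0.0
    x_true.append(F * x_true[-1] + G * u_applied)

# Naive prediction that assumes u_k is applied at t_k.
x_naive = [0.0]
for k in range(len(u)):
    x_naive.append(F * x_naive[-1] + G * u[k])

# Delay-aware prediction, using the input actually in effect over [t_k, t_{k+1}).
x_aware = [0.0]
for k in range(len(u)):
    x_aware.append(F * x_aware[-1] + G * (u[k - 1] if k > 0 else 0.0))
```

In practice a fractional delay (t_{k+1} - s_k < T) can be handled similarly by augmenting the state with the previous input and splitting G into two parts, but even this full-period version shows why the textbook indexing and the implementation order must be kept consistent.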
Question
I have designed an FIR filter which discards high-frequency noise. I have tested the functionality through simulation (MATLAB). Now I wish to quantify how good my system is and obtain results that I can compare with the existing literature. Can someone describe a few parameters (preferably with mathematical definitions) on the basis of which I can do this?
Welcome!
Filters, whether analog or digital, are characterized by a frequency transfer characteristic |H(e^jwTs)|, which can be obtained directly from H(z), where z = e^jwTs is the complex frequency variable.
Another method is to input a sine wave with known amplitude Vi and frequency, sample it and digitize it with an A/D converter, apply it to the filter, and then convert the resulting digital output back into an analog signal.
Then measure its peak value Vo, and calculate the magnitude of the transfer characteristic |H(f)| = Vo/Vi. Vary the frequency while keeping the sampling frequency fs constant, over the required range up to fs/2, then draw the transfer characteristic. You can then see the cut-off frequency and the attenuation at high frequencies.
Best wishes
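The sine-measurement procedure described above can be sketched in a few lines of Python; the stand-in filter (a 5-tap moving average) and all parameters are assumed for illustration:

```python
import math, cmath

N = 5          # moving-average length (assumed stand-in FIR filter)
fs = 1000.0    # sampling frequency, Hz (assumed)
f = 50.0       # test-tone frequency, Hz (assumed)
w = 2 * math.pi * f / fs

# Input sinusoid with amplitude Vi = 1, long enough to ignore the transient.
n_samples = 2000
x = [math.sin(w * n) for n in range(n_samples)]

# Moving-average FIR: y[n] = (x[n] + ... + x[n-N+1]) / N.
y = [sum(x[max(0, n - N + 1):n + 1]) / N for n in range(n_samples)]

# "Measured" gain: output peak in steady state over input amplitude 1.
Vo = max(abs(v) for v in y[500:])

# Analytic gain |H(e^jw)| from the filter coefficients, for comparison.
H_analytic = abs(sum(cmath.exp(-1j * w * k) for k in range(N))) / N
```

Repeating this for a grid of frequencies up to fs/2 and plotting Vo/Vi against f traces out exactly the |H(f)| curve the answer describes.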
Question
I am dealing with vibration signals acquired from different systems. They are mostly non-stationary and in some cases cyclostationary. What are less expensive methods for removing noise from the signals? They can be parametric or non-parametric.
Ijaz Durrani
Thank you so much for providing me with useful information.
Question
Dear community, I want to load my dataset from the PhysioNet Sleep-EDF database and separate the list of signals from the list of labels, so that I can apply feature extraction. I used MNE-Python, but it only lets me create epochs for one subject at a time. Any help, please?
MNE-Python data structures are based around the FIF file format from Neuromag, but there are reader functions for a wide variety of other data formats. MNE-Python also has interfaces to a variety of publicly available datasets, which MNE-Python can download and manage for you.
See if this helps
Question
What do the parameters of a BER analyzer mean?
The reliability of data transmission is characterized by the probability that a transmitted data bit is corrupted. This indicator is usually referred to as the Bit Error Rate (BER). Typical BER values for communication channels without additional error protection are 10^-4 to 10^-6, and about 10^-9 in optical fiber. A BER of 10^-4 indicates that, on average, one bit out of 10,000 is corrupted. The Q-factor of the receiving system is determined from the expression
Q = G_A / T_C,
or, in logarithmic form:
Q[dB] = G_A[dB] - 10·lg(T_C).
It is the Q-factor of the receiving system that determines the signal-to-noise ratio (C/N) at the output of the low-noise converter (LNC or LNB). It is important to note that the final C/N value does not depend on the LNC gain.
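The BER figures quoted above can also be tied to a Q-factor through the standard relation for binary signalling, BER = 0.5·erfc(Q/√2); this formula is standard textbook material, not taken from the answer itself:

```python
import math

def ber_from_q(q):
    """Standard Q-factor to BER relation for binary signalling."""
    return 0.5 * math.erfc(q / math.sqrt(2))

# Q of roughly 3.7 corresponds to the ~1e-4 quoted for unprotected channels,
# and Q of roughly 6 to the ~1e-9 quoted for optical fiber.
ber_low_q = ber_from_q(3.7)
ber_high_q = ber_from_q(6.0)
```

This is why BER analyzers often report Q directly: a small change in Q maps to orders of magnitude in BER.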
Question
Hello All
I'm trying to synchronize an IMU sensor with a motion capture system. Is this possible? If so, how?
Alireza
Hi Alireza,
First, what type of motion capture system do you have? Most of them have their own synchronization box, which serves as trigger-in or trigger-out for external devices, including IMUs. However, if your IMU is a standalone sensor without any base station to connect to, you can try to synchronize it using a movement that causes a spike in the IMU signal, such as jumping.
Hope this helps you
Question
I have a data set of 10 subjects, each of whom repeated a task five times.
This data set includes two simultaneously measured parameters. Using an MLP network, I intend to build a regression between the two parameters.
Do I have to feed the data for each subject to a separate network and average the RMSE over all subjects, or can all ten subjects be entered into one network (10 inputs) for regression?
Question
I want a MATLAB code on " VIDEO WATERMARKING " using Discrete Wavelet Transform (DWT).
If it is your own p-code and you lost the m-code, then it is very unfortunate. However, being able to decode it would be very unfortunate for all the others who count on p-coding to protect their code while sharing it. So I still think it should not be possible.
Question
How should I determine a threshold on received signal power (underwater acoustic signals) so that the signal can be decoded correctly? I understand that the threshold may vary from device to device, but I want a value, for research purposes, that is independent of any device.
Regards
I am not able to visualize your experiment. Is the plot between the I and Q components, or between the vector components of the particle-velocity channel?
Question
I'm curious to know if one can find the force of the collision by analyzing the audio or extracting features from the audio signal that estimates the force.
Question
Recently I tried to implement the NLM denoising filter. When I use this algorithm I cannot reproduce the results presented in the paper; the accuracy difference was up to 2 dB.
Note that for the similarity measure I used a 7x7 patch without a Gaussian kernel over the patch.
The main problem is that this kernel is introduced in the theoretical part of the paper but is never given a numerical value (or I missed it completely in this or other papers where the NLM algorithm is discussed).
I tried a kernel sigma of 0.65. With it, my results reach approximately the same accuracy as the paper's (but still not exactly the same, off by 0 to 0.3 dB depending on the noise level).
So, can somebody tell me what is my glitch?
A. Buades, J.-M. Morel, "A non-local algorithm for image denoising."
G. Liu, H. Zhong, and L. Jiao, "Comparing Noisy Patches for Image Denoising: A Double Noise Similarity Model."
Dear Mikhail Mozerov, in my opinion the best way is to contact the authors of the research papers and request their code for reproducing the results, as mentioned by Bruno Martin.
Question
I want to do texture segmentation. I have used 2-D wavelet decomposition and then calculated energy as the feature vector. I have calculated the feature vector of each pixel of a multi-textured image. Now, from these feature vectors, how can I achieve segmentation of the multi-texture image?
It is very difficult to perform texture segmentation using feature vectors representing single pixels. I suggest you take advantage of spatial redundancy and use spatial tiling. You may then use a clustering algorithm to cluster the resulting vectors.
Question
Hi,
I am trying to generate an eye diagram for a particular signal along with a defined eye mask, but I cannot find any reference on how to integrate an eye mask with the MATLAB eye-diagram object. Does anyone have any information?
The first figure is the MATLAB eye diagram I generated, to which I would like to add an eye mask so that it looks like the second figure.
Thank you Muhammad Ali for the suggestion. But I am afraid I am also looking for a way to represent the eye mask pictorially along with the generated eye diagram, to get a quick glance at the performance. I have now uploaded the expected waveform.
Question
I need an accurate, real-time algorithm to register optical and SAR images. Can anyone help?
Question
Hello everybody,
I have observed somewhat confusing behavior in my system response (or maybe I am missing something).
I have a transfer function in the s-domain, converted to the z-domain in MATLAB with a 1 kHz sampling frequency at the time of conversion. When I embed this discrete version of the transfer function in my system, which also samples at 1 kHz, the system works as expected (i.e., the step response is the same as that of the s-domain analog controller).
But if I increase the sampling frequency of my system while using the SAME discrete transfer function, converted with the SAME conversion sampling frequency of 1 kHz, the step response becomes faster.
My question is: why does the discrete system respond faster than the analog one, even though the transfer functions of the analog and discrete controllers are the same?
As I understand it, the step response of a transfer function should remain the same whether the function is in the s-domain or the z-domain, shouldn't it?
Does this mean digital controllers can speed up the response of the same transfer function just by changing the sampling frequency of the system?
It is important not to confuse the system sampling frequency of my microcontroller, at which it collects samples from the ADC, with the sampling frequency that I used as a parameter to convert the s-domain transfer function to the z-domain transfer function.
I thank you all for your time.
Regards,
Iftikhar Abid
The conversion from the s-domain to the z-domain can be accomplished by using the bilinear transformation.
- The transformation from s to z (in normalized form):
s = (z - 1) / (z + 1)
- And frequency prewarping:
w_analog = tan(wd/2),
where wd = 2*pi*f/fs.
As one sees, if one changes fs, one has to change w_analog as a consequence of prewarping.
Best wishes
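The prewarping relation quoted above can be evaluated directly to see the fs-dependence; a short Python sketch using the same normalized form, with an assumed design frequency:

```python
import math

def prewarped(f, fs):
    """Normalized prewarped analog frequency: tan(wd/2), wd = 2*pi*f/fs."""
    wd = 2 * math.pi * f / fs
    return math.tan(wd / 2)

# The same design frequency (assumed 100 Hz) maps to different analog
# frequencies when fs changes, which is why a controller discretized at
# 1 kHz behaves differently when the loop is run at another rate.
wa_1k = prewarped(100.0, 1000.0)    # conversion done at fs = 1 kHz
wa_10k = prewarped(100.0, 10000.0)  # loop run at fs = 10 kHz
```

In other words, the coefficients of the z-domain controller carry the 1 kHz mapping inside them; running the same difference equation at 10 kHz rescales every feature of the frequency response upward, so the step response speeds up.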
Question
I am working with a synthetic aperture radar system using an FMCW signal, which transmits and receives continuously. The received signals are dechirped and their type is double (not complex). I want to separate the received signal of each pulse and prepare it for range and cross-range compression.
In some instances I have seen the Hilbert transform applied to the signals to generate an analytic complex signal, but I don't know the main reason for this, and in many cases it doesn't work appropriately!
I attached part of the received and transmitted signals.
Samples can conveniently be held in a 2-D array: samples within each FM sweep (rows) vs. samples from successive sweeps (columns). You will first want to focus the array by applying phase offsets to samples as a function of their range and location within the synthetic array. Then a 2-D FFT of the data will yield the cross-range vs. range map. The FFT of the slow-time samples from successive sweeps gives the Doppler shift of a point, which is a function of its cross-range location. The FFT of the fast-time samples within any given sweep gives the beat frequency, which is a function of its range. This will get you a basic image/map.
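A minimal 1-D illustration of the fast-time step described above: for a dechirped FMCW return, the beat-frequency bin maps to range via R = c·f_b·T/(2B). The sweep bandwidth and duration come from the question; the sample count and target bin are assumed, and a plain DFT stands in for the FFT:

```python
import cmath, math

c = 3e8
B = 330e6     # sweep bandwidth (from the question)
T = 20e-3     # sweep duration (from the question)
N = 64        # fast-time samples in one sweep (assumed)
fs = N / T    # effective sample rate over one sweep

k_true = 7                 # place the beat tone exactly on bin 7 (assumed)
f_b = k_true * fs / N      # the corresponding beat frequency
x = [cmath.exp(2j * math.pi * f_b * n / fs) for n in range(N)]

# Plain DFT of the fast-time samples (an FFT computes the same thing faster).
X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]
k_peak = max(range(N), key=lambda k: abs(X[k]))

# Map the detected beat-frequency bin to target range.
R = c * (k_peak * fs / N) * T / (2 * B)
```

This also shows why a complex (analytic) signal is convenient: with real samples the spectrum is conjugate-symmetric, so positive and negative beat frequencies would be indistinguishable without extra care.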
Question
In Python, which way is best to extract the pitch of the speech signals?
I extracted pitches via "piptrack" in "librosa" and "PitchDetection" in "upitch", but I'm not sure which of these is the most accurate.
Is there any simple and realtime or at least semi-realtime way instead of these?
There are many tools for extracting pitch, but none of the fully automatic algorithms I know can guarantee accuracy and consistency of the extracted f0, especially in terms of continuous f0 trajectories in connected speech. An alternative is to allow human operators to intervene where automatic algorithms fail. ProsodyPro (http://www.homepages.ucl.ac.uk/~uclyyix/ProsodyPro/) provides such a function. It is a script based on Praat, a program that already has some of the best pitch-extraction algorithms. ProsodyPro allows human users to intervene in difficult cases by rectifying the raw vocal pulse markings. It thus maximizes our ability to observe continuous f0 trajectories.
Question
Hi,
I have to simulate the performance of a multi-cell massive MIMO system for both conventional pilot reuse and pilot reuse 3. Does someone have a GitHub link? I will be really thankful!
Kind regards,
Hi, I am able to write code for a single-cell massive MIMO system with MRC, ZF, and MMSE receivers. Did you get the MATLAB code for multi-cell massive MIMO? Please send it to me: kiranec121@gmail.com
Question
The radar system that I'm working with contains a linear FMCW S-band (2.26-2.59 GHz) signal with a bandwidth of 330MHz and a pulse duration of 20ms. Also, the received signal is dechirped.
At least 44 samples. Given the power of DSPs these days, be safe and go for significant oversampling.
The useful range will depend on the transmit signal strength (Tx EIRP - includes antenna gain), the target radar cross-section and the receiver sensitivity (noise figure, LO phase noise, etc, Rx antenna gain). Google "radar range equation" and have a read. For good detection, you will need about 10dB received signal to noise ratio or more in the return signal. Use the radar range equation to estimate this and base your receiver bandwidth accordingly. Consider using a range amplitude correcting highpass filter (f^2 slope to correct for amplitude reduction for far-away targets) as well.
Cheers and have fun
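The radar range equation mentioned above is easy to evaluate; a Python sketch in which every parameter value is assumed purely for illustration (not taken from the question):

```python
import math

Pt = 1.0               # transmit power, W (assumed)
G = 10 ** (15 / 10)    # antenna gain, 15 dBi, same antenna for Tx and Rx (assumed)
lam = 3e8 / 2.4e9      # wavelength near the S-band edge (assumed)
rcs = 1.0              # target radar cross-section, m^2 (assumed)
k = 1.38e-23           # Boltzmann constant
T0 = 290.0             # reference temperature, K
Bn = 1e3               # receiver noise bandwidth, Hz (assumed)
F = 10 ** (6 / 10)     # receiver noise figure, 6 dB (assumed)

def snr_db(R):
    """Single-pulse SNR from the monostatic radar range equation."""
    snr = (Pt * G * G * lam**2 * rcs) / (
        (4 * math.pi) ** 3 * R**4 * k * T0 * Bn * F)
    return 10 * math.log10(snr)
```

Sweeping R until snr_db(R) drops to the ~10 dB detection floor mentioned above gives a first estimate of the useful range; note the R^4 law means each doubling of range costs about 12 dB.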
Question
Let's say I have 10 randomly generated bits (0s and 1s) in MATLAB. From my other parameter calculations, I have about 5 samples per bit (each bit is replicated 5 times), so my total data vector is 1 x 50. After this I perform BPSK modulation and multiply the data (baseband) with a carrier wave to form a data signal (passband). Then I add AWGN and again multiply the signal with the carrier (from passband back to baseband).
After the above process is completed I have to make the decision to get back the 0s and 1s, to compare with the initially generated bits. How should I proceed?
Does the code below make sense, or am I missing something?
data = randi([0 1], 1, 10);
...
...
...
(steps as explained above)
%----bits retrieval------
variable = zeros(1, length(data));
for i = 1:length(data)
    s = sum(final((i-1)*5+1 : i*5));   % sum the 5 samples belonging to bit i
    if s > A
        variable(i) = 1;
    else
        variable(i) = 0;
    end
end
%--------------------------code ends-----------------------------
here,
A is the amplitude of the carrier wave, and
'final' is the vector on which I perform the decision to check whether a bit is 1 or 0.
Thank you so much Vincent Savaux and Marwah Abdulrazzaq Naser for your answers; this has surely helped.
Question
Hello all! I have met a problem which has confused me for days.
Assume that I input a 100x100 mask into the SLM and project it onto the sample; then I use the CCD camera to collect the image. Usually, the image from the CCD (e.g., 630x630) is bigger than that of the simulation (100x100).
How should I process the captured data so that it is consistent with the size of the original mask? I need to reconstruct the image with some algorithms, and I find that direct downsampling is not effective. Thanks in advance!
To scale the size down from 630x630 to 100x100, there is a simple MATLAB function, 'imresize', which does this operation. The image can be resized in three ways according to 'imresize': 1. nearest-neighbor interpolation, 2. bilinear interpolation, and 3. bicubic interpolation.
'imresize' does not simply eliminate pixels by downsampling: it uses the complete image data and interpolates the scaled image pixels based on all the pixels present in the image. You can better understand these interpolations by going through the link provided.
thank you.
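As a toy illustration of the difference between plain decimation and a resize that uses all pixels (this is a block-averaging sketch, not MATLAB's imresize algorithm, and the sizes are scaled down to 6x6 -> 2x2):

```python
# Toy 6x6 image with pixel value 6*row + col, so the averages are easy to check.
img = [[(r * 6 + c) * 1.0 for c in range(6)] for r in range(6)]

def block_average(image, out_rows, out_cols):
    """Downsample by averaging each block (sizes must divide evenly)."""
    rows, cols = len(image), len(image[0])
    br, bc = rows // out_rows, cols // out_cols
    out = []
    for i in range(out_rows):
        row = []
        for j in range(out_cols):
            block = [image[i * br + u][j * bc + v]
                     for u in range(br) for v in range(bc)]
            row.append(sum(block) / len(block))  # every pixel contributes
        out.append(row)
    return out

small = block_average(img, 2, 2)
```

Unlike taking every k-th pixel, each output value here depends on the whole block, which is the property that makes interpolating resizes more robust for reconstruction than naive decimation.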
Question
I am working on transmitting an ECG signal over a wireless body area network. According to the IEEE 802.15.4 standard, I am using a ZigBee transceiver at 2.4 GHz. A complex baseband equivalent channel model is used.
The steps are as follows :
1.      Signal Compression
2.      Quantization
3.      Coding
4.      OQPSK modulation using the matlab function (oqpskmod)
6.      Equalization
7.      OQPSK Demodulation using the matlab function (oqpskdemod)
8.      Decoding and Dequantization
*  According to the IEEE 802.15.4 standard, a pulse-shaping step is performed in the transmitter after the OQPSK modulation step.
I didn't perform this pulse-shaping process, and I obtained reasonable results. Is it necessary to perform this step?
If yes, how should the receiver be modified?
Your design should follow the IEEE 802.15.4 standard. Since pulse shaping is used in the standard and ZigBee utilizes IEEE 802.15.4, you cannot ignore it.
Question
How can I reduce the offset error of a P controller?
By increasing the value of the proportional gain Kp, or by resetting the operating point to obtain zero steady-state error with the P controller.
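The effect of increasing Kp can be seen with a small simulation; the plant (an assumed first-order system, not from the question) shows the offset shrinking with Kp but never vanishing:

```python
def steady_state(Kp, r=1.0, a=1.0, b=1.0, dt=0.01, steps=10000):
    """Euler simulation of x' = -a*x + b*u with pure P control u = Kp*(r - x)."""
    x = 0.0
    for _ in range(steps):
        u = Kp * (r - x)            # proportional control law
        x += dt * (-a * x + b * u)  # one Euler step of the plant
    return x

# Analytically x_ss = Kp*b*r / (a + Kp*b), so the offset is
# r - x_ss = a*r / (a + Kp*b): it decreases with Kp but is never exactly zero.
```

This is why, beyond some Kp (limited by stability and actuator saturation), removing the offset entirely requires integral action or an operating-point reset rather than more gain.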
Question
There are two signals; consider them two vectors over equal time.
The lengths of the two signals are the same.
Both signals are continuous and random.
They have the same frequency and wavelength but different amplitudes.
Visually they look similar.
How do I mathematically detect a change of periodicity?
Something like: both signals are now pointing north; both signals are now pointing south.
If two signals have the same frequency and occur during the same time interval, then they can be considered one signal. Adding two such signals is like adding two vectors having the same direction. There is no way to recover the original signals from the sum.
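The vector picture in the answer above is the phasor identity: two sinusoids of the same frequency always add to a single sinusoid of that frequency, with amplitude and phase given by the complex (phasor) sum. A quick check with assumed amplitudes and phases:

```python
import cmath, math

A1, phi1 = 2.0, 0.3    # assumed amplitude and phase of signal 1
A2, phi2 = 1.5, -1.1   # assumed amplitude and phase of signal 2

# Phasor sum gives the amplitude and phase of the combined sinusoid.
p = A1 * cmath.exp(1j * phi1) + A2 * cmath.exp(1j * phi2)
A_sum, phi_sum = abs(p), cmath.phase(p)

# Verify against the time-domain sum at an arbitrary instant.
w, t = 2 * math.pi * 50.0, 0.0123
direct = A1 * math.cos(w * t + phi1) + A2 * math.cos(w * t + phi2)
via_phasor = A_sum * math.cos(w * t + phi_sum)
```

Comparing the instantaneous phase angles of the two phasors (e.g., via their analytic signals) is one concrete way to detect the "pointing north / pointing south" alignment asked about in the question.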
Question
I was studying an algorithm concerning a hybrid frequency estimator which is used for determining off-nominal frequency in a signal. I want to know how it is used to detect the off-nominal frequency, and how the IIR filter helps with that.
You are not giving enough information here.
One method to find whether the received signal's carrier frequency is off-nominal, and to estimate the frequency deviation, uses a frequency discriminator. It works as follows:
1. Pass the signal to a circuit (e.g. an IIR filter) that produces a frequency-dependent phase shift.
(In the old FM days we used an LC resonant circuit producing a 90 degree phase shift at the nominal
frequency. If you do this digitally, the LC resonant circuit becomes an IIR filter.)
2. Multiply the phase-shifted signal with the non-shifted signal, and low-pass filter.
3. Adjust the IIR filter such that the phase shift is 90 degrees at the nominal frequency.
4. And that's it. You get an output signal which is proportional to the signal amplitude and to the carrier frequency shift.
Best Regards,
Henri.
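The discriminator steps above can be sketched numerically; here a fixed delay stands in for the 90 degree phase-shifting filter (a simplification of the IIR filter Henri describes), and all parameters are assumed:

```python
import math

fs = 8000.0   # sample rate, Hz (assumed)
f0 = 1000.0   # nominal frequency, Hz (assumed)
d = 2         # 2 samples = quarter period at f0, i.e. a 90 degree shift

def discriminator_out(f, n=8000):
    """Average of signal times its delayed copy (the low-pass step)."""
    x = [math.sin(2 * math.pi * f * k / fs) for k in range(n)]
    prod = [x[k] * x[k - d] for k in range(d, n)]
    return sum(prod) / len(prod)

# Output ~ cos(2*pi*f*d/fs)/2: zero at the nominal frequency, positive
# below it and negative above it, so its value estimates the deviation.
```

Near f0 the output is approximately linear in the frequency offset, which is exactly the property that makes the multiply-and-low-pass structure usable as a frequency estimator.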
Question
How can one implement adaptive dictionary reconstruction for compressed sensing of ECG signals, and how can one analyze the overall power consumption of the proposed ECG compression framework as it would be used in a WBAN?
This is covered in a machine learning class on YouTube.
Question
Hello, I am doing a project where I must implement an OFDM simulation with an MMSE estimator for the Rayleigh channel. Although the estimation seems tolerable, I'm getting no improvement in BER, even for a simulation of 10000 symbols.
I have attached the paper I'm trying to implement, with the MATLAB code and some representative figures, so you can see exactly what I'm doing.
I can't tell whether I'm missing something important when estimating the channel, when using specific pilot symbols, or somewhere else.
Anastasia
Hello Tesla,
I have the same question.
Question
I am taking temperature, pressure, and humidity data from a weather sensor and want to predict the data with an adaptive filter used as a predictor model, using the Least Mean Squares (LMS) algorithm to predict each quantity separately.
I have just started working on adaptive filters, so any suggestions would be helpful.
Years ago, I have studied energetic optimization and adaptive control of dissolved oxygen concentration in aerobic fermenters. I was able to estimate the parameters of the usual KLa correlation (on-line and on real-time) through recursive least squares (RLS) with forgetting factor. To improve performance, sinusoidal disturbance was imposed to stirring rate and air flow.
Question
If a signal is real-valued, its DFT is known to be Hermitian-symmetric.
How does the DFT behave when a signal is complex-valued and Hermitian-symmetric or Hermitian-antisymmetric? Why?
Thank you.
In order to understand in detail, I would recommend this chapter:
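The symmetry in the question can also be checked numerically in a few lines: by the duality of the DFT, a Hermitian-symmetric input produces a purely real spectrum (and, analogously, a Hermitian-antisymmetric input a purely imaginary one). A Python sketch with assumed sample values:

```python
import cmath

# Build a complex Hermitian-symmetric sequence: x[(-n) mod N] == conj(x[n]).
N = 8
x = [0j] * N
x[0] = 1.2 + 0j                              # x[0] must be real
x[1], x[2], x[3] = 0.5 + 0.3j, -0.2 + 0.7j, 0.1 - 0.4j
x[4] = -0.8 + 0j                             # x[N/2] must be real for even N
for n in range(1, N // 2):
    x[N - n] = x[n].conjugate()              # enforce the symmetry

# Plain DFT.
X = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
     for k in range(N)]

max_imag = max(abs(Xk.imag) for Xk in X)     # ~0: the spectrum is purely real
```

The reason is the real/Hermitian duality of the DFT: a real signal has a Hermitian spectrum, so by exchanging the roles of time and frequency, a Hermitian signal has a real spectrum.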
Question
I have complex values of a periodic signal which is clearly visible in the time domain, but I want to find its frequency content. The FFT is not working for me and I am looking for alternative ways to solve the problem.
A method other than the DFT or FFT is to use a band-pass filter bank. As the filter bandwidth decreases and its sharpness increases, it can resolve the frequency components in the signal. The DFT and FFT are equivalent to using a filter bank.
In order to get a correct representation in the frequency domain you have to properly sample your signal, with a sampling frequency fs >= 2*fmax, where fmax is the highest frequency contained in the waveform. The other condition is that you have to take a sufficient length of the waveform, i.e. a time window long enough to resolve the lowest frequency in the signal. Increasing the sampling frequency or the window time T will increase the size of the FFT transform and the computational load.
These are the two parameters which control the resulting FFT analysis.
Best wishes
Question
I am trying to write my dissertation on automatic quantization of algorithms. These algorithms are written as C functions representing the behaviour of a VLSI circuit. The main purpose of the dissertation is to maximize the number of bits removed from the word-lengths of the signals describing a VLSI circuit, by finding a sub-optimal combination which satisfies a rule: any combination must cause an error less than or equal to an error bound.
In order to find a suitable combination, close to the error bound, that maximizes the removed bits, my dissertation supervisor suggested using local search algorithms. Since the quantization will be executed on a GPU (CUDA), I have found that differential evolution and the cellular genetic algorithm are suitable for a SIMD machine and easy to implement and execute in parallel. The constraints of the problem are: fixed-point quantization, error produced at the outputs = fitness function, and word-lengths from 1 to 22 bits (integer values). So far I have implemented the canonical DE (DE/rand/1/bin) and cGA (NEWS, 2-D toroidal grid) on CUDA for any number of signals describing the VLSI circuit.
Before testing the algorithms with real VLSI circuits, I am testing them with a synthetic benchmark to confirm the related work and suggestions made about DE. This benchmark returns an output error (1-output circuit) based on this formula: sum over j elements of [ (element_j_of_individual_i - element_j_of_local_optimum) * 2 * factor ], with the factor selected randomly for each element from 0.5 to 0.9. Hence, if an individual of the population exactly matches the pre-selected local optimum, the error returned by this fitness function is 0. Any individual that has at least one element below the corresponding element of the local optimum is discarded (and, if it belongs to the initial population, regenerated until a valid individual is obtained).
Using this schema, the parameters of the benchmark are:
- population size of 5D, 10D, 15D and 20D (with D = number of signals describing the VLSI circuit), with each element in the population set randomly from 16 to 22 for each execution. ex: for D = 5, individual_number_0 = {14, 17, 21, 19, 20}
- randomly pre-selected local optimum from these values: {6,7,8,9,10}. ex: for D = 5, local_optimum = {7, 10, 6, 9, 6}
- ten executions, to somewhat mitigate the bias caused by a pre-selected local optimum
- F = 0.5 and CR = 0.1
- the algorithm stops when the local optimum is found, or when all the generated offspring are invalid and/or no better than their parents
With this set-up, I have found that for 50, 100 and 150 signals, DE found the exact pre-selected local optimum for populations of 5D, 10D, 15D and 20D in all ten executions (if requested, I can upload the iteration counts, timings, etc.). For 200 signals, DE only found the local optimum for 10D, 15D and 20D. For 250 signals, only one of the ten executions for 20D found the local optimum; no execution for 5D, 10D or 15D found it. I have tried to relax the termination condition of the search by setting an error bound somewhat above 0 (such as 50, 70 or 100) to find sub-optimal solutions for population sizes of 5D, 10D, 15D and 20D (D = 250). Even with the relaxed termination condition, the algorithm stops without finding the local optimum.
I have found the Q&A from Stephen Chen, 'What is the optimal/recommended population size for differential evolution?', but I do not know a priori whether those answers fit my needs, because I would like to use DE for VLSI circuits with up to 400 signals in a first approach.
(Edit): added some examples: one randomly initialized individual in the initial population and one randomly pre-selected local optimum.
Papers report that DE is suitable for optimization problems ranging from low-dimensional spaces to problems with many decision variables.
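For readers following along, the synthetic benchmark described in the question can be sketched as follows (a minimal Python sketch based on my reading of the description; the function and variable names are made up for illustration):

```python
import random

def synthetic_error(individual, local_optimum, factors):
    """Benchmark fitness: weighted distance to a pre-selected local optimum.

    Returns None for invalid individuals (any element below the
    corresponding element of the optimum), which the caller should
    discard or regenerate.
    """
    if any(x < o for x, o in zip(individual, local_optimum)):
        return None  # invalid individual, to be discarded/regenerated
    return sum((x - o) * 2 * f
               for x, o, f in zip(individual, local_optimum, factors))

D = 5
local_optimum = [random.randint(6, 10) for _ in range(D)]
factors = [random.uniform(0.5, 0.9) for _ in range(D)]

# An exact match of the optimum yields zero error:
print(synthetic_error(local_optimum, local_optimum, factors))  # 0.0
```

With this shape, DE minimizes the returned error, and the global minimum of 0 is reached exactly at the pre-selected optimum.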
Question
What are the advantages and disadvantages of performing numerical integration from acceleration to displacement in the time domain and frequency domain, respectively?
When you deal with high frequencies, time-domain integration such as the trapezoidal method may give incorrect results. One thing to remember in frequency-domain integration is that the waveform needs to be demeaned and zero-padded before the DFT to avoid wrap-around errors (caused by the cyclic convolution property of the inverse Fourier transform).
I suggest the following paper, which nicely covers the topic:
Brandt, A. and Brincker, R. (2014). “Integrating time signals in frequency domain – Comparison with time domain integration,” Measurement, 58: 511-519.
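To illustrate the frequency-domain route (a hedged numpy sketch of the general idea, not the method of the cited paper): integration corresponds to division by j*omega in the frequency domain, applied twice to go from acceleration to displacement, with the signal demeaned and the DC bin excluded.

```python
import numpy as np

fs = 1000.0                      # sampling rate, Hz (illustrative)
t = np.arange(0, 2.0, 1 / fs)
f0 = 5.0                         # test tone frequency
acc = -(2 * np.pi * f0) ** 2 * np.sin(2 * np.pi * f0 * t)  # second derivative of sin

def integrate_fd(x, fs, times=1):
    """Integrate a signal 'times' times via division by (j*omega)."""
    x = x - x.mean()                          # demean before the DFT
    X = np.fft.rfft(x)
    omega = 2 * np.pi * np.fft.rfftfreq(len(x), 1 / fs)
    H = np.zeros_like(X)
    H[1:] = X[1:] / (1j * omega[1:]) ** times  # skip the DC bin
    return np.fft.irfft(H, n=len(x))

disp = integrate_fd(acc, fs, times=2)          # recovers sin(2*pi*f0*t)
```

For a real measured record one would also zero-pad (as noted above) and possibly high-pass the result, since the low-frequency bins are amplified by the 1/omega^2 factor.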
Question
Hi,
I am building up my understanding of Digital Signal Processing by analyzing the frequency response of lowpass FIR filter designs. I can find the coefficients of a given filter and analyze their frequency response; instead of that process, I would now like to design a lowpass FIR filter myself. What steps should I follow?
-Abhinna Biswal
Dear sir,
I am quite confused about your question. Would you like to design it in MATLAB or by hand mathematically? I know FIR filters very well and would love to help you. Please let me know what your actual problem is.
Regards
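If it helps, the classic windowed-sinc recipe gives a first lowpass FIR design (a minimal numpy sketch; the sampling rate, cutoff and tap count below are arbitrary illustrative values, not from the question): sample the ideal lowpass impulse response, taper it with a window, and normalize for unity DC gain.

```python
import numpy as np

fs = 1000.0          # sampling rate (assumed for illustration)
fc = 100.0           # cutoff frequency
numtaps = 51         # odd length -> symmetric, linear-phase filter

n = np.arange(numtaps) - (numtaps - 1) / 2
h = 2 * fc / fs * np.sinc(2 * fc / fs * n)   # ideal lowpass impulse response
h *= np.hamming(numtaps)                     # taper to reduce sidelobes
h /= h.sum()                                 # unity gain at DC

# Inspect the magnitude response on a fine grid:
H = np.fft.rfft(h, 4096)
freqs = np.fft.rfftfreq(4096, 1 / fs)
```

The choice of window sets the stopband attenuation, and the number of taps sets the transition width; MATLAB's fir1/firpm automate the same steps.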
Question
I am new to FPGA. I want to use FPGA kc705 to generate a 150MHz sinusoidal wave through DAC (DAC3283) of FMC150.
For the FMC150 to generate the wave, does it only require the 8-bit IQ data pair, clock, and frame signals stated in the data sheet? Or does it need other control signals to work?
You will need to use the Board Support Package of the FMC 150 for the KC705 in order to integrate your HDL design (sinewave generator through e.g., a DDS IP core of Xilinx). The exact signaling to interface your design with the DAC device shall appear in one of the (hopefully) provided reference designs of the FMC150.
Question
I am transmitting my RF signal through the atmosphere over a range of 10 km, but it seems the range is limited by the environmental noise floor. Is there any way, through signal processing, to mitigate the effect of the environmental noise floor?
Dear Arbit,
You can reduce the noise, as the colleagues said, by using a low-noise RF amplifier. You can also limit the reception bandwidth by using an RF band-pass filter.
In addition, in the detection stage you can use matched filters or correlators, which maximize the signal-to-noise ratio.
The most effective technique to detect a signal embedded in noise is a spread-spectrum transmission system, in which the bits to be transmitted are chopped into much higher-rate chips. In this technique, power is traded for increased bandwidth.
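The matched-filter gain is easy to demonstrate (a minimal numpy sketch with a made-up chip sequence, not a full spread-spectrum system): correlating the received waveform with the known template concentrates the signal energy into a single peak that stands well above the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 127                                   # template length (processing gain ~ N)
template = rng.choice([-1.0, 1.0], N)     # known chip-like sequence

rx = rng.normal(scale=1.0, size=1000)     # noise at 0 dB per-sample SNR
delay = 400
rx[delay:delay + N] += template           # signal buried at 'delay'

# Matched filtering = correlation with the known template
mf = np.correlate(rx, template, mode="valid")
est_delay = int(np.argmax(mf))            # peak locates the signal
```

At the correct lag the 127 chips add coherently (peak near 127) while the noise adds only incoherently (standard deviation near sqrt(127)), which is the processing-gain trade mentioned above.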
Best wishes
Question
I am starting my project in WSN by implementing DSP algorithms on actual WSN motes. How can I check whether the hardware is capable of running a simple algorithm such as convolution?
Hello;
Based on these characteristics of the component:
IRIS 2.4GHz
• 2.4 GHz IEEE 802.15.4, Tiny Wireless Measurement System
• Up to Three Times Improved Radio Range and Twice the Program Memory Over Previous MICA Motes
• Designed Specifically for Deeply Embedded Sensor Networks
• 250 kbps, High Data Rate Radio
• Wireless Communications with Every Node as Router Capability
• It depends on the DSP algorithm you would like to integrate, i.e. whether it can be loaded into memory or not.
Question
For example, if I have a BER of 8 x 10^-12 and I want to change it to 8 x 10^-4, I need to add some noise to the signal, for instance using the awgn function in MATLAB. But I want to add/subtract noise in the vertical histogram of the eye diagram for a two-level system. This will make the signal noisy so that the required BER can be achieved. How can I do this operation?
This is an interesting question. To answer it, one has to return to the definition of the bit error rate as the area under the pdf of the noisy logic-one signal from minus infinity to the decision threshold at zero, i.e. the overlap of the Gaussian centred on the logic-one level with the decision region of logic zero.
So,
Pe = (1/(sigma*sqrt(2*pi))) * Integral from -infinity to 0 of exp( -(x - x1)^2 / (2*sigma^2) ) dx = (1/2)*erfc( x1/(sigma*sqrt(2)) ),
where sigma is the standard deviation of the noise and x1 is the logic-one level. Knowing Pe, one can solve for x1/(sigma*sqrt(2)), i.e. obtain the allowed deviation of the signal from its average value in terms of the noise. Drawn as a graph, this is a Gaussian curve peaking at x = x1 and decreasing on both sides of x1; its overlap area with the negative half of the axis represents the probability of error.
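Concretely, one can invert that relation to find the noise standard deviation for a target BER and then add Gaussian noise at that level (a minimal Python sketch assuming unit logic levels of ±1 and a zero decision threshold; it replaces MATLAB's awgn with an explicit noise draw):

```python
import math
import numpy as np

def sigma_for_ber(target_ber, x1=1.0):
    """Invert Pe = 0.5*erfc(x1/(sigma*sqrt(2))) for sigma by bisection."""
    lo, hi = 1e-6, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(x1 / (mid * math.sqrt(2))) < target_ber:
            lo = mid          # too little noise -> BER too low
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma = sigma_for_ber(8e-4)
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 200000)
rx = (2.0 * bits - 1.0) + rng.normal(scale=sigma, size=bits.size)
ber = np.mean((rx > 0) != (bits == 1))   # decide with a zero threshold
```

The measured ber should fluctuate around the 8 x 10^-4 target; the same sigma then sets the width of the Gaussian lobes in the vertical eye histogram.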
Best wishes
Question
I would like to validate method for converting PSD to time series with existing code or examples.
As mentioned in the "TimesseriesFromPSD" example in the MATLAB Help, you cannot regenerate a time-domain signal merely from a PSD (note that a PSD is something different from the Fourier transform of a time series), because an important part of the signal's information has been discarded: the phase of each frequency component is not available in a PSD. So, if you want to go back to the time domain from a PSD, there are infinitely many answers. "TimesseriesFromPSD" assigns a random phase to each frequency (with some statistical considerations); thus, each time you run it, a different time series is generated.
Finally, I should say that, unfortunately, a single PSD does not carry enough information to regenerate its time series. Theoretically, it is impossible!
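The random-phase synthesis can be sketched like this (a minimal numpy sketch; the one-sided PSD scaling convention used here is an assumption, and MATLAB's routine differs in details):

```python
import numpy as np

def timeseries_from_psd(psd_onesided, fs, rng=None):
    """Synthesize one realization of a time series from a one-sided PSD.

    Amplitudes are fixed by the PSD; phases are drawn at random, so each
    call returns a different (equally valid) signal.
    """
    if rng is None:
        rng = np.random.default_rng()
    nfft = 2 * (len(psd_onesided) - 1)
    df = fs / nfft
    # |X[k]|^2 = PSD[k] * df * nfft^2 / 2  (one-sided convention, interior bins)
    amp = np.sqrt(psd_onesided * df / 2) * nfft
    phase = rng.uniform(0, 2 * np.pi, len(psd_onesided))
    X = amp * np.exp(1j * phase)
    X[0] = X[0].real            # DC and Nyquist bins must be real
    X[-1] = X[-1].real
    return np.fft.irfft(X, n=nfft)
```

Because only the phases are random, every realization has (approximately) the prescribed power per frequency band, while the waveforms themselves differ from run to run.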
Question
I'm interested in the reason for the increase of the main-lobe width and the decrease of the side-lobe amplitude when designing filters by the window method with non-rectangular windows, like Bartlett, Hamming, etc.
Dear Soheil,
As a signal becomes more spread in the time domain, it becomes more confined in the frequency domain. As an example, assume you have a sine wave windowed by a rectangular window: if the width of the window is infinite, the frequency-domain representation is an impulse with the amplitude of the sine wave. As the window shrinks, a main lobe and side lobes appear, spreading more and more as the window gets narrower. This is a consequence of the fact that any signal in the time domain can be thought of as a summation of sinusoidal waves with specific frequencies; that is, any time-domain signal can be converted to an equivalent frequency-domain representation as a combination of sinusoids. The transform is the Fourier transform.
To demonstrate the effect of the window width, assume we have a rectangular window of width T in the time domain, i.e. a single rectangular pulse of width T. The frequency-domain representation of the pulse is the well-known sinc shape, with a main lobe and many side lobes whose amplitude decreases as the frequency increases. The main lobe has an extension of 2/T and every side lobe has an extension of 1/T. The main-lobe amplitude equals the amplitude of the pulse A times its duration T.
Consequently, as T increases, the width of the main lobe decreases and its amplitude increases, which means the signal becomes more confined in the frequency domain. The opposite also holds.
There is also an effect of the shape of the pulse on the distribution of energy among its frequency components. The rectangular pulse has strong side lobes because the abrupt time changes at its edges increase its high-frequency content. If one makes the pulse non-rectangular, i.e. tapers its edges, the side lobes decrease, since there are no longer abrupt transitions in time and the high-frequency components are suppressed.
The smoother the boundary of the pulse, the fewer the high-frequency components and the more the side lobes are attenuated.
Then in summary there are two effects to suppress the side lobes:
Pulse shaping by using smoothing windows
and extending the window width in the time domain.
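Both effects can be checked numerically (a minimal numpy sketch): compare the spectrum of a rectangular window with a Hamming window of the same length; the Hamming window trades a wider main lobe for much lower side lobes.

```python
import numpy as np

N = 64
nfft = 8192
rect = np.ones(N)
hamm = np.hamming(N)

def peak_sidelobe_db(w):
    """Peak sidelobe level relative to the mainlobe, in dB."""
    spec = np.abs(np.fft.rfft(w, nfft))
    spec /= spec[0]                 # normalize the mainlobe peak to 1
    i = 1
    while spec[i + 1] < spec[i]:    # walk down to the first null
        i += 1
    return 20 * np.log10(spec[i:].max())

rect_psl = peak_sidelobe_db(rect)   # about -13 dB
hamm_psl = peak_sidelobe_db(hamm)   # roughly -40 dB or lower
```

The rectangular window's first sidelobe sits near -13 dB regardless of N, while the tapered Hamming window pushes the sidelobes down by roughly 30 dB at the cost of a main lobe about twice as wide.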
Best wishes
Question
Dear all,
I used an oscilloscope to measure voltages and got data from two channels; each channel has time data and value data. Now, I want to calculate the magnitude and angle ( A ∠±θ ) for each channel, and the magnitude and phase shift between the two voltage channels as ( A ∠±θ ), using MATLAB.
DATA file in attachment.
Could you help me to do that?
Question: What is the formula for the phase of a sine wave? There is no phase of a single sine wave; a phase can only be defined between two sine waves. Two sine waves are mutually shifted in phase if the time points of their zero crossings do not coincide. See http://www.sengpielaudio.com/calculator-timedelayphase.htm
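That said, for two-channel data the amplitude of each channel and the relative phase at the dominant frequency can be estimated as follows (a minimal numpy sketch of the idea rather than MATLAB code; it assumes both channels share a single dominant tone and a common time base, with made-up test signals):

```python
import numpy as np

fs = 10_000.0
t = np.arange(0, 1.0, 1 / fs)
ch1 = 2.0 * np.sin(2 * np.pi * 50 * t)                   # reference channel
ch2 = 1.5 * np.sin(2 * np.pi * 50 * t - np.deg2rad(30))  # lags by 30 degrees

X1 = np.fft.rfft(ch1)
X2 = np.fft.rfft(ch2)
k = np.argmax(np.abs(X1))               # bin of the dominant tone

amp1 = 2 * np.abs(X1[k]) / len(ch1)     # peak amplitude of channel 1
amp2 = 2 * np.abs(X2[k]) / len(ch2)
phase_shift = np.rad2deg(np.angle(X2[k]) - np.angle(X1[k]))  # about -30
```

The same bin-wise division of the two spectra works in MATLAB with fft/angle/abs; for real oscilloscope records one should window the data and make sure both channels cover the same time span.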
Question
I am using a FTDI's IC FT4222H, a programmable one which was released few months back. It's used for interfacing I2C/SPI based slave or master devices and acquire the signals or data. I am using the Evaluation module of the same IC to act as a I2C Master and communicate with a EEPROM 24LCB16 for reading and writing the data from and to it respectively.
I am using LabVIEW to communicate with the FTDI IC through USB. I am not using the Virtual COM port whereas I am importing the FT4222H .dll into vi and executing the program such a way.
I find that the device is listed properly in the VI; it is recognized as an FT4222H. The mode selected is Mode 3, where I2C Master/Slave and SPI Master/Slave are enabled and the GPIOs are disabled, so it is listed as FT4222H.
Even then, the device opens and FT_Status does not report an error until the device is initialized. I have configured the device as I2C Master, and in the next step I read data from the EEPROM, but the following errors are listed:
1. Initialize- 1000( FT_STATUS)
3. Un initialize- 3
4. Close status-1.
The DWORD are listed under the Appendix of datasheet.
If someone can reason out a solution for this, kindly help me move ahead. It's almost done; only the write and read operations remain to be performed.
I have attached the Zip file of VI and Sub-VIs that I am executing. If you find any errors in those please let me know.
Regards…
Question
I have to decide the area of research for my Master's and would like to know the current research trend in the field of DSP/ image processing.
3D / 4D Image reconstruction
Question
Could anyone give me any opinions on which network simulator is good for implementing orthogonal frequency-division multiplexing (OFDM) and simulation?
I have come across NS2, OMNET++, OPNET etc. I'm not comfortable with NS2. Can OMNET++ or OPNET be used?
Also, how can I interface a network simulator with MATLAB?
Go for OMNeT++.
Its learning curve is moderate.
Question
I am trying to implement FBMC in Simulink. I already have MATLAB code for FBMC, but when I implement it in Simulink it creates a problem, as a polyphase network block is not available in the Simulink library.
No, I have implemented FBMC using code in an m-file, but I am struggling to design it in Simulink, especially the polyphase filtering.
Question
I understand that the purpose of using an equalizer is to shorten the impulse response of a channel. In most examples I have seen so far, equalization is done in the Z-domain. Now, I have an ADSL channel response from 1 Hz to 1.1 MHz. How can I convert this frequency response into the corresponding Z-transform response? In short, how can I design a MATLAB equalizer for this kind of channel?
Interested
Question
I want to derive an equivalent FIR filter for a first order IIR filter. One way to achieve this conversion is calculating impulse response of the IIR filter and using it as the equivalent FIR filter. However, regarding my research, I need an analytical expression for this conversion.
I mean, there are 4 coefficients for a first order IIR filter and I want to calculate coefficients of a 10th order FIR filter, which is equivalent of the IIR filter, in terms of the 4 coefficients.
Any suggestions?
Thanks,
Dear Rafet Şişman! It is impossible to convert an IIR filter to an FIR filter exactly; we can only approximate the IIR filter with an FIR model.
The simplest way to do this analytically is to take the inverse Fourier transform of the IIR filter transfer function F(f):
FIR_coefficients[N] = (1/f0) \int_{-f0/2}^{f0/2} F(f) exp[2 pi i N (f/f0)] df,
where f0 is the sampling rate and f is the frequency.
So the approximated IIR filter response now reads
output_of_approximated_IIR[k] = \sum_j FIR_coefficients[j] X[k-j],
where X[k] is the input signal. However, the integrals of a rational function multiplied by an exponential have, in the general case, no elementary closed form.
To my mind, for practice, the solutions proposed by Prof. Luiz Alberto Luz de Almeida and Prof. Fernando Soares Schlindwein are the best ways to convert an IIR filter to an FIR filter.
Sincerely,
Daniil D. Stupin
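For the first-order case asked about there is a simple closed form (a minimal numpy sketch of the truncated-impulse-response approach discussed above, with made-up coefficients). For H(z) = (b0 + b1 z^-1)/(a0 + a1 z^-1) with a0 normalized to 1, the impulse response is h[0] = b0 and h[n] = (b1 - b0*a1)*(-a1)^(n-1) for n >= 1, so the first 11 taps give a 10th-order FIR approximation directly in terms of the four coefficients:

```python
import numpy as np

def fir_from_first_order_iir(b0, b1, a1, numtaps=11):
    """Closed-form truncated impulse response of (b0 + b1 z^-1)/(1 + a1 z^-1)."""
    h = np.empty(numtaps)
    h[0] = b0
    n = np.arange(1, numtaps)
    h[1:] = (b1 - b0 * a1) * (-a1) ** (n - 1)
    return h

b0, b1, a1 = 0.5, 0.3, -0.8          # example coefficients (|a1| < 1 for stability)
h = fir_from_first_order_iir(b0, b1, a1)

# Cross-check against the IIR difference equation run on an impulse:
x = np.zeros(11); x[0] = 1.0
y = np.zeros(11)
for k in range(11):
    y[k] = b0 * x[k] + (b1 * x[k - 1] if k else 0.0) - (a1 * y[k - 1] if k else 0.0)
```

The truncation error decays like |a1|^numtaps, so the quality of the 10th-order approximation depends directly on how close the pole is to the unit circle.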
Question
Hi, we have a digitizer and we want to calculate its response (pole-zero, or amplitude-phase, of the system).
Knowing the chip used in the digitizer, we know that it has 24-bit resolution and a peak-to-peak range of 5 V (±2.5 V). Can anyone suggest a way to obtain the response, for example by applying a step input (with a signal generator) and recording the outputs in counts?
Furthermore, how does the sampling rate affect the response? Should we calculate a separate response for each sampling rate?
Thank you,
Hello Hossein,
Yes, you are on the right track. And I further understand that you have sampled the input signal at 200SPS.
Well, may I invite your kind attention to your own answer a year ago, where you showed your various input signals via the 1.png graphics file.
I can see that you fed a 50 mV signal at 1 Hz with 10% duty cycle and sampled this pulse signal at rates of 50, 100 and 200 SPS.
Now that you have acquired the technique of sampling the signal, getting the data and analyzing it with tools like MATLAB, repeat the same 1 Hz, 10% duty-cycle square-wave input and check how your ADC responds, in the same way as when you fed it the sinc input.
And see what results you get. Hopefully you will get what you have been looking for for such a long time!
By the way, kindly accept my belated Ramadan greetings!
All the best!
-Prasanna
Question
Based on the formula Δf = 1/T = fs/N, it can be understood that in order to improve time or frequency resolution one has to either change the signal properties or adjust the window length. How, then, can it be claimed that a particular signal processing technique offers better resolution than another? For instance, it is well known that the Stockwell transform has better resolution than the Short-Time Fourier Transform, but how?
Linear (and also quadratic) transforms are subject to the uncertainty principle, which states that you cannot improve time resolution and frequency resolution at the same time. Thus, if a transform is reported to have, say, "better" time resolution than another transform, it will have worse frequency resolution, because the gain in one must be paid for by a loss in the other.
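The Δf = fs/N relation is easy to verify numerically (a minimal numpy sketch): two tones 1 Hz apart are resolved only when the analysis window is long enough that Δf < 1 Hz.

```python
import numpy as np

fs = 1000.0
f1, f2 = 50.0, 51.0                 # tones 1 Hz apart

def spectrum(duration):
    t = np.arange(0, duration, 1 / fs)
    x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    return np.abs(np.fft.rfft(x)), fs / len(t)   # magnitude, bin spacing df

spec_short, df_short = spectrum(0.256)   # df ~ 3.9 Hz: the tones merge
spec_long, df_long = spectrum(4.0)       # df = 0.25 Hz: the tones separate

# With the long window the tones land on distinct bins with a dip between:
k1, k2 = int(f1 / df_long), int(f2 / df_long)
```

No amount of zero-padding changes this: padding interpolates the spectrum but the resolving power is fixed by the observation time T.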
Question
I have no formal training in electrical engineering (where DSP is staple) but I do have a bachelor's degree in mathematics. I prefer a textbook with plenty of examples. Thanks very much in advance!
Question
A 30 MHz pulse-modulated IF signal with a width of around one microsecond is sampled at 200 MHz. We have a requirement to measure the IF pulse carrier frequency drift with a resolution of 1 kHz.
We explored a Costas-loop implementation in the FPGA but did not succeed, and we are looking for an alternative approach to receive-frequency drift measurement in the FPGA/digital domain.
Could anyone suggest an algorithm/concept for IF pulse frequency measurement in an FPGA?
M.Ashok
Dear Ashok,
Unless I am misunderstanding something, I wonder whether your requirement is related to the FPGA timing and triggering implementation needed to handle your measurement system correctly.
Thanks
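One common digital-domain alternative can be sketched off-line first (a minimal numpy sketch under simplifying assumptions: complex I/Q samples, a clean single-carrier pulse, and a software FFT rather than FPGA fabric): zero-pad the pulse so the FFT grid is finer than the required 1 kHz, then pick the spectral peak.

```python
import numpy as np

fs = 200e6                    # sample rate
f0 = 30.0003e6                # simulated carrier with a small drift
n = 200                       # 1 us pulse at 200 MSPS

t = np.arange(n) / fs
x = np.exp(2j * np.pi * f0 * t)         # complex (I/Q) pulse samples

nfft = 2 ** 18                          # zero-padding: grid ~ fs/nfft ~ 763 Hz
spec = np.abs(np.fft.fft(x, nfft))
freqs = np.fft.fftfreq(nfft, 1 / fs)
f_est = freqs[np.argmax(spec)]          # within half a grid step of f0
```

In an FPGA one would typically replace the huge FFT with a modest FFT plus interpolation of the peak, or with a phase-slope (delay-conjugate-multiply) estimator; the sketch only shows that the 1 kHz information is present in a single 1 us pulse when noise is low.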
Question
LTE and WiFi standards define transmit EVM but not EVM for the "user side". User-side EVM usually depends on receiver specs (noise floor, etc.), but what are the standard acceptable EVM values for OFDM after wireless transmission using blind equalization, or for spatial diversity or multiplexing?
Dear Usman,
It is so that the power amplifier in the transmitter has some nonlinearities, leading to an error vector magnitude in the QAM symbols. Ideally, the power amplifier must not introduce any distortion in the symbols.
So, where EVM is given in the standard, it is a form of design aid; in the end it is the delivered signal quality at the destination that is assessed, through the quality of service. The bit error rate is the most appropriate quality-of-service measure, as it directly gives the end quality of the data delivered at the receiver.
As I hinted above, all the effects mentioned in the first post contribute to the resultant EVM.
If all the effects are random, each with an equivalent error standard deviation sigma_i, then the overall variance is sigma_total^2 = sum over i of sigma_i^2, where i is a running index over all contributions.
Since sigma_total is known and the transmitter sigma is known, one can get the residual allowed error vector magnitude.
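For reference, RMS EVM is usually computed as the RMS error vector normalized by the RMS reference-symbol magnitude (a minimal numpy sketch with made-up QPSK symbols):

```python
import numpy as np

def evm_rms_percent(measured, reference):
    """RMS EVM in percent, normalized to the RMS reference magnitude."""
    err = measured - reference
    return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2) /
                           np.mean(np.abs(reference) ** 2))

ref = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])   # ideal QPSK symbols
meas = ref + 0.1                                      # fixed 0.1 error vector
evm = evm_rms_percent(meas, ref)                      # ~7.07 %
```

With this normalization, purely Gaussian impairments give EVM roughly equal to 1/sqrt(SNR), which is how the per-effect sigma budget above translates into a measurable number.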
Best wishes
Question
What kind of projects are out there in neural system analysis?
What are the takeaways?
I know it can be applied in many industries, commercially, and at home,
but I am a bit vague on, and cannot quite grasp, what the term stands for and what can be done within it.
I honestly thought neural systems were somewhat related to bioengineering.
Would you kindly give some explanation?
Having hands-on experience with data makes it easier to understand. Pick a language (MATLAB or Python), select an algorithm, and see how it works on the data; look for YouTube videos with simple examples.
Question
How can I use cooperative spectrum sensing to eliminate primary user emulation attacks in cognitive radio networks using energy detection?
Question
As a mathematician trying to understand the way the Signal-To-Noise ratio works in Digital Signal Processing, I have the following observation:
A signal is recorded, suppose I recorded a class lecture. When I insert this recording in audio-software which shows the recorded sound waves over time, I am able to determine the amplitude of the teacher's spoken voice and the amplitude of (static class) noise when the teacher is silent for some time. Suppose my recording indicates that the amplitude of the sound waves when my teacher speaks is at 50 dB and 20 dB when he is silent. For a signal-to-noise ratio I would instinctively divide 50 over 20, obtaining a ratio of 2.5. Or maybe more instinctively, the noise is 40% of the total incoming sound (noise-to-signal). Is my intuition failing me because the scale of sound (dBs) is not linear?
From one source I read that I could interpret determining the signal-to-noise ratio as [Teacher+Noise in dB]-[Noise in dB]=[Signal-to-Noise in dB], which would result in a 30 dB signal-to-noise ratio in the above mentioned example. Can anyone confirm if this is correct?
Signal to noise ratio (SNR) is usually expressed in dB, especially in audio and sound applications, because of the very large dynamic range of human hearing.
As a mathematician, you should ask yourself: If I have a physical quantity x with huge dynamic range, and then, in order to compress its dynamic range, I take the logarithm of x using y = log(x), what are the physical dimensions that I should ascribe to y? If x is power in Watts, for instance, then what units should you give to y = log(x)?
Take some time to answer that now....Look up some series expansions for log(x) for starters..
...What you should conclude is that it is WRONG to construct a new variable y=log(x) whenever x is NOT dimensionless. (The same is true, incidentally, for sin(x), exp(x), etc...)
Rather, what you should do is first make a new dimensionless quantity x'=x/x_ref, where x_ref is any reference value for x. In audio, x_ref is the minimum perceptible sound level. It is set by convention. Then both x' and y=log(x') are dimensionless quantities, and we have mapped dimensionless x' (a ratio) to dimensionless y.
This is roughly what dB is doing, when we take y = 10 log10(x/x_ref) to convert x into dB. We are simply converting one dimensionless ratio into another, but we always remind ourselves and the world that we have made this conversion by including the dimension units dB when reporting y.
For signal to noise ratio (SNR), we have a mean signal power x in Watts, and mean noise power n in Watts. And then:
1. Your sound meter will read sound pressure levels of X_SPL = 10 Log10(x/x_ref) dB, and N_SPL = 10 Log10(n/x_ref) dB.
2. Your signal to noise ratio in linear units is snr = x/n
3. Your signal to noise ratio in dB is SNR = 10 Log10(snr) = 10 Log10(x/n) = 10Log10(x/x_ref * x_ref/n) = 10Log10(x/x_ref)-10Log10(n/x_ref) = X_SPL-N_SPL dB
In your case, if the signal of interest X_SPL has a sound pressure level (SPL) of 50 dB (your teacher, as measured by your sound level meter), and the background noise level N_SPL is 20 dB (as measured by your sound level meter), then the SNR in dB is SNR = 50 dB - 20 dB = 30 dB -- much as you have it at the end of your question.
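The arithmetic can be sketched directly (a minimal Python example using the question's numbers; the reference power value is arbitrary because it cancels):

```python
import math

x_ref = 1e-12          # reference power (value immaterial; it cancels)
X_SPL = 50.0           # teacher's level in dB
N_SPL = 20.0           # background noise level in dB

# Convert the dB levels back to linear powers:
x = x_ref * 10 ** (X_SPL / 10)
n = x_ref * 10 ** (N_SPL / 10)

snr_linear = x / n                      # 1000.0
SNR_dB = 10 * math.log10(snr_linear)    # 30.0, equal to X_SPL - N_SPL
```

This makes the point explicit: dividing dB values (50/20) mixes logarithmic and linear reasoning, while subtracting them corresponds exactly to dividing the underlying linear powers.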
Question
Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing.
Kindly suggest some research articles on digital image processing using digital topology.
You can also visit my page and download my papers if you want.
Question
Hello everyone,
According to Steven W. Smith, in his book: The Scientist and Engineer's Guide to Digital Signal Processing, when designing a digital filter, "Good performance in the time domain results in poor performance in the frequency domain, and vice versa", and one needs to find a good trade-off for one's needs; and from what I understood, bad performance in the time domain means that the signal's waveform will be altered, meaning it will be somehow distorted after going through the filter. Additionally, the filter's step response is of the most importance when it comes to achieving good performance in the time domain. http://www.dspguide.com/CH14.PDF
So, I designed two filters in MATLAB that aim to rid a signal of electromagnetic interference (lowpass) and of the DC component (highpass). Thinking that linear-phase filters preserve the shape of the signal, because all frequency components are shifted equally, I went with the FIR option and used the function fircls1, which allows defining the ripple levels in the passband and stopband, the former being the concern for preserving the amplitude levels of the different frequency components. I thought I was getting the best possible "time-domain" response and that the waveform in the time domain was as intact as it could possibly be; moreover, I used an adequate number of points in the filter kernel to get the roll-off I desired. After all that, I came across the phrase quoted in the first paragraph; this time I paid more attention and decided to look at the step response of the filter, whereas before I was solely concerned with the frequency response, ripple, stopband attenuation, etc.
I am attaching the step responses of my lowpass filter (left) and highpass filter (right), and I would like to understand what I did wrong. I have looked at the filtered signal and it looks good. It is delayed considerably, and I know FIR filters require longer execution time, but I am working offline, not in real time, and I think I optimized the frequency response about as far as it goes. So if that phrase means what I think it means, the signal is not preserved and its waveform is somehow distorted, and I cannot have that for the intended application. Could someone please help me make sense of these seemingly contradictory understandings? Thank you very much!
Dear Colleagues,
This question is a good one.
Digital filters are classified into FIR and IIR. FIR filters have only zeros in the z-domain, and if they are symmetrical, meaning the filter coefficients have even symmetry around their median, the filter is linear-phase. A linear-phase filter preserves the integrity of the waveform, meaning that all frequency components of the waveform are delayed by the same amount. This is required to avoid phase distortion of signals in instrumentation and communications. So such filters give a linear phase response by default.
Infinite impulse response filters contain poles in addition to zeros in the z-domain, and they are normally designed to follow a standard filter function such as Butterworth, Chebyshev, Elliptic or Bessel.
All of these filter functions introduce nonlinear phase distortion except the maximally flat group-delay filter, the Bessel filter. So, if you seek to build IIR filters with the lowest phase distortion, you have to mimic the Bessel filter function.
Both filter types, the symmetrical FIR and the Bessel IIR, need relatively more hardware to achieve a given stop-band attenuation, with the IIR filter needing less hardware than the FIR.
The best way to test phase linearity is to determine the phase response Phi(f) of the frequency response H(f).
Likewise, the best way to see the effectiveness of the filter is to measure its amplitude response, the magnitude of H(f).
This is accomplished by feeding the filter a sine wave of constant amplitude and measuring the output amplitude and phase, repeated over the whole frequency range of the filter.
One piece of advice: signals in the time domain are abstract and hard to interpret directly.
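The linear-phase property is easy to verify numerically (a minimal numpy sketch using an arbitrary symmetric windowed-sinc lowpass, not the fircls1 design from the question): for a symmetric FIR filter of N taps, the group delay is constant at (N-1)/2 samples across the passband.

```python
import numpy as np

numtaps, fc = 41, 0.1            # fc is the cutoff as a fraction of the sample rate
n = np.arange(numtaps) - (numtaps - 1) / 2
h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(numtaps)   # symmetric kernel

nfft = 1024
H = np.fft.rfft(h, nfft)
w = 2 * np.pi * np.fft.rfftfreq(nfft)                 # rad/sample
phase = np.unwrap(np.angle(H))

# Group delay = -dPhi/dw, evaluated where |H| is not near zero (passband):
gd = -np.diff(phase) / np.diff(w)
passband = np.abs(H[:-1]) > 0.5 * np.abs(H).max()
```

The constant delay of 20 samples is exactly the considerable but shape-preserving delay described in the question: every frequency component arrives 20 samples late, so the waveform itself is not distorted.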
Best wishes
Question
I'v been doing an investigation about detection of a specific signal and I need to determine that how a received signal resembles the reference signal. A primary approach to this problem is done simply by taking cross correlation of signals, but I need to know about more accurate methods. References would be also appreciated.
Hi Alireza,
actually there are many ways to determine the similarity of signals (or the distance between them). Here some cues:
Using cross correlation (or the correlation coefficient as a normalized measure) is surely a widely applied method. As an alternative correlation approach you could consider rank-based correlation (Spearman's rho); cross correlation, for example, assumes a linear dependency, which might not hold.
Similarity of signals can also be assessed in the frequency domain; you could look up "coherence" to find more information.
Similarity can even be quantified using measures from information theory; have a look, e.g., at "entropy".
Apart from that, there are many distance metrics which can be applied similarly (the smaller the distance, the bigger the similarity). The most often used metric is the Euclidean distance (or a normalized version), but you'll find other metrics when searching for "distance measures".
Now, what's best suited strongly depends on your signals and on what you regard as similar (which can be problem-specific). Hope this helps.
Greetings, Sebastian
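Two of the suggestions above can be sketched side by side (a minimal numpy sketch; Pearson and Spearman are computed by hand to stay self-contained, with a made-up monotonic test signal):

```python
import numpy as np

def pearson(x, y):
    """Linear correlation coefficient."""
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    """Rank-based correlation: Pearson computed on the sample ranks."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return pearson(rx, ry)

x = np.linspace(0, 1, 100)
y = x ** 5                      # monotonic but strongly nonlinear mapping

r_pearson = pearson(x, y)       # noticeably below 1
r_spearman = spearman(x, y)     # 1: the ranks agree perfectly
```

This illustrates the caveat above: cross correlation's linearity assumption penalizes a perfectly monotonic relationship that a rank-based measure treats as a perfect match.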
Question
I'd like to clear up my confusion regarding MDPs. It is said that only processes that possess the Markov property can be defined as MDPs. But I think there are millions of others that don't possess it, e.g. in digital signal processing, control, etc. If so, is there an alternative to MDPs?
I think there is perhaps a fundamental flaw in your implicit question. Those systems that enjoy the Markov property (perhaps in weak form?) can give us nice benefits that we can exploit in computation. Systems that do not enjoy this property cannot take advantage of the benefits. It would be unrealistic to expect that they would. If you want to use Markov based methods your system has to play by the rules. One hope though -- what may appear non-Markovian may in some instances be cleverly transformed. In my field this might include the mover / stayer model. (Divide the groups into two -- one of which has the Markov property.) I also like semi-markov processes where you can make assumptions about holding time distributions. Much mathematical modeling is in the art of designing your representation of the problem to fit a convenient/tractable form.
Question
I want to begin work on natural language processing using the idea of compressive sensing, and the work is supposed to begin very soon. Can I get some ideas/relevant details that would be worth looking into? Are there any existing challenges that researchers have faced which I should look out for? I may be keen to use machine learning applications as well. Please help me out.
Question
I am searching for the derivation of the DFT of a discrete-time periodic chirp signal.
I derived a solution in a special case in the following paper.:
Reza Dianat and Farokh Marvasti, " Evaluation of a Class of Quadratic Gauss Sums by Sampling a Continuous Chirp Signal ", Sampling Theory in Signal and image Processing, vol. 15, 2016
Although there are general solutions, my solution is based on sampling theory and is more familiar to the signal-processing community.
Question
In ERP/P300 signal analysis, xDAWN is a well-known method for finding the spatial filter.
xDAWN Algorithm to Enhance Evoked Potentials: Application to Brain–Computer Interface
A Tutorial on EEG Signal Processing Techniques for Mental State Recognition in Brain-Computer Interfaces
But I still do not know xDAWN very well. So far, I know that the first column of D is 0 except at the positions of the stimulus onsets, but what about the other columns? Or do we not need to know the others in order to create the Toeplitz matrix?
Would you please give me an example? Where can I find the source code of xDAWN to let me study more about it?
This question is old but it is still unanswered, so here I go...
The Toeplitz matrix is just the beginning of the algorithm, and these are simply matrices in which all the values on the same diagonal share one constant value. In this case, as you have a 1 in the first column of the matrix in the k-th row, every other column will have a one on that same diagonal and zeros elsewhere. Think of it as the identity matrix "pushed" downwards.
The authors use it to represent the time-locked delay of the ERP, i.e. how many sample points you need to wait after the onset of the stimulus until you actually start seeing the ERP. Because you multiply this matrix with A, which is the ERP signal, the result is the A matrix, representing the ERP, also pushed downwards and delayed by k samples.
I like the idea of "evoked potential algebra" that the authors use in the paper.
Page 138 of Lotte's Brain Computer Interfaces 1 book offers a brief explanation of the algorithm that perhaps may help you as well (optimization).
You can find an implementation in OpenVibe's source code and also in the MNE-Python package.
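The structure of D can be made concrete with a toy example (a minimal numpy sketch; the sizes and onset positions are made up): column j of D is the stimulus-onset indicator shifted down by j samples, so D @ A superimposes the ERP template A at each onset.

```python
import numpy as np

n_samples, erp_len = 12, 3
onsets = [2, 7]                       # stimulus onset sample indices (made up)

# Build the Toeplitz-structured matrix D: "identity pushed down" per onset.
D = np.zeros((n_samples, erp_len))
for k in onsets:
    for j in range(erp_len):
        if k + j < n_samples:
            D[k + j, j] = 1.0

A = np.array([[1.0], [2.0], [3.0]])   # toy single-channel ERP template
x = D @ A                             # template inserted at samples 2 and 7
```

So you never need to specify the later columns by hand: each one is fully determined by the first column (the onset positions) and the diagonal-constant Toeplitz rule.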
Question
Dear experts
What is the difference between narrow-band and broad-band transducers?