Science topic

# Digital Signal Processing - Science topic

All aspects of DSP in Radars, Acoustics, Medical and Physical equipment.

Questions related to Digital Signal Processing

They say in the comments and documentation that it is implemented using a "Direct Form II Transposed" realization.

But for FIR filters (i.e. when a = 1) I suspect it uses an fft() to do the convolution in the frequency domain when it is faster to do so.

This would make sense because the procedure operates on a batch of data, not a sequential stream.

I don't suppose it really matters if the result is the same. I was just wondering if anybody knew for sure (for sizing and timing reasons).
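Whether filter() actually switches to an FFT path internally is something only the vendor's code can settle, but the equivalence the question relies on is easy to check numerically. A minimal Python/NumPy sketch (all names and sizes here are illustrative):

```python
import numpy as np

# FIR filtering (a = 1) is plain convolution, so a direct convolution and an
# FFT-based (zero-padded circular) convolution must agree to rounding error.
rng = np.random.default_rng(0)
b = rng.standard_normal(8)      # FIR coefficients
x = rng.standard_normal(100)    # input batch

# Direct convolution, truncated to len(x) like filter(b, 1, x) in MATLAB
y_direct = np.convolve(b, x)[:len(x)]

# FFT-based convolution: zero-pad to full length to avoid circular wrap-around
n = len(b) + len(x) - 1
y_fft = np.real(np.fft.ifft(np.fft.fft(b, n) * np.fft.fft(x, n)))[:len(x)]
```

So for sizing and timing purposes the two paths are interchangeable in value; only the run time differs.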

I am designing a digital system that should be minimum phase (invertible).

But, due to other constraints on the problem, the currently designed filter is non-minimum-phase. Now, I want to convert this system to the best minimum-phase approximation of it.
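One common way to obtain a minimum-phase counterpart of an FIR design is to reflect any zeros lying outside the unit circle to their conjugate-reciprocal positions, which leaves the magnitude response untouched. A Python sketch of that root-reflection idea (the function name is mine, and this is only one of several methods; cepstral techniques are another option):

```python
import numpy as np

def minimum_phase_fir(b):
    """Reflect zeros outside the unit circle to 1/conj(z); |H(w)| is preserved."""
    z = np.roots(b)
    gain = b[0]
    for i, zi in enumerate(z):
        if np.abs(zi) > 1.0:
            gain *= np.abs(zi)        # compensates the reflected zero's magnitude
            z[i] = 1.0 / np.conj(zi)
    return np.real(gain * np.poly(z))

b = np.array([1.0, -1.5, -1.0])       # zeros at 2 and -0.5 (one outside the circle)
b_min = minimum_phase_fir(b)
```

The reflected filter has the same magnitude response but all zeros inside the unit circle, so it is invertible by a stable inverse.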

I have a vector based on a signal for which I need to calculate the log-likelihood and maximize it using maximum likelihood estimation. Is there any way to do this in MATLAB using the built-in function mle()?

Hello!

I have collected muscle activity data with the Muscle Sensor v3 Kit. Now I would like to apply a machine learning algorithm to it. According to the datasheet for this sensor, it has already been amplified, rectified, and smoothed.

Would anyone be able to tell me if the data needs to be denoised before applying machine learning?

Here's how the data looks after plotting:

Digital wave (DW) algorithms can be used to perform time-domain electrical network simulation. The response of a circuit simulated by means of its DW equivalent can be used as an "exact" solution against which to compare the response from a nodal-analysis solver such as SPICE.

I have implemented a recursive least squares (RLS) algorithm. I am testing it using random discrete-time functions and it works well. However, when I try to estimate the parameters of a certain transfer function, it doesn't estimate them correctly unless I add noise to the system. Is that reasonable? What are the conditions under which an RLS algorithm works well?
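For reference, a minimal RLS identification sketch in Python/NumPy (names and the test signal are illustrative). One relevant condition is that reliable convergence requires a persistently exciting input, which is one reason random test signals tend to behave well:

```python
import numpy as np

def rls_identify(u, y, order, lam=0.99, delta=100.0):
    """Estimate FIR coefficients w so that y[n] ~ w . [u[n], ..., u[n-order+1]]."""
    w = np.zeros(order)
    P = delta * np.eye(order)
    for n in range(order, len(u)):
        phi = u[n - order + 1:n + 1][::-1]        # regressor, most recent first
        k = P @ phi / (lam + phi @ P @ phi)       # gain vector
        e = y[n] - w @ phi                        # a-priori error
        w = w + k * e
        P = (P - np.outer(k, phi @ P)) / lam
    return w

rng = np.random.default_rng(2)
u = rng.standard_normal(2000)                     # persistently exciting input
true_w = np.array([0.5, -0.3, 0.2])
y = np.convolve(u, true_w)[:len(u)]               # noiseless plant output
w_hat = rls_identify(u, y, order=3)
```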

Is there any MATLAB code available for Fast Transversal Filter (FTF)?

Hi,

I have been working on a conference solution based on a smartphone grid. Right now, I am stuck on Android smartphones: although I have managed to normalize loudness programmatically, when audio is played on different phones it comes out at different loudness levels due to hardware differences between the speakers.

I am keen to know if there is any way to make it sound the same, in terms of loudness, on different Android smartphones.

I am using a DRC (dynamic range compressor with gain) to normalize it to specific dB levels, but it still sounds different on different smartphones in terms of loudness.

Regards,

Khubaib

Most of us don't give a second thought to the choice of platform when looking at the implementation of DSP algorithms.

I am looking for MATLAB code for video watermarking using the Discrete Wavelet Transform (DWT).

Hi everyone, I am an engineering student and I have started to learn signal processing / signals-and-systems topics. I think one problem with self-learning is that you can't find someone (a teacher) to ask about the point where you are stuck.

I don't understand how the CT impulse function is transformed into the discrete-time impulse, whose amplitude is 1. How does this process work?

I have problems with converting CT to DT, with sampling, and with periodization. I have been watching several videos about it, but the actual mathematical operation of this "converter" is not clear to me.

What I mean is: what is the operator that converts

impulse(t) → impulse[n] with amplitude 1,

or x(t)·p(t) (an impulse train) → x[n] as a sequence?

x(t)·p(t) can be represented as the sum over n of x(nT)·impulse(t − nT).

But this is still not equal to the sequence x[n], because it contains scaled impulses with infinite amplitude, right?

To state my question again: how is x(t) transformed to x[n] mathematically? How does this sampling occur?
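One common formalization is that sampling is simply evaluation: x[n] := x(nT). The impulse train is an analysis device; the discrete sequence stores the finite sample values x(nT) (the impulse areas), not infinite amplitudes. A tiny numerical illustration (parameters are illustrative):

```python
import numpy as np

f0, fs = 5.0, 100.0            # 5 Hz sine sampled at 100 Hz
T = 1.0 / fs
n = np.arange(50)

# Sampling is evaluation at t = nT: x[n] := x(nT).  The impulse train
# x(t)*p(t) is a modeling device; the sequence stores the finite values
# x(nT), not the infinite amplitudes of the scaled impulses.
x = np.sin(2 * np.pi * f0 * n * T)
```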

I want to detect anomalies in streaming data. Using an FFT or DWT history, is it possible to detect anomalies on the fly (online)? It would help a lot if anybody could suggest some related resources.

Thanks.
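As a starting point, one simple online scheme is to compare each incoming window's spectral energy against a running history. A Python sketch (the window size, threshold k, and the injected test burst are all illustrative choices, not a recommendation from the literature):

```python
import numpy as np

def window_energy_anomaly(x, win=64, k=4.0):
    """Flag windows whose spectral energy deviates k sigma from the history."""
    flags, history = [], []
    for start in range(0, len(x) - win + 1, win):
        e = np.sum(np.abs(np.fft.rfft(x[start:start + win])) ** 2)
        if len(history) >= 8:                     # need some history first
            mu, sd = np.mean(history), np.std(history) + 1e-12
            flags.append(abs(e - mu) > k * sd)
        else:
            flags.append(False)
        history.append(e)
    return np.array(flags)

rng = np.random.default_rng(3)
x = rng.standard_normal(4096)
x[2048:2048 + 64] += 5 * np.sin(2 * np.pi * 0.2 * np.arange(64))  # injected burst
flags = window_energy_anomaly(x)
```

In a true streaming setting the history mean/std would be updated incrementally rather than recomputed per window.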

Researchers are now employing WiFi sensing and WiFi CSI to design and develop various devices such as activity detectors and heartbeat monitors. They train the data in a given environment using machine learning or deep learning technology.

My issue is that, because WiFi CSI is highly sensitive to the environment, how will it operate if the environment changes; for example, if I train it at home and then use it in my workplace? Is it necessary to train it in each environment before use?

I have seen some formulas to determine the error threshold in different image denoising algorithms. My question is: "Is there a systematic way to determine which error threshold works better for your data?" -- a way which is motivated by the statistics of your data.

Some formulas are as follows:

C: constant

s: sigma (standard deviation of Gaussian noise)

n: signal dimension

sqrt: square root function

ln: natural logarithm

threshold = C*s*sqrt(n)

threshold = sqrt(s^2*n)

threshold = s*sqrt(C*ln(n)) % this is the universal threshold introduced in VISUSHRINK
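The formulas above translate directly to code. A short sketch, including the common robust estimate of s from the finest-scale detail coefficients for the usual case where the noise level is unknown (the constant C and all numerical values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1024                        # signal dimension
s = 0.1                         # Gaussian noise standard deviation
C = 2.0                         # tuning constant (a data-dependent choice)

t1 = C * s * np.sqrt(n)                 # threshold = C*s*sqrt(n)
t2 = np.sqrt(s**2 * n)                  # threshold = sqrt(s^2*n)   (t1 with C = 1)
t3 = s * np.sqrt(2.0 * np.log(n))       # universal (VisuShrink) threshold, C = 2

# In practice s is unknown; a robust estimate from the finest-scale wavelet
# detail coefficients d is the median absolute deviation:
d = s * rng.standard_normal(n)          # stand-in for detail coefficients
s_hat = np.median(np.abs(d)) / 0.6745
```

Estimating s from the data in this way is one statistically motivated route to a systematic threshold choice.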

I want to generate a sinusoidally varying impedance transmission line for the following specifications:

min. impedance 50 ohm, max. impedance 170 ohm, period 10 mm.

I am curious about what happened to the Atomizer software by Buckheit, J. (http://statweb.stanford.edu/~wavelab/personnel/) and whether it is available somewhere.

Sadly, I found only two dodgy sites that require a login to download the MATLAB code. Does someone have any information on where to get it?

Alternatively, if there are other toolkits that have implemented this code please let me know, it does not have to be MATLAB, any language is fine for me :).

Thank you.

*Besides positioning solutions, Global Navigation Satellite System (GNSS) receivers can provide a reference clock signal known as Pulse-per-Second (PPS or 1-PPS), a TTL electrical signal used in several applications for synchronization purposes.*

- Is the PPS physically generated through a digitally-controlled oscillator (or line driver) whose offset is periodically re-initialized by the estimated clock bias (retrieved by means of PVT algorithms)?
- Are there any specific filters/estimators devoted to a fine PPS generation and control?
- Does some colleague know any reference providing technical details on this aspect?

While reading a research paper, I found that to detect abnormality in a signal the authors used a sliding window, and in each window they divided mean^2 by the variance. After searching the internet I found a term called the Fano factor (variance/mean), but that is a related, inverted ratio rather than the same quantity. Could anyone please give an intuitive idea behind the quantity mean^2/variance?
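One reading: mean^2/variance is the inverse of the squared coefficient of variation, so it is large where the window is steady relative to its level and small where fluctuation dominates. A sliding-window sketch (window and hop sizes are illustrative):

```python
import numpy as np

def mean_sq_over_var(x, win=128, hop=64):
    """Sliding-window mean^2 / variance: the inverse squared coefficient of
    variation.  High = steady relative to its level, low = fluctuation-dominated."""
    out = []
    for start in range(0, len(x) - win + 1, hop):
        w = x[start:start + win]
        out.append(np.mean(w) ** 2 / (np.var(w) + 1e-12))
    return np.array(out)

rng = np.random.default_rng(5)
steady = 10.0 + 0.1 * rng.standard_normal(1000)   # large ratio expected
noisy = 10.0 + 5.0 * rng.standard_normal(1000)    # small ratio expected
```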

I need a digital image processing-based research topic for my Master's thesis; it must not be based on machine learning.

Consider a digital control loop which consists of a controller C and Kalman filter K, which is used to fuse and/or filter multiple sensor inputs from the plant and feed the filtered output to the controller. The prediction (or also called time-update) step of the KF, and the analysis and tuning of the control loop utilize a discretized state-space model of the plant, defined as

x_{k+1} = F * x_k + G * u_k,

y_k = H * x_k

where F is a transition matrix, G is the input gain matrix and H the measurement output matrix. For now I will ignore the process and measurement noise variables.

According to many textbooks on digital control and state estimation (for example "Digital Control of Dynamic Systems" by Franklin et al.), u_k = u(t_k), x_k = x(t_k) and y_k = y(t_k) are the control input, state and measurement output of the plant, which seem to be available at the same point in time, t_k. This would mean that the output of u_k from the DAC and the sampling of y_k happen at the same moment in time, t_k. However, this does not seem to hold for some classical implementations. Consider the typical pseudocode of a control loop below:

    for iteration k = 1:inf
        sample y_k from ADC
        measurement update of KF: x_k
        compute u_k
        output u_k through DAC
        time update of KF: x_{k+1}
        wait for next sampling moment t_{k+1}
    end for

Ignoring the time durations of the DAC and ADC processes, the description above will introduce error in the prediction step of the KF, because it is assumed that the value u_k is actuated at the same moment of time that y_k is sampled, t_k. However, due to the time delay introduced by the computations of the KF update step and the controller, this is not the case. If we define s_k to be the time when the value u_k is actuated, then clearly t_{k+1} - s_k < T, where T is the sampling period. It is clear that the prediction step no longer computes the predicted state correctly, because it either a) uses the old actuation value u_{k-1} or b) uses the newly actuated u_k, and in neither case is the time between actuation and sampling equal to the sampling period assumed in the model.

This leads me to believe that the control value u_k should be actuated at time t_{k+1}, to keep consistency with the sampling period and the prediction model.

Also consider the case when the KF's prediction and update steps are executed before the controller during iteration k. Then the prediction step clearly makes use of u_{k-1} to compute a time update x_{k+1} of the state. This also seems to contradict the original definitions.

So with all these assumptions laid forward, I would like to know what are the correct sampling and actuation times, and why such ambiguity exists in modern literature about such hybrid systems.

NOTE: Some of you may say that the definition holds for small sampling periods and when the computations are fast. However I consider the general case where sampling periods may be very large due to the computations involved in the control loop.

I have designed an FIR filter which discards high-frequency noise. I have tested the functionality through simulation (MATLAB). Now I wish to quantify how good my system is and obtain results which I can compare with the existing literature. Can someone describe a few parameters (preferably with mathematical definitions) on the basis of which I can do this?
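Two common figures of merit that can be read straight off the frequency response are passband ripple and minimum stopband attenuation. A Python sketch with an illustrative stand-in filter and assumed band edges (replace b and the edges with your own design):

```python
import numpy as np
from scipy.signal import firwin, freqz

# Hypothetical stand-in design: a 63-tap Hamming-window lowpass (use your b here)
b = firwin(63, 0.3)

w, h = freqz(b, worN=4096)
mag_db = 20 * np.log10(np.abs(h) + 1e-12)

passband = w < 0.25 * np.pi          # assumed passband edge
stopband = w > 0.40 * np.pi          # assumed stopband edge

ripple_db = mag_db[passband].max() - mag_db[passband].min()  # passband ripple
atten_db = -mag_db[stopband].max()                           # min stopband attenuation
```

Other quantities often reported alongside these are the transition bandwidth, group delay, and (for fixed-point implementations) coefficient quantization noise.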

I am dealing with vibration signals acquired from different systems. They are mostly non-stationary and in some cases cyclostationary. What are the least expensive methods for removing noise from these signals? They can be parametric or non-parametric.

Dear Community, I want to load my dataset from the PhysioNet Sleep-EDF database and separate the list of signals from the list of labels so that I can apply feature extraction. I used MNE-Python, but it only gives the opportunity to create epochs for one subject at a time. Any help, please?

Hello All

I'm trying to synchronize an IMU sensor with a motion capture system. Is this possible? If so, how?

thanks for answering

Alireza

I have a data set of 10 subjects, each of whom repeated a task five times.

This data set includes two simultaneously measured parameters. Using an MLP network, I intend to create a kind of regression between the two parameters.

Should I feed each subject's data to a separate network and average the RMSE over all subjects, or can all ten subjects be entered into one network for the regression?

How should I determine a threshold on received signal power (underwater acoustic signals) so that the signal can be decoded correctly? I understand that the threshold may vary from device to device, but I want a device-independent value for research purposes.

Regards

I'm curious to know whether one can find the force of a collision by analyzing the audio, or by extracting features from the audio signal that estimate the force.

Recently I have tried to implement the NLM denoising filter [1]. When I use this algorithm I cannot reproduce the results in the paper. Furthermore, I cannot reproduce the results presented in [2]; the accuracy difference was up to 2 dB.

Note: for the similarity measure I used a 7x7 patch without a Gaussian kernel over the patch.

The main problem is that this kernel was introduced in the theoretical part of [1] but was never given a numerical value (or I missed it completely in this or other papers where the NLM algorithm is discussed).

I tried a kernel sigma of 0.65. With that, my results reach approximately the same accuracy as [2] (but still not exactly the same: from 0 up to 0.3 dB off, depending on the noise level).

So, can somebody tell me where my implementation goes wrong?

[1] Antoni Buades, Jean-Michel Morel, A non-local algorithm for image denoising.

[2] Ganchao Liu, Hua Zhong, and Licheng Jiao, Comparing Noisy Patches for Image Denoising: A Double Noise Similarity Model

I want to do texture segmentation. I have used 2-D wavelet decomposition and then calculated energy as the feature, computing a feature vector for each pixel of a multi-textured image. Given these feature vectors, how can I achieve segmentation of the multi-texture image?
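A common next step is to cluster the per-pixel feature vectors (for example with k-means) and use the cluster index as the segment label. A self-contained Python sketch with synthetic stand-in features (the feature values are fabricated for illustration):

```python
import numpy as np

def kmeans_segment(features, k, iters=50, seed=0):
    """Cluster per-pixel feature vectors into k texture labels (plain k-means,
    farthest-point initialization for robustness)."""
    rng = np.random.default_rng(seed)
    centers = [features[rng.integers(len(features))]]
    for _ in range(k - 1):                       # pick far-apart initial centers
        d = np.min(np.linalg.norm(features[:, None] - np.array(centers)[None], axis=2), axis=1)
        centers.append(features[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):                       # Lloyd iterations
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = np.argmin(d, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Fabricated stand-in for per-pixel wavelet-energy features of a 2-texture image
rng = np.random.default_rng(9)
f_a = rng.normal([1.0, 0.1], 0.05, size=(500, 2))
f_b = rng.normal([0.1, 1.0], 0.05, size=(500, 2))
features = np.vstack([f_a, f_b])
labels = kmeans_segment(features, k=2)
```

Reshaping `labels` back to the image dimensions gives the segmentation map; spatial smoothing of the label image is a common post-processing step.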

Hi,

I am trying to generate an eye diagram for a particular signal along with a defined eye mask, but I cannot find any reference for how to integrate an eye mask with the MATLAB eye-diagram object. Does anyone have any information?

The first figure is the MATLAB-generated eye diagram, to which I would like to add an eye mask so that it looks like the second figure.

I need an accurate, real-time algorithm to register optical and SAR images. Can anyone help?

Hello everybody,

I have observed somewhat confusing behavior in my system response (or maybe I am missing something).

I have a transfer function in the s-domain that I converted to the z-domain using MATLAB with a 1 kHz sampling frequency at the time of conversion. When I embed this discrete version of the transfer function in my system, which is also sampling at the same frequency of 1 kHz, the system works as expected (i.e., the step response is the same as that of the s-domain analogue controller).

But if I increase the sampling frequency of my system while using the **same** discrete transfer function that I converted from the s-domain to the z-domain with the same conversion sampling frequency of 1 kHz, the step response gets even faster. My question is: why does the discrete system respond faster than the analogue one, even though the transfer functions of the analogue controller and the discrete controller are the same?

As I understand it, the step response of a transfer function should remain the same whether the function is expressed in the s-domain or the z-domain; shouldn't the responses be identical?

Does this mean that digital controllers can speed up the response of the same transfer function simply by changing the sampling frequency of the system?

**It is important not to confuse the sampling frequency of my microcontroller, at which it collects samples from the ADC, with the sampling frequency that I used as a parameter to convert the s-domain transfer function to the z-domain transfer function.**

I thank you all for your time.

Regards,

Iftikhar Abid

I am working with a synthetic aperture radar system using an FMCW signal, which transmits and receives continuously. The received signals are dechirped and are real-valued (type double, not complex). I want to separate the received signal of each pulse and prepare it for range and cross-range compression.

In some instances, I have seen the Hilbert transform applied to the signals to generate an analytic complex signal, but I don't know the main reason for this, and in many cases it doesn't work appropriately!
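For context, the usual motivation is that the analytic signal removes negative frequencies, giving access to a one-sided spectrum, an envelope, and an instantaneous frequency. A minimal illustration on a synthetic beat tone (all parameters are illustrative):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.cos(2 * np.pi * 50 * t)          # stand-in for a real dechirped beat signal

xa = hilbert(x)                          # analytic signal x + j*H{x}
envelope = np.abs(xa)                    # instantaneous amplitude
inst_freq = np.diff(np.unwrap(np.angle(xa))) * fs / (2 * np.pi)
```

Note that the FFT-based Hilbert transform has edge artifacts on short or discontinuous segments, which may be why it sometimes "doesn't work appropriately" on individual pulses.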

I attached part of the received and transmitted signals.

I appreciate your comments in advance.

In Python, which is the best way to extract the pitch of speech signals?

I extracted pitches via "piptrack" in "librosa" and "PitchDetection" in "upitch", but I'm not sure which of these is the most accurate.

Is there any simple, real-time (or at least semi-real-time) alternative?
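As a simple, dependency-light baseline, an autocorrelation pitch tracker can be written in a few lines of NumPy (all parameters are illustrative; this is not a claim about which library is most accurate):

```python
import numpy as np

def autocorr_pitch(x, fs, fmin=50.0, fmax=500.0):
    """Pitch = fs / (lag of the autocorrelation peak within [fs/fmax, fs/fmin])."""
    x = x - np.mean(x)
    r = np.correlate(x, x, mode='full')[len(x) - 1:]   # autocorrelation, lags >= 0
    lo, hi = int(fs / fmax), int(fs / fmin)            # plausible pitch-lag range
    lag = lo + np.argmax(r[lo:hi])
    return fs / lag

fs = 16000.0
t = np.arange(0, 0.05, 1 / fs)
x = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)  # 220 Hz "voice"
```

Run per frame, this is cheap enough for semi-real-time use; resolution is limited to integer lags unless the peak is interpolated.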

Hi,

I have to simulate the performance of a multi-cell massive MIMO system for both the conventional case and pilot reuse 3. Does someone have a GitHub link? I would be really thankful!

Kind regards,

The radar system that I'm working with contains a linear FMCW S-band (2.26-2.59 GHz) signal with a bandwidth of 330MHz and a pulse duration of 20ms. Also, the received signal is dechirped.

Thanks for your comments and suggestions in advance.

Let's say I have 10 randomly generated bits (0s and 1s) in MATLAB. From my other parameter calculations, I have about 5 samples per bit (each bit is replicated 5 times), so my total data vector will be 1 x 50 in length. After this I perform BPSK modulation and multiply the baseband data with a carrier wave to form a passband data signal. I then add AWGN and multiply the signal with the carrier again (from passband back to baseband).

After the above process is complete, I have to make the decision to recover the 0s and 1s and compare them with the initially generated bits. How should I proceed?

Does the code below make sense, or am I missing something?

    data = randi([0, 1], 1, 10);   % 10 random bits (note the argument order of randi)

...

(steps as explained above)

    %---- bit retrieval ----
    variable = zeros(1, length(data));        % preallocate instead of growing
    for i = 1:length(data)
        s = 0;                                % avoid shadowing the built-in sum()
        for j = (i*5 - 4):(i*5)               % the 5 samples belonging to bit i
            s = s + final(j);
        end
        variable(i) = s > A;                  % decide 1 if the summed samples exceed A
    end
    %---- code ends ----

here,

A is the amplitude of the carrier wave, used as the decision threshold

final is the vector on which the decision is performed to check whether each bit is 1 or 0.

I want to add a Simulink model of a PMU (phasor measurement unit) to my simulation. Where can I get a PMU Simulink model, or how can I construct one in MATLAB/Simulink?

Hello all! I have met a problem which has confused me for days.

Assume that I input a 100x100 mask into the SLM and project it onto the sample, then use a CCD camera to collect the image. Usually, the size of the image from the CCD (e.g., 630x630) is bigger than that of the simulation (100x100).

Therefore, how should I process the captured data so that it is consistent with the size of the original mask? I need to reconstruct the image with some algorithms, and I find that direct downsampling is not effective. Thanks in advance!

I am working on transmitting ECG signals over wireless body area networks. Following the IEEE 802.15.4 standard, I am using a ZigBee transceiver at 2.4 GHz. A complex baseband equivalent channel model is used.

The steps are as follows:

1. Signal compression
2. Quantization
3. Coding
4. OQPSK modulation using the MATLAB function (**oqpskmod**)
5. Fading channel plus AWGN
6. Equalization
7. OQPSK demodulation using the MATLAB function (**oqpskdemod**)
8. Decoding and dequantization

* According to the IEEE 802.15.4 standard, a pulse-shaping step is performed in the transmitter after the OQPSK modulation step.

I didn't perform this pulse-shaping process, and I obtained reasonable results. Is it necessary to perform this step?

If yes, how should the receiver be modified?

How can I reduce the offset error in a P controller?

There are two signals; consider them two vectors over equal time.

The lengths of the two signals are the same.

Both signals are continuous and random.

They have the same frequency and wavelength but different amplitudes.

Visually they look similar.

How do I mathematically detect the change of periodicity?

Something like: both signals are now pointing North; both signals are now pointing South.

I was studying an algorithm concerning a hybrid frequency estimator used for the determination of off-nominal frequency in a signal. I would like an explanation of how it detects the off-nominal frequency and how the IIR filter helps in that.

How can I implement an adaptive dictionary reconstruction for compressed sensing of ECG signals, and how can I analyze the overall power consumption of the proposed ECG compression framework as it would be used in a WBAN?

Hello, I am working on a project where I must implement an OFDM simulation with an MMSE estimator for a Rayleigh channel. Although the estimation seems tolerable, I am getting no improvement in BER, even when simulating 10,000 symbols.

I have attached the paper I am trying to implement, along with the MATLAB code and some representative figures showing exactly what I am doing.

I can't tell whether I am missing something important when estimating the channel, when using the specific pilot symbols, or somewhere else.

Thanks in advance,

Anastasia

I am taking temperature, pressure, and humidity data from a weather sensor and want to predict the data by applying an adaptive filter as a predictor model, using the Least Mean Squares (LMS) algorithm to predict each quantity separately.

I have just started working on adaptive filters, so any suggestions would be helpful.

Thanks in advance.
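A minimal one-step-ahead adaptive predictor can be sketched as follows; this uses the normalized LMS (NLMS) update for step-size robustness, and the "weather" series below is fabricated for illustration:

```python
import numpy as np

def nlms_predict(x, order=4, mu=0.5):
    """One-step-ahead predictor: x_hat[n] = w . [x[n-1], ..., x[n-order]]."""
    w = np.zeros(order)
    x_hat = np.zeros_like(x)
    for n in range(order, len(x)):
        phi = x[n - order:n][::-1]              # past samples, most recent first
        x_hat[n] = w @ phi
        e = x[n] - x_hat[n]                     # prediction error
        w += mu * e * phi / (phi @ phi + 1e-8)  # normalized LMS update
    return x_hat, w

# Fabricated "temperature" series: slow sinusoid plus measurement noise
rng = np.random.default_rng(6)
t = np.arange(5000)
temp = 20 + 5 * np.sin(2 * np.pi * t / 500) + 0.1 * rng.standard_normal(t.size)
x_hat, w = nlms_predict(temp)
```

The normalization by phi @ phi keeps the update stable when, as with raw temperature or pressure readings, the input has a large mean; plain LMS would need the data centered or a much smaller step size.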

Hello, dear readers.

If a signal is real-valued, its DFT is known to be Hermitian-symmetric.

How does the DFT behave when a signal is complex-valued and Hermitian-symmetric or Hermitian-antisymmetric? Why?

Thank you.
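By duality with the real-signal case, a Hermitian-symmetric sequence (x[n] = conj(x[(-n) mod N])) has a real DFT, and a Hermitian-antisymmetric one has a purely imaginary DFT. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 16
idx = (-np.arange(N)) % N                    # index map n -> (-n) mod N
x0 = rng.standard_normal(N) + 1j * rng.standard_normal(N)

x_sym = 0.5 * (x0 + np.conj(x0[idx]))        # Hermitian-symmetric part
x_anti = 0.5 * (x0 - np.conj(x0[idx]))       # Hermitian-antisymmetric part

X_sym = np.fft.fft(x_sym)                    # real, up to rounding
X_anti = np.fft.fft(x_anti)                  # purely imaginary, up to rounding
```

The reason is the DFT identity conj(x[(-n) mod N]) ↔ conj(X[k]): symmetry forces X = conj(X) (real), antisymmetry forces X = -conj(X) (imaginary).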

I have complex values of a periodic signal which is clearly visible in the time domain, but I want to find its frequency content. The FFT is not working for me, and I am looking for alternative ways to solve the problem.

I am trying to write my dissertation about automatic quantization of algorithms. These algorithms are written as C functions representing the behaviour of a VLSI circuit. The main purpose of the dissertation is to maximize the number of bits removed from the word-lengths of the signals describing a VLSI circuit, by finding a sub-optimal combination which satisfies a rule. The rule is that any combination must cause an error less than or equal to a boundary error.

In order to find this suitable combination which is close to the error boundary and maximizes the removed bits, my dissertation supervisor suggested using local search algorithms. Because the quantization will be executed on a GPU (CUDA), I have found that differential evolution and the cellular genetic algorithm are suitable for a SIMD machine and easy to implement and execute in parallel. The constraints of the problem are: use of fixed-point quantization, error produced at the outputs as the fitness function, and word-lengths from 1 to 22 bits (integer values). Currently, I have implemented the canonical DE (DE/rand/1/bin) and cGA (NEWS, 2D toroidal grid) on CUDA for any number of signals describing the VLSI circuit.

Before testing the algorithms with real VLSI circuits, I am testing them with a synthetic benchmark to confirm the related work and suggestions made about DE. This benchmark returns an output error (one-output circuit) based on this formula: a sum over the j elements of [ (element_j_of_individual_i - element_j_of_local_optimum) * 2 * factor ], with the factor selected randomly for each element from 0.5 to 0.9. Hence, if an individual of the population is an exact match of the pre-selected local optimum, the error returned by this fitness function will be 0. Any individual which has at least one element below the corresponding element of the local optimum is discarded (and, if it belongs to the initial population, regenerated until a valid individual is obtained).

Using this schema, the parameters of the benchmark are:

- population size of 5D, 10D, 15D and 20D (with D = number of signals describing the VLSI circuit), with each element in the population set randomly from 16 to 22 for each execution. ex: for D = 5, individual_number_0 = {14, 17, 21, 19, 20}

- randomly pre-selected local optimum from these values: {6,7,8,9,10}. ex: for D = 5, local_optimum = {7, 10, 6, 9, 6}

- ten executions, trying to eliminate in some way the bias caused by a pre-selected local optimum

- F = 0.5 and CR = 0.1

- the algorithm will stop when the local optimum is found or when all the generated offsprings are not valid and/or not better than their parents

For this set-up, I have found that for 50, 100 and 150 signals, the DE found the exact pre-selected local optimum for populations of 5D, 10D, 15D and 20D in all ten executions within several iterations (if requested, I can upload the iterations, timings, etc.). For 200 signals the DE only found the local optimum for 10D, 15D and 20D. For 250 signals, only one of the ten executions for 20D found the local optimum, and no execution found it for 5D, 10D or 15D. I have tried to relax the termination condition of the search by establishing an error boundary somewhat close to 0 (values like 50, 70, 100) to find sub-optimal solutions for population sizes of 5D, 10D, 15D and 20D (D = 250). Although I have relaxed the termination condition, the algorithm stops without finding the local optimum.

I have found the Q&A from Stephen Chen, 'What is the optimal/recommended population size for differential evolution?', but I do not know a priori whether those answers fit my needs, because I would like to use DE for VLSI circuits with up to 400 signals in a first approach.

(Edit): added some examples: one randomly initialized individual in the initial population and one randomly pre-selected local optimum.
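For comparison, a compact DE/rand/1/bin over integer word-lengths might look like the following sketch (it omits the validity constraint described above and uses a toy separable fitness; all parameters are illustrative):

```python
import numpy as np

def de_rand_1_bin(fitness, D, pop_size, bounds, F=0.5, CR=0.1, max_gen=200, seed=0):
    """Canonical DE/rand/1/bin minimizer over integer word-lengths in [lo, hi]."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.integers(lo, hi + 1, size=(pop_size, D)).astype(float)
    fit = np.array([fitness(ind) for ind in pop])
    for _ in range(max_gen):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = rng.choice(others, 3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])
            cross = rng.random(D) < CR
            cross[rng.integers(D)] = True              # at least one mutated gene
            trial = np.clip(np.round(np.where(cross, mutant, pop[i])), lo, hi)
            f = fitness(trial)
            if f <= fit[i]:                            # greedy selection
                pop[i], fit[i] = trial, f
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Toy separable benchmark in the spirit of the question: L1 distance to a hidden optimum
D = 5
optimum = np.random.default_rng(1).integers(6, 11, D)
fitness = lambda wl: float(np.sum(np.abs(wl - optimum)))
best, best_f = de_rand_1_bin(fitness, D, pop_size=10 * D, bounds=(1, 22))
```

The rounding-to-integer step is one simple way to keep DE on the integer word-length lattice; it does reduce population diversity, which matters at large D.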

What are the advantages and disadvantages of performing numerical integration from acceleration to displacement in the time domain and frequency domain, respectively?
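For reference, frequency-domain double integration amounts to dividing the spectrum by (jω)² and suppressing the DC bin, which is where the method's characteristic drift handling comes from. A sketch on a synthetic, exactly periodic acceleration:

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
w0 = 2 * np.pi * 10.0
acc = -w0**2 * np.sin(w0 * t)            # acceleration of x(t) = sin(w0*t)

# Frequency domain: X(w) = A(w) / (j*w)^2, with the DC bin zeroed
A = np.fft.fft(acc)
omega = 2 * np.pi * np.fft.fftfreq(len(t), 1 / fs)
X = np.zeros_like(A)
nz = omega != 0
X[nz] = A[nz] / (1j * omega[nz]) ** 2
x_rec = np.real(np.fft.ifft(X))
```

Time-domain cumulative integration, by contrast, accumulates low-frequency drift from any bias or trend, which is why detrending or high-pass filtering usually accompanies it.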

Hi,

I am building up my concepts in Digital Signal Processing by analyzing the frequency response of a lowpass FIR filter design. I can find the coefficients and analyze their frequency response; but instead of that process, suppose we design the lowpass FIR filter ourselves. What steps do we have to follow?

-Abhinna Biswal
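As one concrete version of the window method: pick a sampling rate, cutoff, and filter length; multiply the ideal sinc impulse response by a window; then verify the frequency response. A Python/SciPy sketch (all specifications here are illustrative):

```python
import numpy as np
from scipy.signal import firwin, freqz

fs = 8000.0          # assumed sampling rate
fc = 1000.0          # desired cutoff frequency
numtaps = 101        # filter length (odd gives a Type I linear-phase filter)

# Window-method design: windowed-sinc coefficients (Hamming window here)
b = firwin(numtaps, fc / (fs / 2), window='hamming')

# Verification: compute and inspect the frequency response
w, h = freqz(b, worN=2048)
freqs = w * fs / (2 * np.pi)
mag_db = 20 * np.log10(np.abs(h) + 1e-12)
```

The window choice trades transition width against stopband attenuation, and the length scales the transition width down; iterating on those two knobs until the response meets the specification is the usual final step.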

I am new to FPGAs. I want to use the KC705 FPGA board to generate a 150 MHz sinusoidal wave through the DAC (DAC3283) of the FMC150.

For the FMC150 to generate the wave, does it only require the 8-bit IQ data pair, clock, and frame signals stated in the datasheet, or does it need another control signal to work?

I am transmitting an RF signal through the atmosphere over a range of 10 km, but the range seems limited by the environmental noise floor. Is there any signal processing method that can mitigate the effect of the environmental noise floor?

I am starting my project in WSN by implementing DSP algorithms on actual WSN motes. How do I check whether the hardware is capable of running a simple algorithm such as convolution?

For example, if I have a BER of 8 x 10^-12 and I want to change it to 8 x 10^-4, I need to add some noise to the signal, using the awgn function in MATLAB. But I want to add/subtract noise in the vertical histogram of the eye diagram for a two-level system. This will make the signal noisy so that my required BER is achieved. How can I do this?
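For a two-level decision with Gaussian noise in the vertical eye histogram, the target BER fixes the required noise standard deviation in closed form via BER = Q(A/sigma). A sketch (A and the target value are illustrative):

```python
import numpy as np
from scipy.special import erfc, erfcinv

A = 1.0                       # decision distance (eye half-opening)
target_ber = 8e-4

# BER = Q(A/sigma) = 0.5*erfc(A/(sigma*sqrt(2)))  =>  solve for sigma
sigma = A / (np.sqrt(2) * erfcinv(2 * target_ber))

# Monte Carlo sanity check: how often does the noise cross the distance A?
rng = np.random.default_rng(8)
noise = rng.normal(0.0, sigma, 2_000_000)
ber_sim = float(np.mean(noise > A))
```

Adding zero-mean Gaussian noise with this sigma to each decision level widens the vertical histogram just enough to hit the target BER.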

I would like to validate a method for converting a PSD to a time series against existing code or examples.
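A common synthesis method is to assign each frequency bin an amplitude derived from the PSD and a random phase, then inverse-transform; the variance of the result should match the integral of the PSD, which gives a built-in validation check. A sketch (scaling conventions vary between tools; this one targets a one-sided PSD in units²/Hz):

```python
import numpy as np

def psd_to_timeseries(psd, fs, seed=0):
    """Synthesize a random time series whose one-sided PSD (units^2/Hz) matches
    `psd`, sampled at frequencies k*fs/N for k = 0..N/2 (length N/2 + 1)."""
    rng = np.random.default_rng(seed)
    N = 2 * (len(psd) - 1)
    df = fs / N
    amp = N * np.sqrt(np.asarray(psd, float) * df / 2)   # per-bin amplitude
    spec = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, len(psd)))
    spec[0] = 0.0                    # drop DC
    spec[-1] = np.abs(spec[-1])      # Nyquist bin must be real
    return np.fft.irfft(spec, n=N)

fs = 1000.0
psd = np.full(2049, 2.0)             # flat 2 units^2/Hz up to Nyquist
x = psd_to_timeseries(psd, fs)       # variance should be ~ 2 * fs/2 = 1000
```

The validation: for this flat PSD, the integral is 2 units²/Hz times a 500 Hz bandwidth, so the synthesized series should have variance close to 1000.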

I'm interested in finding out the reason for the increase in mainlobe width and the decrease in sidelobe amplitude when designing filters by windowing (truncating an ideal impulse response) with non-rectangular windows like Bartlett, Hamming, etc.

Dear all,

I used an oscilloscope to measure voltages and got data from two channels; each channel has time data and value data. Now I want to use MATLAB to calculate the magnitude and angle (A ∠±θ) for each channel, and the magnitude and phase shift between the two voltage channels as (A ∠±θ).

DATA file in attachment.

Could you help me to do that?
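One way is to take each channel's FFT, read magnitude and angle at the dominant bin, and subtract the angles for the phase shift. A sketch with synthetic stand-in channels (replace v1/v2 with the loaded scope data; the 2|V|/N amplitude scaling holds away from DC and Nyquist, and exact results assume an integer number of cycles in the record):

```python
import numpy as np

fs = 10000.0
t = np.arange(0, 0.1, 1 / fs)
# Stand-ins for the two scope channels (replace with the measured data)
v1 = 3.0 * np.cos(2 * np.pi * 50 * t + 0.2)
v2 = 1.5 * np.cos(2 * np.pi * 50 * t - 0.8)

def phasor(v):
    """Magnitude and phase (rad) of the dominant spectral component."""
    V = np.fft.rfft(v)
    k = int(np.argmax(np.abs(V[1:])) + 1)   # dominant bin, skipping DC
    mag = 2 * np.abs(V[k]) / len(v)         # sinusoid amplitude at that bin
    return mag, float(np.angle(V[k]))

m1, p1 = phasor(v1)
m2, p2 = phasor(v2)
shift = float(np.angle(np.exp(1j * (p1 - p2))))   # wrapped phase difference
```

If the record does not contain an integer number of cycles, windowing and bin interpolation (or a least-squares sine fit) improve the estimates.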

I am using FTDI's FT4222H, a programmable IC released a few months ago. It is used for interfacing I2C/SPI-based slave or master devices and acquiring signals or data. I am using the evaluation module of the same IC to act as an I2C master and communicate with an EEPROM 24LCB16 for reading and writing data.

I am using LabVIEW to communicate with the FTDI IC through USB. I am not using the virtual COM port; instead, I import the FT4222H .dll into the VI and execute the program that way.

The device is listed properly in the VI and is recognized as an FT4222H. The mode selected is Mode 3, where I2C Master/Slave and SPI Master/Slave are enabled and the GPIOs are disabled, so it is listed as FT4222H.

Even so, the device opens and FT_Status does not throw an error up to the point where the device is initialized. I have configured the device as an I2C master, and in the next step I read the data from the EEPROM, but the following errors are listed:

1. Initialize - 1000 (FT_STATUS)
2. Read device - 3
3. Uninitialize - 3
4. Close status - 1

The DWORD codes are listed in the appendix of the datasheet.

If someone can work out the solution to this, kindly help me out. It's almost done; only the write and read operations remain to be performed.

I have attached a zip file of the VI and sub-VIs that I am executing. If you find any errors in them, please let me know.

I have to decide on a research area for my Master's and would like to know the current research trends in the field of DSP/image processing.

I am trying to implement FBMC in Simulink. I already have MATLAB code for FBMC, but when I implement it in Simulink, it creates a problem because a polyphase network block is not available in the Simulink library.

I understand that the purpose of using an equalizer is to shorten the impulse response of a channel. In most examples I have seen so far, equalization is done in the z-domain. Now, I have an ADSL channel response from 1 Hz to 1.1 MHz. How can I convert this frequency response into the corresponding z-transform response? In short, how can I design a MATLAB equalizer for this kind of channel?

Hi, we have a digitizer and we want to calculate its response (pole-zero, or amplitude-phase, of the system).

Knowing the chip used in the digitizer, we know that it has 24-bit resolution and a peak-to-peak range of 5 V (±2.5 V). Can anyone suggest a way to obtain the response, for example by applying a step input (with a signal generator) and recording the outputs in counts?

Furthermore, how does the sampling rate affect the response? Should we calculate one specific response for each sampling rate?

Thank you,

Based on the formula **Δf** = 1/T = fs/N, it can be understood that in order to improve time or frequency resolution one has to either change the signal properties or adjust the window length. How, then, can it be claimed that a particular signal processing technique offers better resolution than another? For instance, it is well known that the Stockwell transform has better resolution than the Short-Time Fourier Transform, but how?

I have no formal training in electrical engineering (where DSP is a staple), but I do have a bachelor's degree in mathematics. I prefer a textbook with plenty of examples. Thanks very much in advance!

A 30 MHz pulse-modulated IF signal with a width of around one microsecond is sampled at 200 MHz. We have a requirement to measure the IF pulse carrier frequency drift with a resolution of 1 kHz.

We explored a Costas-loop implementation in an FPGA but did not succeed, and we are looking for an alternative approach for receive-frequency drift measurement in the FPGA/digital domain.

Could anyone suggest an algorithm/concept for IF pulse frequency measurement in an FPGA?

Thanks in advance.

M.Ashok

LTE and WiFi standards define transmit EVM but not EVM for the "user side". User-side EVM usually depends on receiver specs (noise floor, etc.), but what are standard acceptable EVM values for OFDM after wireless transmission using blind equalization, or for spatial diversity or multiplexing?

What kinds of projects are out there in neural system analysis?

What are the takeaways?

I know it can be applied in many industries, in commercial settings, and at home.

But I am a bit vague on, and cannot get a grip on, what the term stands for and what can be done within the field.

I honestly thought neural systems were somewhat related to bioengineering.

Would you kindly give some explanation?

How can I use cooperative spectrum sensing to eliminate primary user emulation attacks in cognitive radio networks using energy detection?

As a mathematician trying to understand the way the Signal-To-Noise ratio works in Digital Signal Processing, I have the following observation:

A signal is recorded, suppose I recorded a class lecture. When I insert this recording in audio-software which shows the recorded sound waves over time, I am able to determine the amplitude of the teacher's spoken voice and the amplitude of (static class) noise when the teacher is silent for some time. Suppose my recording indicates that the amplitude of the sound waves when my teacher speaks is at 50 dB and 20 dB when he is silent. For a signal-to-noise ratio I would instinctively divide 50 over 20, obtaining a ratio of 2.5. Or maybe more instinctively, the noise is 40% of the total incoming sound (noise-to-signal). Is my intuition failing me because the scale of sound (dBs) is not linear?

From one source I read that I could compute the signal-to-noise ratio as [Teacher+Noise in dB] − [Noise in dB] = [Signal-to-Noise in dB], which would give a 30 dB signal-to-noise ratio in the example above. Can anyone confirm whether this is correct?
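That subtraction rule is essentially correct because decibels are logarithmic, so power ratios become differences; dividing 50 by 20 mixes up the scales. A numerical confirmation, including the small correction for the fact that the 50 dB reading contains signal plus noise:

```python
import numpy as np

teacher_plus_noise_db = 50.0
noise_db = 20.0

# dB is logarithmic, so power ratios turn into differences of dB values
snr_db = teacher_plus_noise_db - noise_db      # 30 dB
ratio = 10 ** (snr_db / 10)                    # = 1000x in linear power

# Strictly, the 50 dB reading is signal+noise; subtract the noise power first:
p_total = 10 ** (teacher_plus_noise_db / 10)
p_noise = 10 ** (noise_db / 10)
true_snr_db = 10 * np.log10((p_total - p_noise) / p_noise)
```

With a 30 dB gap the correction is tiny (about 0.004 dB), so the simple dB subtraction is an excellent approximation here.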

*Digital image processing* is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, *digital image processing* has many advantages over analog image processing.

Kindly suggest some research articles on digital image processing using digital topology.