Science topic

# Communication & Signal Processing - Science topic

Explore the latest questions and answers in Communication & Signal Processing, and find Communication & Signal Processing experts.
Questions related to Communication & Signal Processing
• asked a question related to Communication & Signal Processing
Question
They say in the comments and documentation that it is implemented using a "Direct Form II Transposed" realization.
But for FIR filters (i.e. when a = 1) I suspect it uses an fft() to do the convolution in the frequency domain when it is faster to do so.
This would make sense because the procedure operates on a batch of data, not a sequential stream.
I don't suppose it really matters if the result is the same. I was just wondering if anybody knew for sure (for sizing and timing reasons).
Hugh Lachlan Kennedy My implementation is attached; it is equivalent to calling y = filter(B,1,x) in MATLAB, where B contains your FIR coefficients. I have tested it with some real data and all results match.
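Whether filter() uses a direct-form loop or an FFT internally, the two computations agree to machine precision for FIR coefficients, which can be checked directly. A minimal sketch (Python/NumPy used here for illustration; the coefficients B and input x are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal(8)      # example FIR coefficients (hypothetical)
x = rng.standard_normal(1000)   # example input batch (hypothetical)

# Direct-form FIR: y[n] = sum_k B[k] * x[n-k]  (what filter(B,1,x) computes)
y_direct = np.zeros(len(x))
for k, bk in enumerate(B):
    y_direct[k:] += bk * x[:len(x) - k]

# FFT-based convolution, truncated to the input length
n = len(x) + len(B) - 1
y_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(B, n), n)[:len(x)]

print(np.allclose(y_direct, y_fft))  # → True
```

So even if the vendor switched to FFT (overlap-add) convolution for long batches, any code depending on the output would see the same result up to rounding.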
• asked a question related to Communication & Signal Processing
Question
Hello, I need to find the amplitude of the FFT of a real signal in MATLAB. I would like to get the same amplitude in the frequency domain (with fft) as in the time domain. I have seen several scaling conventions:
1) Division by N: amplitude = abs(fft(signal))/N, where N is the signal length;
2) Multiplication by 2: amplitude = 2*abs(fft(signal))/N;
3) Division by N/2: amplitude = abs(fft(signal))/(N/2);
4) Others follow Parseval's theorem: amplitude = abs(fft(signal))/factor, where factor = 1/fs (fs being the sampling frequency).
Can anybody help me?
Thank you
I know that my answer to your question is very late, but since it is a basic question I would like to give my opinion.
- The peak value of a sinusoidal wave can be found in the time domain by discretizing the signal and building a time sequence of amplitudes. By comparing the values we can locate the highest amplitude, which corresponds to the peak value. The located value is most accurate when one samples exactly at the peaks and when the sampling frequency is increased.
So there are two requirements for increasing the accuracy:
- aligning the samples with the peaks;
- increasing the sampling frequency.
Now assume that we transform this signal to the frequency domain. The DFT is simply a sampling of the frequency spectrum of the signal, so one can locate a signal in the discrete frequency domain with a resolution of fs/2N: one always hits a signal at the intended frequency ± fs/2N.
The Fourier transform of a single sine wave observed over a time window N/fs is a sin(x)/x curve centred on the sine-wave frequency, and the DFT will locate its point of maximum amplitude with the resolution of fs/2N, as hinted above.
So there will be an error in locating the peak value both in the time domain and in the frequency domain because of the discretization.
Best wishes
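The scaling options listed in the question are easy to check against a known sine wave whose frequency falls exactly on a DFT bin (no leakage, no window). A small sketch (Python/NumPy here for illustration; fs = 1000 Hz and A = 3 are hypothetical values):

```python
import numpy as np

fs, N = 1000.0, 1000            # sampling rate and number of samples
t = np.arange(N) / fs
A, f0 = 3.0, 50.0               # amplitude and frequency of the test sine
x = A * np.sin(2 * np.pi * f0 * t)

X = np.abs(np.fft.fft(x)) / N   # option 1: divide by N -> A/2 at +-f0
X_onesided = 2 * X[:N // 2]     # option 2: one-sided spectrum times 2 -> A

k0 = int(round(f0 * N / fs))    # bin index of f0 (exactly on a bin here)
print(X[k0], X_onesided[k0])    # → 1.5  3.0
```

Option 1 gives A/2 because the energy splits between the positive- and negative-frequency bins; the factor of 2 on the one-sided spectrum recovers the time-domain amplitude A.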
• asked a question related to Communication & Signal Processing
Question
Hi!
I have two expressions in time domain; one of which is a simple one, another is a complicated, convoluted one. It is too difficult to prove they are equal in the time domain.
However, the Fourier transform (frequency domain) of the two expressions is identical. Then, can I claim with 100% confidence that the two expressions in time domain are indeed equal?
Strictly speaking, two well-behaved (e.g. absolutely integrable) functions with identical Fourier transforms can differ at most on a set of measure zero, because the Fourier transform is invertible. So if the two transforms are identical as analytic expressions, the time-domain expressions are equal almost everywhere. Be careful, though, if the transforms were only compared numerically or on a finite grid: agreement there does not prove analytic equality.
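For sampled data the point can be seen directly: the DFT is invertible, so two finite sequences with the same DFT must be the same sequence. A tiny sketch (Python/NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(256)          # an arbitrary finite sequence

X = np.fft.fft(x)                     # forward transform
x_back = np.fft.ifft(X).real          # inverse transform recovers the samples

print(np.allclose(x, x_back))         # → True
# Consequently, if two finite sequences had identical DFTs, applying the
# inverse DFT to both would yield identical sequences.
```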
• asked a question related to Communication & Signal Processing
Question
Colleagues, good day.
- add path loss and fading to the channel model between base-station antennas and handset antennas, in the context of a 5G network, MIMO technology and the NLOS scenario;
- add the dependence between resource blocks in time and frequency.
After watching several articles, I still have questions.
Now we have a simple channel model Y = H * X, where H is the complex channel matrix.
It is necessary to add here the fading and path loss for the NLOS scenario.
I went through many articles, chose the following
3) article "Channel estimation and channel tracking for correlated block-fading channels in massive MIMO systems" https://www.sciencedirect.com/science/article/pii/S2352864816301614?via%3Dihub#!
and watched a few questions from this site.
From winner2 [1] I plan to take the formula for path loss ((4.23) page 43), the scenario of experiment B1 (page 14 table 2-1, page 17), and the constants for the formula in accordance with this scenario (page 44 B1).
Question 1. Do I understand correctly that, as indicated in [4a] by Emil Björnson, it is enough to convert the formula (path loss + fading) to variance = 10^((-pathloss + fading)/10), and then multiply the channel matrix by sqrt(variance/2)?
Question 2. The WINNER documentation [1] recommends taking the fading as a log-normal random value. In [3] and [4a, b] either Gaussian or Rayleigh is proposed. Could you please comment on which one I should choose?
Question 3. Article [3] is literally the only place I found that mentions the correlation between resource blocks and their influence on each other (chapter 2, first paragraph, and formula (2)). I did not fully understand how to reproduce this dependence in the context of my channel model. Have you seen such dependencies? Can you share links to sources where I could see and use them?
Thank you very much in advance.
I will not repeat the answer of my colleague Navid; I would only like to add some conceptual comments which may be useful for the issue.
Which channel model is dominant depends on the transmission medium. If the medium is homogeneous, the line-of-sight model dominates. If the medium contains large obstacles such as buildings, a log-normal (shadowing) channel model will prevail. If the medium is rich in scattering, it will be multipath-dominated, with Rayleigh fading the most suitable model. In a study one can examine the effect of each propagation model on system performance.
As for the variation of the channel with time, represented per resource element, it depends on the time variation of the channel. Generally the channel can be considered constant during the coherence time, which is dictated by the Doppler shift.
Best wishes
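To illustrate Question 1, here is a minimal sketch (Python/NumPy; the antenna count, user count and the 100 dB path loss are hypothetical) of converting a dB-scale path loss into a linear variance and scaling an i.i.d. Rayleigh-fading matrix by sqrt(variance/2), as described in the question:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 64, 4                      # base-station antennas, users (hypothetical)
pathloss_dB = 100.0               # large-scale loss for one user (hypothetical)

variance = 10 ** (-pathloss_dB / 10)          # linear-scale channel variance
# i.i.d. Rayleigh fading: CN(0, variance) entries
H = np.sqrt(variance / 2) * (rng.standard_normal((M, K))
                             + 1j * rng.standard_normal((M, K)))

# Check: the empirical per-entry power should be close to the variance
print(np.mean(np.abs(H) ** 2), variance)
```

The sqrt(variance/2) appears because the variance is split equally between the real and imaginary parts of each complex coefficient.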
• asked a question related to Communication & Signal Processing
Question
Hello RG!
I want to know: is there any journal related to communications and signal processing which accepts papers containing only a problem formulation? I mean mentioning some serious problems related to a system and providing a small solution, so that others can refer to those problems and propose solutions.
If there are papers like that, please refer me to them.
Any suggestion is highly appreciated.
It depends on the journal in which you will publish the paper
Best Regards Mohammed Saquib Khan
• asked a question related to Communication & Signal Processing
Question
How can one implement adaptive-dictionary reconstruction for compressed sensing of ECG signals, and how can one analyse the overall power consumption of the proposed ECG compression framework as it would be used in a WBAN?
This is covered in machine-learning classes on YouTube.
• asked a question related to Communication & Signal Processing
Question
The flying car is now a reality. It is still not affordable in many respects, but the future of flying cars cannot be underestimated. It has huge potential to give an individual flying wings; everybody could fly like a bird.
Question is what are the possible pros and cons?
and
What are the major challenges which need to be addressed to make flying cars reality to common man?
First of all, the factor of safety is very important: if a car stops on the road it just sits there, while if it fails in flight it falls on people or houses. There are also two technological limitations to consider. The first is what you will use to power the car: if it is gasoline, then you are talking about a considerably large jet engine; if it is electric, then the propellers would have to be large and would not generate much thrust. The second is the technology for storing electric power: the most commonly used method is Li-Po and Li-ion batteries, which are not very efficient and have the safety issue of self-igniting if subjected to shock, overcharging, or short-circuiting.
• asked a question related to Communication & Signal Processing
Question
Can anyone suggest Scopus-indexed journals with a good impact factor in wireless communication or signal processing, with a short review time, fast publication, and no or low fees for authors?
The journal suggesters of Springer and Elsevier will help. Please stay away from individuals asking for your paper to be emailed to them, and avoid journals like IJSSE/IJERT and the like. Verify the impact-factor claims of any journal by checking the Clarivate site and the JCR.
• asked a question related to Communication & Signal Processing
Question
I want to display a signal waveform passed through an AWGN channel, so I followed these block diagrams, referenced this website, and finished this program.
At the transmitter I sent a bit stream of 1s and 0s, then modulated it to a BPSK signal. At the receiver I used coherent detection to demodulate it. The program seems correct from the waveform: after modulating 1 and 0, the phases of the two waveforms are shifted by 180º with respect to each other. To verify it I calculated the BER, which should vary with the SNR. However, when I set SNR = 1 dB, the BER is still 0, which does not make sense. I don't know why. Who can help me check? Thanks
dear,
try this program:
%"Author: Krishna"
clear
N = 10^6; % number of bits or symbols
rand('state',100); % initializing the rand() function
randn('state',200); % initializing the randn() function
% Transmitter
ip = rand(1,N)>0.5; % generating 0,1 with equal probability
s = 2*ip-1; % BPSK modulation 0 -> -1; 1 -> 1
n = 1/sqrt(2)*[randn(1,N) + j*randn(1,N)]; % white gaussian noise, 0dB variance
Eb_N0_dB = [-3:10]; % multiple Eb/N0 values
for ii = 1:length(Eb_N0_dB)
    % Noise addition
    y = s + 10^(-Eb_N0_dB(ii)/20)*n; % additive white gaussian noise
    % receiver - hard decision decoding
    ipHat = real(y)>0;
    % counting the errors
    nErr(ii) = size(find([ip- ipHat]),2);
end
simBer = nErr/N; % simulated ber
theoryBer = 0.5*erfc(sqrt(10.^(Eb_N0_dB/10))); % theoretical ber
% plot
close all
figure
semilogy(Eb_N0_dB,theoryBer,'b.-');
hold on
semilogy(Eb_N0_dB,simBer,'mx-');
axis([-3 10 10^-5 0.5])
grid on
legend('theory', 'simulation');
xlabel('Eb/No, dB');
ylabel('Bit Error Rate');
title('Bit error probability curve for BPSK modulation');
• asked a question related to Communication & Signal Processing
Question
I understand that the purpose of using an equalizer is to shorten the impulse response of a channel. In most examples I have seen so far, equalization is done in the Z-domain. Now, I have an ADSL channel response from 1 Hz to 1.1 MHz. How can I convert this frequency response into the corresponding Z-transform response? In short, how can I design a MATLAB equalizer for this kind of channel?
Interested
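One standard route from a measured frequency response to a discrete-time (Z-domain) FIR model is frequency-sampling design: resample the measured response onto a uniform grid from DC to fs/2 and take an inverse real FFT to obtain the filter taps. A sketch (Python/NumPy for illustration; the smooth low-pass curve below is a hypothetical stand-in for the measured ADSL response, which is assumed magnitude-only here):

```python
import numpy as np

n = 512                                  # FIR length (assumption)
fs = 2.2e6                               # sampling rate ~ 2 * 1.1 MHz band edge
f = np.linspace(0, fs / 2, n // 2 + 1)   # uniform frequency grid, DC..fs/2

# Stand-in for the measured ADSL channel response (hypothetical shape)
H = 1.0 / np.sqrt(1 + (f / 3e5) ** 2)

h = np.fft.irfft(H, n)                   # FIR taps whose response samples H
h = np.roll(h, n // 2)                   # make the filter causal (n/2 delay)

# Verify: the designed filter matches the target on the grid
H_check = np.abs(np.fft.rfft(np.roll(h, -n // 2), n))
print(np.max(np.abs(H_check - H)))       # very small
```

MATLAB's fir2 implements essentially this idea; an equalizer can then be designed against the resulting taps (for example by inverting the response where it is not too small).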
• asked a question related to Communication & Signal Processing
Question
I have been trying to figure out how to generate a time-series dataset with a feedforward neural network.
Let's say I want to simulate a time series of length n and order m using a feedforward neural network, with the following steps:
1- first, I randomly initialize the weights, biases and the first m data values using a uniform distribution with specific mean and variance parameters;
2- second, I calculate the n elements of the time series using the feedforward network function;
3- third, I randomly generate white noise using a normal distribution with specific mean and variance parameters;
4- then I add the generated white noise to the n elements of the time series calculated in the second step.
I have found a resource on adding white noise to a series, and I see that we should first generate the n time-series values completely and only then add white noise to each element. So when we generate the first element (element m+1) we should not add white noise to it; therefore the subsequent elements m+2, m+3, m+4, ... are generated by the feedforward network function without the effect of the white noise of elements m+1, m+2, m+3, ...
As I observe it, data generation with a feedforward network by the above steps leads to the same constant value when m is small (equal to 1, 2 or 3) and to zero when m is large (m = 6, 7, ..., 14): my series without white noise is a constant line for small m and zero for large m, and after adding white noise my simulated series is just the generated white noise.
What am I missing when trying to simulate a time-series dataset with a feedforward neural network?
What type of data are you trying to generate?
• asked a question related to Communication & Signal Processing
Question
I got a high value for q-factor for SIW filter. is this value correct for these structures?
There is a large difference between a SIW resonator and a dielectric resonator. The loss of a SIW resonator is dominated by copper loss, which is not present in a DR. Replace all the metal of the SIW by PEC and Qu will exceed even 5000.
• asked a question related to Communication & Signal Processing
Question
I use the following code to simulate a QPSK signal through a 3-path Rayleigh channel:
signal_r=2*(rand([Ls,1])>0.5)-1;
signal_i=2*(rand([Ls,1])>0.5)-1;
QPSK=signal_r+1i*signal_i;
Es=((QPSK)' *(QPSK) ) / Ls;
N0=Es/10^(SNR_dB/10);
h=sqrt(P/2)*(randn(1,3)+1i*randn(1,3));
Is this right?
Why do I need to multiply (randn(1,3)+1i*randn(1,3)) by sqrt(P/2)? Is the denominator always 2? How do I define P?
Is h called the impulse response of the Rayleigh channel? Are the 3 paths also called 3 taps?
Hello,
Overall, your code is not bad; it just needs to be improved. I mean:
- you didn't create a signal, but symbols. In order to simulate a single-carrier signal, you just have to oversample (zero padding) and filter (root-raised cosine, for instance) the symbols.
- I don't understand why you multiply your channel by sqrt(P/2). If it is for normalization, it is wrong and you may remove it. However, your 3-tap channel is OK; the Rayleigh-ness of the channel is intrinsic to the function randn. In further work, just take into account the delays and the gains of the taps, but for a first test your code is fine.
- Then you can use the conv function to filter the signal by the channel.
- The SNR is usually defined as the power of the received signal over the noise variance. So for a given SNR : assess the power of the received signal P = sum(abs(x).^2)/length(x), then create a noise vector of power 1 : N1 = sqrt(0.5)*(randn(1,L)+1i*randn(1,L)), and finally create the noise vector with expected power:
N = sqrt(P_N)*N1 with P_N = P*10^(-SNR/10)
- Finally, the signal is the sum of the transmitted signal over the channel plus noise N.
Regards
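The steps above can be sketched end-to-end as follows (Python/NumPy for illustration; the tap count and the 10 dB SNR follow the question, while the 1/sqrt(2·3) normalization, which gives the three taps unit total average power, is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
Ls, SNR_dB = 10_000, 10

# QPSK symbols with unit average energy
sym = (2 * rng.integers(0, 2, (Ls, 2)) - 1) / np.sqrt(2)
x = sym[:, 0] + 1j * sym[:, 1]

# 3-tap Rayleigh channel, normalized to unit total average tap power
h = (rng.standard_normal(3) + 1j * rng.standard_normal(3)) / np.sqrt(2 * 3)
y = np.convolve(x, h)[:Ls]

# Noise power set from the *received* signal power, as in the answer above
P = np.mean(np.abs(y) ** 2)
P_N = P * 10 ** (-SNR_dB / 10)
noise = np.sqrt(P_N / 2) * (rng.standard_normal(Ls) + 1j * rng.standard_normal(Ls))
r = y + noise

print(10 * np.log10(P / np.mean(np.abs(noise) ** 2)))  # close to SNR_dB
```

Dividing randn + 1i·randn by sqrt(2) is what makes each complex coefficient unit-variance; that is the role the question's sqrt(P/2) would play if P were the intended total tap power.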
• asked a question related to Communication & Signal Processing
Question
Hi everyone, I'm working on arithmetic compression. I am trying to compress the ITU channel of the international standard. With my algorithm I get a compression ratio of 0.86 if I take ITU channel A with OFDM modulation of 128 subcarriers.
NB: I send the exponential form of the complex numbers, i.e. the angles, between 0 and 2π.
My question is: would anyone have an idea for a sufficiently powerful arithmetic-coding algorithm? Mine gives results that do not satisfy me.
Thank you, dear Luay Shihab. I think this is a very clear summary of compression; now I understand the compression process better.
Now, about the practical implementation: my idea is to implement arithmetic compression, which is an entropy (lossless) compression. My test vector is, for example, a vector of 500 elements, each of which can take a value from 1 to 256. In my vector there will necessarily be redundancies, and I can compress it with an arithmetic algorithm, e.g. the one proposed in "Arithmetic coding for data compression", I. Witten, R. Neal and J. Cleary, Communications of the ACM, 1987, Volume 30, Number 6.
It works well, but I would like to know whether there is a more effective method, one that would achieve better compression on the same vector.
PS: if you are interested, I can share the part of the code that I modified and implemented.
Thank you.
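Before searching for a more powerful algorithm, it is worth computing the empirical entropy of the test vector: under the i.i.d. model it is estimated from, no lossless coder, arithmetic or otherwise, can beat it. A sketch (Python/NumPy; the geometric distribution is a hypothetical stand-in for the real symbol statistics):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical test vector: 500 symbols drawn from 1..256 with a skewed law
symbols = rng.geometric(0.05, 500).clip(1, 256)

_, counts = np.unique(symbols, return_counts=True)
p = counts / counts.sum()
H = -np.sum(p * np.log2(p))          # empirical entropy, bits/symbol

print(f"entropy = {H:.2f} bits/symbol vs 8 bits raw -> best ratio {H/8:.2f}")
```

If the arithmetic coder is already close to H/8, only a better probability model (e.g. context/conditional modelling), not a different entropy coder, can improve the ratio.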
• asked a question related to Communication & Signal Processing
Question
Hi all,
I am trying to simulate a very simple wireless communication system. I am giving the description of the system below.
1. The wireless communication system has one transmit antenna and one receive antenna.
2. The wireless channel is having multipath (In the simulation I have used static
4-tap channel coefficients  just for simplicity)
3. I am adding AWGN noise at the receiver corresponding to the snr value (snr
varries from 0 dB to 10 dB. Signal power is assumed to be 1)
4. In the receiver I am using Maximum Likelihood sequence estimation. (I am
using comm.MLSEEqualizer object in MATLAB for implementing MLSE)

Now the problem is that when I simulate the program, the BER curve remains flat between 0.1 and 0.01 even when I increase the SNR to 50 dB.
I really cannot understand what mistake I have made. Can someone please help me solve this problem?
I have attached my matlab program and BER graph for your reference.
Regards,
Balaji.D
Hi,
Probably the equalizer should be reset at each SNR loop iteration. I did it through a workaround, but you can check the MLSE structure and do a proper reset each SNR loop.
check this code, it works!
clc;clear;
close all;
ii=1;
snr_vec=-10:1:30;
for snr_i=1:length(snr_vec)

snr=snr_vec(snr_i);

N = 100000;
%chCoeffs = [0.986+0.5i;0.845;0.337;0.32345+0.31i;0.2];
chCoeffs = [0.5;0.00000000001];
mlse = comm.MLSEEqualizer('TracebackDepth',10,'Channel',chCoeffs,'Constellation',(1/sqrt(2))*qammod(0:3,4));
errorate = comm.ErrorRate;

Tx_Data = randi([1 2],N,1)-1;
modData = (1/sqrt(2))*qammod(Tx_Data,4,'InputType','bit');
chanOutput = conv(chCoeffs,modData);%filter(chCoeffs,1,modData);
chanOutput = chanOutput(1:end-length(chCoeffs)+1);
noise = (randn(1,length(chanOutput))+1i*randn(1,length(chanOutput)))/sqrt(2)/sqrt(db2pow(snr));
Rx_data = chanOutput+noise.';%awgn(chanOutput,snr);
equalized_Data = mlse(Rx_data);
% clear mlse
% mlse = comm.MLSEEqualizer('TracebackDepth',10,'Channel',chCoeffs,'Constellation',(1/sqrt(2))*qammod(0:3,4));
% reset(mlse)
demodData = qamdemod(equalized_Data,4,'OutputType','bit');
errorStats = errorate(Tx_Data,demodData);
% errorStats(2)
ber(snr_i) = errorStats(1);
save ber ber snr_vec;
NumofErr(snr_i) = errorStats(2);
% ii=ii+1;
clear mlse errorate; % release the System objects so they are re-created (reset) at the next SNR point; a bare 'clear' would also wipe ber and snr_vec
end
semilogy(snr_vec,ber)
grid on;

• asked a question related to Communication & Signal Processing
Question
Wireless communication, statistical analysis, signal processing
Dear Rupaban,
I suggest you attach the publication and files to the topic.
Best regards
• asked a question related to Communication & Signal Processing
Question
I have measured data from a vector network analyzer. Now I need to analyze the data by performing a Fourier transformation and its inverse.
Since I am new to the MATLAB environment, I get stuck every time without any result.
Can someone help me, please? They are small formulas.
What do you mean exactly?!
I saw the files. You should enter the equations in MATLAB using commands such as sum, etc.
• asked a question related to Communication & Signal Processing
Question
My doubt is about the dimension of the subspace when a signal is being oversampled. I would like to 'visualize' an example of this key idea of blind calibration. Below is the original text:
"Assume that the sensor network is slightly oversampling the phenomenon being sensed. Mathematically, this means that the calibrated snapshot x lies in a lower dimensional subspace of n-dimensional Euclidean space.
Let S denote this “signal subspace” and assume that it is r-dimensional, for some integer 0<r<n. For example, if the signal being measured is bandlimited and the sensors are spaced closer than required by the Shannon-Nyquist sampling rate, then x will lie in a lower dimensional subspace spanned by frequency basis vectors. If we oversample (relative to Shannon-Nyquist) by a factor of 2, then r =n/2. "
Thanks for your answers. I finally found something that helped me visualize the general idea. In "Blind Drift Calibration of Sensor Networks using Signal Space Projection and Kalman Filter" they explain what a 'signal subspace' is and also how to construct the orthogonal projection matrix P.
• asked a question related to Communication & Signal Processing
Question
Does anyone have an OOK noncoherent demodulation simulation in MATLAB that compares the results to the theoretical formula?
I am implementing an OOK noncoherent demodulation simulation in MATLAB, e.g. generating a time-domain signal, adding noise, etc.
My results (symbol error rate vs Es/N0) show some bias compared to the theoretical formula.
Thanks a lot
• asked a question related to Communication & Signal Processing
Question
For a complex signal x to be transmitted over a channel with complex impulse response h, x will be convolved with h. What is the mathematical operation done in the receiver to cancel the effect of h? I am confused because I have a code in which x is multiplied by h in the transmitter, and in the receiver the received signal is multiplied by the conjugate of h and divided by h squared.
Dear Mohamed,
in OFDM we use one-tap equalizer, assuming the channel can be approximated by a single complex factor over the frequency range that a subcarrier occupies (layout of your subcarrier-bandwidths such that the channel can be assumed frequency-nonselective). This equalization is done by the complex multiplication.
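The two descriptions in the question are the same operation: multiplying by conj(h) and dividing by |h|² is exactly dividing by h, i.e. the one-tap zero-forcing equalizer. A minimal sketch (Python/NumPy; the flat channel gain h = 0.6 − 0.8j and the noise level are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
# QPSK-like symbols
x = (2 * rng.integers(0, 2, 1000) - 1) + 1j * (2 * rng.integers(0, 2, 1000) - 1)
h = 0.6 - 0.8j                        # flat (one-tap) channel gain, |h| = 1
n = 0.01 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))

y = h * x + n                         # received subcarrier samples
x_hat = np.conj(h) * y / np.abs(h) ** 2   # zero-forcing: conj(h)/|h|^2 = 1/h

print(np.max(np.abs(x_hat - x)))      # small: only the noise term remains
```

Per subcarrier in OFDM, the convolution with h reduces (thanks to the cyclic prefix) to this single complex multiplication, which is why one complex division undoes it.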
• asked a question related to Communication & Signal Processing
Question
I am looking for a coding/error correction scheme which I can apply after QAM modulation. not on binary data or before QAM modulation. I'll be grateful
Trellis-Coded Modulation (TCM), which combines coding and modulation together.
There is a lot of literature on TCM from the 1990s.
• asked a question related to Communication & Signal Processing
Question
How can we ensure that a symbol can be decoded in a time slice in NOMA?
Thanks Abdullah, I tried to download it but it didn't work. Anyway, I am reading one relevant paper and hope I will soon find the answer to my question.
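For power-domain NOMA, a symbol is decodable within a time slice when the power allocation leaves enough margin for successive interference cancellation (SIC). A toy sketch (Python/NumPy; BPSK, and the 0.8/0.2 power split and noise level are hypothetical assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
x_far = 2 * rng.integers(0, 2, N) - 1       # far user's BPSK symbols
x_near = 2 * rng.integers(0, 2, N) - 1      # near user's BPSK symbols
P_far, P_near = 0.8, 0.2                    # power-domain split (assumption)

s = np.sqrt(P_far) * x_far + np.sqrt(P_near) * x_near   # superposed signal
r = s + 0.05 * rng.standard_normal(N)       # near user's (clean) link

# Near user: decode the strong (far) signal first, treating its own as noise
x_far_hat = np.sign(r)
# ...subtract it (SIC), then decode its own symbols from the residual
residual = r - np.sqrt(P_far) * x_far_hat
x_near_hat = np.sign(residual)

print(np.mean(x_far_hat != x_far), np.mean(x_near_hat != x_near))
```

With this margin both error rates are essentially zero; shrink the power gap or raise the noise and SIC starts to fail, which is exactly the decodability condition NOMA scheduling must guarantee.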
• asked a question related to Communication & Signal Processing
Question
I am working on the implementation and investigation of DVB-S2 LDPC codes.
Normally the output of the Matched filter after the channel and before the LDPC decoder is scaled by a factor of 4* Eb/No. (In the case of Soft Inputs)
I am using BPSK modulation and the noise being added is AWGN.
In the case of hard inputs the scaling factor 4·Eb/N0 varies: for a code rate of 9/10 the optimum scaling factor we found was 1.6·Eb/N0, while for a code rate of 1/2 it was 2.6·Eb/N0.
Can someone explain why there is a difference in this scaling factor? Is there a theoretical method to calculate it? Is it a code-defined parameter?
Any help would be greatly appreciated.
Dear Mitun,
I suggest the following link and the attached files on the topic.
-Patent US8286048 - Dynamically scaled LLR for an LDPC decoder ...
Best regards
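For context, the soft-input factor is not a tuning knob: for BPSK over AWGN with unit symbol energy and noise variance σ² = N0/2 per dimension, the exact LLR of a matched-filter sample y is 2y/σ² = 4y·Es/N0. A quick numerical check (Python/NumPy):

```python
import numpy as np

Es_N0_dB = 2.0
Es_N0 = 10 ** (Es_N0_dB / 10)
sigma2 = 1.0 / (2 * Es_N0)        # noise variance per real dimension, Es = 1

y = np.linspace(-2, 2, 101)       # matched-filter outputs

# LLR from the Gaussian likelihoods directly: log p(y|+1) - log p(y|-1)
llr_exact = (-(y - 1) ** 2 + (y + 1) ** 2) / (2 * sigma2)

# The commonly quoted scaling: 4 * Es/N0 * y
llr_scaled = 4 * Es_N0 * y

print(np.max(np.abs(llr_exact - llr_scaled)))   # ~0: identical
```

The hard-input factors (1.6·Eb/N0, 2.6·Eb/N0, ...) have no such closed form; they appear to be empirical fits of the decoder's internal message scaling, which is why they depend on the code rate.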
• asked a question related to Communication & Signal Processing
Question
I tried checking the correlation between the two signals, but it is not helping much. Is there any standard method to find a match between two signals?
Computing and comparing the RMS error.
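A sketch of that suggestion, plus a normalized correlation coefficient, which is often a more interpretable match score because it is amplitude-invariant (Python/NumPy; the test sine and noise level are hypothetical):

```python
import numpy as np

def similarity(a, b):
    """Return (RMS error, normalized correlation) of two equal-length signals."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    rmse = np.sqrt(np.mean((a - b) ** 2))
    a0 = (a - a.mean()) / (np.linalg.norm(a - a.mean()) + 1e-12)
    b0 = (b - b.mean()) / (np.linalg.norm(b - b.mean()) + 1e-12)
    rho = float(np.dot(a0, b0))            # close to 1 means a good match
    return rmse, rho

t = np.linspace(0, 1, 500)
s1 = np.sin(2 * np.pi * 5 * t)
s2 = s1 + 0.1 * np.random.default_rng(0).standard_normal(500)

print(similarity(s1, s1))   # RMSE 0, correlation ≈ 1
print(similarity(s1, s2))   # small RMSE, correlation still close to 1
```

If the signals may be time-shifted, compute the full cross-correlation first and compare at its peak lag rather than sample-by-sample.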
• asked a question related to Communication & Signal Processing
Question
Hi,
I am attempting to simulate rayleigh fading in the amplitude of a signal by multiplying a transmitted signal with rayleigh distributed coefficients. I create the Rayleigh random variable using two gaussian random variables of zero mean and variance 1. The problem is that a lot of these coefficients give a value greater than 1, so this causes amplification of the signal. So I was wondering if I should lower the mean of the rayleigh distribution and if so, what would be a practical value?
Variance is defined over many realizations of a random variable, and an average computed over a long period should match the value specified in your code. Do an experiment: capture the complex Gaussian RV h = (1/sqrt(2)) * (N(0,1) + j N(0,1)) for t = 1, 10, 100, 1000, and 10000 trials, and compute mean(h) and var(h) for each t. You will see that the mean and variance converge to their specified values. In essence, instantaneous values can be greater than, equal to, or less than the specified value, but the average converges to the desired value. Note that the Rayleigh fading coefficient is abs(h).
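The convergence experiment described above, sketched in Python/NumPy: individual envelope values routinely exceed 1, but the average power converges to 1, so no rescaling is needed.

```python
import numpy as np

rng = np.random.default_rng(0)
for t in (10, 100, 1000, 10_000, 100_000):
    h = (rng.standard_normal(t) + 1j * rng.standard_normal(t)) / np.sqrt(2)
    print(t,
          np.mean(np.abs(h)),        # envelope mean -> sqrt(pi)/2 ≈ 0.886
          np.mean(np.abs(h) ** 2))   # average power  -> 1
```

So some coefficients amplify and some attenuate; only on average is the channel power-neutral, which is exactly what a unit-variance Rayleigh fading model is supposed to do.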
• asked a question related to Communication & Signal Processing
Question
Massive MIMO Communications,
Trinh Van Chien and Emil Björnson
It could be based on OFDM or some other multicarrier scheme. OFDMA usually refers to a system with only one user per subcarrier, while in massive MIMO all users use all subcarriers; the separation is in the spatial domain instead.
• asked a question related to Communication & Signal Processing
Question
One reviewer asked us to "prove that the resulting OFDM signals are chaotic signals". But we cannot: we think the OFDM signal is not a chaotic signal, yet we cannot give a detailed justification. Please help me.
OFDM is a multiplexing scheme: if the transmitted data series is chaotic, then the corresponding OFDM signal may have some chaotic properties. Without any restrictions on the modulated data, we cannot say that an OFDM signal is a chaotic signal. For example, if the data is periodic, then the OFDM signal is also periodic, which is obviously not chaotic.
• asked a question related to Communication & Signal Processing
Question
Basically I want to know more about the synchronization and timing of USRPs. I have gone through many online articles and the available information, and now want to acquire some practical knowledge.
If you want to synchronize two USRPs, you need to first perform certain calibrations before data transmission. Furthermore, when you are transmitting the data, there should be a preamble at the start of your transmit data which will be detected by the receiver to sense that data transmission has been initiated at the Tx.
Consider the example of a QPSK transmitter and receiver system using two USRPs in the following two links
In both of these models, above the main block diagrams there are three links in blue mentioning a "companion block", and above them a note saying "run the companion block first before running the main model". This means that before actually running the model you need to click on the blue links and run them, which performs frequency calibration between the Tx and Rx USRPs. When you run the example, you will see how to enter the IP addresses of the Tx and Rx USRPs to do the calibration.
For data synchronization, in the Tx model link, just look at the second block diagram that shows some data generation. In that, there is one "UNIPOLAR BARKER CODE" which is used as a preamble. This preamble is detected at the Rx to sense that data transmission is initiated at the Tx.
If you can understand the QPSK example mentioned in those links, I hope you will be able to understand the synchronization related issues.
Best of Luck..
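The preamble-detection step described above can be sketched as follows (Python/NumPy for illustration; a repeated Barker-13 preamble is used, and the 57-sample offset, payload length and noise levels are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)
preamble = np.tile(barker13, 2)              # repeated Barker-13 (26 chips)

payload = 2 * rng.integers(0, 2, 200) - 1.0  # random BPSK payload
offset = 57                                  # unknown frame start (hypothetical)
rx = np.concatenate([0.3 * rng.standard_normal(offset), preamble, payload])
rx = rx + 0.1 * rng.standard_normal(rx.size) # receiver noise

# Correlate the received samples against the known preamble;
# the correlation peak marks where the frame starts
corr = np.correlate(rx, preamble, mode='valid')
start = int(np.argmax(np.abs(corr)))
print(start)                                 # recovers the 57-sample offset
```

A real receiver would compare the peak against a threshold rather than take a global argmax, and would follow detection with frequency-offset and symbol-timing refinement, which is what the companion calibration models handle.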
• asked a question related to Communication & Signal Processing
Question
In a bayesian framework where observations are independent normally distributed with unknown mean and variance - is there a closed form expression for the conditional expectation of the marginal likelihood for series of observations where the expectation is conditioned on the parameter vector?
Intuitively, your proposal is right, but you need to state the question more clearly.
• asked a question related to Communication & Signal Processing
Question
Hi, currently I'm doing a project on bio-inspired signals as a new source waveform for radar applications. However, I face difficulty with detection: the typical time-domain correlation/matched-filter method does not seem to work with my signal. Is there any better option for the detection? Kindly advise.
Below are some properties of my signal that I'm working with:-
1. Very short (less than 3ms)
2. Signal pattern have its own unique properties (each signal contain unique frequency component, and can be found on both Transmit and Echo signal).
3. Wideband signal.
Before that, I tried a pitch-detection technique and it seems to give acceptable results. But I am just not very sure of the appropriate method for correlating the Tx and echo signals, since I am quite new to the signal processing field. If somebody could advise me on how to perform correlation in the frequency domain, with basic MATLAB code or an algorithm, that would be very good.
I work on stochastic resonance, which is able to utilize noise to detect weak signals and, moreover, is suitable for short data sets.
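On the explicit request for frequency-domain correlation: cross-correlating the echo with the transmitted pulse can be done by multiplying FFTs, which is equivalent to matched filtering and fast for short wideband pulses. A sketch (Python/NumPy for illustration; the chirp is a stand-in for the bio-inspired waveform, and the 120-sample delay, amplitudes and noise level are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100_000.0
t = np.arange(0, 0.003, 1 / fs)                 # 3 ms pulse (like the <3 ms signal)
tx = np.sin(2 * np.pi * (5e3 * t + 0.5 * 8e6 * t ** 2))   # 5-29 kHz sweep

delay = 120                                      # echo delay in samples (hypothetical)
echo = np.zeros(1000)
echo[delay:delay + tx.size] = 0.5 * tx           # attenuated, delayed copy
echo += 0.2 * rng.standard_normal(echo.size)     # receiver noise

# Frequency-domain cross-correlation (equivalent to the matched filter):
n = echo.size + tx.size - 1
xc = np.fft.irfft(np.fft.rfft(echo, n) * np.conj(np.fft.rfft(tx, n)), n)
est = int(np.argmax(xc[:echo.size]))             # lag of the correlation peak

print(est)                                       # recovers the 120-sample delay
```

The wideband (chirp-like) structure is what makes the correlation peak sharp; if the unique per-signal frequency components differ between pulses, use the actual transmitted pulse as the template for each echo.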
• asked a question related to Communication & Signal Processing
Question
In an LTE-Advanced network, the transmitted signal is a randomly generated bit sequence that goes through OFDM on its way to the receiver. Finally, I need to calculate RSRP and RSRQ using a reference signal. Is this reference signal the same as the initially generated transmitted signal?
In an LTE network, a UE measures these two parameters on the reference signal, which is transmitted within the OFDM symbols: OFDM symbols 0 and 4 in each slot carry the reference signals. The following URL gives a very good explanation of what you might be looking for.
• asked a question related to Communication & Signal Processing
Question
Hi, please can anyone tell me how to remove the ICI effect of oscillator phase noise from an OFDM signal? I already calculate and remove the Common Phase Error (CPE) and use linear interpolation (LI-CPE) to estimate the ICI due to the phase noise.
Dear Khalid & Lavish,
Many thanks for your replies. I already have many papers on this subject, and the following dissertation is excellent.
However, I face a problem in implementing the ICI estimation method, especially the linear-interpolation LI-CPE method mentioned in the paper that Lavish referenced.
• asked a question related to Communication & Signal Processing
Question
ECG feature extraction algorithm for mobile healthcare applications.
If you are speaking about the sampling characteristics when sampling ECG, then the first question to ask is "what problem are you solving?". Specifically, are you only interested in calculating HR, or do you intend to detect arrhythmias? If arrhythmias: life-threatening, atrial, ventricular? Do you want to work with people with pacemakers (single/dual)? The answers will drive your sampling rates, anywhere from 100 Hz to 500 Hz. They will also drive what type of bandpass filter you want to use. Real-life ECG does not look like it does on TV hospital shows: it is full of noise.
• asked a question related to Communication & Signal Processing
Question
Hi,
I am using a digital lock-in amplifier. Considering that a lock-in amplifier acts as a very, very narrow-band filter, should the frequency of the reference signal be equal, or approximately equal, to the frequency of the input signal?
thanks and regards
The reference frequency has to be the same as the frequency of the signal to be measured (although some lock-in amps also measure phase). If it is not the same, the output of the lock-in amp will be zero. See this description:
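This behaviour is easy to reproduce: multiply the input by quadrature references and low-pass filter (here simply average); only a reference at the signal frequency yields a nonzero output. A sketch (Python/NumPy; the 1 kHz tone, 0.01 amplitude and noise level are hypothetical):

```python
import numpy as np

fs = 100_000.0
t = np.arange(0, 1.0, 1 / fs)                 # 1 s of samples
f_sig, A = 1000.0, 0.01                       # weak input tone (hypothetical)
x = A * np.sin(2 * np.pi * f_sig * t)
x = x + 0.1 * np.random.default_rng(0).standard_normal(t.size)  # heavy noise

def lock_in(x, f_ref, t):
    # Multiply by quadrature references, then average (an ideal low-pass)
    i = np.mean(x * np.cos(2 * np.pi * f_ref * t))
    q = np.mean(x * np.sin(2 * np.pi * f_ref * t))
    return 2 * np.hypot(i, q)                 # recovered amplitude estimate

print(lock_in(x, 1000.0, t))   # close to 0.01: reference on frequency
print(lock_in(x, 1100.0, t))   # near zero: reference off frequency
```

The averaging time sets the equivalent bandwidth: the longer you integrate, the narrower the filter and the deeper the noise rejection, which is exactly why the reference must sit on the signal frequency.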
• asked a question related to Communication & Signal Processing
Question
To set up a speaker recognition system using the NIST 2004 dataset, I found the speaker indices of the test files "x???.sph" at: http://www.itl.nist.gov/iad/mig/tests/spk/2006/
To train the total variability matrix I need the speaker indices of the train data "t???.sph". Where can I find them?
Dear Amir,
No, I did not find the answer for NIST 2004.
I do not have NIST 2008; if you have it, perhaps we can help each other.
• asked a question related to Communication & Signal Processing
Question
In OFDM some of the subcarriers are zero-padded at the edges for oversampling. Say, if there are 256 IFFT bins, then only 128 are used. How can I relate this to the sampling theorem?
Dear Gokul,
The basic parameter of an OFDM system is the symbol time Tsy, which is related to the subcarrier frequency spacing deltaf. This time interval contains a number of samples equal to the total number of FFT points, covering all subcarriers whether they carry a value or are nulled.
Therefore, for 256 subcarriers one has 256 samples. In the frequency domain they cover a bandwidth = N*deltaf, and in the time domain the sampling time is Tsa = Tsy/N = 1/(N*deltaf), so the sampling frequency is fs = 1/Tsa = N*deltaf. The equality of the number of samples and the number of subcarriers is dictated by the equivalence of the signal in the discrete time and frequency domains.
Wish you success
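As a quick numerical illustration of these relations (the 15 kHz subcarrier spacing below is an assumed example value, not from the question):

```python
# Numeric check of the OFDM sampling relations above. N = 256 subcarriers
# with an assumed 15 kHz subcarrier spacing (as in LTE, for illustration).
N = 256
delta_f = 15e3                  # subcarrier spacing in Hz (assumed)

T_sym = 1 / delta_f             # useful OFDM symbol time (no cyclic prefix)
fs = N * delta_f                # sampling frequency = total occupied bandwidth
T_sa = T_sym / N                # sample period

assert abs(T_sa - 1 / fs) < 1e-18   # Tsa = Tsy/N = 1/(N*deltaf) = 1/fs

print(f"Tsym = {T_sym*1e6:.2f} us, fs = {fs/1e6:.3f} MHz, Tsa = {T_sa*1e9:.1f} ns")
```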
• asked a question related to Communication & Signal Processing
Question
I mean, there are certain things that can only be expressed with music? This language that use no words but notes. Aren't the words capable of express the same thing? There is a certain type of knowledge that admits no words but music instead?
Maximiliano,
''Why did humans create music?'' We didn't; we only added some more. Many animal species created music long before our primate ancestors. Why did biological evolution create it? It makes sense that some early animals had to detect other moving animals through the sound they emit while moving and walking. This type of sound contains a lot of rhythms associated with moving animal bodies. The purpose of the detection is also to infer the type of movement, the type of animal making it, its position and direction of trajectory, etc. How could this best be achieved? Through interpretation of the sound vibrations in the ears by the motor systems equipped for enacting such movement in the animal. So I assume that the origin of music is this interpretation of sound through the self-enaction of the motor system. At some point in evolution it became possible for some species to go further and to actually produce sound using the motor system in ways that were pleasing to the animals and could serve many social functions, such as mate selection through musical selection, or the coordination of motion into a kind of swimming dance, as for sea mammals, etc.
Music is a universal cultural element of Homo sapiens, and music shares many elements with human languages. Babies respond first to the musical part of language sounds, and it is likely that humans became humans through a form of collective singing and dancing. There is considerable empirical evidence for this hypothesis.
• asked a question related to Communication & Signal Processing
Question
Currently, I am working on the project named "Denoising of ECG Signals using Empirical Mode Decomposition". But I don't know anything about Empirical mode Decomposition and how it performs denoising operation. I want to write a research article on it.
Kindly guide me for this.
• asked a question related to Communication & Signal Processing
Question
For sensing a real-time signal using SDR, what is the procedure for setting the threshold value? In most documents it is described as a trial-and-error method. Is it the average of the power? But then, if the noise power is higher or lower, it leads to mis-detection and false alarms. Can anyone suggest a way to set the threshold to detect a real-time signal using SDR?
Mr. Kishore,
The threshold setting depends on the spectrum sensing technique to be used.
The first thing you must do is to estimate the signal power when the channel is idle (white space). Assuming that the noise is zero mean, and follows a Gaussian distribution, the sensing threshold (in case of the energy detection) depends on the false alarm probability (Pfa) you need by means of the Marcum-Q function (or the Gamma cumulative distribution function). The desired Pfa determines the appropriate threshold value, according to some noise variance (noise power).
Important: if your application is real-time, the sensing time to use is an important parameter. You must select a number of samples to do signal processing, without decreasing so much the overall throughput of your system. This is a fundamental tradeoff.
I recommend to take a look to the paper attached for more details. It's a performance comparison between spectrum sensing techniques. The corresponding equations for threshold setting are given for different techniques.
You can especially use equation 10, where \gamma is the threshold value and N is the number of samples (sensing time). Note that the test statistic used is the normalized energy (see equation 9), for which you need to estimate the noise variance (noise power in white spaces).
Note that in equation 10, the threshold value must be computed from the INVERSE Gamma cumulative distribution function. In MATLAB this can be done using the function "gaminv(P,A,B)", where P is the Pfa, A = N/2 and B = 2/N.
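A Python sketch of the same computation using SciPy in place of MATLAB's gaminv. Note that depending on the paper's convention the quantile may be stated in terms of Pfa or 1 - Pfa; since a false alarm is an upper-tail event on the energy statistic, the (1 - Pfa) quantile is used below. The sample count and Pfa are illustrative:

```python
# Python equivalent of the MATLAB gaminv() call above, using SciPy.
# The normalized energy T = (1/N) * sum(x[n]^2 / sigma^2) under noise-only
# follows a Gamma(N/2, scale=2/N) distribution, so the threshold for a
# target false-alarm probability Pfa is its (1 - Pfa) quantile.
import numpy as np
from scipy.stats import gamma

def energy_threshold(pfa, n_samples):
    """Threshold on the normalized energy statistic for a given Pfa."""
    return gamma.ppf(1.0 - pfa, a=n_samples / 2, scale=2.0 / n_samples)

# Quick Monte Carlo sanity check with unit-variance Gaussian noise.
rng = np.random.default_rng(0)
N, pfa = 100, 0.05
thr = energy_threshold(pfa, N)
stats = np.mean(rng.standard_normal((20000, N)) ** 2, axis=1)
print(thr, np.mean(stats > thr))   # empirical rate should be near 0.05
```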
I hope this helps.
Best regards,
Luis M Gato.
• asked a question related to Communication & Signal Processing
Question
The coexistence of wireless devices (WDs) represents an important challenge. As far as I understand, the coexistence concept considers at least two aspects - spectrum (frequency) & coverage area - and I include another one: time. With these three 'parameters' I can describe coexistence in a broad sense, but I do not know if my picture is complete, so I would like to know your view:
-How do you define Coexistence of WDs?
-What parameter must I consider to define it?
Thanks!
For me, the coexistence of different wireless devices is the presence of one in the activity region of the other. It has a sense of neighborhood. From the conceptual point of view, coexistence is possible when they do not interfere with each other. So, every receiver must not receive interference greater than its threshold value, both in band and out of band. The other transmitter must therefore operate at a power, distance, frequency, time and code such that its received power at the intended receiver is smaller than that threshold. If the direction is also known, it can be used to control the interference between a foreign transmitter and the intended receiver. This is space division.
So there are the known division methods to prevent interference: frequency, time and code division, space division, and power-level control to limit the interference at a given receiver.
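The threshold condition described above can be sketched numerically. Free-space propagation is assumed, and all numeric values (power, frequency, distance, threshold) are made-up illustrations:

```python
# A minimal sketch of the coexistence condition above: a foreign
# transmitter can coexist with a receiver if its received power at that
# receiver stays below the receiver's interference threshold.
# Free-space path loss is assumed; all numbers are illustrative.
import math

def received_dbm(tx_power_dbm, freq_hz, distance_m):
    """Received power under free-space path loss (isotropic antennas)."""
    fspl_db = 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55
    return tx_power_dbm - fspl_db

threshold_dbm = -90.0              # assumed interference threshold
p_rx = received_dbm(tx_power_dbm=20.0, freq_hz=2.4e9, distance_m=5000.0)
print(p_rx, p_rx < threshold_dbm)  # coexistence holds if True
```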
Best wishes
• asked a question related to Communication & Signal Processing
Question
For my research I need to use 3 proximity sensors at the tool tip of a 3-DoF RRR robot arm, aligned to the (X3,Y3,Z3) frame. How do I convert these sensor readings to base-frame coordinates (X,Y,Z)? What about the Euler angles method?
Dear Dr. Mohan Chandra , thanks lot for the valuable information.
• asked a question related to Communication & Signal Processing
Question
Please advise on this. I am trying the square-law method, and I am also looking into the Hilbert transform.
Many different options here. How much noise is there? Is the noise in-band or out-of-band? How fast does the frequency of the 3-6 Hz vibration change? How fast does the 0.33 Hz envelope change, both in amplitude and in frequency?
The following should work with either analog or digital signal processing:
Band pass filter the vibration signal to reject DC and low frequencies below 3 Hz and frequencies higher than 6 Hz.
Extract the amplitude of the vibration envelope.  Options include precision rectification followed by a low pass filter or peak-follower circuit with appropriate (e.g. ~0.5 s) decay time.
Multiply the envelope signal by a 0.33 Hz cosine wave and by a 0.33 Hz sine wave, and low pass filter each product to derive separate in-phase and quadrature signals.  The time constant of the filter determines the detection bandwidth.  Square and add the in-phase and quadrature signals and take the square root of the result to extract the amplitude of the 0.33 Hz envelope modulation.  For analog processing, the multiplication, square and square root operations can be implemented using 4-quadrant multiplier integrated circuits.
Alternatively, if the frequency and phase of the 0.33 Hz changes reasonably slowly, apply a phase-locked loop to extract a stable 0.33 Hz reference signal, and use this for synchronous (lock-in) detection of the 0.33 Hz envelope signal.  This can offer lower noise than the RMS detection scheme outlined in the preceding paragraph.
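A rough digital sketch of the rectify-and-demodulate scheme described above (rectification, moving-average low-pass, then quadrature demodulation at the envelope frequency). All signal parameters are made-up test values, and the simple moving average stands in for the low-pass filters mentioned:

```python
# Rectify-and-low-pass to get the vibration envelope, then multiply by
# quadrature 0.33 Hz references and average to extract the modulation
# depth. All signal parameters are illustrative assumptions.
import numpy as np

fs = 100.0                        # sample rate, Hz
t = np.arange(0, 30, 1 / fs)      # 30 s record = 10 periods of 1/3 Hz
f_vib, f_env, m = 4.5, 1 / 3, 0.5
x = (1 + m * np.cos(2 * np.pi * f_env * t)) * np.sin(2 * np.pi * f_vib * t)

# Envelope: full-wave rectification + moving-average low-pass filter
# (window of ~one vibration period; a rectified sine has mean 2/pi,
# hence the pi/2 scale factor).
win = np.ones(22) / 22
env = np.convolve(np.abs(x), win, mode='same') * (np.pi / 2)

# Quadrature demodulation at the envelope frequency.
i = np.mean(env * np.cos(2 * np.pi * f_env * t))
q = np.mean(env * np.sin(2 * np.pi * f_env * t))
m_est = 2 * np.hypot(i, q)        # amplitude of the 0.33 Hz modulation
print(m_est)                      # close to the true depth m = 0.5
```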
• asked a question related to Communication & Signal Processing
Question
Hello everyone,
I am trying to evaluate the performance of FSO communication system and compare it to RF system under foggy weather. I selected Nakagami-m as a distribution for the RF signal. To do this comparison, I need the proper values of Nakagami-m under fog.
Can anyone help with the Nakagami-m fading parameters for the RF signal?
Dear Maged,
The effect of fog on the transmission coefficient of electromagnetic waves has been measured, and the data are given in this link: https://www.rand.org/content/dam/rand/pubs/reports/2006/R1694.pdf
These data can be used to calculate the model parameters of the wireless channel.
In the case of no multipath, the data can be used directly to calculate the path loss. In the case of multipath, the impulse response of the channel will be a sum of impulses randomly delayed from the first received impulse. The multipath delay model depends on the terrain of the propagation environment, and you may adopt the pattern suiting your environment. I think the fog will affect the strength of the copies received from the different paths. Therefore you can adopt a model similar to your multipath one and modify the attenuation values of its different copies to suit the fog propagation medium.
This is just a proposal to proceed in the solution of the problem.
Best wishes
• asked a question related to Communication & Signal Processing
Question
"Electric signal processes at light speed. If we can represent a large prime number in binary and cryptography algorithms are applied in the form of light, then processing speed will be faster". Please suggest some idea whether this type of implementation is possible or not.
Usually, if light is used in communications, it is not used for processing. It's used for the transmission medium. Processing is done in silicon.
The advantage of light in the transmission medium is not propagation speed, but rather bandwidth. So your Shannon capacity of the medium will typically be larger than RF media. Greater capacity is often referred to as "faster," because it can transfer more bits/second.
Any form of encryption or coding should also be applicable for use in light transmissions, I would think. Ultimately, if you use laser instead of incoherent light, you should also be able to use RF-like modulation schemes. Otherwise, with incoherent light, you will typically be limited to using "intensity modulation," and what they call wavelength division multiplexing (which is frequency division multiplexing).
Okay, so what do you mean by "processing speed will be faster"? Are you considering that the processors are also optical? So that their theoretically broader bandwidth will increase processing speeds? Perhaps. Here's a start, to see what some issues are and what some arguments for and against might be.
• asked a question related to Communication & Signal Processing
Question
wireless channel, fading channels, channels, spectrum sensing
Nakagami channels are used when the received signal has contributions from both diffuse and specular scattering, i.e., the electric field is the sum of a strong component (which is not necessarily line of sight) and several contributions with less amplitude. The m parameter relates the amplitudes of strong and weak components. Rayleigh fading is obtained when m=1. The Nakagami model is in general very similar to a Rician channel, but its pdf has a closed form expression which is simpler to evaluate numerically (it does not contain Bessel functions) and fits better some measurements. There is also an equivalence between the K parameter in the Rician distribution and the m parameter in Nakagami. For a very brief introduction to multipath channels you can see Andrea Goldsmith's book:
Or if you need something more specific about Nakagami fading, you can check this physical model:
Why is Nakagami-m fading channel a good in practice in place of Rayleigh fading ? - ResearchGate. Available from: https://www.researchgate.net/post/Why_is_Nakagami-m_fading_channel_a_good_in_practice_in_place_of_Rayleigh_fading [accessed Jan 5, 2016].
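As a sketch of working with this distribution in simulation: Nakagami-m envelope samples can be generated as the square root of gamma-distributed power samples, and m = 1 reduces to Rayleigh fading as stated above. The parameter values are arbitrary examples:

```python
# Generating Nakagami-m fading amplitudes: if R is Nakagami-m with
# spread Omega, then R^2 is Gamma(shape=m, scale=Omega/m).
import numpy as np

def nakagami_samples(m, omega, size, rng):
    """Nakagami-m envelope samples via the gamma distribution."""
    return np.sqrt(rng.gamma(shape=m, scale=omega / m, size=size))

rng = np.random.default_rng(1)
r = nakagami_samples(m=2.0, omega=1.0, size=200000, rng=rng)
print(np.mean(r ** 2))   # mean power should be close to Omega = 1.0
```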
• asked a question related to Communication & Signal Processing
Question
How to use filter coefficients of FIR filter in optisystem without coupler?
Use fork in Tools box.
• asked a question related to Communication & Signal Processing
Question
How can a phase digitizer be used to sample the in-phase and quadrature outputs from the I/Q detector to obtain the discrete instantaneous phase?
The topic is an instantaneous-frequency deception jammer.
You can convert from Cartesian to Polar coordinates by using the CORDIC algorithm.
It is widely used in digital environments as an efficient tool for such a conversion process. For detailed information please refer to the link: http://www.mathworks.com/help/fixedpoint/ug/convert-cartesian-to-polar-using-cordic-vectoring-kernel.html
wish you the best.
• asked a question related to Communication & Signal Processing
Question
In AF relay systems, two phases, each of duration T/2, are needed to complete the whole communication. The information rate can then be expressed as R = (1/2)log(1+SNR).
In my communication system, the source communicates with the destination,
the total communication time is T, and it needs two steps to complete the whole transmission. The first step takes T1 time, the second step (destination receives information signals) takes T2 time respectively, and T1+T2=T.
Could I express the destination received information rate as
R=(T2/T)log(1+SNR)  ?
Yes, provided T2 represents the total information transmission duration over the entire period T.
• asked a question related to Communication & Signal Processing
Question
For a linear ZF or MMSE detector, the coefficient of the soft output is the symbol power over the symbol interference power plus the white noise power.
For a non-linear ZF-SIC or MMSE-SIC detector, the coefficient of the soft output is the symbol power over the symbol interference power plus not only the white noise power but also the error propagation power of the decision feedback.
Does anyone know how to calculate this error propagation power of the decision feedback and obtain a correct LLR value?
Many thanks
Jiajun Zhu
Calculating LLRs requires generating a set of vector symbol candidates. Once you have a set of candidates, the LLRs can be computed the same way a sphere decoder does it. How do you generate such a set? As a starting point, I'd suggest you try the following approach:
1. Calculate the ZF/MMSE equalized vector symbol.
2. Before performing the hard decision, find the K closest constellation vectors to your ZF/MMSE equalized vector symbol.
3. Then you will have K+1 candidates from which you can compute the LLRs. (K obviously depends on the size of the constellation set.)
Hope this helps.
• asked a question related to Communication & Signal Processing
Question
The question is about an imaginary cubic R^3 point-coordinate representation in which each core point (monad) is a basis for a graph whose orientation can be defined using two angles (phi and psi).
This could have the advantage of being able to integrate functions based on tangents within the two angles defining a single line in the ordinary coordinate system, and could solve problems like send and return values in communication systems (VSWR - r).
Once implemented - which is a very difficult quest - there could be a way to simplify functionality that can be integrated separately, couldn't there?
The two-angle representation in cubes of R^3 could then be used for magnetic and electric waves, or for example for building a function to calculate the evolution velocity of point-set mathematical functions for the numerical sciences.
Using the Taylor series with the Bernoulli numbers for tangential calculations, a set of points or mathematical elements could be parallelized and the precision could be adapted fluently!
Sorry, I do not have the expertise to answer this question.
• asked a question related to Communication & Signal Processing
Question
I'm planning to use less time-consuming programming languages/libraries for signal processing. As far as I know it is possible to use Python or IT++ (DSP libraries based on C++). I've already used IT++ for signal processing; however, I'd like to try something different. Could anyone recommend which one is better - Python, IT++ or a different one?
Python (with the scipy/numpy packages) is by far your best bet.
• asked a question related to Communication & Signal Processing
Question
For normalized auto-correlation, we normalize the sequence so that the auto-correlation at zero lag is identically 1.0.
So I want to know how it works in the case of cross-correlation.
Hello,
Assume you would like to calculate the normalised cross correlation of two sequences, x(n) and y(n), of length N. Then
Normalised_CrossCorr = (1/N) * sum{ [x(n) - mean(x)] * [y(n) - mean(y)] } / sqrt(var(x)*var(y))
where
> the sum is taken from n = 1 to N.
> mean(x) is the mean of x.
> var(x) is the variance of x.
> sqrt is the square root.
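The formula above, written out in Python and checked against NumPy's Pearson correlation for two example sequences (the data values are arbitrary):

```python
# Zero-lag normalized cross-correlation, following the formula above.
import numpy as np

def norm_crosscorr(x, y):
    """Zero-lag normalized cross-correlation of two equal-length sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    num = np.sum((x - x.mean()) * (y - y.mean())) / n
    return num / np.sqrt(x.var() * y.var())   # np.var uses the same 1/N convention

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.1, 5.9, 8.2, 10.0])
print(norm_crosscorr(x, y), np.corrcoef(x, y)[0, 1])   # both near +1
```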
• asked a question related to Communication & Signal Processing
Question
I can't distinguish between the main operation of the matched filter and the correlation receiver.
I think you are talking about the minimum Euclidean distance receiver. The two receivers you mentioned, the correlation receiver and the matched filter, are two implementations of the minimum Euclidean distance receiver; they are also sometimes known as the ML receiver. The outputs of the two implementations will be the same at a particular sample, say t = (n+1)Ts. If only the value at the sampling instant is of interest, then the matched filter and the correlator give the same result, as has already been noted. But the intermediate results are quite different.
• asked a question related to Communication & Signal Processing
Question
I need a simple understanding of how to calculate the LCL parameters for a filter placed between my single-phase inverter and the utility grid.
cheers
If you want to design the LCL filter for a shunt-APF, the attached paper would be of help.
Good luck
• asked a question related to Communication & Signal Processing
Question
I am doing some simple detection of voice. Say I have 10 samples of person A's voice, 1, 2, ..., 10 (10 samples of A uttering, say, "Hello"). Now I want to design a simple system which can differentiate this voice from the rest of the voices.
So I chose a simple method, where I take A's first voice sample and correlate it with the rest of his voice samples 2, 3, ..., 10. I can see that the correlation coefficient comes out to be, say, 100, 105, ..., 99 respectively for the samples.
Now when I correlate A's first sample with any other person's voice saying "hello", I expect to get a lower correlation value, say 50, 60, etc.
Now my questions are:
1. How can I create a classifier in this case? I mean, I can set a threshold of say 98 manually, but how do I do it using a classifier? Please send me any MATLAB code.
2. I am taking A's first sample as a base. Is this correct, or is it a bad idea to select one sample and correlate it with the rest?
If there is any other better solution to detect based on correlation, then please let me know.
My question is not about voice detection techniques but rather about designing a classifier.
I'm assuming by correlation, you mean the Pearson coefficient, http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient
If that's the case, what you're assembling is a series of dot products between your new sample vector, and your collection of old sample vectors. Say you have new vector v, and labelled voice data vectors (x_i, y_i). If you think about the dual form of the training optimization procedure, a linear classifier -- the logistic regression, or the support vector machine -- is going to also be making its decisions based on a weighted average of  \sum_i \alpha_i x_i^Tv, where the precise values of \alpha will be set by the classifier's training algorithm as a function of how much it thinks the dot product of v with x_i is in terms of making new predictions. (Maybe your seventh recording of "HELLO" is garbled somehow; a good linear classifier can effectively learn to ignore that example.)
Your correlation approach is basically one option that a linear classifier (with an intercept term) is going to consider when it's performing its training procedure. Take a look at MATLAB's glmfit: http://www.mathworks.com/help/stats/glmfit.html It can help you produce these kinds of linear classifiers.
• asked a question related to Communication & Signal Processing
Question
.
The main factor determining the impact of a channel (e.g. time-dispersive / doubly dispersive / optical channel) to an FBMC signal is the prototype filter of the FBMC Tx itself. Based on the Ambiguity Function of the filter, you can calculate the interference pattern in the time frequency lattice (TFL) after the signal passed through the channel. A good basic overview on these descriptions is e.g. given in the thesis of Jinfeng Du (http://www.ict.kth.se/publications/publications/2008/TRITA-ICT-COS-0803.pdf).
When we talk about receiver processing, there are several types of equalization algorithms and implementations, whose performance strongly interacts with other parts of your system model. Can you give more details about the channel model (dispersion, frequency-dependency etc.) of the channel you are considering?
• asked a question related to Communication & Signal Processing
Question
how to evaluate the lateral filters with minimal value of maximum ripple in their stop bands.
Please see the attached PDF, which is based on the earlier PDF. Hope it is useful.
Cheers
• asked a question related to Communication & Signal Processing
Question
I am trying to use the corr2 function of MATLAB to find the correlation coefficient between two time series so that I can measure the similarity between the two signals.
But I fail to do the same when the signals are of different lengths and are time-shifted.
DTW is a technique which I could use, but I want to know how to compare the signals using only correlation in this case.
I think you can follow this method:
Suppose size(B)<size(A)
1. c = xcorr(A,B);     this gives you the cross-correlation between A and B (here B is zero-padded to reach the size of A).
2. [YMax,XMax] = max(c);   this gives you the delay (XMax) for which the correlation between A and B is maximal (YMax).
3. You identify the parts of the signals A and B which are involved in the calculation of YMax and discard all the other samples (for both A and B).
4. Now you have two new signals A_hat and B_hat with size <= size(B); you apply the corr2 function and you have your correlation coefficient.
Regards
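The steps above can be sketched in Python, using np.correlate in place of MATLAB's xcorr (the test signals are synthetic stand-ins for the two time series):

```python
# Find the delay maximizing the cross-correlation, then compare only the
# overlapping parts at that delay, as in the method above.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(200)
b = a[50:150].copy()                       # b is a delayed, shorter piece of a

c = np.correlate(a, b, mode='full')        # cross-correlation at all lags
lag = int(np.argmax(c)) - (len(b) - 1)     # lag of the correlation peak
print(lag)                                 # recovers the 50-sample delay

# Steps 3-4: keep only the overlapping parts at that lag and correlate.
a_hat = a[lag:lag + len(b)]
print(np.corrcoef(a_hat, b)[0, 1])         # essentially 1 for this noise-free case
```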
• asked a question related to Communication & Signal Processing
Question
Can someone please clarify the following questions I have in compressed sensing.
1. Compressed sensing says that we need not acquire the entire signal X of dimension Nx1; instead we can take only a few measurements y of dimension Mx1, where M << N, and use them to reconstruct the signal X.
But we write the equation as y = A*X, where A is an MxN measurement matrix.
Looking at this equation, I feel that in order to get y (the M measurements, i.e., the compressed signal) we need to have the complete signal X, which implies that we need to measure X completely anyway. So how is compressed sensing useful in saying that we can compress directly while the data is being acquired?
2. If all we require is very few measurements M, which can be obtained by correlating only a few rows of A with X, why do we need X to be a sparse vector, or compressible in some other domain?
3. Does compressed sensing reduce the number of sensors required to sample the data? Or does it only reduce the storage and processing equipment & time required to compress the data after it is sampled from the sensors?
Thanks a lot for clarifying my confusion!
Nalin, the idea behind compressed sensing is that we are sensing at a compressed rate (below the Nyquist rate). It is different from "sense then compress" as in JPEG and others. Suppose one needs 20 sensors in traditional sensing to sense a signal x of length 20. If x is sparse, having, say, 2 non-zero elements whose locations are unknown, then on the order of k*log(N) sensors - here roughly 2*log(20), about 3 - will suffice instead. Further, if you know the locations of those non-zero elements (2 in this example), then only 2 sensors will do the job.
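A small numerical illustration of the last point above: if the locations of the 2 non-zero entries of a length-20 signal are known, a handful of random measurements suffice to recover it by least squares. (Recovering unknown locations requires a sparse solver such as OMP or basis pursuit.) The signal, support, and measurement matrix below are made up:

```python
# Recovering a 2-sparse length-20 signal from only 3 random measurements,
# assuming the non-zero locations are known.
import numpy as np

rng = np.random.default_rng(3)
N, M = 20, 3                      # 3 sensors instead of 20
x = np.zeros(N); x[[4, 13]] = [1.5, -2.0]     # 2-sparse signal
A = rng.standard_normal((M, N))   # random measurement matrix
y = A @ x                         # only M = 3 measurements of x

support = [4, 13]                 # assumed known non-zero locations
coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
x_hat = np.zeros(N); x_hat[support] = coef
print(np.allclose(x_hat, x))      # exact recovery from 3 measurements
```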
• asked a question related to Communication & Signal Processing
Question
The reference I've read indicates that the number of training frames should be greater than or equal to the number of transmit antennas. Here, the training frames are S = [s1, s2, s3, ..., sN], where si is an nTx*1 vector and N is the total number of training frames.
When I use the least-squares estimate LS = S' * inv( S * S' ) for channel estimation, I find that MATLAB sometimes displays "Warning: Matrix is singular to working precision." and the estimate is then incorrect. If I increase the training length, the number of such cases declines.
So, what is the problem in this case? How can I determine the training length needed to avoid this problem in practice?
Based on the required accuracy of the channel estimation, optimal training sequences of minimum length are determined, given by
Np ≥ NT (L + 1) + L
where Np is the number of training symbols per transmit antenna and per frame.
The training sequences strongly affect the channel capacity, and using the optimal training sequence length improves both the MSE and the capacity.
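The singularity warning in the question can be reproduced numerically: with fewer training vectors than transmit antennas, S*S' is rank deficient, so its inverse does not exist. The antenna and frame counts below are arbitrary examples:

```python
# With N < nTx training vectors, S*S' (nTx x nTx) is rank deficient and
# the LS estimate S' * inv(S*S') breaks down; with N >= nTx generic
# (random) training vectors it is invertible.
import numpy as np

rng = np.random.default_rng(0)
n_tx = 4
S_short = rng.standard_normal((n_tx, 3))   # N = 3 < n_tx training vectors
S_long = rng.standard_normal((n_tx, 8))    # N = 8 >= n_tx training vectors

print(np.linalg.matrix_rank(S_short @ S_short.T))   # rank 3: singular
print(np.linalg.matrix_rank(S_long @ S_long.T))     # rank 4: invertible
```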
• asked a question related to Communication & Signal Processing
Question
The two standards seem to compete for the TV White Space (TVWS) because they differ at nearly the whole protocol stack. For example, the transmission power for IEEE 802.22 is 4 W (36 dBm), which is almost 80% greater (in dBm terms) than the 100 mW (20 dBm) transmission power cap for IEEE 802.11af. There are also differences in reception sensitivity, which is -97 dBm for the former and -64 dBm for the latter, as well as differences at the MAC. This heterogeneity could be problematic if the two standards compete for the TVWS.
Interesting question. The two are targeting different use scenarios, though. IEEE 802.22 is to provide wireless broadband over a metro area (so-called "regional area nets"), whereas IEEE 802.11 nets are local, shared LAN hotspots, hopefully much shorter range.
So, my thinking is, to the extent that any of these white space schemes can be shown to work well, there's no reason to believe that 802.11af and 802.22 cannot coexist peacefully. I would think that you can pack more 802.11af nets than 802.22 nets over the same RF channel, in a given area, because co-channel interference will be less pronounced with the lower power WiFi than with the RANs. In either case, it will be a combination of spectrum sensing and geo-location database that will determine whether or not the space is "white." And the presence of an 802.22 RAN will be felt much further away, for other systems wanting to use that channel.
To me, 802.22 is more of a tall stick, long(ish) range, TV-like standard than 802.11af.
• asked a question related to Communication & Signal Processing
Question
With a training sequence, we can use the known symbols to estimate the channel and then cancel the ISI. However, in reality there is not only a frequency-selective channel but also a carrier frequency offset.
In that case, channel estimation does not work in my simulation when I introduce a carrier frequency offset. Probably I need to cancel the carrier frequency offset first to ensure reliable channel estimation.
However, the phase has been distorted by the frequency-selective channel, so the usual methods for estimating the frequency offset do not work.
How can I cancel the carrier frequency offset before channel estimation?
Otherwise, how can I combine carrier frequency offset estimation and channel estimation at the same time?
Many thanks
Sorry, I forgot to answer the question about channel estimation. The short answer is that it is most practical to remove the Doppler shift before processing the received signal.
As you are aware, a well-designed testing (training) signal must be sent over the channel and properly received and analyzed at the receiver. The receiver must have an exact replica of the transmitted signal. This is equivalent to measuring the impulse response of the channel, although we often use the Fourier transform equivalent and do the analysis in the frequency domain, i.e., use the transfer function. The Doppler shift affects all parts of the signal spectrum, but if we send a signal that has a carrier component, we can correct for the Doppler shift by comparing the received carrier frequency with the local receiver clock.
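One standard way (not from the answer above, but a common textbook technique known as Moose's method) to estimate the frequency offset before channel estimation is to transmit the same training block twice: an unknown static multipath channel rotates both halves identically, so the phase drift between them depends only on the offset. All parameter values below are made-up:

```python
# Estimating a carrier frequency offset from a repeated training block.
# The static multipath channel cancels in the half-to-half correlation,
# so the offset can be removed before the channel is estimated.
import numpy as np

rng = np.random.default_rng(7)
fs, L = 1e6, 256                   # sample rate and training-block length
cfo = 1234.0                       # true carrier frequency offset, Hz

block = rng.standard_normal(L) + 1j * rng.standard_normal(L)
tx = np.tile(block, 2)             # the same training block, sent twice
h = np.array([0.9, 0.4j, -0.2])    # unknown static multipath channel
rx = np.convolve(tx, h)[:2 * L]
n = np.arange(2 * L)
rx = rx * np.exp(2j * np.pi * cfo * n / fs)   # apply the frequency offset

# Phase of the correlation between the two halves gives the offset
# (unambiguous for |cfo| < fs / (2L)).
corr = np.sum(np.conj(rx[:L]) * rx[L:2 * L])
cfo_hat = np.angle(corr) * fs / (2 * np.pi * L)
print(cfo_hat)                     # close to the true 1234 Hz offset
```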
• asked a question related to Communication & Signal Processing
Question
What is the difference between periodicity in the time domain and periodicity in the spatial domain? When we say that the cosine and sine functions have a period of 2*pi, what does it mean - are we referring to the time domain or the space domain?
In the time domain the signal repeats itself over time. In the spatial domain, it repeats itself with position (say in the 'x' direction). In mathematical terms, just replace t with x. The period will not be in seconds but in metres, and the frequency will not be in Hz (1/s) but in 'per metre', 1/m. Simple!
• asked a question related to Communication & Signal Processing
Question
Dear all,
I want to estimate a PSD of the primary user (PU) using consensus algorithm.
Do you have an idea of how to adapt a consensus algorithm for a power spectrum density (PSD) estimation??
Thanks
BR
Could you please explain more why?
• asked a question related to Communication & Signal Processing
Question
I am using a simulink model to simulate a dynamic system and want to optimize the system during each time step and then go to next step. Is this possible when using optimization algorithm written in m files? Is there a way to update state variables in simulink after each optimization step while using optimization algorithm in matlab for a time series input data?
Hi,
You can always run any Simulink file from your .m file (or from your optimization algorithm written in a .m file) by simply using the following commands:
simopt = simset('solver','<the solver you want to use>','SrcWorkspace','base'); % 'SrcWorkspace' lets variables be read from the base workspace
sim('<the Simulink file name goes here>',[0 0.1],simopt); % [0 0.1] is the simulation time span, which you can change.
If you want to update the optimised parameter in Simulink at every step (let's say we're optimising a gain "kp" in the Simulink file), you can add the following command to your .m file so that the block parameter ('Gain' in this case) is updated with the current optimised value of kp at every step:
set_param('<the Simulink file name goes here>/kp','Gain',num2str(kp)); % block parameter values are set as strings
I'm not sure if this is what you are after. Hope it helps.
• asked a question related to Communication & Signal Processing
Question
I am wondering what the Hilbert transform pair does in order to create carrierless amplitude and phase (CAP) modulation?
I will try to answer this question in a very non-scientific manner.
Imagine a spiral whisk (see enclosed picture). If you hold the spiral part perpendicular to your line of sight and look through it, ignoring the 3D aspect of the object, you will see a modulated sine signal (a 2D projection); let's call it x(t).
The Hilbert transform shifts the 2D projected signal x(t) by 90 degrees in phase to create a signal y(t) = hilbert[x(t)], so that if you combine the original front projection x(t) and its Hilbert transform y(t) you get something we call the analytic signal z(t) = x(t) + j y(t). The analytic signal is, in our case, exactly the spiral whisk.
In terms of mathematics, the front projection of the whisk x(t) is the real part, the top projection y(t) would be the imaginary part, and the whole 3D spiral whisk is the analytic signal z(t), so z(t) = x(t) + j y(t).
Now, obviously, once you have the analytic signal (the spiral whisk), it is easy to extract the envelope (the absolute value) and the carrier (the phase, which can be modulated).
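In code terms, the "whisk" picture maps directly onto `scipy.signal.hilbert`, which returns the analytic signal z(t) = x(t) + j·y(t); the envelope and the (modulatable) phase then fall out as magnitude and angle. A small Python sketch with an assumed 100 Hz carrier and 3 Hz amplitude modulation:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
envelope_true = 1.0 + 0.5 * np.cos(2 * np.pi * 3.0 * t)  # slow 3 Hz modulation
x = envelope_true * np.cos(2 * np.pi * 100.0 * t)        # x(t): front projection

z = hilbert(x)                  # analytic signal z(t) = x(t) + j*y(t)
y = z.imag                      # Hilbert transform of x: 90-degree phase shift
envelope = np.abs(z)            # the "whisk" radius: instantaneous amplitude
phase = np.unwrap(np.angle(z))  # instantaneous phase of the carrier
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency, Hz
```

With this choice of frequencies the signal is exactly periodic over the window, so the recovered envelope matches the true 3 Hz modulation and the instantaneous frequency sits at the 100 Hz carrier.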
• asked a question related to Communication & Signal Processing
Question
Is it correct to proceed as in "http://www.dsplog.com/2008/08/10/ber-bpsk-rayleigh-channel/" (though that example is for flat fading)?
Dear Zeynab,
The multipath wireless channel is characterized by a coherence bandwidth Bc, which is by definition the inverse of the delay spread. That is, Bc = 1/(Tn-1 - T0), where Tn-1 is the largest delay and T0 is the smallest delay of the signal arriving at the receiver. If the bandwidth of the signal transmitted across this channel is smaller than Bc, you will observe flat fading; frequency-selective fading is observed when the transmitted signal bandwidth is greater than Bc. I would therefore propose modelling the transmitter and the receiver in MATLAB/Simulink and connecting them through a Rayleigh fading channel. Set Bc of the channel and vary the bandwidth of the transmitted signal by decreasing the symbol time. Depending on the above condition you will then observe either flat or frequency-selective fading.
Thank you for your interesting question.
wish you success.
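The classification rule above can be sketched in a few lines of Python (the power delay profile values are invented for illustration):

```python
import numpy as np

# Hypothetical power delay profile: path delays (s) and linear power gains.
delays = np.array([0.0, 0.5e-6, 1.2e-6, 3.0e-6])  # T0 ... Tn-1
powers = np.array([1.0, 0.6, 0.3, 0.1])

# Rule of thumb from the answer: Bc = 1 / (Tn-1 - T0).
delay_spread = delays.max() - delays.min()         # 3 microseconds
Bc = 1.0 / delay_spread                            # ~333 kHz

def fading_type(signal_bw_hz):
    """Classify fading for a given transmitted signal bandwidth."""
    return "flat" if signal_bw_hz < Bc else "frequency-selective"

# The symbol time sets the bandwidth: shrinking Ts widens the spectrum.
print(fading_type(50e3))   # narrowband signal  -> flat
print(fading_type(5e6))    # wideband signal    -> frequency-selective
```

This mirrors the suggested Simulink experiment: keep the channel's delay spread fixed and sweep the symbol time to cross the Bc boundary.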
• asked a question related to Communication & Signal Processing
Question
When the sampling frequency is twice the system (hardware) bandwidth, the sampled noise is almost white. In most cases, the hardware bandwidth equals the bandwidth of the signals from a channel.
Correlation-based detectors sometimes require oversampling to guarantee a correlated primary signal. Nevertheless, oversampling also causes the noise to be colored, which is undesirable in these kinds of detectors. How could I still obtain white noise after oversampling?
I know that the sampling rate can influence the quantization noise. Its spectrum can be strongly correlated with the input signal (e.g., low-amplitude periodic signals). The correlation depends on the ratio between the sampling frequency and the signal frequency. Solutions: fine-tuning the sampling frequency, or adding a small amount of white noise (dither) to the signal.
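A quick numerical illustration of why oversampling colors band-limited noise (normalized units; modelling the receiver front end as a simple FIR lowpass is an assumption): noise band-limited to B is essentially uncorrelated when sampled at the Nyquist rate 2B, but adjacent samples become strongly correlated at 8B.

```python
import numpy as np
from scipy.signal import firwin, lfilter

rng = np.random.default_rng(1)
fs = 8.0       # sampling rate, normalized so the hardware bandwidth B = 1
B = 1.0        # one-sided hardware bandwidth
n = 200_000

# Broadband white noise, band-limited to B by the hardware front end
# (modelled here as a 129-tap FIR lowpass).
w = rng.normal(size=n)
h = firwin(129, B / (fs / 2))   # cutoff at B, normalized to Nyquist
v = lfilter(h, 1.0, w)          # band-limited noise, sampled at fs = 8B

def rho1(x):
    """Normalized autocorrelation at lag 1."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# Sampled at the Nyquist rate 2B (every 4th sample): nearly uncorrelated.
rho_nyquist = rho1(v[::4])      # ~0   -> white
# Oversampled at 8B: adjacent samples strongly correlated -> colored.
rho_oversampled = rho1(v)       # ~0.9 -> colored
```

For an ideal lowpass the autocorrelation follows sinc(2Bτ), which is zero exactly at the Nyquist-rate sample spacing; this is why decimating back to 2B (or applying a whitening filter) restores white noise.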
• asked a question related to Communication & Signal Processing
Question
I'm currently working on designing a communication system for THz frequencies for different digital modulation techniques. Can anyone please tell me how can I get more information about "Advanced Design System" software related to my project?
You can check the Agilent (now Keysight) website for more details.
• asked a question related to Communication & Signal Processing
Question
Through my readings, I found out that it is possible to carry both data and energy through a fiber-optic cable. My goal is to deliver data and electricity (converted from the light energy at the receiving end) through the same channel, that channel being the fiber-optic cable.
The core of a fibre is very tiny. Therefore, increasing the power may raise the power density in the core to a level where the material shows non-linear effects, such as stimulated Brillouin scattering. This, in turn, causes unwanted frequencies to propagate in the fibre, with the consequence of unwanted cross-modulation terms in the photodiode at the receiver.
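A back-of-the-envelope calculation shows how quickly the intensity grows in a typical single-mode core (the 1 W launch power and 9 µm core diameter are assumed round numbers):

```python
import math

# Even modest optical power gives an enormous intensity in a single-mode core.
power_w = 1.0                   # 1 W launched into the fibre (assumed)
core_diameter_m = 9e-6          # typical single-mode core, ~9 micrometres
area_m2 = math.pi * (core_diameter_m / 2) ** 2

intensity = power_w / area_m2   # W/m^2
print(f"{intensity:.3e} W/m^2") # ~1.6e10 W/m^2, i.e. about 1.6 MW/cm^2
```

Intensities of this order readily exceed the thresholds for stimulated scattering effects in long fibres, which is why power-over-fiber systems stay at low powers or use larger-core (multimode) fibre.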
• asked a question related to Communication & Signal Processing
Question
I am using an FIR filter with LMS to achieve adaptive noise cancellation, and hence I need to choose the step size of the algorithm. How should I approach finding the step size?
Thanks
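A widely used rule of thumb (a sketch, not the only criterion) bounds the LMS step size by 0 < mu < 2/(L*Px), where L is the number of filter taps and Px is the average power of the reference input; in practice one picks a small fraction of that bound and verifies convergence. A minimal Python noise-cancellation sketch, with an invented noise path `h_true` from the reference to the primary input:

```python
import numpy as np

rng = np.random.default_rng(2)
n, L = 20_000, 8                      # number of samples, filter taps

s = np.sin(2 * np.pi * 0.05 * np.arange(n))   # signal of interest
ref = rng.normal(size=n)                      # reference noise input
h_true = np.array([0.8, -0.4, 0.2, 0.1, 0.05, 0.0, 0.0, 0.0])  # assumed path
noise = np.convolve(ref, h_true)[:n]          # noise reaching the primary input
d = s + noise                                 # primary input: signal + noise

# Step-size rule of thumb: a small fraction of the stability bound 2/(L*Px).
Px = np.mean(ref ** 2)                        # reference input power
mu = 0.05 * 2.0 / (L * Px)

w = np.zeros(L)
out = np.zeros(n)
for k in range(L, n):
    x = ref[k - L + 1:k + 1][::-1]   # most recent L reference samples
    y = w @ x                        # adaptive filter output: noise estimate
    e = d[k] - y                     # error = cleaned-signal estimate
    w = w + mu * e * x               # LMS weight update
    out[k] = e

# After convergence, w approaches h_true and `out` approaches s.
```

A larger mu converges faster but leaves more residual (excess) error; a smaller mu does the opposite, which is the usual trade-off when tuning the step size.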