Communication & Signal Processing - Science topic
Questions related to Communication & Signal Processing
They say in the comments and documentation that it is implemented using a "Direct Form II Transposed" realization.
But for FIR filters (i.e. when a = 1) I suspect it uses an fft() to do the convolution in the frequency domain when it is faster to do so.
This would make sense because the procedure operates on a batch of data, not a sequential stream.
I don't suppose it really matters if the result is the same. I was just wondering if anybody knew for sure (for sizing and timing reasons).
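Not an authoritative answer on MATLAB's internals, but the equivalence of the two realizations is easy to check numerically. A small sketch using SciPy's lfilter (which implements the Direct Form II Transposed structure) against FFT-based convolution; the filter taps and data are arbitrary:

```python
import numpy as np
from scipy.signal import lfilter, fftconvolve

rng = np.random.default_rng(0)
b = rng.standard_normal(32)    # FIR taps (a = 1)
x = rng.standard_normal(1000)  # a batch of input data

# Direct Form II Transposed realization.
y_direct = lfilter(b, 1.0, x)

# FFT-based convolution, truncated to the same output length as the input.
y_fft = fftconvolve(x, b)[:len(x)]

# The two realizations agree to numerical precision.
print(np.max(np.abs(y_direct - y_fft)))
```

So for sizing and timing purposes, either realization gives the same answer; only the operation count differs.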
Hello, I need to find the amplitude of the FFT of a real signal in Matlab. I would like to get the same amplitude in the frequency domain (with fft) and in the time domain.
I've read about some methods:
1) Division by N: amplitude = abs(fft(signal))/N, where N is the signal length;
2) Multiplication by 2: amplitude = 2*abs(fft(signal))/N;
3) Division by N/2: amplitude = abs(fft(signal))/(N/2);
4) Others follow Parseval's theorem: amplitude = abs(fft(signal))/factor, where "factor" equals 1/fs (fs being the sampling frequency).
Can anybody help me?
Thank you
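For a tone that falls exactly on an FFT bin, method 2 (divide by N, then double every bin except DC and Nyquist) recovers the time-domain amplitude. A minimal check sketched in Python/NumPy (MATLAB's fft scales the same way; the signal parameters here are made up):

```python
import numpy as np

# Hypothetical test signal: amplitude-2 cosine sampled at fs = 1 kHz.
fs = 1000.0
N = 1000
t = np.arange(N) / fs
A, f0 = 2.0, 50.0
signal = A * np.cos(2 * np.pi * f0 * t)

spectrum = np.abs(np.fft.fft(signal)) / N   # method 1: divide by N
# For a real signal the energy splits between +f0 and -f0, so the
# single-sided amplitude needs the extra factor of 2 (do not double
# the DC bin or the Nyquist bin, which have no mirror image).
single_sided = 2 * spectrum[:N // 2]
k = int(f0 * N / fs)                        # bin index of f0
print(single_sided[k])                      # the time-domain amplitude A
```

If the tone does not fall exactly on a bin, spectral leakage spreads the energy and no simple per-bin scaling recovers the amplitude exactly; windowing with the appropriate amplitude-correction factor is then needed.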
Hi!
I have two expressions in the time domain; one is simple, the other is complicated and convoluted. It is too difficult to prove they are equal in the time domain.
However, the Fourier transforms (frequency domain) of the two expressions are identical. Can I then claim with 100% confidence that the two time-domain expressions are indeed equal?
Colleagues, good day.
I have a task:
- add path loss and fading to the channel model between the station antennas and the telephone antennas, in the context of a 5G network, MIMO technology and the NLOS scenario;
-add the dependence between resource blocks in time and frequency.
After reading several articles, I still have questions.
Could you please correct me and give advice?
Now we have a simple channel model Y = H * X, where H is the complex channel matrix.
It is necessary to add here the fading and path loss for the NLOS scenario.
I went through many articles and chose the following:
1) documentation winner2 https://www.cept.org/files/8339/winner2%20-%20final%20report.pdf
2) quadriga documentation from here link after 4. DISCLAIMER https://quadriga-channel-model.de/#Download
3) article "Channel estimation and channel tracking for correlated block-fading channels in massive MIMO systems" https://www.sciencedirect.com/science/article/pii/S2352864816301614?via%3Dihub#!
and watched a few questions from this site.
From winner2 [1] I plan to take the formula for path loss ((4.23) page 43), the scenario of experiment B1 (page 14 table 2-1, page 17), and the constants for the formula in accordance with this scenario (page 44 B1).
Question 1. Do I understand correctly that, as indicated in [4a] by Emil Björnson, it is enough to convert the formula (path loss + fading) to get variance = 10^((-pathloss + fading)/10), and then multiply the channel matrix by sqrt(variance/2)?
Question 2. The WINNER documentation [1] recommends taking a log-normally distributed random fading value, while [3] and [4a, b] propose either Gaussian or Rayleigh. Could you please comment on which one I should choose?
Question 3. Article [3] is literally the only place where the correlation between resource blocks and their influence on each other is mentioned (chapter 2, first paragraph, and formula (2)). I did not fully understand how to reproduce this dependence in the context of my channel model. Have you seen such dependencies? Can you share links to sources where I could see and use them?
Thank you very much in advance.
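Regarding Question 1, the scaling step described can be sketched as follows (Python; the path-loss and shadowing numbers below are made up for illustration and are not the WINNER B1 constants, which must come from the formula and tables cited above):

```python
import numpy as np

rng = np.random.default_rng(1)
n_rx, n_tx = 4, 4

# Small-scale fading: i.i.d. CN(0, 1) entries (Rayleigh magnitudes).
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

# Hypothetical large-scale terms in dB (real values come from the
# WINNER B1 path-loss formula plus a log-normal shadowing draw).
pathloss_dB = 100.0
shadowing_dB = rng.normal(0.0, 8.0)

# Combine into a linear variance and scale the channel matrix.
variance = 10 ** ((-pathloss_dB + shadowing_dB) / 10)
H_scaled = np.sqrt(variance) * H

# The average channel gain now equals the large-scale variance.
print(np.mean(np.abs(H_scaled) ** 2))
```

Note on the factor 1/2: if the raw Gaussians randn + 1j*randn are used directly (each component of variance 1), multiplying by sqrt(variance/2) is equivalent to the two-step scaling above, because the 1/2 splits the variance between the real and imaginary parts.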
Hello RG!
I want to know: is there any journal related to communications and signal processing that accepts papers containing only a problem formulation? I mean papers that describe some serious problems related to a system and provide a small solution, so that others can refer to those problems and propose solutions.
If there are papers like that, please point me to them.
Any suggestion is highly appreciated.
How can one implement adaptive dictionary reconstruction for compressed sensing of ECG signals, and how can one analyze the overall power consumption of the proposed ECG compression framework as it would be used in a WBAN?
Flying cars are now a reality. They are still not affordable in many respects, but the future of flying cars cannot be underestimated. They have huge potential to give flying wings to an individual; everybody could fly like a bird.
The question is: what are the possible pros and cons?
And what are the major challenges that need to be addressed to make flying cars a reality for the common man?
Can anyone suggest Scopus-indexed journals with a good impact factor in wireless communication or signal processing, with a short review time, fast publication, and no or low publication fees?
I want to display a signal waveform passed through an AWGN channel, so I followed these block diagrams, referenced this website, and finished the program.
At the transmitter, I sent a bit stream of 1s and 0s and modulated it into a BPSK signal. When the signal was received, I used coherent detection to demodulate it. To verify my program (it seems to be right from the waveform: after modulating 1 and 0, the phases of the two waveforms are shifted by 180° relative to each other), I calculated the BER. The BER should vary with the SNR. However, when I set SNR = 1 dB, the BER is still 0, which doesn't make sense. I don't know why the result is like that. Who can help me check? Thanks.
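A common cause of a BER of exactly 0 at low SNR is too few transmitted bits, or a noise amplitude that is not matched to the stated SNR. A minimal self-contained BPSK-over-AWGN check (Python here rather than the poster's code; bit count and seed are arbitrary) that reproduces the theoretical BER Q(sqrt(2*Eb/N0)):

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)
n_bits = 200_000
ebn0_dB = 1.0
ebn0 = 10 ** (ebn0_dB / 10)

bits = rng.integers(0, 2, n_bits)
symbols = 2 * bits - 1                       # BPSK mapping: 0 -> -1, 1 -> +1 (Eb = 1)

# Real AWGN with variance N0/2 per dimension.
noise = rng.standard_normal(n_bits) * np.sqrt(1 / (2 * ebn0))
received = symbols + noise

ber = np.mean((received > 0).astype(int) != bits)
ber_theory = 0.5 * erfc(sqrt(ebn0))          # Q(sqrt(2*Eb/N0)) ≈ 0.056 at 1 dB
print(ber, ber_theory)
```

At 1 dB the theoretical BER is about 5.6%, so with only a few hundred bits a measured BER of 0 is quite possible; a nonzero SNR-dependent BER needs enough bits for several errors to occur.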
I understand that the purpose of using an equalizer is to shorten the impulse response of a channel. In most examples I have seen so far, equalization is done in the Z-domain. Now, I have an ADSL channel response from 1 Hz to 1.1 MHz. How can I convert this frequency response into the corresponding Z-transform response? In short, how can I design a MATLAB equalizer for this kind of channel?
I have been trying to figure out how to generate a time-series dataset with a feedforward neural network.
Let's say I want to simulate a time-series dataset of length n and order m using a feedforward neural network, by the following steps:
1- First, I randomly initialize m values for the weights, biases and data, using a uniform distribution with specific parameters (mean and variance).
2- Second, I calculate the n elements of the time series using the feedforward neural network function.
3- Third, I randomly generate white noise using a normal distribution with specific parameters (mean and variance).
4- Then I add the generated white noise to the n elements of time-series data calculated in the second step.
I have found a resource on adding white noise to a series, and I see that we should first generate all n time-series values completely and only then add white noise to each element. So when we generate the first element (the m+1-th element) we should not add white noise to it; the next elements m+2, m+3, m+4, ... are then generated by the feedforward neural network function without the effect of the white noise on elements m+1, m+2, m+3, ...
As I see it, data generation with a feedforward neural network by the above steps leads to the same constant value when m is small (equal to 1, 2 or 3) and tends to zero when m is large (m = 6, 7, ..., 14). My data series without white noise is a constant line for small values of m and zero for large values of m! And finally, after adding white noise, my simulated series looks the same as the generated white noise.
What am I missing in simulating a time-series dataset with a feedforward neural network?
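The steps above can be sketched as follows (Python; the network size, weight range, and noise level are arbitrary choices for illustration). Note that a purely deterministic recursion through a fixed net will often settle into a fixed point, which matches the constant/zero series described; feeding the noisy outputs back into the recursion (a nonlinear AR model) is one way to avoid the collapse:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, hidden = 3, 500, 8          # order, series length, hidden units (assumed)

# Step 1: random weights, biases, and m initial values (uniform draws).
W1 = rng.uniform(-1, 1, (hidden, m))
b1 = rng.uniform(-1, 1, hidden)
W2 = rng.uniform(-1, 1, hidden)
b2 = rng.uniform(-1, 1)
x = list(rng.uniform(-1, 1, m))

# Step 2: recurse each new sample through the feedforward net.
for _ in range(n - m):
    h = np.tanh(W1 @ np.array(x[-m:]) + b1)
    x.append(float(W2 @ h + b2))
x = np.array(x)

# Steps 3-4: add white Gaussian noise to the finished noiseless series.
series = x + rng.normal(0.0, 0.1, n)
print(series.shape)
```

If the noiseless recursion converges to a constant, the series before noise is exactly the net's fixed point, and the final output is indeed just that constant plus white noise, which is the behavior described in the question.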
I got a high value of the Q-factor for an SIW filter. Is this value correct for these structures?
Thanks in advance.
I use the following code to simulate a QPSK signal through a 3-path Rayleigh channel:
signal_r=2*(rand([Ls,1])>0.5)-1;
signal_i=2*(rand([Ls,1])>0.5)-1;
QPSK=signal_r+1i*signal_i;
Es=((QPSK)' *(QPSK) ) / Ls;
N0=Es/10^(SNR_dB/10);
h=sqrt(P/2)*(randn(1,3)+1i*randn(1,3));
fading=conv(QPSK , h);
noise=sqrt(N0/4)*( randn(length(fading),1)+1i*randn(length(fading),1) );
received = fading+noise;
Is it right?
Why do I need to multiply (randn(1,3)+1i*randn(1,3)) by sqrt(P/2)? Is the denominator always 2? How do I define P?
Is h called the impulse response of the Rayleigh channel? Are the 3 paths also called 3 taps?
Thank you for your answer.
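Not from the original thread, but the usual convention is to choose P so that the total tap power is normalized, e.g. E[sum |h_k|^2] = 1, and the 1/2 splits each tap's power between the real and imaginary parts of the complex Gaussian. A Python sketch of the same chain with that normalization (the tap count, SNR, and equal-power profile are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
Ls, n_taps, snr_dB = 10_000, 3, 10

# QPSK symbols (Es = 2 with this +/-1 mapping).
qpsk = (2 * rng.integers(0, 2, Ls) - 1) + 1j * (2 * rng.integers(0, 2, Ls) - 1)
Es = np.mean(np.abs(qpsk) ** 2)
N0 = Es / 10 ** (snr_dB / 10)

# P = 1/n_taps gives E[sum |h_k|^2] = 1 (equal-power taps); the /2 puts
# half of each tap's power in the real part and half in the imaginary part.
P = 1 / n_taps
h = np.sqrt(P / 2) * (rng.standard_normal(n_taps) + 1j * rng.standard_normal(n_taps))

faded = np.convolve(qpsk, h)
# Complex AWGN of total variance N0: N0/2 per real dimension.
noise = np.sqrt(N0 / 2) * (rng.standard_normal(faded.size)
                           + 1j * rng.standard_normal(faded.size))
received = faded + noise
print(np.sum(np.abs(h) ** 2))   # close to 1 on average
```

On the terminology questions: yes, h is the (discrete-time) impulse response of the Rayleigh fading channel, and the 3 paths are commonly called 3 taps. Also note that with the usual convention the complex noise scaling is sqrt(N0/2) per component rather than sqrt(N0/4); the latter gives total noise power N0/2.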
Hi everyone, I'm working on arithmetic compression. I am trying to compress the ITU channel of the international standard. With my algorithm I get a compression ratio of 0.86 if I take ITU channel A with OFDM modulation and 128 subcarriers.
NB: I send the exponential form of the complex numbers, i.e. the angles, which lie between 0 and 2*pi.
My question, then, is: would anyone have an idea for building a sufficiently powerful arithmetic-coding algorithm? Mine gives results that do not satisfy me.
Hi all,
I am trying to simulate a very simple wireless communication system. I give a description of the system below.
1. The wireless communication system has one transmit antenna and one receive antenna.
2. The wireless channel has multipath (in the simulation I have used static 4-tap channel coefficients just for simplicity).
3. I am adding AWGN noise at the receiver corresponding to the SNR value (the SNR varies from 0 dB to 10 dB; the signal power is assumed to be 1).
4. In the receiver I am using maximum-likelihood sequence estimation (I am using the comm.MLSEEqualizer object in MATLAB to implement the MLSE).
Now the problem is that when I simulate the program, the BER curve remains flat between 0.1 and 0.01 even when I increase the SNR to 50 dB.
I really cannot understand what mistake I have made. Can someone please help me solve this problem?
I have attached my matlab program and BER graph for your reference.
Regards,
Balaji.D
Wireless communication, statistical analysis, signal processing
I have measured data from a vector network analyzer. Now I need to analyze the data by doing a Fourier transformation and an inverse transformation.
Since I am new to the MATLAB environment, I get stuck every time without any result.
Can someone help me, please? They are small formulas.
My doubt is about the dimension of the subspace when a signal is being oversampled. I would like to 'visualize' an example of this key idea of blind calibration. The original text follows:
"Assume that the sensor network is slightly oversampling the phenomenon being sensed. Mathematically, this means that the calibrated snapshot x lies in a lower dimensional subspace of n-dimensional Euclidean space.
Let S denote this “signal subspace” and assume that it is r-dimensional, for some integer 0<r<n. For example, if the signal being measured is bandlimited and the sensors are spaced closer than required by the Shannon-Nyquist sampling rate, then x will lie in a lower dimensional subspace spanned by frequency basis vectors. If we oversample (relative to Shannon-Nyquist) by a factor of 2, then r =n/2. "
Does anyone have an OOK noncoherent demodulation simulation in MATLAB that compares the results to the theoretical formula?
I am implementing an OOK noncoherent demodulation simulation in MATLAB, e.g., generating a time-domain signal, adding noise, etc.
My results (symbol error rate vs Es/N0) show some bias compared to the theoretical formula.
For a complex signal x to be transmitted over a channel with complex impulse response h, x will be convolved with h. What mathematical operation is done in the receiver to cancel the effect of h? I am confused because I have a code in which x is multiplied by h in the transmitter, and in the receiver the received signal is multiplied by the conjugate of h and divided by |h|^2.
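For a flat (single-tap) channel, multiplying by conj(h)/|h|^2 is exactly division by h, i.e. zero-forcing equalization; that is why the code multiplies rather than convolves in the transmitter. For a true multipath channel the convolution must be undone by deconvolution/equalization instead, often done per subcarrier in OFDM, where each subcarrier again sees a single complex gain. A minimal illustration of the flat case (Python; the symbol values and channel tap are made up):

```python
import numpy as np

rng = np.random.default_rng(5)
x = (2 * rng.integers(0, 2, 8) - 1) + 0j       # hypothetical BPSK symbols
h = 0.7 * np.exp(1j * 0.9)                     # single complex channel tap

y = h * x                                      # flat channel: a multiply, not a convolution
x_hat = y * np.conj(h) / np.abs(h) ** 2        # equals y / h exactly

print(np.max(np.abs(x_hat - x)))               # ~0: channel effect cancelled
```

Multiplying by conj(h) alone corrects the phase; the division by |h|^2 also undoes the amplitude scaling.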
I am looking for a coding/error-correction scheme that I can apply after QAM modulation, not on the binary data before QAM modulation. I'll be grateful for any pointers.
How can we ensure that a symbol can be decoded in a time slice in NOMA?
I am working on the implementation and investigation of DVB-S2 LDPC codes.
Normally the output of the matched filter, after the channel and before the LDPC decoder, is scaled by a factor of 4*Eb/No (in the case of soft inputs).
I am using BPSK modulation and the noise being added is AWGN.
In the case of hard inputs the scaling factor deviates from 4*Eb/No: for a code rate of 9/10 the optimum scaling factor we found was 1.6*Eb/No, and for a code rate of 1/2 it was 2.6*Eb/No.
Can someone explain why there is a difference in this scaling factor? Is there a theoretical method to calculate it? Is it a code-defined parameter?
Any help would be greatly appreciated.
I tried checking the correlation between the two signals, but it is not helping much. Is there any standard method to find a match between two signals?
Hi,
I am attempting to simulate Rayleigh fading in the amplitude of a signal by multiplying a transmitted signal by Rayleigh-distributed coefficients. I create the Rayleigh random variable from two Gaussian random variables with zero mean and variance 1. The problem is that many of these coefficients are greater than 1, which amplifies the signal. So I was wondering whether I should lower the mean of the Rayleigh distribution and, if so, what would be a practical value?
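One common convention, sketched below, is not to lower the mean directly but to scale the underlying Gaussians so that the fading has unit average power, E[h^2] = 1: each component gets variance 1/2 instead of 1. Individual coefficients still occasionally exceed 1, which is physical (constructive multipath), but on average the channel neither amplifies nor attenuates:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000

# Each Gaussian component has variance 1/2, so E[h^2] = 1/2 + 1/2 = 1.
sigma = np.sqrt(0.5)
h = np.abs(sigma * rng.standard_normal(n) + 1j * sigma * rng.standard_normal(n))

print(np.mean(h ** 2))   # ≈ 1.0: unit average power
print(np.mean(h))        # ≈ sigma * sqrt(pi/2) ≈ 0.886
```

With variance-1 components (as in the question) the average power is 2, which explains the systematic amplification observed.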
Massive MIMO Communications,
Trinh Van Chien and Emil Björnson
One reviewer required that we "prove that the resulting OFDM signals are chaotic signals". But we cannot. We think the OFDM signal is not a chaotic signal, but we cannot give a detailed illustration. Please help me.
In a Bayesian framework where the observations are independent and normally distributed with unknown mean and variance, is there a closed-form expression for the conditional expectation of the marginal likelihood of a series of observations, where the expectation is conditioned on the parameter vector?
Hi, I am currently doing a project using a bio-inspired signal as a new source waveform for radar applications. However, I face difficulty with detection: the typical time-domain correlation/matched-filter method does not seem to work with my signal. Is there any better option for performing the detection? Kindly advise.
Below are some properties of my signal that I'm working with:-
1. Very short (less than 3ms)
2. The signal pattern has its own unique properties (each signal contains a unique frequency component, which can be found in both the transmit and echo signals).
3. Wideband signal.
Before that, I tried a pitch-detection technique, and it seems to give acceptable results. But I am just not sure of the appropriate method for correlating the Tx and echo signals, since I am quite new to the signal processing field. If somebody could advise me on how to perform correlation in the frequency domain, with basic MATLAB code or an algorithm, that would be very good.
Thanks in advance.
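Correlation in the frequency domain can be done by multiplying one FFT by the conjugate of the other and inverse-transforming; this is equivalent to (and much faster than) sliding time-domain correlation. A basic sketch in Python rather than MATLAB (the pulse length, echo delay, and noise level are made up; in MATLAB, fft/ifft play the same roles):

```python
import numpy as np

rng = np.random.default_rng(7)
n, delay = 2048, 300

tx = rng.standard_normal(512)                  # hypothetical short wideband pulse
echo = np.zeros(n)
echo[delay:delay + tx.size] = 0.5 * tx         # delayed, attenuated copy
echo += 0.05 * rng.standard_normal(n)          # receiver noise

# Cross-correlation via FFT: ifft(FFT(echo) * conj(FFT(tx))).
# Zero-pad to at least n + len(tx) - 1 to avoid circular wrap-around.
nfft = int(2 ** np.ceil(np.log2(n + tx.size)))
X = np.fft.fft(echo, nfft)
Y = np.fft.fft(tx, nfft)
xcorr = np.real(np.fft.ifft(X * np.conj(Y)))

print(int(np.argmax(xcorr)))                   # peak lag = echo delay in samples
```

For very short wideband pulses, normalizing the cross-spectrum magnitude before the inverse FFT (the GCC-PHAT variant) often sharpens the peak and makes the delay estimate more robust.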
In an LTE-Advanced network, suppose the transmitted signal is a randomly generated bit sequence that goes through OFDM on its way to the receiver. Finally, I need to calculate RSRP and RSRQ using a reference signal. Is this reference signal the same as the initially generated transmitted signal?
Hi, can anyone please tell me how to remove the ICI effects of the oscillator's phase noise from an OFDM signal? I have already calculated and removed the common phase error (CPE), and I am using linear interpolation (LI-CPE) to estimate the ICI due to the phase noise.
ECG feature extraction algorithm for mobile healthcare applications.
Hi,
I am using a digital lock-in amplifier. Considering that lock-in amplifiers act as very narrow band filters, should the frequency of the reference signal be equal, or approximately equal, to the frequency of the input signal?
thanks and regards
To set up a speaker recognition system using the NIST 2004 dataset, I found the speaker indices of the test files "x???.sph" at this address: http://www.itl.nist.gov/iad/mig/tests/spk/2006/
To train the total-variability matrix I need the speaker indices of the training data "t???.sph". Where can I find them?
Please help me.
Thanks in advance
In OFDM, some of the subcarriers at the edges are zero-padded for oversampling. Say there are 256 IFFT bins; then only 128 are used. How can I relate this to the sampling theorem?
I mean, are there certain things that can only be expressed with music, this language that uses no words but notes? Aren't words capable of expressing the same thing? Is there a certain type of knowledge that admits no words, but music instead?
Currently, I am working on a project named "Denoising of ECG Signals using Empirical Mode Decomposition", but I don't know anything about empirical mode decomposition or how it performs denoising. I want to write a research article on it.
Kindly guide me on this.
For sensing a real-time signal using an SDR, what procedure should be followed to set the threshold value? In most documents a trial-and-error method is mentioned. Should it be the average of the power? But then, if the noise power is higher or lower, it leads to missed detections and false alarms. Can anyone suggest a way to set the threshold for detecting a real-time signal using an SDR?
The coexistence of wireless devices (WDs) represents an important challenge. As far as I understand, the coexistence concept involves at least two considerations - spectrum (frequency) and coverage area - and I would include another one: time. With these three 'parameters' I can describe coexistence in a broad sense, but I do not know whether my picture is complete, so I would like to know your ideas:
-How do you define Coexistence of WDs?
-What parameters must I consider to define it?
Your comments are important to me.
Thanks!
For my research I need to use 3 proximity sensors at the tool tip of a 3-DoF RRR robot arm, aligned to the (X3, Y3, Z3) frame. How can I convert these sensor readings into base-frame coordinates (X, Y, Z)? What about the Euler angles method?
Please suggest something on this; I am trying the square-law method, and I am also looking into the Hilbert transform.
Hello everyone,
I am trying to evaluate the performance of an FSO communication system and compare it to an RF system under foggy weather. I selected the Nakagami-m distribution for the RF signal. To make this comparison, I need the proper values of the Nakagami-m parameters under fog.
Can anyone help with the Nakagami-m fading parameters for the RF signal?
"Electric signals process at light speed. If we can represent a large prime number in binary and apply cryptography algorithms in the form of light, then the processing speed will be faster." Please suggest some ideas on whether this type of implementation is possible or not.
wireless channel, fading channels, channels, spectrum sensing
How can I use the filter coefficients of an FIR filter in OptiSystem without a coupler?
The use of a phase digitizer to sample the in-phase and quadrature outputs from the I/Q detector, in order to obtain the discrete instantaneous phase.
The topic: instantaneous frequency deception jammer.
In AF relay systems, two phases are needed to complete the whole communication: T/2 and T/2. The information rate can then be expressed as R = (1/2)log(1+SNR).
In my communication system, the source communicates with the destination; the total communication time is T, and two steps are needed to complete the whole transmission. The first step takes time T1 and the second step (in which the destination receives the information signals) takes time T2, with T1 + T2 = T.
Could I express the information rate received at the destination as
R = (T2/T)log(1+SNR)?
For a linear ZF or MMSE detector, the coefficient of the soft output is the symbol power over the symbol-interference power plus the white-noise power.
For a non-linear ZF-SIC or MMSE-SIC detector, the coefficient of the soft output is the symbol power over the symbol-interference power plus not only the white-noise power but also the error-propagation power of the decision feedback.
Does anyone know how to calculate this error-propagation power of the decision feedback, so as to get a correct LLR value?
Many thanks
Jiajun Zhu
I'm planning to use less time-consuming programming languages/libraries for signal processing. As far as I know, it is possible to use Python or IT++ (a DSP library based on C++). I've already used IT++ for signal processing; however, I'd like to try something different. Could anyone recommend which one is better: Python, IT++, or a different one?
For normalized autocorrelation, we normalize the sequence so that the autocorrelation at zero lag is identically 1.0.
So I want to know: how is this done in the case of cross-correlation?
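The usual convention divides the cross-correlation by the square root of the product of the two zero-lag autocorrelations, sqrt(r_xx(0) * r_yy(0)), so the result lies in [-1, 1] and reaches 1 at some lag only if one signal is a positively scaled, shifted copy of the other. A quick sketch (Python; the test signals are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.standard_normal(200)
y = 3.0 * x                                   # scaled copy: perfectly correlated

r_xy = np.correlate(x, y, mode='full')
norm = np.sqrt(np.dot(x, x) * np.dot(y, y))   # sqrt(r_xx(0) * r_yy(0))
r_xy_norm = r_xy / norm

print(np.max(r_xy_norm))                      # 1.0 for a scaled copy
```

Unlike the autocorrelation case, the normalized cross-correlation generally does not equal 1 at zero lag; its peak value measures the similarity and its peak position measures the relative shift.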
I can't distinguish between the main operation of the matched filter and the correlation receiver.
I need a simple explanation of how to calculate the LCL parameters for a filter that sits between my single-phase inverter and the utility grid.
cheers
I am doing some simple voice detection. Say I have 10 samples of person A's voice: 1, 2, ..., 10 (10 samples of A uttering, say, "Hello"). Now I want to design a simple system that can differentiate this voice from the rest of the voices.
So I chose a simple method where I take A's first voice sample and correlate it with the rest of his voice samples 2, 3, ..., 10. I can see that the correlation values come out to be, say, 100, 105, ..., 99 respectively for the samples.
Now, when I correlate A's first sample with any other person saying "hello", I expect to get a lower correlation value, say 50 or 60.
Now my questions are:
1. How can I create a classifier for this case? I mean, I can set a threshold of, say, 98 manually, but how do I do it using a classifier? Please send me any MATLAB code.
2. I am taking A's first sample as a base. Is this correct, or is it a bad idea to select one sample and correlate it with the rest?
If there is any other better correlation-based solution for detection, please let me know.
My question is not about voice-detection techniques but rather about designing a classifier.
How can I evaluate lateral filters with a minimal value of maximum ripple in their stop bands?
I am trying to use MATLAB's corr2 function to find the correlation coefficient between two time series so that I can measure the similarity between the two signals.
But I fail to do so when the signals are of different lengths and are time-shifted.
DTW is a technique I could use, but I want to know how to compare the signals using only correlation in this case.
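corr2 expects equal-sized arrays, so for 1-D series of different lengths one usually computes the full sliding cross-correlation, normalizes each alignment, and takes the peak value as the similarity (the peak position then gives the time shift). A minimal sketch (Python; MATLAB's xcorr plays the role of np.correlate, and the signal lengths and shift are made up):

```python
import numpy as np

rng = np.random.default_rng(9)
long_sig = rng.standard_normal(500)
shift = 120
short_sig = long_sig[shift:shift + 200]        # shorter, time-shifted excerpt

# Sliding cross-correlation handles unequal lengths directly.
r = np.correlate(long_sig, short_sig, mode='valid')
# Normalize each alignment by the local energies (a simple NCC variant).
energy = np.array([np.dot(long_sig[k:k + 200], long_sig[k:k + 200])
                   for k in range(r.size)])
ncc = r / np.sqrt(energy * np.dot(short_sig, short_sig))

lag = int(np.argmax(ncc))
print(lag, ncc[lag])                           # recovered shift, similarity ≈ 1
```

This handles pure time shifts; if the two series also differ in time scale (stretching or compression), correlation alone is not enough, which is where DTW becomes the better tool.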
Can someone please clarify the following questions I have about compressed sensing?
1. Compressed sensing says that we need not acquire the entire signal x of dimension N×1; instead we can take only a few measurements y of dimension M×1, where M << N, and use them to reconstruct the signal x.
But we write the equation as y = A·x, where A is an M×N measurement matrix.
Looking at this equation, I feel that in order to get y (the M measurements, i.e. the compressed signal) we need the complete signal x, which implies that we have to measure x completely anyway. So how does this make compressed sensing useful, in the sense that we can compress directly while the data is being acquired?
2. If all we require is a few measurements M, which can be obtained by correlating only a few rows of A with x, why do we need x to be a sparse vector, or a vector that is compressible in some other domain?
3. Does compressed sensing reduce the number of sensors required to sample the data? Or does it only reduce the storage and processing equipment and time required to compress the data after it is sampled from the sensors?
Thanks a lot for clarifying my confusion!
The reference I've read indicates that the minimum number of training frames should be greater than or equal to the number of transmit antennas. Here, the training frames are S = [s1, s2, s3, ..., sN], where si is an nTx×1 vector and N is the total number of training frames.
When I use the least-squares estimator LS = S' * inv( S * S' ) to perform the channel estimation, I find that MATLAB sometimes displays "Warning: Matrix is singular to working precision." and the estimate is then incorrect. If I increase the training length, the number of such cases declines.
So, what is the problem in this case? How can I determine the training length needed to avoid the problem in reality?
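The warning is consistent with S*S' being rank-deficient: with N training frames of length nTx, the Gram matrix S*S' is nTx×nTx and is invertible only when N >= nTx, and even then it can be badly conditioned when N is close to nTx. A quick numeric illustration using NumPy's rank and condition-number checks (the antenna count and frame counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(10)
n_tx = 4

for n_frames in (2, 4, 16):
    S = rng.standard_normal((n_tx, n_frames))   # columns are training frames
    G = S @ S.T                                 # nTx x nTx Gram matrix
    rank = np.linalg.matrix_rank(G)
    print(n_frames, rank, np.linalg.cond(G))
# With n_frames < n_tx the Gram matrix is singular (rank < nTx), which is
# exactly the situation that triggers MATLAB's singular-matrix warning;
# longer training restores full rank and improves the conditioning.
```

In practice, using pinv(S) instead of S'*inv(S*S') avoids the hard failure, but a reliable estimate still needs N >= nTx, with extra margin (and ideally orthogonal training sequences) at low SNR.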
The two standards seem to compete for the TV White Space (TVWS) because they differ across nearly the whole protocol stack. For example, the transmission power for IEEE 802.22 is 4 W (36 dBm), which is 40 times the 100 mW (20 dBm) transmission-power cap for IEEE 802.11af. Additionally, there are differences in reception sensitivity, which is -97 dBm for the former and -64 dBm for the latter, as well as differences at the MAC layer. This heterogeneity could be problematic if the two standards want to compete for the TVWS.
As for the training sequence, we can use the known symbols to estimate the channel and then cancel the ISI. However, in reality there is not only a frequency-selective channel but also a carrier frequency offset.
In that case, channel estimation does not work in my simulation when I introduce a carrier frequency offset. Probably I need to cancel the carrier frequency offset first to ensure reliable channel estimation.
However, the phase has been distorted by the frequency-selective channel. Hence, the normal methods for estimating the frequency offset do not work.
How can I cancel the carrier frequency offset before channel estimation ?
Otherwise, how can I combine carrier frequency offset estimation and channel estimation at the same time?
Many thanks
What is the difference between periodicity in the time domain and periodicity in the spatial domain? When we say that the cosine and sine functions have a period of 2*pi, what does it mean: are we referring to the time domain or the space domain?
Dear all,
I want to estimate the PSD of the primary user (PU) using a consensus algorithm.
Does anyone have an idea of how to adapt a consensus algorithm for power spectral density (PSD) estimation?
Thanks
BR
I am using a Simulink model to simulate a dynamic system, and I want to optimize the system during each time step before moving to the next step. Is this possible when the optimization algorithm is written in m-files? Is there a way to update the state variables in Simulink after each optimization step while using an optimization algorithm in MATLAB with time-series input data?
I am wondering what the Hilbert transform pair does in order to create carrierless amplitude and phase (CAP) modulation.
Is it correct to do it as in "http://www.dsplog.com/2008/08/10/ber-bpsk-rayleigh-channel/" (though that is for flat fading)?
When the sampling frequency is twice the system (hardware) bandwidth, the sampled noise is almost white. In most cases, the hardware bandwidth equals the bandwidth of the signals from the channel.
Correlation-based detectors sometimes require oversampling to guarantee a correlated primary signal. Nevertheless, oversampling also causes the noise to become colored, which is not desired in these kinds of detectors. How could I still obtain white noise after oversampling?
I'm currently working on designing a communication system at THz frequencies for different digital modulation techniques. Can anyone please tell me how I can get more information about the "Advanced Design System" software in relation to my project?
Through my readings, I found out that it is possible to carry both data and energy through a fiber-optic cable. My goal is to deliver data and electricity (converted from light energy or a power source) through the same channel, that channel being the fiber-optic cable.
I am using an FIR filter with LMS to achieve adaptive noise cancellation, and hence I need to find the step size of the algorithm. How should I approach finding the step size?
Thanks
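A standard starting point is the bound mu < 2 / (L * P_x), where L is the filter length and P_x the input power (L * P_x approximates the total tap-input power); in practice one takes a fraction of that bound, or uses normalized LMS, which divides by the instantaneous input energy and makes the choice much less critical. An illustrative sketch (Python; the noise path and lengths below are invented):

```python
import numpy as np

rng = np.random.default_rng(11)
n, L = 5000, 16

noise = rng.standard_normal(n)                 # reference noise input
g = rng.standard_normal(8)                     # hypothetical noise path to the sensor
d = np.convolve(noise, g)[:n]                  # noise as heard at the sensor (causal FIR)

# Step-size bound mu < 2 / (L * P_x); take a safe fraction of it.
P_x = np.mean(noise ** 2)
mu = 0.1 * 2 / (L * P_x)

w = np.zeros(L)
err = np.zeros(n)
for i in range(L, n):
    x_vec = noise[i - L + 1:i + 1][::-1]       # most recent L input samples
    e = d[i] - w @ x_vec                       # error = desired - filter output
    w += mu * e * x_vec                        # LMS weight update
    err[i] = e

# Error power should drop sharply once the filter converges.
print(np.mean(err[L:500] ** 2), np.mean(err[-500:] ** 2))
```

Larger mu converges faster but leaves more excess error (misadjustment) and risks divergence near the bound, so the fraction of the bound trades convergence speed against steady-state accuracy.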
I'm trying to develop code to filter data using a bandpass filter, where the passband changes in every iteration. Can anyone suggest a function of frequency that I could multiply with the data (in the frequency domain) to filter it?
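Multiplying a window into the spectrum directly is equivalent to filtering, but a sharp rectangular window causes ringing; one alternative is to redesign a conventional bandpass filter in each iteration as the band moves. A minimal SciPy sketch (the sampling rate, filter order, and per-iteration band edges are made-up values):

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(12)
fs = 1000.0
data = rng.standard_normal(4000)

# Hypothetical per-iteration passbands (Hz): the band moves each time.
bands = [(40, 60), (90, 110), (140, 160)]

filtered = []
for lo, hi in bands:
    # Re-design the Butterworth bandpass for this iteration's band edges
    # (critical frequencies are normalized to the Nyquist rate fs/2).
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype='bandpass')
    filtered.append(filtfilt(b, a, data))      # zero-phase filtering

print([f.shape for f in filtered])
```

Designing the filter per iteration is cheap compared with the filtering itself, and filtfilt avoids the phase distortion a single forward pass would introduce.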
1. What is the maximum number of bits that can be generated by the ADCs produced today?
2. What is the maximum number of input bits that can be processed by a microprocessor?
3. A specific question about the Optical Time Domain Reflectometer (OTDR): does anyone know how the microprocessor inside the OTDR converts the input bits from the ADC into the data on the graph (dB vs km) shown by the OTDR?
I am interested in fountain codes and would like to hear from anyone using them, and in what capacity.
What are the research problems that need to be solved?