# System Identification - Science topic

Explore the latest questions and answers in System Identification, and find System Identification experts.

## Questions related to System Identification

Hi,

I am working on a data-driven model of a microgrid. For that, I need reliable datasets for identifying the microgrid (MG) data-driven model.

Thanks

Hello everyone,

How can I tune an LSTM network for dynamic system identification?

MATLAB has an LSTM identification example.

But I think that network is tuned to the example system's dynamics, because when I put a different transfer function into the model, the network does not work properly.

Is there any research indicating how to adjust LSTM options for different dynamic system properties? For example, can I use a high or low LearnRateDropPeriod for a slow system?

Thanks for your interest.

Suppose we have FRF data (a vector of frequencies and a vector of complex responses). How can we build a state-space model with a predefined structure? I know the MATLAB function "ssest" can do the job in principle.

I did it for 2-by-2 matrices by fixing some variables, and the results look good, though I got the feeling that the initial guess for the free variables is tricky to set. My main concern is that for large matrices this "Structured Estimation" is cumbersome to perform. Can anyone provide code for achieving this, or suggest another way to get the same result?

Cheers
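One alternative to ssest's Structured Estimation is to fit the free entries of the structured model directly to the FRF by nonlinear least squares. Below is a minimal Python sketch; the 2-state companion structure, the three free parameters, and the function names are all illustrative assumptions, and scaling to larger matrices just means exposing more entries of A, B, C as parameters:

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch: fit the free entries of a structured 2-state model
# H(s) = C (sI - A)^-1 B to FRF data by nonlinear least squares.
# Only a1, a2 (damping/stiffness-like terms) and b are free here;
# the companion structure of A is fixed.
def fit_frf(freqs, H_meas, theta0):
    w = 2 * np.pi * np.asarray(freqs)

    def model(theta):
        a1, a2, b = theta
        A = np.array([[0.0, 1.0], [-a2, -a1]])
        B = np.array([[0.0], [b]])
        C = np.array([[1.0, 0.0]])
        # evaluate H(jw) at each measured frequency
        return np.array([(C @ np.linalg.solve(1j * wk * np.eye(2) - A, B))[0, 0]
                         for wk in w])

    def resid(theta):
        e = model(theta) - H_meas
        # least_squares needs real residuals: stack real and imaginary parts
        return np.concatenate([e.real, e.imag])

    return least_squares(resid, theta0).x
```

A physically motivated initial guess theta0 remains important here too, but since theta is just a flat vector, adding parameters for larger systems is mechanical rather than cumbersome.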

I faced a very simple yet problematic phenomenon when trying to find the Bode plot of an unknown system with an oscilloscope.

As we know, we can simply inject a signal into the system with a signal generator, sweep the frequency, measure the input and output of the system, and then plot the Bode diagram by comparing the gain and the phase shift.

Here is the problem: when you have an unknown system with no prior knowledge, how can you tell whether the phase shift is positive or negative? As can be seen in the picture, the phase shift could be read as either +20 or -160 degrees.
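One way to resolve the sign is to treat the two channels as complex amplitudes rather than reading the shift off the screen: demodulating input and output at the test frequency gives a signed phase in (-180, 180] degrees. A minimal Python sketch (my own helper, assuming steady-state sinusoidal data over an integer number of periods):

```python
import numpy as np

# Demodulate input u and output y at test frequency f (sample rate fs);
# the complex ratio Y/U carries both gain and *signed* phase, so a
# +20 deg shift and a -160 deg shift give different results.
def gain_phase(u, y, f, fs):
    t = np.arange(len(u)) / fs
    ref = np.exp(-1j * 2 * np.pi * f * t)   # complex demodulation reference
    U = np.sum(u * ref)                     # complex amplitude of the input
    Y = np.sum(y * ref)                     # complex amplitude of the output
    H = Y / U
    return np.abs(H), np.angle(H, deg=True)
```

This still assumes you know which signal is the input; no measurement at a single frequency can distinguish a phase lead of +20 degrees from a lag of -340 degrees, but tracking the phase continuously as the sweep starts from low frequency (where most physical systems are near 0 degrees) resolves the wrap-around.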

Machine learning methods, particularly neural network models, have increasingly been adopted for forward and inverse problems in the science and engineering communities.

When it comes to inverse problems, also known as model discovery (identifying the coefficients in PDEs or ODEs), the advantages of ML over established methods in the field of system identification are almost never discussed. In machine learning, as in system identification, the function space (polynomials, trigonometric functions, or derivatives) is pre-selected by the user. If so, why not solve for the coefficients using established regression techniques? Why use machine learning? For example, what is the role of machine learning in SINDy? What are the advantages of machine learning for inverse problems (or model discovery) if the user needs to select the function space a priori?
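For context, the "machine learning" in SINDy is essentially sparse regression over the user-chosen library: a sequentially thresholded least-squares loop. A minimal Python sketch of that idea (my own simplification, not the reference implementation):

```python
import numpy as np

# Sequentially thresholded least squares, the core of SINDy:
# repeatedly solve the regression, zero small coefficients, and
# refit on the surviving library columns.
def stlsq(Theta, dXdt, threshold=0.1, iters=10):
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for j in range(dXdt.shape[1]):          # refit each state equation
            big = ~small[:, j]
            if big.any():
                Xi[big, j] = np.linalg.lstsq(Theta[:, big], dXdt[:, j],
                                             rcond=None)[0]
    return Xi
```

So the answer to "why not established regression?" is that, under the hood, it largely is regression; the added ingredient is the sparsity-promoting loop that selects which library terms survive.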

For example, Fit, VAF, MSE, RMSE: which is the best?
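For comparison, the four measures can be computed side by side. "Fit" below follows the NRMSE-style percentage that, as I understand it, MATLAB's compare reports; VAF is the variance accounted for. A minimal Python sketch:

```python
import numpy as np

def fit_metrics(y, yhat):
    e = y - yhat
    mse = np.mean(e**2)
    rmse = np.sqrt(mse)
    # NRMSE-style fit percentage: 100 * (1 - ||e|| / ||y - mean(y)||)
    fit = 100.0 * (1.0 - np.linalg.norm(e) / np.linalg.norm(y - np.mean(y)))
    # variance accounted for: 100 * (1 - var(e) / var(y))
    vaf = 100.0 * (1.0 - np.var(e) / np.var(y))
    return {"MSE": mse, "RMSE": rmse, "Fit": fit, "VAF": vaf}
```

MSE and RMSE depend on the signal's units and scale, while Fit and VAF are normalized, which is why the latter two are usually preferred for comparing models across datasets.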

I'm identifying model dynamics for human movement errors from each attempt (trial) to the next, so my time-series data is discrete in nature, unlike most sampled data; depending on the variable, an input of this kind may exist (system ID) or not (time-series estimation). I come from a control engineering background and am well aware of the importance of sampling-time selection for analysis, system identification, and control design in sampled-data systems. I'm confused about what I should select as the sampling period, in either MATLAB or Python, when dealing with a discrete-time system or signal for which a sampling time is not defined. I'd appreciate it if someone could point me to a paper or book that discusses non-sampled discrete-time system identification or time-series estimation.

In the MATLAB System Identification Toolbox, I can enter one as the sampling period. However, MATLAB will then assume a 1 s sampling period rather than treating the signal as non-sampled discrete time.

Hi System Identification and System Dynamics experts and gurus,

I want to fit a library of linear state-space systems that shall be integrated into a linear parameter varying system (attached diagram, see: https://www.mathworks.com/help/control/ug/linear-parameter-varying-models.html) as a function of the scheduling parameter p. I know that the System Identification Toolbox in Matlab can only fit one state-space system at a time. Any help or info about how to do that in Matlab or Python?

Dear experts,

I have estimated a state-space model from input-output data using system identification. Please could you guide me on how to design an LQR controller for it?

A question about the MATLAB implementation of an estimation technique:

I am working on super-twisting sliding mode control design for an electric dynamic load simulator.

I have used ode45 for the simulation. How can I now import the data for system identification? Please guide me.

[t1, x1] = ode45(@STSMC_ELS, [0 2], [0 0 0]);

Hello friends,

I estimated the dynamical model of my system using an ARX model and a nonlinear ARX model in the MATLAB System Identification Toolbox; the nonlinear ARX model is more than 99% similar to the actual data. Now, to design the controller, I need the state-space model. How can I convert these models to obtain the state-space matrices?

I should mention that I converted the models using the idss(sys) command, but is there another way to do this?

Best,

Hamid
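Besides idss, for a SISO ARX model the state-space matrices can be written down directly in observable canonical form. A minimal Python sketch (my own helper, assuming a monic denominator, zero initial conditions, and no direct feedthrough):

```python
import numpy as np

# SISO ARX model
#   y[k] = -a1*y[k-1] - ... - an*y[k-n] + b1*u[k-1] + ... + bn*u[k-n]
# realized in observable canonical state-space form (discrete time):
#   x[k+1] = A x[k] + B u[k],  y[k] = C x[k]
def arx_to_ss(a, b):
    """a = [a1, ..., an], b = [b1, ..., bn]."""
    n = len(a)
    A = np.zeros((n, n))
    A[:, 0] = -np.asarray(a, dtype=float)   # first column holds -a_i
    A[:n - 1, 1:] = np.eye(n - 1)           # shift structure on superdiagonal
    B = np.asarray(b, dtype=float).reshape(n, 1)
    C = np.zeros((1, n)); C[0, 0] = 1.0
    D = np.zeros((1, 1))
    return A, B, C, D
```

This reproduces the ARX difference equation exactly, which can be checked by simulating both forms on the same input.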

When running AVL, the program outputs derivative values; however, I have not been able to find documentation on whether the derivatives are dimensional, or dimensionless based on the *.AVL input file.

Currently, I am looking for a passionate teammate for collaborative research in System Identification. We will try to combine an optimization technique to obtain the most optimal model. Bachelor's students and master's students are both welcome. You may also ask your friends to join this group. Of course, I will be the last author, no worries. My target is an SCI journal.

Best regards,

Yeza

I am trying to use an advanced auto-tuning PID controller to control the speed of an induction motor drive with FOC. I would like to ask how to get the transfer functions of the induction motor and the inverter by a system identification method. Can anyone give me useful information?

Currently, I am working on System Identification using MATLAB.

My problem is that although I get a very small error, the model does not represent the original data well. Does anyone know where the problem is?

According to this example, validation using time-domain data gives a higher fitness score. Does anyone know how and why that works?

I am working with real data of a physical property. I have the Bode plot data (I am working with frequency-domain data). I have tried to identify the data and obtain a model, but the fit is very low (40%).

When I checked using state-space identification with 10 states, I got a better result, around 80%. But the requirement is that I use a second-order system.

My question is: how can I increase the fitness score of my model under the constraint that I must use a second-order system?

I would appreciate it if you could recommend a benchmark study with an available dataset for video processing.

I'm approaching Systems Identification for the first time.

Specifically, I need to identify a multi-input single-output real system with an (unknown) finite delay from measurement sets.

Can someone suggest some useful books on these topics?

Thanks

The identification results are needed to tune the PID controller that controls the real system. Unfortunately, I can't operate directly on the real system, and I have no information about it (transfer function, state-space model, etc.). So I can't tune the PID manually with a method like Ziegler-Nichols, nor can I excite the system with a particular input.

I only have the measurements caught by the sensors during three different work cycles, so I think the only way is to apply a system identification approach.

Does anyone know how to use the sparse mass, stiffness, and damping matrices extracted from Ansys to build a state-space model identified with the System Identification Toolbox™?

I'm working on greenhouse climate control using a fuzzy logic controller, so I'm searching for the mathematical model of the greenhouse structure and the transfer functions of the heater, cooler, and humidifier.

System Identification

ERA/OKID algorithm

ERA (Eigensystem Realization Algorithm)

OKID (Observer Kalman Filter Identification)

Extract mass, stiffness, and damping matrices of structures

I have a real, stable system. However, when I try to reconstruct the state-space matrices of my system using subspace identification, the result is an unstable A matrix whose eigenvalues are located outside the unit circle.

I know that there are ways to force the A matrix to be stable. But they tend to give a biased result, since the stability is forced rather than naturally identified.

I have torque and angular position data (p) to fit a second-order linear model T = I s^2 p + B s p + K p (s = j*2*pi*f). First, I converted my data (torque, angular position) from the time domain to the frequency domain. Next, the velocity and acceleration data were obtained by differentiating the angular positions in the frequency domain. Finally, a least-squares command, lsqminnorm (MATLAB), was used to estimate the coefficients. I expected a linear relationship, but the results show a very low R2 (<30%), and my coefficients are not always positive!

Filtering of the data:

angular displacements: moving average

torques: low-pass Butterworth, cutoff frequency 4 Hz, sampling 130 Hz

velocities and accelerations: only frequencies in [-5, 5] Hz are passed, to reduce noise

Could anyone help me out with this? What can I do to get a better estimate?

Here is part of my code (lightly cleaned up; note two fixes: the FFTs are zero-padded to N so their length matches f, and the torque band-limiting originally zeroed A instead of T):

%% smooth the angle data
angle_Data_p = movmean(angle_Data, 5);

%% frequency-domain derivative
N = 2^nextpow2(length(angle_Data_p));
df = 1/(N*dt); % frequency resolution, Fs/N
Nyq = 1/(2*dt); % Nyquist frequency, Fs/2

A = fft(angle_Data_p, N); % zero-pad to N so length(A) == length(f)
A = fftshift(A);
f = -Nyq : df : Nyq-df;

% keep only |f| <= 5 Hz to reduce noise
A(f > 5) = 0 + 0i;
A(f < -5) = 0 + 0i;

iomega_array = 1i*2*pi*f; % i*omega for each frequency bin
iomega_exp = 1; % 1 for velocity, 2 for acceleration

for j = 1:N
    if iomega_array(j) ~= 0
        A(j) = A(j) * (iomega_array(j)^iomega_exp); % *(iw) or *(-w^2)
    else
        A(j) = complex(0, 0); % zero the DC bin
    end
end

A = ifftshift(A);
velocity_freq_p = A; % keep both real and imaginary parts for the least squares
Velocity_time = real(ifft(A));

%% filter and transform the torque
[b2, a2] = butter(4, fc/(Fs/2));
torque = filter(b2, a2, S(5).data.torque);

T = fft(torque, N);
T = fftshift(T);

T(f > 7) = 0 + 0i; % the original zeroed A here by mistake
T(f < -7) = 0 + 0i;

torque_freq = ifftshift(T);

% same procedure for the fft of the angle data --> angle_freqData_p

%% least-squares estimation of [I B K]
phi_P = [accele_freq_p(:) velocity_freq_p(:) angle_freqData_p(:)];
TorqueP_freqData = torque_freq(:);

Theta = lsqminnorm(phi_P, TorqueP_freqData)

estimatedT2 = phi_P*Theta;
Rsq2_S = 1 - sum(abs(TorqueP_freqData - estimatedT2).^2)/sum(abs(TorqueP_freqData - mean(TorqueP_freqData)).^2) % abs() since the residuals are complex

I am facing problems applying these methods algorithmically in languages like Python and MATLAB. I was wondering if there is any routine or dedicated toolbox for time-varying system identification.

I am looking for Python packages which represent a good alternative to Matlab's System Identification Toolbox (or at least for parts of it). It would be great if you could recommend Python packages for linear and nonlinear system identification where you have already gained extensive experience. What are your experiences with e.g. SIPPY or SysIdentPy? Are there better and more comprehensive Python packages?

I especially plan to use the identified models for MPC and NMPC. Thanks in advance!

Best regards,

Günther

Does anyone know public datasets with data collected from accelerometer sensors preferably wireless sensors used in structural health monitoring projects?

Hello,

I'm performing system identification of the lateral closed-loop dynamics of a quadrotor.

My model receives a setpoint and should return position and acceleration.

I've proposed a second-order model of the type shown in the attached pictures and solved it with MATLAB's greyest().

The model performs very well for position (97% on a cross-validation dataset), but the performance is much lower for acceleration (43%).

Note that the acceleration data comes from an accelerometer, so it is noisy; I have already tried low-pass filtering the noise to improve performance, with no positive results.

Any ideas on how to improve this?

Thanks!

Hello,

I have a reference input and measured output, r and y in the figure. (Data has been acquired in closed loop)

I'm interested in identifying the whole closed loop dynamics (not just the plant), the reference is excited with a PRBS and y is the lateral acceleration of a UAV.

Any ideas on how to do this?

Thank you

I am dealing with vibration signals acquired from different systems. They are mostly non-stationary and in some cases cyclostationary. What are the less expensive methods for removing noise from these signals? They can be parametric or non-parametric.

I am trying to model a system which consists of a second-order linear part (for example, a mass-spring-damper with variable spring and damper parameters) and a nonlinear part which is a cascade of a derivative, a time delay, a static nonlinearity (e.g. a half-wave rectifier), and a low-pass system. I want to use neural networks for this purpose. I have tried time-delay neural networks, which were not successful. I am now trying RNNs (recurrent neural networks). Since I am relatively new to this subject, I was wondering which network architectures are suitable for this purpose? (I have to use a limited number of parameters.)

Dear all,

There is a 1-DOF series elastic actuator (the scheme and state equation are given in the image): A = [0 1; -K/m 0], B = [0 -1/m]*F,

where K is the stiffness constant of the spring, m is the mass of the joint, and x is the distance traveled by the spring.

Could you give me suggestions on how to write a controller that yields the proper value of the spring constant K for a desired (reference) distance to be traveled by the spring? It is necessary to highlight that the control input force is constant, and the distance can be controlled only by modifying the value of the spring constant.

My previous idea was to apply an adaptive MPC controller (with a Kalman filter or system identification).

But the problem is that the stiffness constant appears in the "A" matrix, and it must be treated as an "input" variable, not as a disturbance or a nonlinear variable.

P.S. It also didn't work with a linear function.

Thank you in advance.

Dear all,

I have a set of I/O data.

I estimated a transfer function from this data in MATLAB (command: tfest). I removed the trend from the data beforehand (command: detrend). Can I design a PID controller based on this (command: pidtune) and use it on the real process, given that the data is detrended?

If not, how should I adjust the PID controller settings for the real process?

Thanks

Dear colleagues,

I've been researching the application of machine learning techniques to improve the performance of complex communications systems. The main reason for using machine learning in those systems is that it is difficult to find a closed-form function that models their behavior and, therefore, to optimize them.

However, in the past few days I've been thinking about whether there is a way to learn the transfer function of a system (even a nonlinear one) from data on its inputs and outputs. Any thoughts regarding this idea?

Thanks in advance for your interesting and helpful comments.
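As a baseline before reaching for machine learning: for an (approximately) linear system, the impulse response can be estimated directly from input/output records by ordinary least squares. A minimal Python sketch (my own helper, assuming a SISO system and zero initial conditions):

```python
import numpy as np

# Estimate a length-m FIR approximation h of an unknown linear system
# from input u and output y, so that y[n] ~= sum_k h[k] * u[n-k].
def fir_ls(u, y, m):
    N = len(u)
    # regressor matrix: column k is u delayed by k samples (zero-padded)
    Phi = np.column_stack([np.concatenate([np.zeros(k), u[:N - k]])
                           for k in range(m)])
    h, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return h
```

For nonlinear systems the same regression idea carries over by augmenting the regressors with nonlinear terms (polynomials of past inputs/outputs), which is exactly where the classical system identification literature and the machine learning approaches meet.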

I am working on system identification for general dynamic systems.

The idea is that I think such systems can be well represented by regression on the input and output data.

Do you know anyone who has worked on this before, or can you direct me to published work on something similar?

Thank you very much

I am implementing the recursive least squares method with a forgetting factor in MATLAB for parameter estimation of an ARX model. How do I choose P(0) for this method?
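A common choice is P(0) = delta * I with a large delta (for example 10^3 to 10^6), which encodes low confidence in the initial parameter guess theta(0) = 0; the exact value matters little once enough data has been processed. A minimal Python sketch of the standard textbook recursion (not any specific toolbox implementation):

```python
import numpy as np

# Recursive least squares with forgetting factor lam.
# P(0) = delta * I, large delta => weak prior on theta(0) = 0.
def rls_ff(phi, y, lam=0.98, delta=1e4):
    n = phi.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)
    for k in range(len(y)):
        x = phi[k]
        Px = P @ x
        g = Px / (lam + x @ Px)              # gain vector
        theta = theta + g * (y[k] - x @ theta)
        P = (P - np.outer(g, Px)) / lam      # covariance update
    return theta
```

With lam close to 1 the estimate converges to the ordinary least-squares solution; smaller lam tracks time-varying parameters at the cost of noisier estimates.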

Hello, I want to stimulate the brain using electricity, and for that I need system identification. For example, suppose the minimum and maximum amplitude of my stimulation current are 2 mA and 6 mA, and the minimum and maximum frequency are 50 Hz and 100 Hz. In this case, can I perform system identification using only the extreme values of the stimulation, i.e., the four signals 2 mA at 50 Hz, 6 mA at 100 Hz, 2 mA at 100 Hz, and 6 mA at 50 Hz?

Or should I also include mid-range values, for example 3 mA at 75 Hz and 4 mA at 80 Hz, covering many cases? How can I design good input signals for system identification?

Can anyone suggest a reliable source in which the modal parameters of one or more real concrete arch dams have been obtained through field tests? I need these values to validate a damage detection method proposed for this type of structure.

My system has multiple inputs, i.e., voltage, phase shifts, etc. My final aim is to develop a transfer function of my system and then work on its controllability, but I am unable to proceed, as the System Identification app is considering only one input. Anyone with knowledge of the above, or an idea of where I may be making a mistake, please come forward with suggestions. I will be deeply obliged.

I am looking for MATLAB code for the EM algorithm applied to system identification models (e.g., ARX or ARMAX).

Does the accuracy of system identification affect the performance of controlling a plant in control engineering?

For example, I performed system identification and obtained matrices A, B, C, D using method A, and the cross-correlation with the actually measured data is 0.6.

We did the same with method B and got a cross-correlation of 0.8.

In this case, we can say that for system identification, method B is better than method A.

In general, can we then expect better control performance using the A, B, C, D matrices from method B than from method A?

I find the process of selecting the model structure in reduced models very trial-and-error based. I think we always prefer linear models like the ARX (Autoregressive Exogenous) model over nonlinear models like the NARX (Nonlinear Autoregressive Exogenous) model, even when the underlying system is nonlinear. I need some help and guidance on the procedure for selecting the model structure. Is there any MATLAB toolbox or software for this purpose?

I am currently working on a model from a circuit that includes diodes and time varying terms. I have the experimental data and I want to find the best parameters to fit the data with my model obtained from physical laws.

Thank you!

I'm still new to system identification, but I have the gist of it. I obtained experimental data and am using MATLAB's armax function to build an ARMAX model structure. I'm trying the black-box approach and was wondering which combination of orders I should choose. I tried using MATLAB's AIC and looked for the lowest value. However, when I choose the model with the lowest AIC and compare it, it turns out badly and returns a much larger error based on the graph.

Hi guys!

I have a fundamental question regarding the fitness criterion when performing model estimation for system identification. So far, I have obtained a good model for a driving simulator identification task. In this post, I want to discuss the SISO vertical model. The training data were obtained from an acceleration sensor which delivers the vertical acceleration signal. Measurements were performed in a frequency range of 0.2-20 Hz. It is obvious that for small frequencies the sensor noise is considerable.

After estimating the (nonlinear) system models, I calculated the goodnessOfFit using the NRMSE and NMSE cost functions. I found that the NMSE yields significantly better fit results than the NRMSE. I understand that the NRMSE is the better choice for describing a 1:1 fit: either the model fits perfectly or it doesn't. However, the NMSE seems to punish outliers less harshly than the NRMSE does.

Thus, I want to argue that my model is good based on an NMSE measure. The reason is that for the lower-frequency parts the identified model is robust against the noise and does not model it. Therefore, the model will not represent the noise in the training signal, and this is better described by an NMSE criterion than by an NRMSE one.

The bigger plot (lolimot_noise_training.jpg) shows what I mean. Training was done using a local linear model approach with a concatenation of single sines as training data. The model was cross-validated using sweep and noise test signals.

The original training signal (unzoomed) can be found in the second plot (copy.jpg).

To put the plot in numbers:

Fit using NRMSE for the training signal : 87%

Fit using NMSE for the training signal : 98%

Similar values for the test/cross-validation signals.
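For reference, the two measures differ only in whether the normalized error is squared, which is exactly why noise-dominated residuals look much better under NMSE. A minimal Python sketch (function names are mine; the formulas follow my understanding of MATLAB's goodnessOfFit conventions):

```python
import numpy as np

# NRMSE-based fit: 1 - ||y - yhat|| / ||y - mean(y)||
def fit_nrmse(y, yhat):
    return 1.0 - np.linalg.norm(y - yhat) / np.linalg.norm(y - np.mean(y))

# NMSE-based fit: 1 - sum((y - yhat)^2) / sum((y - mean(y))^2)
# i.e. the NRMSE error term, squared before subtraction from 1.
def fit_nmse(y, yhat):
    return 1.0 - np.sum((y - yhat)**2) / np.sum((y - np.mean(y))**2)
```

Note the identity 1 - NMSE-fit = (1 - NRMSE-fit)^2: plugging in an 87% NRMSE fit predicts 1 - 0.13^2, roughly 98% NMSE fit, which matches the numbers above. So the two figures are two views of the same residual, and the NMSE is not adding information, only compressing the error.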

So my final question is: what do you think of this argumentation? From the plot it is clear that the obtained model is sufficient and good enough to represent the system dynamics. I would like to pursue my argumentation using the NMSE as a measure of fit instead of the NRMSE. However, I have found that in most of the literature the NRMSE is used most often, though usually on more 'clinical' training data without too much noise.

Thank you very much !

Hi guys,

Reading my question title has surely given at least some of you a flashback to your own experiences estimating a nonlinear system model. I hope to get some tips, tricks, and useful criticism on my approach to the model estimation. The project I'm working on is part of my bachelor's thesis. I am thankful for every useful input from you!

What am I identifying?

My task is to identify a fully-fledged driving simulator capable of movements in nine degrees of freedom in total. The main goal is to obtain a good system model which fits the estimation and validation data and can be used for further investigation (if needed). Most importantly, as I’m conducting the thesis with an automotive OEM, not only the identification per se but also the whole process from generating measurement data, selecting a suitable model and optimizing the parameters shall be worked out in order to have a reference for future research purposes.

The driving simulator has been shown to be nonlinear and dynamic and shall be investigated as a MIMO system.

What has been done so far?

Measurement: All nine degrees of freedom have been excited with suitable position signals (Sine sweeps, discrete sine excitations, white and pink noise, Amplitude modulated pseudo binary signals) and the output has been measured as acceleration. Not only have the individual degrees of freedom (longitudinal, lateral, …) been excited (which would be a SISO case) but also a multidimensional excitation (by exciting all degrees of freedom) has been performed to identify the MIMO system.

Model selection: I am working with Matlab’s built-in toolbox from Prof. Ljung as well as a Lolimot identification toolbox from Prof. Nelles. So far, I have gained deep insight in both toolboxes and examined the different approaches in more or less full detail. In the beginning, I have played around with the GUI to get a feeling for the system models. Now, I’m exclusively working with the toolbox functions in Matlab scripts to change the model and estimation parameters arbitrarily. I want to concentrate my thesis on the estimation of a Lolimot, Narx, Hammerstein-Wiener and a linear model. This way, I want to compare the different models and I want to show that a linear model for example is not sufficient for the underlying driving simulator. In conclusion, I want to find the model that performs best for my system.

What am I planning to do next?

In the next steps of my bachelor’s thesis, I want to examine the above mentioned system models and thus have to perform a parameter optimization. The models rely on a different set of parameters (e.g. time delays, nonlinearity estimator parameters, …). As testing out all parameter combinations does not seem to be a viable option w.r.t. computing time, I have defined a DoE and want to perform a subset selection which will be representative of all parameter combinations. Using this subset (which is noticeably smaller than the huge amount of parameter possibilities of the DoE) the models shall be estimated and compared using their respective loss function values. This allows me to assign a unique value to every parameter set of the obtained subset which reflects whether the model is better or worse. Next, I want to build a response surface model and find its global minimum to find the best parameter combination of the whole subset and consequently of all parameter variations.

What questions do I have?

Before I work on the above mentioned parameter optimization, I want to make sure that I have understood everything this far and that my data is suitable for an identification. I have gained quite some understanding reading various system identification publications, however I still am not sure on two things.

Excitation signals:

The above mentioned excitations have been measured with a set of acceleration sensors all around the vehicle mockup. The measurement output has shown some pretty good results, which I used to identify other system properties like latency, phase lag, etc. I am sure that the measured signals themselves are pretty good and show minor noise in the relevant frequencies and obviously a bit more noise for lower frequencies where the noise characteristics of the sensor itself takes over. However, I am not sure whether the type of excitation is right. For dynamic systems sine sweeps and APRBS signals have yielded good results in the literature. However, an APRBS signal (step excitations with different amplitudes) shows steep peaks in the measured output of the simulator. The vehicle moves (for vertical signals) up, idles a few seconds and moves down again. The peaks result in the steep movement up and back down again. Between that is just dead time. Thus, I am not sure, whether the system dynamic has been excited strong enough. A sine sweep seems to be better and the system models estimated with both toolboxes seem to confirm that or at least manage to obtain a fit to the estimation sweep data, whereas the APRBS data is very hard to fit.

So the question here is: Is such an excitation with dead time between measurement output peaks even suitable for an identification?

Another question is: The discrete sinusoidal excitations have been designed to excite the system with one sinusoidal signal which is faded in and out, then there is 2 seconds of dead time and then the next sinusoid follows. The measured output follows suit and shows excitation with dead time between the sinusoids. Is this critical as well?

The final question here is: I have also conducted measurements with white and pink noise inputs. The statistical character makes this kind of input especially useful. However, the signals had to be manipulated in amplitude and smoothed so as not to overexcite the simulator dynamics (and eventually crash the simulator). This means that the frequency band is not as wide as that of a 'normal' white noise, but it should be in the relevant range of the simulator. Is an identification with that kind of estimation signal suitable?

Estimation and validation data:

When estimating the system models, estimation, validation, and test data can be assigned. The system is estimated based on the estimation data (training data) and can be validated by plotting the system output for the validation data. What I fundamentally do not seem to understand, or have not read yet, is whether the estimation and validation data can be fundamentally different. In most examples I have seen, the system has been trained with e.g. step inputs and validated with a different, independent set of step inputs. It was then tested again with a third independent step input. What I am trying to do, however, is to estimate the system based on e.g. the sweep data and to validate it on the white-noise signals. The question thus is: is that even a good approach? The signals are fundamentally different.

As far as I understand or want to understand is that a successful identification of a system should be capable to represent all input-output combinations possible for the system. It is very clear to me that this will never be the case. But the underlying system in my case is able to perform sine and step excitations and many more. Should I have measured an input-output combination that contained all kinds of excitations?

In other words: what is the best way to estimate and validate the model in my case? Ljung's toolbox does not even take validation data into consideration during estimation; it rather relies on the user to evaluate the fit to the validation data. This is very understandable, since in most cases the evaluation is a mere decision of the user.

I am thanking all of you for input to my problems!

Hi guys,

Reading my question title surely has given at least some of you a flashback on their experiences during the estimation of a nonlinear system model. I hope to get some tips, tricks and useful critic on my proceedings with the model estimation. The project I’m working on is part of my bachelor’s thesis. I am thankful for every useful input form you !

**What am I identifying?**

My task is to identify a fully-fledged driving simulator capable of movements in nine degrees of freedom in total. The main goal is to obtain a good system model which fits the estimation and validation data and can be used for further investigation (if needed). Most importantly, as I’m conducting the thesis with an automotive OEM, not only the identification per se but also the whole process from generating measurement data, selecting a suitable model and optimizing the parameters shall be worked out in order to have a reference for future research purposes.

The driving simulator has been shown to by nonlinear and dynamic and shall be investigated as a MIMO system.

**What has been done so far?**

*Measurement:*All nine degrees of freedom have been excited with suitable position signals (Sine sweeps, discrete sine excitations, white and pink noise, Amplitude modulated pseudo binary signals) and the output has been measured as acceleration. Not only have the individual degrees of freedom (longitudinal, lateral, …) been excited (which would be a SISO case) but also a multidimensional excitation (by exciting all degrees of freedom) has been performed to identify the MIMO system.

*Model selection:*I am working with Matlab’s built-in toolbox from Prof. Ljung as well as a Lolimot identification toolbox from Prof. Nelles. So far, I have gained deep insight in both toolboxes and examined the different approaches in more or less full detail. In the beginning, I have played around with the GUI to get a feeling for the system models. Now, I’m exclusively working with the toolbox functions in Matlab scripts to change the model and estimation parameters arbitrarily. I want to concentrate my thesis on the estimation of a Lolimot, Narx, Hammerstein-Wiener and a linear model. This way, I want to compare the different models and I want to show that a linear model for example is not sufficient for the underlying driving simulator. In conclusion, I want to find the model that performs best for my system.

**What am I planning to do next?**

In the next steps of my bachelor’s thesis, I want to examine the above mentioned system models and thus have to perform a parameter optimization. The models rely on a different set of parameters (e.g. time delays, nonlinearity estimator parameters, …). As testing out all parameter combinations does not seem to be a viable option w.r.t. computing time, I have defined a DoE and want to perform a subset selection which will be representative of all parameter combinations. Using this subset (which is noticeably smaller than the huge amount of parameter possibilities of the DoE) the models shall be estimated and compared using their respective loss function values. This allows me to assign a unique value to every parameter set of the obtained subset which reflects whether the model is better or worse. Next, I want to build a response surface model and find its global minimum to find the best parameter combination of the whole subset and consequently of all parameter variations.

**What questions do I have?**

Before I work on the above-mentioned parameter optimization, I want to make sure that I have understood everything so far and that my data is suitable for an identification. I have gained quite some understanding from reading various system identification publications; however, I am still unsure about two things.

*Excitation signals:*

The above-mentioned excitations have been measured with a set of acceleration sensors all around the vehicle mockup. The measurements have shown some pretty good results, which I used to identify other system properties like latency, phase lag, etc. I am sure that the measured signals themselves are pretty good: they show minor noise in the relevant frequencies and, obviously, a bit more noise at lower frequencies, where the noise characteristics of the sensor itself take over. However, I am not sure whether the type of excitation is right. For dynamic systems, sine sweeps and APRBS signals have yielded good results in the literature. However, an APRBS signal (step excitations with different amplitudes) shows steep peaks in the measured output of the simulator. The vehicle moves up (for vertical signals), idles a few seconds and moves down again. The peaks result from the steep movement up and back down; in between there is just dead time. Thus, I am not sure whether the system dynamics have been excited strongly enough. A sine sweep seems to be better, and the system models estimated with both toolboxes seem to confirm that, or at least manage to fit the estimation sweep data, whereas the APRBS data is very hard to fit.

So the question here is: Is such an excitation with dead time between measurement output peaks even suitable for an identification?

Another question is: The discrete sinusoidal excitations have been designed to excite the system with one sinusoidal signal which is faded in and out, then there is 2 seconds of dead time and then the next sinusoid follows. The measured output follows suit and shows excitation with dead time between the sinusoids. Is this critical as well?

The final question here is: I have also conducted measurements with white and pink noise inputs. Their statistical character makes this kind of input especially useful. However, the signals had to be reduced in amplitude and smoothed so as not to overexcite the simulator dynamics (and eventually crash the simulator). This means that the frequency band is not as wide as that of ‘normal’ white noise, but it should cover the relevant range of the simulator. Is an identification with that kind of estimation signal suitable?
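For reference, the sweep and APRBS excitations discussed above can be generated in a few lines. Here is a minimal numpy sketch; the function names and all parameter values are my own, purely illustrative, and not taken from any toolbox:

```python
import numpy as np

def sine_sweep(f0, f1, T, fs):
    """Linear sine sweep (chirp) from f0 to f1 Hz over T seconds at rate fs."""
    t = np.arange(0.0, T, 1.0 / fs)
    k = (f1 - f0) / T                        # sweep rate in Hz per second
    return t, np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

def aprbs(T, fs, hold_min, hold_max, a_min, a_max, seed=0):
    """APRBS: piecewise-constant signal with random hold times and amplitudes."""
    rng = np.random.default_rng(seed)
    n = int(T * fs)
    u = np.empty(n)
    i = 0
    while i < n:
        # hold each random amplitude for a random duration (in samples)
        hold = max(1, int(rng.uniform(hold_min, hold_max) * fs))
        u[i:i + hold] = rng.uniform(a_min, a_max)
        i += hold
    return np.arange(n) / fs, u
```

A common rule of thumb is to choose the minimum hold time of the APRBS on the order of the dominant time constant of the plant, so the system does not merely idle between steps; that may address the dead-time concern above.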

*Estimation and validation data:*

When estimating the system models, estimation, validation and test data can be assigned. The system is estimated based on the estimation data (training data) and can be validated by plotting the system output for the validation data. What I fundamentally do not seem to understand, or have not read yet, is whether the estimation and validation data may be fundamentally different. In most examples I have seen, the system is trained with e.g. step inputs, validated with a different, independent set of step inputs, and then tested with a third independent step input. What I am trying to do, however, is to estimate the system based on e.g. the sweep data and to validate it on the white noise signals. The question thus is: Is that even a good approach? The signals are fundamentally different.

As far as I understand, a successful identification of a system should be capable of representing all input-output combinations possible for the system. It is very clear to me that this will never fully be the case. But the underlying system in my case is able to perform sine and step excitations and many more. Should I have measured an input-output combination that contains all kinds of excitations?

In other words: What is the best way to estimate and validate the model in my case? Ljung’s toolbox does not even take validation data into consideration during estimation; it rather relies on the user to evaluate the fit to the validation data. This is very understandable, since in most cases the evaluation is a mere decision of the user.
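To make the cross-signal validation idea concrete, here is a minimal least-squares ARX sketch in Python/numpy, independent of the Matlab toolboxes (all function names are my own): estimate on one signal class and compute a Matlab-style NRMSE fit percentage on a fundamentally different one. For a model whose structure matches the true system, the cross fit stays high even when the validation signal looks nothing like the estimation signal.

```python
import numpy as np

def arx_fit(u, y, na, nb):
    """Least-squares ARX fit: y[k] = -a1*y[k-1] - ... + b1*u[k-1] + ..."""
    n = max(na, nb)
    Phi = np.array([np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]])
                    for k in range(n, len(y))])
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta[:na], theta[na:]

def arx_simulate(u, a, b):
    """Simulate the ARX model from zero initial conditions."""
    na, nb = len(a), len(b)
    n = max(na, nb)
    y = np.zeros(len(u))
    for k in range(n, len(u)):
        y[k] = -a @ y[k - na:k][::-1] + b @ u[k - nb:k][::-1]
    return y

def fit_percent(y, yhat):
    """Matlab-style NRMSE fit in percent (100 = perfect)."""
    return 100.0 * (1.0 - np.linalg.norm(y - yhat) / np.linalg.norm(y - y.mean()))
```

In this idealised, noise-free setting the estimate transfers perfectly; with real data the cross fit will be lower, but a large gap between same-class and cross-class fit is itself diagnostic of under-excitation or model mismatch.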

**Thank you all for any input on my problems!**

Hi,
I have a quad-rotor in x configuration attached to a test bench system with 3 degrees of freedom which is stabilized by PID controllers.

In order to run system identification, I've tried to use logged experimental data: 2 inputs (the outputs of two PID controllers) and 2 IMU outputs (roll, pitch).

The issue is that the state-space model identified with MATLAB's n4sid function isn't responding as expected, even though the goodness of fit is more than 80%.

How do I choose the regularisation parameter for the PAPA (proportionate affine projection) algorithm so that I can get a smooth learning curve?

I need to apply a Kalman filter for system identification. The function can be of the form, e.g., A*cos(omega*t - phi). I was not able to fit the data with a Kalman filter, while something like A*((t-tau)/T)^alpha*exp(-(t-tau)/T) seems to be manageable (it just sometimes requires lots of iterations). Is there a criterion/rule of thumb/etc. that states for which kinds of nonlinear functions this works or is likely to work?

Thanks in advance.
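One likely reason the sinusoid is hard: the measurement is nonlinear in the parameters (jointly in A, omega, phi), and the sensitivity to omega grows with t, so a plain linear Kalman filter cannot estimate them, and even a linearised filter needs a reasonably close initial guess to avoid cycle slips. An extended Kalman filter that re-linearises around the current estimate at every step often does work. A minimal numpy sketch, with all function names and tuning values my own and purely illustrative:

```python
import numpy as np

def ekf_sinusoid(t, y, x0, P0, r=1e-2, q=1e-9):
    """EKF for y_k = A*cos(omega*t_k - phi) + noise, state x = [A, omega, phi].

    The parameters follow a random walk (tiny process noise q); the
    measurement model is linearised around the current estimate each step."""
    x = np.asarray(x0, dtype=float).copy()
    P = np.asarray(P0, dtype=float).copy()
    Q = q * np.eye(3)
    for tk, yk in zip(t, y):
        P = P + Q                                   # time update
        arg = x[1] * tk - x[2]
        h = x[0] * np.cos(arg)                      # predicted measurement
        H = np.array([np.cos(arg),                  # dh/dA
                      -x[0] * tk * np.sin(arg),     # dh/domega
                      x[0] * np.sin(arg)])          # dh/dphi
        S = H @ P @ H + r                           # innovation variance (scalar)
        K = P @ H / S                               # Kalman gain
        x = x + K * (yk - h)                        # measurement update
        P = P - np.outer(K, H @ P)                  # (I - K H) P
    return x
```

The rule of thumb implicit here: the EKF works well when the function is smooth and the linearisation stays valid between updates; for an oscillatory model that means the initial frequency error times the record length must stay well below a cycle, otherwise the filter locks onto the wrong number of cycles.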

I would like to know the mathematical background behind MATLAB's linear grey-box system identification (idgrey and greyest).

It's easy to build a block Hankel matrix for a 1xN (N=1000) vector, but how would we build a block Hankel matrix for an MxN (M=3, N=1000) matrix?
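One common convention (used in subspace identification) is to stack the M channels so that each block row of the Hankel matrix holds all M channels at one time shift. A numpy sketch; the function name is my own:

```python
import numpy as np

def block_hankel(Y, block_rows):
    """Block Hankel matrix from channel-by-sample data Y (M x N).

    The result has block_rows*M rows and N-block_rows+1 columns;
    block row i (an M-row slab) contains Y shifted by i samples."""
    M, N = Y.shape
    cols = N - block_rows + 1
    H = np.zeros((block_rows * M, cols))
    for i in range(block_rows):
        H[i * M:(i + 1) * M, :] = Y[:, i:i + cols]
    return H
```

For M=3, N=1000 and, say, 20 block rows, this yields a 60 x 981 matrix; each column is a length-60 window of the three channels.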

Hello, I have a 1-D signal and I want to estimate the damping for each mode. Is there any toolbox available that uses wavelets?

Thanks

Hi all,

Previously, I developed an MPC using a linear ARX model (from the System Identification Toolbox). Here I do not use the MPC Toolbox available in Matlab; I designed the MPC based on this book: MPC using Matlab, by Liuping Wang.

My question is: how do I use a NARX equation (from the System Identification Toolbox) in the MPC?

Any opinions or references are very much appreciated.

Hi All,

I used the System Identification Toolbox to identify a discrete-time state-space model with 2 inputs, 1 output and a 1e-9 s sample time. The data fit is 100% as shown in the toolbox, but I get very high values for a step response or for given input values. What kind of problem could this be? I uploaded the data, the iddata object and the ss model.

thanks

Hi All,

How do I decide the number of poles and zeros in the System Identification Toolbox when determining the transfer function for a particular system, given the input and output time series?

I am working on shunt active power filtering. I am devising a Simulink model that can suggest compensation currents for real-world nonlinear loads (e.g. microwaves, energy savers etc.). I have measured the three-phase voltages and currents for these loads and converted them to CSV files to import into Matlab. Now I want to generate a nonlinear model that relates these voltages and currents. Can anyone suggest a mechanism to do that?

I have to identify a transfer function from input and output data, but each time I do it with the System Identification Toolbox, I don't get the right value for the output. Does the sample time play a role in this? Which value should I use if the input and output data contain only the variable values and not the sampling times from the experiment? Basically, if the output depends on the input value and not on time, how do I get the transfer function from the data?

Is it possible to transfer data from the Fuzzy Logic Designer to the System Identification Toolbox? If so, can you give me the detailed steps to follow?

Hello, I am working on a model:

dx/dt = Ax + Bu

y = Cx + e

It is equivalent to an output-error model, where the noise is not modeled. I have trained it on real data through maximum likelihood estimation (MLE), and the parameters in A, B and C have been estimated.

The problem is that the residual is not white. Does this violate the MLE assumption that y is a sequence of independent variables?
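As a side note on checking this in practice, a Ljung-Box style statistic on the residual autocorrelation is a standard whiteness test. A minimal numpy sketch (the function name is my own):

```python
import numpy as np

def ljung_box(e, max_lag=20):
    """Ljung-Box whiteness statistic for a residual sequence e.

    Under the null hypothesis that e is white, Q is approximately
    chi-squared distributed with max_lag degrees of freedom, so a Q far
    above that distribution's quantiles indicates colored residuals."""
    e = np.asarray(e, dtype=float) - np.mean(e)
    N = len(e)
    r0 = e @ e / N                                   # lag-0 autocovariance
    lags = np.arange(1, max_lag + 1)
    rho = np.array([e[:-k] @ e[k:] / (N * r0) for k in lags])
    Q = N * (N + 2) * np.sum(rho**2 / (N - lags))
    return Q, rho
```

If Q is large, the output-error likelihood (which assumes independent prediction errors) is misspecified; the parameter estimates may still be consistent, but the usual covariance estimates and confidence intervals are not reliable.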

**I work with the Vitek 2 Compact system for the identification of bacterial isolates and with the PCR technique using the 16S rRNA gene ... but the results are mismatched. Can anyone tell me why?**

I intend to conduct a modal test on a clamped-clamped steel beam in ABAQUS.

-The input is a random signal with normal distribution, zero mean, and a maximum amplitude of 5 N.

-The duration of loading is 30 seconds.

-The length of the beam is 2 meters; it is monitored by 5 sensors attached to the top surface of the I-shaped beam.

-The sampling rate is 100 Hz, so the Nyquist frequency is 50 Hz.

-The beam's natural frequencies are in the range of 4 to 35 Hz.

Have I set the right parameters to perform the modal analysis?

I am recording the acceleration data of the sensors for a structural system identification method. Unfortunately, the frequencies identified with the Stochastic Subspace Identification (SSI) method are not satisfactory and do not match the real frequencies. I believe the above parameters should change to achieve my desired results.

I would appreciate it if anyone could help me out with the right numbers for the duration of the analysis, the sampling rate and the number of sensors, and with the type of excitation to apply to the beam.

Thank you.

There are many different kinds of soft computing methods used for the identification of complex systems, including robotic manipulators and mechatronic systems. But without an intelligent choice among them to extract the true dynamics of the system under study, we will not succeed in modelling it very well.

Hello All,

I was doing some behavioural modelling of the torque transfer characteristics of a belt drive system, from the driver pulley to the driven pulley. While doing so, I also looked at how the angular velocity is transferred. I will explain my point with the following equations.

G_omega = Omega_driven / Omega_driver

G_Torque = Torque_driven / Torque_driver

G_Torque = (I * alpha_driven) / (I * alpha_driver)
where I is the moment of inertia and alpha is the angular acceleration.

G_Torque = (I * d(omega_driven)/dt) / (I * d(omega_driver)/dt)

where omega is the angular velocity.

In the Laplace domain, with zero initial conditions,
G_Torque = (I * s * omega_driven) / (I * s * omega_driver)

After cancelling the I and s terms,
G_Torque = G_omega.

So does this really mean that the angular velocity transfer from the driver to the driven pulley has the same behaviour as the torque transfer, and that the step responses of both will look the same?

I feel that cancelling the moment-of-inertia terms is valid only if there is no slip or flex in the belt (could anybody confirm?). But even if there is slip in the belt (and thus different moments of inertia felt by the driver and driven pulleys), doesn't that mean the slip is already captured by the omega differences on both sides, so that this model will still work for torque?

The reason for asking is obviously that angular velocity can be measured easily, unlike torque.
I would much appreciate it if somebody could point me in the right direction.

Thank you,
Alex

Taxonomists,

What do you think about alternative systems of identification of species? Is DNA barcoding going to replace Linnaean binomial nomenclature? What are the advantages of a numeric system? Nomenclature is the topic of my dissertation and subject of my further research so I am interested in your opinions.

Tanya Kelley

I intend to determine the transfer matrix of a TRMS (twin rotor MIMO system) by system identification. In this context, which input would be better, and why: chirp or PRBS?

The discrete-time identified transfer function fits the data well (fit to estimation data: **97.9%**; see also **y_ym.jpg** produced by the System Identification app). However, when I tried to test it with lsim as follows, I found that y and yd are quite different (see **y_yd.jpg**), and yd is near zero. Why?

**-------------- MATLAB code ------------------**

data=iddata(y, u, Ts)

np = 5;