Science topic

# Linear Systems - Science topic

Explore the latest questions and answers in Linear Systems, and find Linear Systems experts.
Questions related to Linear Systems
• asked a question related to Linear Systems
Question
Dear all,
I am currently dealing with a problem of spurious oscillations in wave propagation in elastoplastic bodies.
The problem in question is a 1D column subjected to a 10Hz sine wave acceleration input motion at the bottom of the mesh. Abaqus/Standard is used for the dynamic analysis.
Generally, the stress–strain behavior looks fine, as do the displacements and velocities. The oscillations are most pronounced in the calculated accelerations. They appear in time when the element/Gauss point is subjected to a sharp change in stiffness, e.g. from the elastic branch into the plastic one when no smooth hyperbolic relationship is used. The oscillations are of much higher frequency than the input motion (say around 100 Hz).
I am wondering where these oscillations come from. I have tried to include numerical damping in the HHT direct time integration scheme; however, this did not influence the observed oscillations. I wonder whether further "play" with the alpha, beta, and gamma parameters of the HHT method could damp out the oscillations I experience. So far I have only used the set of parameters suggested in the Abaqus Manual for the "application=moderate dissipation" option.
I have also tried the effects of time and space discretization; neither was effective (for the space discretization, going for a much finer resolution than the minimum of 10 nodes per wavelength typically advised). The problem remains and is insensitive to mesh or time step refinement. The only way of removing the oscillations is to apply Rayleigh damping; however, this seems an artificial way of removing the problem, since the constitutive model is elastoplastic and is deemed capable of accounting for material damping.
Generally, the Abaqus manual says: "The principal advantage of these operators is that they are unconditionally stable for linear systems; there is no mathematical limit on the size of the time increment that can be used to integrate a linear system." So I understand that this scheme may be unstable or inaccurate for nonlinear dynamic problems, or for some family of nonlinear dynamic problems. Would some other commonly used direct time integration scheme, such as, let's say, Bathe's method, be more accurate here?
Has anyone perhaps experienced a similar problem? Where can I look for the cause of the oscillations?
Yes, I solved the problem. In brief, we should be mindful when assessing the origin of unexpected oscillations in numerical studies: they can indeed be due to numerical reasons (e.g. the occurrence of a strain discontinuity), but they can also represent physical phenomena (especially in more complex nonlinear analyses). The latter is easier to conclude if the oscillations can be attributed to a particular physical phenomenon, possibly one not recognized before in the "new" context/perspective. (Note that it is certainly prudent, at the very first inspection of the results, to assume that any unexpected oscillations in our numerical studies are spurious.)
You are welcome to see more on this topic in my publications listed on Researchgate.
Regarding your work, thanks for sharing the link, judging from the abstract of your work (paper not available to my institution), your problem is probably a bit different than mine.
• asked a question related to Linear Systems
Question
Suppose there is a linear system of equation Ax =b, x is required. A is a large symmetric matrix.
Which method is faster to compute x?
1. If I solve it through the calculation of the inverse of A and compute x = A^(-1) b.
2. If I solve the linear equations through a direct method (Gaussian elimination) or an iterative method (like Gauss–Seidel).
Hi Iman,
For large symmetric matrices, the Conjugate Gradient (CG) and Conjugate Gradient Least Squares (CGLS) methods are very effective, super fast, and require little memory to implement. The entire matrix A need not be loaded into memory: the algorithm only needs matrix–vector products with A, so it can stream through the rows and columns of A. See the attached book on inverse problems (Aster et al.) for more details.
Regards,
Hamzeh
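To make the CG suggestion concrete, here is a minimal Python/NumPy sketch of plain (unpreconditioned) conjugate gradients; note that the loop only ever touches A through matrix–vector products, which is why A never needs to be formed densely. The matrix and tolerances below are illustrative; in practice a preconditioned library routine (e.g. scipy.sparse.linalg.cg) is preferable.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A with plain CG.
    Only matrix-vector products with A are needed, so A never has to
    be formed as a dense array in memory."""
    x = np.zeros_like(b)
    r = b - A @ x           # residual
    p = r.copy()            # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # small SPD test matrix
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic CG converges in at most N steps for an N×N matrix; in floating point it is used as an iterative method with a residual tolerance, as above.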
• asked a question related to Linear Systems
Question
I am working on signal inversion and facing a matrix inversion problem. I have a linear system in which the signal and the data are known, and I compute the inverse by plain matrix inversion. Now I am appending a vector to both the data and the signal matrix, and I want to compute the new inverse without performing a full inversion again. I want an analytical formula for the data coefficient matrix in terms of the data, the signal, the previous coefficient matrix, and the appended vector.
Of course an analytical expression exists, in terms of the minors of A; it's just not useful for any practical calculation. It's also possible to have an expression in terms of the eigenvectors and eigenvalues, which is more useful.
However, what all these mathematically equivalent approaches amount to is providing ways of checking numerical methods.
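One practical special case does admit a useful closed form: if the appended observation enters the normal-equations matrix as a rank-1 update (as in recursive least squares), the Sherman–Morrison identity updates the previous inverse directly instead of re-inverting. A sketch in Python/NumPy, with random placeholder matrices:

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Inverse of (A + u v^T), given A_inv = inv(A).
    Valid whenever 1 + v^T A^{-1} u != 0."""
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)   # well-conditioned test matrix
u = rng.standard_normal(4)
v = rng.standard_normal(4)

A_inv = np.linalg.inv(A)
updated = sherman_morrison_update(A_inv, u, v)
```

The update costs O(n^2) per appended observation instead of the O(n^3) of a fresh inversion, which is the whole point of this kind of formula.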
• asked a question related to Linear Systems
Question
The response function is other than ratio-dependent.
where all derivatives are evaluated at the equilibrium point x = x_e. The eigenvalues of the Jacobian determine the linear stability properties of the equilibrium: an equilibrium is asymptotically stable if all eigenvalues have negative real parts, and unstable if at least one eigenvalue has a positive real part. For a scalar system, the Jacobian at an equilibrium is J = f'(x). An equilibrium is asymptotically stable when f'(x) < 0, that is, when the slope of f is negative, and unstable when f'(x) > 0. The left two equilibria in the figure are hyperbolic (f'(x) ≠ 0); the others are non-hyperbolic because the slope (the eigenvalue) is zero.
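This eigenvalue test is straightforward to automate; a small Python/NumPy sketch with made-up Jacobians chosen purely for illustration:

```python
import numpy as np

def is_asymptotically_stable(J):
    """Linear stability test: all Jacobian eigenvalues must have
    strictly negative real part."""
    return bool(np.all(np.linalg.eigvals(J).real < 0))

# Hypothetical Jacobians evaluated at two different equilibria
J_stable = np.array([[-2.0,  1.0],
                     [ 0.0, -0.5]])   # eigenvalues -2 and -0.5
J_unstable = np.array([[0.3,  1.0],
                       [0.0, -1.0]])  # eigenvalues 0.3 and -1
```

Note that this test is inconclusive exactly in the non-hyperbolic case, when some eigenvalue has zero real part.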
• asked a question related to Linear Systems
Question
I am trying to model a chaotic fractional-order system in LabVIEW. I can do it in MATLAB, but I want to create the model in LabVIEW.
Can anyone share LabVIEW VIs for fractional-order systems?
• asked a question related to Linear Systems
Question
Dear all,
I would like to study the control of linear and nonlinear systems in detail. So, please suggest some books that provide in-depth knowledge of the topic. Thank you.
Dear Prof. Ankur Gajjar,
In addition to the suggestions of the other researchers, please refer to my work on the solution of linear equations of the form AX = B.
Kind regards
• asked a question related to Linear Systems
Question
Linear stability analysis fails to determine the local stability of a non-hyperbolic equilibrium point, because a centre subspace emerges (in addition to the stable and unstable subspaces) of the linearized system, corresponding to the eigenvalues whose real part is zero. The centre manifold of the corresponding nonlinear system may not be unique. So, what is the exact procedure for analyzing this kind of situation?
The short answer is that there is no "exact" procedure for analyzing non-hyperbolic equilibrium points. However, there is some work using blow-up/down methods; you may look these methods up in the differential equations literature. You can check the approach we used in one of our papers: "Bifurcations and global dynamics in a predator–prey model with a strong Allee effect on the prey, and a ratio-dependent functional response," P. Aguirre, J. D. Flores, E. González-Olivares, Nonlinear Analysis: Real World Applications, 2014. In this paper we analyzed the stability of the origin using blow-up/down techniques.
• asked a question related to Linear Systems
Question
In control theory, using Routh array test, it can be established that a quadratic polynomial
p(x) = a x^2 + b x + c (where a > 0)
is a Hurwitz polynomial (i.e. it has roots with negative real part) if and only if a > 0, b > 0 and c > 0. In an equivalent way, this can be proved using Hurwitz determinants.
I am looking for a simple proof for this fact. Without loss of generality, we can assume a = 1.
For p(x) = x^2 + b x + c, we can use the root finder formula and discuss various cases.
Is there any simple proof? I welcome your ideas and suggestions. Thank you!
I thank Victor for pointing me in the right direction. Vieta's formulas give the easiest proof of the assertion.
Indeed, we suppose that p(x) = x^2 + b x + c as the given polynomial.
Suppose that its roots are x1 = m + i n and x2 = m - i n (where n ≥ 0).
If p(x) is Hurwitz, it has stable roots. This implies that m < 0.
By Vieta's formulas, x1 + x2 = - b and x1 x2 = c.
This gives: b = - (x1 + x2) = - 2 m > 0 and c = x1 x2 = m^2 + n^2 > 0.
Conversely, suppose that b > 0 and c > 0. We show that p(x) is a Hurwitz polynomial, i.e. that m < 0.
This is immediate, since 2 m = x1 + x2 = - b < 0 and so m < 0.
(If instead the roots are real and distinct, say x1 = m1 and x2 = m2, the same formulas settle both directions: m1, m2 < 0 holds if and only if c = m1 m2 > 0 and - b = m1 + m2 < 0.)
We have proved the claim that p(x) = x^2 + b x + c is a Hurwitz polynomial if and only if b > 0 and c > 0.
This is quite an elegant proof and I thank Victor again.
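The criterion is also easy to sanity-check numerically; a small pure-Python sketch using the quadratic formula (function names are mine):

```python
import cmath

def roots_of_monic_quadratic(b, c):
    """Roots of p(x) = x^2 + b*x + c via the quadratic formula."""
    disc = cmath.sqrt(b * b - 4 * c)
    return (-b + disc) / 2, (-b - disc) / 2

def is_hurwitz(b, c):
    """True iff both roots of x^2 + b*x + c have negative real part."""
    r1, r2 = roots_of_monic_quadratic(b, c)
    return r1.real < 0 and r2.real < 0
```

Spot checks such as (b, c) = (3, 2) (roots -1, -2) and (b, c) = (2, -3) (roots 1, -3) reproduce the "b > 0 and c > 0" criterion.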
• asked a question related to Linear Systems
Question
I have torque and angular position data (p) with which I want to fit a second-order linear model T = I s^2 p + B s p + k p (s = j*2*pi*f). So first I converted my data (torque, angular position) from the time domain into the frequency domain. Next, differentiation in the frequency domain was applied to the angular positions to obtain velocity and acceleration data. Finally, a least-squares command, lsqminnorm (MATLAB), was used to estimate the coefficients. I expected a linear relation, but the results show a very low R^2 (<30%), and my coefficients are not always positive!
Filtering of the data:
angular displacements: moving average
torques: low-pass Butterworth, cutoff frequency 4 Hz, sampling 130 Hz
velocities and accelerations: only frequencies in [-5, 5] Hz are passed, to reduce noise
Could anyone help me out with this?
What can I do to get a better estimation?
Here is part of my code:
%%
angle_Data_p = movmean(angle_Data,5);
%% derivative
N=2^nextpow2(length(angle_Data_p ));
df = 1/(N*dt); %Fs/K
Nyq = 1/(2*dt); %Fs/2
A = fft(angle_Data_p );
A = fftshift(A);
f=-Nyq : df : Nyq-df;
A(f>5)=0+0i;
A(f<-5)=0+0i;
iomega_array = 1i*2*pi*(-Nyq : df : Nyq-df); %-FS/2:Fs/N:FS/2
iomega_exp =1 % 1 for velocity and 2 for acceleration
for j = 1 : N
if iomega_array(j) ~= 0
A(j) = A(j) * (iomega_array(j) ^ iomega_exp); % *iw or *-w2
else
A(j) = complex(0.0,0.0);
end
end
A = ifftshift(A);
velocity_freq_p=A; %% including both part (real + imaginary ) in least square
Velocity_time=real( ifft(A));
%%
[b2,a2] = butter(4,fc/(Fs/2));
torque=filter(b2,a2,S(5).data.torque);
T = fft(torque);
T = fftshift(T);
f=-Nyq : df : Nyq-df;
T(f>7)=0+0i; % zero out |f| > 7 Hz in the torque spectrum T
T(f<-7)=0+0i;
torque_freq=ifftshift(T);
% same procedure for fft of angular frequency data --> angle_freqData_p
phi_P=[accele_freq_p(1:end) velocity_freq_p(1:end) angle_freqData_p(1:end)];
TorqueP_freqData=(torque_freq(1:end));
Theta = lsqminnorm((phi_P),(TorqueP_freqData))
estimatedT2=phi_P*Theta ;
Rsq2_S = 1 - sum(abs(TorqueP_freqData - estimatedT2).^2)/sum(abs(TorqueP_freqData - mean(TorqueP_freqData)).^2) % abs() because the frequency-domain data are complex
Dear Delaram Rabiei,
In addition to what is proposed above, I suggest you see the links and attached files on this topic.
Best regards
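One useful debugging step for a fit like this is to run the whole pipeline on synthetic data generated from known coefficients, where R^2 should come out close to 1; if it does not, the problem is in the pipeline rather than in the measurements. A minimal Python/NumPy sketch of that idea (a time-domain version with finite-difference derivatives; the coefficient names I, B, k and the 130 Hz rate follow the question, while the numerical values are invented):

```python
import numpy as np

# Ground-truth coefficients for T = I*p'' + B*p' + k*p (invented values)
I_true, B_true, k_true = 0.02, 0.5, 10.0

dt = 1.0 / 130.0                        # 130 Hz sampling, as in the question
t = np.arange(0.0, 10.0, dt)
# two-tone synthetic angle signal (keeps the regressors well conditioned)
p = np.sin(2*np.pi*1.0*t) + 0.5*np.sin(2*np.pi*3.0*t)

v = np.gradient(p, dt)                  # velocity by finite differences
a = np.gradient(v, dt)                  # acceleration
T = I_true*a + B_true*v + k_true*p      # noise-free synthetic torque

Phi = np.column_stack([a, v, p])        # regressor matrix [a, v, p]
theta, *_ = np.linalg.lstsq(Phi, T, rcond=None)

T_hat = Phi @ theta
R2 = 1 - np.sum((T - T_hat)**2) / np.sum((T - np.mean(T))**2)
```

If this synthetic test passes but the real data still give a low R^2, the culprit is more likely noise amplification in the differentiation step, or dynamics not captured by the second-order model.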
• asked a question related to Linear Systems
Question
As is well known, iterative methods for solving linear systems, such as Successive Over-Relaxation and the like, are very attractive for many problems, such as those with sparse matrices. These methods are generally formulated in the context of a determined system, in which the number of equations equals the number of unknowns. Now, for the sake of simplicity, let us assume that we have one additional observation and need to update the previous solution. In other words, with this additional observation we now have an overdetermined system. The question is how to include this observation to update the previously computed parameters. Indeed, the theory of parameter estimation provides many guidelines for handling this task to get an optimal solution in the least-squares sense. But let us assume that we need to stick to the iterative approach. Then, with this assumption, how could we handle the additional observation for the parameter update, and what kind of error should we minimize?
If you are looking for a sparse solution to a linear system of arbitrary size m by n, you can apply the iterative method called the Orthogonal Matching Pursuit (OMP) algorithm.
Also, the Lasso regression model can be used in such cases; it involves a regularization term in the L1 norm. For sparsity in the solution, minimization in the L1 or L-infinity norm is sought.
The OMP algorithm generates a sparse solution based on minimization in the L0 "norm" (the number of non-zero entries in the vector).
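For reference, a minimal Python/NumPy sketch of OMP, assuming the sparsity level k is known (illustrative only; library versions such as scikit-learn's OrthogonalMatchingPursuit handle normalization and stopping criteria more carefully):

```python
import numpy as np

def omp(A, b, k):
    """Greedy OMP: build a k-sparse x with A x ≈ b by repeatedly picking
    the column most correlated with the current residual, then re-fitting
    by least squares on the selected support."""
    n = A.shape[1]
    support = []
    r = b.copy()
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 100))     # underdetermined: 50 equations, 100 unknowns
x_true = np.zeros(100)
x_true[[3, 17]] = [1.5, -2.0]          # 2-sparse ground truth
b = A @ x_true
x_hat = omp(A, b, k=2)
```

With a random Gaussian matrix and a sufficiently sparse, noiseless right-hand side, OMP typically recovers the support exactly.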
• asked a question related to Linear Systems
Question
When I read papers in the computer vision field, I see many energy functions or objective functions. I can understand a paper at a high level, but when I try to implement it I get stuck on solving the energy function.
I am implementing DynamicFusion: Reconstruction and Tracking of Non-rigid Scenes in Real-Time, published in 2015 CVPR. The energy function is that:
E(Wt,V,Dt,E)=Data(Wt,V,Dt)+λReg(Wt,E)
The paper uses Gauss–Newton to solve for the parameters. Should I convert the energy function into a linear system of the form Ax = b? If so, how do I combine the Data term and the Reg term into Ax = b?
And why do papers usually not give the derivation of the energy function?
Dear Chang Che Kuei,
I suggest you see the links and attached files on this topic.
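On the Gauss–Newton question itself: at each iteration one linearizes the residual vector r(x) ≈ r(x0) + J dx and solves the linear normal equations (J^T J) dx = -J^T r, which is exactly the "convert to a linear system" step; the data term and the regularization term are simply stacked into one residual vector, with the regularizer rows scaled by sqrt(λ). A minimal Python/NumPy sketch on a toy two-variable problem (not the DynamicFusion energy):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, n_iter=20):
    """Minimize 0.5*||r(x)||^2: at each step solve the linear system
    (J^T J) dx = -J^T r and update x (no damping or line search here)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
    return x

# Toy problem: solve x0 + x1 = 3, x0^2 + x1^2 = 5 in least-squares form
res = lambda v: np.array([v[0] + v[1] - 3.0, v[0]**2 + v[1]**2 - 5.0])
jac = lambda v: np.array([[1.0, 1.0], [2.0*v[0], 2.0*v[1]]])
x_hat = gauss_newton(res, jac, np.array([1.0, 0.0]))
```

Real implementations add damping (Levenberg–Marquardt) and exploit the sparsity of J^T J, but the linear solve per iteration is the same idea.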
• asked a question related to Linear Systems
Question
Dear all, I have some questions concerning the use of pole placement techniques (state feedback controller) with model predictive control for linear systems. To be precise: would it be possible to use pole placement to stabilise or improve the system's stability, and then apply model predictive control to the closed-loop stabilised system?
Any ideas will be appreciated.
Thank you.
Thank you for your help Mr. Zahoor.
• asked a question related to Linear Systems
Question
I faced the following problem. I have a nonlinear system for control synthesis, and I should compare not only my controllers but also a linear version of my system, to justify the legitimacy of this linearization. But it never occurred to me how to compare them numerically. For linear systems we often do it in the frequency domain, comparing bandwidth, gain, or phase margin, and we get a numerical result. But I can't use the Laplace transform (for example) because the superposition principle does not hold. I have heard about the nonlinear Fourier transform, but I doubt it could help me.
First, we cannot solve most nonlinear models, so we often instead try to get an overall feel for the way the model behaves: we sometimes talk about looking at the qualitative dynamics of a system. Equilibrium points (steady states of the system) are an important feature that we look for. Many systems settle into an equilibrium state after some time, so equilibria can tell us about the long-term behavior of the system.
Equilibrium points can be stable or unstable: put loosely, if you start near an equilibrium you might, over time, move closer (stable equilibrium) or away (unstable equilibrium) from the equilibrium. Physicists often draw pictures that look like hills and valleys: if you were to put a ball on a hill top and give it a push, it would roll down either side of the hill. If you were to put a ball at the bottom of a valley and push it, it would fall back to the bottom of the valley.
• asked a question related to Linear Systems
Question
A signal is split into two parts; one of them goes through a filter (say, with a transfer function H(f)) and the other part stays unchanged. I want to know how to calculate their cross-correlation function. My guess is that, given the spectral density function S(f), it will be the ordinary Wiener–Khinchin theorem with the transfer function included: R = Integral{S(f)H(f)*exp(i*2*pi*f*t)df}
Agreeing with Pascal Salart, but making it simple. One signal is x(t), the other is y(t); you study the covariance E(x(t+i) y(t+j)), which can be estimated from the windowed vectors v(t, i) = (x(t+i-N), ..., x(t+i))
and w(t, j) = (y(t+j-N), ..., y(t+j)), with N the length of the observation window;
the scalar product <v(t, i), w(t, j)> (suitably normalised) then estimates the expectation E(·) above.
The matrix obtained is a Gram matrix, hence symmetric positive semi-definite, and it diagonalises with real eigenvalues and orthogonal eigenvectors.
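The guess in the question can also be checked numerically. In discrete time, if y = h * x, the cross-correlation satisfies R_xy = h * R_xx, i.e. the cross-spectrum is the input spectral density multiplied by the transfer function (whether H or its conjugate appears depends on the sign convention used in defining R_xy). A Python/NumPy sketch using circular signals, so that the identity is exact:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
x = rng.standard_normal(N)          # unfiltered branch
h = np.zeros(N)
h[:4] = [0.5, 0.3, -0.2, 0.1]       # short FIR filter (treated as circular)

X = np.fft.fft(x)
H = np.fft.fft(h)                   # transfer function H(f)
y = np.fft.ifft(H * X).real         # filtered branch, y = h (*) x

# Direct circular cross-correlation R_xy[tau] = sum_t x[t] * y[t+tau]
R_direct = np.array([np.dot(x, np.roll(y, -tau)) for tau in range(N)])

# Spectral route: |X|^2 plays the role of S(f); multiply by H, transform back
R_spectral = np.fft.ifft(np.conj(X) * X * H).real
```

The two arrays agree to machine precision, confirming the "S(f) times H(f)" form for this convention.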
• asked a question related to Linear Systems
Question
Hi all, I have a question for Control system development specialists.
I compute the controllability of a linear system through the MATLAB function ctrb, and I know that the system has 1 uncontrollable state. How can I determine which state is uncontrollable?
Dear Niccolo Lemonis,
Your system is in a state-space representation (A, B, C, D). Whatever form it is in, it will be necessary to transform it by a change of basis into diagonal canonical form if the poles (eigenvalues of matrix A) are distinct (case 1), or into Jordan form if the poles are multiple (case 2).
In case 1, a state is uncontrollable if the corresponding row of matrix B is zero; in case 2, it is the last row of the Jordan block (of size equal to the multiplicity of the pole) and the corresponding row of matrix B that must be zero. The same principle applies to the Kalman decomposition.
Best regards
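A complementary, numerically convenient check is the PBH (Popov–Belevitch–Hautus) test: an eigenvalue λ of A is uncontrollable iff rank([λI − A, B]) < n. A minimal Python/NumPy sketch (the example system is made up; by construction, the mode at −3 receives no input):

```python
import numpy as np

def uncontrollable_modes(A, B, tol=1e-9):
    """PBH test: an eigenvalue lam of A is uncontrollable iff
    rank([lam*I - A, B]) < n."""
    n = A.shape[0]
    bad = []
    for lam in np.linalg.eigvals(A):
        M = np.hstack([lam * np.eye(n) - A, B])
        if np.linalg.matrix_rank(M, tol=tol) < n:
            bad.append(lam)
    return bad

# Hypothetical diagonal system: the third row of B is zero,
# so the mode at -3 cannot be influenced by the input
A = np.diag([-1.0, -2.0, -3.0])
B = np.array([[1.0], [1.0], [0.0]])
modes = uncontrollable_modes(A, B)
```

Unlike the rank of ctrb(A, B), which only counts uncontrollable states, the PBH test identifies which eigenvalues (modes) are uncontrollable.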
• asked a question related to Linear Systems
Question
I have a model that consists of two ODEs and one PDE, which are all coupled and nonlinear. When I linearize the system around the steady state and simulate it with initial values that differ from the steady state, I get a stationary deviation in one of the solutions. Due to the nonlinearities I expect somewhat different behavior from the linear system. However, if I multiply some of the elements of the system matrix by a constant (a relaxation or damping coefficient?), the linear system resembles the nonlinear system much better in terms of transient behavior, and I no longer have the stationary deviation I had before. Is this method of fitting the linear model allowed, and is there theory that can justify the use of such relaxation constants? Or is it just a "cheat" that is not valid, since the constant changes the system?
It is difficult to give any hint without knowing the equations. However, at a general level linearization around an equilibrium might not provide a correct behaviour if the linearized system is not exponentially stable, i.e., if some of its eigenvalues are zero. For example, the ODE
y' = -y^3
is stable around y=0, but its linearization is
y'=0
for which each initial condition is stationary.
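This example is easy to reproduce numerically; a small pure-Python Euler simulation of y' = -y^3 against its linearization y' = 0 (step size and horizon chosen arbitrarily):

```python
def euler(f, y0, dt=1e-3, steps=20_000):
    """Explicit Euler integration of y' = f(y) up to t = dt*steps."""
    y = y0
    for _ in range(steps):
        y += dt * f(y)
    return y

# Nonlinear system y' = -y^3: solutions decay to 0 (slowly, like t**-0.5)
y_nonlinear = euler(lambda y: -y**3, 1.0)

# Linearization at y = 0 is y' = 0: every initial condition just stays put
y_linearized = euler(lambda y: 0.0, 1.0)
```

The nonlinear trajectory has decayed well below its initial value by t = 20, while the linearized one has not moved at all, exactly the discrepancy described above.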
• asked a question related to Linear Systems
Question
Dear colleagues,
Let us have a linear system:
Ax=b, (1)
A is an (N, N) matrix; b and x are vectors of length N. We decrease the number of equations to form an underdetermined system:
A'x=b', (2)
A' is an (M, N) matrix and b' is a vector of length M, with M < N.
Can we find any formula for a difference between normal pseudo-solution of the system (2) and exact solution of the system (1)?
Can we find how many equations must we use to estimate exact solution by normal pseudo-solution with the known precision?
Can we determine a convergence rate of the normal pseudo-solution to the exact one in dependence of matrix A parameters (condition number, singular values, etc.)?
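I don't know a general closed-form bound off-hand, but one exact structural fact is easy to verify numerically: when b' is consistent (b' = A'x), the minimum-norm pseudo-solution of (2) is the orthogonal projection of the exact solution of (1) onto the row space of A', so the error is exactly the component of x lying in the null space of A'. A Python/NumPy sketch of that experiment (random test matrix, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 5
A = rng.standard_normal((N, N))
x_exact = rng.standard_normal(N)
b = A @ x_exact

# Keep only the first M equations: an underdetermined, consistent system
Ap, bp = A[:M], b[:M]
x_pseudo, *_ = np.linalg.lstsq(Ap, bp, rcond=None)   # minimum-norm solution

# The pseudo-solution equals the projection of x_exact onto the row space
# of Ap, so the error is the null-space component of x_exact
P = np.linalg.pinv(Ap) @ Ap
err_predicted = x_exact - P @ x_exact
```

This suggests the convergence question reduces to how quickly the null space of A' shrinks (and how x_exact is oriented relative to it) as M grows toward N.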
• asked a question related to Linear Systems
Question
Hello dear Dr. Krack!
I have the mass and stiffness matrices of the finite element model of a free structure. Of course, the linear system built from these matrices has 3 rigid-body modes, and the first three natural frequencies are zero! When trying to compute the linear FRF of this system using your code, I notice that the results are not close to the test structure's FRF results. I think some additional modifications need to be made to the system or to your code to account for the free-free boundary condition of the system. What do you think those modifications would be?
Yes.
• asked a question related to Linear Systems
Question
Dear community,
I am trying to build a model of a Furuta pendulum in Simulink/Simscape. Unfortunately, when I try to linearize my model with the integrated Model Linearizer, I get an unexpected result. The evaluation of the linearized system shows that it is only poorly controllable, although a classic Furuta pendulum should be fully controllable according to the literature. Therefore I assume that there must be something wrong with my model or with the way I linearized it, but I can't figure out what it is... I'd highly appreciate any help on that, as this has been bothering me for quite some time now. The model is attached to this post. Furthermore, I have attached a screenshot of the linearized system.
My controllability matrix (ctrb(A,B)) then looks like this, with rank = 1, which I believe can't be right...
Controllability matrix =
1.0e+26 *
0 0.0000 0.0000 -0.0000 0.0000
0.0000 0.0000 -0.0000 0.0000 -0.0007
0 0.0000 0.0000 -0.0000 0.0000
0.0000 0.0000 -0.0000 0.0000 -0.0015
0.0000 -0.0000 0.0000 -0.0000 9.2972
Thank you and best regards, Joo
Dear Joo ...
Regards
• asked a question related to Linear Systems
Question
What is the SOR algorithm for a linear system of equations?
Hi, the method of successive over-relaxation (SOR) is a variant of the Gauss–Seidel method for solving a linear system of equations, resulting in faster convergence. You can refer to some papers on this topic:
• A. Hadjidimos, Successive overrelaxation (SOR) and related methods, Journal of Computational and Applied Mathematics 123 (2000), 177-199.
• Yousef Saad, Iterative Methods for Sparse Linear Systems, 1st edition, PWS, 1996.
Best
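A minimal Python/NumPy sketch of SOR for reference (omega = 1 recovers Gauss–Seidel; for symmetric positive-definite A, convergence is guaranteed for 0 < omega < 2):

```python
import numpy as np

def sor(A, b, omega=1.25, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for A x = b (A must have a nonzero
    diagonal); omega = 1 reduces to Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # split row i into already-updated and not-yet-updated parts
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])   # SPD and diagonally dominant
b = np.array([2.0, 4.0, 10.0])
x = sor(A, b)
```

The relaxation factor omega is a tuning parameter; the optimal value depends on the spectral radius of the Jacobi iteration matrix, as discussed in the references above.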
• asked a question related to Linear Systems
Question
I am teaching a holistic course about ecosystems and the use of natural resources.
During the course we discuss how ecosystems are complex systems and how intervention may bring very unpredictable consequences.
...I can't help but wonder: are all interactions in nature part of complex systems, leaving linear systems as approximations we use to describe parts of nature, or do we actually have linear systems in nature?
I think you are not answering the original question. Is linearity assumption useful and actually accurate in many applications? Absolutely. But "do we actually have linear system in nature" or are they "an approximation of nature"?
None of the examples you provided are linear. Let us take your grass and deer example: what about the negative numbers of deer? Or grass? Or even not an integer number of deer (they are quantized, as far as I can tell)? Or even outside of the mathematical definition of linearity, what happens when the number of deer becomes too large to be supported by the meadow of a fixed size - then the 'eating rate' will be capped at some point. Does the postulated proportionality hold at any point in time, or averaged over some period of time?
Given that the absolute majority of "things" in the universe (mass/energy etc.) are quantized, the existence of anything "truly linear" seems to be impossible. Maybe one can do something "truly linear" with time or gravity, but then one gets into things like "what happened before the big bang", or even "what is time anyway"...
• asked a question related to Linear Systems
Question
I am still unsure about the relationship between BIBO and Lyapunov stability of simple undelayed LTI SISO systems.
Basic facts:
1) The system is STABLE if all system poles (eigenvalues) are in the open left-half plane (LHP), or if any poles on the imaginary axis are simple (non-repeated).
2) The system is ASYMPTOTICALLY or EXPONENTIALLY STABLE if it has all system poles (eigenvalues) in the open left-half plane (LHP).
3) The system is BIBO STABLE if it has all system poles (eigenvalues) in the open left-half plane. Or, the system is BIBO STABLE if its impulse function is absolutely integrable (i.e., it is L1-stable).
4) Btw., it is a fact that LTI SISO systems with DELAYS can have infinitely many poles in the LHP (with complex infinity as the only accumulation point). Such systems are EXPONENTIALLY stable, but they may or may not be ASYMPTOTICALLY, Hinf, or BIBO stable. Here, moreover, BIBO implies Hinf stability.
Notes:
- Some authors consider BIBO stability as a feature of the TRANSFER FUNCTION, not the SYSTEM itself. That is, the system may contain unstable modes that cannot be seen at the output. Therefore, every asymptotically Lyapunov stable system is BIBO stable, but not vice versa.
- I have also found the idea in the literature that BIBO is stronger than asymptotic Lyapunov stability; however, I believe this is incorrect.
Could anyone clearly explain to me whether there exists any general relationship (inclusion, implication, ...) between BIBO and (asymptotic) Lyapunov stability for SISO LTI delay-free systems, please?
Dear Libor,
I don’t know if you eventually found the answer to your question. I shall give my answer below.
The output y(t) of a system has two components: the free component y_L(t) generated by the initial conditions and the forced component y_F(t) generated by the input signal.
The forced component is the convolution between the causal impulse response h_C(t) (the inverse Laplace of the transfer function H(s)) and the input signal u(t) and has two sub-components: the transitory component y_T(t) related to the system’s structure (influenced by the signs of the poles) and the permanent component y_P(t) related to the input signal (influenced by the input’s poles).
If the linear system is exponentially stable, then the free and transitory components of the output response vanish as time increases, i.e. y_L(t) -> 0 and y_T(t) -> 0 as t -> infinity. Thus, the only component which remains is y_P(t); moreover, for exponentially stable systems the permanent component has the same shape as the input signal as t -> infinity.
BIBO-stability reflects the input-output stability and it is evaluated on the transfer function, H(s). On the state space model one evaluates Lyapunov, asymptotic, exponential and other kind of stability properties.
Consequently:
- If the linear system is internally Lyapunov exponential stable, then for bounded inputs the output responses are bounded, i.e. THE SYSTEM IS BIBO-STABLE;
- If the linear system is internally unstable, but it is also non-minimal such that all the uncontrollable or unobservable eigenvalues are the unstable ones (those in the open right complex half-plane), then THE SYSTEM IS BIBO-STABLE. Explanation: taking into account that the transfer function obtained from the state-space model is always irreducible, the unstable roots of the characteristic polynomial will be cancelled by matching zeros of the numerator of H(s); consequently, H(s) will have only poles with negative real part, which means that the transitory component vanishes as time tends to infinity, and for bounded inputs the permanent component of the output response will also be bounded, as determined by the poles of the input (the Laplace transform of u(t)).
- Conversely, if the transfer function has all its poles with negative real part, i.e. the system is BIBO stable, we cannot know if it is also Lyapunov exponentially stable (the characteristic polynomial can have uncontrollable or unobservable eigenvalues with positive real parts). From a stable transfer function one obtains a minimal state realization which is thus exponentially stable, but this will not reflect the internal state of the system which can be evaluated only on the system mathematical model of differential equations – i.e. the equations which result directly from the mathematical modeling process.
- A transfer function with all its poles in the open left complex half-plane has the impulse response absolutely integrable, i.e. it is L1-stable and vice-versa.
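The pole-zero-cancellation point can be made concrete with a two-state example (made up for illustration): an unstable internal mode that is unobservable drops out of the transfer function, which then equals H(s) = 1/(s+1) and by itself looks BIBO-stable.

```python
import numpy as np

A = np.array([[-1.0, 0.0],
              [ 0.0, 2.0]])        # internal modes at -1 (stable) and +2 (unstable)
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])         # the output only sees the first state

# Internal (exponential/Lyapunov) stability: all eigenvalues in the open LHP?
internally_stable = bool(np.all(np.linalg.eigvals(A).real < 0))

# Observability matrix [C; C A] has rank < 2, so the unstable mode at +2
# is unobservable and cancels out of the transfer function H(s) = 1/(s+1)
Obs = np.vstack([C, C @ A])
rank_obs = np.linalg.matrix_rank(Obs)
```

So the transfer function is BIBO-stable while the state-space model is not Lyapunov stable, which is exactly the one-directional implication discussed above.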
• asked a question related to Linear Systems
Question
I want to transform a linear system matrix from x basis to z basis where Z=TrX. Tr is obtained through QR factorization. The linear system is in state-space form. Furthermore, I can use the Transformation to obtain regular form matrix for Eigenvalue placement design or Linear-Quadratic minimization feedback sliding mode control.
What I understand from the question: if A is the matrix of your linear system in the X basis, then one can always compute its QR factorization and find its eigenvalues through the QR algorithm. Then rewrite your system in terms of the eigenvalues obtained.
• asked a question related to Linear Systems
Question
I have a nonlinear system, I linearized it using the Jacobian method and got the A, B, C, D matrices for state-space representation. How could I take into account the change of the equilibrium point - i.e. the point where the system is linearized - during a simulation? I tried to solve it by subtracting u0 from the input and adding y0 to the output of the system - please see the attached picture -, but the results are incorrect because the operating point where the system is linearized is changing.
First of all, you need to make sure that the Jacobian matrix evaluated at the equilibrium state is hyperbolic (the eigenvalues have non-zero real parts). Then you can apply the Hartman–Grobman theorem: the linear system and the nonlinear system behave the same in a neighborhood of the equilibrium point. So the main point is to determine the eigenvalues of the Jacobian matrix, which is essential to study the stability of your system.
Best regards
• asked a question related to Linear Systems
Question
Dear friends and colleagues! I have designed a Simulink model of an inverted pendulum control system. In order to study the LQR I have also linearised the model and obtained the K gain matrix for the controller, which works well for the linear system; now I am hoping to apply it to the non-linear system as well. However, I do not quite understand the reference for the control system.
The system has 4 states: position of the cart, pendulum angle, and their derivatives. I am modeling a data acquisition system, so I assume I can only have encoders measuring the position and angle, and then I model the software derivatives I could perform on a computer. Then I have a block computing u = K1*x1 + K2*x2 + K3*x3 + K4*x4, which seems to be my input to the system. I assume, however, that there must be some comparison block, but I cannot seem to wrap my head around it. In the diagram I have attached to this question you can see the reference is set to zero, but I did it merely to have some reference in general. But what is this zero? What if my desired states are not zero? What if I want to move my reference to some other point during execution of the program? Which one do I move? What if I still want to stabilise the pendulum, but not at x = 0, rather at x = ref?
Hi, so far, it seems you are on the right path. However, I have some remarks:
1. What's your control goal? I assume the control objective is to drive the system such that the cart reaches a desired position and the inverted pendulum stabilizes in the upright position. Then that desired position, together with the angle (with respect to your model) giving you the upright position, are your controller references (the angle could be 180° or 0°, depending on which angle you took as the reference when developing your model).
2. Because you are thinking in implementation and as you said the only measurements available will be position and angle, then you would need an observer to estimate the other two states (derivatives). I recommend a Kalman Filter.
3. In order to better understand how to use your linear approximation to control your non-linear model, the following paper could help (which explain how to translate your linear model to the operating point of the nonlinear system):
Regards.
• asked a question related to Linear Systems
Question
a) Can subharmonics ever appear in linear systems?
b) Is it possible for subharmonics to appear in, e.g., 2-DOF nonlinear systems? What are the conditions for this to happen? For example, this does not happen in the periodically forced Duffing oscillator (a typical nonlinear system).
see
Farey sequence in the appearance of subharmonic Shapiro steps
Odavić, Jovan; Mali, Petar; Tekić, Jasmina. Physical Review E, 2015.
• asked a question related to Linear Systems
Question
I have an element with an impedance that has a complicated relationship with frequency. I have defined the mathematical relation for the impedance as a MATLAB function and used MATLAB function block to simulate it in Simulink. The problem is that I could not find any variable impedance element which can be controlled by a signal in Simulink. The only element which could be used is variable resistance that neglects the imaginary part of the impedance. On the other hand, impedance block is only useful for defining linear systems with zeros and poles and polynomial type relations for impedance. The impedance relation with angular frequency is attached.
Indeed, I have also faced another problem. The mathematical relation available for describing this impedance relates frequency to impedance. On the other hand, this nonlinear element is connected to a nonlinear power-electronic circuit, so the voltage applied to the element does not contain any unique or finite set of frequencies. I need a time-domain relation for the impedance of this element which depends on the continuous range of frequencies of the applied voltage, and I do not know how to obtain it. The inverse Fourier transform only gives me a function of time, and an impedance that is only a function of time has no relation to the waveform of the voltage applied to it. Since the relation with frequency is nonlinear, I cannot simply replace every "j2*pi*f" in the voltage-current relation with a time derivative of the voltage. Maybe a Taylor expansion of the frequency-domain impedance relation could yield a time-domain relation for the impedance as a function of the first, second, and higher derivatives of the voltage, but that makes things much more complicated and introduces approximations and errors into the simulation due to truncating the Taylor expansion to finitely many terms. Is there any better way to simulate this element's electrical behavior in Simscape?
• asked a question related to Linear Systems
Question
Let's say I want to write a program/script in MATLAB, and the condition is that the MATLAB code should generate a noise signal drawn from a Gaussian distribution with mean 11 and variance 18. But I am confused about which formula/expression to use here: a formula into which I can put the values of the mean and variance and then get a noise signal generated from the Gaussian distribution.
sigma = sqrt(18); % standard deviation
mu = 11; % mean
z = randn(100,1); % Sample from Standard Gaussian (mean 0, variance 1)
x = z*sigma + mu; % Sample from Gaussian (mean 11, variance 18)
• asked a question related to Linear Systems
Question
I am doing a mixed-mode simulation of a TFET inverter in Sentaurus. It reports that the linear system cannot be solved and does not converge. Does anyone know this error, or any other way to do the same?
Thank you very much for your reply, Dr Deepak Kumar. Anyhow, I have managed to simulate a TFET inverter circuit using TCAD mixed-mode simulation. Still, there are difficulties with other circuits. If you don't mind, could I contact you at the above-given number in the future?
• asked a question related to Linear Systems
Question
For the extended Kalman filter, we need to compute the matrix of partial derivatives (the Jacobian) to convert the nonlinear system to a linear one using a first-order Taylor approximation.
Thank you so much.
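When the analytic Jacobian is tedious to derive, it can be checked (or replaced) by finite differences. A minimal sketch, with a hypothetical nonlinear state map:

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x (the matrix the EKF linearizes with)."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx)) - fx) / eps  # one column per state
    return J

# hypothetical nonlinear state map x_{k+1} = f(x_k)
f = lambda x: np.array([x[0] + 0.1 * x[1], x[1] - 0.1 * np.sin(x[0])])
print(jacobian(f, [0.0, 1.0]))
```

Forward differences lose about half the floating-point digits; central differences or automatic differentiation are more accurate if the Jacobian enters the covariance propagation directly.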
• asked a question related to Linear Systems
Question
How does K (the controller gain) affect J (the objective function) in LQR control? How do I explain this a bit theoretically?
Thank you everyone for all your time and good explanations. It is clear :)
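One way to see the relationship numerically: for any stabilizing gain K, the closed-loop cost is J(K) = x0' P_K x0, where P_K solves a Lyapunov equation, and the LQR gain obtained from the Riccati equation is precisely the K that minimizes this cost. A sketch with a hypothetical double integrator:

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

A = np.array([[0., 1.], [0., 0.]])   # hypothetical double integrator
B = np.array([[0.], [1.]])
Q = np.eye(2); R = np.array([[1.]])
x0 = np.array([1., 0.])

def cost(K):
    # J(K) = x0' P_K x0, where (A-BK)' P + P (A-BK) = -(Q + K'RK)
    Acl = A - B @ K
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    return x0 @ P @ x0

P = solve_continuous_are(A, B, Q, R)       # Riccati solution
K_opt = np.linalg.solve(R, B.T @ P)        # LQR gain, minimizes J over all K
print(cost(K_opt), cost(K_opt * 1.5))      # any other stabilizing K costs more
```

Detuning K in either direction (too soft: slow decay dominates the state penalty; too aggressive: the K'RK term dominates) increases J, which is the intuition behind the Riccati optimality condition.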
• asked a question related to Linear Systems
Question
Hello,
I am using ILU factorization as the preconditioner of a Bi-CGSTAB solver for a linear system of equations Ax=b. The preconditioning step with M=(LU)^-1 is applied by forward and backward substitution, solving Ly=c and then Us=y. However, when A has zero diagonal elements (e.g. A(2,2) = 0), U will also have zero diagonal elements (U(2,2) = 0), which makes the triangular solves break down.
How could I reorder my system of equation in order to tackle this problem?
According to your assumptions, we have n distinct equations.
To reach your request proceed as the following:
Rearrange the equations as follows:
1). The coefficient of the first unknown in the first equation is not zero.
2).The coefficient of the second unknown in the second equation is not zero.
etc
n).The coefficient of the nth-unknown in the nth-equation is not zero.
The new augmented matrix guarantees your request.
Otherwise, if no such rearrangement exists and det(A) = 0, then the system has
i) no solution,
or
ii) an infinite number of solutions.
Best regards
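The rearrangement described above is known as finding a maximum transversal, and SciPy exposes it via bipartite matching. A sketch (serious sparse direct codes such as MUMPS or SuperLU apply an MC64-style permutation internally; a pivoting ILU variant such as ILUTP is an alternative):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

# toy matrix with zeros on the diagonal (A[0,0] == A[1,1] == 0)
A = csr_matrix(np.array([[0., 2., 0.],
                         [1., 0., 3.],
                         [0., 4., 1.]]))

# maximum transversal: a row permutation that moves nonzeros onto the diagonal
perm = maximum_bipartite_matching(A, perm_type='row')
Ap = A[perm, :].toarray()
print(np.diag(Ap))   # with a perfect matching, all diagonal entries are nonzero
```

The permuted system P*A*x = P*b is then safe for ILU; remember to apply the same permutation to the right-hand side.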
• asked a question related to Linear Systems
Question
I'm looking for an algorithm (and a storage scheme for the non-zero elements) for solving linear systems with large sparse unsymmetric matrices (not diagonally dominant).
I'm interested ONLY in DIRECT methods.
(The matrices are obtained from FDM for hyperbolic PDE).
The PETSc library provides a generic interface to call various solvers, including some direct solvers for square unsymmetric matrices.
The listed direct solvers for square unsymmetric sparse matrices (seqaij or aij) are mostly based on LU decomposition: PaStiX, MUMPS, SuperLU, UMFPACK and KLU from SuiteSparse, MATLAB, and LUSOL.
Using PETSc makes it easy to change from one solver to another, or to try a preconditioner. Direct solvers are provided as preconditioners. For instance, command line options can be:
-pc_type lu -pc_factor_mat_solver_type superlu_dist -ksp_monitor -ksp_view
scipy's scipy.sparse.linalg.spsolve seems to wrap UMFPACK and SuperLU:
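A minimal usage sketch of spsolve (by default it uses SuperLU; with scikit-umfpack installed it can use UMFPACK), with a small hypothetical unsymmetric system:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

# small unsymmetric sparse system; spsolve does a direct LU solve under the hood
A = csc_matrix(np.array([[4., 0., 1.],
                         [2., 5., 0.],
                         [0., 3., 6.]]))
b = np.array([1., 2., 3.])
x = spsolve(A, b)
print(np.allclose(A @ x, b))  # True
```

CSC format is preferred by the underlying SuperLU factorization; spsolve will convert and warn otherwise.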
• asked a question related to Linear Systems
Question
Generally speaking, a linear system is solvable, and a decentralized system can be solved more efficiently.
Decentralized implies that there is no single point where the decision is made. Each node makes a decision about its own behavior, and the resulting system behavior is the aggregate response; a centralized "linear system", by contrast, is dealt with all at once.
• asked a question related to Linear Systems
Question
Consider the state-space representation of a linear system as described by
x'(t) = Ax(t) + Bu(t)
y(t) = Cx(t)
State estimation of the above system is possible with a Luenberger observer.
BUT IF
x'(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + a(t)
The question is: if an attack vector a(t) is added, how can you estimate the states, keeping in mind that there is no noise and no disturbance in the system?
I'm thinking you might be able to augment your state vector x(t) to incorporate a(t), i.e. use z(t) = [x(t); a(t)], and re-define A, B, C (&D?) for this augmented system. This would also allow you to model aspects of a(t)'s behaviour too.
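That augmentation can be sketched as follows, with a hypothetical 2-state plant and the simplest attack model (a constant bias, a_dot = 0); the augmented pair must be checked for observability before designing the observer:

```python
import numpy as np
from scipy.signal import place_poles

# hypothetical 2-state plant with a constant attack a(t) on the output
A = np.array([[0., 1.], [-2., -3.]])
C = np.array([[1., 0.]])

# augmented state z = [x; a], with the attack model a_dot = 0
Az = np.block([[A, np.zeros((2, 1))],
               [np.zeros((1, 3))]])
Cz = np.array([[1., 0., 1.]])          # y = C x + a

# check observability of the augmented pair (Az, Cz) first
O = np.vstack([Cz @ np.linalg.matrix_power(Az, k) for k in range(3)])
assert np.linalg.matrix_rank(O) == 3

# Luenberger gain by duality: place the poles of (Az', Cz')
L = place_poles(Az.T, Cz.T, [-3., -4., -5.]).gain_matrix.T
print(np.sort(np.linalg.eigvals(Az - L @ Cz).real))   # observer poles -5, -4, -3
```

The observer then estimates x and a jointly; for a time-varying attack you would need a richer internal model of a(t) (e.g. a ramp or oscillator model) in the augmented dynamics.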
• asked a question related to Linear Systems
Question
Hi, everyone,
Do you know some good MATLAB subroutines (direct solvers) for solving nonsymmetric Toeplitz linear systems? I note that most direct solvers are only suitable for symmetric Toeplitz matrices, but recently, during my numerical simulations, I needed to solve a series of nonsymmetric Toeplitz linear systems, i.e., A*x_i = b_i, i = 1,...,s. The size of A ranges from 8 to 256. This size is not large, so I do not want to consider iterative solvers such as GMRES with circulant preconditioners. It would be great if the subroutines could be used in the following form:
>>  y = nonsymToeplitz(c,r,x)  % c is the first column of Toeplitz matrix A, and r is the first row of A,   x is the rhs,  i.e., y = A^{-1} x;
If you know some information for this, please tell me.
Hi, Xiang-Ming Gu
LEVINSONG(R,V,N) solves a Hermitian Toeplitz system of equations using the Levinson-Durbin recursion, it is available on http://mathforum.org/kb/message.jspa?messageID=74329
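If MATLAB is not mandatory, note that SciPy's solve_toeplitz handles the nonsymmetric case directly with a Levinson-type O(n^2) recursion, taking the first column and first row separately (a sketch with small hypothetical data):

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

c = np.array([4., 1., 0.5])    # first column of A
r = np.array([4., 2., 0.25])   # first row of A (nonsymmetric: r != c)
b = np.ones(3)

# Levinson-type solve; no symmetry of A required
x = solve_toeplitz((c, r), b)
print(np.allclose(toeplitz(c, r) @ x, b))  # True
```

Like all Levinson-type methods, it assumes the leading principal minors of A are nonsingular; for ill-conditioned Toeplitz matrices a plain LU solve is the safer fallback.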
• asked a question related to Linear Systems
Question
For linear control systems x_dot = Ax + Bu, the reachable set can be calculated using the image of the controllability matrix, i.e.
R = [B, AB, A^2B, ..., A^(n-1)B] and reachable set = Im(R).
When rank(R) = n and we do not have any control constraints, the reachable set of the linear system is R^n (n is the dimension of the state).
if we have a non-linear affine-control system
x_dot=f(x)+g(x)*u
R can be calculated using Lie algebra
R=[g1,g2,[f,g1],[f,g2],...]
My question is: in this case, is the reachable set again Im(R)?
And if the reachable set = Im(R), how can we compute it, given that here R is a matrix of functions (functions of the states)?
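For small symbolic examples the Lie brackets can be computed directly. One caution: full rank of the bracket distribution gives local accessibility (the Lie algebra rank condition), which for systems with drift is weaker than controllability, so Im(R) is in general not the reachable set itself. A sketch with a hypothetical f and g:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, sp.sin(x1)])   # hypothetical drift vector field
g = sp.Matrix([0, 1])             # input vector field

def lie_bracket(f, g, x):
    # [f, g] = (dg/dx) f - (df/dx) g
    return g.jacobian(x) * f - f.jacobian(x) * g

fg = lie_bracket(f, g, x)
D = g.row_join(fg)                # distribution spanned by {g, [f, g]}
print(sp.simplify(D.det()))       # nonzero => the distribution has full rank
```

Here the determinant is identically 1, so the accessibility rank condition holds everywhere; for state-dependent R you generally get rank conditions valid on regions of the state space rather than a single global answer.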
• asked a question related to Linear Systems
Question
Dear friends
I have a serious problem regenerating the results of the attached paper. I follow exactly the method (FEM) mentioned in the paper, but I don't get correct results.
The paper is about gas face seals and the compressible zeroth- and first-order perturbed Reynolds equations. I solved the zeroth-order equation with FEM and FDM correctly and got the same results as reported in the paper (opening force for various pressure ratios), and I solved the first-order equation for a pressure ratio of 1 (figures 5 and 6) (I would be thankful if you could look at the paper), but I cannot reproduce the paper's results in the other figures. I have tried many methods to solve the linear system of equations (Ax=b), such as direct methods and iterative methods (e.g. Gauss-Seidel, CGS, GMRES, BiCG, PCG, ...), but I failed; I have also tried many mesh grids with no success.
So what should I do?
I really know my question is general, but I really don't know about the errors.
Thanks dear Ryan Vogt and Debopam Ghosh
• asked a question related to Linear Systems
Question
Hello
Does anybody know how we can get the A, B, C and D state-space matrices from PowerFactory?
You can export MATLAB-readable files from PowerFactory, but they are not the A, B, C and D state-space matrices.
Thank you so much
Hi
One of the disadvantages of DIgSILENT PowerFactory is the lack of the B, C, D matrices in its output. This tool only presents the A matrix, so it is not suitable for control applications.
• asked a question related to Linear Systems
Question
In control systems, if we have to analyze a nonlinear system, we linearize the system around its equilibrium point. I would like to ask why we consider only the equilibrium points. Suppose we consider some other, non-equilibrium point for linearization; what would be the effect on the analysis?
If you control a system, you often want to operate it around an equilibrium, so it makes sense to use this point. However, there might be cases in which you want to shift this operating point via (nonlinear) control and generate a new equilibrium. The simple idea is given here:
Then you could linearize about the new equilibrium point of the closed loop.
Generally, for control purposes you also can linearize along a known trajectory. Think of a rocket or a robot arm. In this case your system is time varying. To use well established linear tools, you can check (closed loop) stability on these operating points (i.e. which are not equilibrium points) along the trajectory, or design controllers along the trajectory. This will give you the behavior close to this point. However, stability does not mean that it will return to this point.
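The difference is easy to see symbolically: at a non-equilibrium point the first-order expansion keeps a constant drift term f(x0) != 0, so the result is affine rather than linear, and standard LTI tools do not apply directly. A sketch with a hypothetical scalar system:

```python
import sympy as sp

x = sp.symbols('x')
f = -x + x**3                      # hypothetical scalar dynamics x_dot = f(x)

x0 = sp.Rational(1, 2)             # NOT an equilibrium: f(1/2) = -3/8 != 0
A = sp.diff(f, x).subs(x, x0)      # Jacobian at x0
c = f.subs(x, x0)                  # constant drift term left over

# Around a non-equilibrium point the linearization is AFFINE, not linear:
#   d(dx)/dt ~= c + A*dx,  where dx = x - x0
print(c, A)
```

At an equilibrium, c = f(x0) = 0 and the genuinely linear system d(dx)/dt = A*dx remains, which is what makes eigenvalue-based stability analysis meaningful there.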
• asked a question related to Linear Systems
Question
Hello People!
I have a question regarding how researchers use the matrices from the sparse matrix marketplace(https://sparse.tamu.edu/) for solving linear systems of equations.
So given that we have to solve for x in : Ax = b
The matrices from the collection are the A in the above equation. But what about the vector b? How do we decide on that? Do we just pre-select some vector x, multiply A with it to obtain b, and then use that b to recompute x using some method (the focus of the researcher)? Or is it something else?
I've come across some papers where people mention the names of the matrices used for testing and their initial guess vector x0. But I am confused as to how they select the vector b. Is there a general practice, or does it vary from author to author?
There must be something I am missing, might be very silly, so I apologize in advance!
Some Papers for reference and example (with page numbers) :
Page 6-7 of:
Page 848 -849 of :
Thanks !
In "Incomplete Cholesky factorizations with limited memory" (Lin, More, paragraph above formula (4.1)), it is said: "the vector b is the vector of all ones". Or you can look at https://github.com/vakho10/Sparse-Storage-Formats
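Both conventions mentioned in this thread are common, and they test different things. A sketch (the matrix below is a random stand-in; with a real collection matrix you would load the MatrixMarket file instead):

```python
import numpy as np
from scipy.io import mmread   # matrices from sparse.tamu.edu ship as MatrixMarket files

# Two common conventions when the collection provides no right-hand side:
#   (a) b = A @ x_true for a known x_true  -> lets you measure the error in x
#   (b) b = vector of all ones             -> only the residual is measurable
# A = mmread('matrix.mtx').tocsc()         # (file name is a placeholder)
A = np.random.default_rng(0).random((5, 5)) + 5 * np.eye(5)  # stand-in matrix

x_true = np.ones(A.shape[0])
b = A @ x_true                             # convention (a)
x = np.linalg.solve(A, b)
print(np.linalg.norm(x - x_true))          # forward error, measurable only with (a)
```

Whichever convention you pick, stating it explicitly in the paper is the real requirement, since convergence behavior of iterative methods can depend on b.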
• asked a question related to Linear Systems
Question
I am considering enforcing state dissipation to stabilize nonlinear or linear systems. Assume a nonlinear control system xdot = -x^3 + u. To forcefully dissipate x exponentially, set x = x0*exp(-a*t), where a is the dissipation rate, so xdot = -a*x0*exp(-a*t); hence, from the evolution dynamics xdot = -x^3 + u, the control variable is u = xdot + x^3 = -a*x0*exp(-a*t) + (x0*exp(-a*t))^3,
or in state-feedback form, u(x) = -a*x + x^3.
This is a time-varying open-loop control and, in the other form, a state-feedback strategy. So what is your opinion? Is it worth pursuing as a new control methodology?
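As a quick sanity check, the closed loop can be simulated; by construction it reduces to xdot = -a*x (a minimal sketch). One caveat worth noting: the feedback u(x) = -a*x + x^3 cancels the naturally stabilizing -x^3 term, so the control effort grows cubically with |x|, and under actuator saturation this kind of cancellation can be counterproductive.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, x0 = 2.0, 1.5
# plant xdot = -x^3 + u, with the proposed feedback u(x) = -a*x + x^3;
# substituting gives the closed loop xdot = -a*x exactly
rhs = lambda t, x: [-x[0] ** 3 + (-a * x[0] + x[0] ** 3)]
sol = solve_ivp(rhs, [0.0, 3.0], [x0], rtol=1e-9, atol=1e-12)

print(abs(sol.y[0, -1] - x0 * np.exp(-a * 3.0)))  # matches x0*exp(-a*t)
```

This construction is essentially feedback linearization with a prescribed linear target dynamics, so comparing it against that literature would be the natural first step.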
• asked a question related to Linear Systems
Question
What are the advantages and disadvantages of Matching Pursuit algorithms for sparse approximation? And are there alternative methods better than Matching Pursuit?
The advantages of the OMP and MP algorithms for Direction of Arrival (DOA) estimation: applying BS algorithms to a DOA problem enhances resolution and decreases complexity. Moreover, knowledge of the number of signal sources is not required in these algorithms. In addition, they do not need any post-processing to converge to the ML solution, since the output of these algorithms is directly the DOAs. The ML algorithm compares all feasible directions and then selects the most likely one; BS algorithms, on the other hand, compare some of the angles and select among them in a smart way. Hence, BS algorithms are much more computationally efficient at approaching the ML solution than other DOA estimation algorithms such as MUSIC and ESPRIT. Moreover, BS algorithms converge to the ML solution even when the SNR is low, whereas other approaches converge only at high SNRs. In addition, in other DOA estimation methods the number of estimated DOAs is limited by the number of antennas; BS-based DOA estimation methods can estimate more DOAs than the number of antennas. Among BS methods, the OMP algorithm provides slightly higher performance than the MP algorithm, at moderately higher computational complexity.
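For reference, OMP itself is only a few lines: greedily pick the dictionary atom most correlated with the residual, then re-fit all selected atoms by least squares (the re-orthogonalization that distinguishes it from plain MP). A minimal sketch with a hypothetical random dictionary and sparse signal:

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal Matching Pursuit: greedy k-sparse approximation of A x = b."""
    r, support = b.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))     # atom most correlated with residual
        if j not in support:
            support.append(j)
        # least-squares re-fit on the whole support: the "O" in OMP
        xs, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ xs
    x[support] = xs
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
A /= np.linalg.norm(A, axis=0)                  # unit-norm atoms
x_true = np.zeros(50); x_true[[3, 17]] = [1.0, -2.0]
b = A @ x_true
x_hat = omp(A, b, 2)
print(np.linalg.norm(b - A @ x_hat))
```

Exact recovery is guaranteed only under incoherence conditions on the dictionary; when those fail, convex alternatives such as basis pursuit (l1 minimization) are the usual comparison point.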
• asked a question related to Linear Systems
Question
Hello,
I have the following system of equations Ax = B, where A is a 2x4 matrix, B is a 2x1 vector, and x is a 4x1 vector.
A = [ a b 0 0
      0 0 c d ]
I am looking for x. Can I use a pseudoinverse matrix here? If not, could you tell me how to proceed or what other method to use?
By using matlab software, x=pinv(A)*B
• asked a question related to Linear Systems
Question
Most engineers deal with nonlinear systems rather than linear ones, but the mathematical methods for nonlinear systems are difficult.
Hi Nasser,
Nonlinear systems are complicated because of the strong mutual dependency of the system variables. I have to tell you that most engineers use linear systems in their analysis. Before starting to solve any engineering problem, we try linearization to get a linearized model, because nonlinear problems are difficult to solve and expensive. However, linear problems give solutions very close to the nonlinear ones with less cost, time and effort.
I can list many different examples of solving nonlinear problems with a linearized model if you want, but I think it is enough to simply mention why we perform linearization to solve nonlinear problems.
Best regards
Mohamed
• asked a question related to Linear Systems
Question
x1* = x1^2 + x2 + u
x2* = -2u
* represents time derivative.
Yes Houssem, the chosen output could represent the flat output which always has a physical meaning. In this example however, no output is defined.
• asked a question related to Linear Systems
Question
In most of the literature on generalized MPC, it is assumed that the D matrix in the state-space model of the system is zero. I have a system with non-zero D (not a strictly proper system); what should I do?
Dear Ali,
A transfer function that is not proper corresponds to a non-causal system! What I can offer for the moment is the attached file; it may help you solve your problem.
Best regards
• asked a question related to Linear Systems
Question
Consider the linear system dx/dt = Ax + Bu, y = Cx. Assume that there exists u = Kx such that the system is asymptotically stable, i.e. x converges to zero asymptotically. Does this mean that y = Cx is also asymptotically stable?
Thank you!
Dear Xu, the question is a little confusing, since you have asked about the asymptotic stability of y = Cx. The question may mean the stability of the output signal y, or it may mean the stability of the output dynamics dy/dt = C dx/dt. I am answering both cases.
If the stability or boundedness of the output signal ''y" is in question, then the output is stable for any norm bounded matrix C. As the states "x" tends to zero in forward time, the output also asymptotically converges to zero provided the matrix C is finite (If C is time dependent, then also you have to check whether C is norm bounded or not ). However one should be careful about the fact that, depending on the elements of the matrix C, the output may initially peak or oscillate.
The answer in the second case is a little more complicated. Since
dy/dt = CAx + CBKx (with u = Kx), the same K may stabilize the x system, but that does not mean it always stabilizes the y system. Therefore the asymptotic stability of the output dynamics (the y system) depends not only on x but also on the C matrix.
• asked a question related to Linear Systems
Question
Dear friends,
Is it correct to use a linear system with uncertainties instead of the original nonlinear system? Why?
I recommend transforming the system into an LTV system; I have tried that. You can transform a nonlinear system into a linear time-varying one, with the uncertainty absorbed into the state and input matrices.
• asked a question related to Linear Systems
Question
I am trying to perform a simulation of polymer brushes based on PNIPAM with GROMACS code.
Generating a coordinate file for the system of interest is not problematic, but creating a topology for this non-linear case is complicated (the default GROMACS tools are dedicated to linear systems such as peptides).
Could you recommend the best tool for this problem (topology generation)?
• asked a question related to Linear Systems
Question
Is the product of a positive definite matrix and a negative definite matrix positive definite, even if they do not commute?
Dear Ashraf, the product of a posdef matrix A and a negdef matrix B is never positive definite:
AB is similar to A^{1/2} B A^{1/2}, which is negative definite, so every eigenvalue of AB is real and negative. Note, however, that when A and B do not commute, AB is in general not symmetric, so the quadratic form u^T(AB)u can still be positive for some u; AB is "negative definite" in the eigenvalue sense, not necessarily in the quadratic-form sense.
Gianluca
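A quick numerical check with a hypothetical noncommuting pair: the eigenvalues of A@B are all negative, yet A@B is nonsymmetric and its quadratic form takes positive values, so it is neither positive nor negative definite in the quadratic-form sense.

```python
import numpy as np

A = np.array([[10., 9.], [9., 10.]])   # positive definite (eigenvalues 1 and 19)
B = -np.diag([1., 100.])               # negative definite
AB = A @ B

# all eigenvalues of A@B are negative (A@B is similar to sqrt(A) B sqrt(A)) ...
print(np.linalg.eigvals(AB))
# ... but A@B is NOT symmetric, and its quadratic form can still be positive:
u = np.array([1., -0.02])
print(u @ AB @ u)                      # > 0, so A@B is not negative definite
```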
• asked a question related to Linear Systems
Question
The literature on time-delay systems is rich, with many good books. I would like to know if there is any introductory book on this subject suitable for the undergraduate level.
As a co-author (with H. Gorecki, A. Korytowski and S. Fuksa) I may recommend: Analysis and Synthesis of Time-Delay Systems. Wiley, Chichester, 1989.
• asked a question related to Linear Systems
Question
I want to know whether, in the literature on non-minimum-phase nonlinear systems, there are methods for tracking control. I have researched a lot and found some methods, such as the Byrnes-Isidori regulator (which uses an exosystem to generate trajectories) and the method of Devasia and Chen. But the problem is that I am not sure whether those methods have been tested, since I have only seen cases of regulation or control from one fixed point to another (solving the internal dynamics as a boundary value problem), but not tracking explicitly.
Regards
@Dimel
Your question belongs to classical  physics or quantum physics . Highlight the same in detail.
B.Rath
• asked a question related to Linear Systems
Question
I have a nonlinear MIMO system
$\dot{x} = f(x) + g(x)u$, and the sliding surface for my outputs happens to be a vector, S = 0. Now I can write $\dot{S} = F(x) + G(x)U$. Can I not extract U from this relationship by enforcing the attraction (reaching) condition, i.e. $\dot{V} \le -\eta\|S\| - k S^T S$?
Or is it necessary to convert the system into normal form first?
Do you have relative degrees higher than one in some of the outputs?
• asked a question related to Linear Systems
Question
Hi everyone, I'm trying to solve a large sparse block-tridiagonal system of linear equations (about 10^5 x 10^5), and when I use block LU decomposition or backslash in MATLAB, it is very time consuming. Could you please advise me how to solve it fast?
My guess is that you must make sure to use a sparse solver (specifically, the Thomas algorithm) also when solving the linear subsystems associated to each block. It is not clear from your description of the steps you attempted that this was the case.
More generally, a number of specific algorithms are described in the literature, see for example
Heller, Don. "Some aspects of the cyclic reduction algorithm for block tridiagonal linear systems." SIAM Journal on Numerical Analysis 13.4 (1976): 484-496.
and the many papers that refer to it.
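For reference, the scalar Thomas algorithm mentioned above is only a few lines; a sketch is below (no pivoting, so it assumes the matrix is diagonally dominant or otherwise safely factorizable; for production use scipy.linalg.solve_banded or a sparse LU). The block version has the same structure with the divisions replaced by small dense solves.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n): a = sub-, b = main-, c = super-diagonal,
    d = right-hand side. a[0] and c[-1] are unused. No pivoting."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n = 1000
a = -np.ones(n); b = 4 * np.ones(n); c = -np.ones(n)   # diagonally dominant example
d = np.arange(n, dtype=float)
x = thomas(a, b, c, d)
```

At 10^5 unknowns this runs in milliseconds, so if MATLAB's backslash is slow the matrix was most likely stored dense rather than sparse.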
• asked a question related to Linear Systems
Question
Hi
Please, I want to know if I can use a classic Kalman filter to estimate the states of a linear system with a time-varying state matrix A.
Thank you
Hello. Jose Fernando provided a good answer above. I think I can clarify a bit further.
The "classical" Kalman filter is posed in continuous time for a system of the form dx/dt=Ax(t)+Bu(t)+Gw(t), z(t)=Hx(t)+v(t) where w(t) and v(t) are white Gaussian noise (WGN) with zero mean and known covariance Q and R, respectively, x(0) is Gaussian with known mean and covariance and all three of x(0), v(t) and w(t) are uncorrelated. It is a simple matter to proceed from the constant-coefficient continuous time linear stochastic system to a discrete time equivalent system by "discretizing" the continuous time system via the "variation of constants" formula. This involves integrating the dynamical equation and assuming x(t) is constant between sampling times. The process noise covariance Q(k) of the discrete time system can also be obtained this way. The classical Kalman filter for the discrete time system can then be written down quite easily. In general, the coefficient matrices for the discrete-time KF depend on the time, but this is not a problem.
Now when the continuous time system is not time invariant (in terms of its coefficient matrices), it is not generally possible to discretize the continuous time system to obtain the discrete time equivalent and hence the KF. However, the discrete-time KF is general in the sense that the equations hold even when the coefficient matrices are time varying. You just have to know what they are at each time instant.
So I think this answers your question: if the continuous time system is time varying, you can't easily write down the KF, but you can for a time-varying discrete-time stochastic linear system.
• asked a question related to Linear Systems
Question
Consider the Lyapunov equation given by A'P+PA+I=0, where I is the identity matrix, A is Hurwitz, and P is a positive definite and symmetric n by n matrix. How can we find an upper bound to the Frobenius norm of P, i.e., ||P||_F, using the eigenvalues of A ?
I was able to find a relationship with eigenvalues of P(see attached picture) but I need to find out relation with eigenvalues of A.
Any idea how should I proceed?
In your opinion what is Frobenius Norm?
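For numerical experimentation, P and its Frobenius norm are easy to compute. In the special case where A is normal, the norm follows from the eigenvalues of A alone: in a unitary eigenbasis the Lyapunov equation decouples, giving P entries 1/(2|Re(lambda_i)|), so ||P||_F^2 = sum_i 1/(2 Re(lambda_i))^2. For non-normal A this formula fails and bounds additionally involve the conditioning of the eigenvector matrix. A sketch:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1., 1.], [-1., -1.]])            # normal, eigenvalues -1 +/- i
P = solve_continuous_lyapunov(A.T, -np.eye(2))   # solves A'P + PA = -I

fro = np.linalg.norm(P, 'fro')
lam = np.linalg.eigvals(A)
formula = np.sqrt(np.sum(1.0 / (2.0 * lam.real) ** 2))
print(fro, formula)   # agree because A is normal
```

Comparing fro against the eigenvalue formula for your actual (possibly non-normal) A shows immediately how much the eigenvectors contribute to ||P||_F.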
• asked a question related to Linear Systems
Question
Hi dears, I have a problem solving a system of linear equations involving a singular matrix. How can we fix the singularity of the matrix?
You don't just fix the singularity. If the matrix is singular, then you have to work with it using basic linear algebra. Numerically, it's simpler to use, e.g., a QR decomposition of the matrix (instead of echelon form) to determine whether there are many or no solutions.
One often used solution strategy is to solve it in the least square sense. This gives you a solution which leads to the smallest residual norm in the Euclidean norm which, however, will be nonzero in general.
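A minimal sketch of the least-squares route with NumPy's SVD-based solver, which also reports the numerical rank (a small hypothetical singular system):

```python
import numpy as np

A = np.array([[1., 2.], [2., 4.]])   # singular (rank 1)
b = np.array([3., 6.])               # consistent right-hand side

# least-squares / minimum-norm solution via the SVD-based solver
x, residual, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(rank)          # 1: confirms the singularity
print(A @ x - b)     # ~0 here, because b lies in the range of A
```

For a consistent b, lstsq returns the minimum-norm solution among the infinitely many; for an inconsistent b it returns the residual-minimizing one.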
• asked a question related to Linear Systems
Question
Hello,
I wish to know if there is any algorithm to get rid of matrix singularity.
Thank you.
Thank you all for your responses, they were really helpful, and I managed to "get rid" of the singularity.
• asked a question related to Linear Systems
Question
Is there a definition of a convex optimization problem for COMPLEX-VALUED MATRIX VARIABLES where the objective and constraint functions are real-valued?
Do the KKT conditions hold for convex optimization problems with COMPLEX-VALUED MATRIX VARIABLES where the objective and constraint functions are real-valued?
They are often seen in MIMO communication problems.
Thank you very much.
• asked a question related to Linear Systems
Question
I have been using Krylov and multifrontal methods to solve sparse linear systems. The next figure shows the characteristic curves of number of nodes vs. CPU time for the Krylov and multifrontal methods. Does anyone know of direct or iterative methods for decreasing the computation time in the solution of large sparse linear systems? Which one?
• asked a question related to Linear Systems
Question
HAUTUS, M. L. J., Controllability and observability conditions of linear autonomous systems. Nederl. Akad. Wetensch., Proc., Ser. A 72, 443-448 (1969).
Thank you very much. However, I already have the attached one. I need the original one.
--Vikas
• asked a question related to Linear Systems
Question
Suppose we want to solve the attached generalized eigenvalue problem, in which the matrices K11 and M11 are symmetric, positive definite and very ill-conditioned, as is K12.
What method do you recommend.
I attached my matrices in Matlab, It will be so appreciable if someone could help me solve it with Matlab.
One of the solutions could be to use the Moore-Penrose pseudoinverse of the matrix. This can be obtained using the pinv function in Matlab.
I tried it with the matrices provided by you and it seems to work; it did not result in inf eigenvalues. You can use the following code:
K = [K11 K12; K21 K22];
M = [M11 M12; M21 M22];
[v d] = eig(pinv(M)*K);
diag(d)
Still, I would suggest you have a closer look at the eigenvalues to ensure the correctness of the solution.
Hope this helps.
• asked a question related to Linear Systems
Question
Following a good tradition of asking for examples of specific systems (non-lin., non-min-phase etc.) I'd like to ask if somebody could give me an example of a simple but still physically relevant LTV system.
Most textbooks contain examples with terms like t*exp(-2t) and so on, which are clearly artificial. Students are normally not very excited about dealing with such problems (which I find completely reasonable). I thus wonder if there are any examples which stem from real problems, but can be addressed within the framework of a class.
I'd be particularly interested in non-periodic cases, but the periodic ones are also welcome. Any references will be very helpful.
Dear Alexander,
1.  You are right:
-- the resistance R is not linearly dependent on V,
-- is time varying.
But this is not the issue. The equation sought should be a linear differential equation for a chosen observed quantity ( i, V , R, C, or whatever else) replacing x in the equation
(*)  d{x(t)}/d{t} = A(t) x(t),   (the LHS denotes the derivative of x with respect to t )
AND the coefficient A(t) should be independent of anything but t. In your model, such a quantity and such an equation have not been identified yet.
2. There is also the possibility, that  any other variable is chosen for the independent variable (instead of  t ). Even, one can imagine an equation of the following form d{R(V)}/d{V} = Q(V) R(V) .  I am not developing this point of view.
3. The solution of  (*)  equals   x(t) = x(0) \exp[ \int_0^t \, A(u)\, du ]
4. The function R(t) = (wt+1)/(wC) is a solution of the following linear non-homogeneous time-invariant differential equation:
d{R(t)}/d{t} = 1/C, with the initial condition R(0) = 1/wC
On the other hand, its inverse G=1/R = wC/(wt +1) satisfies the following differential linear homogeneous time varying equation:
d{G(t)}/d{t} = - [ w/(wt+1)] G(t), with initial condition G(0) = wC.
5. Other details are omitted for readability of the message:)
Regards, Joachim
• asked a question related to Linear Systems
Question
I know it's in Hamilton's Nonlinear Acoustics, but what's the original source?
Hauke is correct. That particular work was presented at the ASA meeting in 1963. See Section 3 of the attached article. It also mentions some interesting facts about that event.
• asked a question related to Linear Systems
Question
I am trying to linearize two nonlinear functions which I attached. Please check the attachment there I clearly explained the formulations.
Piecewise linearisation is the normal technique - take a look at https://www.hindawi.com/journals/mpe/2013/101376/
• asked a question related to Linear Systems
Question
I have a 5th-order nonlinear system of relative degree 5; that is, the system is completely feedback linearizable, and input-output feedback linearization is equivalent to input-state feedback linearization.
Can anyone suggest relevant theory for the asymptotic output tracking problem of a desired step signal for this system, based on feedback linearization?
Furthermore, it would be helpful if this theory of asymptotic output regulation could be extended to the case where only the output (say, the first state) is available for measurement.
Hello Anirudh Nath,
As mentioned, you are able to achieve complete linearization of your 5th-order system, since the order is equal to the relative degree of the system with the selected output. In such a case, you can define an input/control signal such that all the nonlinearities are cancelled and the resulting output dynamics (which, in this case, is the whole closed-loop dynamics) turns out to be linear and time-invariant. Therefore, to ensure that the output trajectories converge to the origin, you can apply the available theory for linear systems: compute the characteristic polynomial of the constant matrix A that characterizes the output dynamics, and apply the Routh-Hurwitz criterion.
However, if you redefine the output signal such that the relative degree of the system is less than its order, then you have to analyse the resulting internal dynamics, in addition to the external dynamics (the output dynamics).
As the Dr. Gómez-Espinosa mentioned, the best books for the study of feedback linearization theory are the ones by Khalil, and Slotine and Li.
I also can recommend some of my papers, where you can find practical applications of the feedback linearization theory to 4th order systems, which are two-degrees-of-freedom underactuated mechanical systems. I'm sure that you can acquire some insight from them in order to solve your problem.
[1] Carlos Aguilar-Avelar and Javier Moreno-Valenzuela, "New Feedback Linearization-Based Control for Arm Trajectory Tracking of the Furuta Pendulum", IEEE/ASME Transactions on Mechatronics, vol. 21, no. 2, pp. 638-648, 2015.
[2] Carlos Aguilar-Avelar and Javier Moreno-Valenzuela, "A Feedback Linearization Controller for Trajectory Tracking of the Furuta Pendulum", Proc. of the 2014 American Control Conference, Portland, Jun. 2014, pp. 4543-4548.
[3] Carlos Aguilar-Avelar and Javier Moreno-Valenzuela, "A composite controller for trajectory tracking applied to the Furuta pendulum", ISA Transactions, Vol. 57, pp. 286-294, 2015.
Best regards,
Carlos Aguilar-Avelar.
• asked a question related to Linear Systems
Question
Could anyone provide source code to plot the basin of attraction of a given nonlinear ODE system?
If you are looking for source code:
1. you must indicate at least the language;
2. this is probably not the proper forum;
3. http://lmgtfy.com/?q=plot+basin+attraction+code
• asked a question related to Linear Systems
Question
I'm using machine learning algorithms (i.e., linear regression) to build a prediction model for the arrival time of CMEs based on the CME initial characteristics and the interplanetary state.
I was wondering how to perform multivariable linear regression with multiple outputs in MATLAB.
Thank you Dr. Chinedu, I will try that.
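In MATLAB the whole multi-output fit is a single backslash, B = [X ones(n,1)] \ Y, since mldivide solves for each output column at once. The same idea in NumPy for illustration, with synthetic (noise-free) data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))             # 200 samples, 3 input features
B_true = np.array([[1., -2.], [0.5, 0.], [3., 1.]])
Y = X @ B_true                                # 2 outputs per sample

# multi-output linear regression: one lstsq call fits all outputs at once
Xa = np.hstack([X, np.ones((200, 1))])        # append an intercept column
B, *_ = np.linalg.lstsq(Xa, Y, rcond=None)    # B is (features+1) x outputs
print(np.round(B, 3))
```

Each output column is fitted independently by ordinary least squares; if the outputs share correlated noise, MATLAB's mvregress models that jointly.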
• asked a question related to Linear Systems
Question