
# Linear Algebra - Science topic

Questions related to Linear Algebra
• asked a question related to Linear Algebra
Question
For my current research, I am trying to find applications for the following two problems:
- rank-estimation of singular dense symmetric / Hermitian matrices;
- minimum-norm solution x⋆ for dense least-squares problems min ∥b − Ax∥₂ when b is not in the range of A, A is symmetric / Hermitian and rank(A) < min(m, n).
Jack Don McLovin
Do you have a reference? I have never heard of this application.
• asked a question related to Linear Algebra
Question
How does this ionospheric-free linear combination work?
Because the ionospheric delay on the pseudoranges and carrier phases differs between frequencies while the true geometric distance is the same, combining the equations from two frequencies lets you eliminate the ionospheric term. See "Eliminating the effect of the TEC" in Hofmann-Wellenhof et al. (2008).
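The cancellation can be sketched numerically. The frequencies below are the GPS L1/L2 carriers; the range and delay values are made up for illustration, and the 1/f² scaling of the delay is the assumed model:

```python
# Sketch of the ionosphere-free (IF) combination for GPS L1/L2 pseudoranges.
# Assumed model: the ionospheric delay scales with 1/f^2, so
#   P1 = rho + I,  P2 = rho + I * (f1/f2)**2,
# and the combination (f1^2*P1 - f2^2*P2)/(f1^2 - f2^2) removes I.
f1, f2 = 1575.42e6, 1227.60e6   # GPS L1 and L2 carrier frequencies (Hz)
rho = 21_456_789.123            # true geometric range (m), made-up value
I = 4.37                        # ionospheric delay on L1 (m), made-up value

P1 = rho + I                    # measured pseudorange on L1
P2 = rho + I * (f1 / f2) ** 2   # measured pseudorange on L2

P_IF = (f1**2 * P1 - f2**2 * P2) / (f1**2 - f2**2)
print(P_IF - rho)               # the ionospheric term cancels (up to rounding)
```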
• asked a question related to Linear Algebra
Question
didactics of mathematics
Students should be shown the application of mathematics from a practical point of view. On the other hand, it is necessary to respect the theoretical side of mathematics, which is not easy for everyone to learn. @Peter Kepp Rickardo Gomes
• asked a question related to Linear Algebra
Question
Given DS = diag{1S}, which is a vertex-limiting operator, where 1S is an indicator (characteristic) vector.
This DS is decomposed as DS = PS PST, where PS is a coordinate matrix. Is PS an orthogonal matrix?
Please let me know the linear algebra behind this.
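For a concrete picture of the decomposition (a sketch; the set S and size n are hypothetical): if P_S collects the standard basis vectors indexed by S, then P_S P_S^T = D_S, and P_S has orthonormal columns, but it is not an orthogonal (square) matrix unless S contains every vertex:

```python
import numpy as np

# Hypothetical example: n = 5 vertices, S = {0, 2, 3} (0-based indices).
n = 5
S = [0, 2, 3]

one_S = np.zeros(n)
one_S[S] = 1.0                              # indicator vector 1_S
D_S = np.diag(one_S)                        # vertex-limiting operator diag{1_S}

P_S = np.eye(n)[:, S]                       # columns are standard basis vectors e_i, i in S

# D_S = P_S P_S^T, and P_S has orthonormal COLUMNS (P_S^T P_S = I_{|S|}),
# but P_S is not square, hence not an orthogonal matrix, unless S = all vertices.
print(np.allclose(P_S @ P_S.T, D_S))              # True
print(np.allclose(P_S.T @ P_S, np.eye(len(S))))   # True
```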
• asked a question related to Linear Algebra
Question
I am new to the research field.
Robot manipulator design using Lie algebra
• asked a question related to Linear Algebra
Question
why can't the passive elements shift the dc potential to some frequency that the input signal contains?
Thank You
1) Passive elements cannot provide amplification. Amplification can ONLY be achieved with active elements.
2) If we look at the I-V curve of an element, the V/I ratio must be negative in at least one quadrant for an active element (i.e., the V/I ratio must be negative either for V > 0 or for V < 0). This gives a nonlinear curve.
NOTE: In electronics, a V-shaped curve is also nonlinear, even though its segments would be treated as linear in mathematics.
• asked a question related to Linear Algebra
Question
Given the a, b and c vectors of a general crystal structure (not necessarily cubic; it can be, say, monoclinic), is there a general rotation-matrix formula for taking the (001) surface of the bulk to any other surface? To ask a specific question: if I wanted to rotate bulk monoclinic Ga2O3 so that the top surface, (001), becomes the (-201) face, how would I do that? I am familiar with general rotation matrices about an arbitrary axis from linear algebra, but I'm not sure how to apply this information to this system. It would be nice to know a general rotation-matrix formula for rotating a crystal from one orientation to another regardless of the structure. I really appreciate any help in advance.
You can neglect the j component, so--
[0,1] dot [-2,1] = sqrt(5) * 1 * cos(theta),
so theta = arccos(1/sqrt(5)) ≈ 63.435 degrees counter-clockwise.
u' = [ui * sin(63.435), uj, uk * cos(63.435)]
v' = [vi * sin(63.435), vj, vk * cos(63.435)]
w' = [wi * sin(63.435), wj, wk * cos(63.435)]
Easy... I just typed that out in about 5 minutes in the coffee shop. For more information, see Marion and Thornton, "Classical Dynamics of Particles and Systems," chapter 1 (rotation matrices appear around page 4, if I remember correctly). Please correct me if you find an error...
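Packaging the computation above as an actual rotation matrix may be clearer. The sketch below uses Rodrigues' formula and, like the answer, treats the (hkl) triples as Cartesian directions, which ignores the monoclinic metric (converting (hkl) to a Cartesian surface normal properly requires the reciprocal lattice vectors, omitted here):

```python
import numpy as np

def rotation_matrix_from_vectors(a, b):
    """Rodrigues' formula: rotation R with R @ a_hat = b_hat (a, b not parallel)."""
    a = np.asarray(a, float); b = np.asarray(b, float)
    a = a / np.linalg.norm(a); b = b / np.linalg.norm(b)
    v = np.cross(a, b)                  # rotation axis (unnormalized)
    c = a @ b                           # cos(theta)
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])    # cross-product matrix [v]_x
    return np.eye(3) + K + K @ K * ((1 - c) / (v @ v))

# Toy check in the spirit of the answer: rotate the Cartesian direction [0,0,1]
# onto [-2,0,1], treating the index triples as if they were Cartesian vectors
# (an assumption that holds only for cubic cells).
R = rotation_matrix_from_vectors([0, 0, 1], [-2, 0, 1])
theta = np.degrees(np.arccos(1 / np.sqrt(5)))
print(round(theta, 3))                  # ~63.435 degrees, not 63.465
print(R @ np.array([0, 0, 1.0]))        # equals [-2, 0, 1]/sqrt(5)
```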
• asked a question related to Linear Algebra
Question
Dear Researchers,
In linear algebra, what does it actually mean when it is said that two matrices span the same space?
For example, if matrix A spans the same space as matrix B, does that mean A = B? Or
what does this information (i.e., spanning the same space) tell us about the relationship between A and B?
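A small numeric sketch of the usual criterion: A and B span the same (column) space exactly when rank(A) = rank(B) = rank([A | B]), which can hold with A ≠ B; the matrices below are made-up examples:

```python
import numpy as np

# Two matrices "span the same space" when their COLUMN SPACES coincide.
# That does not require A == B; it holds exactly when
#   rank(A) == rank(B) == rank([A | B]).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[2.0, 1.0],
              [1.0, 1.0],
              [0.0, 0.0]])   # different entries, same column space (the xy-plane)

def same_column_space(A, B):
    rA = np.linalg.matrix_rank(A)
    rB = np.linalg.matrix_rank(B)
    rAB = np.linalg.matrix_rank(np.hstack([A, B]))
    return rA == rB == rAB

print(same_column_space(A, B))   # True: each column of B is a combination of A's columns
```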
• asked a question related to Linear Algebra
Question
I recently asked in math stackexchange a question regarding 3D rotations in geometric algebra https://math.stackexchange.com/questions/3922021/angle-and-plane-of-rotation-in-3d-geometric-algebra.
I've added a new question regarding 4D or even n-D rotations and rotors. The reformulated question is as follows:
"In general, a rotor in 4D consists of a scalar, 6 bivectors and one four-vector (in 3D, a rotor is just composed of a scalar and 3 bivectors). Then, assume that we already know two 4-dimensional vectors and one is a rotated version of the other. How is it possible to derive an expression to compute the associated rotor? does it exist? If so, could a general expression for n dimensions be obtained?"
• asked a question related to Linear Algebra
Question
For example, I have two vectors A and B in 2D rectangular coordinates (x, y). I can calculate the scalar (dot) product as (A,B) = Ax Bx + Ay By. In polar coordinates (r, phi), it would be (A,B) = Ar Br + Aphi Bphi, since these coordinates are orthogonal and normalized. If I want to make the transition, what should I write?
1) Ar Br + Aphi Bphi = (Ax^2 + Ay^2)^0.5 (Bx^2 + By^2)^0.5 + atan(Ay/Ax) atan(By/Bx), without the Lame coefficient, or
2) Ar Br + Aphi Bphi = (Ax^2 + Ay^2)^0.5 (Bx^2 + By^2)^0.5 (1 + atan(Ay/Ax) atan(By/Bx)), with the Lame coefficient.
And finally, both of these cases differ from (A,B) = Ax Bx + Ay By. How to explain this inconsistency?
(A,B)_polar = A_r B_r + A_phi B_phi is tempting because the polar coordinates are orthonormal, just as the Cartesian ones are. But polar coordinates are not globally affine: the polar basis vectors depend on position. For example, the unit vectors hat(r) defined at two different points are different vectors (in the Cartesian space R^2) and cannot be identified by parallel transport. So if you apply the dot product to two vectors attached at different positions, A = A_r hat(r) + A_phi hat(phi) and B = B_r hat(r') + B_phi hat(phi'), then you end up with (A,B) = A_r B_r hat(r)·hat(r') + etc., which is not A_r B_r + A_phi B_phi.
However, if you apply dot product to the two vectors (not position vectors) defined with respect to the same hat(r) and hat(phi) based on a common position vector, then obviously we would have (A,B)_polar=A_rB_r+A_phiB_phi.
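The last point can be checked numerically: for two position vectors the Cartesian dot product equals r_A r_B cos(phi_A − phi_B), not r_A r_B + phi_A phi_B (the values below are arbitrary):

```python
import numpy as np

# For two POSITION vectors given in polar coordinates, the Cartesian dot
# product is  A.B = r_A r_B cos(phi_A - phi_B),  not  r_A r_B + phi_A phi_B:
# the naive formula mixes components taken in two different local bases.
rA, phiA = 2.0, 0.3
rB, phiB = 1.5, 1.1

A = np.array([rA * np.cos(phiA), rA * np.sin(phiA)])
B = np.array([rB * np.cos(phiB), rB * np.sin(phiB)])

cartesian = A @ B
polar_correct = rA * rB * np.cos(phiA - phiB)
polar_naive = rA * rB + phiA * phiB          # NOT a dot product

print(np.isclose(cartesian, polar_correct))  # True
print(np.isclose(cartesian, polar_naive))    # False
```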
• asked a question related to Linear Algebra
Question
How to linearize any of these surface functions (separately) near the origin?
I have attached the statement of the question, both as a screenshot, and as well as a PDF, for your perusal. Thank you.
It seems the linearization is accomplished by substituting x1 for x1^2, and separately x2 for x2^2 and x2^4.
In this way, the surface function is linearized about the origin (0,0): we can find f1(x1,x2) = a*x1 + b*x2, where a and b are calculable in terms of the algebraic parameters k and c.
But my question goes further: how can we find a compact algebraic expression for f1(x1,x2) and f2(x1,x2) that is accurate close enough to the origin? This algebraic expression need NOT be linear (it could be a nonlinear function).
Question synopsis:
1--How to find another compact analytical expression equivalent to f1(x1,x2), f2(x1,x2)? (with fair accuracy)
2-- Is it possible to find an approximation near the origin (0,0), for f1(x1,x2), f2(x1,x2), as a function of only one of the two variables (either x1, or x2)?
Regarding the second point of the synopsis, I cite another ResearchGate question linked below:
However, the gist of the idea in this link is not clear to me.
• asked a question related to Linear Algebra
Question
Consider a matrix A built from the vectors v1 = [1;0;0] and v2 = [1;1;0], i.e., this matrix A is spanned by the vectors v1 and v2.
The rank of this matrix A is 2. By the definition of the rank of a matrix, this is the number of independent vectors, or the dimension of the row space.
Seeing A = {v1, v2} with a cardinality of 2, can we say that the cardinality is the same as the rank of the matrix, which in turn gives the number of independent vectors spanning A?
In linear algebra, the rank of a matrix A is the dimension of the vector space generated (or spanned) by its columns. This corresponds to the maximal number of linearly independent columns of A. This, in turn, is identical to the dimension of the vector space spanned by its rows.
By definition, two sets are of the same cardinality if there exists a one-to-one correspondence between their elements. For a finite set, the cardinality is the number of its elements. ... For example, Z and R are infinite sets of different cardinalities while Z and Q are infinite sets of the same cardinality.
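A short sketch of the distinction: the rank of the matrix equals the cardinality of the spanning set only while that set stays linearly independent:

```python
import numpy as np

# The set {v1, v2} is linearly independent, so a matrix with these columns
# has rank 2 -- matching the cardinality of the spanning set.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([1.0, 1.0, 0.0])
A = np.column_stack([v1, v2])
print(np.linalg.matrix_rank(A))       # 2: column rank == row rank

# The identification "cardinality == rank" breaks as soon as the set is
# dependent: adding v3 = v1 + v2 raises the cardinality to 3 but not the rank.
A3 = np.column_stack([v1, v2, v1 + v2])
print(np.linalg.matrix_rank(A3))      # still 2
```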
• asked a question related to Linear Algebra
Question
Are there any conditions under which the difference between two matrices i.e. A-B will be invertible? In particular, I have a positive definite matrix A but B is a square matrix not necessarily symmetric. However, B has the form MP-1N with P as a square invertible matrix and M and N as arbitrary matrices of appropriate dimensions.
I think the question is not well posed.
B must be square; this is not an additional condition.
Also, A must be square; otherwise, there is no meaning in asking about the inverse.
In all cases, A - B is a square matrix, and det(A - B) ≠ 0 ensures the invertibility of A - B.
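One structural condition worth noting for the special form B = M P^{-1} N: this difference is the Schur complement of P in the block matrix [[P, N], [M, A]], and with P invertible, A − M P^{-1} N is invertible exactly when that block matrix is. A numeric sketch (random matrices, made-up sizes):

```python
import numpy as np

# The difference A - M P^{-1} N is the Schur complement of P in the block
# matrix [[P, N], [M, A]].  With P invertible one has
#   det([[P, N], [M, A]]) = det(P) * det(A - M P^{-1} N),
# so A - B (with B = M P^{-1} N) is invertible iff that block matrix is.
rng = np.random.default_rng(1)
n, p = 4, 3
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)                        # positive definite, as in the question
P = rng.standard_normal((p, p)) + p * np.eye(p)    # invertible (generic)
M = rng.standard_normal((n, p))
N = rng.standard_normal((p, n))

B = M @ np.linalg.inv(P) @ N
block = np.block([[P, N], [M, A]])

lhs = np.linalg.det(block)
rhs = np.linalg.det(P) * np.linalg.det(A - B)
print(np.isclose(lhs, rhs))   # True
```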
• asked a question related to Linear Algebra
Question
I have confirmed that the Hessenberg determinant whose elements are the Bernoulli numbers $B_{2r}$ is negative. See the picture uploaded here. My question is: What is the exact value of the Hessenberg determinant in equation (10) in the picture? Can one find a simple formula for it? Perhaps it is easy for you, but right now it is difficult for me.
• asked a question related to Linear Algebra
Question
I have derived a formula for computing a special Hessenberg determinant. See the picture uploaded here. My question is: Can this formula be simplified more concisely, more meaningfully, and more significantly?
Until now, I have not been able to obtain the book
J. M. Hoene-Wro\'nski, \emph{Introduction \a la Philosophie des Math\'ematiques: Et Technie de l'Algorithmie}, Paris, 1811.
• asked a question related to Linear Algebra
Question
Dear friends:
In some calculation in control theory, I need to show that the following matrix
E = I - (C B)^{-1} B C
is a singular matrix. Here, B is an (n x 1) column vector and C is a (1 x n) row vector. Also, I is the identity matrix of order n, so the matrix E is well-defined (provided the scalar CB is nonzero).
I have verified this by trying many examples from MATLAB, but I need a mathematical proof.
This is perhaps a simple calculation in linear algebra, but I don't see it!
Any help on this is highly appreciated.. Thanks..
(B*C) is always a rank-one matrix, hence singular. (C*B) is a scalar, say a, and a is also the unique nonzero eigenvalue of (B*C). Then E = I - (C*B)^{-1}(B*C) has the eigenvalue 1 - a^{-1}a = 0, so E is singular.
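The argument can be verified numerically: since C B is a scalar, E B = B − (C B)^{-1} B (C B) = 0, so B itself is a nonzero null vector of E. A sketch with random data:

```python
import numpy as np

# Numerical check: with B an (n x 1) column and C a (1 x n) row, C @ B is a
# scalar, and  E @ B = B - (C B)^{-1} B (C B) = B - B = 0,
# so B lies in the null space of E and E is singular.
rng = np.random.default_rng(7)
n = 5
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

cb = float(C @ B)               # scalar; assumed nonzero so E is well-defined
E = np.eye(n) - (B @ C) / cb

print(np.allclose(E @ B, 0))          # True: B is a null vector of E
print(abs(np.linalg.det(E)) < 1e-10)  # True: E is singular
```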
• asked a question related to Linear Algebra
Question
I have a question about resizing complex arrays.
I need to resize a complex-valued array with an interpolation method.
I tried scikit-image, but it doesn't support complex data types.
I also tried to resize with cv2, and that didn't work either,
even with the real and imaginary parts handled separately.
Is there any solution to this?
I would recommend asking this question at the Stack Overflow community too:
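One workaround sketch (assuming SciPy is acceptable): interpolate the real and imaginary parts separately with scipy.ndimage.zoom and recombine them; the array and zoom factor below are made up:

```python
import numpy as np
from scipy.ndimage import zoom

# Most image-resizing routines only accept real arrays, so interpolate the
# real and imaginary parts separately and recombine.
def resize_complex(arr, factor, order=1):
    """Resize a complex 2-D array by `factor` using spline interpolation."""
    return zoom(arr.real, factor, order=order) + 1j * zoom(arr.imag, factor, order=order)

z = (np.arange(16, dtype=float) + 1j * np.arange(16)[::-1]).reshape(4, 4)
z2 = resize_complex(z, 2.0)
print(z2.shape)          # (8, 8)
print(z2.dtype)          # complex128
```

For phase-sensitive data, interpolating magnitude and unwrapped phase instead of real/imaginary parts may behave better; that is a modelling choice, not a library requirement.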
• asked a question related to Linear Algebra
Question
When I search articles for the correlation coefficient between two variables, I find only regression equations showing the relationship between the two variables.
For example: total length of femur = 32.19 + 0.16 (segment 1).
From this equation, can I calculate the value of the correlation coefficient between total femur length and segment 1?
Yes, you can. The slope (beta) and the correlation coefficient are exactly the same when the standard deviations of both variables are equal.
More generally, if we know the standard deviations of both variables and the slope, we can easily calculate the correlation coefficient as r = b * (s_x / s_y). Please check out the example below.
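The relation can be sketched with synthetic data (the numbers mimic the femur equation but are invented): for simple regression, slope and correlation are linked by b = r·(s_y/s_x), hence r = b·(s_x/s_y):

```python
import numpy as np

# For a simple linear regression y = a + b x, slope and Pearson correlation
# satisfy  b = r * (s_y / s_x),  hence  r = b * (s_x / s_y).
# So the equation alone is not enough: both standard deviations are needed.
rng = np.random.default_rng(42)
x = rng.normal(50.0, 5.0, 200)                  # e.g. "segment 1" lengths (synthetic)
y = 32.19 + 0.16 * x + rng.normal(0, 0.3, 200)  # e.g. total femur length (synthetic)

b = np.polyfit(x, y, 1)[0]                      # fitted slope
r_from_slope = b * x.std() / y.std()
r_direct = np.corrcoef(x, y)[0, 1]

print(np.isclose(r_from_slope, r_direct))       # True
```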
• asked a question related to Linear Algebra
Question
Dear all,
As we know, interval matrices are matrices with 0 and 1 entries with the property that the ones in each column (row) are contiguous. Interval matrices are totally unimodular (TU). Hence, integer programming (IP) problems with such matrices of technical coefficients can be solved as linear programming problems.
However, in the consecutive-ones-with-wrap-around case, the ones are wrapped.
For instance, in the following matrix, the ones are wrapped in columns 4 and 5:
1 0 0 1 1
1 1 0 0 1
1 1 1 0 0
0 1 1 1 0
0 0 1 1 1
This example is not TU with a sub-matrix with determinant 2 (deleting rows and columns 2 and 4).
Two questions:
1- When wrapping does not violate TU property?
2- Is there a general approach to solve IP problems with consecutive ones and wrapping around matrix of technical coefficients, efficiently?
Thank you for your kind help!
If all right-hand sides are 1, then you can reduce the problem to a series of shortest-path problems:
If not, then you can use the method in this paper:
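The counterexample in the question can be checked by brute force; the submatrix obtained by deleting rows and columns 2 and 4 indeed has determinant 2, so the wrap-around matrix is not TU:

```python
import numpy as np
from itertools import combinations

# Brute-force check: the 5x5 "consecutive ones with wrap-around" matrix is
# NOT totally unimodular, because a square submatrix has determinant 2.
M = np.array([[1, 0, 0, 1, 1],
              [1, 1, 0, 0, 1],
              [1, 1, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 1, 1, 1]], dtype=float)

# The submatrix named in the question: delete rows and columns 2 and 4
# (1-based), i.e. keep rows/columns {1, 3, 5}.
keep = [0, 2, 4]
sub = M[np.ix_(keep, keep)]
print(round(np.linalg.det(sub)))    # 2 -> TU is violated

# An exhaustive scan over all square submatrices confirms the violation.
worst = max(abs(round(np.linalg.det(M[np.ix_(r, c)])))
            for k in range(1, 6)
            for r in combinations(range(5), k)
            for c in combinations(range(5), k))
print(worst > 1)                    # True: some subdeterminant is outside {-1, 0, 1}
```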
• asked a question related to Linear Algebra
Question
30 years ago on April 1, 1991 A. K. Lenstra announced the factorization of RSA-100 challenge.
RSA-100 = 1522605027922533360535618378132637429718068114961380688657908494580122963258952897654000350692006139
Today, it takes less than 2 hours to factorize this number in a Kali Linux VM with only 2 cores on a MacBook 2017, using the CADO-NFS tool (https://gitlab.inria.fr/cado-nfs/cado-nfs).
Nice technology development...
--------------------------------------------------------------------------------------------------
Info:Linear Algebra: Aggregate statistics:
Info:Linear Algebra: Krylov: CPU time 274.68, WCT time 81.91, iteration CPU time 0.01, COMM 0.0, cpu-wait 0.0, comm-wait 0.0 (5888 iterations)
Info:Linear Algebra: Lingen CPU time 18.94, WCT time 4.87
Info:Linear Algebra: Mksol: CPU time 149.01, WCT time 45.63, iteration CPU time 0.01, COMM 0.0, cpu-wait 0.0, comm-wait 0.0 (2944 iterations)
Info:Square Root: Total cpu/real time for sqrt: 69.39/20.0587
Info:Filtering - Duplicate Removal, removal pass: Total cpu/real time for dup2: 22.99/14.251
Info:Filtering - Duplicate Removal, removal pass: Aggregate statistics:
Info:Filtering - Duplicate Removal, removal pass: CPU time for dup2: 12.899999999999999s
Info:Polynomial Selection (root optimized): Aggregate statistics:
Info:Polynomial Selection (root optimized): Total time: 70.76
Info:Polynomial Selection (root optimized): Rootsieve time: 70.33
Info:Polynomial Selection (size optimized): Aggregate statistics:
Info:Polynomial Selection (size optimized): potential collisions: 5781.44
Info:Polynomial Selection (size optimized): raw lognorm (nr/min/av/max/std): 5779/32.780/37.835/38.680/0.701
Info:Polynomial Selection (size optimized): optimized lognorm (nr/min/av/max/std): 3330/32.780/36.367/38.650/1.003
Info:Polynomial Selection (size optimized): Total time: 125.17
Info:HTTP server: Shutting down HTTP server
Info:Complete Factorization / Discrete logarithm: Total cpu/elapsed time for entire factorization: 7221.06/2350.02
Info:root: Cleaning up computation data in /tmp/cado.433q3ve9
40094690950920881030683735292761468389214899724061
37975227936943673922808872755445627854565536638199
--------------------------------------------------------------------------------------------------
No problem, I like to joke with people, but I am a good person. The research was done only 3 months ago. With the quotient and the remainder I use a function; I insert that data into the function, and its derivatives give us a bound and an average for the biggest factor. Funny and magical, like mathematics. Take care and good luck.
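The reported factors are easy to verify independently, since Python integers have arbitrary precision:

```python
# Verifying the factorization reported in the CADO-NFS log: the product of the
# two 50-digit primes must reproduce the RSA-100 challenge number.
RSA_100 = 1522605027922533360535618378132637429718068114961380688657908494580122963258952897654000350692006139
p = 40094690950920881030683735292761468389214899724061
q = 37975227936943673922808872755445627854565536638199

print(p * q == RSA_100)   # True

# A quick (probabilistic) Fermat check on the two factors:
print(pow(2, p - 1, p) == 1 and pow(2, q - 1, q) == 1)   # True
```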
• asked a question related to Linear Algebra
Question
The problem is to solve the matrix equation X A Y = B, where matrices A and B are known while X and Y are to be solved for.
For example,
A=[a 0;
0 b];
B=[a a 0;
0 0 b;
0 0 b;];
a and b are known elements.
It is easy to see that the solution is
X=t*[1 1 0;
0 0 1;];
Y=1/t*[1 0;
0 1;
0 1;];
where t is any non-zero real number.
But how can this solution be derived step by step in a systematic way?
It would be better if there were a programmable approach.
Dear Liming, at first glance the problem does not seem to have a unique solution in general. Assuming, for example, that all the matrices are n x n, you need to solve n^2 equations (corresponding to the elements of B) in 2n^2 unknowns (the elements of X and Y).
If what matters is to find a solution, even if not unique, a possible approach could be to look for a minimum of the norm ||XAY - B|| with respect to the entries x_ij and y_ij. To get a differentiable function, one could take the Frobenius norm of the residual XAY - B.
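As a sanity check on the example above (a sketch with arbitrary nonzero a, b, t): with the shapes as displayed (X is 2x3, Y is 3x2, A is 2x2, B is 3x3), the product that reproduces B is Y·A·X, and the free scaling t cancels between the two factors:

```python
import numpy as np

# Numeric check of the worked example: the displayed X (2x3) and Y (3x2)
# satisfy Y @ A @ X = B for the 2x2 matrix A and 3x3 matrix B, for ANY
# nonzero t, since the t and 1/t factors cancel in the product.
a, b, t = 1.7, -0.4, 3.0           # arbitrary nonzero values

A = np.array([[a, 0.0],
              [0.0, b]])
B = np.array([[a, a, 0.0],
              [0.0, 0.0, b],
              [0.0, 0.0, b]])
X = t * np.array([[1.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
Y = (1 / t) * np.array([[1.0, 0.0],
                        [0.0, 1.0],
                        [0.0, 1.0]])

print(np.allclose(Y @ A @ X, B))   # True
```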
• asked a question related to Linear Algebra
Question
Data sets, when structured, can be put in vector form (v(1), ..., v(n)); adding time dependency, this becomes v(i, t) for i = 1...n and t = 1...T.
Then we have a matrix of terms v(i, t)...
Matrices are important: they can represent linear operators in finite dimension. Composing such operators f, g as f∘g translates into the matrix product FxG, with obvious notation.
Now, a classical matrix M is a table of rows and columns containing numbers or variables. Precisely, at row i and column j of such a table we store the term m(i, j), usually belonging to the real numbers R, the complex numbers C, or more generally to a group G.
What about generalising such a matrix of numbers into a matrix of sets? In any field of science, this could mean storing all data collected for a particular parameter "m(i, j)" as a set M(i, j) of data.
What can we observe, or define, on such matrices of sets?
If you are as curious as me, in your own field of science or engineering, please follow the link below, and more importantly, feedback here with comments, thoughts, advice on how to take this further.
Ref:
Thank you for sharing this Question
• asked a question related to Linear Algebra
Question
Prove or disprove that for every field F of characteristic two, there exist vector spaces U and V over F and a mapping from U to V which is F-homogeneous but not additive.
• asked a question related to Linear Algebra
Question
I have a problem with the CANON function of different Matlab versions (e.g., 2007b, 2010b, 2011a) returning inconsistent results for the same call parameters (sysd = canon(sysd0,'modal')):
- the input/output relation stays the same - OK.
- the diagonal A-matrix is permutated slightly - still OK,
- the B and C matrices differ significantly.
I understand the modal decomposition has multiple solutions; however, as far as I know, there is no useful control over the realization chosen by the CANON function. Furthermore, in newer Matlab versions the canon function calls p-coded subfunctions, which makes them inaccessible.
The obtained model is used for tuning a predictive controller. A different B-matrix subsequently yields significantly different results for a certain set of tuning weights.
I have the original data generated using Matlab 2007. As I want to replicate it, and further provide the code publicly for performing a benchmark test, I need the CANON function to return consistent results regardless of the Matlab version used... or to find an alternative way of defining/normalizing the B and C matrices afterwards.
Is there any way to get consistent modal decomposition?
The b*b' approach, namely the LQG/LTR strategy, is asymptotically singular. Thus it is intrinsically flawed.
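One workaround sketch (in Python rather than Matlab, and not the canon() algorithm itself): perform the diagonalization yourself and fix the eigenvector freedom by an explicit convention, so the resulting modal B and C matrices are reproducible across tool versions; the example system is made up:

```python
import numpy as np

# Pin down the free scaling in a modal decomposition by normalizing the
# eigenvectors with a fixed convention (sorted eigenvalues, unit norm,
# largest-magnitude entry made positive), instead of relying on whatever
# scaling a given canon() release happens to choose.
def modal_form(A, B, C):
    w, V = np.linalg.eig(A)
    order = np.argsort(w)                        # fixed eigenvalue ordering
    w, V = w[order], V[:, order]
    V = V / np.linalg.norm(V, axis=0)            # unit-norm eigenvectors
    idx = np.argmax(np.abs(V), axis=0)           # fix the sign freedom (real modes;
    V = V / np.sign(V[idx, np.arange(V.shape[1])])  # for complex modes, divide by the phase)
    Vinv = np.linalg.inv(V)
    return Vinv @ A @ V, Vinv @ B, C @ V

A = np.array([[0.0, 1.0], [-2.0, -3.0]])         # eigenvalues -1, -2 (made-up system)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

Am, Bm, Cm = modal_form(A, B, C)
print(np.allclose(Am, np.diag(np.diag(Am))))     # True: Am is diagonal
# The input/output map is unchanged: Cm (sI - Am)^-1 Bm == C (sI - A)^-1 B.
```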
• asked a question related to Linear Algebra
Question
Is there a way to transform a similarity matrix from a high-dimensional space to a low-dimensional space while keeping the same knowledge? For example, the attached matrix.
To reduce the dimensionality of the space, you can use Tensorized Random Projections:
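A related sketch that needs no extra library: for a symmetric similarity matrix, keeping the top-k eigenpairs gives a low-dimensional embedding X with X X^T ≈ S, so pairwise similarities are approximately preserved (the matrix below is synthetic and exactly low-rank, so the reconstruction is exact):

```python
import numpy as np

# Compress a symmetric similarity matrix S by keeping its top-k eigenpairs;
# the embedding X (n x k) satisfies X X^T ~= S, so pairwise similarities are
# approximately preserved in the low-dimensional space.
rng = np.random.default_rng(3)
n, true_dim, k = 50, 4, 4
F = rng.standard_normal((n, true_dim))
S = F @ F.T                                  # a rank-4 similarity (Gram) matrix

w, V = np.linalg.eigh(S)                     # ascending eigenvalues
idx = np.argsort(w)[::-1][:k]                # indices of the top-k eigenvalues
X = V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

print(X.shape)                               # (50, 4)
print(np.allclose(X @ X.T, S, atol=1e-8))    # True: the rank-4 S is captured exactly
```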
• asked a question related to Linear Algebra
Question
The mathematics behind the inverse of large sparse matrices is very interesting and widely used in several fields. Sometimes it is required to find the inverse of these kinds of matrices. However, doing so is computationally costly. I want to know about the related research: when a single entry (or a few entries) of the original matrix is perturbed, how much does it affect the entries of the inverse of the matrix?
A standard trick in these cases is to use the Sherman-Morrison formula.
However, the inverse of a sparse matrix does not have to be sparse, and in particular one does not want to store inverses of large sparse matrices. So the formula should rather be applied to the action A^-1 b of the inverse on the right-hand side of the linear system, correcting the solution of the original linear system with a hopefully limited number of operations.
Please note that this is a very generic comment; I am sure somebody in the sparse-solver community has studied the problem in much greater depth.
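The suggestion above can be sketched concretely: perturbing a single entry (i, j) by delta is a rank-one update delta·e_i e_j^T, and Sherman-Morrison corrects the solution A^{-1}b with just one extra solve (random made-up data, dense here for simplicity):

```python
import numpy as np

# Sherman-Morrison sketch: if entry (i, j) of A is perturbed by delta, then
# A' = A + delta * e_i e_j^T is a rank-one update, and
#   A'^{-1} b = x - (delta * x_j / (1 + delta * z_j)) * z,   z = A^{-1} e_i,
# so the corrected solution needs one extra solve, not a refactorization.
rng = np.random.default_rng(5)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)   # generically invertible
b = rng.standard_normal(n)
i, j, delta = 2, 4, 0.7

x = np.linalg.solve(A, b)                 # original solution
z = np.linalg.solve(A, np.eye(n)[i])      # A^{-1} e_i  (the one extra solve)
x_new = x - (delta * x[j] / (1.0 + delta * z[j])) * z

A_pert = A.copy()
A_pert[i, j] += delta
print(np.allclose(x_new, np.linalg.solve(A_pert, b)))   # True
```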
• asked a question related to Linear Algebra
Question
Does the journal Linear Algebra and Its Applications accept manuscripts written in LaTeX only?
You can visit the journal website and check the available options for paper submission formats; if this is not clearly stated on the website, you can send an email query to the journal editor.
• asked a question related to Linear Algebra
Question
I want to learn how to calculate variance components.
I have basic knowledge of linear algebra.
Unfortunately, I still do not understand the processes behind Henderson's method, MIVQUE, REML, etc.
There are few resources on the Internet.
Can somebody suggest some basic courses or books for me?
• asked a question related to Linear Algebra
Question
I know that 3 vectors x, y, z in R^n, where the angles between them are 120°, are coplanar.
Indeed it is an interesting problem. It is more profound than it seems. Your note about my counterexample is true. I have proved that your claim is valid in R3, but I am not sure about higher dimensions.
Regards
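The Gram-matrix argument settles the question in every dimension: the pairwise inner products of the unit vectors are cos 120° = −1/2, the resulting Gram matrix is singular, so the three vectors are linearly dependent, hence coplanar, in any R^n. A numeric sketch:

```python
import numpy as np

# Three unit vectors with pairwise 120-degree angles have the Gram matrix
#   G = [[1, -1/2, -1/2], [-1/2, 1, -1/2], [-1/2, -1/2, 1]],
# and det G = 0, so the vectors are linearly dependent (rank <= 2) in ANY R^n.
G = np.array([[1.0, -0.5, -0.5],
              [-0.5, 1.0, -0.5],
              [-0.5, -0.5, 1.0]])
print(abs(np.linalg.det(G)) < 1e-12)   # True
print(np.linalg.matrix_rank(G))        # 2

# A concrete realization in R^3 (it embeds unchanged in any higher dimension):
x = np.array([1.0, 0.0, 0.0])
y = np.array([-0.5, np.sqrt(3) / 2, 0.0])
z = np.array([-0.5, -np.sqrt(3) / 2, 0.0])
print(np.allclose([x @ y, y @ z, x @ z], -0.5))   # True: pairwise 120 degrees
```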
• asked a question related to Linear Algebra
Question
Dear Colleagues
Where can one find the following formula (see the picture attached) for computing a special tridiagonal determinant? Please give a reference in which one can find, or from which one can derive, the formula showed by the picture. Thank a lot.
Best regards
Feng Qi (F. Qi)
The following formally published papers are related to this question:
[1] Feng Qi, Viera Cernanova, and Yuri S. Semenov, Some tridiagonal determinants related to central Delannoy numbers, the Chebyshev polynomials, and the Fibonacci polynomials, University Politehnica of Bucharest Scientific Bulletin Series A---Applied Mathematics and Physics 81 (2019), no. 1, 123--136.
[2] Feng Qi and Ai-Qi Liu, Alternative proofs of some formulas for two tridiagonal determinants, Acta Universitatis Sapientiae Mathematica 10 (2018), no. 2, 287--297; available online at https://doi.org/10.2478/ausm-2018-0022
[3] Feng Qi, Wen Wang, Dongkyu Lim, and Bai-Ni Guo, Several explicit and recurrent formulas for determinants of tridiagonal matrices via generalized continued fractions, Nonlinear Analysis: Problems, Applications and Computational Methods, Editors: Zakia Hammouch, Hemen Dutta, Said Melliani, Michael Ruzhansky; Springer Book Series Lecture Notes in Networks and Systems. The 6th International Congress of the Moroccan Society of Applied Mathematics (SM2A 2019) organized by Sultan Moulay Slimane University, Faculte des sciences et techniques, BP 523, Beni-Mellal, Morocco, during 7-9 November, 2019.
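For reference, determinants of this type obey the classical three-term recurrence, which is also the cheapest way to evaluate them; a generic sketch with a standard textbook check:

```python
# Determinant of a tridiagonal matrix with diagonal a_1..a_n, superdiagonal
# b_1..b_{n-1} and subdiagonal c_1..c_{n-1}, via the three-term recurrence
#   D_0 = 1,  D_1 = a_1,  D_k = a_k D_{k-1} - b_{k-1} c_{k-1} D_{k-2}.
def tridiag_det(a, b, c):
    D_prev, D = 1, a[0]
    for k in range(1, len(a)):
        D_prev, D = D, a[k] * D - b[k - 1] * c[k - 1] * D_prev
    return D

# Standard check: the n x n matrix with 2 on the diagonal and -1 off it has
# determinant n + 1.
n = 6
a = [2] * n
b = [-1] * (n - 1)
c = [-1] * (n - 1)
print(tridiag_det(a, b, c))   # 7
```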
• asked a question related to Linear Algebra
Question
Dear Colleagues
About computation of the general tridiagonal determinant, I have a guess, see the PNG or PDF files uploaded with this message. Could you please supply a proof for (verify) the guess or deny the guess? Anyway, thank you a lot.
Best regards
Feng Qi (F. Qi)
• asked a question related to Linear Algebra
Question
Hi all,
I have a basic doubt in linear algebra. The determinant of a 2x2 matrix can be interpreted as the area of a parallelogram. Similarly, what could be the physical interpretation of the characteristic equation and of the roots of the characteristic equation (the eigenvalues)?
This question is bugging my mind for a long time now. Can any one here enlighten me?
Regards,
Balaji
Mathematically, the eigenvalues indicate the stability of your system, depending on their signs.
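A sketch of that reading for x' = Ax: solutions are combinations of e^{λt} terms, so the signs of the real parts of the eigenvalues decide decay versus growth; and the determinant, being the product of the eigenvalues, is exactly the area/volume-scaling factor from the 2x2 picture:

```python
import numpy as np

# For x' = A x, the solution is built from e^{lambda t} terms, so the state
# decays to 0 iff every eigenvalue of A has a negative real part.  The
# characteristic equation det(A - lambda I) = 0 is just the condition that
# (A - lambda I) v = 0 admits a nonzero eigenvector v.
A_stable = np.array([[0.0, 1.0], [-2.0, -3.0]])    # eigenvalues -1, -2
A_unstable = np.array([[0.0, 1.0], [2.0, 1.0]])    # eigenvalues 2, -1

print(np.linalg.eigvals(A_stable).real.max() < 0)   # True  -> trajectories decay
print(np.linalg.eigvals(A_unstable).real.max() < 0) # False -> some directions grow
```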
• asked a question related to Linear Algebra
Question
Prove that if W is a diagonal matrix having positive diagonal elements and size (2^n – 1)x(2^n – 1), K is a matrix with size (2^n – 1)xn, then:
A = K'*(inv(W) - K*inv(K'*W*K)*K')*K
is a positive definite matrix.
Where:
K' - the transpose of the matrix K
inv(W) - the inverse of the matrix W
Using the Monte-Carlo method, I find that the matrix inv(W) - K*inv(K'*W*K)*K' can be negative definite.
Thank you so much for reading my question
I am looking forward to getting your response!
I appreciate the answer of Dr. Peter Breuer .
• asked a question related to Linear Algebra
Question
We have a stochastic dynamic model: x_{k+1} = f(x_k, u_k, w_k). We can design a cost function to be optimized using a dynamic programming algorithm. How do we design a cost function for this dynamic system to ensure stability?
In Chapter 4 of Ref. [a], for a quadratic cost function and a linear system (x_{k+1} = A x_k + B u_k + w_k), a proposition shows that under a few assumptions the quadratic cost function results in a stabilizing fixed state feedback. However, I wonder how we can account for the stability issue in the design of the cost function as a whole when defining the optimal control problem for a general nonlinear system. Can we use the notion of stability to design the cost function? Please share your ideas.
[a] Bertsekas, Dimitri P., et al. Dynamic programming and optimal control. Vol. 1. No. 2. Belmont, MA: Athena scientific, 1995.
Unfortunately, the attached article
" [a] Bertsekas, Dimitri P., et al. Dynamic programming and optimal control. Vol. 1. No. 2. Belmont, MA: Athena scientific, 1995."
is full of typing errors.
Generally speaking, we consider the linearization of the nonlinear system. Next, we study the stability of the equilibrium state of the resulting linear system, which indicates the nature of the stability of the nonlinear system in some neighborhood.
The signs of the real parts of the eigenvalues of the Jacobian matrix decide which approach we should follow. We also have the direct and indirect Lyapunov methods to study stability based on the eigenvalues.
Best regards
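For the linear-quadratic special case cited from Bertsekas, the stability guarantee can be sketched directly (assuming SciPy; the system matrices are made up): solving the discrete-time Riccati equation for a quadratic cost yields a feedback that stabilizes the closed loop:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# For x_{k+1} = A x_k + B u_k with stage cost x'Qx + u'Ru (Q, R positive
# definite), the discrete algebraic Riccati equation yields a feedback
# u = -K x that makes the closed loop A - B K stable: the cost design itself
# guarantees stability, which is the content of the cited proposition.
A = np.array([[1.1, 0.4], [0.0, 0.9]])     # open loop is unstable (eigenvalue 1.1)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
closed = A - B @ K

print(max(abs(np.linalg.eigvals(closed))) < 1.0)   # True: closed loop is stable
```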
• asked a question related to Linear Algebra
Question
Is there any alternative topic/theory/mathematical foundation to compressed sensing (CS) theory?
CS theory is the successor to the Nyquist criterion; is there any theory that surpasses CS theory?
Dear Vishwaraj B Manur,
First of all, we should separate the concept of Sampling against the concept of Sensing. These two are not interchangeable!
1. Compressed Sensing theory states that one can recover a set of coefficients (which represent, in a specific transform domain, the useful information of the analyzed signal) from fewer samples than the Nyquist sampling criterion requires, in order to reconstruct the signal (just as it could be reconstructed from uniform samples by classical Shannon theory).
2. Compressive Sampling theory states that a signal can be sampled by a protocol (non-uniform sampling, random sampling, modulation and sampling, etc.) which will later allow it to be reconstructed by a Compressed Sensing algorithm that knows the sampling protocol used.
3. There are at least 4 sampling ways (according to Figure 2 from https://core.ac.uk/download/pdf/34645298.pdf ) to acquire the information from a signal. Take into account that practical CS is a lossy compression, due to the non-ideal process that happens when the sampling takes place.
• asked a question related to Linear Algebra
Question
Hi everyone,
I have implemented an EKF in a power-systems application. When I run a simulation in Matlab, in some iterations of the filter I get a Kalman gain matrix (K) with negative values and/or absolute values greater than 1. In some books I have read that the Kalman gain is a real value between 0 and 1.
Is this correct ? Or is it an indication that something wrong with the Kalman filter?
I have another opinion, though I am not certain, and I welcome any corrections.
I think only the Kalman gains that correspond to directly measurable states lie between [0, 1]. States that are not directly measurable could have Kalman gains beyond [0, 1].
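A sketch supporting this: the scalar update gain K = P/(P + R) always lies in (0, 1), but in the matrix case K = P Hᵀ (H P Hᵀ + R)⁻¹ converts measurement units into state units, so its entries can legitimately leave [0, 1] (the numbers below are made up):

```python
import numpy as np

# The claim "K is between 0 and 1" is only a SCALAR-case statement.
#  - scalar state, H = 1:  K = P / (P + R), always in (0, 1);
#  - matrix case: entries of K = P H' (H P H' + R)^{-1} can be negative or
#    exceed 1 without anything being wrong, because K mixes the units of
#    states and measurements.
P_s, R_s = 2.0, 0.5
K_scalar = P_s / (P_s + R_s)
print(0 < K_scalar < 1)                    # True

P = np.array([[4.0, 3.5], [3.5, 4.0]])     # correlated state covariance
H = np.array([[0.1, 0.0]])                 # a scaled measurement of state 1 only
R = np.array([[0.01]])
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
print(K.ravel())                           # entries well above 1
```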
• asked a question related to Linear Algebra
Question
I need to multiply the inverse of a matrix A by another matrix B. If B were a vector, I would simply solve the linear system Ax = B to get the product x = inv(A)B. With B being a matrix, I don't know the most efficient method to compute inv(A)B.
As your system matrix most probably is sparse, you might be better off with numerical linear algebra routines especially designed for large, sparse matrices. I did work on that in the late 80s on the Cray-2 machine, but there must be more streamlined ways to do it nowadays. (In those days one had to "cut" the lengths of the vectors in order to match the smartest size of sub-vectors for the linear algebra routines, but nowadays that will most surely be automatic, to a degree.) Talk with numerical linear algebra experts, and they will help you.
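Independent of sparsity, the basic recipe is: factor A once and solve with a matrix right-hand side, never forming inv(A) explicitly. A dense sketch with made-up sizes (SciPy's lu_factor/lu_solve; np.linalg.solve alone also accepts a matrix B):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Never form inv(A) explicitly: factor A once (LU), then solve against ALL
# columns of B at once -- both np.linalg.solve and lu_solve accept a matrix
# right-hand side.
rng = np.random.default_rng(11)
n, m = 200, 30
A = rng.standard_normal((n, n)) + n * np.eye(n)   # generically well-conditioned
B = rng.standard_normal((n, m))

X1 = np.linalg.solve(A, B)         # one call, matrix RHS
lu, piv = lu_factor(A)             # factor once...
X2 = lu_solve((lu, piv), B)        # ...then reuse for any number of RHS columns

print(np.allclose(X1, X2))         # True
print(np.allclose(A @ X1, B))      # True: X = inv(A) @ B without forming inv(A)
```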
• asked a question related to Linear Algebra
Question
If B(H) is the algebra of bounded linear operators acting on an infinite dimensional complex Hilbert space, then which elements of B(H) that can't be written as a linear combination of orthogonal projections ?
I found something more interesting here https://arxiv.org/pdf/1608.04445.pdf
• asked a question related to Linear Algebra
Question
What is the actual difference between a tensor and matrix? A lot of recent researchers are working on tensor optimization. I found a lot of differences related to multi-dimensional linear algebra and preserving transformations. My question is two-fold.
Is it only used because it provides us a more flexible representation for multi-dimension operations?
Or does it have performance differences during computation compared with the matrix operations?
The matrix is a second-order tensor. Here, all operations of tensor calculus are applicable.
• asked a question related to Linear Algebra
Question
What are the most important applications of linear algebra?
I agree with Dr Ali
• asked a question related to Linear Algebra
Question
Does someone know how to get an empirical equation from three-variable data?
The data are listed in the attached file.
without sin, sinh and tanh
y = 22.1343029549693 + 10.8292764965611*x + 0.296189687976693*x*z^2 + 27*z/x^2 - 2.11016161227361*x*z - 0.270179597887278*x^2
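The kind of fit behind such an empirical equation can be reproduced with ordinary least squares once candidate basis terms are chosen. A hedged sketch with synthetic (x, z, y) data standing in for the attached file (the data, the basis terms, and the true coefficients here are all hypothetical):

```python
import numpy as np

# Hypothetical (x, z, y) measurements standing in for the attached data file.
rng = np.random.default_rng(1)
x = rng.uniform(1, 5, 50)
z = rng.uniform(1, 5, 50)
y = 2.0 + 3.0 * x - 0.5 * x * z + 0.1 * z**2 + rng.normal(0, 0.01, 50)

# Choose candidate basis terms, then solve the linear least-squares problem.
basis = np.column_stack([np.ones_like(x), x, x * z, z**2])
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
assert np.allclose(coef, [2.0, 3.0, -0.5, 0.1], atol=0.1)
```

Symbolic-regression tools automate the choice of basis terms, but the final coefficient fit is still a linear least-squares problem like this one.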
• asked a question related to Linear Algebra
Question
If X is a quasi-Banach space without a separating dual that is not p-normable for some 0<p<1, is it possible for there to exist a p-norm making X into a p-Banach space such that this new space is not isomorphic to X? In other words, are there non-trivial examples of quasi-Banach envelopes?
If 0<p<1 then p-normability is equivalent to having Rademacher type p. So, if the ability to equip X with a p-norm implies that X has type p with its original quasi-norm, then X is its own p-Banach envelope.
This question is uninteresting in the case X has a separating dual as such spaces may be equipped with r-norms for all 0<r<=1. And, it may be that many of these are not equivalent to X's given quasi-norm.
I'm not sure of understanding the question. Nevertheless, in the case it is useful for someone, let me point out the following example: Let 0<p<q<1 and put X=l_p+L_q. The Banach envelope of X is l_1+0=l_1, which does not separate the points of X. The q-Banach envelope of X is l_q+L_q. So, X one-to-one embeds into its q-Banach envelope. In other words, the q-norm of l_q+L_q is a q-norm on X which is not equivalent to the p-norm X is equipped with.
• asked a question related to Linear Algebra
Question
I am looking at a pde of the form
D^2 u + f u = k u,
here D^2 denotes the Laplacian, and u and f are complex functions on IR^n. I want as much information as possible about each PDE. For example, how many solutions exist for each value of k, and properties of the solutions themselves, such as smoothness, boundedness, stationary points, etc. Can you recommend your favourite reference volumes for eigenvalue problems for PDEs? I have a bunch of resources on PDEs in general, so that is not what I am looking for. Instead, I am looking for the subtleties that mainly relate to eigenvalue problems.
I hope that the attached articles are useful in your research.
Best regards
• asked a question related to Linear Algebra
Question
How to prove, or where to find a proof of, the lower Hessenberg determinant shown in the two pictures uploaded here?
Dear All Colleagues
The final version has been accepted for publication in the Mathematica Slovaca. For details, please click at the website:
• asked a question related to Linear Algebra
Question
I understand the idea of Best Worst Method for multi criteria decision making
and i know that there is a solver to get the weights but i need to understand the mathematical equation and know how to solve it with my own self.
Can any one help me?
|w_B − a_Bj · w_j| ≤ ξ for all j
|w_j − a_jW · w_W| ≤ ξ for all j
Σ_j w_j = 1, w_j ≥ 0 for all j
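For what it's worth, the linear BWM model written above is a plain linear program: minimise ξ over the weights subject to those inequalities. A sketch with a small hypothetical 3-criterion instance (scipy is my choice of solver, not the thread's):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 3-criterion instance: a_B = best-to-others, a_W = others-to-worst.
a_B = np.array([1.0, 2.0, 4.0])
a_W = np.array([4.0, 2.0, 1.0])
n, best, worst = 3, 0, 2

# Variables: w_0, ..., w_{n-1}, xi.  Minimise xi.
c = np.zeros(n + 1)
c[-1] = 1.0
A_ub, b_ub = [], []
for j in range(n):
    # |w_B - a_Bj * w_j| <= xi  ->  two linear inequalities
    r = np.zeros(n + 1)
    r[best] += 1.0
    r[j] -= a_B[j]
    r[-1] = -1.0
    A_ub += [r, np.concatenate([-r[:n], [-1.0]])]
    b_ub += [0.0, 0.0]
    # |w_j - a_jW * w_W| <= xi  ->  two linear inequalities
    s = np.zeros(n + 1)
    s[j] += 1.0
    s[worst] -= a_W[j]
    s[-1] = -1.0
    A_ub += [s, np.concatenate([-s[:n], [-1.0]])]
    b_ub += [0.0, 0.0]

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=[[1.0] * n + [0.0]], b_eq=[1.0],
              bounds=[(0, None)] * (n + 1), method='highs')
w, xi = res.x[:n], res.x[-1]
assert res.success and abs(w.sum() - 1) < 1e-9
# This instance is fully consistent, so xi = 0 and w = (4/7, 2/7, 1/7).
assert xi < 1e-8 and np.allclose(w, [4/7, 2/7, 1/7], atol=1e-6)
```

For inconsistent comparison vectors the optimal ξ is positive, and its size is exactly the consistency indicator used in the BWM literature.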
Dear Nouran,
You can find the answer in this paper:
Best regards,
Sarbast
• asked a question related to Linear Algebra
Question
Many control theory scientists know almost nothing about controllability. Is the reason that, in technical disciplines, scientists have not learned much linear algebra and (functional) analysis?
Dear Danilo,
I just saw your answers and additional points of view. You are right concerning dimensionality. If the system is infinite-dimensional because of a time delay, then we may avoid the problem by taking a discrete-time representation and using standard controllability tools. Now, as you pointed out, if the system is governed by partial differential equations the problem is different and much harder, I guess. Having said that, let me raise the following *practical* question: in the case of high-dimensional (not to mention infinite-dimensional) systems, does it make sense, or better, is it really necessary to be able to place the system state anywhere in such a high-dimensional space? Perhaps the region of interest is a low-dimensional manifold where the controlled system should stay...
Regards.
• asked a question related to Linear Algebra
Question
Mathematical solution needed; please find the attached file.
computer vision , linear algebra , mathematics , grey level co-occurrence matrix
I hope that the attached article helps you to do your assignment.
Best regards
• asked a question related to Linear Algebra
Question
How is linear algebra used to represent nonlinear models in 3D game programming or in real coordinate space applications?
You need more advanced topics than linear algebra.
Different techniques in computer science are useful for doing the job. See, for example, the following article:
https://www.sciencedirect.com › topics › computer-science › nonlinear-transformation...
Best regards
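One concrete way linear algebra carries seemingly non-linear operations in 3D graphics is homogeneous coordinates: translation, which is affine rather than linear on R^3, becomes a single 4x4 matrix product on R^4, so whole transform chains compose by matrix multiplication. A small NumPy sketch (my own illustration):

```python
import numpy as np

# Homogeneous coordinates: a point (x, y, z) becomes (x, y, z, 1), and
# translation -- not linear on R^3 -- is a 4x4 matrix product on R^4.
def translation(tx, ty, tz):
    M = np.eye(4)
    M[:3, 3] = [tx, ty, tz]
    return M

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    M = np.eye(4)
    M[:2, :2] = [[c, -s], [s, c]]
    return M

p = np.array([1.0, 0.0, 0.0, 1.0])               # point (1, 0, 0), w = 1
M = translation(2, 0, 0) @ rotation_z(np.pi / 2) # rotate first, then translate
q = M @ p
assert np.allclose(q[:3], [2.0, 1.0, 0.0])
```

Perspective projection fits the same pattern (a 4x4 matrix followed by division by the w component), which is why GPUs are built around 4x4 matrix pipelines.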
• asked a question related to Linear Algebra
Question
Suppose each element of matrix A is a linear combination of n independent variables, and matrix B is formed by such n variables.
The question is, does there always exist T1,T2 that enable A=T1BT2?
If not, then how to formulate A and B (or add some constraints) so that there always exist T1,T2 that enable A=T1BT2?
You can use the direct approach, as explained in my previous answer.
See the attached file.
Best regards
• asked a question related to Linear Algebra
Question
Is there essentially a difference? Which one is optimal or low complexity? Is there a relation with the rank?
• asked a question related to Linear Algebra
Question
Hello,
I am using ILU factorization as the preconditioner of a Bi-CGSTAB solver for a linear system of equations Ax=b. The preconditioner M=(LU)^-1 is applied via triangular substitution, solving Ly=c (forward substitution) and Us=y (backward substitution). However, when A has zero diagonal elements (e.g. A(2,2) = 0), U will also have zero diagonal elements (U(2,2)=0), which makes the triangular solves break down.
How could I reorder my system of equation in order to tackle this problem?
According to your assumptions, we have n distinct equations.
To achieve this, proceed as follows:
Rearrange the equations as follows:
1). The coefficient of the first unknown in the first equation is not zero.
2).The coefficient of the second unknown in the second equation is not zero.
etc
n).The coefficient of the nth-unknown in the nth-equation is not zero.
The new augmented matrix guarantees your request.
Otherwise, if your matrix is full of zeros and det(A) =0, then the system has
i). No solution
or
ii) Infinite number of solutions.
Best regards
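The rearrangement described above can be sketched in code. A toy NumPy example (my own, with a hypothetical 3x3 system) that greedily permutes rows to obtain a zero-free diagonal; production sparse solvers use more robust matchings (e.g. MC64-style permutations) and fill-reducing orderings:

```python
import numpy as np

# Toy system with a zero on the diagonal: A[1,1] == 0, so ILU breaks down.
A = np.array([[4.0, 1.0, 0.0],
              [2.0, 0.0, 3.0],
              [0.0, 5.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])

# Greedy row permutation: for each column, pick the remaining row with the
# largest entry in that column (a crude form of partial pivoting; the greedy
# choice can fail on harder patterns, where a full matching is needed).
perm = []
remaining = list(range(3))
for col in range(3):
    row = max(remaining, key=lambda r: abs(A[r, col]))
    perm.append(row)
    remaining.remove(row)

A_p, b_p = A[perm], b[perm]
assert np.all(np.abs(np.diag(A_p)) > 0)      # zero-free diagonal
assert np.allclose(np.linalg.solve(A_p, b_p), np.linalg.solve(A, b))
```

Reordering the equations (with b permuted identically) leaves the solution unchanged, which is why this is safe to do before the ILU factorization.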
• asked a question related to Linear Algebra
Question
Hi everyone,
I have a matrix defined such that each row has a sum less than one. I want to know if there is a theorem or result saying that the spectral radius of this kind of matrix is always less than one.
Using simulation, I verified that this result holds (1,000,000 matrices simulated randomly such that the sum of each row is 0.99999), but I'm looking for a theoretical result.
Best regards,
Gershgorin disks work for columns too.
One can observe that A and A^T have the same spectrum.
Best wishes
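A quick numerical check of the Gershgorin/induced-norm argument (my own sketch; note the bound needs row sums of *absolute* values, which coincides with plain row sums for nonnegative matrices like those in the simulation described above):

```python
import numpy as np

# For any matrix, rho(A) <= ||A||_inf = max_i sum_j |a_ij| (Gershgorin /
# induced-norm bound).  So absolute row sums < 1 force rho(A) < 1; for
# nonnegative matrices that is the same as plain row sums < 1.
rng = np.random.default_rng(2)
A = rng.uniform(0, 1, (6, 6))
A /= A.sum(axis=1, keepdims=True) / 0.99999   # scale each row sum to 0.99999

rho = max(abs(np.linalg.eigvals(A)))
bound = np.abs(A).sum(axis=1).max()
assert rho <= bound + 1e-12 and rho < 1
```

With signed entries a plain row sum below one is not enough (e.g. a row like (10, -9.5) sums to 0.5 but has absolute sum 19.5), which is the caveat hiding behind the simulation result.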
• asked a question related to Linear Algebra
Question
Some classical methods in linear algebra, such as linear regression via linear least squares and singular-value decomposition, are pure linear algebra methods, while others, such as principal component analysis, were born from the marriage of linear algebra and statistics. To read and understand machine learning, you must be able to read and understand linear algebra. This book helps machine learning practitioners get on top of linear algebra, fast.
Thanks, very useful reference
• asked a question related to Linear Algebra
Question
If I have a MIMO system with a number of subsystems with a moderate amount of coupling sensitivity, and if those individual subsystems are BIBO stable, then under what conditions can the whole system be considered BIBO stable? Are there any necessary and sufficient conditions that need to be satisfied? An answer from the control-theoretic point of view would be most appreciated.
Biswajit,
In that case you are trying out some very old concepts afresh !
Not very relevant today perhaps (that's just my opinion, of course !), because control concepts themselves are a lot more mature and evolved today !!
Have you checked out total stability concepts, for instance ?
Cheers !
-Sanjay
• asked a question related to Linear Algebra
Question
Hi,
In minimizing the difference between two variables inside an absolute value, e.g. min |a-b|: how to make the term linear so that it can be solved by LP or MILP? Here a and b are free integer variables (they take positive and negative values).
To minimise |V-1|, where V is a positive continuous variable, just minimise x subject to x >= V-1 and x >= 1- V.
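That trick can be written out as a tiny LP. A sketch using scipy (my choice of tool), minimising |a - b| with b fixed at 5 for concreteness: introduce t with t >= a-b and t >= b-a, then minimise t.

```python
from scipy.optimize import linprog

# Variables: [a, t].  Minimise t subject to
#   a - t <= 5   (i.e. t >= a - 5)
#  -a - t <= -5  (i.e. t >= 5 - a)
# so at the optimum t = |a - 5|, driven to zero at a = 5.
res = linprog(c=[0, 1],
              A_ub=[[1, -1], [-1, -1]],
              b_ub=[5, -5],
              bounds=[(None, None), (0, None)],
              method='highs')
assert res.success and abs(res.fun) < 1e-9 and abs(res.x[0] - 5) < 1e-6
```

With both a and b as free (integer) decision variables the same two inequalities work unchanged in a MILP; the reformulation is exact because t is being minimised, so it settles onto max(a-b, b-a) = |a-b|.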
• asked a question related to Linear Algebra
Question
A simple example is explored in the attached pdf by building a matrix A with given user ratings, movie classifications, and weights, but we do not seem to recover our original data components; hence some exploration was done using Scilab (in the pdf). Would welcome constructive insights and comments.
Good morning. I have studied your PDF; allow me the following comments.
1. You construct the final matrix as the sum of two matrices, these partial matrices are NOT orthogonal. Consequently, you can NOT recover these partial matrices exactly from the SVD of the final matrix, because in the SVD ALL matrices of type $\sigma_i*u_i*v_i^t$ are orthogonal!
2. There are formal errors in your PDF file - for example, the element $la(3,3)$ does not exist.
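Point 1 can be verified numerically: the SVD's rank-1 terms are mutually orthogonal (in the trace inner product), while an arbitrary splitting A = B + C is not, so the SVD returns its own terms rather than B and C. A NumPy sketch (my own illustration):

```python
import numpy as np

# An arbitrary splitting A = B + C of rank-1 pieces is not what the SVD
# returns: the SVD's rank-1 terms sigma_i * u_i v_i^T are mutually orthogonal.
rng = np.random.default_rng(3)
B = np.outer(rng.standard_normal(4), rng.standard_normal(3))
C = np.outer(rng.standard_normal(4), rng.standard_normal(3))
A = B + C

U, s, Vt = np.linalg.svd(A)
A1 = s[0] * np.outer(U[:, 0], Vt[0])        # best rank-1 approximation of A
A2 = s[1] * np.outer(U[:, 1], Vt[1])

assert np.allclose(A1 + A2, A)              # A has rank 2, so two terms suffice
assert abs(np.trace(A1.T @ A2)) < 1e-9      # SVD terms are mutually orthogonal
```

Generically A1 differs from both B and C, which is exactly the effect observed in the PDF: the decomposition is recovered only up to the orthogonal SVD basis, not the original components.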
• asked a question related to Linear Algebra
Question
Help me please. What is the name of this matrix, which has the following properties:
- Binary matrix
- The binary code formed by the (k + 1) -th row of this matrix is equal to the binary code formed by the k-th row added to "1".
The matrix is as follows:
0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1
Thank you very much!
This matrix encodes all integers from 0 to 2^k-1, for given no. of rows 2^k and columns k, as binary strings.
I believe it can be named binary encoder matrix .
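Such a matrix is easy to generate programmatically. A small sketch (the function name is mine):

```python
import numpy as np

def counting_matrix(k):
    """Rows are the k-bit binary codes of 0 .. 2**k - 1, in increasing order."""
    return np.array([[(i >> (k - 1 - j)) & 1 for j in range(k)]
                     for i in range(2 ** k)])

M = counting_matrix(3)
assert M.shape == (8, 3)
assert list(M[5]) == [1, 0, 1]              # row 5 encodes binary 101

# Each row, read as a binary number, is the previous row plus one.
vals = M @ (2 ** np.arange(2, -1, -1))
assert list(vals) == list(range(8))
```

Reading each row against the weights (4, 2, 1) recovers the row index, which is the "previous row plus one" property stated in the question.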
• asked a question related to Linear Algebra
Question
It is well known that the performance of MMSE and ZF is pretty close over massive MIMO, and I've been trying to prove it in my simulation.
Though there's a problem about MMSE equalization,
I've been using 10*100 (10 transmitting antennas and 100 receiving antennas),
y = Hx + v
H'*y = H'Hx + H'v
xhat = M*H'y;
To simplify the notation, H = H'*H
For ZF: M_zf = H'inv(H*H') (pseudo inverse)
For MMSE: M_mmse = H'*inv(H*H'+ H*sigma_v2/sigma_x2)
If both algorithms' performance were close,
M_zf = M_mmse approximately.
However, when I implemented MMSE equalization, I also needed to consider the normalization; for ZF, on the other hand, it's unnecessary.
So the normalization was diag(gain*H)
Therefore, M_mmse_nor = M_mmse ./ diag(M_mmse * H)
Regardless of the normalization, the performance of MMSE and ZF should theoretically be similar over a 10*100 MIMO system, due to the H and H'
Looking forward to a discussion.
Thanks,
Dear Zeyang,
You can find in the literature elaborate comparisons between zero-forcing equalization and MMSE. The MMSE is more accurate and suits a wide range of channel state conditions expressed in S/N ratio. The zero-forcing method is an approximate method more suitable for high signal-to-noise ratio, where one can consider that the noise is much smaller than the received signal.
The MMSE works well for all signal-to-noise ratios, but it needs much more computational effort.
Best wishes
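A quick Monte-Carlo sketch of the point discussed above (my own code; BPSK symbols and these dimensions are assumptions): with 100 receive and 10 transmit antennas, H'H is well conditioned and the ZF and MMSE detectors nearly coincide.

```python
import numpy as np

# 10 Tx, 100 Rx Rayleigh channel; unit-power BPSK, noise variance sigma2.
rng = np.random.default_rng(4)
nt, nr, sigma2 = 10, 100, 0.1
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

x = rng.choice([-1.0, 1.0], nt)
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
y = H @ x + noise

G = H.conj().T @ H
x_zf = np.linalg.solve(G, H.conj().T @ y)                      # ZF detector
x_mmse = np.linalg.solve(G + sigma2 * np.eye(nt), H.conj().T @ y)  # MMSE detector

# Both recover the symbols, and the two soft estimates are nearly identical.
assert np.all(np.sign(x_zf.real) == x) and np.all(np.sign(x_mmse.real) == x)
assert np.linalg.norm(x_zf - x_mmse) / np.linalg.norm(x_zf) < 0.05
```

The MMSE estimate is roughly a shrunk version of the ZF one, by a factor of order sigma2 relative to the eigenvalues of H'H; with nr >> nt those eigenvalues concentrate around nr, so the shrinkage (and hence the need for normalization) is tiny.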
• asked a question related to Linear Algebra
Question
I'm trying to optimize a linear transform (3*3 matrix) by use of the PSO algorithm. The problem is that when I increase the swarm size, I get different solutions which are not even close to each other. Surprisingly, the best objective function value for each swarm size doesn't change significantly. I have defined 9 dimensions for the solution X(1), X(2), ..., X(9), since there are 9 entries in the linear transform matrix. I set the swarm size as follows: 9, 18, 27, 36, 45, 54, 63, 72, 81, 90. In the figure below I plotted each linear transform obtained by each swarm size. For this purpose I used the unit vectors of each linear transform. If the linear transforms were the same, their unit vectors should overlay onto each other. What could be the problem, any idea?
David E. Stewart Thanks for the tips, I appreciate your help. If you don't mind I explain a little more about the problem, I hope it comes handy. The transform I'm trying to find is called CAT (Chromatic Adaptation Transform). Typical values for such transform is as follows:
M = [0.7982 0.3389 -0.1371;
-0.5918 1.5512 0.0406;
0.0008 0.0239 0.9753]
little changes to each value above have meaningful impacts on my model. An optimized version of the matrix above could be like this:
M_Optimized = [1.2694 -0.0988 -0.1706;
-0.8364 1.8006 0.0357;
0.0297 -0.0315 1.0018]
Is there any possible tweaks to the algorithm parameter which help me to take care of little value changes? I'm afraid default particle search steps might be too large for this space. I use particleswarm command in MATLAB with default social and cognitive weights.
This transform is the main part of my model. This model tries to estimate some experimental data, therefore my objective function is the mean error of the model with respect to the experimental data. This objective function includes a non-linear function (the DE2000 color difference formula) and I suppose there are multiple local minima in the search space, and these points differ slightly from each other. What could I do to make sure I'm reaching the global minimum?
I'll try the tips you mentioned. I just gave these information to clear the situation. Your further suggestions are appreciated.
• asked a question related to Linear Algebra
Question
Suppose we have computed the list of coefficients for the regression $y = \sum_{k=1}^p a_k x_k$, where we have N measurement values, i.e. N vectors (y, X=[x1,...,xp]). $y$ is the objective, while the x_k are the variables. The a_k coefficients are obtained e.g. with the least-squares method.
When we receive a new set of measurements, e.g. M additional vectors (y',X'), is it possible to compute, or nicely approximate, the updated coefficients a_k?
One common component here is the product X'X which enters into the term to be inverted or manipulated in some way. (Called sum of squares.)
When you add a few rows, the elements of the new sum of squares matrix are the old elements plus the new sums of squares from the added rows.
See the attached small example; "yellow" = original data; "blue" = new data
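The update described above can be sketched directly: keep the running sums X'X and X'y, add the new rows' contributions, and re-solve the normal equations. A NumPy illustration with synthetic data (my own; "yellow" corresponds to X1/y1, "blue" to X2/y2):

```python
import numpy as np

# Original data (N = 50 rows) and newly arrived data (M = 10 rows).
rng = np.random.default_rng(5)
X1, y1 = rng.standard_normal((50, 3)), rng.standard_normal(50)
X2, y2 = rng.standard_normal((10, 3)), rng.standard_normal(10)

# Incremental update: add the new sums of squares to the old ones.
XtX = X1.T @ X1 + X2.T @ X2
Xty = X1.T @ y1 + X2.T @ y2
a_updated = np.linalg.solve(XtX, Xty)

# Reference: refit on all N + M rows from scratch.
X_all, y_all = np.vstack([X1, X2]), np.concatenate([y1, y2])
a_batch, *_ = np.linalg.lstsq(X_all, y_all, rcond=None)
assert np.allclose(a_updated, a_batch)
```

The incremental answer is exact, not an approximation; recursive least squares goes one step further and updates the inverse of X'X itself (via the Sherman-Morrison formula) so that no solve is needed per new row.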
• asked a question related to Linear Algebra
Question
Greetings,
Completing my Bachelors in Engineering this June '19, I thought I'd start with a Masters/PhD in Gravitational Physics this fall, but I received rejections from almost every graduate school I applied to. Where I did receive an offer, I won't be able to pay the tuition fees.
Of course I knew that to receive an offer, one needs to have some experience with the subject. With the engineering curriculum on one hand, I tried to manage my interests in gravity. From watching lecture videos by Frederic Schuller and Leonard Susskind, to reading books by Sean Carroll, to even doing a summer research internship on black hole geometries, I tried to gain experience with the subject.
I wish to understand relativity from a mathematical point of view.
" A good course in more abstract algebra dealing with vector spaces, inner products/orthogonality, and that sort of thing is a must. To my knowledge this is normally taught in a second year linear algebra course and is typically kept out of first year courses. Obviously a course in differential equations is required and probably a course in partial differential equations is required as well.
The question is more about the mathematical aspect, I'd say having a course in analysis up to topological spaces is a huge plus. That way if you're curious about the more mathematical nature of manifolds, you could pick up a book like Lee and be off to the races. If you want to study anything at a level higher, say Wald, then a course in analysis including topological spaces is a must.
I'd also say a good course in classical differential geometry (2 and 3 dimensional things) is a good pre-req to build a geometrical idea of what is going on, albeit the methods used in those types of courses do not generalise. "
- Professor X
^I am looking for an opportunity to study all of this.
I would be grateful for any opportunity/guidance given.
Thanking you
PS: I really wanted to do Part III of the Mathematical Tripos from Cambridge University, but sadly my grades won't allow me to even apply :p
There are two sides to your problem: the practical and the ambitional. You will have to look after both. Recognize the practical issues but don't let go of your ambition. You may have to get a temporary job just to live, but that does not mean you give up on your dreams.
Your problem is not unique and has been overcome by famous scientists. Faraday started working for a bookbinder and ended as a revered scientist. His personal drive got him through. Dirac got a first degree in electrical engineering and ended as a revered theorist. Einstein worked early on in a Patent office and ended as a revered theorist. Other examples can be found, such as Ramanujan. Now there's a great example of talent beating disadvantage. So you see, it's not the end of the world if there are practical difficulties in your way at this time in your life. If you keep your spirits high, focused on what really interests you, you may succeed. It may be very hard, but don't give up.
You should understand that training is not enough. You have to practice being creative. Some people on this forum will probably disagree with the following suggestion, but have a go at writing a paper on a novel topic and seeing the reaction. It may take time to find a problem that you can work on, and you may very well get rejection. But having a go will teach you more than doing a lecture course on analysis. Papers do not all have to be in quantum field theory or relativity. Go on the arXives and see what sort of topics are viable for you. Most likely, at this stage, it might be in the General Physics section. But at least you might start from there.
Good luck in your ambition. Never give up.
George Jaroszkiewicz
• asked a question related to Linear Algebra
Question
I am looking for a reference book on linear algebra that uses linear maps instead of matrices and uses Einstein summation notation.
Regards,
Zubair
You can read the following book:
"Finite Dimensional Vector Spaces" by Paul R. Halmos; in this book the coordinate-free, axiomatic approach to linear algebra is presented.
• asked a question related to Linear Algebra
Question
This is an idea for probably a Master's Thesis project.
We know that we can approximate functions in a minimax senses, suitably sampled, using Linear Programming.
Where:
F is the function or series to be approximated.
A is the approximant or approximating function: a linear combination of basis functions such as a0 + a1*x +a2*x^2....
E is the error where the peaks (the maxima) are to be minimized and the result will be a series of equal peaks of generally alternating sign.
The equations look like this:
The inequalities in the linear program notation might look like this:
a01 + a11*x + a21*x^2 ...... + aN1*x^N - F1 <= E1
and, this would be rearranged to solve for the aij and the Ei
This is easy enough to program except that there are a lot of data points to be carried in the linear programming formulation.
And, it's not particularly computationally efficient.
An advantage is that one can include equality constraints:
a01 + a11*x + a21*x^2 ...... + aN1*x^N - F1 = 0
Of course, this reduces the number of degrees of freedom by one (each) and may not be feasible or desired in the context of the entire objective. i.e. if F is zero everywhere but the equality constraint is 10^6 then that would be counterproductive. But, if F is zero everywhere AND the equality constraint at xij is zero then the problem definition is consistent.
The same problem can be solved with the Remez Exchange algorithm, is likely easier to implement and is more computationally efficient (I believe). And, there is a Modified Remez Exchange algorithm that will allow equality constraints. (GC. Temes et al., "The Optimization of Bandlimited Systems", Proc. IEEE, vol. 61, No. 2, pp. 196-234, Feb., 1973..)
So, two different methods can be used to achieve the same outcome. That suggests that there's a relationship between the two. But how do they relate in an understandable / practical mathematical context? How are they similar? How are they different? What further extensions of the Remez algorithm might be suggested with this illumination?
I think my introduction threw things off track. The last part paraphrased here is the question I asked ... or ... intended to ask:
Two different methods(linear programming with a minimax objective/formulation and the Remez Exchange algorithm) can be used to achieve the same outcome. That suggests that there's a relationship between the two.
But how does the Remez Exchange algorithm (REA)and a minimax-contructed linear program (MCLP) relate in an understandable / practical mathematical context?
(I'm not able to express the relationship in this manner).
An: .......
How are they similar? How are they different?
A1: Without modification, the REA can't impose equality contraints while the MCLP can, fairly directly. Maybe that's a big deal and maybe not.
An: ......
What further extensions of the Remez algorithm might be suggested with this illumination?
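For reference, the minimax-constructed linear program (MCLP) half of the comparison can be written in a few lines. A sketch (scipy and the exp(x) target are my assumptions) fitting a degree-2 polynomial on [0, 1], with both signs of each residual bounded by E as the question describes:

```python
import numpy as np
from scipy.optimize import linprog

# Minimax fit of exp(x) on [0, 1] by a degree-2 polynomial, posed as an LP:
# variables (a0, a1, a2, E); minimise E subject to, for every sample x_i,
#   +(A(x_i) - f_i) <= E   and   -(A(x_i) - f_i) <= E.
xs = np.linspace(0.0, 1.0, 101)
f = np.exp(xs)
V = np.column_stack([np.ones_like(xs), xs, xs**2])   # Vandermonde basis

A_ub = np.vstack([np.hstack([V, -np.ones((len(xs), 1))]),
                  np.hstack([-V, -np.ones((len(xs), 1))])])
b_ub = np.concatenate([f, -f])
c = np.array([0.0, 0.0, 0.0, 1.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 3 + [(0, None)], method='highs')

a, E = res.x[:3], res.x[3]
err = V @ a - f
assert res.success and abs(np.abs(err).max() - E) < 1e-6
assert E < 0.02                 # the equioscillating error is of order 1e-2
```

Plotting `err` shows the equal-magnitude, alternating-sign peaks that the Remez exchange algorithm enforces explicitly; the LP reaches the same equioscillation implicitly through its optimality conditions, which is one concrete way to see the relationship asked about.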
• asked a question related to Linear Algebra
Question
Dear friends
I have a serious problem reproducing the results of the attached paper. I follow exactly the method (FEM) described in the paper, but I don't get correct results.
The paper is about gas face seals and the compressible zeroth- and first-order perturbed Reynolds equations. I solved the zeroth-order equation with FEM and FDM correctly and obtained the same results reported in the paper (opening force for various pressure ratios), and I solved the first-order equation at a pressure ratio of 1 (figures 5 and 6; I would be grateful if you could look at the paper), but for the other figures I cannot reproduce the paper's results. I tried many methods for solving the linear system of equations (Ax=b), such as direct methods and iterative methods (Gauss-Seidel, CGS, GMRES, BiCG, PCG, ...), but failed; I also tried many grid resolutions, without success.
So what should I do?
I really know my question is general, but I really don't know about the errors.
Thanks dear Ryan Vogt and Debopam Ghosh
• asked a question related to Linear Algebra
Question
I need to calculate the eigenvector corresponding to the highest eigenvalue of an ill-conditioned matrix (kappa > 10^15). Since my goal is not to search for maxima/minima nor to calculate all eigenvectors, I'm not sure whether the ill-conditioning may be an issue.
I think you are trying to compute the maximum eigenvalue and the corresponding eigenvector by the power method. Since only the largest eigenvalue matters there, the smallest eigenvalue being near zero will not create any problem in the convergence of the procedure.
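The point can be checked numerically: power iteration cares about the gap between the two largest eigenvalues, not the condition number. A NumPy sketch (my own construction of a symmetric matrix with kappa = 10^15):

```python
import numpy as np

# Symmetric matrix with eigenvalues spanning 15 orders of magnitude.
rng = np.random.default_rng(6)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
A = Q @ np.diag([1e3, 1.0, 1e-2, 1e-6, 1e-12]) @ Q.T   # kappa = 1e15

# Power iteration: convergence rate is |lambda_2/lambda_1| = 1e-3 here,
# so the huge condition number is irrelevant.
v = rng.standard_normal(5)
for _ in range(100):
    v = A @ v
    v /= np.linalg.norm(v)
lam = v @ A @ v                        # Rayleigh quotient

assert abs(lam - 1e3) / 1e3 < 1e-9
assert np.linalg.norm(A @ v - lam * v) < 1e-6
```

Ill-conditioning would matter for the *smallest* eigenpair (inverse iteration requires solving with A), but for the dominant pair the iteration above is robust.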
• asked a question related to Linear Algebra
Question
Dear all
I need an efficient algorithm to calculate A^T A when A is a sparse matrix with unknown structure.
I have same problem
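In case it helps, scipy's sparse matrix product handles A^T A for arbitrary (unknown) sparsity structure out of the box (a sketch, assuming Python/scipy, which the question doesn't specify):

```python
import numpy as np
import scipy.sparse as sp

# Random sparse matrix with no particular structure; CSR/CSC sparse-sparse
# multiplication computes A^T A without ever densifying.
rng = np.random.default_rng(7)
A = sp.random(1000, 200, density=0.01, format='csr', random_state=7)

AtA = (A.T @ A).tocsr()                # sparse-sparse product, stays sparse

assert AtA.shape == (200, 200)
dense_check = A.toarray().T @ A.toarray()
assert np.allclose(AtA.toarray(), dense_check)
```

Under the hood this is a row-by-row sparse accumulator (SpGEMM); if only the action of A^T A on vectors is needed (e.g. inside an iterative solver), it is usually cheaper to skip forming the product and apply A then A^T.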
• asked a question related to Linear Algebra
Question
In the theory of the stability of the differential operators, one could prove the stability results based on spectra of an operator, (all eigenvalues must be negative for example).
One problem with the above method is that not all linear operators are self-adjoint (for example, operators in convection-diffusion form) and their corresponding eigenvalue problems cannot be solved analytically, hence the spectrum of the operator cannot be calculated analytically. On the other hand, there is a notion related to the spectrum, called the pseudospectrum, that somehow approximates the spectrum, even for non-self-adjoint operators.
I want to know is it possible to establish stability results for a differential operator based on pseudo-spectra?
The question you ask is a very broad one. Since differential operators are unbounded, the domain on which one defines the operator is important. When one goes to infinite dimensions, there might not even be a point spectrum (eigenvalues) - the spectrum may consist only of the continuous spectrum and residual spectrum. If the operator has a self-adjoint extension or a normal extension then the problem becomes more tractable, but for non-normal operators the problem is quite complex.
At one time a class of operators, the "spectral operators", was defined - see Dunford and Schwartz, "Linear Operators", vol. III - to address the issues with non-normal operators. Proving that a non-normal operator - even an ordinary differential operator - is spectral was not a trivial task. The goal of this effort was to extend the concept of Jordan canonical form first to bounded linear operators on a Hilbert/Banach space and then to unbounded linear operators, e.g., differential operators. For compact operators or operators with compact resolvents, that has been solved. For bounded normal and unitary operators, that has been solved (spectral theorem). For unbounded operators with a compact resolvent, that has been pretty much solved (by using the spectral theorem). For unbounded self-adjoint operators, there is also a spectral theorem. However, for non-normal bounded operators and non-self-adjoint unbounded operators there is no general result on a canonical form.
In these problems, stability questions can be quite problem-specific.
• asked a question related to Linear Algebra
Question
Hello, I need to transform the initial reachabiltiy Matrix into the final reachability Matrix. I already did some research but I don't get the approach. Can someone help me?
Good Luck.
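In the ISM literature, the final reachability matrix is the transitive closure of the initial one, so Warshall's algorithm (or repeated Boolean multiplication of the matrix with itself) performs the transformation. A small NumPy sketch with a hypothetical 3x3 initial matrix:

```python
import numpy as np

def transitive_closure(R):
    """Warshall's algorithm: R[i,j] becomes 1 if j is reachable from i."""
    R = R.astype(bool).copy()
    n = len(R)
    for k in range(n):
        # If i reaches k and k reaches j, then i reaches j.
        R |= np.outer(R[:, k], R[k, :])
    return R.astype(int)

# Hypothetical initial reachability matrix (1s on the diagonal, as in ISM).
R0 = np.array([[1, 1, 0],
               [0, 1, 1],
               [0, 0, 1]])
R_final = transitive_closure(R0)
assert R_final[0, 2] == 1            # 0 reaches 2 through 1 (transitivity)
assert np.array_equal(transitive_closure(R_final), R_final)  # already closed
```

The entries that flip from 0 to 1 are usually marked 1* in ISM write-ups to show they were added by transitivity rather than stated directly.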
• asked a question related to Linear Algebra
Question
I know how to plot a 2D real vector. Let's say I have a vector 'a' represented in a row matrix, a = [1 1]. I can plot it in xy plane and I will be getting a line having the equation x=y.
Similarly, I know how to plot a complex number. Let's say I have a complex number z=a+ib. I can plot in real-imag plane and I will be getting a similar line having the equation x=y.
But I don't know how to plot a complex vector 'c' represented in a row matrix, c = [1+i 1+i].
Kindly guide in plotting other alternate complex vectors such as [1+i 1-i], [1 i], [i 1]
Dear Hakeem,
V = ( a+ib, c+id,e+if)= (a,c,e)+i(b,d,f):
Any complex component corresponds to a point in R^2
A vector of 2 complex components requires a representation
in R^2 X R^2 which is homeomorphic to R^4.
A vector of 3 complex components requires a representation
in R^2 X R^2 XR^2 which is homeomorphic to R^6.
The geometry of R^4 or R^6 is not attainable in general as simple as R^3 ,
but we can imagine their projections.
For physics, and to tackle some problems in physics such as electromagnetics when dealing with (sinusoidal) fields, the American physicist J. Willard Gibbs (1880) invented the idea of the bivector, where the complex vector splits into two real vectors as follows: the complex vector
V = ( a+ib, c+id,e+if)= (a,c,e)+i(b,d,f)
where the real part of V=Re(V)=(a,c,e)
and the imaginary part of V = Im(V) = (b,d,f)
(following your question, V = (a+ib, c+id) = (a,c) + i(b,d))
and applying this definition, it is easily proved that :
"all multilinear identities valid for real vectors are also valid for
complex vectors"
The geometry and algebra of such representation are available in the attached paper, hope you find it useful in your research.
Best regards
PS.
The suggested presentation is not unique, we can use the tools of differential geometry to tackle such complex vectors based on the mathematical model under consideration.
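Following the bivector idea above, a complex 2-vector can be drawn as its two real vectors Re(c) and Im(c) in the plane. A matplotlib sketch (my own; the output file name is arbitrary) for c = [1+i, 1-i]:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')                # render off-screen
import matplotlib.pyplot as plt

# Split c into its two real vectors (Gibbs' bivector) and draw each arrow.
c = np.array([1 + 1j, 1 - 1j])
re, im = c.real, c.imag              # Re(c) = (1, 1), Im(c) = (1, -1)

fig, ax = plt.subplots()
ax.quiver([0, 0], [0, 0], [re[0], im[0]], [re[1], im[1]],
          angles='xy', scale_units='xy', scale=1, color=['C0', 'C3'])
ax.set_xlim(-2, 2); ax.set_ylim(-2, 2)
ax.set_title('Re(c) in blue, Im(c) in red')
fig.savefig('complex_vector.png')    # arbitrary output file name

assert np.allclose(re, [1, 1]) and np.allclose(im, [1, -1])
```

The other examples in the question ([1, i], [i, 1], ...) work the same way: each gives a pair of real arrows, and a complex phase rotation of c rotates the Re/Im pair together.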