Science topic

# Linear Algebra - Science topic

Explore the latest questions and answers in Linear Algebra, and find Linear Algebra experts.

Questions related to Linear Algebra

For my current research, I am trying to find applications for the following two problems:

- rank-estimation of singular dense symmetric / Hermitian matrices;

- minimum-norm solution x⋆ for dense least-squares problems min ∥b − Ax∥₂ when b is not in the range of A, A is symmetric / Hermitian and rank(A) < min(m, n).

How does this ionospheric-free linear combination work?

Given D_S = diag{1_S}, which is a vertex-limiting operator, where 1_S is an indicator (characteristic) vector. This D_S is decomposed as D_S = P_S P_S^T, where P_S is a coordinate vector. Is this an orthogonal vector? Please let me know the linear algebra behind this.

Why can't the passive elements shift the DC potential to some frequency that the input signal contains?

Thank You

Given the a, b and c vectors of a general crystal structure (not necessarily cubic; it can be, say, monoclinic), is there a general rotation-matrix formula for taking the (001) surface of the bulk to any other surface? Here is a specific case: if I wanted to rotate bulk monoclinic Ga2O3 so that the top surface (001) becomes the (-201) face, how would I do that? I am familiar with general rotation matrices about a general axis from linear algebra, but I'm not sure how to apply this to this system. It would be nice to know a general rotation-matrix formula for rotating a crystal from one orientation to another regardless of the structure. I appreciate any help in advance.
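The linear-algebra part can be sketched as follows: once the two surfaces are expressed as Cartesian surface normals (converting Miller indices (hkl) to normals via the reciprocal lattice vectors is assumed to happen beforehand and is not shown), the rotation taking one normal onto the other is a Rodrigues rotation about their common perpendicular. A minimal sketch, with invented helper names:

```python
import numpy as np

def rotation_matrix(axis, theta):
    """Rodrigues' formula: rotation by angle theta about a (unit) axis."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])        # cross-product matrix
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def align(u, v):
    """Rotation taking unit vector u onto unit vector v:
    rotate about the axis u x v by the angle between them."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    axis = np.cross(u, v)
    if np.linalg.norm(axis) < 1e-12:                # parallel or antiparallel
        if u @ v > 0:
            return np.eye(3)
        # antiparallel: rotate 180 degrees about any axis perpendicular to u
        perp = np.cross(u, [1.0, 0.0, 0.0])
        if np.linalg.norm(perp) < 1e-12:
            perp = np.cross(u, [0.0, 1.0, 0.0])
        return rotation_matrix(perp, np.pi)
    theta = np.arccos(np.clip(u @ v, -1.0, 1.0))
    return rotation_matrix(axis, theta)
```

For the crystal case one would feed `align` the Cartesian normals of the (001) and (-201) faces; note that the rotation aligning two normals is only unique up to a spin about the final normal, so additional in-plane constraints may be needed.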

Dear Researchers,

In linear algebra, what does it actually mean when it is said that two matrices span the same space?

For example, if matrix A spans the same space as matrix B, does that mean A = B? Or

what does this information (i.e., spanning the same space) tell about the relationship between A and B?

I appreciate your response and your clarifications
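For what it's worth, "spanning the same space" can be tested numerically: A and B span the same column space exactly when neither adds a new direction to the other, i.e. rank(A) = rank(B) = rank([A | B]). A sketch (the example matrices are invented):

```python
import numpy as np

def same_column_space(A, B, tol=1e-10):
    """A and B span the same column space iff
    rank(A) == rank(B) == rank([A | B])."""
    rA = np.linalg.matrix_rank(A, tol)
    rB = np.linalg.matrix_rank(B, tol)
    rAB = np.linalg.matrix_rank(np.hstack([A, B]), tol)
    return rA == rB == rAB

# Two different matrices whose columns span the same plane in R^3:
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
B = np.array([[1.0, 1.0], [1.0, -1.0], [0.0, 0.0]])
```

Here `same_column_space(A, B)` is true even though A != B, which is exactly the point: spanning the same space relates the subspaces, not the entries.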

I recently asked in math stackexchange a question regarding 3D rotations in geometric algebra https://math.stackexchange.com/questions/3922021/angle-and-plane-of-rotation-in-3d-geometric-algebra.

I've added a new question regarding 4D or even n-D rotations and rotors. The reformulated question is as follows:

"In general, a rotor in 4D consists of a scalar, 6 bivectors and one four-vector (in 3D, a rotor is just composed of a scalar and 3 bivectors). Then, assume that we already know two 4-dimensional vectors and one is a rotated version of the other. How is it possible to derive an expression to compute the associated rotor? does it exist? If so, could a general expression for n dimensions be obtained?"

For example, I have two vectors **A** and **B** in 2D rectangular coordinates (x, y). I can calculate the scalar (dot) product as (**A**, **B**) = A_x B_x + A_y B_y. In polar coordinates (r, phi), it will be (**A**, **B**) = A_r B_r + A_phi B_phi, since these coordinates are orthogonal and normalized. If I want to make the transition, what should I write?

1) A_r B_r + A_phi B_phi = (A_x^2 + A_y^2)^0.5 (B_x^2 + B_y^2)^0.5 + atan(A_y/A_x) atan(B_y/B_x), without the Lame coefficient, or

2) A_r B_r + A_phi B_phi = (A_x^2 + A_y^2)^0.5 (B_x^2 + B_y^2)^0.5 (1 + atan(A_y/A_x) atan(B_y/B_x)), with the Lame coefficient.

And finally, both of these cases differ from (**A**, **B**) = A_x B_x + A_y B_y. How can this inconsistency be explained?

How does one linearize any of these surface functions (separately) near the origin?

I have attached the statement of the question, both as a screenshot, and as well as a PDF, for your perusal. Thank you.

Consider a matrix A which has vectors v1 = [1;0;0] and v2 = [1;1;0], i.e. this matrix A is spanned by the vectors v1 and v2.

The rank of this matrix A is 2. By definition, the rank of a matrix is the number of independent vectors, i.e. the dimension of the row space.

Seeing A = {v1, v2} with a cardinality of 2, can we say that the cardinality is the same as the rank of the matrix, which in turn gives the number of independent vectors spanning A?
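A quick numerical check of the statement above (assuming v1 and v2 are taken as the columns of A):

```python
import numpy as np

# The two spanning vectors from the question, stacked as columns of A
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([1.0, 1.0, 0.0])
A = np.column_stack([v1, v2])

# The rank equals the number of linearly independent (column) vectors
rank = np.linalg.matrix_rank(A)
```

Since v1 and v2 are linearly independent, the rank is 2, matching the cardinality of the spanning set; the two numbers would differ if the spanning set contained dependent vectors.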

Are there any conditions under which the difference between two matrices, i.e. **A** - **B**, will be invertible? In particular, I have a positive definite matrix **A**, but **B** is a square matrix, not necessarily symmetric. However, **B** has the form **MP**^{-1}**N**, with **P** a square invertible matrix and **M** and **N** arbitrary matrices of appropriate dimensions.

I have confirmed that the Hessenberg determinant whose elements are the Bernoulli numbers $B_{2r}$ is negative. See the picture uploaded here. My question is: What is the accurate value of the Hessenberg determinant in equation (10) in the picture? Can one find a simple formula for it? Perhaps it is easy for you, but right now it is difficult for me.

I have derived a formula for computing a special Hessenberg determinant. See the picture uploaded here. My question is: Can this formula be simplified more concisely, more meaningfully, and more significantly?

Dear friends:

In some calculation in control theory, I need to show that the following matrix

E = I - (C B)^{-1} B C

is a singular matrix. Here, B is an (n X 1) column vector and C is a (1 X n) row vector. Also, I is the identity matrix of order n, so the matrix E is well-defined.

I have verified this by trying many examples from MATLAB, but I need a mathematical proof.

This is perhaps a simple calculation in linear algebra, but I don't see it!

Any help on this is highly appreciated. Thanks.
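One short argument: since CB is a scalar, E B = B - (CB)^{-1} B (C B) = B - B = 0, so the non-zero vector B lies in the null space of E, and E must be singular. A numpy check of this (the particular B and C are invented test data):

```python
import numpy as np

n = 5
B = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])   # (n x 1) column vector
C = np.ones((1, n))                                  # (1 x n) row vector
cb = float(C @ B)                                    # CB is a scalar (here 15)

E = np.eye(n) - (B @ C) / cb                         # E = I - (CB)^{-1} B C

# Since CB is a scalar, E @ B = B - (CB)^{-1} B (C B) = B - B = 0,
# so B is a non-zero null vector of E and det(E) = 0:
EB = E @ B
```

In fact (CB)^{-1} B C is a rank-1 projection onto span{B}, so E has eigenvalue 0 once and eigenvalue 1 with multiplicity n - 1, i.e. rank(E) = n - 1.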

I have a question about resizing complex arrays.

I need to resize a complex-valued array with an interpolation method.

I tried scikit-image, but it doesn't support complex data types.

I also tried to resize with cv2, and that didn't work either,

even with the real and imaginary values handled separately.

Is there any solution to this?
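One workaround that usually suffices is to interpolate the real and imaginary parts separately with any real-valued resampler and recombine them. A 1-D numpy sketch (for 2-D images the same real/imaginary split should work with a 2-D resampler such as scipy.ndimage.zoom, though that is an assumption about your setup):

```python
import numpy as np

def resize_complex_1d(z, new_len):
    """Linearly interpolate a complex 1-D array to a new length by
    resampling the real and imaginary parts separately."""
    z = np.asarray(z, dtype=complex)
    old_x = np.linspace(0.0, 1.0, len(z))
    new_x = np.linspace(0.0, 1.0, new_len)
    re = np.interp(new_x, old_x, z.real)   # real part, real-valued interpolation
    im = np.interp(new_x, old_x, z.imag)   # imaginary part, same grid
    return re + 1j * im

z = np.array([1 + 1j, 2 + 0j, 3 - 1j])
z2 = resize_complex_1d(z, 5)
```

Note this treats magnitude/phase only implicitly; if phase continuity matters (e.g. wrapped interferometric phase), interpolating real and imaginary parts is usually still the safer choice over interpolating the angle directly.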

When I search the articles to find the correlation coefficient between two variables, I get only regression equations showing the relationship between the two variables.

For example: total length of femur = 32.19 + 0.16 (segment 1).

From this equation, can I calculate the value of the correlation coefficient between total length of femur and segment 1?

Dear all,

As we know, interval matrices are 0-1 matrices with the property that the ones in each column (row) are contiguous. Interval matrices are totally unimodular (TU). Hence, integer programming (IP) problems with such matrices of technical coefficients can be solved as linear programming problems.

However, in the consecutive-ones with wrap around, the ones are wrapped.

For instance, in the following matrix, the ones are wrapped in columns 4 and 5:

1 0 0 1 1

1 1 0 0 1

1 1 1 0 0

0 1 1 1 0

0 0 1 1 1

This example is not TU with a sub-matrix with determinant 2 (deleting rows and columns 2 and 4).

Two questions:

1. When does wrapping not violate the TU property?

2. Is there a general approach to efficiently solve IP problems whose matrix of technical coefficients has consecutive ones with wrap-around?

Thank you for your kind help!

30 years ago, on April 1, 1991, A. K. Lenstra announced the factorization of the RSA-100 challenge.

RSA-100 = 1522605027922533360535618378132637429718068114961380688657908494580122963258952897654000350692006139

Today, it takes less than 2 hours to factorize this number in a Kali Linux VM with only 2 cores on a 2017 MacBook, using the CADO-NFS tool (https://gitlab.inria.fr/cado-nfs/cado-nfs).

Nice technology development...

--------------------------------------------------------------------------------------------------

Info:Linear Algebra: Aggregate statistics:

Info:Linear Algebra: Krylov: CPU time 274.68, WCT time 81.91, iteration CPU time 0.01, COMM 0.0, cpu-wait 0.0, comm-wait 0.0 (5888 iterations)

Info:Linear Algebra: Lingen CPU time 18.94, WCT time 4.87

Info:Linear Algebra: Mksol: CPU time 149.01, WCT time 45.63, iteration CPU time 0.01, COMM 0.0, cpu-wait 0.0, comm-wait 0.0 (2944 iterations)

Info:Square Root: Total cpu/real time for sqrt: 69.39/20.0587

Info:Filtering - Duplicate Removal, removal pass: Total cpu/real time for dup2: 22.99/14.251

Info:Filtering - Duplicate Removal, removal pass: Aggregate statistics:

Info:Filtering - Duplicate Removal, removal pass: CPU time for dup2: 12.899999999999999s

Info:Polynomial Selection (root optimized): Aggregate statistics:

Info:Polynomial Selection (root optimized): Total time: 70.76

Info:Polynomial Selection (root optimized): Rootsieve time: 70.33

Info:Polynomial Selection (size optimized): Aggregate statistics:

Info:Polynomial Selection (size optimized): potential collisions: 5781.44

Info:Polynomial Selection (size optimized): raw lognorm (nr/min/av/max/std): 5779/32.780/37.835/38.680/0.701

Info:Polynomial Selection (size optimized): optimized lognorm (nr/min/av/max/std): 3330/32.780/36.367/38.650/1.003

Info:Polynomial Selection (size optimized): Total time: 125.17

Info:HTTP server: Shutting down HTTP server

Info:Complete Factorization / Discrete logarithm: Total cpu/elapsed time for entire factorization: 7221.06/2350.02

Info:root: Cleaning up computation data in /tmp/cado.433q3ve9

**40094690950920881030683735292761468389214899724061**

**37975227936943673922808872755445627854565536638199**

--------------------------------------------------------------------------------------------------

where matrices **A** and **B** are known while **X** and **Y** are to be solved. For example,

**A** = [a 0;
0 b];

**B** = [a a 0;
0 0 b;
0 0 b];

a and b are known elements.

It is easy to see that the solution is

**X** = t*[1 1 0;
0 0 1];

**Y** = (1/t)*[1 0;
0 1;
0 1];

where t is any non-zero real number.

But how can this solution be derived step by step in a systematic way?

It would be better if there is a programmable approach.

Data sets, when structured, can be put in vector form (v(1), ..., v(n)); adding time dependency, it's v(i, t) for i = 1...n and t = 1...T.

Then we have a matrix of terms v(i, j)...

Matrices are important: they can represent linear operators in finite dimension. Composing such operators f, g as f∘g translates into the matrix product F×G, with obvious notations.

Now a classical matrix M is a table of lines and columns, containing numbers or variables. Precisely at line i and column j of such table, we store term m(i, j), usually belonging to real number set R, or complex number set C, or more generally to a group G.

What about generalising such a matrix of numbers into a matrix of sets (in any field of science, this could mean storing all data collected for a particular parameter "m(i, j)" as a set M(i, j) of data)?

What can we observe, say, define on such matrices of sets?

If you are as curious as I am, in your own field of science or engineering, please follow the link below, and more importantly, give feedback here with comments, thoughts, and advice on how to take this further.

Ref:

Prove or disprove that for every field F of characteristic two, there exist vector spaces U and V over F and a mapping from U to V which is F-homogeneous but not additive.

I have a problem with the CANON function of different Matlab versions (e.g. 2007b, 2010b, 2011a) returning inconsistent results for the same call parameters (sysd = canon(sysd0,'modal')):

- the input/output relation stays the same - OK.

- the diagonal A-matrix is permuted slightly - still OK,

- the B and C matrices differ significantly.

I understand that the modal decomposition has multiple solutions; however, as far as I know, there is no useful control over the realization the CANON function chooses. Furthermore, in newer Matlab versions the canon function calls p-coded subfunctions, which makes them inaccessible.

The obtained model is used for tuning a predictive controller. A different B-matrix subsequently yields significantly different results for a certain set of tuning weights.

I have the original data generated using Matlab 2007. As I want to replicate it, and further publicly provide the code for performing a benchmark test, I need the CANON function to return consistent results regardless of the Matlab version used... or to find an alternative way of defining/normalizing the B and C matrices afterwards.

Is there any way to get consistent modal decomposition?

Is there a way to transform a similarity matrix from a high-dimensional space to a low-dimensional space and keep the same knowledge? For example, the attached matrix.

The mathematics behind the inverse of large sparse matrices is very interesting and widely used in several fields. Sometimes it is required to find the inverse of such matrices; however, doing so is computationally costly. I want to know the related research: when a single entry (or a few entries) of the original matrix is perturbed, how much does this affect the entries of the inverse of the matrix?
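One relevant classical result: perturbing a single entry (i, j) by delta is a rank-1 update A + delta·e_i e_j^T, and the Sherman-Morrison formula expresses the new inverse in terms of the old one, which also shows exactly how one perturbed entry spreads over every entry of the inverse. A minimal dense sketch (invented test data; for large sparse matrices one would of course never form the inverse explicitly):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
Ainv = np.linalg.inv(A)

# Perturb entry (i, j) by delta: A_new = A + u v^T with u = delta*e_i, v = e_j
i, j, delta = 2, 4, 0.5
u = np.zeros((n, 1)); u[i] = delta
v = np.zeros((n, 1)); v[j] = 1.0

# Sherman-Morrison:
# (A + u v^T)^{-1} = Ainv - (Ainv u)(v^T Ainv) / (1 + v^T Ainv u)
denom = 1.0 + float(v.T @ Ainv @ u)
Ainv_new = Ainv - (Ainv @ u) @ (v.T @ Ainv) / denom
```

The update term shows that entry (p, q) of the inverse changes by an amount proportional to Ainv[p, i] * Ainv[j, q], so the perturbation's effect is governed by the magnitude of the corresponding column i and row j of the existing inverse.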

Does the journal Linear Algebra and Its Applications accept manuscripts written in LaTeX only?

I want to learn how to calculate variance component.

I have basic knowledge of linear algebra.

Unfortunately, I still do not understand the processes behind the Henderson method, MIVQUE, REML, etc.

I am really confused about the changes in the quadratic forms, etc.

There are few resources on the Internet.

Can somebody suggest some basic courses or books for me?

I know that 3 vectors x, y, z in **R**^n, where the angles between them are 120°, are coplanar.

Dear Colleagues,

Where can one find the following formula (see the attached picture) for computing a special tridiagonal determinant? Please give a reference in which one can find, or from which one can derive, the formula shown in the picture. Thanks a lot.

Best regards

Feng Qi (F. Qi)

Dear Colleagues

Regarding the computation of the general tridiagonal determinant, I have a guess; see the PNG or PDF files uploaded with this message. Could you please supply a proof of (verify) the guess, or refute it? Anyway, thank you a lot.

Best regards

Feng Qi (F. Qi)

Hi all,

I have a basic doubt in linear algebra. The determinant of a 2x2 matrix can be interpreted as the area of a parallelogram. Similarly, what could be the physical interpretation of the characteristic equation and of the roots of the characteristic equation (the eigenvalues)?

This question has been bugging my mind for a long time now. Can anyone here enlighten me?

Regards,

Balaji

Prove that if W is a diagonal matrix having positive diagonal elements and size (2^n – 1)x(2^n – 1), and K is a matrix of size (2^n – 1)xn, then:

A = K'*(inv(W) - K*inv(K'*W*K)*K')*K

is a positive definite matrix.

Where:

K' - transpose of the matrix K

inv(W) - inverse of the matrix W

Using the Monte-Carlo method, I find that the matrix inv(W) - K*inv(K'*W*K)*K' can be negative definite.

Thank you so much for reading my question

I am looking forward to getting your response!

We have a stochastic dynamic model: X_{k+1} = f(X_k, u_k, w_k). We can design a cost function to be optimized using a dynamic programming algorithm. How do we design a cost function for this dynamic system to ensure stability?

In Chapter 4 of Ref. [a], for a quadratic cost function and a linear system (X_{k+1} = A X_k + B u_k + w_k), a proposition shows that under a few assumptions the quadratic cost function results in a stable fixed state feedback. However, I wonder how we can account for stability in the design of the cost function as a whole when we define the optimal control problem for a general nonlinear system. Can we use the notion of stability to design the cost function? Please share your ideas.

[a] Bertsekas, Dimitri P., et al. *Dynamic programming and optimal control*. Vol. 1. No. 2. Belmont, MA: Athena Scientific, 1995.

Is there any alternative topic/theory/mathematical foundation to compressed sensing (CS) theory?

CS theory succeeded the Nyquist criterion; is there any theory that surpasses CS theory?

Hi everyone,

I have implemented an EKF in a power systems application. When I run a simulation in Matlab, in some iterations of the filter I get a Kalman gain matrix (**K**) with negative values and/or absolute values greater than 1. In some books I have read that the Kalman gain is a real value between 0 and 1.

Is this correct? Or is it an indication that something is wrong with the Kalman filter?

I need to multiply the inverse of a matrix A by another matrix B. If B were a vector, I would simply solve the linear system Ax = B to get the product x = inv(A)B. With B being a matrix, I don't know the most efficient method to compute inv(A)B.

Kindly share your experience.
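In practice the usual approach is the same as in the vector case: factorize A once and back-substitute for every column of B, which most solvers do automatically when given a matrix right-hand side. A numpy sketch with invented data:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # invertible test matrix
B = rng.standard_normal((4, 3))                   # matrix right-hand side

# np.linalg.solve accepts a matrix B: it factorizes A once (LU) and
# back-substitutes for each column, avoiding the explicit inverse.
X = np.linalg.solve(A, B)                         # X = inv(A) @ B
```

This is both faster and numerically safer than forming inv(A) and multiplying; in MATLAB the analogous idiom would be the backslash operator, X = A\B.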

If B(H) is the algebra of bounded linear operators acting on an infinite dimensional complex Hilbert space, then which elements of B(H) that can't be written as a linear combination of orthogonal projections ?

What is the actual difference between a tensor and a matrix? A lot of recent research works on tensor optimization, and I found many differences related to multi-dimensional linear algebra and structure-preserving transformations. My question is two-fold.

Is it only used because it provides us a more flexible representation for multi-dimension operations?

Or does it have performance differences during computation compared with the matrix operations?

Does someone know how to get an empirical equation from three-variable data?

The data is listed in the attached file

Thanks in advance

If X is a quasi-Banach space without a separating dual that is not p-normable for some 0<p<1, is it possible for there to exist a p-norm making X into a p-Banach space such that this new space is not isomorphic to X? In other words, are there non-trivial examples of quasi-Banach envelopes?

If 0<p<1, then p-normability is equivalent to having Rademacher type p. So, if the ability to equip X with a p-norm implies that X has type p with its original quasi-norm, then X is its own p-Banach envelope.

This question is uninteresting in the case that X has a separating dual, as such spaces may be equipped with r-norms for all 0<r<=1. And it may be that many of these are not equivalent to X's given quasi-norm.

I am looking at a PDE of the form

D^2 u + f u = k u,

where D^2 denotes the Laplacian, and u and f are complex functions on IR^n. I want as much information as possible about each PDE: for example, how many solutions exist for each value of k, and properties of the solutions themselves, such as smoothness, boundedness, stationary points, etc. Can you recommend your favourite reference volumes for eigenvalue problems for PDEs? I have a bunch of resources on PDEs in general, so that is not what I am looking for. Instead, I am looking for the subtleties that relate specifically to eigenvalue problems.

Thank you for your time.

How can one prove, or where can one find a proof of, the lower Hessenberg determinant shown in the two pictures uploaded here?

I understand the idea of the Best Worst Method for multi-criteria decision making, and I know that there is a solver to get the weights, but I need to understand the mathematical formulation and know how to solve it by myself.

Can anyone help me?

|w_B - a_Bj w_j| ≤ ξ^L for all j

|w_j - a_jW w_W| ≤ ξ^L for all j

Σ_j w_j = 1, w_j ≥ 0 for all j

Many control theory scientists know nothing about controllability. Is this because, in technical disciplines, scientists have not learned much linear algebra and (functional) analysis?

A mathematical solution is needed; please find the attached file.

computer vision, linear algebra, mathematics, grey-level co-occurrence matrix

**How is linear algebra used to represent nonlinear models in 3D game programming or in real coordinate space applications?**

Suppose each element of matrix **A** is a **linear representation** of n independent variables, and matrix **B** is formed by those n variables.

The question is: do there always exist T_1, T_2 such that **A** = T_1 **B** T_2?

If not, then how should A and B be formulated (or what constraints should be added) so that there always exist T_1, T_2 with A = T_1 B T_2?

Is there essentially a difference? Which one is optimal or of low complexity? Is there a relation with the rank?

Hello,

I am using ILU factorization as the preconditioner of a Bi-CGSTAB solver for a linear system of equations Ax=b. The preconditioner M = (LU)^-1 is applied by forward and backward substitution, solving Ly=c and Us=y. However, when A has zero diagonal elements (e.g. A(2,2) = 0), U will also have zero diagonal elements (U(2,2) = 0), which makes the substitution impossible.

How could I reorder my system of equations to tackle this problem?
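One standard remedy is to permute the rows (applying the same permutation to b) so that the diagonal becomes zero-free, i.e. to find a maximum transversal, as MC64-style preprocessing does before ILU. A minimal pure-Python sketch using augmenting-path bipartite matching; the 3x3 matrix is an invented example with a zero pivot at position (1, 1):

```python
import numpy as np

def row_permutation_for_nonzero_diagonal(A):
    """Find a row permutation perm such that A[perm] has a zero-free
    diagonal, via augmenting-path matching (a tiny maximum transversal)."""
    n = A.shape[0]
    match_col_to_row = [-1] * n          # column k -> row placed at position k

    def try_assign(row, visited):
        for col in range(n):
            if A[row, col] != 0 and not visited[col]:
                visited[col] = True
                if match_col_to_row[col] == -1 or try_assign(match_col_to_row[col], visited):
                    match_col_to_row[col] = row
                    return True
        return False

    for row in range(n):
        if not try_assign(row, [False] * n):
            raise ValueError("matrix is structurally singular")
    return np.array(match_col_to_row)

# A has a zero at (1, 1), which would break the ILU pivots:
A = np.array([[0.0, 2.0, 1.0],
              [3.0, 0.0, 0.0],
              [0.0, 1.0, 4.0]])
perm = row_permutation_for_nonzero_diagonal(A)
A_perm = A[perm]                          # apply the same permutation to b
```

For production use, sparse ILU implementations typically offer fill-reducing and pivoting orderings of their own (e.g. scipy's spilu exposes ordering options), but the structural idea is the matching above.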

Hi everyone,

I have a matrix such that each row sums to less than one. I want to know if there is a theorem or result saying that the spectral radius of such matrices is always less than one.

Using simulation, I verified that this result holds (1,000,000 matrices simulated randomly such that the sum of each row is 0.99999), but I'm looking for a theoretical result.
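A caveat worth checking numerically: the result holds when the entries are nonnegative (or, more generally, when the row sums of absolute values are below one), because ρ(A) ≤ ‖A‖_∞ = max_i Σ_j |a_ij|. With signed entries and only the plain row sums below one, it can fail. A sketch showing both cases (random data, invented counterexample):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.random((n, n))
A = 0.99999 * A / A.sum(axis=1, keepdims=True)   # nonnegative, rows sum to 0.99999

# For nonnegative entries, rho(A) <= ||A||_inf = max row sum < 1:
rho = np.abs(np.linalg.eigvals(A)).max()

# With signed entries, plain row sums < 1 are NOT enough:
B = np.array([[0.5, -0.6],
              [-0.6, 0.5]])                      # each row sums to -0.1 < 1
rho_B = np.abs(np.linalg.eigvals(B)).max()       # eigenvalues 1.1 and -0.1
```

So if your simulated matrices were nonnegative, the theoretical result you want is simply ρ(A) ≤ ‖A‖_∞ (a consequence of Gershgorin's theorem, or of submultiplicativity of the induced infinity norm).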

Best regards,

Some classical methods used in the field of linear algebra, such as linear regression via linear least squares and singular-value decomposition, are linear algebra methods, and other methods, such as principal component analysis, were born from the marriage of linear algebra and statistics. To read and understand machine learning, you must be able to read and understand linear algebra. This book helps machine learning practitioners get on top of linear algebra, fast.

If I have a MIMO system with a number of subsystems with a moderate amount of coupling sensitivity, and those individual subsystems are BIBO stable, then under what conditions can the whole system be considered BIBO stable? Are there any necessary and sufficient conditions that need to be satisfied? An answer from the control-theoretic point of view would be most appreciated.

Hi,

When minimizing the difference between two variables inside an absolute value, e.g. min |a-b|, how can the term be made linear so that it can be solved by LP or MILP? Here a and b are free integer variables (they take positive and negative values).
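The standard reformulation introduces an auxiliary variable t with t ≥ a - b and t ≥ b - a, then minimizes t; at the optimum t = |a - b|. A minimal sketch of the LP relaxation using scipy.optimize.linprog (the constraint a + 2b = 7 is an invented example constraint just to make the problem non-trivial; the same reformulation carries over unchanged to a MILP solver when integrality is enforced):

```python
from scipy.optimize import linprog

# Minimize |a - b| subject to (an invented example constraint) a + 2b = 7.
# Epigraph trick: introduce t with t >= a - b and t >= b - a, minimize t.
# Variable order: x = [a, b, t]
c = [0.0, 0.0, 1.0]                       # objective: minimize t
A_ub = [[1.0, -1.0, -1.0],                #  a - b - t <= 0
        [-1.0, 1.0, -1.0]]                # -a + b - t <= 0
b_ub = [0.0, 0.0]
A_eq = [[1.0, 2.0, 0.0]]                  # a + 2b = 7
b_eq = [7.0]
bounds = [(None, None), (None, None), (0.0, None)]   # a, b free; t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
```

Because t is minimized and is bounded below by both a - b and b - a, one of the two inequalities is tight at the optimum, which is what makes the reformulation exact for minimization (it would not work if |a - b| were being maximized).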

A simple example is explored in the attached PDF by building a matrix from given user ratings, movie classifications and weights, but we do not seem to recover our original data components; hence some exploration was done using Scilab (in the PDF). I would welcome constructive insights and comments.

Help me please. What is the name of this matrix, which has the following properties:

- Binary matrix

- The binary code formed by the (k + 1)-th row of this matrix equals the binary code formed by the k-th row, plus 1.

The matrix is as follows:

0 0 0

0 0 1

0 1 0

0 1 1

1 0 0

1 0 1

1 1 0

1 1 1

Thank you very much!
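Whatever its proper name, the matrix is simply the table of n-bit binary representations of 0 ... 2^n - 1 (the rows of a truth table, sometimes called a binary counting matrix); a sketch to generate it:

```python
import numpy as np

def binary_counting_matrix(n):
    """Rows are the n-bit binary codes of 0 .. 2**n - 1;
    row k+1 is row k plus one, as described in the question."""
    rows = 2 ** n
    return np.array([[(k >> (n - 1 - bit)) & 1 for bit in range(n)]
                     for k in range(rows)])

M = binary_counting_matrix(3)   # reproduces the 8 x 3 matrix above
```

Row k is just the number k written in base 2 with n digits, which is why consecutive rows differ by an increment of one.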

It is common knowledge that the performance of MMSE and ZF is pretty close in massive MIMO, and I've been trying to show this in my simulation.

However, there's a problem with the MMSE equalization.

I've been using a 10*100 setup (10 transmitting antennas, 100 receiving antennas):

y = Hx + v

H'*y = H'*H*x + H'*v

xhat = M*H'y;

To simplify the notation, H = H'*H

For ZF: M_zf = H'inv(H*H') (pseudo inverse)

For MMSE: M_mmse = H'*inv(H*H'+ H*sigma_v2/sigma_x2)

If both algorithms' performance were close,

M_zf = M_mmse approximately.

However, when I implemented MMSE equalization, I also needed to consider normalization; for ZF, on the other hand, it's unnecessary.

So the normalization was diag(gain*H)

Therefore, M_mmse_nor = M_mmse ./ diag(M_mmse * H)

Regardless of the normalization, the performance of MMSE and ZF should theoretically be similar over a 10*100 MIMO system due to the structure of H and H'.

Looking forward to a discussion.
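A minimal numpy check of the "massive MIMO makes ZF ≈ MMSE" intuition, using the standard MMSE filter inv(H^H H + (σ_v²/σ_x²) I) H^H (note this differs slightly from the regularizer written above); the dimensions and noise level are invented assumptions. With n_rx ≫ n_tx, H^H H concentrates around n_rx·I, so the small regularization term barely changes the inverse:

```python
import numpy as np

rng = np.random.default_rng(4)
n_tx, n_rx = 10, 100
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
sigma_v2, sigma_x2 = 0.01, 1.0

# ZF: pseudo-inverse;  MMSE: regularized inverse
M_zf = np.linalg.pinv(H)
M_mmse = np.linalg.inv(H.conj().T @ H
                       + (sigma_v2 / sigma_x2) * np.eye(n_tx)) @ H.conj().T

rel_diff = np.linalg.norm(M_zf - M_mmse) / np.linalg.norm(M_zf)
```

Here the eigenvalues of H^H H sit near n_rx = 100 while the regularizer is only 0.01, so the two equalizers agree to roughly one part in 10^4; at low SNR (large σ_v²) or with n_rx close to n_tx they would diverge.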

Thanks,

I'm trying to optimize a linear transform (3*3 matrix) using the PSO algorithm. The problem is that when I increase the swarm size, I get different solutions which are not even close to each other. Surprisingly, the best objective function value for each swarm size does not change significantly. I have defined 9 dimensions for the solution, X(1), X(2), ..., X(9), since there are 9 entries in the linear transform matrix. I set the swarm size as follows: 9, 18, 27, 36, 45, 54, 63, 72, 81, 90. In the figure below I plotted the linear transform obtained for each swarm size, using the unit vectors of each linear transform. If the linear transforms were the same, their unit vectors would overlay one another. What could be the problem? Any idea?

Suppose we have computed the list of coefficients for the regression $y = \sum_{k=1}^p a_k x_k$, where we have N measurement values, i.e. N vectors $(y, X = [x_1, \dots, x_p])$. $y$ is the objective, while the $x_i$ are the variables. The $a_k$ coefficients are obtained e.g. with the least squares method.

When we receive a new set of measurements, e.g. M additional vectors (y', X'), is it possible to compute, or nicely approximate, the updated coefficients $a_k$?
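For least squares, yes, and exactly rather than approximately: each new sample is a rank-1 update of the normal equations, and recursive least squares (RLS) folds it into the coefficients and the inverse Gram matrix without refitting from scratch. A sketch assuming a single-output linear model with invented data:

```python
import numpy as np

def rls_update(a, P, x, y):
    """One recursive-least-squares step: fold a new sample (x, y) into
    the coefficients a and the inverse Gram matrix P = (X^T X)^{-1}."""
    x = x.reshape(-1, 1)
    K = P @ x / (1.0 + float(x.T @ P @ x))        # gain vector
    a = a + K.ravel() * (y - float(x.ravel() @ a))
    P = P - K @ (x.T @ P)                          # Sherman-Morrison update
    return a, P

# Batch fit on N samples, then fold in M more, one at a time:
rng = np.random.default_rng(5)
p, N, M = 3, 50, 10
X = rng.standard_normal((N + M, p))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(N + M)

P = np.linalg.inv(X[:N].T @ X[:N])
a = P @ X[:N].T @ y[:N]
for k in range(N, N + M):
    a, P = rls_update(a, P, X[k], y[k])
```

The final coefficients coincide (up to rounding) with a full refit on all N + M samples, at O(p²) cost per new sample instead of O(N p²).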

Greetings,

Completing my Bachelors in Engineering this June '19, I thought I'd start a Masters/PhD in Gravitational Physics this fall, but I received rejections from almost every graduate school I applied to. Where I did receive an offer, I won't be able to pay the tuition fees.

Of course, I knew that to receive an offer, one needs some experience with the subject. With the engineering curriculum on one hand, I tried to manage my interest in gravity. From watching lecture videos by Frederic Schuller and Leonard Susskind, to reading books by Sean Carroll, to even doing a summer research internship on black hole geometries, I tried to gain experience in the subject.

I wish to understand relativity from a mathematical point of view.

" A good course in more abstract algebra dealing with vector spaces, inner products/orthogonality, and that sort of thing is a must. To my knowledge this is normally taught in a second year linear algebra course and is typically kept out of first year courses. Obviously a course in differential equations is required and probably a course in partial differential equations is required as well.

The question is more about the mathematical aspect, I'd say having a course in analysis up to topological spaces is a huge plus. That way if you're curious about the more mathematical nature of manifolds, you could pick up a book like Lee and be off to the races. If you want to study anything at a level higher, say Wald, then a course in analysis including topological spaces is a must.

I'd also say a good course in classical differential geometry (2 and 3 dimensional things) is a good pre-req to build a geometrical idea of what is going on, albeit the methods used in those types of courses do not generalise. "

- Professor X

^I am looking for an opportunity to study all of this.

I would be grateful for any opportunity/guidance given.

Thanking you

PS: I really wanted to do Part III of the Mathematical Tripos from Cambridge University, but sadly my grades won't allow me to even apply :p

I am looking for a reference book on linear algebra that uses linear maps instead of matrices and uses Einstein summation notation.

Thanks in advance for your help.

Regards,

Zubair

This is an idea for probably a Master's Thesis project.

We know that we can approximate functions, suitably sampled, in a minimax sense using Linear Programming.

Where:

F is the function or series to be approximated.

A is the approximant or approximating function: a linear combination of basis functions such as a0 + a1*x +a2*x^2....

E is the error where the peaks (the maxima) are to be minimized and the result will be a series of equal peaks of generally alternating sign.

The equations look like this:

The inequalities in the linear program notation might look like this:

-E1 <= a01 + a11*x1 + a21*x1^2 ...... + aN1*x1^N - F1 <= E1

and this two-sided inequality would be rearranged to solve for the a_ij and the E_i.

This is easy enough to program except that there are a lot of data points to be carried in the linear programming formulation.

And, it's not particularly computationally efficient.

An advantage is that one can include equality constraints:

a01 + a11*x + a21*x^2 ...... + aN1*x^N - F1 = 0

Of course, this reduces the number of degrees of freedom by one (each) and may not be feasible or desired in the context of the entire objective. i.e. if F is zero everywhere but the equality constraint is 10^6 then that would be counterproductive. But, if F is zero everywhere AND the equality constraint at xij is zero then the problem definition is consistent.

The same problem can be solved with the Remez exchange algorithm, which is likely easier to implement and more computationally efficient (I believe). And there is a modified Remez exchange algorithm that allows equality constraints (G. C. Temes et al., "The Optimization of Bandlimited Systems", Proc. IEEE, vol. 61, no. 2, pp. 196-234, Feb. 1973).

So, two different methods can be used to achieve the same outcome. That suggests that there's a relationship between the two. But how do they relate in an understandable / practical mathematical context? How are they similar? How are they different? What further extensions of the Remez algorithm might be suggested with this illumination?
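The LP formulation above can be sketched concretely; this is a minimal illustration (not the thesis project itself), assuming scipy.optimize.linprog and using a quadratic minimax fit to |x| as an invented example, where the classical answer x² + 1/8 with equal error peaks of 1/8 is known:

```python
import numpy as np
from scipy.optimize import linprog

# Minimax fit of a degree-2 polynomial to F(x) = |x| on a grid:
# minimize E subject to -E <= a0 + a1*x_i + a2*x_i^2 - F_i <= E.
xs = np.linspace(-1.0, 1.0, 41)
F = np.abs(xs)
V = np.vander(xs, 3, increasing=True)       # columns: 1, x, x^2

n_coef = V.shape[1]
# Variables: [a0, a1, a2, E]; objective: minimize E
c = np.zeros(n_coef + 1); c[-1] = 1.0
ones = np.ones((len(xs), 1))
A_ub = np.vstack([np.hstack([V, -ones]),    #   V a - F <= E
                  np.hstack([-V, -ones])])  # -(V a - F) <= E
b_ub = np.concatenate([F, -F])
bounds = [(None, None)] * n_coef + [(0.0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
a_coef, E = res.x[:n_coef], res.x[-1]
```

The optimal error E comes out at 1/8 with the equioscillating error pattern the text describes; equality constraints at chosen points would be added as extra rows in an A_eq matrix, exactly as discussed above.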

Dear friends

I have a serious problem regenerating the results of the attached paper. I follow exactly the method (FEM) described in the paper, but I don't get correct results.

The paper is about gas face seals and the compressible zeroth- and first-order perturbed Reynolds equations. I solved the zeroth-order equation with FEM and FDM correctly and obtained the same results reported in the paper (opening force for various pressure ratios), and I solved the first-order equation with pressure ratio 1 (figures 5 and 6; I would be thankful if you could look at the paper), but for the other figures I cannot reproduce the paper's results. I tried many methods for solving the linear system of equations (Ax=b), such as direct methods and iterative methods (i.e. Gauss-Seidel, CGS, GMRES, BiCG, PCG, ...), but I failed; I also tried many grid resolutions, with no success.

So what should I do?

I really know my question is general, but I really don't know about the errors.

I need to calculate the eigenvector corresponding to the largest eigenvalue of an ill-conditioned matrix (kappa > 10^15). Since my goal is not to search for maxima/minima nor to calculate all eigenvectors, I'm not sure whether the ill-conditioning is an issue.

Dear all,

I need an efficient algorithm to calculate **A**^T **A** when **A** is a sparse matrix with unknown structure.

Thank you in advance for your comments.
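For what it's worth, scipy's sparse matrix classes compute A^T A directly in sparse arithmetic, without needing the structure in advance (the sparse-times-sparse product discovers the sparsity pattern of the result as it goes). A sketch with random data:

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(6)
A = sparse.random(200, 50, density=0.02, format="csr", random_state=7)

# The product stays sparse; no dense intermediate is formed:
G = A.T @ A                      # Gram matrix, (50 x 50), sparse

# Dense reference for checking on this small example only:
dense_check = A.toarray().T @ A.toarray()
```

Note that even when A is very sparse, A^T A can be much denser (its (i, j) entry is nonzero whenever columns i and j of A share a nonzero row), which is usually the real cost driver rather than the algorithm itself.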

In the theory of the stability of differential operators, one can prove stability results based on the spectrum of an operator (all eigenvalues must be negative, for example).

One problem with the above method is that not all linear operators are self-adjoint (for example, operators in convection-diffusion form), and their corresponding eigenvalue problems cannot be solved analytically; hence the spectrum of the operator cannot be calculated analytically. On the other hand, there is a notion related to the spectrum, called the pseudospectrum, that somehow evaluates the approximate spectrum, even for non-self-adjoint operators.

I want to know: is it possible to establish stability results for a differential operator based on the pseudospectrum?

Hello, I need to transform the initial reachability matrix into the final reachability matrix. I have already done some research, but I don't understand the approach. Can someone help me?

I know how to plot a 2D real vector. Let's say I have a vector 'a' represented as a row matrix, a = [1 1]. I can plot it in the xy-plane, and I will get a line with the equation x = y.

Similarly, I know how to plot a complex number. Let's say I have a complex number z = a + ib. I can plot it in the real-imaginary plane, and I will get a similar line with the equation x = y.

But I don't know how to plot a complex vector 'c' represented as a row matrix, c = [1+i 1+i].

Kindly guide me in plotting other alternate complex vectors such as [1+i 1-i], [1 i], [i 1].
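One common convention (not the only one) is to draw each complex component as its own arrow in the complex plane, so a length-2 complex vector becomes two arrows. A minimal numpy sketch preparing quiver-style data; the plotting call in the comment assumes matplotlib:

```python
import numpy as np

def complex_components_as_arrows(c):
    """Represent each complex component of c as a 2-D arrow (re, im)
    in the complex plane, one row per component."""
    c = np.asarray(c, dtype=complex)
    return np.column_stack([c.real, c.imag])

arrows = complex_components_as_arrows([1 + 1j, 1 - 1j])
# e.g. with matplotlib:
# plt.quiver([0, 0], [0, 0], arrows[:, 0], arrows[:, 1],
#            angles='xy', scale_units='xy', scale=1)
```

Under this convention, [1+i 1+i] gives two coincident arrows at 45°, [1+i 1-i] gives mirror-image arrows, and [1 i] / [i 1] give one arrow along the real axis and one along the imaginary axis; a full C^n vector (n real plus n imaginary dimensions) cannot be drawn in a single 2-D picture without some such projection.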