Science topic

# Numerical Linear Algebra - Science topic

Explore the latest questions and answers in Numerical Linear Algebra, and find Numerical Linear Algebra experts.

Questions related to Numerical Linear Algebra

I would like to know how the Tucker model of tensor decomposition works, with worked-out examples. I went through the following link, but it's too difficult to visualize what is actually happening.
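A small worked example may help: the classical way to compute a Tucker decomposition is the higher-order SVD (HOSVD), where each factor matrix comes from the SVD of a mode-n unfolding and the core tensor is the multilinear projection of the data onto those factors. A minimal numpy sketch (with full ranks, so the reconstruction is exact; truncating the ranks gives a low-rank approximation):

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move axis `mode` to the front, then flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tucker_hosvd(T, ranks):
    # Factor matrices: leading left singular vectors of each mode-n unfolding.
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    # Core tensor: G = T x_0 U0^T x_1 U1^T x_2 U2^T (mode-n products).
    G = T
    for n, Un in enumerate(U):
        G = np.moveaxis(np.tensordot(Un.T, np.moveaxis(G, n, 0), axes=1), 0, n)
    return G, U

def tucker_reconstruct(G, U):
    # Multiply the core back by the factor matrices in every mode.
    T = G
    for n, Un in enumerate(U):
        T = np.moveaxis(np.tensordot(Un, np.moveaxis(T, n, 0), axes=1), 0, n)
    return T

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))
G, U = tucker_hosvd(T, (4, 5, 6))       # full ranks: exact reconstruction
err = np.linalg.norm(T - tucker_reconstruct(G, U))
```

Printing the singular values of each unfolding shows how much each mode can be compressed, which is exactly the information the Tucker ranks encode.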

Dear Colleagues,

I would like to invite you to submit both original research and review articles to the Special Issue on "Modern Applications of Numerical Linear Algebra" organised by Mathematics (IF = 1.747), ISSN 2227-7390. For more details, see https://mdpi.com/si/74727.

The mathematics behind the inverse of large sparse matrices is very interesting and widely used in several fields. Sometimes it is required to find the inverse of such matrices; however, doing so is computationally costly. I would like to know the related research on the following question: when a single entry (or a few entries) of the original matrix is perturbed, how much are the entries of the inverse affected?
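A classical entry point is the Sherman–Morrison formula: perturbing a single entry is a rank-one update A' = A + δ·e_i e_jᵀ, and the new inverse differs from the old one by an explicit rank-one correction whose size is governed by the i-th column and j-th row of A⁻¹. A minimal dense numpy sketch (the matrix here is an arbitrary well-conditioned example):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
A = 20.0 * np.eye(n) + rng.standard_normal((n, n))  # well-conditioned test matrix
Ainv = np.linalg.inv(A)

# Perturb a single entry: A' = A + delta * e_i e_j^T  (a rank-one update).
i, j, delta = 3, 7, 0.5
u = np.zeros(n); u[i] = delta
v = np.zeros(n); v[j] = 1.0

# Sherman-Morrison:
# (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u)
Ainv_new = Ainv - np.outer(Ainv @ u, v @ Ainv) / (1.0 + v @ Ainv @ u)

# Sanity check against recomputing the inverse from scratch.
A_pert = A.copy(); A_pert[i, j] += delta
err = np.linalg.norm(Ainv_new - np.linalg.inv(A_pert))
```

The formula also answers the sensitivity question directly: the change in A⁻¹ blows up exactly when the scalar 1 + δ·(A⁻¹)_{ji} approaches zero.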

Dear Colleagues

Where can one find the following formula (see the attached picture) for computing a special tridiagonal determinant? Please give a reference in which one can find, or from which one can derive, the formula shown in the picture. Thanks a lot.

Best regards

Feng Qi (F. Qi)

How can we compute the eigenvalues of a 2x2 block matrix when each block is a square matrix?
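In general one simply assembles the full matrix and calls a standard eigensolver; closed-form shortcuts exist only for special structure, e.g. when one off-diagonal block is zero the spectrum is the union of the spectra of the diagonal blocks. A small numpy illustration of that special case (random blocks as stand-ins):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A, B, D = (rng.standard_normal((n, n)) for _ in range(3))

# General approach: assemble the 2x2 block matrix and use a dense eigensolver.
M = np.block([[A, B], [np.zeros((n, n)), D]])   # block upper triangular here
eigs_M = np.sort_complex(np.linalg.eigvals(M))

# Special case: with a zero (2,1) block, the spectrum of M is the union of the
# spectra of the diagonal blocks A and D.
eigs_blocks = np.sort_complex(np.concatenate([np.linalg.eigvals(A),
                                              np.linalg.eigvals(D)]))
err = np.max(np.abs(eigs_M - eigs_blocks))
```

Other useful special cases: if all four blocks commute pairwise, det(M) = det(AD − BC), which reduces the eigenvalue problem to one of half the size.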

What are the advantages and disadvantages of matching pursuit algorithms for sparse approximation? And are there alternative methods that perform better than matching pursuit?
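For context, the most common variant is Orthogonal Matching Pursuit (OMP): greedy and fast, but with weaker recovery guarantees than convex ℓ1 methods such as basis pursuit. A minimal numpy sketch of OMP (the dictionary and signal are synthetic examples):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily pick k atoms of Phi to approximate y."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Re-fit coefficients on the whole chosen support by least squares
        # (this re-fitting is what distinguishes OMP from plain MP).
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
m, n, k = 40, 100, 4
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)          # unit-norm atoms
x_true = np.zeros(n); x_true[[5, 17, 42, 90]] = [1.0, -2.0, 1.5, 0.7]
y = Phi @ x_true
x_hat = omp(Phi, y, k)
```

The trade-off in one line: OMP costs roughly k least-squares solves, while basis pursuit solves a linear program; the latter is slower but succeeds on more coherent dictionaries.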

Recently I have come across two different equations for the estimation of Remote Sensing Reflectance (Rrs), i.e.

1 - Rrs equation by Mobley, (1999) is

**Rrs(λ) = (Lu(λ) - ρLsky(λ)) / Ed(λ)**

2 - Rrs equation mentioned in the articles by Dorji et al. (2016 and 2017) is

**Rrs(λ) = (Lu(λ) x ρLsky(λ)) / Ed(λ)**

I am confused about the numerator of this equation: are the terms Lu(λ) and ρLsky(λ) being multiplied with each other, or subtracted? In Mobley's 1999 article, the term ρLsky(λ) is subtracted from Lu(λ), i.e. Rrs(λ) = (Lu(λ) - ρLsky(λ)) / Ed(λ).

Looking forward to your advice.

What is the name in the functional analysis literature for the identities (2) and (3) below, where F⁻¹ denotes the inverse function and

(1) F : [0,1] → [0,1] is strictly monotonically increasing, with F(0) = 0, F(1/2) = 1/2 and F(1) = 1;

(2) ∀x ∈ dom(F): F(1-x) + F(x) = 1;

(3) ∀p ∈ codom(F): F⁻¹(1-p) + F⁻¹(p) = 1.

These are the equality cases of the following biconditionals (biconditional because they apply to the inverse function as well), expressed as

∀x, y ∈ dom(F) = [0,1]: [x + y = 1] ⟺ [F(x) + F(y) = 1],

∀p, p1 ∈ Im(F) ⊆ [0,1]: [p + p1 = 1] ⟺ [F⁻¹(p) + F⁻¹(p1) = 1],

where F⁻¹ is the inverse function, so that F⁻¹(p) and F⁻¹(1-p) are elements of dom(F) = [0,1].

See the attached paper, 'Order Indifference and Rank Dependent Probabilities', around page 392; this is the biconditional form of what Segal calls a symmetric probability transformation function.

I presume that if, in addition, F satisfies

(4) ∀x ∈ [0,1] = dom(F): F(x/2) = F(x)/2,

then such a function must be the identity function: F(x) = x holds for all dyadic rationals (and some other rationals), and since F is strictly monotonically increasing and agrees with the identity on a dense set, F(x) = x everywhere.

Given midpoint convexity at 1 and 0, I presume that if, in addition,

(@1) ∀x ∈ [0,1] = dom(F): F(x/2) ≤ F(x)/2,

(@0) ∀x ∈ [0,1] = dom(F): F(1/2 + x/2) ≤ 1/2 + F(x)/2,

then these inequalities collapse into the equalities

F(x/2) = F(x)/2,

F(1/2 + x/2) = 1/2 + F(x)/2,

given the symmetry equation (2), F(1-x) + F(x) = 1, and (1) F : [0,1] → [0,1] with F(0) = 0 (from which F(1/2) = 1/2 and F(1) = 1 follow). It then follows that F(x) = x for all dyadic rationals in [0,1] (with F(1) = 1, F(0) = 0 and F strictly monotonically increasing, as above) and for some other rationals, and F becomes odd at all dyadic points in [0,1].

I am not sure whether (3) is required, but it should be implied by injectivity (F is strictly monotonically increasing) and (2). In any case, I presume F collapses into F(x) = x.

What is the general form of a function that merely satisfies F(1) = 1, F(0) = 0, F(1/2) = 1/2, is strictly monotonically increasing and continuous, and satisfies the inequalities

(@1) ∀x ∈ [0,1] = dom(F): F(x/2) ≤ F(x)/2,

(@0) ∀x ∈ [0,1] = dom(F): F(1/2 + x/2) ≤ 1/2 + F(x)/2?

The function also satisfies

(5) ∀x, y ∈ dom(F): x + y > 1 ⟺ F(x) + F(y) > 1,

∀x, y ∈ dom(F): x + y < 1 ⟺ F(x) + F(y) < 1,

∀p, p1 ∈ codom(F): p + p1 > 1 ⟺ F⁻¹(p) + F⁻¹(p1) > 1,

∀p, p1 ∈ codom(F): p + p1 < 1 ⟺ F⁻¹(p) + F⁻¹(p1) < 1.

I am working on meshless methods using radial basis function approximation. As is known, the system of linear equations that arises is severely ill-conditioned. What approach would you suggest for solving this problem?
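The ill-conditioning is easy to reproduce: for Gaussian RBFs, flattening the basis functions (decreasing the shape parameter) improves accuracy but makes the interpolation matrix rapidly ill-conditioned. A small numpy demonstration (1-D equispaced nodes, my own toy setup):

```python
import numpy as np

# Gaussian RBF interpolation matrix on n points in [0, 1]; watch the condition
# number grow as the shape parameter eps shrinks (flatter basis functions).
def rbf_matrix(x, eps):
    r = np.abs(x[:, None] - x[None, :])
    return np.exp(-(eps * r) ** 2)

x = np.linspace(0.0, 1.0, 20)
conds = {eps: np.linalg.cond(rbf_matrix(x, eps)) for eps in (8.0, 4.0, 2.0, 1.0)}
```

Standard remedies include a small diagonal regularization (trading interpolation for smoothing), stable evaluation algorithms such as RBF-QR, and domain decomposition / localized stencils (RBF-FD) so that only small, better-conditioned systems are solved.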

Recently I found that a one-dimensional Hermitian operator has an equivalent complex operator under iso-spectral behaviour. In fact, I am searching for a Hermitian operator which does not have an equivalent complex operator.

Recently, I have read some references applying proper orthogonal decomposition to solve unsteady aerodynamic problems. I'm very interested in this topic and really want to learn more about it. For now, I'm not sure how this method works, so, can anyone who is familiar with this topic give me some advice about starting in this field? Or some recommendations for related materials (books and references that may contain necessary mathematics background)? Thanks a lot!

PS: I'm familiar with linear algebra and basic matrix analysis, but I don't know much about control theory since I majored in aerodynamics.
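Since you already know linear algebra, a good starting point is that POD in its basic form is just an SVD of a snapshot matrix: columns are flow-field snapshots, left singular vectors are the POD modes, and squared singular values rank their energy content. A self-contained Python sketch on synthetic data (not a real aerodynamic solver):

```python
import numpy as np

# POD of a snapshot matrix: columns are flow snapshots at different times.
rng = np.random.default_rng(4)
n_space, n_time = 500, 60
t = np.linspace(0.0, 2.0 * np.pi, n_time)
x = np.linspace(0.0, 1.0, n_space)
# Synthetic data: two coherent spatial "modes" plus small noise.
snapshots = (np.outer(np.sin(2 * np.pi * x), np.cos(t))
             + 0.3 * np.outer(np.cos(4 * np.pi * x), np.sin(2 * t))
             + 1e-3 * rng.standard_normal((n_space, n_time)))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99) + 1)   # modes needed for 99% of the energy
reconstruction = U[:, :r] * s[:r] @ Vt[:r]
err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
```

For unsteady aerodynamics one then projects the governing equations onto the leading modes to get a small reduced-order model; the control-theory background mostly matters later, for related methods such as balanced POD and DMD.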

I need to calculate the desired output weights (Wout) which are the linear regression weights of the desired outputs d(n) (a vector) on the input states vector x(n). How can I get binary weights? Is there any mathematical theory to address this problem?

Please can someone give me a reference on Pisot-Dufresnoy-Boyd Algorithm.

Key words: Pisot number

Consider that X is an unknown matrix and A is a known one. I have to solve the following equation: A = (XX^T) / λ_max(XX^T). One can normally use the SVD decomposition of A, but the presence of λ_max(XX^T) in the equation makes it more complicated to factorize. Recall that λ_max(XX^T) denotes the maximum eigenvalue of the matrix XX^T. Does anyone have any idea how to solve this in order to find X, please? Thanks.

Dear Colleagues, see this video on the diagonalisation of a 3x3 matrix.

My question exactly is about the method used at the minute 5:00 to 5:21 to find the roots of the polynomial.

This is the link of the video:

Recently, I obtained a linear system, $Ax = b$, where $A$ is a nonsingular, strictly diagonally dominant $M$-matrix. I also have a matrix splitting $A = S - T$, where $S$ is also a nonsingular, strictly diagonally dominant $M$-matrix. So I set up the following stationary iteration scheme:

$$Sx^{(k+1)} = Tx^{(k)} + b.$$

According to numerical results, this iterative scheme always seems to converge. Is this supported by theoretical analysis? That is, can we prove the convergence of this iterative scheme by showing $\rho(S^{-1}T) < 1$, where $\rho(\cdot)$ is the spectral radius?

Please also refer to the URL for details.
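For reference, this matches classical regular-splitting theory (e.g. Varga): if $A$ is a nonsingular M-matrix and $A = S - T$ with $S^{-1} \ge 0$ and $T \ge 0$, then $\rho(S^{-1}T) < 1$. Note that $S$ being a strictly diagonally dominant M-matrix does not by itself force $T \ge 0$, so the observed convergence may rely on extra structure in your splitting. A quick Python check of the spectral radius (the matrices below are my own example, chosen so the splitting is regular):

```python
import numpy as np

# Strictly diagonally dominant M-matrix A (positive diagonal, nonpositive
# off-diagonal entries), split as A = S - T with S its tridiagonal part.
n = 50
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) - 0.5 * np.eye(n, k=2)
S = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # also an M-matrix
T = S - A                                                 # T >= 0: regular splitting

# The spectral radius of the iteration matrix S^{-1} T decides convergence.
rho = np.max(np.abs(np.linalg.eigvals(np.linalg.solve(S, T))))
```

Running this kind of check on your actual $S$ and $T$ (and verifying $T \ge 0$ entrywise) tells you whether the regular-splitting theorem applies directly.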

I need to find the column-reduced form of a matrix. Are there any easy methods, or free online books or PDFs where I can find examples?

Hi everyone.

I have developed an R package named eemR (https://github.com/PMassicotte/eemR and https://cran.r-project.org/web/packages/eemR/index.html) which aims at providing an easy way to manipulate fluorescence matrices.

One of the functions in the package is used to extract peak values at different regions of the fluorescence matrix (Coble's peaks, for instance). I have noticed that the reported locations of these peaks are not consistent in the literature.

In Coble's 1996 paper, peaks are reported as follows (these are the values I am using in the R package):

Coble, P. G. (1996). Characterization of marine and terrestrial DOM in seawater using excitation-emission matrix spectroscopy. Mar. Chem. 51, 325–346. doi:10.1016/0304-4203(95)00062-3.

Peak B: ex = 275 nm, em = 310 nm

Peak T: ex = 275 nm, em = 340 nm

Peak A: ex = 260 nm, em = 380:460 nm

Peak M: ex = 312 nm, em = 380:420 nm

peak C: ex = 350 nm, em = 420:480 nm

In Coble's 2007 paper, peaks are reported as follows:

Coble, P. G. (2007). Marine optical biogeochemistry: The chemistry of ocean color. Chem. Rev. 107, 402–418. doi:10.1021/cr050350+.

Peak B: ex = 275 nm, em = 305 nm

Peak T: ex = 275 nm, em = 340 nm

Peak A: ex = 260 nm, em = 260/400 - 460 nm

Peak M: ex = 290 nm, em = 310/370 - 410 nm

peak C: ex = 320 nm, em = 420:460 nm

On the USGS website (http://or.water.usgs.gov/proj/carbon/EEMS.html):

Peak B: ex = 270 nm, em = 306 nm

Peak T: ex = 270 nm, em = 340 nm

Peak A: ex = 260 nm, em = 450 nm

Peak M: ex = 300 nm, em = 390 nm

peak C: ex = 340 nm, em = 440 nm

At first sight these differences seem minor, but I was wondering what your thoughts on this are. Should I review my code to change or adjust the peak positions?

I have developed a new crossover for 0-1 matrices and need to check its efficiency.

Suppose that **G** is a finite group, **P** is a Sylow p-subgroup of **G**, **H/K** is a chief factor of **G**, and **T/K** is a Sylow p-subgroup of **H/K**. Is it correct to say that **c(T/K)** is less than or equal to **c(P)**, where **c(P)** denotes the nilpotency class of **P**?

I need to solve the problem shown below, but I do not know which optimization libraries can solve it:

argmin Σ_{i} (x_{i}^{T} A x_{i} - b^{T} x_{i}) subject to Ab = **0**

The optimization targets are a 3x3 matrix A and a 3D vector b, given the 3D vectors x_i, where i = 1, 2, ..., N. A clear view of the formula is attached.

I need a C/C++ optimization library, hopefully opensource, light and fast. If you have any idea, help me.

I would appreciate more if you let me know the related class or method in the library. Thank you!
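If Python is acceptable for prototyping before committing to a C/C++ library, the problem can be posed with `scipy.optimize.minimize` and an equality constraint. Two assumptions on my part, since the attachment is not visible here: I take the summand to be squared (as written, a linear objective is unbounded below), and note that without a normalization constraint the trivial solution A = 0, b = 0 is feasible, so in practice you would add one. A hedged sketch:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
X = rng.standard_normal((50, 3))          # stand-ins for the given vectors x_i

def unpack(p):
    return p[:9].reshape(3, 3), p[9:]     # 9 entries of A, then 3 entries of b

def objective(p):
    A, b = unpack(p)
    # Assumed residual form: sum_i (x_i^T A x_i - b^T x_i)^2
    r = np.einsum('ij,jk,ik->i', X, A, X) - X @ b
    return float(r @ r)

def constraint(p):
    A, b = unpack(p)
    return A @ b                           # enforce A b = 0 (3 equalities)

p0 = np.concatenate([np.eye(3).ravel(), np.ones(3)])
res = minimize(objective, p0, method='SLSQP',
               constraints=[{'type': 'eq', 'fun': constraint}])
A_opt, b_opt = unpack(res.x)
```

On the C++ side, the same structure (nonlinear least squares with an equality constraint) is the kind of problem typically handled with Ceres Solver or NLopt.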

For a general parameter (**t**)-dependent *m* x *m* matrix **M(t)**, can one always diagonalize it by evaluating **D(t) = U(t)^{-1} M(t) U(t)**, where **D(t)** is a diagonal matrix? Under what conditions can I numerically evaluate **U(t)** such that the parametric dependence is retained, either exactly or to a very good approximation? I would also like to know some standard algorithms or approximations. In case I am conceptually not right about thinking this way, please point out the error.

Is there some package for Fortran with the backslash operator, or at least a good solver for large systems of equations? I read in some forums about LAPACK, but I couldn't find a Windows tutorial for its installation. Does anyone have expertise regarding the efficiency of these packages?

Let **AX = B** be a linear system, where **A** is a square matrix of large order (20x20 or more) and **X, B** are column matrices (X is the matrix of unknowns). What's the easiest way to solve this system?

By using the formula in the attachment, we calculate the angle between two complex vectors. It turns out that the cosine of the angle between two complex vectors is complex. But the cosine of an angle should not be complex. Could you tell me how to understand this problem?
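On the complex-angle puzzle: the Hermitian inner product ⟨u, v⟩ = uᴴv is complex in general, so plugging it directly into the real-vector cosine formula gives a complex "cosine". The usual fix is to define a real angle from its modulus (or real part), which by the Cauchy–Schwarz inequality always lies in [0, 1]. A short numpy illustration (random vectors as an example):

```python
import numpy as np

rng = np.random.default_rng(6)
u = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)

inner = np.vdot(u, v)   # Hermitian inner product u^H v: complex in general

# Real "Hermitian angle": use the modulus of the inner product.
cos_hermitian = np.abs(inner) / (np.linalg.norm(u) * np.linalg.norm(v))
angle = np.arccos(np.clip(cos_hermitian, -1.0, 1.0))   # real, in [0, pi/2]
```

Using `inner.real` instead of `np.abs(inner)` gives the alternative "Euclidean" angle, which treats ℂⁿ as ℝ²ⁿ; both conventions appear in the literature, which is likely the source of the confusion.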

Dear masters

please please and again please

Can the Householder method be applied when our vector is not numeric? Generally, for a given vector V(t) = (v_{1}(t) v_{2}(t) . . . v_{n}(t))^{T}, can we find two non-singular matrices P(t) and Q(t) such that P(t) V(t) Q(t) = (a(t) 0 0 . . . 0)^{T}?

Recently I came across a matrix equation, given as follows (using MATLAB notation):

(kron(A,B) + kron(C,D))* x = f; (linear systems)

where A is a complex symmetric (square) matrix, B, C, D are all real non-symmetric (square) matrices, and f is a real vector.

But we can reform it as

B*X*A^T + D*X*C^T = F, F = reshape(f, n, m);

Are there some good suggestions for this kind of matrix equations?

Iterative solvers and suitable preconditioning?

I need a random matrix with preassigned correlation for Monte Carlo simulation
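One standard recipe: factor the target correlation matrix with a Cholesky decomposition and multiply independent standard-normal samples by the factor. A numpy sketch (the target matrix below is an arbitrary positive-definite example):

```python
import numpy as np

rng = np.random.default_rng(8)
n_samples = 100_000

# Target correlation matrix (must be symmetric positive definite).
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

L = np.linalg.cholesky(R)                  # R = L L^T
Z = rng.standard_normal((n_samples, 3))    # independent standard normals
X = Z @ L.T                                # rows now have correlation ~ R

R_hat = np.corrcoef(X, rowvar=False)       # empirical check
err = np.max(np.abs(R_hat - R))
```

If the preassigned matrix is only positive semi-definite (or slightly indefinite from rounding), replace the Cholesky factor with an eigendecomposition-based square root, clipping tiny negative eigenvalues to zero.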

I have a large sparse matrix in MATLAB and I need the complement of this matrix.

What should I do?

Note that because of an out-of-memory error I cannot use D = 1 - S,

where S is the original matrix and D is its complement.

Any suggestion/resources are appreciated.

Thank you so much.

Good morning,

I read the article titled 'SINRD Circuits Analysis with WCIP', which says that the equation system (10) can be solved with the GMRES algorithm.

To use GMRES, we must put the system in the form of equation (11), which I have already done; that is, the system has the form A*x = b.

My question is: how can we write equation (11) in MATLAB so as to use GMRES?

Thank you.
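Once the system is in the form A*x = b, calling GMRES is essentially one line: in MATLAB, `x = gmres(A, b);` (with optional restart, tolerance and iteration arguments). For comparison, a minimal SciPy sketch of the same pattern (the tridiagonal matrix here is a stand-in, not the WCIP system):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Build a sparse test system A*x = b (stand-in for equation (11)).
n = 100
A = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)

# GMRES returns the solution and a convergence flag (info == 0 on success).
x, info = gmres(A, b)
residual = np.linalg.norm(A @ x - b)
```

The practical point either way: A need only be something that can multiply a vector, so storing it sparse (or as a function handle / LinearOperator) is what makes GMRES pay off.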

Based on the Jacobian method, how can I calculate the basis vector, ε, if I have θ of the ankle and knee joints in time series? My matrix would be 10 rows (time series) of X vectors for two joints (2 columns):

Jhip(θ)(t)⋅ ε//(t) = 0

I know there is a MATLAB function (null()), but it gives me nothing.
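One likely reason null() returns an empty matrix is that measured joint-angle data makes the Jacobian numerically full rank: the "null direction" is hidden in a small but nonzero singular value. Computing the null space from the SVD with an explicit tolerance makes this visible. A Python sketch of the idea (the matrix below is a toy example, not your Jacobian):

```python
import numpy as np

def nullspace(J, rtol=1e-8):
    """Numerical null space of J via SVD: keep right singular vectors whose
    singular values fall below rtol times the largest singular value."""
    U, s, Vt = np.linalg.svd(J)
    tol = rtol * (s[0] if s.size else 0.0)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T            # columns span the (numerical) null space

# Rank-deficient example: the second column is twice the first.
J = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
N = nullspace(J)
```

With noisy data, inspect the singular values directly: if the smallest one is small relative to the largest, the corresponding row of Vt is your approximate null (UCM) direction, and loosening `rtol` recovers it even when the matrix is technically full rank.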

I wrote a MATLAB finite-difference code that solves the 3-D Laplace equation for a large, complex geometry. Currently, I am using MATLAB's preconditioned conjugate gradient function -- which runs on one core only -- to iteratively solve the system of equations. The code takes several hours to solve with 64 million degrees of freedom. Eventually, I would like to increase the size of my domain, so a parallel implementation of the solver would be ideal, possibly one that runs on a cluster. Does anyone know of a free parallelized conjugate gradient code (written in any language) that I can wrap into my MATLAB code?

a c 0 … 0 1
b a c 0 … 0
0 b a c … 0
⋮ ⋱ ⋱ ⋱ ⋮
0 … 0 b a c
1 0 … 0 b a

I am interested in the numerical solution of convection-diffusion problems where the convection dominates. In the iterative solution with Gauss-Seidel, instabilities can occur for large Peclet numbers. Is the "downwind numbering" as in the paper by Bey & Wittum a possible solution for this problem on structured grids and with variable coefficients (advection speeds)? Unfortunately, I don't have access to the paper.

Any suggestion/resources are appreciated.

Hi, guys. I have a problem in my recent research. It can be sketched as follows:

How can one theoretically choose a positive \alpha such that the matrix D = B*B^T/\alpha + B*inv(A)*B^T is nonsingular or well-conditioned? Here B is an m-by-n matrix (n >= m) and A is an n-by-n nonsingular matrix.

(* complete dictionary of words, one per row, number of rows is (alpha^word), using words of length "word" and an alphabet of "alpha" number of characters *)

alpha = 4; word = 3; dict =.;

dict = Partition[Flatten[Tuples[Reverse[IdentityMatrix[alpha]], word]], (alpha*word)];

PseudoInverse[dict] == ((Transpose[dict])*((alpha)^-(word - 1))) - ((word - 1)/(alpha^word)/word)

Output = True

An equation editor format is here if you can't read Mathematica:

In the literature, for real vectors, the relationship between higher-order cumulants and moments can be found.

How does this relationship look for complex vectors?

For example, how does E[y1* y2* y3* y4 y5 y6] (where * denotes complex conjugation) relate to the higher-order cumulants? Is there a general formula for this?

I want to solve a system with fatty acids information from its protonic matrix.

BiCG-type methods include BiCG, BiCGSTAB, CGS, QMR, BiCGSTAB2, BiCGSTAB(l),GPBiCG, IDR, etc.

In many practical problems, we obtain a large sparse matrix. Can we compute the determinant of this matrix efficiently?

I want to compare it with a numerical method that I'm trying to develop.
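One practical route: a sparse LU factorization gives the determinant as the product of the diagonal of U, corrected by the signs of the row and column permutations; for large matrices one works with log|det| to avoid overflow. A SciPy sketch (the tridiagonal matrix is a stand-in example, checked against a dense computation):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

# From Pr A Pc = L U (L unit-diagonal), det(A) = sign(Pr) * sign(Pc) * prod(diag(U)).
def perm_sign(p):
    """Sign of a permutation given as an index array, via cycle counting."""
    p = np.asarray(p)
    seen = np.zeros(p.size, dtype=bool)
    sign = 1
    for i in range(p.size):
        if not seen[i]:
            j, cycle_len = i, 0
            while not seen[j]:
                seen[j] = True
                j = p[j]
                cycle_len += 1
            if cycle_len % 2 == 0:     # even-length cycle flips the sign
                sign = -sign
    return sign

n = 200
A = diags([-1.0, 2.1, -1.0], offsets=[-1, 0, 1], shape=(n, n), format='csc')
lu = splu(A)
diag_u = lu.U.diagonal()
logabsdet = np.sum(np.log(np.abs(diag_u)))          # log|det(A)|, overflow-safe
sign = (perm_sign(lu.perm_r) * perm_sign(lu.perm_c)
        * int(np.prod(np.sign(diag_u))))

# Dense reference for validation (only feasible at this small size).
sign_dense, logdet_dense = np.linalg.slogdet(A.toarray())
```

The cost is that of one sparse LU, so feasibility depends on fill-in; for symmetric positive definite matrices a sparse Cholesky factorization gives log-det the same way at lower cost.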

c = all(b >= -100*eps);

b is a matrix I already have.

What can you say about matrix c?

2.2204e-16 is the value of eps that the command window shows.

Could anyone point me towards publications describing the state of the art in block-GMRES (or other block-Krylov) solvers?

Is the Smith normal form possible for complex matrices? If not, is it possible for some types of complex matrices?