
# Numerical Linear Algebra - Science topic

Explore the latest questions and answers in Numerical Linear Algebra, and find Numerical Linear Algebra experts.
Questions related to Numerical Linear Algebra
• asked a question related to Numerical Linear Algebra
Question
I would like to know how the Tucker model of tensor decomposition works, ideally with worked-out examples. I went through the following link, but it is too difficult to visualize what is actually happening.
It is better to clarify your thoughts first; otherwise it could lead to a misconception.
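Since the question asks for a worked example: below is a minimal numpy sketch of the Tucker model computed via the (truncated) higher-order SVD, which is the standard way to initialise a Tucker decomposition. The tensor, shapes, and ranks are made-up illustrations; libraries such as TensorLy provide production implementations.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, U, mode):
    """Multiply tensor T by matrix U along the given mode."""
    moved = np.moveaxis(T, mode, 0)            # bring `mode` to the front
    out = np.tensordot(U, moved, axes=(1, 0))  # contract over that axis
    return np.moveaxis(out, 0, mode)

def hosvd(T, ranks):
    """Tucker model via truncated higher-order SVD: each factor matrix holds
    the leading left singular vectors of the corresponding unfolding; the
    core is T projected onto all factor subspaces."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_dot(core, U.T, m)
    return core, factors

def tucker_to_tensor(core, factors):
    """Rebuild the (approximate) tensor from core and factor matrices."""
    T = core
    for m, U in enumerate(factors):
        T = mode_dot(T, U, m)
    return T

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))
core, factors = hosvd(T, (4, 5, 6))   # full ranks -> exact reconstruction
print(np.allclose(tucker_to_tensor(core, factors), T))  # True
```

With smaller ranks, e.g. `hosvd(T, (2, 2, 2))`, the same code returns a compressed core and the reconstruction becomes a low-multilinear-rank approximation.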
• asked a question related to Numerical Linear Algebra
Question
Dear Colleagues,
I would like to invite you to submit both original research and review articles to the Special Issue on "Modern Applications of Numerical Linear Algebra" organised by Mathematics (IF = 1.747, ISSN 2227-7390). For more details see https://mdpi.com/si/74727.
I hope the review process will be fast. How many weeks does it take for an accepted paper to be indexed in Web of Science?
• asked a question related to Numerical Linear Algebra
Question
The mathematics behind the inverse of large sparse matrices is very interesting and widely used in several fields. Sometimes it is required to find the inverse of such matrices; however, doing so is computationally costly. I want to know, in terms of related research: when a single entry (or a few entries) of the original matrix is perturbed, how much does it affect the entries of the inverse of the matrix?
A standard trick in these cases is to use the Sherman–Morrison formula.
However, the inverse of a sparse matrix need not be sparse, and in particular one does not want to store inverses of large sparse matrices. The formula should therefore be applied to the action A^-1 b of the inverse on the right-hand side of the linear system, so as to correct the solution of the original linear system with a hopefully limited number of operations.
Please note that this is a very generic comment; I am sure somebody in the sparse-solver community has studied the problem in much greater depth.
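A SciPy sketch of that correction for a hypothetical single-entry perturbation of a sparse tridiagonal matrix: the factorisation of A is reused, and the perturbed solution comes from one extra solve rather than refactoring or forming any inverse.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)
lu = spla.splu(A)          # factor A once
x = lu.solve(b)            # solution of the original system

# Perturb a single entry: A'[i, j] = A[i, j] + delta, i.e. A' = A + delta*e_i*e_j^T
i, j, delta = 7, 3, 0.5
u = np.zeros(n); u[i] = delta
v = np.zeros(n); v[j] = 1.0

# Sherman-Morrison applied to the action on b:
# A'^{-1} b = x - (A^{-1} u) (v^T x) / (1 + v^T A^{-1} u)
w = lu.solve(u)
x_new = x - w * (v @ x) / (1.0 + v @ w)

# Check against a direct solve with the perturbed matrix
A_new = A.tolil(); A_new[i, j] += delta
print(np.allclose(x_new, spla.spsolve(A_new.tocsc(), b)))  # True
```

For several perturbed entries the same idea applies with the block Sherman–Morrison–Woodbury formula.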
• asked a question related to Numerical Linear Algebra
Question
Dear Colleagues
Where can one find the following formula (see the attached picture) for computing a special tridiagonal determinant? Please give a reference in which one can find, or from which one can derive, the formula shown in the picture. Thanks a lot.
Best regards
Feng Qi (F. Qi)
The following formally published papers are related to this question:
[1] Feng Qi, Viera Cernanova, and Yuri S. Semenov, Some tridiagonal determinants related to central Delannoy numbers, the Chebyshev polynomials, and the Fibonacci polynomials, University Politehnica of Bucharest Scientific Bulletin Series A---Applied Mathematics and Physics 81 (2019), no. 1, 123--136.
[2] Feng Qi and Ai-Qi Liu, Alternative proofs of some formulas for two tridiagonal determinants, Acta Universitatis Sapientiae Mathematica 10 (2018), no. 2, 287--297; available online at https://doi.org/10.2478/ausm-2018-0022
[3] Feng Qi, Wen Wang, Dongkyu Lim, and Bai-Ni Guo, Several explicit and recurrent formulas for determinants of tridiagonal matrices via generalized continued fractions, Nonlinear Analysis: Problems, Applications and Computational Methods, Editors: Zakia Hammouch, Hemen Dutta, Said Melliani, Michael Ruzhansky; Springer Book Series Lecture Notes in Networks and Systems. The 6th International Congress of the Moroccan Society of Applied Mathematics (SM2A 2019) organized by Sultan Moulay Slimane University, Faculte des sciences et techniques, BP 523, Beni-Mellal, Morocco, during 7-9 November, 2019.
• asked a question related to Numerical Linear Algebra
Question
How can we compute the eigenvalues of a 2×2 block matrix when each block is a square matrix?
First, you need to understand the method; then you can use Wolfram Mathematica, Maple, MATLAB, or any other software.
For block matrices, follow Silvester's method, as shown in the attached article.
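When the blocks all commute (the setting of what is presumably J. R. Silvester's note on block determinants), the 2m×2m eigenproblem reduces to m scalar 2×2 problems. A numpy sketch under that assumption, with blocks chosen as polynomials in a single matrix so that they commute by construction:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
M = M + M.T                      # symmetric, hence orthogonally diagonalizable

# Four blocks chosen as polynomials in M, so they commute pairwise
I = np.eye(3)
A, B, C, D = M, M @ M, 2.0 * M, M + I

# Route 1: assemble the full 6x6 matrix and use a dense eigensolver
block = np.block([[A, B], [C, D]])
direct = np.linalg.eigvals(block)

# Route 2 (commuting blocks): for each eigenvalue mu of M, the block matrix
# contributes the two eigenvalues of the scalar 2x2 matrix
# [[mu, mu^2], [2*mu, mu + 1]]
small = []
for mu in np.linalg.eigvalsh(M):
    small.extend(np.linalg.eigvals(np.array([[mu, mu**2], [2 * mu, mu + 1]])))

ok = all(np.abs(direct - lam).min() < 1e-8 for lam in small)
print(ok)
```

For non-commuting blocks no such reduction exists in general, and the safe route is the dense solver on the assembled matrix.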
• asked a question related to Numerical Linear Algebra
Question
What are the advantages and disadvantages of matching pursuit algorithms for sparse approximation? And are there alternative methods better than matching pursuit?
The advantages of the OMP and MP algorithms for direction-of-arrival (DOA) estimation:
Applying BS algorithms to a DOA problem enhances resolution and decreases complexity. Moreover, knowledge of the number of signal sources is not required in these algorithms. In addition, they do not need any post-processing to converge to the ML solution, since the output of these algorithms is directly the DOAs. The ML algorithm compares all feasible directions and then selects the most likely one; BS algorithms, on the other hand, compare only some of the angles and select among them in a smart way. Hence, BS algorithms are much more computationally efficient at approaching the ML solution than other DOA-estimation algorithms such as MUSIC and ESPRIT. Moreover, BS algorithms converge to the ML solution even when the SNR is low, whereas other approaches converge at high SNRs only. In addition, in other methods for DOA estimation the number of estimated DOAs is limited by the number of antennas, while BS-based DOA estimation methods can estimate more DOAs than the number of antennas. Among BS methods, the OMP algorithm provides slightly better performance than the MP algorithm, at moderately higher computational complexity.
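For concreteness, a minimal OMP sketch on a made-up random dictionary (sizes, seed, and sparsity are illustrative only; a production implementation exists in scikit-learn's OrthogonalMatchingPursuit):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily pick the dictionary column most
    correlated with the current residual, then re-fit all selected
    coefficients by least squares (the orthogonal projection step)."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

# Assumed toy setup: random unit-norm dictionary and a 3-sparse signal
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256))
Phi /= np.linalg.norm(Phi, axis=0)
x_true = np.zeros(256)
x_true[[5, 90, 170]] = [1.0, -2.0, 0.5]
y = Phi @ x_true
x_hat = omp(Phi, y, k=3)
print("residual norm:", np.linalg.norm(y - Phi @ x_hat))
```

Plain MP differs only in skipping the least-squares re-fit: it updates a single coefficient per iteration, which is cheaper but, as noted above, slightly less accurate.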
• asked a question related to Numerical Linear Algebra
Question
Recently I came across two different equations for the estimation of remote sensing reflectance (Rrs), i.e.
1 - The Rrs equation by Mobley (1999) is
Rrs(λ) = (Lu(λ) - ρLsky(λ)) / Ed(λ)
2 - The Rrs equation mentioned in the articles by Dorji et al. (2016 and 2017) is
Rrs(λ) = Lu(λ) x ρLsky(λ) / Ed(λ)
I am confused about the numerator of this equation: are the terms Lu(λ) and ρLsky(λ) being multiplied with each other or subtracted? In the Mobley (1999) article the term ρLsky(λ) is subtracted from Lu(λ), i.e. Rrs(λ) = (Lu(λ) - ρLsky(λ)) / Ed(λ).
Felix Seidel is correct.
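In case the parenthesisation is part of the confusion: in Mobley (1999) the sky-glint term is subtracted and the whole numerator is divided by Ed(λ). A toy computation with hypothetical (made-up) radiometric values:

```python
# Hypothetical readings at one wavelength: Lu and Lsky are radiances,
# Ed is downwelling irradiance, rho is the sea-surface reflectance factor
# (~0.028 for typical viewing geometry per Mobley 1999).
Lu, Lsky, Ed, rho = 0.012, 0.10, 1.5, 0.028

# Mobley (1999): subtract the reflected-sky term, then divide by Ed.
# Note the parentheses around the whole numerator.
Rrs = (Lu - rho * Lsky) / Ed
print(round(Rrs, 6))  # 0.006133
```

Written without parentheses, `Lu - rho*Lsky/Ed` would divide only the sky term by Ed, which is yet another (wrong) reading of the same line.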
• asked a question related to Numerical Linear Algebra
Question
What are the names, in the functional analysis literature, for identities (2) and (3) below? (F^-1 denotes the inverse function.)
(1) F: [0,1] -> [0,1] is strictly monotonically increasing, with F(0) = 0, F(1/2) = 1/2 and F(1) = 1;
(2) \forall x \in dom(F): F(1-x) + F(x) = 1;
(3) \forall p \in codom(F): F^-1(1-p) + F^-1(p) = 1.
These are the equality cases of the biconditional forms
\forall x, y \in dom(F) = [0,1]: x + y = 1 iff F(x) + F(y) = 1,
\forall p, p1 \in im(F) \subseteq [0,1]: p + p1 = 1 iff F^-1(p) + F^-1(p1) = 1,
where F^-1(p) and F^-1(1-p) are elements of dom(F) = [0,1].
See the attached paper, 'Order indifference and rank dependent probabilities', around page 392; this is the biconditional form of what Segal calls a symmetric probability transformation function.
I presume that if, in addition, F satisfies
(4) \forall x \in [0,1] = dom(F): F(x/2) = F(x)/2,
then F is the identity function: F(x) = x at all dyadic rationals (and some other rationals), and since F is strictly monotonically increasing and agrees with the identity on a dense set, F(x) = x everywhere.
I presume, moreover, that if F satisfies only the midpoint inequalities at 1 and 0,
(@1) \forall x \in [0,1] = dom(F): F(x/2) <= F(x)/2,
(@0) \forall x \in [0,1] = dom(F): F(1/2 + x/2) <= 1/2 + F(x)/2,
then, given the symmetry equation (2) F(1-x) + F(x) = 1 and (1) F: [0,1] -> [0,1] with F(0) = 0 (from which F(1/2) = 1/2 and F(1) = 1 follow), these inequalities collapse into the equalities
F(x/2) = F(x)/2,
F(1/2 + x/2) = 1/2 + F(x)/2,
and it again follows that F(x) = x at all dyadic rationals in [0,1], with F odd (in the sense of (2)) at all dyadic points.
I am not sure whether (3) is required; given that F is strictly monotonically increasing, it should follow from (2) by injectivity. In any case I presume F collapses into F(x) = x.
What is the general form of a function that merely satisfies F(0) = 0, F(1/2) = 1/2, F(1) = 1, is strictly monotonically increasing and continuous, and satisfies the inequalities (@1) and (@0) above?
The function also satisfies the strict biconditionals
(5) \forall x, y \in dom(F): x + y > 1 iff F(x) + F(y) > 1,
\forall x, y \in dom(F): x + y < 1 iff F(x) + F(y) < 1,
\forall p, p1 \in codom(F): p + p1 > 1 iff F^-1(p) + F^-1(p1) > 1,
\forall p, p1 \in codom(F): p + p1 < 1 iff F^-1(p) + F^-1(p1) < 1.
If you introduce the new variable s = t - 1/2 and the new function q(s) = F(s + 1/2) - 1/2, then the symmetry equation (2) is rewritten as q(-s) = -q(s), i.e. it is just the condition that q is odd (antisymmetric about the midpoint).
• asked a question related to Numerical Linear Algebra
Question
I am working on meshless methods using radial basis function approximation. As is known, the arising system of linear equations is severely ill-conditioned. What approach would you suggest for solving this problem?
If you are a MATLAB programmer, you can use the pinv command as follows:
X = pinv(A)*B;
pinv(A) is the pseudoinverse of A. The pseudoinversion is based on the truncated singular value decomposition.
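The same pinv/truncated-SVD approach in Python, on an assumed Gaussian-RBF collocation matrix (kernel, width, node count, and right-hand side are all made up for illustration): singular values below `rcond * smax` are treated as zero, which regularises the solve.

```python
import numpy as np

# Hypothetical ill-conditioned RBF-style collocation matrix: a Gaussian
# kernel on closely spaced centres (an assumed test case, not a specific
# meshless formulation).
x = np.linspace(0.0, 1.0, 40)
A = np.exp(-(x[:, None] - x[None, :])**2 / 0.5**2)
b = np.sin(2 * np.pi * x)
print(f"cond(A) = {np.linalg.cond(A):.2e}")   # enormous: severe ill-conditioning

# Truncated-SVD / pseudoinverse solve, analogous to MATLAB's pinv(A)*b:
coef = np.linalg.pinv(A, rcond=1e-10) @ b
print("residual norm:", np.linalg.norm(A @ coef - b))
```

The `rcond` threshold plays the role of the truncation level; raising it discards more small singular values and trades accuracy for stability.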
• asked a question related to Numerical Linear Algebra
Question
Recently I found that a one-dimensional Hermitian operator has an equivalent complex operator under iso-spectral behaviour. In fact, I am searching for a Hermitian operator which does not have an equivalent complex operator.
Hmmm, dear Tahar Latrache, I found the following links in less than a minute...
(And, by the way, I find your post pretty aggressive. What's the point?)
• asked a question related to Numerical Linear Algebra
Question
Recently, I have read some references applying proper orthogonal decomposition to solve unsteady aerodynamic problems. I'm very interested in this topic and really want to learn more about it. For now, I'm not sure how this method works, so, can anyone who is familiar with this topic give me some advice about starting in this field? Or some recommendations for related materials (books and references that may contain necessary mathematics background)? Thanks a lot!
PS: I'm familiar with linear algebra and basic matrix analysis, but I don't know much about control theory since I majored in aerodynamics.
Sometimes you might need to apply another modal decomposition technique instead of POD. For example, DMD would be the proper answer in some cases.
You can find open source codes here:
and I found this comparison useful:
Comparison of optimized Dynamic Mode Decomposition vs POD for the shallow water equations model reduction with large-time-step observations
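Before any aerodynamic application, the mechanics of POD can be seen on a toy snapshot matrix: POD is just the SVD of the snapshots, with singular values ranking the modes by captured energy. The travelling-wave data below is a made-up illustration (it has exact rank 4 by construction).

```python
import numpy as np

# Assumed toy data: snapshots of a travelling wave, one column per time step
nx, nt = 200, 80
x = np.linspace(0.0, 2 * np.pi, nx)
t = np.linspace(0.0, 1.0, nt)
snapshots = np.sin(x[:, None] - 2 * np.pi * t[None, :]) \
          + 0.3 * np.sin(3 * (x[:, None] + 2 * np.pi * t[None, :]))

# POD = SVD of the snapshot matrix: columns of U are the spatial modes,
# singular values rank them by captured "energy"
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)
print("energy captured by first 4 modes:", energy[:4].sum())

# Rank-r reconstruction (reduced-order model of the data)
r = 4
approx = U[:, :r] * s[:r] @ Vt[:r]
print("relative error:", np.linalg.norm(snapshots - approx) / np.linalg.norm(snapshots))
```

For DMD, the open-source PyDMD package is one widely used option; the snapshot-matrix setup is the same.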
• asked a question related to Numerical Linear Algebra
Question
I need to calculate the desired output weights (Wout) which are the linear regression weights of the desired outputs d(n) (a vector) on the input states vector x(n). How can I get binary weights? Is there any mathematical theory to address this problem?
Hello dear friend!
It's nice to hear from you again.
The problem is similar to the perceptron learning rule, with a difference in the constraint: the perceptron requires binary outputs. The concept in the perceptron is that the undesired outputs are subject to change (i.e. 0 to 1 or vice versa), so the weights connected to them are changed in one of the two possible directions to satisfy the desired answer.
However, in your case, where the weights are restricted to 0 and 1, the problem transforms into whether a weight should contribute to a specific component (in the output space) or not. As mentioned by Peter, there is no closed formula, but learning algorithms are applied to iteratively reduce a cost function. Hence a cost function with mean squared error, in accordance with the constraint in the problem (binary weights), may be what you want.
If you take a look at the perceptron rule for binary-output networks, I think the idea will be insightful.
• asked a question related to Numerical Linear Algebra
Question
Please, can someone give me a reference on the Pisot–Dufresnoy–Boyd algorithm?
Keywords: Pisot number
Dear Hanifa
This paper presents two algorithms for certain computations:
• asked a question related to Numerical Linear Algebra
Question
Consider that X is an unknown matrix and A is a known one. I have to solve the following equation: A = (XX^T)/λmax(XX^T). One can normally use the SVD of A, but the presence of λmax(XX^T) in the equation makes it more complicated to factorize. Recall that λmax(XX^T) denotes the maximum eigenvalue of the matrix XX^T. Does anyone have an idea how to solve it in order to find X? Thanks.
Dear Kwassi,
The suggestions of Peter are correct.
(However, to simplify the problem, it is better to put X = kY.)
So I recommend starting with A = YY^T. Once you obtain Y, multiply its entries by √λmax (the square root). The result is the matrix X.
If λmax is not known and varies, as you write, you can multiply each entry of Y by k, which will be a parameter; in effect k^2 = λmax.
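A numerical sketch of that recipe (the target A is randomly generated for illustration). Note the equation forces λmax(A) = 1 and A symmetric positive semidefinite, hence the normalisation; the scaling k then drops out of A = XX^T/λmax(XX^T) entirely, which is why it stays a free parameter.

```python
import numpy as np

rng = np.random.default_rng(2)

# Build a symmetric positive semidefinite target A; the equation
# A = X X^T / lambda_max(X X^T) forces lambda_max(A) = 1, so normalise.
Z = rng.standard_normal((4, 4))
A = Z @ Z.T
A /= np.linalg.eigvalsh(A).max()

# Step 1: solve A = Y Y^T via the symmetric square root of A
w, V = np.linalg.eigvalsh(A), None
w, V = np.linalg.eigh(A)
Y = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

# Step 2: X = k*Y for any k > 0; the lambda_max in the denominator cancels
# the scaling (here k = sqrt(lambda_max) for a chosen lambda_max)
k = 3.0
X = k * Y

XXt = X @ X.T
print(np.allclose(XXt / np.linalg.eigvalsh(XXt).max(), A))  # True
```

Any other square root of A (e.g. a Cholesky factor, or `V @ diag(sqrt(w))` without the trailing `V.T`) works equally well, since the equation only constrains XX^T.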
• asked a question related to Numerical Linear Algebra
Question
Dear Colleagues, see this video on the diagonalisation of a 3x3 matrix.
My question is precisely about the method used from minute 5:00 to 5:21 to find the roots of the polynomial.
This is the link of the video:
It is Horner's method, which carries out the division algorithm (synthetic division).
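For reference, the deflation step of Horner's scheme (synthetic division) can be sketched as follows; the cubic here is an illustrative example, not necessarily the one from the video.

```python
def horner_divide(coeffs, r):
    """Synthetic division of p(x) by (x - r) via Horner's scheme.
    `coeffs` are listed highest degree first. Returns (quotient coefficients,
    remainder); the remainder equals p(r), so it is 0 exactly when r is a root."""
    q = [coeffs[0]]
    for c in coeffs[1:]:
        q.append(c + r * q[-1])
    return q[:-1], q[-1]

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3); deflate the root x = 1
quotient, remainder = horner_divide([1, -6, 11, -6], 1)
print(quotient, remainder)  # [1, -5, 6] 0
```

The quotient x^2 - 5x + 6 then factors as (x - 2)(x - 3), which is exactly how deflation is used when diagonalising a 3×3 matrix: peel off one eigenvalue, then solve the remaining quadratic.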
• asked a question related to Numerical Linear Algebra
Question
Recently I obtained a linear system $Ax = b$, where $A$ is a nonsingular, strictly diagonally dominant $M$-matrix. I also have a matrix splitting $A = S - T$, where $S$ is likewise a nonsingular, strictly diagonally dominant $M$-matrix. So I set up the following stationary iteration scheme:
$$Sx^{(k+1)} = Tx^{(k)} + b.$$
According to numerical results, it seems that this iterative scheme always converges. Is this justified by theoretical analysis? That is, can we prove the convergence of this iterative scheme, i.e.,
show $\rho(S^{-1}T) < 1$, where $\rho(\cdot)$ is the spectral radius.
Please also refer to the URL for details.
Zhu is right: the most classical reference is the book by Varga.
Under the conditions you described the method is NOT always convergent: take S = εA with small ε > 0 to construct a counterexample (εA is still a strictly diagonally dominant M-matrix, yet the iteration matrix is (1 - 1/ε)I, whose spectral radius exceeds 1 for ε < 1/2).
However, you can find mild conditions for convergence.
In this context there are many beautiful results based on 'monotonicity' arguments.
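Both behaviours are easy to check numerically by computing ρ(S^{-1}T) directly; the tridiagonal M-matrix below is an assumed test case.

```python
import numpy as np

n = 6
# A: strictly diagonally dominant M-matrix (tridiagonal -1, 4, -1)
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def rho(S):
    """Spectral radius of the iteration matrix S^{-1}T for the splitting A = S - T."""
    T = S - A
    return np.abs(np.linalg.eigvals(np.linalg.solve(S, T))).max()

# Jacobi-type splitting: S = diag(A) is itself a strictly diagonally
# dominant M-matrix, and the iteration converges
print(rho(np.diag(np.diag(A))) < 1)   # True

# Counterexample in the spirit of the answer: S = eps*A is still an
# M-matrix for eps > 0, but the splitting diverges for small eps
eps = 0.1
print(rho(eps * A) < 1)               # False: rho = |1 - 1/eps| = 9
```

The convergent case is an instance of a regular splitting of an M-matrix (S^{-1} ≥ 0, T ≥ 0), the classical sufficient condition found in Varga.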
• asked a question related to Numerical Linear Algebra
Question
I need to find the column-reduced form of a matrix. Are there any easy methods, free online books, or PDFs where I can find examples?
From a computational viewpoint there is not much difference between row reduction and column reduction: instead of performing the operations on rows, you perform them on columns. In fact, you can turn column reduction into row reduction: take the transpose of the matrix, do row reduction (this can be found in any linear algebra text), and at the end take the transpose again.
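The transpose trick is easy to see in code; a small sympy illustration on an arbitrary example matrix:

```python
from sympy import Matrix

# Column reduction = transpose, row-reduce, transpose back
M = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])

# rref() returns (reduced matrix, pivot column indices); we keep the matrix
col_reduced = M.T.rref()[0].T
print(col_reduced)  # Matrix([[1, 0, 0], [2, 0, 0], [0, 1, 0]])
```

The nonzero columns of the result span the column space of M, and their count is the rank (here 2), exactly as the nonzero rows do in ordinary row reduction.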
• asked a question related to Numerical Linear Algebra
Question
Hi everyone.
I have developed an R package named eemR (https://github.com/PMassicotte/eemR and https://cran.r-project.org/web/packages/eemR/index.html) which aims at providing an easy way to manipulate fluorescence matrices.
One of the functions in the package is used to extract peak values at different regions of the fluorescence matrix (Coble's peaks, for instance). I have noticed that the reported locations of these peaks are not consistent in the literature.
In Coble's 1996 paper, peaks are reported as follows (these are the values I am using in the R package):
Coble, P. G. (1996). Characterization of marine and terrestrial DOM in seawater using excitation-emission matrix spectroscopy. Mar. Chem. 51, 325–346. doi:10.1016/0304-4203(95)00062-3.
Peak B: ex = 275 nm, em = 310 nm
Peak T: ex = 275 nm, em = 340 nm
Peak A: ex = 260 nm, em = 380:460 nm
Peak M: ex = 312 nm, em = 380:420 nm
peak C: ex = 350 nm, em = 420:480 nm
In Coble's 2007 paper, peaks are reported as follows:
Coble, P. G. (2007). Marine optical biogeochemistry: The chemistry of ocean color. Chem. Rev. 107, 402–418. doi:10.1021/cr050350+.
Peak B: ex = 275 nm, em = 305 nm
Peak T: ex = 275 nm, em = 340 nm
Peak A: ex = 260 nm, em = 260/400 - 460 nm
Peak M: ex = 290 nm, em = 310/370 - 410 nm
peak C: ex = 320 nm, em = 420:460 nm
Peak B: ex = 270 nm, em = 306 nm
Peak T: ex = 270 nm, em = 340 nm
Peak A: ex = 260 nm, em = 450 nm
Peak M: ex = 300 nm, em = 390 nm
peak C: ex = 340 nm, em = 440 nm
At first these differences seem minor, but I was wondering what your thoughts are about that. Should I review my code to change or adjust the peak positions?
Hi Philippe,
I agree that it's better to find the maximum (or average) in a specific region rather than at a specific position.
There are some related references which may be helpful.
One is from Coble's new book: Aquatic Organic Matter Fluorescence (2014).
Another is Leenheer's EST paper (2003).
And you can also refer to Chen&Westerhoff&Leenheer's EST paper (2003) .
Best Regards
Penghui
• asked a question related to Numerical Linear Algebra
Question
I have developed a new crossover for 0–1 matrices and need to check its efficiency.
I was not asking about its use; I want to check the efficiency.
• asked a question related to Numerical Linear Algebra
Question
Suppose that G is a finite group, P is a Sylow p-subgroup of G, H/K is a chief factor of G, and T/K is a Sylow p-subgroup of H/K. Is it correct to say that c(T/K) is less than or equal to c(P), where c(P) denotes the nilpotency class of P?
I will try and reformulate in words the answer by Yuri Semenov.
You need to know first of all that if you pass from a group to a subgroup, or to a quotient group, then the nilpotency class (NC) cannot increase.
You also need to know that a Sylow subgroup of a quotient group of the group $G$ is the image of a Sylow subgroup of $G$. And then, by one of the Sylow theorems, a Sylow subgroup of a subgroup of $G$ is a subgroup of a suitable Sylow subgroup of $G$.
With these premises, a Sylow subgroup of the subgroup $H$ of $G$ will have NC less than or equal to the NC of that of a Sylow subgroup of $G$. And then the same holds for a Sylow subgroup of the quotient group $H/K$.
Note that as I have written it, the argument applies also to a composition factor.
• asked a question related to Numerical Linear Algebra
Question
I need to solve the problem shown below, but I do not know what optimization libraries can solve this problem.
argmin_{A,b} Σ_i (x_i^T A x_i - b^T x_i)   subject to   A b = 0
The optimization target is a 3x3 matrix A and a 3D vector b, given the 3D vectors xi where i=1,2,...,N. Clear view of the formula is attached.
I need a C/C++ optimization library, hopefully opensource, light and fast. If you have any idea, help me.
I would appreciate more if you let me know the related class or method in the library. Thank you!
The problem is equivalent to the one obtained by replacing A with a *symmetric* matrix S, subject to b being orthogonal to Sb. To see this, choose S as the symmetric part of A; then x'Ax = x'Sx for any x (' denotes transpose). Putting R = A - S, the constraint Ab = 0 translates to Sb = -Rb. In 3D, for any antisymmetric R you can find a vector r such that Rb = r × b for any b (× is the vector product); thus Sb must be orthogonal to b. This reformulation of the problem reduces the dimensionality by three.
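In C/C++, SQP-style solvers with equality constraints are available in, e.g., NLopt (its SLSQP variant); the same setup is quick to prototype with scipy.optimize first. Since the attached formula is not reproduced in the thread, the sketch below assumes a least-squares variant of the written objective (the linear form exactly as written is unbounded below), so treat it purely as a template for handling the Ab = 0 constraint.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
xs = rng.standard_normal((50, 3))   # stand-ins for the given 3-D vectors x_i

# Assumed illustrative objective: sum_i (x_i^T A x_i - b^T x_i)^2.
# The 12 unknowns are packed as p = [vec(A), b].
def objective(p):
    A, b = p[:9].reshape(3, 3), p[9:]
    r = np.einsum('ij,jk,ik->i', xs, A, xs) - xs @ b
    return r @ r

def constraint(p):                  # A b = 0: three scalar equations
    A, b = p[:9].reshape(3, 3), p[9:]
    return A @ b

p0 = np.concatenate([np.eye(3).ravel(), np.ones(3)])
res = minimize(objective, p0, method='SLSQP',
               constraints={'type': 'eq', 'fun': constraint})
A_opt, b_opt = res.x[:9].reshape(3, 3), res.x[9:]
print(res.status, np.linalg.norm(A_opt @ b_opt))
```

Following the answer above, one can instead optimise over a symmetric S with the scalar constraint b'Sb-style reformulation, shrinking the search space by three dimensions.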
• asked a question related to Numerical Linear Algebra
Question
For a general parameter-dependent m × m matrix M(t), can one always diagonalize it by evaluating
D(t) = U(t)^{-1} M(t) U(t),
where D(t) is a diagonal matrix? Under what conditions can I numerically evaluate U(t) such that the parametric dependence is retained, either exactly or to a very good approximation?
I would also like to know some standard algorithms or approximations. In case I am conceptually not right about thinking this way, please point out the error.
The two most useful criteria (sufficient conditions) for diagonalizability are (1) that A commutes with its conjugate transpose (normal -- already mentioned) and (2) that the eigenvalues of A are simple (roots of the characteristic polynomial with multiplicity 1).  Indeed, the eigenvector for a simple eigenvalue (and the eigenvalue itself) will depend differentiably on the matrix, so presumably on your parameter, and the computation should work nicely as a perturbation problem.  [This extends to suitable linear operators on infinite-dimensional spaces.]   I would recommend Kato's book for the theory, but have no specific recommendation for the computation, which depends strongly on the available known structure of the parametrized matrix.
• asked a question related to Numerical Linear Algebra
Question
Is there some package for FORTRAN with the backslash operator, or at least a good solver for large systems of equations? I read in some forums about LAPACK, but I couldn't find a Windows tutorial for its installation. Does anyone have expertise regarding the efficiency of these packages?
The list at http://www.netlib.org/utk/people/JackDongarra/la-sw.html is quite comprehensive, though some solvers like PARDISO or WSMP are not listed (both have free versions with limited functionality).
You should choose a solver depending on your needs, as well as type and size of problems to be solved. Do you need a distributed parallel solver? Is your matrix banded? Structured?
• asked a question related to Numerical Linear Algebra
Question
Let AX = B be a linear system, where A is a square matrix of larger order (20×20 or more) and X, B are column matrices (X is the matrix of unknowns). What's the easy way to solve this system?
The best way is to use LAPACK (see the link).
It contains state-of-the-art numerical algorithms. You can use either Gaussian elimination or a QR factorization.
Moreover, there exists a C++ version.
• asked a question related to Numerical Linear Algebra
Question
By using the formula in the attachment, we calculate the angle between two complex vectors. It turns out that the cosine of the angle between two complex vectors is complex. But the cosine of an angle should not be complex. Could you tell me how to understand this problem?
• asked a question related to Numerical Linear Algebra
Question
Dear masters
Can the Householder method be applied when our vector is not numeric? Generally, for a given vector V = (v1 v2 . . . vn), can we find two non-singular matrices P(t) and Q(t) such that P(t)V(t)Q(t) = (a(t) 0 0 . . . 0)^t ?
Yes; clearly, what you do with numbers you can also do with symbols, and the proof of the method itself works with symbols. However, there is a little trouble: normally one also applies a row permutation to make the first element of the vector one of the larger ones (in order to improve roundoff behaviour), but this cannot be done symbolically. But what about Q(t), if, as you wrote, V is only a column?
• asked a question related to Numerical Linear Algebra
Question
Recently I came across matrix equations given as follows (using MATLAB notation):
(kron(A,B) + kron(C,D))*x = f; (linear systems)
where A is a complex symmetric (square) matrix; B, C, D are all real non-symmetric (square) matrices; and f is a real vector.
But we can reformulate it as
B*X*A^T + D*X*C^T = F, F = reshape(f, n, m);
Are there any good suggestions for this kind of matrix equation?
Iterative solvers and suitable preconditioning?
We have some solvers online; I am not sure whether the Sylvester solver has already been made public. Check the M.E.S.S. homepage.
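The equivalence between the Kronecker form and the generalized Sylvester form is worth verifying before picking a solver; the check below uses random matrices for illustration. Note that vec must stack columns (Fortran/column-major order), matching MATLAB's reshape.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 4, 5
A, C = rng.standard_normal((m, m)), rng.standard_normal((m, m))
B, D = rng.standard_normal((n, n)), rng.standard_normal((n, n))
X = rng.standard_normal((n, m))

# vec identity: (kron(A,B) + kron(C,D)) vec(X) = vec(B X A^T + D X C^T),
# with vec stacking columns (MATLAB's X(:) convention = order='F')
x = X.ravel(order='F')
lhs = (np.kron(A, B) + np.kron(C, D)) @ x
rhs = (B @ X @ A.T + D @ X @ C.T).ravel(order='F')
print(np.allclose(lhs, rhs))  # True
```

The matrix form keeps the unknowns at n·m instead of an (n·m)×(n·m) Kronecker system, which is why generalized-Sylvester solvers (direct Bartels–Stewart-type methods for two terms, or preconditioned iterative methods as suggested above) are preferable to forming the Kronecker matrix.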
• asked a question related to Numerical Linear Algebra
Question
I need a random matrix with preassigned correlation for Monte Carlo simulation
If you're using a statistics package, it probably has a library function to do this. R, Matlab, and Mathematica all do, assuming you want to generate a multivariate Gaussian with the required covariance. Otherwise, generate vectors from an isotropic Gaussian with unit variance and multiply them by one of the matrices of a Cholesky factorisation of the covariance matrix.
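A numpy sketch of the Cholesky approach from the answer; the target correlation matrix is an illustrative example (it must be symmetric positive definite for the factorisation to exist).

```python
import numpy as np

# Target correlation matrix (symmetric positive definite)
R = np.array([[1.0, 0.7, 0.2],
              [0.7, 1.0, 0.5],
              [0.2, 0.5, 1.0]])

L = np.linalg.cholesky(R)                 # R = L @ L.T
rng = np.random.default_rng(0)
Z = rng.standard_normal((3, 100_000))     # i.i.d. unit-variance samples
Y = L @ Z                                 # correlated samples: cov(Y) -> R

print(np.round(np.corrcoef(Y), 2))
```

The same trick works with an eigendecomposition square root when R is only positive semidefinite; for non-Gaussian marginals with a prescribed correlation, more care (e.g. copula methods) is needed.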
• asked a question related to Numerical Linear Algebra
Question
I have a large sparse matrix in MATLAB and I need the complement of this matrix. What should I do?
Note that because of an out-of-memory error I cannot use D = 1 - S, where S is the original matrix and D is its complement.
If you need to do certain matrix computations with the complement of S (denoted by D), then it is usually unnecessary to form D explicitly. For instance, a matrix-vector product D*v requires only one matrix-vector product with S: with D = 1 - S in the MATLAB sense (the all-ones matrix minus S), D*v = sum(v)*ones(n,1) - S*v. In this sense, you can write a subroutine that only requires S as an input to solve your problem.
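A matrix-free sketch of this idea with SciPy's LinearOperator; here "1 - S" is read as the all-ones matrix minus S (MATLAB's elementwise complement), so the action on a vector needs only sum(v) and one sparse product. The size is kept small only so the dense sanity check fits in memory.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator

n = 2000
S = sp.random(n, n, density=1e-3, format='csr', random_state=0)

# D = ones(n, n) - S is dense, but its action never requires forming it:
# D @ v = sum(v) * ones(n) - S @ v
D = LinearOperator((n, n), matvec=lambda v: v.sum() * np.ones(n) - S @ v)

v = np.linspace(0.0, 1.0, n)
w = D @ v                       # one sparse mat-vec with S, no dense matrix

# sanity check against the explicitly formed dense complement (small n only)
dense = (np.ones((n, n)) - S.toarray()) @ v
print(np.allclose(w, dense))  # True
```

Such a LinearOperator can be passed directly to SciPy's iterative solvers (cg, gmres, eigsh, ...), which is the usual way to "use" a dense complement without ever storing it.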
• asked a question related to Numerical Linear Algebra
Question
Any suggestion/resources are appreciated.
Thank you so much.
The most applicable method is to make some symmetry ansatz (like a similarity form) and transform the PDE into an ODE, which hopefully is exactly solvable. I don't think the inverse scattering method has much practical value; a direct numerical solution will most likely be much better in most cases (unless you are investigating very special effects).
• asked a question related to Numerical Linear Algebra
Question
Good morning,
I read the article titled 'SINRD Circuits Analysis with WCIP'. We can solve the equation system (10) there by the GMRES algorithm.
To use GMRES we must put the system in the form of equation (11), which I have already done; that is to say, the system has the form A*x = b.
My question is: how can we write equation (11) in MATLAB in order to use GMRES?
Thank you.
Use the function 'gmres' in MATLAB. Be aware of the input parameters, as mentioned by Milan D. Mihajlovic.
• asked a question related to Numerical Linear Algebra
Question
Based on the Jacobian method, how can I calculate the basis vector ε if I have θ of the ankle and knee joints as time series? My matrix would be 10 rows (time steps) of X vectors for two joints (2 columns):
Jhip(θ)(t) ⋅ ε//(t) = 0
I know there is a MATLAB function (null()), but it gives me nothing.
• asked a question related to Numerical Linear Algebra
Question
I wrote a MATLAB finite-difference code that solves the 3-D Laplace equation for a large, complex geometry. Currently I am using MATLAB's preconditioned conjugate gradient function -- which runs on one core only -- to iteratively solve the system of equations. The code takes several hours to solve with 64 million degrees of freedom. Eventually I would like to increase the size of my domain, so a parallel implementation of the solver would be ideal, possibly one that runs on a cluster. Does anyone know of a free parallelized conjugate gradient code (written in any language) that I can wrap into my MATLAB code?
As far as I remember, the Trilinos project and also PETSc contain this possibility.
• asked a question related to Numerical Linear Algebra
Question
a   c   0   …   0   1
b   a   c   0   …   0
0   b   a   c   …   0
⋮           ⋱       ⋮
0   …   b   a   c   0
0   …   0   b   a   c
1   0   …   0   b   a
This is a good question with more than one answer.
You may find the following article helpful:
Y. Eidelman, I. Gohberg, V. Olshevsky, Eigenstructure of order-one-quasiseparable matrices, Linear Algebra and its Applications 405 (2005), 1-40.
See Section 1.4, p. 6 for an overview. An introduction to tridiagonal matrices starts in Section 1.1, p. 2 (see the examples, starting on page 3).
• asked a question related to Numerical Linear Algebra
Question
I am interested in the numerical solution of convection-diffusion problems where the convection dominates. In the iterative solution with Gauss-Seidel, instabilities can occur for large Peclet numbers. Is the "downwind numbering" as in the paper by Bey & Wittum a possible solution for this problem on structured grids and with variable coefficients (advection speeds)? Unfortunately, I don't have access to the paper.
• asked a question related to Numerical Linear Algebra
Question
Any suggestion/resources are appreciated.
This is a good question with many possible answers and points of view. It is often the case that we learn about human behaviour indirectly, by considering how humans interact with other humans, the environment, and machines. In short, mathematical views of human behaviour often focus on stimulus-response modelling.
Mathematical models of human behaviour have recently been studied in the context of epidemics in
P. Poletti, Human behaviour in epidemic modelling, Ph.D. thesis, University of Trento, 2010:
Human behaviour is modelled by Poletti in terms of two mutually influencing phenomena: epidemic transitions and behavioural changes in the population of susceptible individuals (see Section 2.2, starting on page 16).
Modelling human-computer (device) interaction is the focus of
P. Eslambolchilar, Making sense of interaction using a model-based approach, Ph.D. thesis, National University of Ireland, Maynooth, 2006:
See, for example, the probabilistic framework of a model-based behaviour system in Fig. 6.13, starting on page 182.
A bit less mathematical, but still very interesting, is a model of human behaviour in terms of human-made music, given in
A. Tidemann, A groovy virtual drummer: Learning by imitation using a self-organizing connectionist architecture, Ph.D. thesis, Norwegian University of Science and Technology, 2009:
For an overview of Tidemann's approach to modelling and imitating human musical expressiveness, see Fig. 1.1, p. 5.
This thesis introduces the SHEILA architecture in terms of human drum-playing patterns with an accompanying melody (see Section 3, Architecture, page 106 in the pdf file but unnumbered in the thesis).
• asked a question related to Numerical Linear Algebra
Question
Thanks.
Yes, for example, the quaternion group Q  has a unique subgroup H={1,-1} of order 2,  for which Q/H is the Klein four-group.
• asked a question related to Numerical Linear Algebra
Question
Hi guys, I've got a problem in my recent research. It can be sketched as follows:
How does one theoretically choose a positive α such that the matrix D = B*B^T/α + B*inv(A)*B^T is nonsingular or well-conditioned? Here B is an m-by-n matrix (n >= m) and A is an n-by-n nonsingular matrix.
I think that you need more conditions on B. For example, if B has all zero entries, then D = 0 for every alpha, so no alpha meets your criteria in that case.
I have found that a useful way to identify the conditions needed to obtain the results you desire (D nonsingular or well-conditioned for some value of alpha) is to try fairly non-restrictive conditions and then look for counterexamples. The counterexamples can provide insight into what other conditions are needed.
• asked a question related to Numerical Linear Algebra
Question
(* complete dictionary of words, one per row, number of rows is (alpha^word), using words of length "word" and an alphabet of "alpha" number of characters *)
alpha = 4; word = 3; dict =.;
dict = Partition[Flatten[Tuples[Reverse[IdentityMatrix[alpha]], word]], (alpha*word)];
PseudoInverse[dict] == ((Transpose[dict])*((alpha)^-(word - 1))) - ((word - 1)/(alpha^word)/word)
Output = True
An equation editor format is here if you can't read Mathematica:
Oh, OK. I have written quite a few Wikipedia articles, and their policy concerning new material (particularly in science) is very clear: any statement challenged, or likely to be challenged, should come from a reliable source.
They do give a definition of what they consider a reliable source (http://en.wikipedia.org/wiki/Wikipedia:Verifiability#What_counts_as_a_reliable_source). For this very reason, if a new equation is posted without any supporting article or book, it is very likely to be removed. Such material should also have been peer-reviewed (in some form) in order to be accepted, which is why self-published material is generally not accepted as a reliable source.
All of this supports the fact that Wikipedia does not publish original research (http://en.wikipedia.org/wiki/Wikipedia:No_original_research). ResearchGate, for example, would be a much better medium for that. To post new scientific material on Wikipedia, I would advise first publishing it in a peer-reviewed journal.
Evidently people that manage wikipedia content are like you and me and therefore prone to error, misjudgment or misinterpretation of that content. Altogether however I do understand why the rules exist and as far as I know they're pretty good guiding lines.
For the scientific consequences of your question I really could not state if the equation you've put here is new material (like a different approach for the same problem) or a derivation of an old one (same formula written in a way it's more handy in certain sciences like computer sciences) since as I said I'm not familiar with the concept. Be careful thought that using Mathematica code to state a new concept is highly compromising for a fair analysis due to inability of fellow scientists to read correctly that statement and due to particulars on that programming language that can somehow infer on the results of your proposal.
• asked a question related to Numerical Linear Algebra
Question
In literature, for real vectors, the relationship between higher order cumulants and moments can be found.
How does this relationship look for complex vectors?
For example, how does E[y1* y2* y3* y4 y5 y6] relate to all the higher-order cumulants? Is there a general formula for this?
There's no difference, it's just that you have to view X_i and X*_i as independent variables.
If you denote moments by M_ij.. = E[Xi Xj...] etc, and cumulants by C_ij..., then the general formula is, in words, that M_ij... is the sum over all partitionings of ij... of the corresponding products of cumulants. Thus, e.g.,
M_i = C_i,
M_ij = C_ij + C_i C_j,
M_ijk = C_ijk + C_ij C_k + C_ik C_j + C_jk C_i + C_i C_j C_k.
These relations can be inverted to yield the cumulants as similar but slightly more complicated sums of products of moments; thus, e.g.,
C_i = M_i,
C_ij = M_ij - M_i M_j,
C_ijk = M_ijk - M_ij M_k - M_ik M_j - M_jk M_i + 2 M_i M_j M_k.
A nice way of summarizing this is in terms of generating functions (GFs). For a single variable, if F(a) is the moment GF,
F(a) = E[exp(a X)] = sum_n a^n E[X^n] / n!
then G(a) = log F(a) is the corresponding GF for cumulants. This has a straightforward generalization to several variables.
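The low-order relations above are easy to check numerically. A small sketch with illustrative data (not from the thread), using real variables for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)
y = x**2 + rng.standard_normal(100_000)   # two dependent variables

# Second order: C_ij = M_ij - M_i M_j is exactly the covariance.
M_i, M_j, M_ij = x.mean(), y.mean(), (x * y).mean()
C_ij = M_ij - M_i * M_j
print(np.isclose(C_ij, np.cov(x, y, bias=True)[0, 1]))  # → True

# Third order (single variable): C_iii = M_iii - 3 M_ii M_i + 2 M_i^3
# equals the third central moment E[(X - M_i)^3], term by term.
M1, M2, M3 = x.mean(), (x**2).mean(), (x**3).mean()
C3 = M3 - 3 * M2 * M1 + 2 * M1**3
print(np.isclose(C3, ((x - M1)**3).mean()))  # → True
```

For complex variables the same check applies after treating y_i and y_i* as separate variables, as noted above.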
• asked a question related to Numerical Linear Algebra
Question
Linear algebra
You need to determine what kind of ill-conditioning it is; say, whether the matrix is rank-deficient or the problem is ill-posed. For the latter, methods with regularization appear promising.
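For the ill-posed case, a minimal Tikhonov regularization sketch (a made-up Hilbert-matrix example; `lam` is an illustrative choice, not a recommendation):

```python
import numpy as np

# A classically ill-conditioned test matrix (Hilbert), with a
# consistent right-hand side built from a known solution.
n = 10
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true

# Tikhonov: minimize ||A x - b||^2 + lam ||x||^2, i.e. solve the
# regularized normal equations (A^T A + lam I) x = A^T b.
lam = 1e-10
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
print(np.linalg.norm(A @ x_reg - b))   # small despite cond(A) ~ 1e13
```

Choosing lam in practice (L-curve, discrepancy principle, cross-validation) is a topic of its own.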
• asked a question related to Numerical Linear Algebra
Question
I want to solve a system with fatty acids information from its protonic matrix.
Do you need the inverse itself, or do you want to find the solution of a system of linear equations whose matrix is singular? If only the latter, then you don't need the inverse of the matrix (which doesn't exist anyway) to find a solution.
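To illustrate on a made-up singular but consistent system: least squares returns the minimum-norm solution without ever forming an inverse.

```python
import numpy as np

# Singular (rank 2) but consistent: row 3 = row 1 + row 2,
# and b is chosen in the range of A (b = A @ [1, 1, 1]).
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])
b = np.array([6.0, 15.0, 21.0])

# np.linalg.solve would fail here (singular matrix); lstsq does not.
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(rank)                    # → 2
print(np.allclose(A @ x, b))   # → True
```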
• asked a question related to Numerical Linear Algebra
Question
BiCG-type methods include BiCG, BiCGSTAB, CGS, QMR, BiCGSTAB2, BiCGSTAB(l), GPBiCG, IDR, etc.
@Lauro de Paula: Thanks for your suggestion. I will have a look at your paper.
• asked a question related to Numerical Linear Algebra
Question
In many practical problems, we get a large sparse matrix. Can we find determinant of this matrix efficiently?
Please share your large sparse matrix and we can discuss it. Because sparse matrices come in many different types, the question cannot be answered in general.
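That said, for matrices that admit a sparse LU factorization, the determinant can be read off the factors. A sketch using SciPy's SuperLU wrapper on a made-up test matrix (log-magnitude only; recovering the sign would additionally require the parities of the row/column permutations):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# A sparse, symmetric positive definite tridiagonal test matrix.
n = 200
A = sp.diags([np.full(n - 1, -1.0), np.full(n, 4.0), np.full(n - 1, -1.0)],
             [-1, 0, 1], format="csc")

lu = splu(A)
# With Pr*A*Pc = L*U and unit-diagonal L, |det(A)| = prod(|diag(U)|);
# accumulate in log space to avoid overflow/underflow.
logdet = np.sum(np.log(np.abs(lu.U.diagonal())))

# Cross-check at this small size against the dense log-determinant.
sign, dense_logdet = np.linalg.slogdet(A.toarray())
print(np.isclose(logdet, dense_logdet))  # → True
```

The dense cross-check is only feasible for small n; the sparse route scales to much larger matrices.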
• asked a question related to Numerical Linear Algebra
Question
I want to compare it with a numerical method that I'm trying to develop.
One often-used, robust and stable algorithm is the "scaling and squaring" approach, which is implemented, e.g., in MATLAB's expm routine. More details can be found in N. Higham's textbook "Functions of Matrices".
However, if A is large and sparse and you are really interested in exp(At)v, i.e. the matrix-vector product of exp(At) with a vector v, a variety of other methods opens up. These are usually more appropriate for the large-scale case.
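Both routes are available in SciPy, which can serve as a comparison baseline; a small sketch on a 2x2 rotation generator (illustrative only):

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse.linalg import expm_multiply

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])      # generator of a plane rotation
v = np.array([1.0, 0.0])

# Dense route: scaling-and-squaring underlies scipy's expm.
full = expm(A) @ v
# Action-only route: exp(A) v without ever forming exp(A),
# the option of choice when A is large and sparse.
action = expm_multiply(A, v)
print(np.allclose(full, action))  # → True
```

Here exp(A) is the rotation by one radian, so both results equal (cos 1, -sin 1).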
• asked a question related to Numerical Linear Algebra
Question
c = all(b>=-100*eps);
b is a matrix I already have.
what can you say about matrix c?
2.2204e-16 is the value of eps which command window shows.
The machine epsilon (short: eps) is the distance from 1.0 to the next larger number that double-precision floating-point arithmetic (as used by Matlab) can represent; differences smaller than that around 1 are rounded away.
Try this:
>> format long e
>> x=1;y=x+eps;
>> y-x
ans =
2.220446049250313e-016
>> x=1;y=x+eps/2;
>> y-x
ans =
0
You see that y-x=0 and Matlab cannot recognise a difference less than eps:
>> eps
ans =
2.220446049250313e-016
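The same experiment in Python for comparison (also IEEE double precision, so the same eps):

```python
import sys

eps = sys.float_info.epsilon   # 2.220446049250313e-16, same as Matlab's eps
x = 1.0
print((x + eps) - x)           # → 2.220446049250313e-16
print((x + eps / 2) - x)       # → 0.0  (1 + eps/2 rounds back to 1.0)
```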
• asked a question related to Numerical Linear Algebra
Question
Could anyone point me towards publications describing state of the art in block-GMRes (or other block-Krylov) solvers ?
Indeed, Martin Gutknecht has a nice survey from 2007, available at his web-site:
Block GMRES was first introduced in a thesis by B. Vital in 1990.
Relevant papers other than the survey mentioned above include (in alpha order):
Lakhdar Elbouyahyaoui, Abderrahim Messaoudi, and Hassane Sadok, Algebraic properties of the block GMRES and block Arnoldi methods, Electronic Transactions on Numerical Analysis, 33 (2008–2009), 207–220.
Julien Langou, Iterative methods for solving linear systems with multiple right hand sides.
Ph.D. thesis, INSA Toulouse, June 2003. CERFACS Report TH/PA/03/24.
(available online at cerfacs.fr)
Mickaël Robbé and Miloud Sadkane, Exact and inexact breakdowns in the block GMRES method, Linear Algebra and its Applications 419 (2006), 265–285.
Valeria Simoncini and Efstratios Gallopoulos, Convergence properties of block GMRES and matrix polynomials, Linear Algebra and its Applications 247 (1996), 97–119.
There is also a very recent paper where there is a block version of FGMRES:
Henri Calandra, Serge Gratton, Julien Langou, Xavier Pinel, and Xavier Vasseur, Flexible variants of block restarted GMRES methods with application to geophysics. SIAM Journal on Scientific Computing 34 (2012), A714–A736.
• asked a question related to Numerical Linear Algebra
Question