Random Matrix Theory

  • Elio Conte added an answer:
    Is there software available for applying random matrix theory to time series?

    I am looking for software that someone could send me for applying random matrix theory to time series, calculating the eigenvalues of the correlation matrix. Can you help me?

    Elio Conte

    I have seen it; it requires a proper compiler, I suppose. Does anyone have a ready-to-run .exe?
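    In the meantime, the core computation asked about here (eigenvalues of the empirical correlation matrix, compared against the Marchenko-Pastur bulk) needs nothing more than a linear-algebra library. A minimal sketch in Python/NumPy, where the function names and the pure-noise test data are illustrative and not part of any particular package:

```python
import numpy as np

def correlation_spectrum(X):
    """Eigenvalues of the empirical correlation matrix of N series of length T.

    X has shape (T, N): T observations of N time series.
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each series
    C = Z.T @ Z / X.shape[0]                   # empirical correlation matrix
    return np.linalg.eigvalsh(C)

def marchenko_pastur_bounds(N, T):
    """Edges of the Marchenko-Pastur bulk for pure noise (q = N/T < 1)."""
    q = N / T
    return (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2

rng = np.random.default_rng(0)
T, N = 2000, 100
X = rng.standard_normal((T, N))   # pure-noise stand-in for real series
eig = correlation_spectrum(X)
lo, hi = marchenko_pastur_bounds(N, T)
# For uncorrelated noise the whole spectrum falls inside [lo, hi];
# eigenvalues escaping above `hi` in real data signal genuine correlations.
```

    With real time series one would replace `X` by the observed data matrix and inspect the eigenvalues above the Marchenko-Pastur edge.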

  • Ahmad Bazzi added an answer:
    Is there any closed-form expression for the eigenvalues of a matrix C that is the Hadamard product of two matrices A and B, i.e. C = A o B?

    The Hadamard product (pairwise product) of two matrices is denoted here by "o":

    C = A o B.

    Let Lambda(A) be the set of all eigenvalues of matrix A.

    I am aware of inequalities relating Lambda(A), Lambda(B), and Lambda(C).
    It is possible that my question has no answer, but I would just like to make sure: is there any closed-form EQUALITY relating Lambda(A), Lambda(B), and Lambda(C)?

    Thank you in advance.

    Ahmad Bazzi

    Thank you all for your delightful answers
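    Numerically it is easy to see why only inequalities are available. A small NumPy sketch (the helper `random_psd` is illustrative) shows that the spectrum of A o B is generically not any simple function of the two spectra, while, for example, Schur's lower bound for positive semidefinite matrices does hold:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_psd(n):
    """A random symmetric positive semidefinite n x n matrix (illustrative)."""
    M = rng.standard_normal((n, n))
    return M @ M.T / n

n = 6
A = random_psd(n)
B = random_psd(n)
C = A * B   # NumPy's elementwise * is exactly the Hadamard product

eig_A = np.sort(np.linalg.eigvalsh(A))
eig_B = np.sort(np.linalg.eigvalsh(B))
eig_C = np.sort(np.linalg.eigvalsh(C))

# No closed-form equality: the spectrum of C is generically not the
# elementwise product (or any simple rearrangement) of the two spectra.
equality_holds = np.allclose(eig_C, np.sort(eig_A * eig_B))  # almost surely False

# Inequalities do hold, e.g. Schur's bound for PSD matrices:
#   lambda_min(A o B) >= lambda_min(A) * min_i B_ii
schur_bound = eig_A[0] * B.diagonal().min()
```

    The Schur product theorem also guarantees that C stays positive semidefinite when A and B are, which is as close to a "closed form" statement as one gets in general.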

  • Eytan Katzav added an answer:
    Is there any internal relation between the circle law and the semicircle law?

    In random matrix theory, let $A$ be a random $n \times n$ matrix whose entries are i.i.d. with expectation 0 and variance 1, and let $F$ be the limiting spectral distribution (LSD) of $A/\sqrt{n}$; then $F$ is the uniform distribution over the unit disk. The marginal distribution of $F$ is a non-standard semicircle law, which is the LSD of a Wigner matrix $B$. Is there any relationship between the circle law and the semicircle law? That is, knowing one of the two laws, can we prove the other one by means of matrix analysis?

    Eytan Katzav

    The reply of James Peters is to the point and includes many references and explanations.

    Let me take a different angle on things - both the circle law and the semicircle law are a sort of law of large numbers: they capture some universal average behaviour of a large collection of random numbers (in this case organized as a matrix). The eigenvalues of those random matrices converge to one of the above-mentioned laws under quite general conditions (i.e., even if the individual entries are not Gaussian or uniformly distributed - any distribution with finite variance usually does the job; Tim Rogers' PhD thesis discusses this, for example).

    The main difference between the laws is that in the case of the circle law the eigenvalues are complex while in Wigner's semi-circle law the eigenvalues are real - and the reason is simple. In the first case the random matrices are non-hermitian while in the second they are hermitian (or even symmetric). 

    Obtaining one from the other is non-trivial, but there are techniques that take a non-hermitian ensemble, build from it an ensemble of hermitian matrices of double the size, and then use results for those hermitian matrices (such as the semicircle law) to infer results for the non-hermitian ones.
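    The contrast between the two laws, and the doubling trick just described, can be illustrated numerically. A minimal NumPy sketch, with matrix sizes and normalizations chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Ginibre matrix: i.i.d. real entries with variance 1/n -> circular law,
# complex eigenvalues filling the unit disk.
G = rng.standard_normal((n, n)) / np.sqrt(n)
eig_G = np.linalg.eigvals(G)

# Symmetrized (Wigner) matrix from the same entries -> semicircle law,
# real eigenvalues on [-2, 2].
W = (G + G.T) / np.sqrt(2)
eig_W = np.linalg.eigvalsh(W)

# The doubling trick: the 2n x 2n hermitian matrix
#   H = [[0, G], [G^T, 0]]
# has eigenvalues +/- the singular values of G, so hermitian machinery
# can be brought to bear on the non-hermitian G.
H = np.block([[np.zeros((n, n)), G], [G.T, np.zeros((n, n))]])
eig_H = np.sort(np.linalg.eigvalsh(H))
sv = np.linalg.svd(G, compute_uv=False)
```

    Plotting `eig_G` in the complex plane and a histogram of `eig_W` reproduces the two limiting shapes.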

  • Alexander Wickstrom added an answer:
    What sort of mathematical background would best prepare a neuroscientist to really understand cortical networks?

    Structurally, both micro- and macro- cortical networks are highly complex. 

    When analyzed functionally in real time, whether locally or globally, the complexity of these networks really challenges my comprehension. It seems that being able to manipulate mathematical models of brain networks should prepare a scientist to better conceptualize the dynamic function of in vivo networks.

    I've looked at mathematical concepts such as random matrix theory, Bayesian networks, hierarchical hidden Markov models, chaos, and nonlinear dynamics; are any of these better suited than the others, or is a combined approach drawing on techniques from several disciplines more appropriate? Additionally, what kind of relevant course work should a hopeful neuroscientist invest in to perform these kinds of analyses?

    Alexander Wickstrom

    I appreciate all of these great answers. Looks like I still have a lot of work to do! 

  • Ignat V. Fialkovsky added an answer:
    Can someone suggest any references on diagrammatics and renormalization for a QFT model with a non-trivial tadpole?

    See above.

    Ignat V. Fialkovsky

    Thanks a lot for your answers!
    I also found this work helpful:

    Steven Weinberg, "Calculations of Symmetry Breaking", Phys. Rev. D 7 (1973) 2887.

  • Alexander Yurkin added an answer:
    Could the distribution of the prime numbers be related to a physical system?
    It seems that the distribution of eigenvalues in quantum chaotic systems obeys the same statistics as the eigenvalues of random matrices.

    It has also been shown that the distribution of the critical zeros of the Riemann zeta-function can be related to the distribution of eigenvalues of certain random matrices:

    So this would seem to suggest a statistical relationship between the zeros of the zeta function and the energy levels of some quantum chaotic system. Can any relationship be drawn between the placement of the prime numbers and a dynamical system? Are there any other implications of this connection?
    • Source
      ABSTRACT: The eigenvalues of quantum chaotic systems have been conjectured to follow, in the large energy limit, the statistical distribution of eigenvalues of random ensembles of matrices of size $N\rightarrow\infty$. Here we provide semiclassical arguments that extend the validity of this correspondence to finite energies. We conjecture that the spectrum of a generic fully chaotic system without time-reversal symmetry has, around some large but finite energy $E$, the same statistical properties as the Circular Unitary Ensemble of random matrices of dimension $N_{\rm eff} = t_H / \sqrt{24 d_1}$, where $t_H$ is the Heisenberg time and $\sqrt{d_1}$ is a characteristic classical time, both evaluated at energy $E$. A corresponding conjecture is also made for chaotic maps.

    + 1 more attachment
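    The level-repulsion statistics referred to in this question can be reproduced in a few lines. A minimal NumPy sketch; the crude unfolding below, which keeps only the central half of the spectrum, is an illustrative shortcut rather than the proper unfolding procedure:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 800

# GUE matrix: hermitian with complex Gaussian entries.
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (Z + Z.conj().T) / 2
ev = np.sort(np.linalg.eigvalsh(H))

# Crude unfolding: keep the central half of the spectrum, where the
# semicircle density is roughly flat, and rescale to unit mean spacing.
bulk = ev[n // 4 : 3 * n // 4]
s = np.diff(bulk)
s = s / s.mean()

# Level repulsion: for GUE the Wigner surmise gives P(s) ~ s^2 near 0,
# so very small spacings are rare - unlike for Poisson (uncorrelated) levels.
frac_tiny = float(np.mean(s < 0.1))
```

    Comparing a histogram of `s` against the Wigner surmise is the standard visual check that a spectrum (e.g. the zeta zeros) follows GUE statistics.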

    Alexander Yurkin

    I think about probability theory. Nevertheless, M. Planck, A. Einstein and E. Schrödinger were right! They were fully developed people. And here M. Born and L. Landau are not right, because in them (in Born and Landau) the right cerebral hemisphere in the cranium was underdeveloped. So theory and experiment have shown.

  • Parviz Parvin added an answer:
    Why does random lasing in highly scattering media not occur under CW optical pumping?
    Random lasing has been observed in several kinds of optically active, highly scattering media. Is there a method to make an optically CW-pumped random laser?
    Parviz Parvin
    Here, thermal effects may simultaneously induce strong ASE and high losses together with a smaller inverted population (small gain).
  • Camille Male added an answer:
    What is the order in N of Cov( u_ij u_ik ) and of Var( u_ij ), for i, j, k distinct and U = (u_ij) an N by N Haar matrix?
    What are the orders of Cov( u_ij u_ik ) and Var( u_ij ), for i, j, k distinct, where (u_ij) are the entries of a matrix distributed according to the Haar measure on the unitary group?
    It is clear that the formulas given by Weingarten calculus (e.g. Benoit's) answer this question in greater generality, but I cannot find a reference for this simple application; do you know any?
    Camille Male
    Thanks to James for his p.m. answer. Let me share it and comment on it in the context of traffics.
    By James Mingo: "First, regarding your post: according to Benoit's formula,
    \[ \mathrm{E}(u_{i_1 j_1} \cdots u_{i_n j_n} \overline{u_{i_1' j_1'}} \cdots \overline{u_{i_n' j_n'}}) = \sum_{\pi, \sigma \in S_n} \delta_{i_1 i_{\pi(1)}'} \cdots \delta_{i_n i_{\pi(n)}'} \, \delta_{j_1 j_{\sigma(1)}'} \cdots \delta_{j_n j_{\sigma(n)}'} \, \mathrm{Wg}(\pi^{-1}\sigma). \]
    We have (because there are no complex conjugates)
    E(u_{ij} u_{ik}) = 0 as well as E(u_{ij}) = 0.
    Thus the covariance is 0. If we replace U by O (the orthogonal group), we still get E(o_{ij} o_{ik}) = 0 for j and k not equal, and E(o_{ij}^2) = 1/N, where N is the size of the matrix."

    I have since realized a simple but interesting fact: an easy way to observe the difference between the limiting distributions of permutation matrices and of Haar matrices from the classical compact groups.

    For any non-commutative random variable "a" in a space of traffics, thanks to a certain operad construction I'll present soon in the revised version of [1], I can define the operator "deg(a)", which corresponds, for a matrix "A = (A_{i,j})_{i,j=1..N}", to the diagonal matrix
    "deg(A) = diag( \sum_{j=1..N} A_{ij} )_{i=1..N}".

    If "u" is the limit in traffic distribution of a uniform permutation matrix (or of any permutation matrix with a small density of cycles of each given size), then "deg(u)" is the identity "\mbf 1" of the underlying *-probability space.
    However, if "u" is the limit of a Haar matrix on the unitary group, then "deg(u)" has the distribution of "\theta \mbf 1", where "\theta" is a complex Gaussian random variable.
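    The difference Camille describes can be checked numerically. A minimal NumPy sketch, using the standard QR-with-phase-correction recipe to sample from Haar measure, compares the row sums deg(A) of a permutation matrix and of a Haar unitary:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 400

def haar_unitary(n):
    """Sample from Haar measure on U(n): QR of a complex Ginibre matrix,
    with the phases of R's diagonal folded back into Q."""
    Z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))   # rescale each column by a unit phase

U = haar_unitary(N)
P = np.eye(N)[rng.permutation(N)]   # uniform permutation matrix

deg_U = U.sum(axis=1)   # diagonal entries of deg(U): the row sums
deg_P = P.sum(axis=1)

# Permutation matrix: every row sum is exactly 1, so deg(P) is the identity.
# Haar unitary: the row sums behave like standard complex Gaussians, and by
# unitarity the average of |deg(U)_i|^2 over the rows equals exactly 1.
mean_sq = float(np.mean(np.abs(deg_U) ** 2))
```

    A histogram of `deg_U` in the complex plane shows the complex Gaussian cloud, while `deg_P` is the constant vector of ones.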

About Random Matrix Theory

In probability theory and mathematical physics, a random matrix is a matrix-valued random variable.
