Alan Edelman

Massachusetts Institute of Technology, Cambridge, Massachusetts, United States

Publications (90)

  • ABSTRACT: Arrays are such a rich and fundamental data type that they tend to be built into a language, either in the compiler or in a large low-level library. Defining this functionality at the user level instead provides greater flexibility for application domains not envisioned by the language designer. Only a few languages, such as C++ and Haskell, provide the necessary power to define $n$-dimensional arrays, but these systems rely on compile-time abstraction, sacrificing some flexibility. In contrast, dynamic languages make it straightforward for the user to define any behavior they might want, but at the possible expense of performance. As part of the Julia language project, we have developed an approach that yields a novel trade-off between flexibility and compile-time analysis. The core abstraction we use is multiple dispatch. We have come to believe that while multiple dispatch has not been especially popular in most kinds of programming, technical computing is its killer application. By expressing key functions such as array indexing using multi-method signatures, a surprising range of behaviors can be obtained, in a way that is both relatively easy to write and amenable to compiler analysis. The compact factoring of concerns provided by these methods makes it easier for user-defined types to behave consistently with types in the standard library.
    07/2014;
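The dispatch-based approach the abstract describes can be sketched in a few lines of Julia: a user-defined array type gets each indexing behavior from its own method signature. The `ClampedVector` type and its clamping semantics below are our own illustration, not an example from the paper.

```julia
# A minimal sketch (ours, not the paper's) of user-level array semantics
# driven by multiple dispatch: each indexing behavior is one method.
struct ClampedVector{T} <: AbstractVector{T}
    data::Vector{T}
end

Base.size(v::ClampedVector) = size(v.data)

# Scalar and range indexing as separate multi-method signatures; the
# compiler specializes each one independently.
Base.getindex(v::ClampedVector, i::Int) =
    v.data[clamp(i, 1, length(v.data))]
Base.getindex(v::ClampedVector, r::AbstractRange) = [v[i] for i in r]

v = ClampedVector([10, 20, 30])
@show v[0]      # out-of-range scalar index clamps to v[1] == 10
@show v[2:5]    # range indices clamp elementwise
```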
  • Alan Edelman, Plamen Koev
    ABSTRACT: We derive explicit expressions for the distributions of the extreme eigenvalues of the beta-Wishart random matrices in terms of the hypergeometric function of a matrix argument. These results generalize the classical results for the real (β = 1), complex (β = 2), and quaternion (β = 4) Wishart matrices to any β > 0.
    Random Matrices: Theory and Applications. 05/2014; 03(02).
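As a numerical companion to the identity-covariance case mentioned in the abstract, here is a hedged Monte Carlo sketch of the extreme eigenvalues using the Dumitriu-Edelman bidiagonal β-Laguerre model; the degree-of-freedom conventions and the parameter values are assumptions of this sketch, not taken from the paper.

```julia
using LinearAlgebra, Statistics, Distributions

# Monte Carlo sketch of the identity-covariance beta-Wishart (beta-Laguerre)
# ensemble via the Dumitriu-Edelman bidiagonal model. The degree-of-freedom
# conventions and parameter values here are assumptions of this sketch.
chi(k) = sqrt(rand(Chisq(k)))

function beta_laguerre_eigs(n, beta, a)           # requires 2a > beta*(n-1)
    dv = [chi(2a - beta * (i - 1)) for i in 1:n]  # diagonal chi entries
    ev = [chi(beta * (n - i)) for i in 1:n-1]     # subdiagonal chi entries
    B  = Matrix(Bidiagonal(dv, ev, :L))
    eigvals(Symmetric(B * B'))                    # ascending order
end

samples = [beta_laguerre_eigs(8, 2.5, 12.0) for _ in 1:2000]
@show mean(last.(samples))    # empirical mean of the largest eigenvalue
@show mean(first.(samples))   # empirical mean of the smallest eigenvalue
```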
  • ABSTRACT: “Low temperature” random matrix theory is the study of random eigenvalues as energy is removed. In standard notation, β is identified with inverse temperature, and low temperatures are achieved through the limit β→∞. In this paper, we derive statistics for low-temperature random matrices at the “soft edge,” which describes the extreme eigenvalues for many random matrix distributions. Specifically, new asymptotics are found for the expected value and standard deviation of the general-β Tracy-Widom distribution. The new techniques utilize beta ensembles, stochastic differential operators, and Riccati diffusions. The asymptotics fit known high-temperature statistics curiously well and contribute to the larger program of general-β random matrix theory.
    Journal of Mathematical Physics. 01/2014; 55(6).
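The β→∞ limit can be probed numerically with the tridiagonal β-Hermite model. The sketch below samples the largest eigenvalue at large β; the soft-edge centering sqrt(2n) and scaling sqrt(2)·n^(1/6) are the standard constants for this normalization and should be treated as assumptions of the sketch rather than statements from the paper.

```julia
using LinearAlgebra, Statistics, Distributions

# Sampling the soft edge at low temperature (large beta) with the
# tridiagonal beta-Hermite model of Dumitriu and Edelman. The centering
# sqrt(2n) and scaling sqrt(2)*n^(1/6) below are the standard soft-edge
# constants for this normalization; treat them as assumptions.
function hermite_lmax(n, beta)
    dv = sqrt(2) .* randn(n)                             # diagonal ~ N(0, 2)
    ev = [sqrt(rand(Chisq(beta * k))) for k in n-1:-1:1] # chi off-diagonals
    eigmax(SymTridiagonal(dv, ev)) / sqrt(2)
end

n, beta = 200, 50.0
tw = [sqrt(2) * n^(1/6) * (hermite_lmax(n, beta) - sqrt(2n)) for _ in 1:500]
@show mean(tw) std(tw)   # compare with the general-beta Tracy-Widom asymptotics
```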
  • Alexander Dubbs, Alan Edelman
    ABSTRACT: We find the joint generalized singular value distribution and the largest generalized singular value distribution of the $\beta$-MANOVA ensemble with positive diagonal covariance, the fully general case. This had previously been done for continuous $\beta > 0$ only with identity covariance (in eigenvalue form), which our model recovers by setting the covariance to $I$; for diagonal covariance it had been done only for the $\beta = 1, 2, 4$ cases (real, complex, and quaternion matrix entries). This is in a sense the first second-order $\beta$-ensemble, since the sampler for the generalized singular values of the $\beta$-MANOVA with diagonal covariance calls the sampler for the eigenvalues of the $\beta$-Wishart with diagonal covariance of Forrester and of Dubbs-Edelman-Koev-Venkataramana. We use a conjecture of Macdonald proven by Baker and Forrester concerning an integral of a hypergeometric function, and a theorem of Kaneko concerning an integral of Jack polynomials, to derive our generalized singular value distributions. In addition we use many identities from Forrester's Log-Gases and Random Matrices. We supply numerical evidence that our theorems are correct.
    Random Matrices: Theory and Applications. 09/2013; 03(01).
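For the classical real case (β = 1), quantities like the largest generalized singular value can be checked empirically with LAPACK's generalized SVD, which Julia exposes as svdvals(A, B). The sizes, the diagonal covariance, and which factor carries the covariance are all hypothetical choices of this sketch.

```julia
using LinearAlgebra, Statistics

# Empirical check for the classical real case (beta = 1) using LAPACK's
# generalized SVD via svdvals(A, B). The sizes, the diagonal covariance,
# and which factor carries it are hypothetical choices of this sketch.
m, p, n = 6, 7, 4
omega   = Diagonal([1.0, 1.5, 2.0, 3.0])   # positive diagonal covariance

function largest_gsv()
    A = randn(m, n) * sqrt(omega)          # rows of A get covariance omega
    B = randn(p, n)
    maximum(svdvals(A, B))                 # generalized singular values of (A, B)
end

@show mean([largest_gsv() for _ in 1:5000])
```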
  • ABSTRACT: This paper proves a matrix model for the Wishart Ensemble with general covariance and general dimension parameter beta. In so doing, we introduce a new and elegant definition of Jack polynomials.
    Journal of Mathematical Physics 05/2013; 54(8).
  • ABSTRACT: A linear algebraic approach to graph algorithms that exploits the sparse adjacency matrix representation of graphs can provide a variety of benefits. These benefits include syntactic simplicity, easier implementation, and higher performance. One way to employ linear algebra techniques for graph algorithms is to use a broader definition of matrix and vector multiplication. We demonstrate through the use of the Julia language system how easy it is to explore semirings using linear algebraic methodologies.
    High Performance Extreme Computing Conference (HPEC), 2013 IEEE; 01/2013
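The semiring idea is easy to make concrete: replace (+, ×) in matrix multiplication with (min, +), and powers of the weighted adjacency matrix become shortest-path lengths. The sketch below is our own minimal illustration, not the paper's code.

```julia
# Semiring sketch: matrix "multiplication" over (min, +) instead of (+, *),
# so repeated squaring of a weighted adjacency matrix yields all-pairs
# shortest path lengths. Names here are ours, not the paper's code.
function minplus(A::Matrix{Float64}, B::Matrix{Float64})
    n = size(A, 1)
    C = fill(Inf, n, n)
    for i in 1:n, j in 1:n
        C[i, j] = minimum(A[i, k] + B[k, j] for k in 1:n)
    end
    return C
end

W = [0.0  3.0  Inf;     # Inf encodes "no edge"; 0 on the diagonal
     Inf  0.0  4.0;
     1.0  Inf  0.0]

D = W
for _ in 1:2            # ceil(log2(n-1)) squarings suffice
    global D = minplus(D, D)
end
@show D                 # D[i, j] is the shortest-path length from i to j
```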
  • ABSTRACT: Dynamic languages have become popular for scientific computing. They are generally considered highly productive, but lacking in performance. This paper presents Julia, a new dynamic language for technical computing, designed for performance from the beginning by adapting and extending modern programming language techniques. A design based on generic functions and a rich type system simultaneously enables an expressive programming model and successful type inference, leading to good performance for a wide range of programs. This makes it possible for much of the Julia library to be written in Julia itself, while also incorporating best-of-breed C and Fortran libraries.
    09/2012;
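A minimal illustration of the generic-function-plus-inference design point: one definition, specialized per concrete argument type. Nothing here is specific to the paper beyond standard Julia.

```julia
using InteractiveUtils   # for @code_typed

# One generic definition; Julia's dataflow type inference specializes it
# for each concrete argument type, the design point described above.
sumsq(v) = sum(x -> x * x, v)

@show sumsq([1, 2, 3])          # compiled specialization for Vector{Int}
@show sumsq([1.0, 2.0, 3.0])    # compiled specialization for Vector{Float64}

# Inspect the inferred, specialized code for one signature:
@code_typed sumsq([1.0, 2.0, 3.0])
```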
  • ABSTRACT: Theoretical studies of localization, anomalous diffusion and ergodicity breaking require solving the electronic structure of disordered systems. We use free probability to approximate the ensemble-averaged density of states without exact diagonalization. We present an error analysis that quantifies the accuracy using a generalized moment expansion, allowing us to distinguish between different approximations. We identify an approximation that is accurate to the eighth moment across all noise strengths, and contrast this with perturbation theory and isotropic entanglement theory.
    Physical Review Letters 07/2012; 109(3):036403.
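The mechanism behind the free-probability approximation can be illustrated directly: the spectra of two matrices combine approximately by free convolution once their eigenbases are put in general position by a Haar-random rotation. The sketch below shows only that mechanism, not the paper's moment-expansion error analysis.

```julia
using LinearAlgebra, Statistics

# Mechanism sketch: if Q is Haar-random orthogonal, the spectrum of
# A + Q*B*Q' approximates the free convolution of the spectra of A and B.
# This illustrates the approximation itself, not the paper's error analysis.
function haar_orthogonal(n)
    F = qr(randn(n, n))
    F.Q * Diagonal(sign.(diag(F.R)))   # correct the QR sign ambiguity
end

n = 300
A = Diagonal(randn(n))                         # diagonal "disorder" part
G = randn(n, n)
B = Symmetric((G + G') / sqrt(2n))             # Wigner part
Q = haar_orthogonal(n)
dos_sample = eigvals(Symmetric(Matrix(A) + Q * Matrix(B) * Q'))
@show mean(dos_sample) mean(dos_sample .^ 2)   # low moments of the approximate DOS
```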
  • Ramis Movassagh, Alan Edelman
    ABSTRACT: We define an indefinite Wishart matrix as a matrix of the form $A = W^T W \Sigma$, where $\Sigma$ is an indefinite diagonal matrix and $W$ is a matrix of independent standard normals. We focus on the case where $W$ is $L \times 2$, which has engineering applications. We obtain the distribution of the ratio of the eigenvalues of $A$. This distribution can be "folded" to give the distribution of the condition number. We calculate formulas for $W$ real ($\beta = 1$), complex ($\beta = 2$), quaternionic ($\beta = 4$), or any ghost $0 < \beta < \infty$. We then corroborate our work by comparing them against numerical experiments.
    07/2012;
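Because the ensemble is defined constructively, the real case can be simulated straight from the definition in the abstract; the particular $\Sigma$ below is a hypothetical choice.

```julia
using LinearAlgebra, Statistics

# Monte Carlo straight from the definition in the abstract, real case
# (beta = 1): A = (W'W) * Sigma with W an L-by-2 standard normal matrix
# and Sigma indefinite diagonal. Sigma below is a hypothetical choice.
L     = 5
Sigma = Diagonal([1.0, -1.0])

function eig_ratio()
    W   = randn(L, 2)
    lam = sort(eigvals(W' * W * Sigma))   # real: similar to a symmetric matrix
    lam[1] / lam[2]                       # one negative, one positive eigenvalue
end

ratios = [eig_ratio() for _ in 1:10_000]
@show mean(abs.(ratios))
```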
  • Jiahao Chen, Troy Van Voorhis, Alan Edelman
    ABSTRACT: We investigate the implications of free probability for random matrices. From rules for calculating all possible joint moments of two free random matrices, we develop a notion of partial freeness which is quantified by the breakdown of these rules. We provide a combinatorial interpretation for partial freeness as the presence of closed paths in Hilbert space defined by particular joint moments. We also discuss how asymptotic moment expansions provide an error term on the density of states. We present MATLAB code for the calculation of moments and free cumulants of arbitrary random matrices.
    04/2012;
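The paper presents MATLAB code; the following is a hedged Julia analogue of the moment and free-cumulant computation, using the standard free moment-cumulant relations for the first four cumulants. For a Wigner matrix the fourth free cumulant vanishes asymptotically, which gives a quick sanity check.

```julia
using LinearAlgebra

# Julia analogue of the moment computation (the paper's code is MATLAB):
# empirical spectral moments m_k = tr(A^k)/n and the first four free
# cumulants via the free moment-cumulant relations.
moment(A, k) = tr(A^k) / size(A, 1)

function free_cumulants4(A)
    m1, m2, m3, m4 = (moment(A, k) for k in 1:4)
    k1 = m1
    k2 = m2 - m1^2
    k3 = m3 - 3m1 * m2 + 2m1^3
    k4 = m4 - 4m1 * m3 - 2m2^2 + 10m1^2 * m2 - 5m1^4
    (k1, k2, k3, k4)
end

n = 1000
G = randn(n, n)
A = Symmetric((G + G') / sqrt(2n))   # Wigner matrix, semicircle in the limit
@show free_cumulants4(A)             # expect approximately (0, 1, 0, 0)
```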
  • ABSTRACT: We approximate the density of states in disordered systems by decomposing the Hamiltonian into two random matrices and constructing their free convolution. The error in this approximation is determined using asymptotic moment expansions. Each moment can be decomposed into contributions from specific joint moments of the random matrices, each of which has a combinatorial interpretation as the weighted sum of returning trajectories. We show how the error, like the free convolution itself, can be calculated without explicit diagonalization of the Hamiltonian. We apply our theory to Hamiltonians for one-dimensional tight binding models with Gaussian and semicircular site disorder. We find that the particular choice of decomposition crucially determines the accuracy of the resultant density of states. From a partitioning of the Hamiltonian into diagonal and off-diagonal components, free convolution produces an approximate density of states which is correct to the eighth moment. This allows us to explain the accuracy of mean field theories such as the coherent potential approximation, as well as the results of isotropic entanglement theory.
    02/2012;
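The decomposition described above is easy to reproduce in miniature: split a 1D tight-binding Hamiltonian into diagonal disorder D and hopping T, then model their free convolution by conjugating one part with a Haar-random orthogonal matrix. The sizes and the moment comparison below are our own choices.

```julia
using LinearAlgebra, Statistics

# The decomposition from the abstract in miniature: H = D + T with diagonal
# Gaussian disorder D and nearest-neighbor hopping T. The free approximation
# randomizes the relative eigenbasis with a Haar-random orthogonal Q.
n = 400
D = Diagonal(randn(n))                      # Gaussian site disorder
T = SymTridiagonal(zeros(n), ones(n - 1))   # hopping (off-diagonal part)

F = qr(randn(n, n))
Q = F.Q * Diagonal(sign.(diag(F.R)))        # Haar orthogonal

exact = eigvals(Symmetric(Matrix(D) + Matrix(T)))
free  = eigvals(Symmetric(Matrix(D) + Q * Matrix(T) * Q'))
@show mean(exact .^ 4) mean(free .^ 4)      # low moments agree; per the paper,
                                            # this split is correct to the 8th moment
```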
  • Ramis Movassagh, Alan Edelman
    ABSTRACT: We propose a method that we call isotropic entanglement (IE), which predicts the eigenvalue distribution of quantum many-body (spin) systems with generic interactions. We interpolate between two known approximations by matching fourth moments. Though such problems can be QMA-complete, our examples show that isotropic entanglement provides an accurate picture of the spectra well beyond what one expects from the first four moments alone. We further show that the interpolation is universal, i.e., independent of the choice of local terms.
    Physical Review Letters 08/2011; 107(9):097205. · 7.73 Impact Factor
  • ABSTRACT: Partitioning oracles were introduced by Hassidim et al. (FOCS 2009) as a generic tool for constant-time algorithms. For any epsilon > 0, a partitioning oracle provides query access to a fixed partition of the input bounded-degree minor-free graph, in which every component has size poly(1/epsilon), and the number of edges removed is at most epsilon*n, where n is the number of vertices in the graph. However, the oracle of Hassidim et al. makes an exponential number of queries to the input graph to answer every query about the partition. In this paper, we construct an efficient partitioning oracle for graphs with constant treewidth. The oracle makes only O(poly(1/epsilon)) queries to the input graph to answer each query about the partition. Examples of bounded-treewidth graph classes include k-outerplanar graphs for fixed k, series-parallel graphs, cactus graphs, and pseudoforests. Our oracle yields poly(1/epsilon)-time property testing algorithms for membership in these classes of graphs. Another application of the oracle is a poly(1/epsilon)-time algorithm that approximates the maximum matching size, the minimum vertex cover size, and the minimum dominating set size up to an additive epsilon*n in graphs with bounded treewidth. Finally, the oracle can be used to test in poly(1/epsilon) time whether the input bounded-treewidth graph is k-colorable or perfect.
    06/2011;
  • ABSTRACT: Bearing estimates input to a tracking algorithm require a concomitant measurement error to convey confidence. When Capon-algorithm-based bearing estimates are derived from low signal-to-noise ratio (SNR) data, the method of interval errors (MIE) provides a representation of measurement error improved over high-SNR metrics like the Cramér-Rao bound or Taylor series, and a corresponding improvement in overall tracker performance follows. These results have been demonstrated [4] assuming MIE has perfect knowledge of the true data covariance. Herein this assumption is weakened to explore the potential performance of a practical implementation that must address the challenges of non-stationarity and finite sample effects. Comparisons with known non-linear smoothing techniques designed to reject outlier measurements are also explored.
    Signals, Systems and Computers (ASILOMAR), 2011 Conference Record of the Forty Fifth Asilomar Conference on; 01/2011
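For context, the Capon (MVDR) bearing estimator that these error models wrap is compact enough to sketch; the MIE machinery itself is not implemented here, and the array geometry, diagonal loading, and SNR below are assumptions.

```julia
using LinearAlgebra

# Sketch of the Capon (MVDR) bearing estimator discussed above; MIE itself
# is not implemented. Uniform linear array with half-wavelength spacing;
# sizes, loading, and SNR are assumptions of this sketch.
steering(theta, n) = [exp(im * pi * k * sin(theta)) for k in 0:n-1]

function capon_bearing(X; grid = range(-pi/2, pi/2, length = 721))
    n, m = size(X)
    R = X * X' / m + 1e-6I            # sample covariance, lightly loaded
    p = [real(1 / (steering(t, n)' * (R \ steering(t, n)))) for t in grid]
    grid[argmax(p)]                   # peak of the Capon spatial spectrum
end

# Low-SNR simulation: one source at 0.3 rad in white noise.
n, m = 8, 100
s = sqrt(0.5) .* randn(ComplexF64, 1, m)          # source amplitudes
X = steering(0.3, n) * s + randn(ComplexF64, n, m)
@show capon_bearing(X)                            # should be near 0.3
```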
  • ABSTRACT: Approximating ideal program outputs is a common technique for solving computationally difficult problems, for adhering to processing or timing constraints, and for performance optimization in situations where perfect precision is not necessary. To this end, programmers often use approximation algorithms, iterative methods, data resampling, and other heuristics. However, programming such variable accuracy algorithms presents difficult challenges since the optimal algorithms and parameters may change with different accuracy requirements and usage environments. This problem is further compounded when multiple variable accuracy algorithms are nested together due to the complex way that accuracy requirements can propagate across algorithms and because of the size of the set of allowable compositions. As a result, programmers often deal with this issue in an ad-hoc manner that can sometimes violate sound programming practices such as maintaining library abstractions. In this paper, we propose language extensions that expose trade-offs between time and accuracy to the compiler. The compiler performs fully automatic compile-time and install-time autotuning and analyses in order to construct optimized algorithms to achieve any given target accuracy. We present novel compiler techniques and a structured genetic tuning algorithm to search the space of candidate algorithms and accuracies in the presence of recursion and sub-calls to other variable accuracy code. These techniques benefit both the library writer, by providing an easy way to describe and search the parameter and algorithmic choice space, and the library user, by allowing high level specification of accuracy requirements which are then met automatically without the need for the user to understand any algorithm-specific parameters. Additionally, we present a new suite of benchmarks, written in our language, to examine the efficacy of our techniques. Our experimental results show that by relaxing accuracy requirements, we can easily obtain performance improvements ranging from 1.1× to orders of magnitude of speedup.
    Proceedings of the CGO 2011, The 9th International Symposium on Code Generation and Optimization, Chamonix, France, April 2-6, 2011; 01/2011
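Stripped to its core, the compiler's job includes a selection problem: among candidate algorithms with measured cost and accuracy, pick the cheapest one meeting a target. The toy sketch below (plain Julia, not PetaBricks syntax, with made-up numbers) shows only that selection step; the real system also searches parameters and compositions.

```julia
# Toy stand-in (not PetaBricks syntax) for the core selection problem:
# given measured (time, error) per candidate algorithm, choose the cheapest
# candidate that meets a target accuracy. The numbers are invented.
candidates = [
    (name = "1 Jacobi sweep",   time = 1.0,  error = 1e-1),
    (name = "10 Jacobi sweeps", time = 9.0,  error = 1e-3),
    (name = "direct solve",     time = 40.0, error = 1e-12),
]

function choose(target_error)
    feasible = filter(c -> c.error <= target_error, candidates)
    isempty(feasible) ? nothing : argmin(c -> c.time, feasible)
end

for tol in (1e-1, 1e-4, 1e-10)
    @show tol choose(tol).name
end
```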
  • ABSTRACT: The method of interval estimation (MIE) provides a strategy for mean squared error (MSE) prediction of algorithm performance at low signal-to-noise ratios (SNR) below estimation threshold where asymptotic predictions fail. MIE interval error probabilities for the Capon algorithm are known and depend on the true data covariance and assumed signal array response. Herein estimation of these error probabilities is considered to improve representative measurement errors for parameter estimates obtained in low SNR scenarios, as this may improve overall target tracking performance. A statistical analysis of Capon error probability estimation based on the data sample covariance matrix is explored herein.
    Signals, Systems and Computers (ASILOMAR), 2010 Conference Record of the Forty Fourth Asilomar Conference on; 01/2010
  • ABSTRACT: We present two new algorithms for computing all Schur functions $s_\kappa(x_1, \ldots, x_n)$ for partitions $\kappa$ such that $|\kappa| \le N$. Both algorithms have the property that for nonnegative arguments $x_1, \ldots, x_n$ the output is computed to high relative accuracy and the cost per Schur function is $O(n^2)$.
    01/2010;
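For small cases, Schur functions can be checked against the classical bialternant formula $s_\kappa(x) = \det(x_i^{\kappa_j + n - j}) / \det(x_i^{n - j})$. The reference implementation below is deliberately naive: the two determinants nearly cancel when the $x_i$ are close, which is precisely the loss of relative accuracy the paper's algorithms are designed to avoid.

```julia
using LinearAlgebra

# Reference evaluation via the classical bialternant formula. For checking
# small cases only: it is cancellation-prone for close arguments, exactly
# the numerical problem the paper's accurate algorithms avoid.
function schur_bialternant(kappa::Vector{Int}, x::Vector{Float64})
    n = length(x)
    k = vcat(kappa, zeros(Int, n - length(kappa)))   # pad partition to length n
    num = [x[i]^(k[j] + n - j) for i in 1:n, j in 1:n]
    den = [x[i]^(n - j)        for i in 1:n, j in 1:n]
    det(num) / det(den)
end

# s_(2,1)(x, y) = x^2*y + x*y^2, a hand-checkable case:
@show schur_bialternant([2, 1], [2.0, 3.0])   # 2^2*3 + 2*3^2 = 30
```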
  • ABSTRACT: Algorithmic choice is essential in any problem domain to realizing optimal computational performance. Multigrid is a prime example: not only is it possible to make choices at the highest grid resolution, but a program can switch techniques as the problem is recursively attacked on coarser grid levels to take advantage of algorithms with different scaling behaviors. Additionally, users with different convergence criteria must experiment with parameters to yield a tuned algorithm that meets their accuracy requirements. Even after a tuned algorithm has been found, users often have to start all over when migrating from one machine to another. We present an algorithm and autotuning methodology that address these issues in a near-optimal and efficient manner. The freedom of independently tuning both the algorithm and the number of iterations at each recursion level results in an exponential search space of tuned algorithms that have different accuracies and performances. To search this ...
    Proceedings of the ACM/IEEE Conference on High Performance Computing, SC 2009, November 14-20, 2009, Portland, Oregon, USA; 01/2009
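The tuning degrees of freedom the abstract describes, a per-level choice of technique and iteration count, can be pictured with a toy recursion driven by a tuned plan. The plan contents below are invented for illustration; the paper's system derives such plans by autotuning.

```julia
# Toy illustration (not the paper's system): each recursion level
# independently chooses a technique and an iteration count, the degrees of
# freedom the abstract describes. This plan is a made-up example of what an
# autotuner might emit.
plan = [
    (level = 1, technique = :vcycle, iters = 2),
    (level = 2, technique = :vcycle, iters = 1),
    (level = 3, technique = :direct, iters = 1),   # coarsest: direct solve
]

function solve(level)
    step = plan[level]
    for _ in 1:step.iters
        println("level $level: running $(step.technique)")
        step.technique == :vcycle && level < length(plan) && solve(level + 1)
    end
end

solve(1)
```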
  • ABSTRACT: It is often impossible to obtain a one-size-fits-all solution for high performance algorithms when considering different choices for data distributions, parallelism, transformations, and blocking. The best solution to these choices is often tightly coupled to different architectures, problem sizes, data, and available system resources. In some cases, completely different algorithms may provide the best performance. Current compiler and programming language techniques are able to change some of these parameters, but today there is no simple way for the programmer to express or the compiler to choose different algorithms to handle different parts of the data. Existing solutions normally can handle only coarse-grained, library level selections or hand coded cutoffs between base cases and recursive cases. We present PetaBricks, a new implicitly parallel language and compiler where having multiple implementations of multiple algorithms to solve a problem is the natural way of programming. We make algorithmic choice a first class construct of the language. Choices are provided in a way that also allows our compiler to tune at a finer granularity. The PetaBricks compiler autotunes programs by making both fine-grained as well as algorithmic choices. Choices also include different automatic parallelization techniques, data distributions, algorithmic parameters, transformations, and blocking. Additionally, we introduce novel techniques to autotune algorithms for different convergence criteria. When choosing between various direct and iterative methods, the PetaBricks compiler is able to tune a program in such a way that delivers near-optimal efficiency for any desired level of accuracy. The compiler has the flexibility of utilizing different convergence criteria for the various components within a single algorithm, providing the user with accuracy choice alongside algorithmic choice.
    ACM SIGPLAN Notices 01/2009; 44(6):38-49.
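The canonical example of first-class algorithmic choice is a sort that picks between algorithms at a tuned cutoff. Below is that example transliterated into plain Julia (PetaBricks has its own language syntax, which this is not); the cutoff constant stands in for a value the autotuner would choose.

```julia
# The flavor of first-class algorithmic choice, in Julia rather than
# PetaBricks syntax: a tuned cutoff decides between two algorithms at
# every recursion level.
const CUTOFF = 32   # stand-in for a value the autotuner would choose

function tuned_sort!(v::Vector{Int})
    if length(v) <= CUTOFF
        sort!(v, alg = InsertionSort)            # choice 1: small inputs
    else
        mid = length(v) ÷ 2
        left, right = v[1:mid], v[mid+1:end]
        tuned_sort!(left); tuned_sort!(right)    # choice 2: recursive merge sort
        merge_into!(v, left, right)
    end
    return v
end

function merge_into!(v, a, b)
    i = j = 1
    for k in eachindex(v)
        if j > length(b) || (i <= length(a) && a[i] <= b[j])
            v[k] = a[i]; i += 1
        else
            v[k] = b[j]; j += 1
        end
    end
    return v
end

v = rand(1:1000, 100)
@show tuned_sort!(copy(v)) == sort(v)   # true
```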

Publication Stats

3k Citations
74.13 Total Impact Points

Institutions

  • 1994–2014
    • Massachusetts Institute of Technology
      • Department of Chemistry
      • Department of Mathematics
      • Laboratory for Computer Science
      Cambridge, Massachusetts, United States
  • 2005
    • University of California, Berkeley
      Berkeley, California, United States