Article

Automatic Blocking of QR and LU Factorizations for Locality

07/2004; DOI: 10.1145/1065895.1065898
Source: CiteSeer

ABSTRACT: QR and LU factorizations for dense matrices are important linear algebra computations that are widely used in scientific applications. To run efficiently on modern machines, these factorization algorithms need to be blocked to exploit the deep cache hierarchy prevalent in today's computer memory systems. Because both QR (based on Householder transformations) and LU factorization algorithms have complex loop structures, few compilers can fully automate their blocking. Though linear algebra libraries such as LAPACK provide manually blocked implementations of these algorithms, more benefit can be gained by automatically generating blocked versions of the computations, such as automatic adaptation of different blocking strategies. This paper demonstrates how to apply an aggressive loop transformation technique, dependence hoisting, to produce efficient blockings for both QR and LU with partial pivoting. We present different blocking strategies that can be generated by our optimizer and compare the performance of auto-blocked …
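As a concrete illustration of the blocking the abstract describes, the sketch below gives a minimal right-looking blocked LU in C. It is a sketch under simplifying assumptions, not the paper's generated code: pivoting is omitted for brevity (blocking LU with partial pivoting is the hard case the paper targets), and the function name and block size NB are illustrative choices of this summary.

#include <stddef.h>

#define NB 64  /* block size; in practice tuned to the cache hierarchy */

/* Right-looking blocked LU without pivoting. A is n x n, column-major,
 * leading dimension n. On return, the strictly lower part of A holds L
 * (unit diagonal implied) and the upper part holds U. The divides assume
 * no zero pivots arise -- partial pivoting is omitted in this sketch. */
static void lu_blocked(double *A, size_t n)
{
    for (size_t k = 0; k < n; k += NB) {
        size_t kb = (k + NB < n) ? NB : n - k;  /* ragged last block */

        /* 1. Unblocked LU of the panel A[k:n, k:k+kb]. */
        for (size_t j = k; j < k + kb; j++) {
            for (size_t i = j + 1; i < n; i++)
                A[i + j * n] /= A[j + j * n];
            for (size_t jj = j + 1; jj < k + kb; jj++)
                for (size_t i = j + 1; i < n; i++)
                    A[i + jj * n] -= A[i + j * n] * A[j + jj * n];
        }

        /* 2. Solve L11 * U12 = A12: update the block row A[k:k+kb, k+kb:n]
         *    by forward substitution with the unit-lower panel triangle. */
        for (size_t j = k + kb; j < n; j++)
            for (size_t jj = k; jj < k + kb; jj++)
                for (size_t i = jj + 1; i < k + kb; i++)
                    A[i + j * n] -= A[i + jj * n] * A[jj + j * n];

        /* 3. Rank-kb update of the trailing matrix: A22 -= L21 * U12. */
        for (size_t j = k + kb; j < n; j++)
            for (size_t jj = k; jj < k + kb; jj++)
                for (size_t i = k + kb; i < n; i++)
                    A[i + j * n] -= A[i + jj * n] * A[jj + j * n];
    }
}

Step 3 is where the locality win comes from: most of the floating-point work becomes a rank-kb update of the trailing matrix, which reuses each loaded block many times (in LAPACK this step is a call to dgemm).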

  • ABSTRACT: This article presents a new high-performance bidiagonal reduction (BRD) for homogeneous multicore architectures, extending the high-performance tridiagonal reduction implemented by the same authors [Luszczek et al., IPDPS 2011] to the BRD case. The BRD is the first step toward computing the singular value decomposition of a matrix, one of the most important algorithms in numerical linear algebra due to its broad impact in computational science. The high performance of the BRD described in this article comes from the combination of four important features: (1) tile algorithms with tile data layout, which provide an efficient data representation in main memory (a tile-layout sketch follows this list); (2) a two-stage reduction approach that allows most of the computation in the first stage (reduction to band form) to be cast as calls to Level 3 BLAS and reduces the memory traffic during the second stage (reduction from band to bidiagonal form) by using high-performance kernels optimized for cache reuse; (3) a data-dependence translation layer that maps the general algorithm with column-major data layout onto the tile data layout; and (4) a dynamic runtime system that efficiently schedules the newly implemented kernels across the processing units and ensures that the data dependencies are not violated. A detailed analysis is provided to understand the critical impact of the tile size on the total execution time; the tile size also corresponds to the matrix bandwidth after the first-stage reduction. The performance results show a significant improvement over currently established alternatives: the new high-performance BRD achieves up to a 30-fold speedup on a 16-core Intel Xeon machine with a 12,000 × 12,000 matrix against state-of-the-art open-source and commercial numerical software packages, namely LAPACK compiled with optimized, multithreaded BLAS from MKL, as well as Intel MKL version 10.2.
    ACM Transactions on Mathematical Software (TOMS). 04/2013; 39(3).
  • ABSTRACT: While successful implementations have already been written for one-sided transformations (e.g., QR, LU, and Cholesky factorizations) on multicore architectures, achieving high performance for two-sided reductions (e.g., Hessenberg, tridiagonal, and bidiagonal reductions) remains an open and difficult research problem because of the expensive memory-bound operations that occur during the panel factorization. The processor-memory speed gap continues to widen, which has further exacerbated the problem. This paper focuses on an efficient implementation of the tridiagonal reduction, the first algorithmic step toward computing the spectral decomposition of a dense symmetric matrix. The original matrix is translated into a tile layout, i.e., a high-performance data representation that substantially enhances data locality (a tile-layout sketch follows this list). Following a two-stage approach, the tile matrix is then transformed into band tridiagonal form using compute-intensive kernels. The band form is further reduced to the required tridiagonal form using a left-looking bulge-chasing technique that reduces memory traffic and memory contention. A dependence translation layer associated with a dynamic runtime system allows tasks generated from both stages to be scheduled and overlapped. The resulting tile tridiagonal reduction significantly outperforms state-of-the-art numerical libraries (10× against multithreaded LAPACK with optimized MKL BLAS and 2.5× against the commercial numerical software Intel MKL) from medium to large matrix sizes.
    Proceedings of the 25th IEEE International Symposium on Parallel and Distributed Processing (IPDPS 2011), Anchorage, Alaska, USA, May 16-20, 2011; 01/2011
  • ABSTRACT: LU decomposition for dense matrices is an important linear algebra kernel that is widely used in both scientific and engineering applications. To efficiently perform large-matrix LU decomposition on FPGAs with limited local memory, a block LU decomposition algorithm applicable to arbitrary matrix sizes is proposed. Our algorithm applies a series of transformations, including loop blocking and space-time mapping, to a sequential unblocked LU decomposition. We also introduce a high-performance, memory-efficient hardware architecture, consisting mainly of a linear array of processing elements (PEs), to implement our block LU decomposition algorithm. Our design can achieve optimal performance under various hardware resource constraints. Furthermore, our algorithm and design can be easily extended to multi-FPGA platforms by using a block-cyclic data distribution and an inter-FPGA communication scheme (a block-cyclic mapping sketch follows this list). A total of 36 PEs can be integrated into a Xilinx Virtex-5 XC5VLX330 FPGA on our self-designed PCI-Express card, reaching a sustained performance of 8.50 GFLOPS at 133 MHz for a matrix size of 16,384, which outperforms several general-purpose processors. For a Xilinx Virtex-6 XC6VLX760, a newer FPGA, we predict that a total of 180 PEs can be integrated, reaching 70.66 GFLOPS at 200 MHz. Compared to previous work, our design integrates twice the number of PEs into the same FPGA and delivers significantly higher performance.
    IEEE Transactions on Computers. 04/2012.
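The first two abstracts above rely on the same first step: copying a column-major matrix into contiguous nb × nb tiles so that each kernel touches one cache-friendly block. Below is a minimal C sketch of that conversion, under the assumption that n is a multiple of nb; the function name is hypothetical, and real tile layouts (e.g., PLASMA-style) also handle ragged edge tiles and padding.

#include <stddef.h>
#include <string.h>  /* memcpy */

/* Copy column-major A (leading dimension lda >= n) into an array of
 * nb x nb tiles. Tile (ti, tj) is stored contiguously in column-major
 * order, so element (i, j) of that tile lives at t[i + j * nb]. */
static void colmajor_to_tiles(const double *A, size_t lda, size_t n,
                              size_t nb, double *tiles)
{
    size_t nt = n / nb;  /* tiles per dimension; assumes n % nb == 0 */
    for (size_t tj = 0; tj < nt; tj++)
        for (size_t ti = 0; ti < nt; ti++) {
            double *t = tiles + (ti + tj * nt) * nb * nb;
            for (size_t j = 0; j < nb; j++)  /* one tile column at a time */
                memcpy(t + j * nb,
                       A + ti * nb + (tj * nb + j) * lda,
                       nb * sizeof(double));
        }
}

Once the matrix is in this form, each tile is a contiguous unit of work, which is what lets the runtime systems described above schedule kernels tile by tile without strided memory access.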
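The multi-FPGA extension in the last abstract rests on a block-cyclic distribution: column block b is owned by PE b mod p, so as the factorization's trailing matrix shrinks, the remaining blocks still span all PEs and the load stays balanced. A minimal sketch of the mapping (names and sizes are illustrative, not from the paper):

#include <stdio.h>

int main(void)
{
    const int num_pes = 4;      /* linear array of processing elements */
    const int num_blocks = 10;  /* column blocks of the matrix */

    /* Block-cyclic ownership: block b is assigned to PE b mod num_pes. */
    for (int b = 0; b < num_blocks; b++)
        printf("column block %2d -> PE %d\n", b, b % num_pes);
    return 0;
}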
