Adam McLaughlin's research while affiliated with Georgia Institute of Technology and other places

Publications (6)

Article
Full-text available
Graphs that model social networks, numerical simulations, and the structure of the Internet are enormous and cannot be manually inspected. A popular metric used to analyze these networks is Betweenness Centrality (BC), which has applications in community detection, power grid contingency analysis, and the study of the human brain. However, these an...
Article
Full-text available
Graphs that model social networks, numerical simulations, and the structure of the Internet are enormous and cannot be manually inspected. A popular metric used to analyze these networks is betweenness centrality, which has applications in community detection, power grid contingency analysis, and the study of the human brain. However, these analys...
Article
Full-text available
Betweenness Centrality is a widely used graph analytic that has applications such as finding influential people in social networks, analyzing power grids, and studying protein interactions. However, its complexity makes its exact computation infeasible for large graphs of interest. Furthermore, networks tend to change over time, invalidating previo...
Conference Paper
Full-text available
Applications of high-performance graph analysis range from computational biology to network security and even transportation. These applications often consider graphs under rapid change and are moving beyond HPC platforms into energy-constrained embedded systems. This paper optimizes one successful and demanding analysis kernel, betweenness central...
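The betweenness centrality analytic that recurs across these publications is classically computed with Brandes' algorithm: one breadth-first traversal per source vertex followed by a reverse dependency-accumulation pass. The following is a minimal single-threaded Python sketch for unweighted graphs (adjacency given as a dict of neighbor lists); it illustrates the underlying algorithm only and is not the GPU implementation from the papers above.

```python
from collections import deque

def betweenness_centrality(adj):
    """Exact betweenness centrality via Brandes' algorithm (unweighted graphs)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s: record traversal order, shortest-path counts (sigma),
        # and the predecessors of each vertex on shortest paths.
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:           # w discovered for the first time
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:  # v lies on a shortest path to w
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Reverse pass: accumulate dependencies back toward the source.
        delta = {v: 0.0 for v in adj}
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

For undirected graphs each vertex pair is counted in both directions, so scores are conventionally halved; the parallel GPU approaches discussed in these papers distribute the per-source traversals and the dependency accumulation across many threads.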

Citations

... The abundance of GPUs and their potential to speed up centrality measures was already noted over 10 years ago by Sharma et al. (2011). GPUs demonstrated their potential as hardware accelerators for classic metrics such as betweenness centrality (McLaughlin and Bader, 2018) or Eigenvector centrality (Sharma et al., 2011). More recently, this potential has been confirmed in the case of relatively newer measures such as the lobby index (Xiao et al., 2020). ...
... Higher-level task parallelism can bring more parallelism for several graph primitives; preliminary work [45,50] explores this direction. Gunrock currently offers C/C++-friendly interfaces that make Gunrock callable from Python, Julia, and other high-level programming languages. The future work along this path is to identify and implement support for Gunrock as a back end to a higher-level graph analytics framework (such as TinkerPop or NetworkX), or to create our own graph query/analysis DSL on top of Gunrock. ...
... Recent research has even proposed alternative methods that fall in between these two extremes [41]. Determining whether vertices can reach one another is a fundamental graph property that has been used for memory consistency verification [31], social network analysis [12], and the LU factorization of sparse matrices [15]. ...
... This is because even computing the betweenness centrality of a single node is time-consuming, and scaling the computation is challenging. The following works address the computation of exact and approximate betweenness scores in parallel [95,109,58,158,141,112,19,173,39,186,185,183,50,184] and distributed [189,191,82,48] frameworks. ...
... The RK [251] algorithm represents the leading non-adaptive sampling algorithm for betweenness centrality approximation; KADABRA was shown to be 100× faster than RK in undirected real-world graphs, and 70× faster than RK in directed graphs [56]. McLaughlin and Bader [210] introduced a work-efficient parallel algorithm for betweenness centrality approximation, implemented for single- and multi-GPU machines. ...
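The source-sampling principle behind such approximation algorithms can be sketched as follows: run the per-source dependency accumulation for only `num_samples` randomly chosen sources and scale each contribution by `n / num_samples`. This is a generic illustration of the sampling idea, not the RK, KADABRA, or McLaughlin–Bader algorithm; the function name `sampled_bc` and its signature are invented for this sketch.

```python
import random
from collections import deque

def sampled_bc(adj, num_samples, seed=0):
    """Estimate betweenness centrality (unweighted) from a random subset of
    source vertices, scaling each dependency by n / num_samples."""
    rng = random.Random(seed)
    n = len(adj)
    bc = {v: 0.0 for v in adj}
    for s in rng.sample(list(adj), num_samples):
        # BFS from s: shortest-path counts (sigma) and predecessor lists.
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Reverse pass, scaled to compensate for visiting only a sample of sources.
        delta = {v: 0.0 for v in adj}
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1 + delta[w])
            if w != s:
                bc[w] += (n / num_samples) * delta[w]
    return bc
```

With `num_samples = n` the estimate reduces to the exact Brandes computation; smaller sample counts trade accuracy for speed, which is the trade-off the sampling-based estimators above manage with statistical error bounds.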
... Real-world graph analytics requires both scalability to large graph instances and high-performance implementations. Plenty of individual graph algorithms have been successfully accelerated using GPUs [9], [32], [33]; however, these manual implementations tend to make code reuse difficult compared to their CPU counterparts. Code reuse is tremendously important, because its absence results in substantial effort spent duplicating work that has already been completed. ...