Graph Visualization with Latent Variable Models

978-952-248-095-8 07/2010; DOI: 10.1145/1830252.1830265

ABSTRACT Laying out a large graph, that is, choosing locations for the vertices on the plane such that the drawn set of edges is understandable, is a hard problem. The goal is ill-defined, and both the optimization and evaluation criteria are usually only indirectly related to it. We suggest a new and surprisingly effective visualization principle: position nodes such that nearby nodes have similar link distributions. Since the edges of nearby nodes are then similar by definition, they become visually bundled and do not interfere with each other. To define the similarity we use latent variable models, which incorporate the user's assumptions about what is important in the graph; given the similarity, we construct the visualization with a suitable nonlinear projection method capable of maximizing the precision of the display. Finally, we show empirically that the method outperforms alternative graph visualization methods and that, at least in the special case of clustered data, it properly abstracts and visualizes the links.

TKK reports in information and computer science, ISSN 1797-5042; 20
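The principle in the abstract, placing nodes so that nearby nodes have similar link distributions, can be illustrated with a small sketch. This is not the paper's actual method (which uses latent variable models and a precision-maximizing nonlinear projection); it is a minimal stand-in that assumes Jensen-Shannon divergence as the similarity between per-node link distributions and classical MDS as the projection, on a hypothetical toy graph:

```python
import numpy as np

# Hypothetical toy graph: adjacency matrix of 6 nodes in two clusters.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Each node's "link distribution": its adjacency row normalized to sum to 1.
P = A / A.sum(axis=1, keepdims=True)

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0  # terms with a == 0 contribute nothing
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Pairwise dissimilarity between nodes, based purely on link distributions.
n = len(P)
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        D[i, j] = js_divergence(P[i], P[j])

# Classical MDS: double-center the squared dissimilarities and take the
# top two eigenvectors as 2-D layout coordinates.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)
idx = np.argsort(w)[::-1][:2]
coords = V[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```

Nodes with similar neighborhoods end up close in `coords`, so their (similar) edges overlap and bundle visually, which is the effect the abstract describes.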

  • ABSTRACT: Dimensionality reduction is one of the basic operations in the toolbox of data analysts and of designers of machine learning and pattern recognition systems. Given a large set of measured variables but few observations, an obvious idea is to reduce the degrees of freedom in the measurements by representing them with a smaller set of more "condensed" variables. Another reason for reducing the dimensionality is to reduce the computational load in further processing. A third reason is visualization.
    IEEE Signal Processing Magazine, 04/2011
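The "condensed variables" idea in that abstract is what PCA does. As a minimal sketch (PCA is one common choice, not necessarily the survey's focus), with hypothetical random data in the few-observations, many-variables setting:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 5 observations of 20 measured variables.
X = rng.normal(size=(5, 20))

# PCA via SVD of the mean-centered data: project onto the
# top-2 principal directions (the two "condensed" variables).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T  # 5 observations, now 2 variables
```

The projection keeps the directions of largest variance, so `Z` retains as much of the spread of `X` as any 2-D linear view can.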
