DXR: A Toolkit for Building Immersive Data Visualizations
Ronell Sicat, Jiabao Li, JunYoung Choi, Maxime Cordeil, Won-Ki Jeong, Benjamin Bach, and Hanspeter Pfister
Fig. 1. DXR enables rapid prototyping of immersive data visualizations: (b,c) declarative specifications concisely represent visualizations;
(a:right) DXR’s graphical user interface (GUI) within the virtual world enables quick iteration over visualization parameters such as
data sources, graphical marks, and visual encodings; (b) the GUI modifies the underlying design specifications; (c) specifications can
be fine-tuned by the designer in a text editor; (d) the designer can add 3D models as custom graphical marks to achieve (e) novel
immersive visualization designs. Example visualizations built using DXR: (f) a 3D vector field plot showing locations of photographs of
an exhibit; (g) flames representing the remaining lifetime of real-world organic materials as they decay; (h) bar charts and scatter plots
embedding sports data in a virtual basketball court; and (i) coins showing Bitcoin prices in a 3D game.
Abstract—This paper presents DXR, a toolkit for building immersive data visualizations based on the Unity development platform.
Over the past years, immersive data visualizations in augmented and virtual reality (AR, VR) have been emerging as a promising
medium for data sense-making beyond the desktop. However, creating immersive visualizations remains challenging and often requires
complex low-level programming and tedious manual encoding of data attributes to geometric and visual properties. These can hinder
the iterative idea-to-prototype process, especially for developers without experience in 3D graphics, AR, and VR programming. With
DXR, developers can efficiently specify visualization designs using a concise declarative visualization grammar inspired by Vega-Lite.
DXR further provides a GUI for easy and quick edits and previews of visualization designs in-situ, i.e., while immersed in the virtual
world. DXR also provides reusable templates and customizable graphical marks, enabling unique and engaging visualizations. We
demonstrate the flexibility of DXR through several examples spanning a wide range of applications.
Index Terms—Augmented Reality, Virtual Reality, Immersive Visualization, Immersive Analytics, Visualization Toolkit.
Immersive technologies such as augmented and virtual reality, often
called extended reality (XR), provide novel and alternative forms of rep-
resenting, interacting, and engaging with data and visualizations [45].
The range of applications that benefit from stereoscopy, augmented
reality, natural interaction, and space-filling immersive visualizations is
growing, including examples in information visualization [44,48], sci-
entific visualization [68], immersive storytelling [14,57, 60], immersive
workspaces [50], and embedded data representations [36, 51, 72].

R. Sicat and H. Pfister are with the Harvard Visual Computing Group.
J. Li is with the Harvard Graduate School of Design.
B. Bach is with the School of Informatics at Edinburgh University.
M. Cordeil is with the Immersive Analytics Lab at Monash University.
J. Choi and W.-K. Jeong are with Ulsan National Institute of Science and Technology.
Manuscript received xx xxx. 201x; accepted xx xxx. 201x. Date of Publication xx xxx. 201x; date of current version xx xxx. 201x. For information on obtaining reprints of this article, please send e-mail to:
Digital Object Identifier: xx.xxxx/TVCG.201x.xxxxxxx

Fueled by the recent increase in affordable AR and VR devices, immersive
visualizations have come into focus for many real-world applications
and are increasingly being designed and created by a range of people
not necessarily trained in XR development.
Building applications and prototyping visualization designs for im-
mersive environments remains challenging for many reasons. It is a
craft that naturally requires knowledge of concepts and technology
from data visualization and analytics, 3D computer graphics, AR, and
VR, as well as human-computer interaction, and human factors. Not
only does this hinder fast prototyping and design exploration, especially
in a real environment [36], but it creates a high bar for novice develop-
ers without background in any of these areas. On the other hand, the
success and wide adoption of D3 [43], Vega-Lite [66], and VTK [29]
have shown how visualization-specific toolkits and languages empower
development, design, and dissemination. We believe it is timely to
think about user-friendly tool-support for immersive visualizations.
In this paper, we present DXR, a toolkit for rapidly building and prototyping data visualization applications for extended reality. DXR
is based on the Unity development platform [24]. While Unity enables
XR development, it still has limited support for rapid prototyping of
visualization applications. Currently, designers must write low-level
code using C# or JavaScript to parse data, manually instantiate objects,
create the visual mappings, bind data to visual object properties, and
implement interaction, placement, and propagation of data updates.
Furthermore, iterative design changes require tuning of low-level code,
which prohibits quick prototyping. In turn, DXR provides a high-level
interface for constructing and adapting pre-configured visualizations
(Fig. 1) in Unity. It uses a Vega-Lite inspired grammar to specify a
visual mapping for data imported through DXR data loading routines.
Changes to the visual mapping automatically update the visualization
in XR while the developer wears the AR or VR headset and sees the
changes in real-time [54]. Advanced users can edit the Vega-Lite-like
design specifications in JavaScript Object Notation (JSON) in any text
editor. Eventually, designers can create custom graphical marks and
visual channels that leverage the wide variety of Unity prefabs for
building unique and engaging designs.
DXR comes with a library of reusable pre-defined visualizations
such as scatter plots, bar charts, and flow visualizations, which can
be connected through filtering and linking. DXR visualizations are
Unity GameObjects that are compatible with the full feature set of
the Unity development environment and associated libraries, e.g., for
object tracking and placement, interaction, styling, etc. Interactive
DXR applications can be exported to a variety of platforms, including
AR on Microsoft HoloLens [15], and VR headsets.
DXR aims to allow a wider community to explore and design im-
mersive data visualizations. Use cases for DXR range from individ-
ual interactive 2D or 3D visualizations, immersive multi-visualization
workspaces, to embedded data representations [36, 72] that apply XR
technology inside museums, sports arenas, and scientific labs, to name
a few. DXR is open-source, freely available at
view/dxr-vis, with a range of well-documented reusable examples.
2 Related Work

2.1 Applications of Immersive Visualization
Immersive visualizations have been built for most common visual-
ization types, including scatter plots [35], parallel coordinates [44],
networks [49], and sports analytics applications [30]. ImAxes [48]
implements scatter plots and parallel coordinates that are explorable
and reconfigurable through embodied interaction.
Beyond exploration, AR and VR are often used as a medium for
experiencing data-driven presentations and storytelling. For example,
LookVR [14] turns bar charts into virtual walls that can be climbed in
VR. Beach [60] virtually puts users in a room with dangerously increas-
ing sea levels to educate them about climate change. An application by
the Wall Street Journal [57] lets users virtually walk along a line chart
like a staircase to experience the rise and sudden fall of the Nasdaq
index during a stock market crash. All these examples present data-
driven scenes [38] that allow end-users to relate the data to real-life
experiences for better engagement and impact. These visualizations are
typically created by artists, storytellers, designers, and domain experts
who had to invest time to learn visualization and XR development.
Many more examples lend themselves to an exploration through
immersive technology motivated by better spatial perception, a larger
display space, or bringing together physical referents and their data [36,
69, 72]. Coupled with immersive displays, immersive interactions
beyond the mouse and keyboard allow natural and direct interaction
with data in AR [35] or VR [68] environments. Other studies have
shown the benefit of immersion for collaborative data analysis [49,50].
2.2 Authoring Immersive Visualizations
The most common platform to develop XR applications is Unity [24],
a game engine with a large community and a range of modular add-on assets such as 3D models and scripting libraries. For AR,
additional frameworks exist to support object tracking and rendering in
general, e.g., ARToolkit [6], Vuforia [31], or for specific platforms, e.g.,
ARCore for Android [4], ARKit for iOS [5], and Mixed Reality Toolkit
for Microsoft’s Universal Windows Platform (UWP) [16]. A-Frame [1]
is a library that enables the creation of immersive virtual scenes in
the browser by integrating WebVR [32] content within HTML. How-
ever, none of these libraries provides specific support for developing
and designing visualization applications in XR. Moreover, designing
visualizations in immersive environments can be complex, requiring
consideration of issues such as occlusion, clutter, and user movement
and interactions in 3D [36, 46,62].
Recent work has started to enable easier authoring of immersive visualizations, yet these tools still require a significant amount of low-level programming or are restricted to a limited set of graphical marks. For example,
Filonik et al. [53] proposed Glance, a GPU-based framework with a
focus on rendering fast and effective abstract visualizations in AR and
VR. Donalek et al. [50] developed iViz, which provides a GUI for
specifying visualization parameters for a collaborative VR analytics
environment. Virtualitics [28] is a commercial immersive and collabo-
rative visualization platform that uses machine learning to help inform
the design of three-dimensional visualizations. Operations such as filtering and details-on-demand are supported by virtual pointers.
2.3 Authoring Non-Immersive Visualizations
Visualization authoring tools for non-immersive platforms provide a
multitude of approaches, ranging from easy-to-use charting tools to
highly flexible visualization programming libraries (Fig. 2).
[Figure content: a spectrum from easy-to-learn, more templated tools to difficult-to-learn, more flexible ones; top: Polestar, Vega-Lite, Vega, Lyra, D3; bottom: DXR GUI, DXR grammar, Unity programming.]
Fig. 2. Inspired by (top) 2D JavaScript-based authoring tools, (bottom)
DXR offers multiple high-level interfaces that are easier to learn and use
than low-level Unity programming for constructing visualizations.
For instance, Plotly’s Chart Studio [20] lets users interactively ex-
plore different styles of charts based on data selection in a tabular
view. Similarly, Polestar [21] and RAWGraphs [22] both provide a
minimalistic drag-and-drop interface to specify a visual mapping and
instantly update the resulting visualization. These interactive charting
tools offer easy-to-use graphical interfaces instead of or in addition
to programming for adapting pre-configured designs. Tableau [23]
combines interactive creation with a scripting interface to perform data
analysis. On the other end of the spectrum, there are tools that require
more effort to learn and use but allow flexible and novel visualization
designs. These tools are naturally closer to low-level programming, but
include helper routines such as parsers, color scales, mapping operators,
data structures, as well as large libraries of existing visualizations to
start with. Examples include D3 for JavaScript [43], the InfoVis Toolkit
for Java [52], or Bokeh for Python [7]. Domain-specific languages
such as Vivaldi [47], Diderot [59], and ViSlang [64] provide high-level
programming APIs that are tailored for application domain experts.
In between these two extremes, there are a set of tools with a trade-off
between usability and flexibility. For instance, grammar-based author-
ing tools provide a high-level abstraction for building visualizations
so that designers can focus on their data and not worry about software
engineering [55]. The foundational grammar of graphics introduced by
Leland Wilkinson [71] paved the way for modern high-level visualization tools such as Vega [67], Vega-Lite [66], and ggplot2 [70]. Vega
and Vega-Lite make visualization design more efficient with concise
declarative specifications—enabling rapid exploration of designs albeit
with a limited set of graphical marks. Python, R, and Matlab offer their
own high-level visualization libraries that require a simple function
call with a respective parameterization to deliver data and visualization
parameters, e.g., Seaborn, Bokeh, Plotly, and ggplot2. Other interactive de-
sign tools include Lyra [65], Protovis [42] and Data-Driven Guides [58].
These tools allow for novel designs but require manual specification of
shapes, style, and sometimes layout.
DXR integrates several of these approaches. It uses a declarative
visualization grammar inspired by Vega-Lite; provides a GUI for speci-
fying visual mappings and designing visualizations inside XR; comes
with a set of pre-defined visualizations; and allows for individual styling
and customization. DXR is also fully compatible with and can be ex-
tended through C# code in Unity.
2.4 Unity Core Concepts
We briefly review the core concepts of Unity as far as they are important
for the understanding of DXR. In Unity, applications are represented
as composable 3D scenes in which designers can add and manipulate
GameObjects which encapsulate objects and their behavior. Example
GameObjects include cameras, 3D models, lights, effects, input han-
dlers, and so on. GameObjects are organized in parent-child hierarchies
or scene-graphs. GameObjects can be saved as prefabs that serve as
shareable and reusable templates. Unity has an on-line Asset Store [25]
for sharing reusable scenes, prefabs, and other assets. A script is C# or
JavaScript code that can be attached to GameObjects as components
and used to programmatically manipulate GameObjects at runtime. De-
signers can edit scenes and GameObjects interactively using the Unity
Editor user interface, or programmatically using scripts via the Unity
scripting API [27]. The scene can be configured to run in either AR or
VR, simply by specifying the target device in Unity deployment settings.
At runtime, i.e., when the scene is played, the user can see the scene
through the device’s display, and interact with it using the device’s input
modalities, e.g., controllers, gesture, or voice. For more information,
we refer the reader to the complete Unity documentation [26].
DXR consists of prefabs and scripts that provide a high-level interface
for constructing data-driven GameObjects in a Unity scene. Figure 3
illustrates the conceptual parts of DXR. A visualization in DXR is
represented as a GameObject prefab—vis-prefab—that can be added to
scenes and manipulated via the Unity Editor or via scripting, just like
any other GameObjects. The vis-prefab reads the visual mapping from
a visualization specification file—vis-specs—in JSON format. The vis-
specs file also contains a URL pointer to the respective data file which
can be in CSV or JSON format. When the data or vis-specs file changes,
DXR can be notified to update the visual rendering immediately.
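A minimal vis-specs file, then, only needs a data URL, a mark, and encodings. The following sketch follows the syntax of the examples in Fig. 5; the cars.csv file and its attribute names are hypothetical:

```json
{
  "data": { "url": "cars.csv" },
  "mark": "cube",
  "encoding": {
    "x": { "field": "Horsepower", "type": "quantitative" },
    "y": { "field": "MPG", "type": "quantitative" }
  }
}
```

Saving such a file is enough to (re)construct the visualization, since the vis-prefab re-reads the vis-specs when notified of changes.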
Fig. 3. DXR overview: A vis-specs file references the data file and
holds the design declaration. It gets interpreted by DXR to generate a
visualization that is represented as a vis-prefab GameObject in Unity.
DXR’s main target users are visualization developers with varying
expertise in XR programming (Unity/C#) and whose goal is to rapidly
prototype and build immersive visualizations.
Non-programmers (beginners) include users with little to no programming experience, e.g., architecture or biology students, artists, and storytellers. In DXR, they can build visualizations without
programming. To do this, they can place their data file into DXR's data directory. Then, they can add a DXR vis-prefab into their scene using the Unity menu or by dragging it into their scene window. They can then set the vis-specs filename parameter of the vis-prefab to an empty file (to start from scratch) or to one of the example vis-specs files in DXR's examples folder, which contains templates for common visualizations such as bar
charts, scatter plots, vector field plots, and many more. At runtime,
DXR generates the visualization, and a GUI gives the user control
over the data and visual mappings that can be changed (Fig. 4).
Non-XR-developers (intermediate) include users with general programming experience, e.g., with JSON and visualization grammars, but without experience with Unity and C# specifically. With DXR,
intermediate users can edit the vis-specs file and directly manipulate
the visualization grammar to add or remove visual mappings and
fine-tune the design (Sect. 4.1). For example, they can adjust scale
domains and ranges, change color schemes, etc. Intermediate users
can also create custom graphical marks with generic visual channels
without programming (Sect. 6).
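For example, an intermediate user could pin the color scale's domain and swap its scheme by editing the color encoding in the vis-specs; the snippet below mirrors the inferred specification of Fig. 5 (the scheme name is illustrative):

```json
"color": {
  "field": "FG%",
  "type": "quantitative",
  "scale": {
    "type": "sequential",
    "domain": [0, 100],
    "scheme": "ramp"
  }
}
```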
Fig. 4. Steps to use a template: 1) drag-and-drop vis-prefab to the scene,
2) set the vis-specs filename, 3) run, and tweak parameters in the GUI.
XR-developers (advanced) include users experienced in XR programming, i.e., well-versed in Unity/C# object-oriented programming and the scripting API. They are meant to quickly iterate between using the
GUI, editing the vis-specs file, and low-level C# programming to
develop custom visualizations. Advanced users can build custom
visualizations, e.g., by creating a new C# mark class that inherits
DXR’s graphical mark base class implementation (Sect. 6). GameOb-
ject attributes exposed as encoding channels extend the grammar,
show up in the GUI in XR, and are available in the vis-specs file.
Any new visualization created in this way is now accessible through
the previous two scenarios, benefiting other users.
The following sections detail how DXR supports these scenarios.
Fig. 5 shows DXR’s visualization pipeline, consisting of four steps:
specify, infer, construct, and place. First, the designer describes the
visualization design in a concise specification (vis-specs) using DXR’s
high-level visualization grammar. DXR then infers missing visualiza-
tion parameters with sensible defaults. Based on this complete speci-
fication, DXR then programmatically constructs the 3D visualization
that the designer can place in a real or virtual immersive scene.
4.1 Design Specification
We designed DXR’s visualization grammar to be similar to Vega-
Lite [66] because it is intuitive, making it easier to learn and modify
representations, and concise, making it efficient to iterate over designs.
Furthermore, there are many visualization designers who are familiar
with Vega, Vega-Lite, and Polaris who will find it easier to learn and
use DXR and transition their designs to immersive environments.
A single visualization in DXR, which we call dxrvis, is a collection of graphical marks (Unity GameObjects) whose properties (position, color, size, etc.) are mapped to data attributes according to the declarative specification in the vis-specs file. Following the notation of Vega-Lite, a dxrvis is a simplified equivalent of a unit that “describes a single Cartesian plot, with a backing data set, a given mark-type, and a set of one or more encoding definitions for visual channels such as position (x, y), color, size, etc.” [66]:

dxrvis := (data, mark, encodings, interactions)

The input data consists of a “relational table consisting of records (rows) with named attributes (columns)” [66]. The mark specifies the graphical object (Unity prefab) that will encode each data item. DXR’s built-in marks include standard 3D objects like sphere, cube, cone, and text, as well as user-provided custom graphical marks (Sect. 6). Encodings describe what and how properties of the mark will be mapped to data. Interactions that can be added to the visualization are discussed in a later section. The formal definition of an encoding is:
encoding := (channel, field, data-type, value,
scale, guide)
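As a concrete instance of this tuple, the x encoding of the basketball example instantiates a channel (x), a field (X), a data type, a scale, and an axis guide; the snippet is abridged from the inferred specification shown in Fig. 5:

```json
"x": {
  "field": "X",
  "type": "quantitative",
  "scale": { "type": "linear", "domain": [0, 50], "range": [0, 15240] },
  "axis": { "title": "X", "ticks": true }
}
```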
[Figure content: a concise specification, its inferred counterpart, and the resulting immersive 3D visualization. The concise specification reads:

{ "width" : 15240, "height" : 14330,
  "data" : { "url" : "basketball.csv" },
  "mark" : "sphere",
  "encoding" : {
    "x" : { "field" : "X", "type" : "quantitative",
            "scale" : { "domain" : [0, 50] } },
    "y" : { "field" : "Y", "type" : "quantitative",
            "scale" : { "domain" : [0, 47] } },
    "color" : { "field" : "FG%", "type" : "quantitative" }
  }
}

The inferred specification fills in the remaining parameters: a "depth"; each scale's "type" and "range" (e.g., a linear scale with range [0, 15240] for x); "axis" blocks with "filter", "title", "length", "color", "ticks", etc.; and, for the color channel, a sequential scale with a "scheme" plus a gradient "legend" with "gradientWidth", "gradientHeight", "title", etc.]
Fig. 5. DXR’s visualization pipeline. The designer specifies the visualization design via concise specifications. Then DXR infers missing parameters
to sensible defaults and uses the inferred specifications to programmatically construct a 3D visualization that can be placed in an AR or VR scene.
The channel describes which geometric or visual property of the graphical mark will be mapped to the data attribute specified by the field. DXR provides a set of generic channels that generally apply to any Unity GameObject, namely position, rotation, color, opacity, size, and length. The size channel rescales the object’s width, height, and depth equally, while length only rescales the object along a user-defined forward direction. This forward direction is by default set to the (0,1,0) 3D vector, and is used to orient marks that show direction, e.g., arrow and cone. DXR also provides offset channels that translate the mark by a percentage of its width, height, or depth for styling and to handle prefabs with different center or pivot points. The data-type describes the data attribute that can be quantitative, nominal, or ordinal. A channel can also be mapped to a fixed setting using the value property. The scale describes the type of mapping (linear, categorical, etc.) from data attribute values to visual channel properties, as well as the mapping’s domain and range. The guide properties describe the axis or legend specifications such as tick values, labels, and the like.
Fig. 1 (a-e) and Fig. 5 show examples of declarative specifications
using DXR’s grammar. A detailed grammar documentation with tutorials is provided on-line at dxr-vis/grammar-docs. Thanks to the syntax similarity, some Vega-
Lite visualizations can be ported with little effort into immersive envi-
ronments using DXR. Unlike Vega-Lite, DXR does not provide data
transforms, yet. We plan to add them in future versions.
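As a sketch of such a port, a Vega-Lite point chart maps almost directly onto a dxrvis: one swaps the 2D mark for a 3D prefab and may add a z encoding. The data file and attribute names here are hypothetical:

```json
{
  "data": { "url": "penguins.csv" },
  "mark": "sphere",
  "encoding": {
    "x": { "field": "flipper_length", "type": "quantitative" },
    "y": { "field": "body_mass", "type": "quantitative" },
    "z": { "field": "bill_length", "type": "quantitative" }
  }
}
```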
4.2 Inference
Concise specifications are intuitive and succinct, making them easy to learn and modify, and reducing the tedious setting of all tunable visualization parameters. DXR’s inference engine sets missing
visualization parameters to sensible defaults based on the data types
and visual channels informed by the Vega-Lite model. Originally, the
resulting inferred specification was hidden from the designer by default
and only used internally by DXR. However, feedback from new users
indicated that all the possible tunable parameters can be difficult to
remember, leading to frequent visits to the on-line documentation. To
address this, we provide designers direct access to DXR’s inferred
specifications so they can see and tweak them directly. This option
exposes all tunable parameters to improve debugging, customization,
and learning. Inference rules are documented on the DXR website.
4.3 Construction
A DXR specification is not translated to Unity code. Instead, a speci-
fication acts as a complete list of parameters for DXR’s visualization
construction pipeline that gets executed at runtime by the vis-prefab.
Visualizations in DXR are most similar to glyph-based visualiza-
tions [41]. A graphical mark in DXR is a glyph whose visual properties
are mapped to data (independently of other glyphs) and then rendered
within a spatial context. Thus, we modeled DXR’s construction pipeline
after Lie et al.’s glyph-based visualization pipeline [61], adapting it to
match Unity’s scripting API for prefab instantiation and modification.
First, DXR parses the data and constructs the necessary internal data
structures. Then it loads the specified mark as a GameObject prefab
which is instantiated for each data record. Each instance starts with
the prefab’s default properties with initial positions at the vis-prefab’s
origin. Then, DXR goes through each encoding parameter in the speci-
fications and changes visual channel properties of the mark instances
according to the data attribute or a given fixed value. This instantiation
and encoding is performed by a C# Mark base class that encapsulates
functionalities of a graphical mark. For example, to set the position,
rotation, and size channels, the class programmatically modifies each
instance’s local transform property. Scale parameters in the specifica-
tion instantiate one of several pre-programmed scaling functions for
mapping data attribute values to visual channel values. Finally, DXR
constructs optional axes, legends, and query filters. These steps result in
an interactive 3D visualization represented as children of the vis-prefab
GameObject—a collection of data-driven instances of the mark prefab,
with optional axes, legends, and filters. Similar to how glyph instances
are rendered in their spatial context, this 3D visualization can be placed
in an AR or VR scene for immersion.
We designed this construction pipeline to be as agnostic as possible
to the graphical mark prefab’s type and complexity in order to support
the use of any Unity prefab as a graphical mark (Sect. 6).
4.4 Placement
DXR facilitates the placement of visualizations within real or virtual
worlds. DXR provides an anchor—a red cube near the visualization
origin that allows a user to drag-and-drop the visualization in a fixed
position relative to the real-world or a virtual scene at runtime. When
the anchor is clicked on, the visualization’s position and orientation
get attached to that of the user’s view. By moving around, the user
effectively drags the visualization in 3D space. Clicking on the anchor
again drops the visualization. This feature is particularly useful for
aligning embedded visualizations with their object referents [72] or
spatial contexts such as the examples in Fig. 1. In these embedded
visualizations, physical positions of the referents need to be measured
and encoded in the data. The anchor can then be aligned to the real-
world origin used for measuring these positions. In the future, aligning
of graphical marks with (non-)static referents could be enabled with
computer vision and tracking.
Furthermore, DXR visualizations are GameObjects that can be com-
posed and placed within a Unity scene either manually using the Unity
Editor, or programmatically via scripts, or through libraries such as
Vuforia, e.g., for attaching a GameObject to a fiducial marker. In some
cases, designers may want to set the size of their visualization to match
Fig. 6. DXR supports interactive query filters and linked views. For visualized data attributes (e.g., Horsepower, Origin), threshold and toggle
filters can be directly integrated into their axes and legends, respectively (left: purple highlights). For non-visualized attributes (e.g., Cylinders, Weight in lbs), filters can be enumerated in the specification (right: blue highlight) and appear on the side. Visualizations that use the
same data within the same scene can be linked so that only data items satisfying queries are shown across all linked views (orange highlights).
GUI-based prototyping is easy to learn and use but is more templated; grammar-based prototyping requires learning the grammar but is more flexible.
Fig. 7. Prototyping with DXR typically involves (left) trying combinations of data, graphical mark, and visual encoding parameters using the in-situ
GUI on an immersive device, and (right) fine-tuning design specifications using a text editor running side-by-side with the Unity Editor on a computer.
the size of their intended canvas when overlaying them together. For
example, in Fig. 5, the width and height of the visualization are set to those of a basketball court (DXR size units are
in millimeters). Moreover, multiple visualizations can be arranged in
a scene to enable side-by-side comparison, or building of compound
charts, e.g., simple stacked bar charts (Fig. 1h).
4.5 Interactions
In order to support multi-visualization workspaces, DXR allows the
creation of query filters and linking of visualizations via vis-specs,
illustrated in Fig. 6. Interactive query filters [56] control the visibility of
graphical marks according to data using threshold and toggle interfaces.
Linked visualizations within the same scene get filtered together.
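A sketch of how such filters might be declared in the vis-specs: a threshold filter on a visualized attribute can be enabled through its axis (a "filter" flag appears in the inferred specification of Fig. 5), while the key names used below for filters on non-visualized attributes are assumptions; consult DXR's grammar documentation for the exact syntax:

```json
"encoding": {
  "x": {
    "field": "Horsepower",
    "type": "quantitative",
    "axis": { "filter": true }
  }
},
"interaction": [
  { "type": "thresholdFilter", "field": "Weight" }
]
```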
By default, DXR provides details-on-demand with a tooltip that
shows a textual list of data attributes when the user’s gaze pointer
hovers on a graphical mark. DXR’s GUI (Sect. 5) also provides view
manipulation (scaling up and down, rotating about the x-, y-, and z-axes) and
view configuration controls. DXR’s grammar for interaction can be ex-
tended to further exploit device-dependent affordances of tangible [35]
or direct manipulation [48] interfaces, as well as gesture- and voice-
based input. Furthermore, any existing Unity asset for manipulating
GameObjects and navigating scenes can be applied to DXR visualizations.
For example, hand tracking devices, e.g., leap motion [13], can be
utilized to move, rotate, and rescale DXR visualizations using hand ges-
tures. Similarly, device-dependent navigation features such as tracked
headsets allow walking around DXR visualizations in AR or VR.
Out of the many options for immersive input modalities, e.g., touch, gaze, gesture, and voice [37], we decided to use gaze and click for filtering, GUI interactions, and object placement. This makes them compatible with many common immersive devices, which typically support both. Their similarity to mouse interactions in WIMP-based interfaces also makes them familiar and easy to learn.
5 Graphical User Interface (GUI)
Fig. 7 shows a typical XR development set-up. It often requires testing a
visualization within an immersive environment while tuning the design
on a desktop or laptop computer running the Unity Editor. Initially, we
designed DXR so that designers can only modify vis-specs in a text
editor, typically running side-by-side with the Unity Editor. However,
we found that in some cases this led to tedious switching between
the two contexts. Furthermore, we found that the JSON syntax and
grammar-based specification were overwhelming to non-programmers.
To address these challenges, we designed and implemented DXR’s
in-situ GUI—an intuitive interface that is embedded in the Unity scene
with the vis-prefab so it runs in-situ within the immersive environment
at runtime (Fig. 7:left).
The GUI provides drop-down menus, similar to WIMP interfaces,
for selecting data, graphical marks, and visual encoding options from
pre-defined sets of parameters. This removes the need to deal with
JSON syntax. Moreover, designers no longer need to memorize possible
parameter values since the GUI’s drop-down menus already provide
lists of usable marks, channels, and data attributes. GUI interactions
directly modify the underlying specification, as illustrated in Fig. 1
(a, b) and Fig. 7, updating the output visualization instantly for rapid
immersive prototyping.
Using the in-situ GUI, a designer can stay in the immersive environ-
ment to try different combinations of data, marks, and channels until an
initial prototype has been reached. Instant preview of the visualization
gives the designer immediate feedback for rapid design iterations. The
design can then be fine-tuned back on the computer using a text editor.
The GUI also enables adapting and reusing existing DXR visualizations
as pre-configured templates similar to interactive charting applications.
With only a few clicks in the GUI, an existing visualization’s data can
be easily changed, instantly updating the visualization with the new
data—all without programming or additional scene configuration.
Our initial GUI design included drop-down menus for creating query
filters. However, we noticed that in practice they were seldom used,
yet made the GUI crowded. In our current design we removed these
menus since filters can be easily added via the grammar (Fig. 6). In
the future, we plan to make the GUI reconfigurable such that designers
can add and arrange menus for the features they use most. Another design we considered was to make the GUI tag along and follow the user's peripheral view. However, when a scene contains multiple visualizations, their GUIs overlap, rendering them unusable. In
the current design, the GUI is fixed on the side of the visualization by
default and simply rotates along the y-axis to always face the user.
6 Custom Graphical Marks and Channels
We made it easy to use any Unity prefab as a custom graphical mark in DXR in order to leverage prefabs' wide availability and variety to support flexible and engaging visualization designs. Fig. 8 illustrates how DXR
enables this by leveraging Unity’s object-orientedness in representing
graphical marks. As discussed in Sect. 4.3, DXR has a Mark base
class that encapsulates all mark-related graphing functionalities such
as instantiation and visual encoding. This base class treats any mark prefab in the same way, regardless of its type: as a shaded 3D model. DXR uses the bounding box of this model to modify standard geometric properties like position and size, and uses its material shader to change color and opacity. This base class is automatically applied to
any prefab within a designated marks directory.
Any Unity prefab can be converted into a graphical mark in DXR simply by placing it in the marks directory. During construction (Sect. 4.3), DXR uses the mark parameter in the specifications as the unique prefab filename to load from the marks directory. Once loaded successfully,
the prefab becomes a DXR graphical mark that can be instantiated and
modified according to data via the base class implementation. This
simple model makes it easy to extend the system with arbitrary Unity
prefabs as custom marks. For example, when a 3D model of a book
is saved as a prefab in the marks directory, it automatically becomes
a DXR graphical mark with the generic channels. Instead of a plain
bar chart, this book graphical mark can now be used to generate an
embellished bar chart (Fig. 8d).
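For instance, once book.prefab sits in the marks directory, switching a spec over to it is a one-line change; the data URL, field names, and the height channel shown here are illustrative:

```json
{
  "data": { "url": "bookshelf_keywords.json" },
  "mark": "book",
  "encoding": {
    "x": { "field": "keyword", "type": "nominal" },
    "height": { "field": "count", "type": "quantitative" }
  }
}
```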
Optionally, the designer can expose more complex prefab parameters
as custom encoding channels by implementing a derived class that
inherits from DXR’s Mark base class. For example, using this approach,
the intensity property of a flame particle system prefab can be used as
an encoding channel, in addition to the generic channels inherited from
the base class. This custom mark can be used to visualize forest fires in
Montesinho park [17] overlaid on a virtual geographical map (Fig. 8e).
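A derived class along the following lines could expose such a channel. This is a hypothetical sketch: the method name SetChannelValue and its signature are assumptions about DXR's Mark base class, not verbatim API, and the class depends on Unity's runtime.

```csharp
using UnityEngine;

// Hypothetical custom mark: exposes the flame particle system's
// emission intensity as an "intensity" encoding channel.
public class MarkFire : Mark
{
    public override void SetChannelValue(string channel, string value)
    {
        if (channel == "intensity")
        {
            // Map the (already scaled) data value to the particle emission rate.
            var emission = GetComponent<ParticleSystem>().emission;
            emission.rateOverTime = float.Parse(value);
        }
        else
        {
            // Generic channels (x, y, z, size, color, ...) fall back to the base class.
            base.SetChannelValue(channel, value);
        }
    }
}
```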
Custom marks and channels are represented as a mark prefab with
an optional script of the derived class. These formats can be packed
into a Unity package file that allows their easy sharing and reuse. Once
imported, custom marks and channels just work, without the need for
additional set-up or programming. A drawback of this approach, however, is that unlike grammars with a fixed set of marks with predictable behavior, DXR users will have to be conscious about imported marks to make sure that they understand how the channel encodings work, to avoid unexpected behavior. In the future, we envision that well-documented DXR mark prefabs with accompanying examples will be made available in the Asset Store, similar to D3 blocks [8] and Vega or Vega-Lite specifications, which will facilitate informed sharing and reuse. Designers must also be conscious that complex prefabs could extend construction times or limit frame rates with increasing data size (Sect. 9).
(a) cube, sphere; (b) book, baseball; (c) fire, milk
Fig. 8. (a) In addition to DXR’s built-in generic graphical marks, (b)
designers can use any Unity prefab as a custom mark with generic
visual channels such as position, size, and color, simply by saving it in a
designated directory. (c) Additional channels can be implemented in a
derived class to expose other prefab properties to DXR’s visual encoding
process. Custom marks and channels enable flexible designs, such as
(d) bookshelf keywords visualized using virtual books as bars, and (e)
forest fires visualized using flame particle systems on a virtual map.
7 Layered Authoring Support
Studies of visualization design workflows show that designers typically
iterate and switch between tools [39, 40]. For example, a designer may
use high-level tools like Polaris or Vega-Lite to create an initial visu-
alization, and then switch to more advanced D3 or Vega to fine-tune
the design. This type of workflow benefits from layered authoring
support [54], i.e., cross-compatible tools along the spectrum of simplic-
ity to flexibility illustrated in Fig. 2. This spectrum of tools can also
support the collaboration of designers and domain experts with varying
expertise in design, visualization, and programming.
DXR naturally supports layered authoring by providing multiple
alternative interfaces for design specification and graphical mark cus-
tomization. For design specification (Fig. 7:top) the GUI is easy to learn
and use, but is limited to pre-configured designs since the designer can
only change some parameters. The text editor allows tweaking of all
tunable parameters but requires familiarity with the grammar and JSON
syntax. Similarly, for graphical mark customization (Fig. 8:top), the
designer has three options: built-in graphical marks only allow simple
designs, custom marks with generic channels are easy to create but
only offer standard geometric and visual channels, and custom marks
and channels via derived class implementation are more complex to
create but are most flexible. With these options, DXR is able to support
iterative workflows as well as collaborations among users with varying
expertise as illustrated in the following examples.
8 Application Examples
We demonstrate the usage of DXR with a range of representative application examples. Table 1 categorizes them by a variety of characteristics. These and additional examples can be found on the DXR website.
Immersive information visualization. DXR can be used to create bar charts (Figs. 1h and 8d), scatter plots (Figs. 1i and 6), and space-time cubes (Fig. 10b) [33]. Without DXR, this would involve writing custom programs to load data, instantiate marks, calculate and apply visual mappings, and create axes, legends, and interactive query filters. With DXR, non-programmers in particular can easily prototype
Table 1. Summary of examples authored using DXR. Mark type is the graphical mark, which can be a generic type or a custom prefab; spatial dimension is 2D if the visualization uses both x and y position channels, and 3D if it uses all three; size is the size of the visualization (small: hand size, medium: table size, or large: room size); the runtime environment can be AR or VR; and anchor indicates whether the visualization is anchored in the real or virtual world.
visualizations without any programming, either starting from scratch or reusing templates as previously illustrated in Fig. 4. For example, to visualize research collaborations over time [34], a user can start from scratch with an empty vis-specs file, and then use the GUI to specify the data file, set the visual mark to cube, map the categorical researcher name attributes to the x and z channels, map the time attribute to the y channel, and finally map the quantitative weight (collaboration strength) to the cube's color channel. Based on these parameters, DXR writes the vis-specs file and generates the space-time cube visualization (Fig. 10b:left). This space-time cube can now be used as a template. For instance, another user can load a different dataset, e.g., country-to-country trading data, and update the data attributes through the GUI. As the data or any parameters change, the visualization updates (Fig. 10b:right) and the user can proceed with the exploration. A threshold filter for the time attribute can be added using the vis-specs.
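The resulting template's vis-specs would look roughly like the following; the field names and exact channel assignments are illustrative:

```json
{
  "data": { "url": "collaborations.json" },
  "mark": "cube",
  "encoding": {
    "x": { "field": "researcher1", "type": "nominal" },
    "z": { "field": "researcher2", "type": "nominal" },
    "y": { "field": "year", "type": "ordinal" },
    "color": { "field": "weight", "type": "quantitative" }
  },
  "interaction": [
    { "type": "thresholdFilter", "field": "year" }
  ]
}
```

Swapping the data URL and field names, e.g., for country-to-country trading data, is all that reusing the template requires.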
Immersive geospatial visualizations. To visualize forest fire data on a virtual map [17], a non-programmer can use a DXR scatter plot template and align its anchor to the origin of a map's image via the Unity Editor (Fig. 9a). At runtime, the user can then use the GUI to set the scatter plot's data filename and assign the fire location attributes to the x and y channels and fire intensity to the size channel (Fig. 9a). An intermediate user might improve on this by creating a custom flame graphical mark with generic visual channels. This can be done by downloading a static 3D flame model or prefab, e.g., from an on-line repository, copy-pasting it into DXR's designated marks directory, and renaming it to flame.prefab. Using the GUI or by editing the vis-specs file, the graphical mark can then be set to flame (Fig. 9b).
A more advanced user can use an animated particle system as a mark to make the visualization more engaging. To do this, the user can create a derived class, e.g., called MarkFire, and override the default size channel implementation to map it to the particle system's intensity parameter via C# programming. The user can then directly edit the vis-specs file to set the new mark parameter, as well as fine-tune the scale of the intensity mapping to match the desired forest fire effect (Fig. 8e).
Similarly, we created a custom 3D bar chart (Fig. 10e) that can be
flown-over or walked-through in AR or VR showing heights and ages
of several buildings in Manhattan [18]. Furthermore, we downloaded a
3D population visualization [9] from the Asset Store and converted it
into a reusable DXR template (Fig. 10f) with minimal effort.
Embedded data visualizations
place visualizations close to their
physical referent [72]. The example in Fig. 10a embeds a data-driven
Fig. 9. Prototypes for (a, b) forest fire and (c) milk data visualizations.
virtual glass of milk on top of each milk carton on a grocery shelf. An advanced user would iteratively build this visualization as follows. First, the physical positions of each milk carton are measured (in millimeters, with respect to a real-world origin, e.g., the lower left corner of the shelf) and are then added as x and y columns in the data. Then, a 2D scatter plot template is used to visualize the data, using the GUI to map the measured positions to the x and y dimensions, calcium content to size, and days before expiration to color. Next, the width and height parameters in the vis-specs are set to match those of the shelf. At runtime, DXR constructs the visualization, where scale parameters and color schemes are generated automatically with default values. The user can then place the visualization by aligning its anchor with the shelf's lower left corner (Fig. 9c). Then, the user downloads 3D models of a glass of milk and sugar cubes from Unity's Asset Store and composes a custom graphical mark, implementing new channels for milk height, milk color, and number of sugar cubes via C# programming. Using the GUI or vis-specs, these channels are then mapped to calcium content, days to expiry date, and sugar content, respectively. Scales and color schemes are then fine-tuned in the vis-specs, e.g., the color range for the milk is changed from the default white-to-red into brown-to-white, reflecting the freshness of the milk (Fig. 10a). For an advanced user, the complete design and specification process takes approximately 15-20 minutes.
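Sketched as vis-specs, the final design might read as follows; the mark name, field names, and custom channel names are hypothetical stand-ins for the custom C# implementation described above:

```json
{
  "data": { "url": "milk.json" },
  "mark": "milkglass",
  "encoding": {
    "x": { "field": "shelfX", "type": "quantitative" },
    "y": { "field": "shelfY", "type": "quantitative" },
    "milkheight": { "field": "calcium", "type": "quantitative" },
    "milkcolor": {
      "field": "daysToExpiry",
      "type": "quantitative",
      "scale": { "range": ["brown", "white"] }
    },
    "sugarcubes": { "field": "sugar", "type": "quantitative" }
  }
}
```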
Using a custom flame graphical mark's intensity channel, we show the remaining life of referent organic materials hanging on a wall (Fig. 1g), adding a virtual dimension to the existing artwork [3]. We also used DXR to create a 3D vector field plot using built-in cone graphical marks to show the locations of photographs (Fig. 1f) of an exhibit [12]. To build this example, 3D real-world positions and orientations were encoded in the data and mapped to the position (x, y, z) and corresponding direction channels. Embedded data visualizations can reveal insights about physical objects and spaces, as well as enhance our experience in the real world.
Immersive visualization workspaces consist of multiple linked visualizations in custom AR and VR environments. In an example scenario, users with varying backgrounds can collaboratively develop a VR workspace that visualizes multivariate hurricane data [11] through a set of 3D and 2D scientific and abstract visualizations (Fig. 10c). 3D data points encode position, orientation, temperature, pressure, etc. A domain expert on hurricanes without experience in XR programming,
for example, can use a 3D vector field plot template to visualize wind
velocity. The GUI can be used to quickly try out different variable
combinations to find interesting correlations in the data via scatter plot
templates. Working with an advanced immersive visualization designer,
the domain expert can then customize the layout, link visualizations,
and add query filters to support custom analytical workflows. The ar-
rangement of the visualizations can be easily modified using the Unity
Editor or direct manipulation, e.g., such that they surround a physical
workstation (Fig. 10d).
Immersive sports data analytics is a growing trend, with more and more companies leveraging AR and VR for immersive training and strategizing [30]. Our baseball (Fig. 1e) and basketball (Figs. 1h, 5,
and 7) examples were scaled to life-size and blended with real-world
or virtual courts. The full immersion potentially makes it easier for
players to assimilate and translate the data into actionable insights.
With a HoloLens, a baseball batter can, for example, view life-size
virtual pitched balls of an opponent for immersive training within a
real-world baseball field, similar to an existing immersive training
"data": { "url": "population.json" },
"mark": "radialbar",
"encoding": {
"latitude": {
"field": "lat",
"type": "quantitative"
"longitude": {
"field": "lng",
"type": "quantitative"
"length": {
"field": "population",
"type": "quantitative", ...
"color": {
"field": "population",
"type": "quantitative", ...
} } }
Fig. 10. Examples of immersive visualizations built using DXR include (a) embedded representations, (b, c, d, e, f) 2D and 3D information and
geospatial data visualizations, (c, d) immersive workspaces, and (g, h) 3D flow fields and streamlines. Prototyping each example took 10-30 minutes
using DXR’s GUI and grammar-based interfaces. Custom graphical marks are based on Asset Store prefabs and 3D models from on-line repositories.
All examples presented in this paper are available on the DXR website as templates for designers.
Fig. 11. (left) Construction times and (right) frame rates as a function of data size (log10 scale) running on Unity Desktop, HoloLens (HL), and ACER VR headset (VR). Lines show visualizations that use simple (cube, cone) or complex (fire, arrow) graphical marks; marks show other examples from this paper (Figs. 5, 10b, 10g, and 10h).
application [30]. Since most sports data are similar across players and
teams, once their visualizations are implemented in DXR, they can be
reused as templates by non-programmer players and coaches.
We also used DXR to create immersive flow field (Fig. 10g) and
streamlines (Fig. 10h) visualizations using arrow and paper airplane
graphical marks, respectively. Fig. 10h shows direct manipulation of
DXR visualizations using a Leap Motion controller.
9 Performance
As DXR is meant for prototyping and exploring designs, scalability was not an explicit design goal. This section reports on performance measures of the current implementation. Fig. 11 shows construction times
and frame rates for varying data sizes and graphical mark complexities
running on Unity Desktop, HoloLens [15] (HL), and ACER VR head-
set [2] (VR). The Unity Desktop experiments were performed using
Unity Editor 2017.2.1 on a PC with an Intel Xeon CPU (2 processors
@ 2.10 GHz), 128 GB RAM, and a GTX Titan X graphics card. The
ACER VR headset was tethered to the same PC, while the HoloLens
used its own standalone processor and GPU. For the Random Cube and Random Fire examples, we generated random 3D points and plotted them in a 3D scatter plot. We used the built-in cube graphical mark and a more complex fire particle system similar to Fig. 8e, respectively. We used these two examples as representative visualizations with both simple and complex marks on all devices. The Flow Cone and Flow Arrow examples use the flow simulation data shown in Fig. 10g at different subsampling levels, plotted as a vector field. We used the built-in cone and custom arrow graphical marks, respectively. Note that the flow visualization examples used 8 channels (three position, three direction, length, and color), while the scatter plot used only 3 channels (x, y, z).
To measure construction time, we ran DXR's visualization construction pipeline (Sect. 4) 11 times for each example. We discarded the first run as warm-up and report the average of the remaining 10.
Construction times remain below 12 seconds even for complex exam-
ples. As data size goes up, construction times increase as an effect of
increasing graphical mark prefab instantiation calls.
When measuring frame rate, we kept the visualization fully visible within the viewport and continuously rotated it along the y-axis with respect to its center. Frame rates drop more or less exponentially with increasing scene complexity. Scenes with 10,000 items are barely interactive, falling below the recommended 60 FPS [19]. The frame rate of the Flow Arrow example (yellow line) drops quickly because our custom arrow graphical mark consists of two geometric primitives, a cone and a cylinder, that need to be rendered and mapped to 8 data attributes each.
For reasonable data complexity (1,000 items or less) the system
achieves real-time frame rates (over 50 FPS). The exception to this
is the HoloLens which performs poorly with highly complex mark
prefabs, e.g., fire particle system, due to its limited stand-alone GPU
inside the head-mounted display. Nevertheless, for applications run-
ning below 60 FPS, the HoloLens runs a built-in image stabilization
pipeline that improves the stability of virtual objects to reduce motion
sickness, e.g., by duplicating frames [10]. We are not able to run the Random Fire HL example with 10,000 data points on the HoloLens due to memory limitations. We also note that the HoloLens and the VR
headset automatically cap frame rates at 60 FPS. With these numbers
in mind, designers must be conscious about their choice of mark prefab
in order to balance prefab complexity, data size, and frame rates accord-
ing to their design and hardware requirements. For data sizes beyond
1,000 items, despite low frame rates, DXR can still benefit developers
in quickly and cheaply previewing visualization designs in XR before
investing time in writing specialized and optimized implementations.
In the future, we hope to leverage advanced GPU shader programs
to improve frame rates for large data sizes. We could also provide
specially optimized custom graphical marks and use level-of-detail
techniques that have been developed to handle large-scale scientific vi-
sualizations [63]. Eventually, designers can build on DXR to implement
more scalable custom visualization techniques, e.g., multi-resolution
approaches, by aggregating data via external tools, combining multiple
visualizations, and customizing mark behavior.
10 Conclusions and Future Work
DXR makes rapid prototyping of immersive visualizations in Unity
more accessible to a wide range of users. By providing a high-level
interface and declarative visualization grammar, DXR reduces the need
for tedious manual visual encoding and low-level programming to
create immersive data-driven content. We believe DXR is an important
step towards enabling users to make their data engaging and insightful
in immersive environments.
DXR opens up many directions for future work. On one hand, we
look forward to developing new immersive visualization applications
for shopping, library-browsing, office productivity systems, or collabo-
rative analysis. On the other hand, we encourage the user community
to improve and extend DXR’s functionality. In addition to the GUI,
alternative immersive interfaces can be explored for specifying and
interacting with data representations, e.g., using gesture, voice, or
tangible user interfaces. We envision the development of immersive vi-
sualization recommender systems, similar to Voyager, providing better
support for designing in the AR-CANVAS [36] and to suggest designs
that can alleviate potential cluttering and occlusion issues. DXR may
also enable perception and visualization researchers to streamline user
studies for a better understanding of the benefits and limitations of
immersive visualization in various domains.
Acknowledgments
The authors wish to thank Iqbal Rosiadi, Hendrik Strobelt, and the
anonymous reviewers for their helpful feedback and suggestions. This
work was supported in part by the following grants: NSF IIS-1447344,
NIH U01CA200059, and National Research Foundation of Korea grants
NRF-2017M3C7A1047904 and NRF-2017R1D1A1A09000841.
References
[1] A-Frame. Last accessed: March 2018.
[2] Acer Mixed Reality. series/wmr. Last accessed: March 2018.
[3] Last accessed: July 2018.
[4] Last accessed: March 2018.
[5] Last accessed: March 2018.
[6] ARToolkit. Last accessed: March 2018.
[7] Last accessed: March 2018.
[8] D3 Blocks. Last accessed: March 2018.
[9] Globe - Data Visualizer. packages/templates/systems/globe-data-visualizer-80008. Last accessed: March 2018.
[10] Hologram stability. windows/mixed-reality/hologram-stability. Last accessed: June 2018.
[11] IEEE Visualization 2004 Contest. http://sciviscontest-staging. Last accessed: March 2018.
[12] Jerome B. Wiesner: Visionary, Statesman, Humanist. jerome-b-wiesner-visionary-statesman-humanist/. Last accessed: July 2018.
[13] Leap Motion. Last accessed: March 2018.
[14] lookvr. Last accessed: March 2018.
[15] Microsoft HoloLens. hololens. Last accessed: March 2018.
[16] Mixed Reality Toolkit.
[17] Montesinho Park Forest Fires Data. elikplim/forest-fires-data-set. Last accessed: March 2018.
[18] New York City Buildings Database. new-york-city/nyc-buildings/data. Last accessed: March 2018.
[19] Performance Recommendations for Hololens Apps. reality/performance-recommendations-for-hololens-apps. Last accessed: March 2018.
[20] Plotly. Last accessed: March 2018.
[21] Last accessed: March 2018.
[22] RAWGraphs. Last accessed: March 2018.
[23] Tableau. Last accessed: March 2018.
[24] Unity. Last accessed: March 2018.
[25] Unity Asset Store. Last accessed: March 2018.
[26] Unity documentation. Last accessed: June 2018.
[27] Unity Scripting API. ScriptReference/. Last accessed: March 2018.
[28] Last accessed: March 2018.
[29] Visualization Toolkit. Last accessed: March 2018.
[30] VR Sports Training. portfolio-items/vr-sports-training/. Last accessed: March 2018.
[31] Vuforia. Last accessed: March 2018.
[32] WebVR. Last accessed: March 2018.
B. Bach, P. Dragicevic, D. Archambault, C. Hurter, and S. Carpendale. A
descriptive framework for temporal data visualizations based on general-
ized spacetime cubes. Computer Graphics Forum, 36(6):36–61, 2017. doi:
10.1111/cgf. 12804
B. Bach, E. Pietriga, and J.-D. Fekete. Visualizing dynamic networks
with matrix cubes. In Proceedings of the SIGCHI conference on Human
Factors in Computing Systems, pp. 877–886. ACM, 2014.
B. Bach, R. Sicat, J. Beyer, M. Cordeil, and H. Pfister. The hologram in my
hand: How effective is interactive exploration of 3D visualizations in im-
mersive tangible augmented reality? IEEE Transactions on Visualization
and Computer Graphics, 24(1):457–467, Jan 2018. doi: 10.1109/TVCG.
B. Bach, R. Sicat, H. Pfister, and A. Quigley. Drawing into the AR-
CANVAS: Designing embedded visualizations for augmented reality. In
Workshop on Immersive Analytics, IEEE Vis, 2017.
S. K. Badam, A. Srinivasan, N. Elmqvist, and J. Stasko. Affordances of
input modalities for visual data exploration in immersive environments.
In Workshop on Immersive Analytics, IEEE Vis, 2017.
M. Bellgardt, S. Gebhardt, B. Hentschel, and T. Kuhlen. Gistualizer:
An immersive glyph for multidimensional datapoints. In Workshop on
Immersive Analytics, IEEE Vis, 2017.
A. Bigelow, S. Drucker, D. Fisher, and M. Meyer. Reflections on how
designers design with data. In Proceedings of the 2014 International
Working Conference on Advanced Visual Interfaces, AVI ’14, pp. 17–24.
ACM, New York, NY, USA, 2014. doi: 10.1145/2598153. 2598175
A. Bigelow, S. Drucker, D. Fisher, and M. Meyer. Iterating between tools
to create and edit visualizations. IEEE Transactions on Visualization and
Computer Graphics, 23(1):481–490, Jan 2017. doi: 10.1109/TVCG.2016.
R. Borgo, J. Kehrer, D. H. Chung, E. Maguire, R. S. Laramee, H. Hauser,
M. Ward, and M. Chen. Glyph-based visualization: Foundations, design
guidelines, techniques and applications. In Eurographics (STARs), pp.
39–63, 2013.
M. Bostock and J. Heer. Protovis: A graphical toolkit for visualization.
IEEE Transactions on Visualization and Computer Graphics, 15(6):1121–
1128, Nov. 2009. doi: 10.1109/TVCG.2009.174
M. Bostock, V. Ogievetsky, and J. Heer. Data-Driven Documents. IEEE
Transactions on Visualization and Computer Graphics, 17(12):2301–2309,
Dec 2011. doi: 10. 1109/TVCG.2011.185
S. Butscher, S. Hubenschmid, J. M
uller, J. Fuchs, and H. Reiterer. Clus-
ters, trends, and outliers: How immersive technologies can facilitate the
collaborative analysis of multidimensional data. In Proceedings of the
2018 CHI Conference on Human Factors in Computing Systems, CHI ’18,
pp. 90:1–90:12. ACM, New York, NY, USA, 2018. doi: 10.1145/3173574.
T. Chandler, M. Cordeil, T. Czauderna, T. Dwyer, J. Glowacki, C. Goncu,
M. Klapperstueck, K. Klein, K. Marriott, F. Schreiber, and E. Wilson.
Immersive Analytics. In 2015 Big Data Visual Analytics (BDVA), pp. 1–8,
Sept 2015. doi: 10. 1109/BDVA.2015.7314296
Z. Chen, Y. Wang, T. Sun, X. Gao, W. Chen, Z. Pan, H. Qu, and Y. Wu. Ex-
ploring the design space of immersive urban analytics. Visual Informatics,
1(2):132 – 142, 2017. doi: 10. 1016/j.visinf.2017.11.002
H. Choi, W. Choi, T. M. Quan, D. G. C. Hildebrand, H. Pfister, and W. K.
Jeong. Vivaldi: A domain-specific language for volume processing and
visualization on distributed heterogeneous systems. IEEE Transactions on
Visualization and Computer Graphics, 20(12):2407–2416, Dec 2014. doi:
10.1109/TVCG. 2014.2346322
M. Cordeil, A. Cunningham, T. Dwyer, B. H. Thomas, and K. Marriott.
ImAxes: Immersive axes as embodied affordances for interactive multivari-
ate data visualisation. In Proceedings of the 30th Annual ACM Symposium
on User Interface Software and Technology, UIST ’17, pp. 71–83. ACM,
New York, NY, USA, 2017. doi: 10.1145/3126594.3126613
M. Cordeil, T. Dwyer, K. Klein, B. Laha, K. Marriott, and B. H. Thomas.
Immersive collaborative analysis of network connectivity: Cave-style or
head-mounted display? IEEE Transactions on Visualization and Computer
Graphics, 23(1):441–450, Jan 2017. doi: 10.1109/TVCG.2016. 2599107
C. Donalek, S. G. Djorgovski, S. Davidoff, A. Cioc, A. Wang, G. Longo,
J. S. Norris, J. Zhang, E. Lawler, S. Yeh, A. Mahabal, M. J. Graham,
and A. J. Drake. Immersive and collaborative data visualization using
virtual reality platforms. In Big Data (Big Data), 2014 IEEE International
Conference on, pp. 609–614. IEEE, 2014.
N. ElSayed, B. Thomas, K. Marriott, J. Piantadosi, and R. Smith. Situated
analytics. In 2015 Big Data Visual Analytics (BDVA), pp. 1–8, Sept 2015.
doi: 10.1109/BDVA.2015. 7314302
J. D. Fekete. The InfoVis Toolkit. In IEEE Symposium on Information
Visualization, pp. 167–174, 2004. doi: 10.1109/INFVIS.2004.64
D. Filonik, T. Bednarz, M. Rittenbruch, and M. Foth. Glance: Generalized
geometric primitives and transformations for information visualization
in AR/VR environments. In Proceedings of the 15th ACM SIGGRAPH
Conference on Virtual-Reality Continuum and Its Applications in Industry
- Volume 1, VRCAI ’16, pp. 461–468. ACM, New York, NY, USA, 2016.
doi: 10.1145/3013971. 3014006
M. Gandy and B. MacIntyre. Designer’s augmented reality toolkit, ten
years later: Implications for new media authoring tools. In Proceedings
of the 27th Annual ACM Symposium on User Interface Software and
Technology, UIST ’14, pp. 627–636. ACM, New York, NY, USA, 2014.
doi: 10.1145/2642918. 2647369
J. Heer and M. Bostock. Declarative language design for interactive
visualization. IEEE Transactions on Visualization and Computer Graphics,
16(6):1149–1156, Nov 2010. doi: 10.1109/TVCG.2010. 144
J. Heer and B. Shneiderman. Interactive dynamics for visual analysis.
Queue, 10(2):30:30–30:55, Feb. 2012. doi: 10.1145/2133416.2146416
R. Kenny and A. A. Becker. Is the Nasdaq in another bubble? A virtual
reality guided tour of 21 years of the Nasdaq.
com/3d-nasdaq/. Last accessed: March 2018.
N. W. Kim, E. Schweickart, Z. Liu, M. Dontcheva, W. Li, J. Popovic,
and H. Pfister. Data-Driven Guides: Supporting expressive design for
information graphics. IEEE Transactions on Visualization and Computer
Graphics, PP(99):1–1, Jan 2017 2017.
G. Kindlmann, C. Chiw, N. Seltzer, L. Samuels, and J. Reppy. Diderot:
a domain-specific language for portable parallel scientific visualization
and image analysis. IEEE Transactions on Visualization and Computer
Graphics, 22(1):867–876, Jan 2016.
J. Li. Beach.
. Last accessed:
March 2018.
A. E. Lie, J. Kehrer, and H. Hauser. Critical design and realization aspects
of glyph-based 3D data visualization. In Proceedings of the 25th Spring
Conference on Computer Graphics, SCCG ’09, pp. 19–26. ACM, New
York, NY, USA, 2009. doi: 10. 1145/1980462.1980470
M. Luboschik, P. Berger, and O. Staadt. On spatial perception issues in
augmented reality based immersive analytics. In Proceedings of the 2016
ACM Companion on Interactive Surfaces and Spaces, ISS Companion
’16, pp. 47–53. ACM, New York, NY, USA, 2016. doi: 10.1145/3009939.
Z. Lv, A. Tek, F. Da Silva, C. Empereur-mot, M. Chavent, and M. Baaden.
Game on, Science - How video game technology may help biologists
tackle visualization challenges. PLOS ONE, 8(3):1–13, Mar 2013. doi: 10.
1371/journal.pone.0057990
P. Rautek, S. Bruckner, M. E. Gröller, and M. Hadwiger. ViSlang: A
system for interpreted domain-specific languages for scientific visual-
ization. IEEE Transactions on Visualization and Computer Graphics,
20(12):2388–2396, Dec 2014.
A. Satyanarayan and J. Heer. Lyra: An interactive visualization design
environment. In Computer Graphics Forum, vol. 33, pp. 351–360. Wiley
Online Library, 2014.
A. Satyanarayan, D. Moritz, K. Wongsuphasawat, and J. Heer. Vega-Lite:
A grammar of interactive graphics. IEEE Transactions on Visualization
and Computer Graphics, 23(1):341–350, Jan 2017. doi: 10.1109/TVCG.
A. Satyanarayan, R. Russell, J. Hoffswell, and J. Heer. Reactive Vega: A
streaming dataflow architecture for declarative interactive visualization.
IEEE Transactions on Visualization and Computer Graphics, 22(1):659–
668, Jan 2016.
W. Usher, P. Klacansky, F. Federer, P. T. Bremer, A. Knoll, J. Yarch,
A. Angelucci, and V. Pascucci. A virtual reality visualization tool for neuron
tracing. IEEE Transactions on Visualization and Computer Graphics,
24(1):994–1003, Jan 2018. doi: 10.1109/TVCG.2017.2744079
S. White and S. Feiner. SiteLens: Situated visualization techniques for
urban site visits. In Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems, CHI ’09, pp. 1117–1120. ACM, New York,
NY, USA, 2009. doi: 10.1145/1518701.1518871
H. Wickham. ggplot2: elegant graphics for data analysis. Springer, 2016.
L. Wilkinson. The grammar of graphics. Springer Science & Business
Media, 2006.
W. Willett, Y. Jansen, and P. Dragicevic. Embedded data representations.
IEEE Transactions on Visualization and Computer Graphics, 23(1):461–
470, Jan 2017. doi: 10.1109/TVCG.2016.2598608
... The visualization community has made significant strides in facilitating the authoring and prototyping of interactive visualization interfaces through toolkits and high-level grammars. While these contributions have significantly lowered the entry barrier for information [11,12,56,57,74], scientific [38,43,52], and immersive visualization [13,60], the same cannot be said about urban visual analytics. Urban data and urban analytical tasks impose specific requirements that must be met to drive real-world applications. ...
... In the third category, we have high-level visualization grammars [37,42,51,56,59,60,74], a compromise between the ease of use of template-based tools and the flexibility of visualization libraries. Rather than being constrained by templates or having to program individual visualization components, visualization grammars empower users to specify their visualizations through high-level abstractions. ...
... For example, with Vega [57], Vega-Lite [56], and Animated Vega-Lite [74], users can author their own visualizations through JSON files following rules that specify marks, encodings, and interactions of the plots. DXR [60], VRIA [13], and Deimos [40] extend Vega-Lite's grammar to virtual and augmented reality, offering the ability to create immersive visualizations within arbitrary (including urban) environments, but without offering complex integration and interaction between different data layers. ...
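The excerpt above describes how these grammars work: a JSON file declares marks and encodings, and the toolkit renders the corresponding (possibly immersive) visualization. A minimal sketch of such a specification in the Vega-Lite/DXR style follows; the data URL and field names are hypothetical, and the exact property set varies between the grammars named above:

```json
{
  "data": { "url": "inspections.json" },
  "mark": "sphere",
  "encoding": {
    "x": { "field": "longitude", "type": "quantitative" },
    "y": { "field": "latitude", "type": "quantitative" },
    "z": { "field": "floor", "type": "quantitative" },
    "color": { "field": "status", "type": "nominal" }
  }
}
```

Immersive extensions such as DXR add a third spatial channel (here `z`) and 3D marks (here `sphere`) on top of the 2D channels that Vega-Lite defines.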
Full-text available
While cities around the world are looking for smart ways to use new advances in data collection, management, and analysis to address their problems, the complex nature of urban issues and the overwhelming amount of available data have posed significant challenges in translating these efforts into actionable insights. In the past few years, urban visual analytics tools have significantly helped tackle these challenges. When analyzing a feature of interest, an urban expert must transform, integrate, and visualize different thematic (e.g., sunlight access, demographic) and physical (e.g., buildings, street networks) data layers, oftentimes across multiple spatial and temporal scales. However, integrating and analyzing these layers require expertise in different fields, increasing development time and effort. This makes the entire visual data exploration and system implementation difficult for programmers and also sets a high entry barrier for urban experts outside of computer science. With this in mind, in this paper, we present the Urban Toolkit (UTK), a flexible and extensible visualization framework that enables the easy authoring of web-based visualizations through a new high-level grammar specifically built with common urban use cases in mind. In order to facilitate the integration and visualization of different urban data, we also propose the concept of knots to merge thematic and physical urban layers. We evaluate our approach through use cases and a series of interviews with experts and practitioners from different domains, including urban accessibility, urban planning, architecture, and climate science. UTK is available at
... The general public has diverse technical skills and knowledge backgrounds, and special considerations should be given to users without programming and other advanced technical skills. (Butcher & Ritsos, 2017; Cavallo et al., 2019; Lee et al., 2019; Liu et al., 2020; Sicat et al., 2019) Domain experts include users in academia like data scientists, immersive analytic researchers, and other scientists. They can also be industry professionals like city planners, airport managers, air traffic analysts, investors, market operators, and visualization developers with varying XR programming expertise (Donalek et al., 2014; Hentschel et al., 2009; Shaikh et al., 2019). ...
... Supported data should be complex, large-scale, multidimensional, real-time, and dynamic, such as biomedical and other scientific abstract data, aircraft trajectory data, sports data, financial market data, and social media sentiment data (Butcher et al., 2019; Cordeil et al., 2019; Jing et al., 2019; Lee et al., 2019; Liu et al., 2020; Neto et al., 2015; Sicat et al., 2019; Tadeja et al., 2019). ...
... Case studies aim to observe how users visualize and analyze data using immersive dashboards in a predefined context to gather feedback for future use. Potential participants should be domain experts and/or general users, and their technical capabilities and knowledge background should be varied or the same (Cavallo et al., 2019; Cordeil et al., 2019; Filho et al., 2018; Lugmayr et al., 2019; Sicat et al., 2019). ...
... One driving force behind this growth is the availability of increasingly sophisticated toolkits for these development environments: For example, collaborative mixed reality (MR) systems "have only recently advanced to the point where researchers can focus deeply on the nuances of supporting collaboration, rather than needing to focus primarily on creating the enabling technology" [7]. Although there has been a proliferation of toolkits in different areas such as visualization [6,9,27,30] or logging [13,22], networking has been mostly neglected and delegated to commercial solutions. Networking is an essential part in many interactive CR prototypes, for example to support collaboration across realities (e.g., multiple homogeneous [8] or heterogeneous [29] devices) or to connect complementary interfaces [33] (e.g., transitioning between desktop and MR [15]). ...
... Other research-driven frameworks, such as Webstrates [17] and its variants [2,12,26], support developers in seamlessly synchronizing and sharing content across web-based devices. Especially in the field of InfoVis, toolkits such as IATK [6], DXR [30], u2vis [27], and RagRug [9] play an essential role to significantly reduce the effort required to create data visualizations. Recent toolkits such as MRAT [22] and RELIVE [14] also include data capturing capabilities to record and analyze MR study data. ...
Conference Paper
Full-text available
We present Colibri, an open source networking toolkit for data exchange, model synchronization, and voice transmission to support rapid development of distributed cross reality research prototypes. Development of such prototypes often involves multiple heterogeneous components, which necessitates data exchange across a network. However, existing networking solutions are often unsuitable for research prototypes as they require significant development resources and may be lacking in terms of data privacy, logging capabilities, latency requirements, or supporting heterogeneous devices. In contrast, Colibri is specifically designed for networking in interactive research prototypes: Colibri facilitates the most common tasks for establishing communication between cross reality components with little to no code necessary. We describe the usage and implementation of Colibri and report on its application in three cross reality prototypes to demonstrate the toolkit's capabilities. Lastly, we discuss open challenges to better support the creation of cross reality prototypes.
... The best authoring paradigm however is unclear. In broader immersive analytics, authoring systems range from text-based specifications (e.g., [7,37]) to GUIs (e.g., [9]) to fully embodied interactions (e.g., [10]). The latter approach would likely involve "building blocks", as P4 suggested, to allow end-users to easily build situated dashboards without complex grammars or code. ...
Situated Visualization is an emerging field that unites several areas - visualization, augmented reality, human-computer interaction, and the internet of things - to support human data activities within the ubiquitous world. Likewise, dashboards are broadly used to simplify complex data through multiple views. However, dashboards are adapted only for desktop settings and require visual strategies to support situatedness. We propose the concept of AR-based situated dashboards and present design considerations and challenges developed through interviews with experts. These challenges aim to propose directions and opportunities for facilitating the effective design and authoring of situated dashboards.
... Over the past few years, different 3D visualizations in mixed reality environments have been explored, such as flight trajectories [31], 3D parallel coordinates [18], interactively connecting and linking together different axes [5,22], link routing between different visualizations in a 3D space [51], or 3D geotemporal visualizations [60]. Consequently, frameworks that facilitate the creation of immersive visualizations have emerged, such as DXR [56], IATK [21], or VRIA [17]. Furthermore, recent research has investigated the opportunities of co-located collaboration, for example for augmenting large interactive displays [53] and within a shared virtual environment [40]. ...
Conference Paper
Full-text available
Recent research in the area of immersive analytics demonstrated the utility of head-mounted augmented reality devices for visual data analysis. However, it can be challenging to use the mid-air gestures supported by default to interact with visualizations in augmented reality (e.g. due to limited precision). Touch-based interaction (e.g. via mobile devices) can compensate for these drawbacks, but is limited to two-dimensional input. In this work we present STREAM: Spatially-aware Tablets combined with Augmented Reality Head-Mounted Displays for the multimodal interaction with 3D visualizations. We developed a novel eyes-free interaction concept for the seamless transition between the tablet and the augmented reality environment. A user study reveals that participants appreciated the novel interaction concept, indicating the potential for spatially-aware tablets in augmented reality. Based on our findings, we provide design insights to foster the application of spatially-aware touch devices in augmented reality and research implications indicating areas that need further investigation.
... To support the development of AR assistants, software toolkits have been proposed, for example, RagRug [16], which is designed for situated analysis, or Data visualizations in eXtended Reality (DXR) [46], which is specifically designed to build immersive analytics [30] applications. However, while such toolkits make it easier to develop feature-rich assistive systems that use data from the multiple sensors provided by the AR headset display and integrate AI methods, they do not offer explicit tools for external debugging of the required ML models and sensor streams. ...
The concept of augmented reality (AR) assistants has captured the human imagination for decades, becoming a staple of modern science fiction. To pursue this goal, it is necessary to develop artificial intelligence (AI)-based methods that simultaneously perceive the 3D environment, reason about physical tasks, and model the performer, all in real-time. Within this framework, a wide variety of sensors are needed to generate data across different modalities, such as audio, video, depth, speech, and time-of-flight. The required sensors are typically part of the AR headset, providing performer sensing and interaction through visual, audio, and haptic feedback. AI assistants not only record the performer as they perform activities, but also require machine learning (ML) models to understand and assist the performer as they interact with the physical world. Therefore, developing such assistants is a challenging task. We propose ARGUS, a visual analytics system to support the development of intelligent AR assistants. Our system was designed as part of a multi-year collaboration between visualization researchers and ML and AR experts. This co-design process has led to advances in the visualization of ML in AR. Our system allows for online visualization of object, action, and step detection as well as offline analysis of previously recorded AR sessions. It visualizes not only the multimodal sensor data streams but also the output of the ML models. This allows developers to gain insights into the performer activities as well as the ML models, helping them troubleshoot, improve, and fine-tune the components of the AR assistant.
... Existing Immersive Analytics research strongly focuses on data visualization [20], [47]. Specifically, a few toolkits enable data scientists to create immersive data visualizations, including DXR [48], VRIA [49], IATK [50], DataHop [30] and ImAxes [26]. While ImAxes and DataHop provide a fully immersive visualization authoring experience, the others require users to create and configure visualizations on the desktop and view visualizations in the immersive environment. ...
Full-text available
Data transformation is an essential step in data science. While experts primarily use programming to transform their data, there is an increasing need to support non-programmers with user interface-based tools. With the rapid development in interaction techniques and computing environments, we report our empirical findings about the effects of interaction techniques and environments on performing data transformation tasks. Specifically, we studied the potential benefits of direct interaction and virtual reality (VR) for data transformation. We compared gesture interaction versus a standard WIMP user interface, each on the desktop and in VR. With the tested data and tasks, we found time performance was similar between desktop and VR. Meanwhile, VR demonstrates preliminary evidence to better support provenance and sense-making throughout the data transformation process. Our exploration of performing data transformation in VR also provides initial affirmation for enabling an iterative and fully immersive data science workflow.
... This involves exploring novel techniques, guidelines and models [18]. An example of that is the DXR toolkit, presented in [50], that offers developers an efficient way to build immersive data visualization designs, with a new succinct declarative visualization grammar inspired by Vega-Lite. ...
Full-text available
To fully leverage the benefits of augmented and mixed reality (AR/MR) in supporting users, it is crucial to establish a consistent and well-defined situated visualization (SV) model. SV encompasses visualizations that adapt based on context, considering the relevant visualizations within their physical display environment. Recognizing the potential of SV in various domains such as collaborative tasks, situational awareness, decision-making, assistance, training, and maintenance, AR/MR is well-suited to facilitate these scenarios by providing additional data and context-driven visualization techniques. While some perspectives on the SV model have been proposed, such as space, time, place, activity, and community, a comprehensive and up-to-date systematization of the entire SV model is yet to be established. Therefore, there is a pressing need for a more comprehensive and updated description of the SV model within the AR/MR framework to foster research discussions.
As visualization makes the leap to mobile and situated settings, where data is increasingly integrated with the physical world using mixed reality, there is a corresponding need for effectively managing the immersed user's view of situated visualizations. In this paper we present an analysis of view management techniques for situated 3D visualizations in handheld augmented reality: a shadowbox, a world‐in‐miniature metaphor, and an interactive tour. We validate these view management solutions through a concrete implementation of all techniques within a situated visualization framework built using a web‐based augmented reality visualization toolkit, and present results from a user study in augmented reality accessed using handheld mobile devices.
Conference Paper
Full-text available
Immersive technologies such as augmented reality devices are opening up a new design space for the visual analysis of data. This paper studies the potential of an augmented reality environment for the purpose of collaborative analysis of multidimensional, abstract data. We present ART, a collaborative analysis tool to visualize multidimensional data in augmented reality using an interactive, 3D parallel coordinates visualization. The visualization is anchored to a touch-sensitive tabletop, benefiting from well-established interaction techniques. The results of group-based, expert walkthroughs show that ART can facilitate immersion in the data, a fluid analysis process, and collaboration. Based on the results, we provide a set of guidelines and discuss future research areas to foster the development of immersive technologies as tools for the collaborative analysis of multidimensional data.
Full-text available
Recent years have witnessed the rapid development and wide adoption of immersive head-mounted devices, such as HTC VIVE, Oculus Rift, and Microsoft HoloLens. These immersive devices have the potential to significantly extend the methodology of urban visual analytics by providing critical 3D context information and creating a sense of presence. In this paper, we propose a theoretical model to characterize the visualizations in immersive urban analytics. Furthermore, based on our comprehensive and concise model, we contribute a typology of combination methods of 2D and 3D visualizations that distinguishes between linked views, embedded views, and mixed views. We also propose a supporting guideline to assist users in selecting a proper view under certain circumstances by considering visual geometry and spatial distribution of the 2D and 3D visualizations. Finally, based on existing works, possible future research opportunities are explored and discussed.
Full-text available
Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists.
Conference Paper
Full-text available
We introduce ImAxes, an immersive system for exploring multivariate data using fluid, modeless interaction. The basic interface element is an embodied data axis. The user can manipulate these axes like physical objects in the immersive environment and combine them into sophisticated visualisations. The type of visualisation that appears depends on the proximity and relative orientation of the axes with respect to one another, which we describe with a formal grammar. This straightforward composability leads to a number of emergent visualisations and interactions, which we review, and then demonstrate with a detailed multivariate data analysis use case.
Full-text available
We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user’s real world and allows for interaction in-situ at the spatial position of the 3D hologram. The tablet is able to interact with 3D content through touch, spatial positioning, and tangible markers, however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that match human perceptual and interaction capabilities better to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each task having different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still fastest and most precise in almost all cases.
Conference Paper
Full-text available
Beyond other domains, the field of immersive analytics makes use of Augmented Reality techniques to successfully support users in analyzing data. When displaying ubiquitous data integrated into the everyday life, spatial immersion issues like depth perception, data localization and object relations become relevant. Although there is a variety of techniques to deal with those, they are difficult to apply if the examined data or the reference space are large and abstract. In this work, we discuss observed problems in such immersive analytics systems and the applicability of current countermeasures to identify needs for action.
Conference Paper
This paper outlines Glance, a unifying framework for exploring multidimensional, multivariate data in the context of AR/VR environments, along with specific implementation techniques that utilize programmable GPUs. The presented techniques extend the graphics pipeline through programmable shaders in order to support more general geometries and operations. Our point of departure from existing structural theories of graphics is a general spatial substrate, where data is encoded using higher-dimensional geometric primitives. From there, we define a series of processing stages, utilizing shaders to enable flexible and dynamic coordinate transformations. Furthermore, we describe how advanced visualization techniques, such as faceting and multiple views, can be integrated elegantly into our model. Bridging between Computer Graphics and Information Visualization theories, the elements of our framework are composable and expressive, allowing a diverse set of visualizations to be specified in a universal manner (see figure 1).
A taxonomy of tools that support the fluent and flexible use of visualizations.