User Interactions in Virtual Data Explorer
Kaur Kullman1,2[0000-0001-9480-0583] and Don Engel1[0000-0003-2838-0140]
1University of Maryland, Baltimore County, Baltimore MD 21250, USA
donengel@umbc.edu
2Tallinn University of Technology, 12616 Tallinn, Estonia hcii@coda.ee
Abstract. Cybersecurity practitioners face the challenge of monitoring complex and large datasets. These could be visualized as time-varying node-link graphs, but would still have complex topologies and very high rates of change in the attributes of their links (representing network activity). It is natural, then, that the needs of the cybersecurity domain have driven many innovations in 2D visualization and related computer-assisted decision making. Here, we discuss the lessons learned while implementing user interactions for Virtual Data Explorer (VDE), a novel system for immersive visualization (both in Mixed and Virtual Reality) of complex time-varying graphs. VDE can be used with any dataset to render its topological layout and overlay that with a time-varying graph; VDE was inspired by the needs of cybersecurity professionals engaged in computer network defense (CND).
Immersive data visualization using VDE enables intuitive semantic zooming, where the semantic zoom levels are determined by the spatial position of the headset, the spatial position of the handheld controllers, and user interactions (UIa) with those controllers. This spatially driven semantic zooming is quite different from most other network visualizations that have been attempted with time-varying graphs of the sort needed for CND, presenting a broad design space to be evaluated for overall user experience (UX) optimization. In this paper, we discuss these design choices, as informed by CND experts, with a particular focus on network topology abstraction with graph visualization, semantic zooming on increasing levels of network detail, and semantic zooming to show increasing levels of detail with textual labels.
Keywords: user interactions · virtual reality · mixed reality · network visualization · topology visualization · data visualization · cybersecurity
1 Introduction
This work follows a large volume of prior research done on 3D user interactions
[24, 3, 6, 10], immersive analytics [1, 5, 4, 18] and the combination of the two [21,
17, 23, 9]. Although the task-specific layout of an immersive data visualization
is arguably the most important aspect determining its utility [15], non-intrusive
and intuitive user interfaces (UI) and overall user experiences (UX) are also important in determining the usability and utility of an immersive data visualization. In this paper, we report on the applicability of various user interaction (UIa) methods for immersive analytics of node-link diagrams.

Fig. 1. A computer network's topology visualized with VDE, using a Mixed Reality headset.
Work on Virtual Data Explorer (VDE, Fig. 1) started in 2015, initially as a fork of OpenGraphiti and then rebuilt from scratch as a Unity 3D project [14]. One of the factors that motivated the move away from OpenGraphiti at the time was its lack of support for user interactions in virtual reality, an omission that became particularly significant when the Oculus Touch controllers, released in late 2016, enabled sufficiently precise user interactions to be implemented with Unity 3D. Feedback solicited from early VDE users motivated various alterations and additions to the interactions implemented for virtual and mixed reality in VDE.
2 Objective
Encoding information into depth cues while visualizing data has been avoided in the past for a good reason: on a flat screen, it is not helpful [19]. Nevertheless, recent studies have confirmed [23] that with equipment that provides the user with stereoscopic perception and parallax, three-dimensional shapes can be useful in providing users with insight into the visualized dataset [12]. Additionally, researchers have found that test subjects managed to gather data and to understand the cyber situation presented to them, with high performance scores, after only a few sessions, even if the task seemed difficult to them on the first try [8].
The motivating factors for creating VDE were the challenges that cyber defense analysts, cyber defense incident responders, network operations specialists, and related professionals face while analyzing the datasets relevant to their tasks. Such datasets are often multidimensional but not intrinsically spatial. Consequently, analysts must either scale down the number of dimensions visible at a time for encoding into a 2D or 3D visualization, or they must combine multiple visualizations displaying different dimensions of that dataset into a dashboard. The inspiration for VDE was the hope that immersive visualization would enable the 3D encoding of data in ways better aligned to subject matter experts' (SMEs') natural understanding of their datasets' relational layout, better reflecting their mental models of the multilevel hierarchical relationships of groups of entities expected to be present in a dataset and the dynamic interactions between these entities [13].
Therefore, the target audience for the visualizations created with VDE is the SMEs responsible for ensuring the security of networks and other assets. SMEs utilize a wide array of Computer Network Defense (CND) tools, such as Security Information & Event Management (SIEM) systems, which allow data from various sources to be processed and alerts to be handled [15]. CND tools allow analysts to monitor, detect, investigate, and report incidents that occur in the network, as well as provide an overview of the network state. To provide analysts with such capabilities, CND tools depend on the ability to query, process, summarize, and display large quantities of diverse data which have fast and unexpected dynamics [2]. These tools can be thought of along the lines of the seven human-data interaction task levels defined by Shneiderman [22]:
1. Gaining an overview of the entire dataset,
2. Zooming in on an item or subsets of items,
3. Filtering out irrelevant items,
4. Getting details-on-demand for an item or subset of items,
5. Relating between items or subset of items,
6. Keeping a history of actions, and
7. Allowing extraction of subsets of items and query parameters.
These task levels have been taken into account while developing VDE, and most have been addressed with its capabilities. When appropriate, Shneiderman's task levels are referred to by their sequential numbers later in this paper.
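For later reference, these task levels can be captured as a simple enumeration, so each interaction can be tagged with the levels it addresses. The sketch below is illustrative Python (VDE itself is a Unity 3D project); the interaction-to-level mapping shown mirrors the annotations used later in this paper:

```python
from enum import IntEnum

# Shneiderman's seven human-data interaction task levels, numbered as in
# the taxonomy, so interactions can be annotated with the levels they serve.
class TaskLevel(IntEnum):
    OVERVIEW = 1           # gain an overview of the entire dataset
    ZOOM = 2               # zoom in on an item or subsets of items
    FILTER = 3             # filter out irrelevant items
    DETAILS_ON_DEMAND = 4  # details for an item or subset of items
    RELATE = 5             # relate between items or subsets of items
    HISTORY = 6            # keep a history of actions
    EXTRACT = 7            # extract subsets of items and query parameters

# Illustrative tagging of two interactions discussed later in the paper:
point_to_select = {TaskLevel.DETAILS_ON_DEMAND}
grab_and_move = {TaskLevel.FILTER, TaskLevel.RELATE}
```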
3 Virtual Data Explorer
VDE enables a user to stereoscopically perceive a spatial layout of a dataset in a VR or MR environment (e.g., the topology of a computer network), while the resulting visualization can be augmented with additional data, like TCP/UDP/ICMP session counts between network nodes [16]. VDE allows its users to customize visualization layouts via two complementary text configuration files that are parsed by the VDE Server and the VDE Client.
To accommodate timely processing of large query results, data processing in VDE is separated into a server component (VDES). Thread-safe messaging is used extensively: most importantly, to keep the Client (VDEC) visualization in sync with (changes in) incoming data, but also for asynchronous data processing, for handling browser-based user interface actions, and in support of various other features.
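The server/client hand-off described above can be sketched as a producer and consumer connected by a thread-safe queue: a worker thread enqueues processed updates, while the render loop drains a bounded number of them per frame. This is an illustrative Python sketch, not VDE's actual Unity/C# implementation; the names (`data_worker`, `drain_updates`, the per-frame `budget`) are hypothetical:

```python
import queue
import threading

# Thread-safe channel between the data-processing side (VDES-like) and
# the rendering side (VDEC-like).
updates = queue.Queue()

def data_worker(records):
    """Producer: process query results and enqueue per-node updates."""
    for node_id, session_count in records:
        updates.put({"node": node_id, "sessions": session_count})

def drain_updates(scene, budget=100):
    """Consumer: apply at most `budget` updates per rendered frame,
    so a burst of incoming data cannot stall the frame rate."""
    applied = 0
    while applied < budget:
        try:
            update = updates.get_nowait()
        except queue.Empty:
            break
        scene[update["node"]] = update["sessions"]
        applied += 1
    return applied

scene = {}
worker = threading.Thread(
    target=data_worker, args=([("10.0.0.1", 42), ("10.0.0.2", 7)],))
worker.start()
worker.join()          # deterministic for this demonstration
applied = drain_updates(scene)
```

The per-frame budget is the essential design point: it decouples the rate of incoming data from the rate at which the visualization mutates.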
A more detailed description of VDE is available in [11].
3.1 Simulator Sickness
Various experiments have shown that applying certain limitations to a user's ability to move in the virtual environment (limiting their view and other forms of constrained navigation) will limit confusion and help prevent simulator sickness while in VR [7]. These lessons were learned while developing VDE and adjusted later, as others reported success with the same or similar mitigation efforts [20]. Most importantly, if an immersed user can only move the viewpoint (e.g., its avatar) forwards or backwards in the direction of the user's gaze (or head direction), the effects of simulator sickness can be minimized or avoided altogether [12]. This form of constrained navigation in VR is known as "the rudder movement" [20].
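The rudder constraint is compact to express: the only translation permitted is along the current gaze vector, scaled by a single forward/backward input axis. A minimal Python sketch (function and parameter names are hypothetical; VDE itself implements this in Unity):

```python
# "Rudder" navigation: the avatar may translate only along the gaze
# (head-forward) vector; all other motion input is ignored, which is the
# property credited with reducing simulator sickness.
def rudder_move(position, gaze_forward, axis_input, speed, dt):
    """Return the new position after moving along `gaze_forward`.

    axis_input: thumbstick value in [-1, 1]; positive moves forward.
    gaze_forward is assumed to be a unit vector (x, y, z).
    """
    axis_input = max(-1.0, min(1.0, axis_input))  # clamp controller input
    return tuple(p + g * axis_input * speed * dt
                 for p, g in zip(position, gaze_forward))

# One 90 FPS frame of full-forward input at 3 m/s, gazing along +Z:
new_pos = rudder_move((0.0, 1.7, 0.0), (0.0, 0.0, 1.0), 1.0, 3.0, 1 / 90)
```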
3.2 Virtual or Mixed Reality
Although VDE was initially developed with Virtual Reality headsets (Oculus Rift DK2 and later CV1 with Oculus Touch), its interaction components were always kept modular, so that once mixed reality headsets such as the Meta 2, Magic Leap, and HoloLens became available, their support could be integrated into the same codebase.
The underlying expectation for preferring MR to VR is the user's ability to combine stereoscopically perceivable data visualizations rendered by an MR headset with relevant textual information presented by other sources in the user's physical environment (a SIEM, dashboard, or another tool), most likely on flat screens. This requirement was identified from early user feedback: trying to input text or define/refine data queries while in VR would be vastly inferior to the textual interfaces that users are already accustomed to operating while using conventional applications on a flat screen for data analysis. Hence, rather than spend time on inventing 3D data-entry solutions for VR, it was decided to focus on creating and improving stereoscopically perceivable data layouts and letting users use their existing tools to control the selection of data that is then fed to the visualization.
A major advantage provided by the VR environment, relative to MR, is that VR allows users to move (fly) around in a larger-scale (overview) visualization of a dataset while becoming familiar with its layout(s) and/or while collaborating with others. However, once the user is familiar with the structure of their dataset, changing their position (by teleporting or flying in VR space) becomes less beneficial over time. Accordingly, as commodity MR devices became sufficiently performant, they were prioritized for development: first the Meta 2, later followed by support for the Magic Leap and HoloLens.

Fig. 2. Head-Up Display showing labels of visualized groups that the user focuses on, retaining visual connections to those with Bézier curves. The HUD is also used for other interaction and feedback purposes.
3.3 User Interface
In the early stages of VDE development on Unity 3D, efforts were made either to use existing VR-based menu systems (VRTK, later MRTK) or to design a native menu that would allow the user to control which visualization components are visible and/or interactive; to configure the connection to the VDE Server; to switch between layouts; and to exercise other control over the immersive environment. However, controlling VDE's server and client behavior, including data selection and transfer, turned out to be more convenient when done in combination with the VDES web-based interface and with existing conventional tools on a flat screen. For example, in the case of cybersecurity-related datasets, the data source could be a SIEM, log-correlation, netflow, or PCAP-analyzing environment.
3.4 Head-Up Display
Contextual information is displayed on a head-up display (HUD) that is perceived to be positioned a few meters away from the user in MR and about 30 m away in VR. The HUD smoothly follows the direction of the user's head in order to remain in the user's field of view (see Fig. 2). This virtual distance was chosen to allow a clear distinction between the HUD and the network itself, which is stereoscopically apparent as being nearer to the user.
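A common way to implement such a smooth-follow HUD is to ease its position each frame toward a point at a fixed distance along the head's forward vector. The Python sketch below is illustrative only (the function names and the smoothing constant are assumptions, not VDE's actual code):

```python
import math

def hud_target(head_pos, head_forward, distance):
    """Point at `distance` meters along the head's forward unit vector
    (a few meters in MR, about 30 m in VR)."""
    return tuple(p + f * distance for p, f in zip(head_pos, head_forward))

def smooth_follow(hud_pos, target, smoothing, dt):
    """Exponential smoothing toward the target. Using 1 - exp(-k*dt)
    makes the easing frame-rate independent, so the HUD follows head
    turns without snapping."""
    t = 1.0 - math.exp(-smoothing * dt)
    return tuple(h + (g - h) * t for h, g in zip(hud_pos, target))

# One 90 FPS frame: HUD eases toward a point 30 m ahead of the user.
target = hud_target((0.0, 1.7, 0.0), (0.0, 0.0, 1.0), 30.0)
hud_pos = smooth_follow((0.0, 1.7, 0.0), target, smoothing=8.0, dt=1 / 90)
```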
3.5 User Interactions
The ability to interact with the visualization, namely, to query information about a visual representation of a datapoint (e.g., a semi-transparent cube for a node or a line for a relation between two nodes) using input devices (e.g., hand and finger tracking, input controllers), is imperative. While gathering feedback from SMEs [12], this querying capability was found to be crucial for the users' immersion in the VR data visualization, allowing them to explore and to build their understanding of the visualized data.

The MR or VR system's available input methods are used to detect whether the user is trying to grab something, point at a node, or point at an edge. In the case of MR headsets, these interactions are based on the user's tracked hands (see Fig. 3 and Fig. 4); in the case of VR headsets, pseudo-hands (see Fig. 5 and Fig. 6) are rendered based on the hand-held input controllers.
A user can:
1. point to select a visual representation of a data object, a node (for example, a cube or a sphere) or an edge, with a "laser" or the index finger of the dominant hand (either the virtual rendering of the hand or, in the case of MR headsets, the user's tracked hand). Once selected, detailed information about the selected object (node or edge) is shown on a line of text rendered next to the user's hand (Shneiderman Task Level 4).
2. grab (or pinch) nodes and move (or throw) them around to better perceive their relations by observing the edges that originate or terminate in that node: humans perceive the terminal locations of moving lines better than those of static ones (Shneiderman Task Levels 3, 5).
3. control a data visualization layout's properties (shapes, curvature, etc.) with the controller's analog sensors (Shneiderman Task Levels 1, 5).
4. gesture with the non-dominant hand to trigger various functionalities. For example: a starfish gesture toggles the HUD; pinching with both hands scales the visualization; a fist toggles edges; etc.
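The pointing interaction in item 1 is typically implemented as a ray cast from the controller or fingertip against the nodes' bounding volumes, picking the nearest hit. A simplified Python sketch with spherical bounds, assuming normalized ray directions (the helper names are hypothetical, and Unity engines would normally use built-in physics ray casts instead):

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return the ray parameter t of the nearest intersection, or None.
    `direction` is assumed normalized, so the quadratic's a == 1."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0     # nearer of the two roots
    return t if t >= 0 else None         # ignore hits behind the origin

def pick_node(origin, direction, nodes):
    """nodes: {node_id: (center, radius)}; return the closest hit id."""
    best, best_t = None, math.inf
    for node_id, (center, radius) in nodes.items():
        t = ray_hits_sphere(origin, direction, center, radius)
        if t is not None and t < best_t:
            best, best_t = node_id, t
    return best

nodes = {"10.0.0.1": ((0.0, 0.0, 5.0), 0.5),
         "10.0.0.2": ((0.0, 0.0, 10.0), 0.5)}
selected = pick_node((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), nodes)
```

Once `selected` is non-None, the detail line described in item 1 would be rendered next to the user's hand.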
In addition to active gestures and hand recognition, the user's position and gaze (rather than just their head direction) are used, if available, to decide which visualization sub-groups to focus on, to enable textual labels, to hide enclosures, and to enable update routines, colliders, etc. (Shneiderman Task Levels 2, 3, 4, 5, 7). Therefore, depending on the user's direction and location amongst the visualization components, and on the user's gaze (if eye-tracking is available), a visualization's details are either visible or hidden, and if visible, then either interactive or not.

Fig. 3. In an MR environment, the user pinches a node, which is sized accordingly, to move it around and explore its relations. Notice the two gray spheres indicating the locations where the MR device (Magic Leap) perceives the tips of the user's thumb and index finger to be: due to the device's lack of precision, these helper markers are used to guide the user. Note that the distortion is further aggravated by the way the device records the video and overlays the augmentation onto it. For comparison with the Virtual Reality view, please see Fig. 5.

Fig. 4. MR view of the Locked Shields 18 Partner Run network topology and network traffic visualization with VDE; the user is selecting a Blue Team's network's visualization with the index finger to have it enlarged and brought into the center of the view. Please see the video accompanying this paper for better perception: https://coda.ee/HCII22
The reasons for such behavior are threefold:
1. Exposing the user to too many visual representations of the data objects will overwhelm them, even if occlusion is not a concern.
2. Having too many active objects may overwhelm the GPU/CPU of a standalone MR/VR headset, or even of a computer rendering into a VR headset, due to the computational costs of colliders, joints, and other physics (see the "Optimizations" section below).
3. By adjusting their location (and gaze), the user can:
(a) See an overview of the entire dataset (Shneiderman Task Level 1),
(b) Zoom in on an item or subsets of items (Shneiderman Task Level 2),
(c) Filter out irrelevant items (Shneiderman Task Level 3),
(d) Get details-on-demand for an item or subset of items (Shneiderman Task Level 4),
(e) Relate between items or subsets of items (Shneiderman Task Level 5).
Fig. 7 and Fig. 8 show this behavior, while the video (https://coda.ee/HCII22) accompanying this paper makes understanding such MR interaction clearer than is possible from a screenshot, albeit less so than experiencing it with an MR headset.
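This spatially driven behavior can be summarized as a distance-driven choice of detail level: far away, only a group's enclosure is drawn; closer, the enclosure gives way to subgroup labels; closer still, individual nodes and their labels appear. The Python sketch below is illustrative; the thresholds are made up, and VDE's actual rules also weigh gaze and view direction:

```python
import math

# Map the user's distance to a group onto a semantic detail level.
# Thresholds (meters) are hypothetical, chosen only for illustration.
def detail_level(user_pos, group_center, near=2.0, mid=8.0):
    d = math.dist(user_pos, group_center)
    if d > mid:
        return "enclosure"   # overview: only the group's box and label
    if d > near:
        return "subgroups"   # enclosure hidden, subgroup labels shown
    return "nodes"           # node labels (e.g. IP addresses) shown
```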
Fig. 5. In a VR environment, the user grabs a node, which is sized to sit in one's palm. For comparison with the Mixed Reality view, please see Fig. 3.
Fig. 6. The user touches an edge with the index finger of the Oculus avatar's hand to learn details about that edge.
Fig. 7. Once the user moves closer to a part of the visualization that might be of interest, textual labels are shown for upper-tier groups first, while the rectangular representations of these groups disappear as the user gets closer, to enable focusing on the subgroups inside, and then on the nodes with their IP addresses as labels. To convey the changes in the visualization as the user moves, screenshots are provided sequentially, numbered 1-4. For comparison with the Virtual Reality view, please see Fig. 8.
Fig. 8. Once the user moves closer to a part of the visualization that might be of interest, textual labels are shown for upper-tier groups first, while the rectangular representations of these groups disappear as the user gets closer, to enable focusing on the subgroups inside, and then on the nodes with their IP addresses as labels. To convey the changes in the visualization as the user moves, screenshots are provided sequentially, numbered 1-4. For comparison with the Mixed Reality view, please see Fig. 7.
3.6 Textual Information
Text labels of nodes, edges, and groups are a significant issue, as they are expensive to render due to their complex geometrical shapes, and they also risk occluding objects which may fall behind them. Accordingly, text is shown in VDE only when necessary, to the extreme that a label is made visible only when the user's gaze is detected on a related object. Backgrounds are not used with text in order to reduce their occlusive footprint.
3.7 Optimizations
The basis for VDE: less is more.
Occlusion of visual representations of data objects is a significant problem for 3D data visualizations on flat screens. In VR/MR environments, occlusion can be mostly mitigated by stereoscopic perception of the (semi-transparent) visualizations of data objects and by parallax, but it may still be problematic [5].

While occlusion in MR/VR can be addressed by measures such as transparency, transparency adds significant overhead to the rendering process. To optimize occlusion-related issues, VDE strikes a balance between the necessity of transparency of visualized objects and the number of components currently visible, adjusting the latter (toggling textual labels, reducing the complexity of objects that are farther from the user's viewpoint, etc.) based on the current load (measured FPS); on objects' relative positions in the user's gaze (in view, not in view, behind the user); and on the user's virtual distance from these objects. This XR-centric approach to semantic zooming provides a natural user experience, visually akin to the semantic zooming techniques used in online maps, which smoothly but dramatically change the extent of detail as a function of zoom level (showing only major highways or the smallest of roads, toggling the visibility of street names and points of interest markers).
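The load-aware part of this balancing act can be sketched as a policy that progressively sheds the most expensive features (labels, colliders) when measured FPS drops or when an object is far away or out of view. The Python below is a hedged illustration; the thresholds and feature names are hypothetical, not VDE's actual values:

```python
# Decide which per-object features to keep enabled, given the measured
# frame rate, whether the object is in the user's view, and its distance.
def visible_features(fps, in_view, distance):
    features = set()
    if not in_view:
        return features            # behind the user: render nothing extra
    features.add("mesh")           # the object itself is always cheapest
    if fps >= 45 and distance < 10.0:
        features.add("labels")     # text is expensive; only when close
    if fps >= 60 and distance < 5.0:
        features.add("colliders")  # physics only when there is headroom
    return features
```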
Although the colors and shapes of the visual representations of data objects can be used to convey information about their properties, user feedback has confirmed that these should be used sparingly. Therefore, in most VDE layouts, the nodes (representing data objects) are visualized as transparent off-white cubes or spheres, the latter only if the available GPU is powerful enough. Displaying a cube versus a sphere may seem a trivial difference, but considering the sizes of some of the datasets visualized (>10,000 nodes and >10,000 edges), these complexities add up quickly and take a significant toll.
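A back-of-envelope calculation shows why the cube-versus-sphere choice matters at this scale: a cube mesh is 12 triangles, while even a coarse 16 x 16 UV sphere is about 512, so across >10,000 nodes the sphere variant costs over forty times more geometry. The sphere resolution below is an assumption for illustration:

```python
# Triangle budget for node meshes at dataset scale.
CUBE_TRIS = 12               # 6 faces x 2 triangles
SPHERE_TRIS = 16 * 16 * 2    # coarse UV sphere; illustrative resolution

def scene_triangles(node_count, tris_per_node):
    return node_count * tris_per_node

cubes = scene_triangles(10_000, CUBE_TRIS)      # 120,000 triangles
spheres = scene_triangles(10_000, SPHERE_TRIS)  # 5,120,000 triangles
```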
4 Conclusion
Immersive visualization of large, dynamic node-link diagrams requires careful consideration of visual comprehensibility and computational performance. While many node-link visualization idioms are well studied in 2D flat-screen visualizations, the opportunities and constraints presented by VR and MR environments are distinct. As the pandemic made a larger-scale study with many participants impossible, VDE instead underwent a more iterative review process, drawing input from representative users and domain expertise. The approach described herein reflects many iterations of performance testing and user feedback.
Optimizing user interactions for VDE presented the design challenge of providing an interface which intuitively offers an informative presentation of the node-link network both at a high-level "overview" zoom level and at a very zoomed-in "detail" view, with well-chosen levels of semantic zoom available along the continuum between these extremes. Constrained navigation further optimizes the user experience, limiting confusion and motion sickness. Dynamic highlighting, through the selection and controller-based movement of individual nodes, enhances the users' understanding of the data.
5 Acknowledgements
The authors thank Alexander Kott, Jennifer A. Cowley, Lee C. Trossbach, Matthew C. Ryan, Jaan Priisalu, and Olaf Manuel Maennel for their ideas and guidance. This research was partly supported by the Army Research Laboratory under Cooperative Agreement Number W911NF-17-2-0083 and in conjunction with the CCDC Command, Control, Computers, Communications, Cyber, Intelligence, Surveillance, and Reconnaissance (C5ISR) Center. The material is based upon work supported by NASA under award number 80GSFC21M0002.
References
1. Batch, A., Elmqvist, N.: The interactive visualization gap in initial exploratory data analysis. IEEE Transactions on Visualization and Computer Graphics 24(1), 278–287 (2018). https://doi.org/10.1109/TVCG.2017.2743990
2. Ben-Asher, N., Gonzalez, C.: Effects of cyber security knowledge on attack detection. Computers in Human Behavior 48, 51–61 (2015). https://doi.org/10.1016/j.chb.2015.01.039, https://www.sciencedirect.com/science/article/pii/S0747563215000539
3. Casallas, J.S., Oliver, J.H., Kelly, J.W., Merienne, F., Garbaya, S.: Using relative head and hand-target features to predict intention in 3d moving-target selection. In: 2014 IEEE Virtual Reality (VR). pp. 51–56 (2014). https://doi.org/10.1109/VR.2014.6802050
4. Dübel, S., Röhlig, M., Schumann, H., Trapp, M.: 2d and 3d presentation of spatial data: A systematic review. In: 2014 IEEE VIS International Workshop on 3DVis (3DVis). pp. 11–18 (2014). https://doi.org/10.1109/3DVis.2014.7160094
5. Elmqvist, N., Tsigas, P.: A taxonomy of 3d occlusion management for visualization. IEEE Transactions on Visualization and Computer Graphics 14(5), 1095–1109 (2008). https://doi.org/10.1109/TVCG.2008.59
6. Günther, T., Franke, I.S., Groh, R.: Aughanded virtuality – the hands in the virtual environment. In: 2015 IEEE Virtual Reality (VR). pp. 327–328 (2015). https://doi.org/10.1109/VR.2015.7223428
7. Johnson, D.M.: Introduction to and review of simulator sickness research (2005)
8. Kabil, A., Duval, T., Cuppens, N.: Alert characterization by non-expert users in a cybersecurity virtual environment: A usability study. In: De Paolis, L.T., Bourdot, P. (eds.) Augmented Reality, Virtual Reality, and Computer Graphics. pp. 82–101. Springer International Publishing, Cham (2020)
9. Kabil, A., Duval, T., Cuppens, N., Comte, G.L., Halgand, Y., Ponchel, C.: Why should we use 3d collaborative virtual environments for cyber security? In: 2018 IEEE Fourth VR International Workshop on Collaborative Virtual Environments (3DCVE). pp. 1–2 (2018). https://doi.org/10.1109/3DCVE.2018.8637109
10. Kang, H.J., Shin, J.h., Ponto, K.: A comparative analysis of 3d user interaction: How to move virtual objects in mixed reality. In: 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). pp. 275–284 (2020). https://doi.org/10.1109/VR46266.2020.00047
11. Kullman, K.: Creating useful 3d data visualizations: Using mixed and virtual reality in cybersecurity (2020), https://coda.ee/MAVRIC, 3rd Annual MAVRIC Conference
12. Kullman, K., Ben-Asher, N., Sample, C.: Operator impressions of 3d visualizations for cybersecurity analysts. In: 18th European Conference on Cyber Warfare and Security. Coimbra, Portugal (2019)
13. Kullman, K., Cowley, J., Ben-Asher, N.: Enhancing cyber defense situational awareness using 3d visualizations. In: 13th International Conference on Cyber Warfare and Security. Washington, DC (2018)
14. Kullman, K.: Virtual data explorer. https://coda.ee/
15. Kullman, K., Buchanan, L., Komlodi, A., Engel, D.: Mental model mapping method for cybersecurity. In: HCI (2020)
16. Kullman, K., Engel, D.: Interactive stereoscopically perceivable multidimensional data visualizations for cybersecurity. Journal of Defence & Security Technologies 4(3), 37–52 (2022). https://doi.org/10.46713/jdst.004.03
17. Lu, F., Davari, S., Lisle, L., Li, Y., Bowman, D.A.: Glanceable ar: Evaluating information access methods for head-worn augmented reality. In: 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). pp. 930–939 (2020). https://doi.org/10.1109/VR46266.2020.00113
18. Miyazaki, R., Itoh, T.: An occlusion-reduced 3d hierarchical data visualization technique. In: 2009 13th International Conference Information Visualisation. pp. 38–43 (2009). https://doi.org/10.1109/IV.2009.32
19. Munzner, T.: Visualization Analysis and Design. AK Peters Visualization Series, CRC Press (2015), https://books.google.de/books?id=NfkYCwAAQBAJ
20. Pruett, C.: Lessons from the frontlines: Modern VR design patterns (2017), https://developer.oculus.com/blog/lessons-from-the-frontlines-modern-vr-design-patterns, Unity North American Vision VR/AR Summit
21. Roberts, J.C., Ritsos, P.D., Badam, S.K., Brodbeck, D., Kennedy, J., Elmqvist, N.: Visualization beyond the desktop – the next big thing. IEEE Computer Graphics and Applications 34(6), 26–34 (2014). https://doi.org/10.1109/MCG.2014.82
22. Shneiderman, B.: The eyes have it: A task by data type taxonomy for information visualizations. In: Proceedings 1996 IEEE Symposium on Visual Languages. pp. 336–343 (1996). https://doi.org/10.1109/VL.1996.545307
23. Whitlock, M., Smart, S., Szafir, D.A.: Graphical perception for immersive analytics. In: 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). pp. 616–625 (2020). https://doi.org/10.1109/VR46266.2020.00084
24. Yu, D., Liang, H.N., Fan, K., Zhang, H., Fleming, C., Papangelis, K.: Design and evaluation of visualization techniques of off-screen and occluded targets in virtual reality environments. IEEE Transactions on Visualization and Computer Graphics 26(9), 2762–2774 (2020). https://doi.org/10.1109/TVCG.2019.2905580
... 4 Additionally, there is a need for new tools that facilitate training and monitoring through the big data that the organizations have, which will help security operators in rapidly detecting, responding, and preventing these threats. 5 Also there is a shortage of cybersecurity professionals and a lack of diversity in the workforce, as it has been reported that less than 20% of cybersecurity professionals were women or individuals from underrepresented groups. 6 This also highlights the need for new technologies or methods to enhance cybersecurity education and to increase interest in this profession. ...
... One application was created as VR but also was designed to be easily changed to be MR, which hints to the direction of crossreality applications in the future in this domain. 5,23 Reports assessed for eligibility. ...
... Munsinger et al. showed that immersive VR systems with headsets provide a greater perception of security threats in comparison to traditional monitoring tools such as Wireshark. 18 Similarly, Kullman et al. 5 developed the Virtual Data Explorer (VDE) which is an immersive VR to visualize complex network topologies in 3D and to enhance understanding of the cyber situation in it. Their qualitative study with 10 cybersecurity analysts revealed that this VR visualization was better than cluttered 2D screen representations or printed paper-based representations of the topologies, also analysts showed interest in incorporating this visualization in their workflow. ...
... As XR has been shown to enable domain scientists in other fields to develop a better underestanding of their data [7], we seek to use XR to support GEOS users. Our implementation is in NASA's open source toolkit for XR, the Mixed Reality Exploration Toolkit (MRET) [8], because MRET has a track record in supporting the modular development of tailored visualizations for other application domains [9]. ...
Conference Paper
Full-text available
Our work explores the use of extended reality (XR) to improve scientific discovery with numerical weather/climate models that inform Earth science digital twins, specifically the NASA Goddard Earth Observing System (GEOS) global atmospheric model. The overall project is named the Vi-sualization And Lagrangian dynamics Immersive eXtended Reality Toolkit (VALIXR), which has two main areas of focus: (1) enhancing the understanding of and interaction with model output data through advanced visualizations in the XR environment, and (2) the integration of Lagrangian dynamics into the GEOS model, which allows a natural, feature-specific analysis of Earth science phenomena as opposed to traditional, fixed-point Eulerian dynamics. Here, we report initial work on these focus areas.
... The first image in the panel depicts an overview of the network layout used in the present study. The second image is a representative close-up (taken fromKullman & Engel, 2022). (C) Images depicting the 2D network topology as shown in the Arkime condition. ...
Article
Full-text available
Background Cyber defense decision-making during cyber threat situations is based on human-to-human communication aiming to establish a shared cyber situational awareness. Previous studies suggested that communication inefficiencies were among the biggest problems facing security operation center teams. There is a need for tools that allow for more efficient communication of cyber threat information between individuals both in education and during cyber threat situations. Methods In the present study, we compared how the visual representation of network topology and traffic in 3D mixed reality vs. 2D affected team performance in a sample of cyber cadets (N = 22) cooperating in dyads. Performance outcomes included network topology recognition, cyber situational awareness, confidence in judgements, experienced communication demands, observed verbal communication, and forced choice decision-making. The study utilized network data from the NATO CCDCOE 2022 Locked Shields cyber defense exercise. Results We found that participants using the 3D mixed reality visualization had better cyber situational awareness than participants in the 2D group. The 3D mixed reality group was generally more confident in their judgments except when performing worse than the 2D group on the topology recognition task (which favored the 2D condition). Participants in the 3D mixed reality group experienced less communication demands, and performed more verbal communication aimed at establishing a shared mental model and less communication discussing task resolution. Better communication was associated with better cyber situational awareness. There were no differences in decision-making between the groups. This could be due to cohort effects such as formal training or the modest sample size. Conclusion This is the first study comparing the effect of 3D mixed reality and 2D visualizations of network topology on dyadic cyber team communication and cyber situational awareness. Using 3D mixed reality visualizations resulted in better cyber situational awareness and team communication. The experiment should be repeated in a larger and more diverse sample to determine its potential effect on decision-making.
Article
Full-text available
Interactive Data Visualizations (IDV) can be useful for cybersecurity subject matter experts (CSMEs) while they are exploring new data or investigating familiar datasets for anomalies, correlating events, etc. For an IDV to be useful to a CSME, interaction with that visualization should be simple and intuitive (free of additional mental tasks) and the visualization’s layout must map to a CSME’s understanding. While CSMEs may learn to interpret visualizations created by others, they should be encouraged to visualize their datasets in ways that best reflect their own ways of thinking. Developing their own visual schemes makes optimal use of both the data analysis tools and human visual cognition. In this article, we focus on a currently available interactive stereoscopically perceivable multidimensional data visualization solution, as such tools could provide CSMEs with better perception of their data compared to interpreting IDV on flat media (whether visualized as 2D or 3D structures).
Conference Paper
Full-text available
Cybersecurity analysts ingest and process significant amounts of data from diverse sources in order to acquire network situation awareness. Visualizations can enhance the efficiency of analysts' workflow by providing contextual information, various sets of cybersecurity-related data, information regarding alerts, among others. However, textual displays and 2D visualizations have limited capabilities in displaying complex, dynamic and multidimensional information. There have been many attempts to visualize data in 3D on 2D displays, but success has been limited. We propose that customized, stereoscopically perceivable 3D visualizations aligned with analysts' internal representations of network topology may enhance their capability to understand their networks' state in ways that 2D displays cannot afford. These 3D visualizations may also provide a path for users who are trained and comfortable with textual and 2D representations of data to assess visualization methods that may be suitably aligned to implicit knowledge of their networks. Thus, the premise of custom data visualizations forms the foundation for this study. Herein, we report on findings from a comparative, qualitative, within-subjects usability analysis between 2D and 3D representations of the same network traffic dataset. Study participants (analysts) provided information on: 1) ability to create an initial understanding of the network, 2) ease of finding task-relevant information in the representation, and 3) overall usability. Results indicated that interviewees preferred the 3D visualizations over the 2D alternatives, and we discuss possible explanations for this preference.
Article
Full-text available
This research explores the design and evaluation of visualization techniques of targets that reside outside of users' view or are occluded by other elements within a virtual reality environment (VE). We first compare four techniques (3DWedge, 3DArrow, 3DMinimap, and Radar) that can provide direction and distance information of targets. To give structure to their evaluation, we also develop a framework of four tasks (one for direction and three for distance) and their assessment criteria. The results show that 3DWedge is the best-performing and most usable technique. However, all techniques, including 3DWedge, have poor performance in dense scenarios with a large number of targets. To improve their support in dense scenarios, a fifth technique, 3DWedge+, is developed by using 3DWedge as its foundation and including the visual elements of the other three techniques that are useful. A second study is conducted to evaluate the performance of 3DWedge+ in relation to the other techniques. The results show that both 3DWedge and 3DWedge+ are significantly better in distinguishing user-to-target distance and that 3DWedge+ is particularly suitable for dense scenarios. Based on these results, we provide a set of recommendations for the design of visualization techniques of off-screen and occluded targets in 3D VE.
Chapter
Although cybersecurity is a domain where data analysis and training are considered of the highest importance, few virtual environments have been developed specifically for cybersecurity, even though such environments are used effectively in other domains to tackle these issues.
Chapter
Visualizations can enhance the efficiency of Cyber Defense Analysts, Cyber Defense Incident Responders and Network Operations Specialists (Subject Matter Experts, SMEs) by providing contextual information for various cybersecurity-related datasets and data sources. We propose that customized, stereoscopic 3D visualizations, aligned with SMEs' internalized representations of their data, may enhance their capability to understand the state of their systems in ways that flat displays with text, 2D, or 3D visualizations cannot afford. For these visualizations to be useful and efficient, we need to align them with SMEs' internalized understanding of their data. In this paper we propose a method for interviewing SMEs to extract their implicit and explicit understanding of the data that they work with, in order to create useful, interactive, stereoscopically perceivable visualizations that would assist them with their tasks.