Reimagining TaxiVis through an Immersive Space-Time Cube metaphor and reflecting on potential benefits of Immersive Analytics for urban data exploration
Abstract
[ IEEE VR 2024 Paper ] [ https://arxiv.org/abs/2402.00344 ]
Current visualization research has identified the potential of more immersive settings for data exploration, leveraging VR and AR technologies. To explore how a traditional visualization system could be adapted into an immersive framework, and how it could benefit from this, we decided to revisit a landmark paper presented ten years ago at IEEE VIS. TaxiVis, by Ferreira et al., enabled interactive spatio-temporal querying of a large dataset of taxi trips in New York City. Here, we reimagine how TaxiVis’ functionalities could be implemented and extended in a 3D immersive environment. Among the unique features we identify as being enabled by the Immersive TaxiVis prototype are alternative uses of the additional visual dimension, a fully visual 3D spatio-temporal query framework, and the opportunity to explore the data at different scales and frames of reference. By revisiting the case studies from the original paper, we demonstrate workflows that can benefit from this immersive perspective. Through reporting on our experience, and on the vision and reasoning behind our design decisions, we hope to contribute to the debate on how conventional and immersive visualization paradigms can complement each other and on how the exploration of urban datasets can be facilitated in the coming years.
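At its core, the fully visual 3D spatio-temporal query framework described above combines a spatial predicate with a temporal one over individual trips. A minimal sketch of such a query, assuming hypothetical trip fields and an axis-aligned region (not the authors' actual implementation):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Trip:
    pickup_lat: float
    pickup_lon: float
    pickup_time: datetime

def in_region(trip, lat_min, lat_max, lon_min, lon_max):
    """Spatial predicate: pickup falls inside an axis-aligned region."""
    return (lat_min <= trip.pickup_lat <= lat_max and
            lon_min <= trip.pickup_lon <= lon_max)

def query(trips, region, t_start, t_end):
    """Combine spatial and temporal constraints, as selecting a
    volume in a space-time cube would."""
    return [t for t in trips
            if in_region(t, *region) and t_start <= t.pickup_time <= t_end]
```

Selecting a 3D box in the immersive view then corresponds to one call with a region and a time interval.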
While cities around the world are looking for smart ways to use new advances in data collection, management, and analysis to address their problems, the complex nature of urban issues and the overwhelming amount of available data have posed significant challenges in translating these efforts into actionable insights. In the past few years, urban visual analytics tools have significantly helped tackle these challenges. When analyzing a feature of interest, an urban expert must transform, integrate, and visualize different thematic (e.g., sunlight access, demographic) and physical (e.g., buildings, street networks) data layers, oftentimes across multiple spatial and temporal scales. However, integrating and analyzing these layers require expertise in different fields, increasing development time and effort. This makes the entire visual data exploration and system implementation difficult for programmers and also sets a high entry barrier for urban experts outside of computer science. With this in mind, in this paper, we present the Urban Toolkit (UTK), a flexible and extensible visualization framework that enables the easy authoring of web-based visualizations through a new high-level grammar specifically built with common urban use cases in mind. In order to facilitate the integration and visualization of different urban data, we also propose the concept of knots to merge thematic and physical urban layers. We evaluate our approach through use cases and a series of interviews with experts and practitioners from different domains, including urban accessibility, urban planning, architecture, and climate science. UTK is available at
urbantk.org.
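The "knots" concept of merging thematic and physical layers can be illustrated as a simple keyed join. This is only a sketch of the idea, not UTK's actual grammar or API, and the field names are assumptions:

```python
def knot(physical_layer, thematic_layer, key="id"):
    """Attach thematic values (keyed by feature id) to physical
    features (e.g., building geometries); features without a
    matching thematic value get None."""
    merged = []
    for feature in physical_layer:
        value = thematic_layer.get(feature[key])
        merged.append({**feature, "value": value})
    return merged
```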
Stakeholder participation is an important component of modern urban planning processes. It can provide information about potential social conflicts related to specific urban planning scenarios. However, acquiring feedback from stakeholders is usually limited to explicit response types such as interviews or questionnaires. Such explicit response types are not suitable for the assessment of unconscious responses to specific parameters of an urban planning scenario. To address this limitation, we propose an approach for the assessment of affective and stress responses using implicit measures. Using a measure for electrodermal activity (EDA) and a virtual reality (VR)-based 3D urban model, we demonstrate how implicit physiological measurements can be visualized and temporally matched to specific parameters in an immersive representation of an urban planning scenario. Since this approach is supposed to support conventional stakeholder participation processes in urban planning, we designed it to be simple, cost-effective and with as little task interference as possible. Based on the additional insights gained from measuring physiological responses to urban planning scenarios, urban planners can further optimize planning scenarios by adjusting them to the derived implicitly expressed needs of stakeholders. To support simple implementation of the suggested approach, we provide sample scripts for visualization of EDA data. Limitations concerning the evaluation of raw EDA data and potentials for extending the described approach with additional physiological measures and real-time data evaluation are discussed.
Developing effective visual analytics systems demands care in characterization of domain problems and integration of visualization techniques and computational models. Urban visual analytics has already achieved remarkable success in tackling urban problems and providing fundamental services for smart cities. To promote further academic research and assist the development of industrial urban analytics systems, we comprehensively review urban visual analytics studies from four perspectives. In particular, we identify 8 urban domains and 22 types of popular visualization, analyze 7 types of computational method, and categorize existing systems into 4 types based on their integration of visualization techniques and computational models. We conclude with potential research directions and opportunities.
Recent technological innovations have led to an increase in the availability of 3D urban data, such as shadow, noise, solar potential, and earthquake simulations. These spatiotemporal datasets create opportunities for new visualizations to engage experts from different domains to study the dynamic behavior of urban spaces in this underexplored dimension. However, designing 3D spatiotemporal urban visualizations is challenging, as it requires visual strategies to support analysis of time-varying data referent to the city geometry. Although different visual strategies have been used in 3D urban visual analytics, the question of how effective these visual designs are at supporting spatiotemporal analysis on building surfaces remains open. To investigate this, in this paper we first contribute a series of analytical tasks elicited after interviews with practitioners from three urban domains. We also contribute a quantitative user study comparing the effectiveness of four representative visual designs used to visualize 3D spatiotemporal urban data: spatial juxtaposition, temporal juxtaposition, linked view, and embedded view. Participants performed a series of tasks that required them to identify extreme values on building surfaces over time. Tasks varied in granularity for both space and time dimensions. Our results demonstrate that participants were more accurate using plot-based visualizations (linked view, embedded view) but faster using color-coded visualizations (spatial juxtaposition, temporal juxtaposition). Our results also show that, with increasing task complexity, plot-based visualizations perform better in preserving efficiency (time, accuracy) compared to color-coded visualizations. Based on our findings, we present a set of takeaways with design recommendations for 3D spatiotemporal urban visualizations for researchers and practitioners.
Lastly, we report on a series of interviews with four practitioners, and their feedback and suggestions for further work on the visualizations to support 3D spatiotemporal urban data analysis.
As mixed-reality (MR) technologies become more mainstream, the delineation between data visualisations displayed on screens or other surfaces and those floating in space becomes increasingly blurred. Rather than the choice of using either a 2D surface or the 3D space for visualising data being a dichotomy, we argue that users should have the freedom to transform visualisations seamlessly between the two as needed. However, the design space for such transformations is large, and practically uncharted. To explore this, we first establish an overview of the different states that a data visualisation can take in MR, followed by how transformations between these states can facilitate common visualisation tasks. We then describe a design space of how these transformations function, in terms of the different stages throughout the transformation, and the user interactions and input parameters that affect it. This design space is then demonstrated with multiple exemplary techniques based in MR.
We propose TimeTables, a novel prototype system that aims to support data exploration, using embodiment with space-time cubes in virtual reality. TimeTables uses multiple space-time cubes on virtual tabletops, which users can manipulate by extracting time layers or individual buildings to create new tabletop views. The surrounding environment includes a large space for multiple linked tabletops and a storage wall. TimeTables presents information at different time scales by stretching layers to drill down in time. Users can also jump into tabletops to inspect data from an egocentric perspective. We present a use case scenario of energy consumption displayed on a university campus to demonstrate how our system could support data exploration and analysis over space and time. From our experience and analysis we believe the system has a high potential in assisting spatio-temporal data exploration and analysis.
After a long period of scepticism, more and more publications describe basic research but also practical approaches to how abstract data can be presented in immersive environments for effective and efficient data understanding. Central aspects of this important research question in immersive analytics research are concerned with the use of 3D for visualization, the embedding in the immersive space, the combination with spatial data, suitable interaction paradigms and the evaluation of use cases. We provide a characterization that facilitates the comparison and categorization of published works and present a survey of publications that gives an overview of the state of the art, current trends, and gaps and challenges in current research.
Exploring large virtual environments, such as cities, is a central task in several domains, such as gaming and urban planning. VR systems can greatly help this task by providing an immersive experience; however, a common issue with viewing and navigating a city in the traditional sense is that users can either obtain a local or a global view, but not both at the same time, requiring them to continuously switch between perspectives, losing context and distracting them from their analysis. In this paper, our goal is to allow users to navigate to points of interest without changing perspectives. To accomplish this, we design an intuitive navigation interface that takes advantage of the strong sense of spatial presence provided by VR. We supplement this interface with a perspective that warps the environment, called UrbanRama, based on a cylindrical projection, providing a mix of local and global views. The design of this interface was performed as an iterative process in collaboration with architects and urban planners. We conducted a qualitative and a quantitative pilot user study to evaluate UrbanRama, and the results indicate the effectiveness of our system in reducing perspective changes, while ensuring that the warping doesn't affect distance and orientation perception.
Immersive Analytics is a quickly evolving field that unites several areas such as visualisation, immersive environments, and human-computer interaction to support human data analysis with emerging technologies. This research has thrived over the past years with multiple workshops, seminars, and a growing body of publications, spanning several conferences. Given the rapid advancement of interaction technologies and novel application domains, this paper aims toward a broader research agenda to enable widespread adoption. We present 17 key research challenges developed over multiple sessions by a diverse group of 24 international experts, initiated from a virtual scientific workshop at ACM CHI 2020. These challenges aim to coordinate future work by providing a systematic roadmap of current directions and impending hurdles to facilitate productive and effective applications for Immersive Analytics.
In this work we propose the combination of large interactive displays with personal head-mounted Augmented Reality (AR) for information visualization to facilitate data exploration and analysis. Even though large displays provide more display space, they are challenging with regard to perception, effective multi-user support, and managing data density and complexity. To address these issues and illustrate our proposed setup, we contribute an extensive design space comprising first, the spatial alignment of display, visualizations, and objects in AR space. Next, we discuss which parts of a visualization can be augmented. Finally, we analyze how AR can be used to display personal views in order to show additional information and to minimize the mutual disturbance of data analysts. Based on this conceptual foundation, we present a number of exemplary techniques for extending visualizations with AR and discuss their relation to our design space. We further describe how these techniques address typical visualization problems that we have identified during our literature research. To examine our concepts, we introduce a generic AR visualization framework as well as a prototype implementing several example techniques. In order to demonstrate their potential, we further present a use case walkthrough in which we analyze a movie data set. From these experiences, we conclude that the contributed techniques can be useful in exploring and understanding multivariate data. We are convinced that the extension of large displays with AR for information visualization has a great potential for data analysis and sense-making.
Immersive technologies offer new opportunities to support collaborative visual data analysis by providing each collaborator a personal, high-resolution view of a flexible shared visualisation space through a head mounted display. However, most prior studies of collaborative immersive analytics have focused on how groups interact with surface interfaces such as tabletops and wall displays. This paper reports on a study in which teams of three co-located participants are given flexible visualisation authoring tools to allow a great deal of control in how they structure their shared workspace. They do so using a prototype system we call FIESTA: the Free-roaming Immersive Environment to Support Team-based Analysis. Unlike traditional visualisation tools, FIESTA allows users to freely position authoring interfaces and visualisation artefacts anywhere in the virtual environment, either on virtual surfaces or suspended within the interaction space. Our participants solved visual analytics tasks on a multivariate data set, doing so individually and collaboratively by creating a large number of 2D and 3D visualisations. Their behaviours suggest that the usage of surfaces is coupled with the type of visualisation used, often using walls to organise 2D visualisations, but positioning 3D visualisations in the space around them. Outside of tightly-coupled collaboration, participants followed social protocols and did not interact with visualisations that did not belong to them, even when these were outside their owner's personal workspace.
Collaborative visual analytics leverages social interaction to support data exploration and sensemaking. These processes are typically imagined as formalised, extended activities, between groups of dedicated experts, requiring expertise with sophisticated data analysis tools. However, there are many professional domains that benefit from support for short ‘bursts’ of data exploration between a subset of stakeholders with a diverse breadth of knowledge. Such ‘casual collaborative’ scenarios will require engaging features to draw users’ attention, with intuitive, ‘walk-up and use’ interfaces. This paper presents Uplift, a novel prototype system to support ‘casual collaborative visual analytics’ for a campus microgrid, co-designed with local stakeholders. An elicitation workshop with key members of the building management team revealed relevant knowledge is distributed among multiple experts in their team, each using bespoke analysis tools. Uplift combines an engaging 3D model on a central tabletop display with intuitive tangible interaction, as well as augmented-reality, mid-air data visualisation, in order to support casual collaborative visual analytics for this complex domain. Evaluations with expert stakeholders from the building management and energy domains were conducted during and following our prototype development and indicate that Uplift is successful as an engaging backdrop for casual collaboration. Experts see high potential in such a system to bring together diverse knowledge holders and reveal complex interactions between structural, operational, and financial aspects of their domain. Such systems have further potential in other domains that require collaborative discussion or demonstration of models, forecasts, or cost-benefit analyses to high-level stakeholders.
Virtual reality (VR) headsets offer a large and immersive workspace for displaying visualizations with stereoscopic vision, as compared to traditional environments with monitors or printouts. The controllers for these devices further allow direct three-dimensional interaction with the virtual environment. In this paper, we make use of these advantages to implement a novel multiple and coordinated view (MCV) system in the form of a vertical stack, showing tilted layers of geospatial data. In a formal study based on a use-case from urbanism that requires cross-referencing four layers of geospatial urban data, we compared it against more conventional systems similarly implemented in VR: a simpler grid of layers, and one map that allows for switching between layers. Performance and oculometric analyses showed a slight advantage of the two spatial-multiplexing methods (the grid or the stack) over the temporal-multiplexing method (blitting). Subgrouping the participants based on their preferences, characteristics, and behavior enabled a more nuanced analysis, allowing us to establish links between, e.g., saccadic information, experience with video games, and preferred system. In conclusion, we found that none of the three systems is optimal, and a choice of different MCV systems should be provided in order to optimally engage users.
Many cities, countries and transport operators around the world are striving to design intelligent transport systems. These systems capture the value of multisource and multiform data related to the functionality and use of transportation infrastructure to better support human mobility, interests, economic activity and lifestyles. They aim to provide services that can enable transportation customers and managers to be better informed and make safer and more efficient use of infrastructure.
In developing principles, guidelines, methods and tools to enable synergistic work between humans and computer-generated information, the science of visual analytics continues to expand our understanding of data through effective and interactive visual interfaces.
In this paper, we describe an application of visual analytics related to the study of movement and transportation systems. This application documents the use of rapid, 2D and 3D web visualisation and data analytics libraries and explores their potential added value to the analysis of big public transport performance data. A novel approach to displaying such data through a generalisable framework visualisation system is demonstrated. This framework recalls over a year’s worth of public transport performance data at a highly granular level in a fast, interactive browser-based environment.
Greater Sydney, Australia forms a case study to highlight potential uses of the visualisation of such large, passively-collected data sets as an applied research scenario. In this paper, we argue that such highly visual systems can add data-driven rigour to service planning and longer-term transport decision-making. Furthermore, they enable the sharing of quality of service statistics with various stakeholders and citizens and can showcase improvements in services before and after policy decisions. The paper concludes by making recommendations on the value of this approach in embedding these or similar web-based systems in transport planning practice, performance management, optimisation and understanding of customer experience.
This paper presents a study on the usage landscape of augmented reality (AR) and virtual reality (VR) in the architecture, engineering and construction sectors, and proposes a research agenda to address the existing gaps in required capabilities. A series of exploratory workshops and questionnaires were conducted with the participation of 54 experts from 36 organisations from industry and academia. Based on the data collected from the workshops, six AR and VR use-cases were defined: stakeholder engagement, design support, design review, construction support, operations and management support, and training. Three main research categories for a future research agenda have been proposed, i.e.: (i) engineering-grade devices, which encompasses research that enables robust devices that can be used in practice, e.g. the rough and complex conditions of construction sites; (ii) workflow and data management; to effectively manage data and processes required by AR and VR technologies; and (iii) new capabilities; which includes new research required that will add new features that are necessary for the specific construction industry demands. This study provides essential information for practitioners to inform adoption decisions. To researchers, it provides a research road map to inform their future research efforts. This is a foundational study that formalises and categorises the existing usage of AR and VR in the construction industry and provides a roadmap to guide future research efforts.
Urban planning is increasingly data driven, yet the challenge of designing with data at a city scale and remaining sensitive to the impact at a human scale is as important today as it was for Jane Jacobs. We address this challenge with Urban Mosaic, a tool for exploring the urban fabric through a spatially and temporally dense data set of 7.7 million street-level images from New York City, captured over the period of a year. Working in collaboration with professional practitioners, we use Urban Mosaic to investigate questions of accessibility and mobility, and preservation and retrofitting. In doing so, we demonstrate how tools such as this might provide a bridge between the city and the street, by supporting activities such as visual comparison of geographically distant neighborhoods, and temporal analysis of unfolding urban development.
Daily mobility data describe individual displacements over 24-hour periods and are an important source of information to understand the real rhythm of a city, to provide appropriate transportation policies, and to support investment decisions. Geovisualization researchers have designed multiple coordinated views environments, combining spatial and temporal dimensions, and providing comparison of indicators. Daily mobility analysis is complex and requires simultaneous exploration and combination of different indicators at different spatial and temporal granularity levels. The design of effective geovisualization environments supporting this analysis evokes several challenges due to the diversity and multiplicity of indicators, the granularity of space and time, and time integration. In this paper, we propose a geovisualization approach enabling the dynamic visualization of diverse indicators, as well as the exploration of space, time, and other attributes. We use multiple screens embedding customizable dashboards, allowing the users to arrange views and compare indicators as best fits their analysis. The approach also integrates a mobile device serving as a display and interaction tool to physically control the evolution of the visualization over time.
A Space-Time Cube enables analysts to clearly observe spatio-temporal features in movement trajectory datasets in geovisualization. However, its general usability is impacted by a lack of depth cues, a reported steep learning curve, and the requirement for efficient 3D navigation. In this work, we investigate a Space-Time Cube in the Immersive Analytics domain. Based on a review of previous work and selecting an appropriate exploration metaphor, we built a prototype environment where the cube is coupled to a virtual representation of the analyst's real desk, and zooming and panning in space and time are intuitively controlled using mid-air gestures. We compared our immersive environment to a desktop-based implementation in a user study with 20 participants across 7 tasks of varying difficulty, which targeted different user interface features. To investigate how performance is affected in the presence of clutter, we explored two scenarios with different numbers of trajectories. While the quantitative performance was similar for the majority of tasks, large differences appear when we analyze the patterns of interaction and consider subjective metrics. The immersive version of the Space-Time Cube received higher usability scores, much higher user preference, and was rated to have a lower mental workload, without causing participants discomfort in 25-minute-long VR sessions.
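The core of a Space-Time Cube is mapping each (latitude, longitude, time) sample of a trajectory into a 3D position, with time on the vertical axis. A minimal sketch, assuming normalized axis-aligned bounds (an illustration, not the authors' implementation):

```python
def to_cube(lat, lon, t, bounds, t0, t1, size=1.0):
    """Map a (lat, lon, time) sample into a space-time cube of edge
    length `size`: x/y come from the normalized geographic position,
    z from the normalized timestamp."""
    lat0, lat1, lon0, lon1 = bounds
    x = (lon - lon0) / (lon1 - lon0) * size
    y = (lat - lat0) / (lat1 - lat0) * size
    z = (t - t0) / (t1 - t0) * size
    return x, y, z
```

Zooming and panning in space and time then amount to changing `bounds` and the `t0`/`t1` window before re-mapping the trajectories.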
Immersive analytics (IA) is a new term referring to the use of immersive technologies for data analysis. Yet such applications are not new, and numerous contributions have been made in the last three decades. However, no survey reviewing all these contributions is available. Here we propose a survey of IA from the early nineties until the present day, describing how rendering technologies, data, sensory mapping, and interaction means have been used to build IA systems, as well as how these systems have been evaluated. The conclusions that emerge from our analysis are that multi-sensory aspects of IA are under-exploited; that the 3DUI and VR community's knowledge regarding immersive interaction is not sufficiently utilised; and that the IA community should focus on converging towards best practices, as well as aim for real-life IA systems.
Urban traffic noise situations are usually visualized as conventional 2D maps or 3D scenes. These representations are indispensable tools to inform decision makers and citizens about issues of health, safety, and quality of life but require expert knowledge in order to be properly understood and put into context. The subjectivity of how we perceive noise as well as the inaccuracies in common noise calculation standards are rarely represented. We present a virtual reality application that seeks to offer an audiovisual glimpse into the background workings of one of these standards, by employing a multisensory, immersive analytics approach that allows users to interactively explore and listen to an approximate rendering of the data in the same environment that the noise simulation occurs in. In order for this approach to be useful, it should manage complicated noise level calculations in a real time environment and run on commodity low-cost VR hardware. In a prototypical implementation, we utilized simple VR interactions common to current mobile VR headsets and combined them with techniques from data visualization and sonification to allow users to explore road traffic noise in an immersive real-time urban environment. The noise levels were calculated over CityGML LoD2 building geometries, in accordance with Common Noise Assessment Methods in Europe (CNOSSOS-EU) sound propagation methods.
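The real-time noise-level calculations described above ultimately combine a source's power with propagation attenuation. A drastically simplified free-field point-source sketch (spherical spreading only; CNOSSOS-EU additionally models ground effects, reflections, and diffraction):

```python
import math

def level_at_distance(lw_db, d_m):
    """Sound pressure level of a point source in free field:
    Lp = Lw - 20*log10(d) - 11 dB (spherical spreading).
    lw_db is the source's sound power level, d_m the distance in
    metres. A simplification of the full CNOSSOS-EU method."""
    return lw_db - 20.0 * math.log10(d_m) - 11.0
```

Doubling the distance lowers the level by about 6 dB, which is why even this crude model conveys the spatial falloff a listener hears in the immersive environment.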
We propose a multi-scale Mixed Reality (MR) collaboration between the Giant, a local Augmented Reality user, and the Miniature, a remote Virtual Reality user, in Giant-Miniature Collaboration (GMC). The Miniature is immersed in a 360-video shared by the Giant, who can physically manipulate the Miniature through a tangible interface, a combined 360-camera with a 6 DOF tracker. We implemented a prototype system as a proof of concept and conducted a user study (n=24) comprising four parts, comparing: A) two types of virtual representations, B) three levels of Miniature control, C) three levels of 360-video view dependencies, and D) four 360-camera placement positions on the Giant. The results show users prefer a shoulder-mounted camera view, while a view frustum with a complementary avatar is a good visualization for the Miniature virtual representation. From the results, we give design recommendations and demonstrate an example Giant-Miniature Interaction.
We introduce IATK, the Immersive Analytics Toolkit, a software package for Unity that allows interactive authoring and exploration of data visualisation in immersive environments. The design of IATK was informed by interdisciplinary expert-collaborations as well as visual analytics applications and iterative refinement over several years. IATK allows for easy assembly of visualisations through a grammar of graphics that a user can configure in a GUI— in addition to a dedicated visualisation API that supports the creation of novel immersive visualisation designs and interactions. IATK is designed with scalability in mind, allowing visualisation and fluid responsive interactions in the order of several million points at a usable frame rate. This paper outlines our design requirements, IATK’s framework design and technical features, its user interface, as well as application examples.
An understanding of the evolutionary patterns in areas of urban activity is crucial for official decision makers and urban planners. The origin-destination (OD) datasets generated by human daily travel behavior reflect urban dynamics. Previous spatio-temporal analysis methods utilize these datasets to extract popular city areas while ignoring the flow relationships between areas. Several methods are unable to automatically determine time steps with similar spatial characteristics, or fail to recognize the evolutionary patterns of various modalities for a city. In this paper, we propose a new methodology to discover the hidden semantic-level city dynamics from OD data. The method first carries out spatial simplification and constructs a sequence of location networks. Then, each hourly network is studied as a document consisting of trip relationships among location clusters, enabling a semantic analysis of the OD dataset as a document corpus. Hidden themes, namely traffic topics, are identified through a topic modeling technique in an unsupervised manner. Finally, an interactive visual analytics system is designed to intuitively demonstrate the probability-based thematic information and the evolutionary activity patterns of a city. The feasibility and validity of our method are demonstrated via case studies with two kinds of real-world datasets: a bike-sharing system (BSS) dataset and a taxi dataset. Semantic-level city structures and recurrent behaviors representing the life of a large set of users, as well as the differences in BSS usage patterns of two cities, are discovered. We also discover how people use different means of transportation within one city.
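The step of treating each hourly OD network as a "document" of trip "words" can be sketched as follows; the tuple layout and token format are assumptions for illustration, not the authors' code:

```python
from collections import Counter

def hourly_documents(trips):
    """Turn OD trips (hour, origin_cluster, dest_cluster) into one
    'document' per hour, whose 'words' are origin->destination
    cluster pairs with their trip counts -- the bag-of-words
    representation a topic model consumes."""
    docs = {}
    for hour, origin, dest in trips:
        docs.setdefault(hour, Counter())[f"{origin}->{dest}"] += 1
    return docs
```

Feeding these per-hour word counts to any standard topic-modeling routine then yields the recurring "traffic topics" described in the abstract.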
Visual analytics systems can greatly help in the analysis of urban data allowing domain experts from academia and city governments to better understand cities, and thus enable better operations, informed planning and policies. Effectively designing these systems is challenging and requires bringing together methods from different domains. In this paper, we discuss the challenges involved in designing a visual analytics system to interactively explore large spatio-temporal data sets and give an overview of our research that combines visualization and data management to tackle these challenges.
Immersive virtual- and augmented-reality headsets can overlay a flat image against any surface or hang virtual objects in the space around the user. The technology is rapidly improving and may, in the long term, replace traditional flat panel displays in many situations. When displays are no longer intrinsically flat, how should we use the space around the user for abstract data visualisation? In this paper, we ask this question with respect to origin-destination flow data in a global geographic context. We report on the findings of three studies exploring different spatial encodings for flow maps. The first experiment focuses on different 2D and 3D encodings for flows on flat maps. We find that participants are significantly more accurate with raised flow paths whose height is proportional to flow distance but fastest with traditional straight line 2D flows. In our second and third experiments, we compared flat maps, 3D globes and a novel interactive design we call MapsLink, involving a pair of linked flat maps. We find that participants took significantly more time with MapsLink than other flow maps while the 3D globe with raised flows was the fastest, most accurate, and most preferred method. Our work suggests that careful use of the third spatial dimension can resolve visual clutter in complex flow maps.
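The raised-flow encoding that performed well in the studies above can be sketched as a parabolic 3D arc whose apex height is proportional to the flow's map distance. The height-to-distance ratio and sample count below are illustrative constants, not values from the paper.

```python
# Toy sketch of a raised flow path: a parabolic arc from p to q whose peak
# height is proportional to the 2D distance between the endpoints.
import math

K = 0.25  # hypothetical height-to-distance ratio

def raised_flow(p, q, samples=5):
    """Sample a 3D arc over the 2D map segment p -> q."""
    peak = K * math.dist(p, q)
    pts = []
    for i in range(samples):
        t = i / (samples - 1)
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        z = peak * 4 * t * (1 - t)  # 0 at endpoints, peak at t = 0.5
        pts.append((x, y, z))
    return pts

arc = raised_flow((0, 0), (4, 0))
print(arc[0][2], arc[2][2], arc[-1][2])  # 0.0 at the ends, peak midway
```

Because longer flows rise higher, arcs that would overlap on a flat map separate vertically, which is one plausible reading of why this encoding improved accuracy.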
Visualizing 3D trajectories to extract insights about their similarities and spatial configuration is a critical task in several domains. Air traffic controllers, for example, deal with large quantities of aircraft routes to optimize safety in airspace, and neuroscientists attempt to understand neuronal pathways in the human brain by visualizing bundles of fibers from DTI images. Extracting insights from masses of 3D trajectories is challenging as the multiple three dimensional lines have complex geometries, may overlap, cross or even merge with each other, making it impossible to follow individual ones in dense areas. As trajectories are inherently spatial and three dimensional, we propose FiberClay: a system to display and interact with 3D trajectories in immersive environments. FiberClay renders a large quantity of trajectories in real time using GP-GPU techniques. FiberClay also introduces a new set of interactive techniques for composing complex queries in 3D space leveraging immersive environment controllers and user position. These techniques enable an analyst to select and compare sets of trajectories with specific geometries and data properties. We conclude by discussing insights found using FiberClay with domain experts in air traffic control and neurology.
MIT City Science Group (CS) studies the interaction of social, economic and physical characteristics of urban areas to understand how people use and experience cities with the goal of improving urban design practices to facilitate consensus between stakeholders. Long-established processes of engagement around urban transformation have been reliant on visual communication and complex negotiation to facilitate coordination between stakeholders, including community members, administrative bodies and technical professionals. City Science group proposes a novel methodology of interaction and collaboration called CityScope, a data-driven platform that simulates the impacts of interventions on urban ecosystems prior to detail-design and execution. As stakeholders collectively interact with the platform and understand the impact of proposed interventions in real-time, consensus building and optimization of goals can be achieved. In this article, we outline the methodology behind the basic analysis and visualization elements of the tool and the tangible user interface, to demonstrate an alternate solution to urban design strategies as applied to the Volpe Site case study in Kendall Square, Cambridge, MA.
The use of novel displays and interaction resources to support immersive data visualization and improve analytical reasoning is a research trend in the information visualization community. In this work, we evaluate the use of an HMD-based environment for the exploration of multidimensional data, represented in 3D scatterplots as a result of dimensionality reduction (DR). We present a new model for this problem, accounting for the two factors whose interplay determines the impact on overall task performance: the difference in errors introduced by performing dimensionality reduction to 2D or 3D, and the difference in human perception errors under different visualization conditions. This two-step framework offers a simple approach to estimate the benefits of using an immersive 3D setup for a particular dataset. Here, the DR errors for a series of roll call voting datasets when using two or three dimensions are evaluated through an empirical task-based approach. The perception error and overall task performance, on the other hand, are assessed through a comparative user study with 30 participants. Results indicated that perception errors were low and similar in all approaches, resulting in overall performance benefits in both desktop and HMD-based 3D techniques. The immersive condition, however, was found to require less effort to find information and less navigation, as well as providing a much stronger subjective perception of accuracy and engagement.
Recent years have witnessed the rapid development and wide adoption of immersive head-mounted devices, such as HTC VIVE, Oculus Rift, and Microsoft HoloLens. These immersive devices have the potential to significantly extend the methodology of urban visual analytics by providing critical 3D context information and creating a sense of presence. In this paper, we propose a theoretical model to characterize the visualizations in immersive urban analytics. Furthermore, based on our comprehensive and concise model, we contribute a typology of methods for combining 2D and 3D visualizations that distinguishes between linked views, embedded views, and mixed views. We also propose a supporting guideline to assist users in selecting a proper view under certain circumstances by considering visual geometry and spatial distribution of the 2D and 3D visualizations. Finally, based on existing works, possible future research opportunities are explored and discussed.
We introduce ImAxes, an immersive system for exploring multivariate data using fluid, modeless interaction. The basic interface element is an embodied data axis. The user can manipulate these axes like physical objects in the immersive environment and combine them into sophisticated visualisations. The type of visualisation that appears depends on the proximity and relative orientation of the axes with respect to one another, which we describe with a formal grammar. This straightforward composability leads to a number of emergent visualisations and interactions, which we review, and then demonstrate with a detailed multivariate data analysis use case.
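The core idea above, that the resulting visualisation is a function of axis proximity and relative orientation, can be illustrated with a tiny rule set. This is not ImAxes' actual grammar; the thresholds, rule names, and output labels below are invented for the example.

```python
# Illustrative sketch (not ImAxes' real grammar): map the pose of two embodied
# axes to a composed visualisation based on distance and relative angle.
import math

NEAR = 0.3  # metres; hypothetical proximity threshold

def angle_deg(u, v):
    """Angle between two direction vectors, in degrees."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

def compose(pos_a, dir_a, pos_b, dir_b):
    if math.dist(pos_a, pos_b) > NEAR:
        return "two separate histograms"   # axes too far apart to combine
    ang = angle_deg(dir_a, dir_b)
    if ang < 15:                           # near-parallel axes side by side
        return "parallel coordinates"
    if abs(ang - 90) < 15:                 # near-orthogonal axes
        return "2D scatterplot"
    return "unbound"

# Two orthogonal axes held close together compose into a scatterplot:
print(compose((0, 0, 0), (0, 1, 0), (0.1, 0, 0), (1, 0, 0)))
```

Evaluating such rules continuously as the user moves the axes is what makes the composition feel modeless: no explicit "create chart" command is ever issued.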
Analyzing large amounts of complex movement data requires appropriate visual and analytical methods. This paper proposes a 2-D star-icon based visualization technique for the visual exploration of multivariate movement events in a space-time cube. To test the proposed method, we derive multivariate events from massive real-world floating car data and visually explore spatio-temporal patterns. The experimental results show that our proposed methods are helpful in identifying interesting locations or functional areas, and assist the understanding of dynamic patterns.
Urban design is a highly visual discipline that requires visualization for informed decision making. However, traditional urban design tools are mostly limited to representations on 2D displays that lack intuitive awareness. The growing popularity of head-mounted displays (HMDs) offers a promising alternative with consumer-grade 3D displays. We introduce UrbanVR, an immersive analytics system with effective visualization and interaction techniques, to enable architects to assess designs in a virtual reality (VR) environment. Specifically, UrbanVR incorporates 1) a customized parallel coordinates plot (PCP) design to facilitate quantitative assessment of high-dimensional design metrics, 2) a series of egocentric interactions, including gesture interactions and handle-bar metaphors, to facilitate user interactions, and 3) a viewpoint optimization algorithm to help users explore both the PCP for quantitative analysis, and objects of interest for context awareness. Effectiveness and feasibility of the system are validated through quantitative user studies and qualitative expert feedback.
We present a tool for immersive analysis of spatial energy data. The tool is built on ImAxes, which was designed for immersive analytics of abstract data in virtual environments, and extends it with a geospatial layer that allows for iterative and collaborative sensemaking. We developed new interaction techniques that enable the user to freely choose data variables, combine them into charts, and use filters and data lenses to create interactive visualisations grounded in a map. Fusion between various separate data sources can take place directly in the hands of the humans who need to understand and explore it. In this paper, we describe the system development and present a use case scenario with data from Australia's National Energy Analytics Research Program.
In this work, we evaluate two standard interaction techniques for Immersive Analytics environments: virtual hands, with actions such as grabbing and stretching, and virtual ray pointers, with actions assigned to controller buttons. We also consider a third option: seamlessly integrating both modes and allowing the user to alternate between them without explicit mode switches. Easy-to-use interaction with data visualizations in Virtual Reality enables analysts to intuitively query or filter the data, in addition to the benefit of multiple perspectives and stereoscopic 3D display. While many VR-based Immersive Analytics systems employ one of the studied interaction modes, the effect of this choice is unknown. Considering that each has different advantages, we compared the three conditions through a controlled user study in the spatio-temporal data domain. We did not find significant differences between hands and ray-casting in task performance, workload, or interactivity patterns. Yet, 60% of the participants preferred the mixed mode and benefited from it by choosing the best alternative for each low-level task. This mode significantly reduced completion times by 23% for the most demanding task, at the cost of a 5% decrease in overall success rates.
The design space for user interfaces for Immersive Analytics applications is vast. Designers can combine navigation and manipulation to enable data exploration with ego- or exocentric views, have the user operate at different scales, or use different forms of navigation with varying levels of physical movement. This freedom results in a multitude of different viable approaches. Yet, there is no clear understanding of the advantages and disadvantages of each choice. Our goal is to investigate the affordances of several major design choices, to enable both application designers and users to make better decisions. In this work, we assess two main factors, exploration mode and frame of reference, consequently also varying visualization scale and physical movement demand. To isolate each factor, we implemented nine different conditions in a Space-Time Cube visualization use case and asked 36 participants to perform multiple tasks. We analyzed the results in terms of performance and qualitative measures and correlated them with participants' spatial abilities. While egocentric room-scale exploration significantly reduced mental workload, exocentric exploration improved performance in some tasks. Combining navigation and manipulation made tasks easier by reducing workload, temporal demand, and physical effort.
Immersive Analytics is a new research initiative that aims to remove barriers between people, their data and the tools they use for analysis and decision making. Here, the aims of immersive analytics research are clarified, along with its opportunities and historical context, and a broad research agenda for the field is provided. In addition, it is reviewed how the term immersion has been used to refer to both technological and psychological immersion, both of which are central to immersive analytics research.
This research presents an application for visualizing real-world cityscapes and massive transport network performance data sets in Augmented Reality (AR) using the Microsoft HoloLens, or any equivalent hardware. This runs in tandem with numerous emerging applications in the growing worldwide Smart Cities movement and industry. Specifically, this application seeks to address visualization of both real-time and aggregated city data feeds, such as weather, traffic and social media feeds. The software is developed in extensible ways, and is able to overlay various historic and live data sets coming from multiple sources.
Advances in computer graphics, data processing and visualization now allow us to tie these visual tools in with much more detailed, longitudinal, massive performance data sets to support comprehensive and useful forms of visual analytics for city planners, decision makers and citizens. Further, it allows us to show these in new interfaces such as the HoloLens and other head-mounted displays to enable collaboration and more natural mappings with the real world.
This toolkit enables a novel approach to exploring hundreds of millions of data points in order to find insights, trends, and patterns over significant periods of time and geographic space. Our development focuses on open data sets, which maximizes applicability to assessing the performance of networks of cities worldwide. The city of Sydney, Australia, is used as our initial application, showcasing a real-world example that enables analysis of transport network performance over the past twelve months.
Immersive analytics turns the very space surrounding the user into a canvas for data analysis, supporting human cognitive abilities in myriad ways. We present the results of a design study, contextual inquiry, and longitudinal evaluation involving professional economists using a Virtual Reality (VR) system for multidimensional visualization to explore actual economic data. Results from our preregistered evaluation highlight the varied use of space depending on context (exploration vs. presentation), the organization of space to support work, and the impact of immersion on navigation and orientation in the 3D analysis space.
This paper introduces GeoGate, an Augmented Reality tabletop system that extends the Space-Time Cube and utilizes a ring-shaped tangible user interface to explore correlations between entities in multiple location datasets. We demonstrate GeoGate in the context of the maritime domain, where operators seek to find geo-temporal associations between trajectories recorded from a global positioning system and light data extracted from night-time satellite images. GeoGate utilizes a tabletop system displaying a traditional 2D map in conjunction with a Microsoft HoloLens to present a single view of the data with a novel Augmented Reality extension of the Space-Time Cube. To validate GeoGate, we present the results of a user study comparing GeoGate with the existing 2D approach used in a normal desktop environment. The outcomes of the user study show that GeoGate's approach reduces mistakes in the interpretation of the correlations between various datasets, while the qualitative results show that such a system is preferable to the 2D approach for the majority of geo-temporal maritime tasks.
From emergency planning to real estate, many domains can benefit from collaborative exploration of urban environments in VR and AR. We have created an interactive experience that allows multiple users to explore live datasets in context of an immersive scale model of the urban environment with which they are related.
Route planning is an important daily activity and has been intensively studied owing to its broad applications. Extracting the driving experience of taxi drivers to learn about the best routes and to support dynamic route planning can greatly help both end users and governments to ease traffic problems. Travel frequency, representing the popularity of different road segments, plays an important role in experience-based path-finding models and route computation. However, the global frequency used in previous studies does not take into account the dynamic space-time characteristics of origins and destinations or the detailed travel frequency in different directions on the same road segment. This paper presents the space-time trajectory cube as a framework for dividing and organizing the trajectory space in terms of three dimensions (origin, destination, and time). After that, space-time trajectory cube computation and origin-destination constrained experience extraction methods are proposed to extract the fine-grained experience of taxi drivers based on a dataset of real taxi trajectories. Finally, a space-time-constrained graph was generated by merging drivers' experience with the road network to compute optimal routes. The framework and methods were implemented using a taxi trajectory dataset from Shenzhen, China. The results show that the proposed methods effectively extracted the driving experience of the taxi drivers and the trade-off between route length and travel time for routes with high trajectory coverage. They also indicate that road segment global frequency is not appropriate for representing driving experience in route planning models. These results are important for future research on route planning or path finding methods and their applications in navigation systems.
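The space-time trajectory cube described above can be sketched as an index keyed by (origin cell, destination cell, hour), under which per-segment traversal counts are accumulated so that frequency can be queried under an origin-destination-time constraint rather than globally. The grid resolution, coordinates, and segment IDs below are illustrative, not taken from the Shenzhen dataset.

```python
# Minimal sketch of a space-time trajectory cube: trips are binned by
# (origin cell, destination cell, hour), and road-segment frequencies are
# accumulated per bin instead of globally.
from collections import defaultdict

CELL = 0.01  # degrees per grid cell (illustrative resolution)

def cell(lon, lat):
    return (round(lon / CELL), round(lat / CELL))

# cube[(origin_cell, dest_cell, hour)] -> {segment_id: traversal count}
cube = defaultdict(lambda: defaultdict(int))

def add_trip(origin, dest, hour, segments):
    key = (cell(*origin), cell(*dest), hour)
    for seg in segments:
        cube[key][seg] += 1

def segment_frequency(origin, dest, hour, seg):
    """Directional segment frequency under an OD-time constraint."""
    return cube[(cell(*origin), cell(*dest), hour)].get(seg, 0)

# Two hypothetical morning trips between the same cells, sharing segment s2:
add_trip((114.05, 22.54), (114.10, 22.55), 8, ["s1", "s2"])
add_trip((114.05, 22.54), (114.10, 22.55), 8, ["s2", "s3"])
print(segment_frequency((114.05, 22.54), (114.10, 22.55), 8, "s2"))  # 2
```

Querying the cube with the trip's own origin, destination, and hour is what replaces the global frequency criticized in the abstract: the same segment can have very different counts in different OD-time bins.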
Bike-sharing systems are a popular mode of public transportation, increasing in number and size around the world. Public bike-sharing systems attend to the needs of a large number of commuters while synchronizing to the rhythm of big cities. To better understand the usage of such systems, we introduce an interactive visualization system to explore the dynamics of public bike-sharing systems by profiling its historical dataset. By coordinating a pixel-oriented timeline with a map, and introducing a scheme of partial reordering of time series, our design supports the identification of several patterns in temporal and spatial domains. We take New York City's bike-sharing program, Citi Bike, as a use case and implement a prototype to show changes in the system over a period of ten months, ranking stations by different properties, using any time interval in daily and monthly timelines. Different analyses are presented to validate the visualization system as a useful operational tool that can support the staff of bike-sharing programs of big cities in the exploration of such large datasets, in order to understand the commuting dynamics to overcome management problems and provide a better service to commuters.