Conference Paper

Have Fun with Math: Multimodal, Interactive, and Immersive Exploration of Wave Functions with 3D Models

References
Article
The process of transforming data into sounds for auditory display provides unique user experiences and new perspectives for analyzing and interpreting data. A research study of data-to-music sonification, in which data are transformed into sounds based on musical elements, reveals how musical characteristics can serve analytical purposes while enhancing user engagement. An existing user engagement scale was applied to measure engagement levels in three conditions: melodic, rhythmic, and chordal contexts. This article reports findings from a user engagement study with musical traits and discusses the benefits and challenges of using musical characteristics in sonifications. The results can guide the design of future sonifications of multivariable data.
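As a concrete, purely illustrative example of what a data-to-music mapping can look like, the sketch below maps a numeric series onto a pentatonic scale and renders one short sine tone per value to a WAV file using only the Python standard library. The scale, tone length, and function names are assumptions made for illustration; this is not the mapping evaluated in the study above.

```python
# Illustrative data-to-music sonification sketch: each value becomes one tone
# on a pentatonic scale; higher values map to higher pitches.
import math
import struct
import wave

SAMPLE_RATE = 44100
PENTATONIC = [0, 2, 4, 7, 9]  # scale degrees, in semitones above the root


def value_to_frequency(value, vmin, vmax, root_hz=261.63, octaves=2):
    """Map a data value onto a pentatonic pitch (assumed, simplistic mapping)."""
    norm = (value - vmin) / (vmax - vmin) if vmax > vmin else 0.0
    degrees = [12 * o + d for o in range(octaves) for d in PENTATONIC]
    semitones = degrees[round(norm * (len(degrees) - 1))]
    return root_hz * 2 ** (semitones / 12)


def sonify(series, path="sonification.wav", tone_s=0.25):
    """Render one short sine tone per data value into a mono 16-bit WAV file."""
    vmin, vmax = min(series), max(series)
    n = int(SAMPLE_RATE * tone_s)
    frames = bytearray()
    for value in series:
        freq = value_to_frequency(value, vmin, vmax)
        for i in range(n):
            env = min(1.0, i / 500, (n - i) / 500)  # short fade to avoid clicks
            sample = 0.4 * env * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            frames += struct.pack("<h", int(sample * 32767))
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SAMPLE_RATE)
        f.writeframes(bytes(frames))


if __name__ == "__main__":
    sonify([3, 5, 2, 8, 9, 4, 7, 1, 6])  # hypothetical data series
```

A melodic mapping like this corresponds to only one of the three conditions the study compares; rhythmic and chordal mappings could instead vary tone onset times or stack several scale degrees per value.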
Article
Collaborative visual analytics leverages social interaction to support data exploration and sensemaking. These processes are typically imagined as formalised, extended activities between groups of dedicated experts, requiring expertise with sophisticated data analysis tools. However, there are many professional domains that benefit from support for short ‘bursts’ of data exploration between a subset of stakeholders with a diverse breadth of knowledge. Such ‘casual collaborative’ scenarios will require engaging features to draw users’ attention, with intuitive, ‘walk-up and use’ interfaces. This paper presents Uplift, a novel prototype system to support ‘casual collaborative visual analytics’ for a campus microgrid, co-designed with local stakeholders. An elicitation workshop with key members of the building management team revealed that relevant knowledge is distributed among multiple experts in their team, each using bespoke analysis tools. Uplift combines an engaging 3D model on a central tabletop display with intuitive tangible interaction, as well as augmented-reality, mid-air data visualisation, in order to support casual collaborative visual analytics for this complex domain. Evaluations with expert stakeholders from the building management and energy domains were conducted during and following our prototype development and indicate that Uplift is successful as an engaging backdrop for casual collaboration. Experts see high potential in such a system to bring together diverse knowledge holders and reveal complex interactions between structural, operational, and financial aspects of their domain. Such systems have further potential in other domains that require collaborative discussion or demonstration of models, forecasts, or cost-benefit analyses to high-level stakeholders.
Article
Visual representations of data introduce several possible challenges for the human visual perception system in perceiving brightness levels. These challenges might be easier to overcome by adding sound to the representation, a technique called sonification. Because sonification provides information in addition to the visual information, it could be useful in supporting visual perception. In the present study, the usefulness of sonification (in terms of accuracy and response time) was investigated with an interactive sonification test, in which participants were asked to identify the highest brightness level in a monochrome visual representation. The task was performed in four conditions, one with no sonification and three with different sonification settings. The results show that sonification is useful, as measured by higher task accuracy, and that participants' musicality facilitated the use of sonification, with better performance when sonification was used. The results were also supported by subjective measurements, in which participants reported an experienced benefit of sonification.
Article
In recent years, immersion has become a frequently emphasized factor in the geovisualization research agenda. A principal reason for this growing interest is the assumption that immersive virtual environments (IVE) facilitate the formation of spatial presence, generally understood as the sense of “being there”. In a virtually mediated environment, the feeling of being there is of particular concern for cartographic ambitions, in terms of generating insights through geospatial representation. Current literature indicates that immersive VR systems stimulate the experience of spatial presence; however, this assumption is mainly based upon user studies in the visual communication channel. Moreover, research on IVE for geovisualization matters has to date been focused on visual-graphical rather than on auditive or even multisensory representations in virtual space. In this context, the present paper aims to evaluate the potential of audiovisual cartography with immersive virtual environments. Following a brief discussion of basic concepts, such as immersion, spatial presence and embodiment, we will integrate these aspects into a geovisualization immersion pipeline (GIP), as a framework with which to systematically link the technical and cognitive aspects of IVE. In the subsequent sections, we will examine this framework in the audio channel by analyzing how sound is implemented and perceived in GeoIVE. As we shall see, the positive effect of a combined audio-visual vs. exclusively visual presentation is supported by a series of user studies of sound effects, making audiovisual cartography with IVE a rich and worthwhile field of research.
Article
Why talk about usability in a book about information design? Although there is not yet a consensus on a single definition of information design, one that I like is making "information accessible and usable to people" (Sless, 1992, p. 1). It is not enough to design well: we must also achieve information design that is usable. We therefore need to establish what "usable" means, so in this chapter I present five dimensions of usability, which can be used in several ways:
• As a model, they provide a way to understand what kind of usability is needed in different contexts;
• As a tool, they help guide the design process, suggesting both a general approach and specific choices;
• For evaluation, they are useful both for understanding why a design is failing and for suggesting appropriate techniques to get the design right.
This multifaceted view of usability allows designers of both complex and simpler products to understand user requirements and evaluate the success of the design. The work of an information designer shares elements with others including user experience designers, information architects, graphic artists, interaction designers, user interface designers, usability engineers, writers, content managers, indexers, and quite a few more.
Article
Representing a single data variable changing in time via sonification, or using that data to control a sound in some way, appears to be a simple problem but actually involves a significant degree of subjectivity. This paper is a response to my own focus on specific sonification tasks (Kramer, 1990, 1993; Fitch & Kramer, 1994), on broad theoretical concerns in auditory display (Kramer, 1994a, 1994b, 1995), and on the representation of high-dimensional data sets (Kramer, 1991a; Kramer & Ellison, 1991b). The design focus of this paper is partly a response to others who, like myself, have primarily employed single fundamental acoustic variables such as pitch or loudness to represent single data streams. These simple representations have framed three challenges:
• Behavioral and Cognitive Science: Can sonifications created with complex sounds changing simultaneously in several dimensions facilitate the formation of a stronger internal auditory image, or audiation, than would be produced by simpler sonifications?
• Human Factors and Applications: Would such a stronger internal image of the data prove to be more useful from the standpoint of conveying information?
• Technology and Design: How might these richer displays be constructed?
This final question serves as a starting point for this paper. After years of cautious sonification research I wanted to explore the creation of more interesting and compelling representations.
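One possible, purely hypothetical answer to the construction question above is a multi-parameter mapping in which each dimension of a data record drives a different acoustic variable, so that several dimensions change simultaneously within one sound. The variable names, ranges, and mappings in the sketch below are invented for illustration and are not Kramer's design.

```python
# Hypothetical multi-parameter sonification mapping: one data record drives
# pitch, loudness, stereo position, and duration of a single sound event.
def record_to_sound_params(record, ranges):
    """record: dict of variable name -> value; ranges: name -> (min, max).
    Returns synthesis parameters for one sonified event."""
    def norm(name):
        lo, hi = ranges[name]
        return (record[name] - lo) / (hi - lo) if hi > lo else 0.0

    return {
        "frequency_hz": 220.0 * 2 ** (2 * norm("temperature")),  # pitch: up to 2 octaves above 220 Hz
        "amplitude": 0.2 + 0.6 * norm("pressure"),                # loudness
        "pan": 2 * norm("humidity") - 1,                          # stereo position in [-1, 1]
        "duration_s": 0.1 + 0.4 * norm("wind_speed"),             # event length
    }


# Example with a made-up weather record:
print(record_to_sound_params(
    {"temperature": 18.0, "pressure": 1013.0, "humidity": 0.55, "wind_speed": 6.0},
    {"temperature": (-10, 40), "pressure": (950, 1050), "humidity": (0, 1), "wind_speed": (0, 30)},
))
```

Whether such simultaneous changes actually produce a stronger audiation than one-variable-to-pitch mappings is precisely the behavioral question the paper raises.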
Article
The integration of information from different sensory modalities has many advantages for human observers, including increased salience, resolution of perceptual ambiguities, and unified perception of objects and surroundings. Behavioral, electrophysiological, and neuroimaging data collected in various tasks, including localization and detection of spatial events, crossmodal perception of object properties, and scene analysis, are reviewed here. All the results highlight the multiple faces of crossmodal interactions and provide converging evidence that the brain takes advantage of spatial and temporal coincidence between events in the crossmodal binding of spatial features gathered through different modalities. Furthermore, the elaboration of a multimodal percept appears to be based on an adaptive combination of the contributions of each modality, according to the intrinsic reliability of each sensory cue, which itself depends on the task at hand and the kind of perceptual cues involved in sensory processing. Computational models based on Bayesian sensory estimation provide valuable explanations of the way the perceptual system could perform such crossmodal integration. Recent anatomical evidence suggests that crossmodal interactions affect early stages of sensory processing and could be mediated through a dynamic recurrent network involving backprojections from multimodal areas as well as lateral connections that can modulate the activity of primary sensory cortices, though future behavioral and neurophysiological studies should allow a better understanding of the underlying mechanisms.
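The adaptive, reliability-dependent combination of modalities described above is commonly formalized as reliability-weighted (maximum-likelihood) cue integration, in which each modality's estimate is weighted by its inverse variance. The sketch below illustrates that generic textbook model; it is not code from the reviewed studies.

```python
# Generic reliability-weighted (maximum-likelihood) cue integration sketch.
def integrate_cues(estimates, variances):
    """Combine per-modality estimates of the same quantity.

    estimates: single-modality estimates (e.g. visual and auditory location);
    variances: their noise variances. Returns the fused estimate and its variance.
    """
    reliabilities = [1.0 / v for v in variances]  # weight proportional to 1 / sigma^2
    total = sum(reliabilities)
    fused = sum(w * s for w, s in zip(reliabilities, estimates)) / total
    fused_variance = 1.0 / total  # never larger than the best single cue's variance
    return fused, fused_variance


# Example: a precise visual cue and a noisier auditory cue about location.
# The fused estimate lies closer to the visual cue, echoing "visual capture".
print(integrate_cues([10.0, 14.0], [1.0, 4.0]))  # -> (10.8, 0.8)
```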
Article
Virtual reality (VR) is rapidly growing, with the potential to change the way we create and consume content. In VR, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. In this survey, we review the body of work addressing multimodality in VR, its role and benefits in user experience, and the different applications that leverage multimodality across many disciplines. These works encompass several fields of research and demonstrate that multimodality plays a fundamental role in VR: enhancing the experience, improving overall performance, and yielding unprecedented abilities in skill and knowledge transfer.
Book
This work, a second edition of which has very kindly been requested, was followed by La Construction du réel chez l'enfant and was to have been completed by a study of the genesis of imitation in the child. The latter piece of research, whose publication we have postponed because it is so closely connected with the analysis of play and representational symbolism, appeared in 1945, inserted in a third work, La formation du symbole chez l'enfant. Together these three works form one entity dedicated to the beginnings of intelligence, that is to say, to the various manifestations of sensorimotor intelligence and to the most elementary forms of expression. The theses developed in this volume, which concern in particular the formation of the sensorimotor schemata and the mechanism of mental assimilation, have given rise to much discussion which pleases us and prompts us to thank both our opponents and our sympathizers for their kind interest in our work.
Article
We describe visualization software, Visualizer, that was developed specifically for interactive, visual exploration in immersive virtual reality (VR) environments. Visualizer uses carefully optimized algorithms and data structures to support the high frame rates required for immersion and the real-time feedback required for interactivity. As an application developed for VR from the ground up, Visualizer realizes benefits that usually cannot be achieved by software initially developed for the desktop and later ported to VR. However, Visualizer can also be used on desktop systems (Unix/Linux-based operating systems including Mac OS X) with a similar level of real-time interactivity, bridging the “software gap” between desktop and VR that has been an obstacle for the adoption of VR methods in the Geosciences. While many of the capabilities of Visualizer are already available in other software packages used in a desktop environment, the features that distinguish Visualizer are: (1) Visualizer can be used in any VR environment including the desktop, GeoWall, or CAVE; (2) in non-desktop environments the user interacts with the data set directly using a wand or other input devices instead of working indirectly via dialog boxes or text input; (3) on the desktop, Visualizer provides real-time interaction with very large data sets that cannot easily be viewed or manipulated in other software packages. Three case studies are presented that illustrate the direct scientific benefits realized by analyzing data or simulation results with Visualizer in a VR environment. We also address some of the main obstacles to widespread use of VR environments in scientific research with a user study that shows Visualizer is easy to learn and to use in a VR environment and can be as effective on desktop systems as native desktop applications.
Book
From the Publisher: In 1996, recognizing this book, ACM's Special Interest Group on Documentation (SIGDOC) presented Ben Shneiderman with the Joseph Rigo Award. SIGDOC praised the book as one "that took the jargon and mystery out of the field of human-computer interaction" and attributed the book's success to "its readability and emphasis on practice as well as research." In revising this best-seller, Ben Shneiderman again provides a complete, current, and authoritative introduction to user-interface design. The user interface is the part of every computer system that determines how people control and operate that system. When the interface is well designed, it is comprehensible, predictable, and controllable; users feel competent, satisfied, and responsible for their actions. In this book, the author discusses the principles and practices needed to design such effective interaction. Based on 20 years' experience, Shneiderman offers readers practical techniques and guidelines for interface design. As a scientist, he also takes great care to discuss underlying issues and to support conclusions with empirical results. Interface designers, software engineers, and product managers will all find here an invaluable resource for creating systems that facilitate rapid learning and performance, yield low error rates, and generate high user satisfaction. Coverage includes the human factors of interactive software (with added discussion of diverse user communities), tested methods to develop and assess interfaces, interaction styles (like direct manipulation for graphical user interfaces), and design considerations (effective messages, consistent screen design, appropriate color).
Gamify your classroom: A field guide to game-based learning
  • M. Farber