Thesis

A Perceptual Approach to Audio-Visual Instrument Design, Composition and Performance

Abstract

The thesis presents a perceptual approach to audio-visual instrument design, composition and performance. The approach informs practical work as well as a parametric visualisation model, which can be used to analyse sensory dominance, sonic expression and spatial presence in any audio-visual performance language. The practical work intends for the image to function as a stage scene, which reacts to the music and allows attention to focus on the relation between the sounds themselves. This is challenging, because vision usually dominates over audition. To clarify the problem, the thesis extrapolates from audio-visual theory, psychology, neuroscience, interaction design and musicology. The investigations lead to three creative principles, which inform the design of an instrument that combines a custom zither and audio-visual 3D software. The instrument uses disparities between the acoustic and digital outputs so as to explore those creative principles: a) to threshold the performerʼs control over the instrument and the instrumentʼs unpredictability, in ways that convey musical expression; b) to facilitate perceptual simplification of visual dynamics; c) to create an audio-visual relationship that produces a sense of causation, and simultaneously confounds the cause and effect relationships. This latter principle is demonstrated with a study on audio-visual mapping and perception, whose conclusions are equally applicable to the audio-visual relationship in space. Yet importantly, my creative decisions are not driven by demonstrative aims. Regarding the visual dynamics, the initial creative work ensures perceptual simplification, but the final work exposes a gray area relating to how the audienceʼs attention might change over time. In any case, the parametric visualisation model can reveal how any audio-visual performance work might converge with or diverge from these three creative principles.
The model combines parameters for interaction, sonic and visual dynamics, audio-visual relationship, physical performance setup and semantics. The parameters for dynamics and semantics reflect how stimuli inform attention at a particular timescale. The thesis uses the model to analyse a set of audio-visual performance languages, to represent my solo performance work from a creative perspective, and to visualise the workʼs versatility in collaboration with other musicians.
Keywords: NIME, audio-visual performance, music, 3D environments, perception, attention

... We have shown how the parametric visualisation model can be used to analyse existing audio-visual instruments. It also provides a theoretical perspective from which to create new audio-visual performances and develop new audio-visual systems. We have applied the model in the development of the audio-visual instrument mentioned earlier in this chapter, which processes 3D sound and image based on an acoustic zither input. ...
Chapter
This chapter proposes a parametric model that is useful in audio-visual instrument design, composition and performance. We can draw a separation between those activities, but in practice that separation might not be so obvious: ultimately, the iterative creation process must always consider the final, global experience. Derived from a perceptual approach, the model is applicable to a broad diversity of aesthetic options and technical platforms. One can equally discard some of the parameters to analyse recorded audio pieces and films. On the one hand, the model enables the separate analysis of performer-instrument interaction, sound, image, audio-visual relationship and physical setup. On the other, it enables the analysis of how their combination conducts the audience’s experience. The chapter begins by presenting each parameter independently, while illustrating its use with a range of artistic examples. It then explains how their combination facilitates the analysis of expression, of the relative strength of sound and image, and of the audience’s feeling of presence. Finally, it demonstrates how the model can be used in creative practice, showing its usefulness as a compositional tool.
Chapter
Full-text available
Psychology provides an important base from which to understand music, and is very relevant for electronic music in particular, where psychological theories have even inspired new compositional explorations. Furthermore, in analysing and composing electronic music, traditional music theory is often not applicable. There is no conventional score available on which the analysis of the music could be based, for the music does not rest solely on certain standard notated pitch structures and rhythmic frameworks, but encompasses timbre, spatialisation and other general auditory parameters. An appreciation of the role of aural cognition is vital for a true engagement with this field, where any sounding object is fair game. The purpose of this chapter is to provide an introduction to perceptual and cognitive processes of music that are fundamental for understanding electronic music. The chapter begins with a discussion of the neuroscientific basis of the auditory system. This is followed by a discussion of low-level phenomena of audition, including the localisation of sound sources, masking, auditory stream segregation and the perception of timbre. Next, the perception of pitch is tackled, with a discussion on its relation to alternative tunings. Finally, basic notions of rhythm perception are introduced. For each of these parts, electronic music examples illustrating the perceptual principles will be given. Any and all principles expounded in this chapter might be taken up and profitably investigated by electronic musicians.
Article
Full-text available
This article pinpoints a specific movement within the broad spectrum of music technology to identify a musical and instrument-building tradition concerned with gesture. This area of research and creative practice, often labeled NIME after the international conference series New Interfaces for Musical Expression, goes beyond enhancing traditional instrument performance practice to looking at new paradigms for instrumental performance. This article focuses on musically driven efforts to exploit analog and digital technologies to capture musical gesture and afford new forms of sonic articulation and musical expression. It retraces the history of music technology in the twentieth century that led up to the founding of NIME and introduces composers and performers who have established a performance practice on interactive, sensor-based musical instruments. Finally, it finishes by indicating current directions in the field, including the musical exploitation of biometric signals and location-tracking technologies.
Article
Full-text available
Computer code is a form of notational language. It prescribes actions to be carried out by the computer, often by systems called interpreters. When code is used to write music, we are therefore operating with programming language as a relatively new form of musical notation. Music is a time-based art form and the traditional musical score is a linear chronograph with instructions for an interpreter. Here code and traditional notation are somewhat at odds, since code is written as text, without any representational timeline. This can pose problems, for example for a composer who is working on a section in the middle of a long piece, but has to repeatedly run the code from the beginning or make temporary arrangements to solve this difficulty in the compositional process. In short: code does not come with a timeline but is rather the material used for building timelines. This article explores the context of creating linear ‘code scores’ in the area of musical notation. It presents the Threnoscope as an example of a system that implements both representational notation and a prescriptive code score.
Conference Paper
Full-text available
Under some situations sensory modalities compete for attention, with one modality attenuating processing in a second modality. Almost forty years of research with adults has shown that this competition is typically won by the visual modality. Using a discrimination task on an eye tracker, the current research provides novel support for auditory dominance, with words and nonlinguistic sounds slowing down visual processing. At the same time, there was no evidence suggesting that visual input slowed down auditory processing. Several eye tracking variables correlated with behavioral responses. Of particular interest is the finding that adults' first fixations were delayed when images were paired with auditory input, especially nonlinguistic sounds. This finding is consistent with neurophysiological findings and also consistent with a potential mechanism underlying auditory dominance effects.
Chapter
Full-text available
The term " presence " entered in the wide scientific debate in 1992 when Sheridan and Furness used it in the title of a new journal dedicated to the study of virtual reality systems and teleoperations: Presence, Teleoperators and Virtual Environments. Following this approach, the term " presence " has been used to describe a widely reported sensation experienced during the use of virtual reality. The main limitation of this vision is what is not said. What is presence for? Is it a specific cognitive process? To answer to these questions, a second group of researchers considers presence as a broad psychological phenomenon, not necessarily linked to the experience of a medium, whose goal is the control of the individual and social activity. In this chapter we support this second vision, starting from the following broad statements: (a) the psychology of presence is related to human action and its organization in the environment; (b) the psychology of presence is related to the body and to the embodiment process; (c) presence is an evolved process related to the understanding and management of the causal texture of both the physical and social worlds. In the following paragraphs we will justify these claims and underline their relevance for the design and usage of interactive technologies.
Conference Paper
Full-text available
Performers of Hindustani Classical Music depend heavily on complex models of motion and movement to elaborate melodic ideas through hand gestures and motion metaphors. Despite advances in computational modeling of grammars that govern the elaboration of a raga, these systems run into difficulties because of the nature of pitch systems in computer programs and in performance. We elaborate the problems with trying to obtain the ideas in a flexible-pitch scheme like HCM through the means of a fixed-pitch scheme like notation and computer music generation. In this paper, we present some experiments to study the effectiveness of a graphical notation scheme in HCM, a sound tracing study, and an analysis of the terminology used for ornaments, through which to understand motion in HCM. We plan to analyze them computationally to develop a formalism that would be more suitable to the nuances of HCM than the present schemes.
Conference Paper
Full-text available
This paper introduces the investment of play and its role and significance in the design and development of digital musical instruments (DMIs). Dimension map analyses are used to create a qualitative numerical estimate of DMI expression. Expression is then longitudinally compared to data sets spanning a 16-year study epoch of the Bent Leather Band. The study identifies multiplicity of control and other parameters as significant affordances for DMI musical expression and skill development. The paper argues that expression is proportional to the sum of invested play and the processional affordances latent within the DMI system.
Article
Full-text available
Immersive virtual environments offer the possibility of natural interaction within a virtual scene that is familiar to users because it is based on everyday activity. The use of such environments for the representation and control of interactive musical systems remains largely unexplored. We propose a paradigm for working with sound and music in a physical context, and develop a framework that allows for the creation of spatialized audio scenes. The framework uses structures called soundNodes, soundConnections, and DSP graphs to organize audio scene content, and offers greater control compared to other representations. 3-D simulation with physical modelling is used to define how audio is processed, and offers a high degree of expressive interaction with sound, particularly when the rules of sound propagation are bent. Sound sources and sinks are modelled within the scene along with the user/listener/performer, creating a navigable 3-D sonic space for sound-engineering, musical creation, listening, and performance.
Article
Full-text available
In an effort to find a better-suited interface for musical performance, a novel approach has been discovered and developed. At the heart of this approach is the concept of physical interaction with sound in space, where sound processing occurs at various 3-D locations and sending sound signals from one area to another is based on physical models of sound propagation. The control is based on a gestural vocabulary that is familiar to users, involving natural spatial interaction such as translating, rotating, and pointing in 3-D. This research presents a framework to deal with real-time control of 3-D audio, and describes how to construct audio scenes to accomplish various musical tasks. The generality and effectiveness of this approach has enabled us to reimplement several conventional applications, with the benefit of a substantially more powerful interface, and has further led to the conceptualization of several novel applications.
Conference Paper
Full-text available
While several researchers have grappled with the problem of comparing musical devices across performance, installation, and related contexts, no methodology yet exists for producing holistic, informative visualizations for these devices. Drawing on existing research in performance interaction, human-computer interaction, and design space analysis, the authors propose a dimension space representation that can be adapted for visually displaying musical devices. This paper illustrates one possible application of the dimension space to existing performance and interaction systems, revealing its usefulness both in exposing patterns across existing musical devices and aiding in the design of new ones.
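A dimension space plot of this kind places each device rating on its own radial axis and joins the points into a polygon. As a minimal sketch of the geometry involved (the axis names and scores below are hypothetical illustrations, not values from the paper):

```python
import math

# Hypothetical axes and 0-1 ratings for a musical device; both the
# dimension names and the scores are illustrative, not from the paper.
ratings = {
    "required expertise": 0.8,
    "degrees of freedom": 0.6,
    "feedback modalities": 0.4,
    "distribution in space": 0.2,
}

def radar_vertices(scores):
    """Place each rating on its own evenly spaced radial axis.

    Returns (name, x, y) tuples; joining the points in order yields
    the polygon drawn in a dimension space visualization.
    """
    n = len(scores)
    verts = []
    for i, (name, r) in enumerate(scores.items()):
        theta = 2 * math.pi * i / n  # axis angle, counter-clockwise
        verts.append((name, r * math.cos(theta), r * math.sin(theta)))
    return verts

vertices = radar_vertices(ratings)
```

Two devices analysed on the same axes then produce two overlaid polygons, which is what makes the representation useful for exposing patterns across devices.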
Article
Full-text available
At STEIM most research is applied towards specific projects of resident artists and composers. A large portion of this work is for live performance with digital electronics. Hardware and software designers are guided to take idiosyncrasy rather than generality as the prime guiding principle but have managed to create recyclable musical tools. An empirical method is promoted for both artist and technologist in order to recover the physicality of music lost in adapting to the abstractions of technology. The emphasis is on instrument design as a response to the questions posed by each artist's heterogeneous collections of ideas and tools.
Article
Full-text available
The authors examine how materiality emerges from complex chains of mediation in creative software use. The primarily theoretical argument is inspired and illustrated by interviews with two composers of electronic music. The authors argue that computer mediated activity should not primarily be understood in terms of simple mediation, but rather as chains of complex mediation in which the dominant form of representation is metonymy rather than metaphor.
Article
Full-text available
Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, this study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2-4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of colocalized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.
Article
Full-text available
In this paper we offer a theory of cross-modal objects. To begin, we discuss two kinds of linkages between vision and audition. The first is a duality: the visual system detects and identifies surfaces; the auditory system detects and identifies sources. Surfaces are illuminated by sources of light; sound is reflected off surfaces. However, the visual system discounts sources and the auditory system discounts surfaces. These and similar considerations lead to the Theory of Indispensable Attributes, which states the conditions for the formation of gestalts in the two modalities. The second linkage involves the formation of audiovisual objects, integrated cross-modal experiences. We describe research that reveals the role of cross-modal causality in the formation of such objects. These experiments use the canonical example of a causal link between vision and audition: a visible impact that causes a percussive sound.
Article
Full-text available
This paper explores the differences in the design and performance of acoustic and new digital musical instruments, arguing that with the latter there is an increased encapsulation of musical theory. The point of departure is the phenomenology of musical instruments, which leads to the exploration of designed artefacts as extensions of human cognition – as scaffolding onto which we delegate parts of our cognitive processes. The paper succinctly emphasises the pronounced epistemic dimension of digital instruments when compared to acoustic instruments. Through the analysis of material epistemologies it is possible to describe the digital instrument as an epistemic tool: a designed tool with such a high degree of symbolic pertinence that it becomes a system of knowledge and thinking in its own terms. In conclusion, the paper rounds up the phenomenological and epistemological arguments, and points at issues in the design of digital musical instruments that are germane due to their strong aesthetic implications for musical culture.
Article
Full-text available
First Person Shooters are among the most played computer video games. They combine navigation, interaction and collaboration in 3D virtual environments using simple input devices, i.e. mouse and keyboard. In this paper, we study the possibilities brought by these games for musical interaction. We present the Couacs, a collaborative multiprocess instrument which relies on interaction techniques used in FPS together with new techniques adding the expressiveness required for musical interaction. In particular, the Faders For All game mode allows musicians to perform pattern-based electronic compositions.
Article
Full-text available
Video Games are boring when they are too easy and frustrating when they are too hard. While most single-player games allow players to adjust basic difficulty (easy, medium, hard, insane), their overall level of challenge is often static in the face of individual player input. This lack of flexibility can lead to mismatches between player ability and overall game difficulty. In this paper, we explore the computational and design requirements for a dynamic difficulty adjustment system. We present a probabilistic method (drawn predominantly from Inventory Theory) for representing and reasoning about uncertainty in games. We describe the implementation of these techniques, and discuss how the resulting system can be applied to create flexible interactive experiences that adjust on the fly.
Article
Full-text available
Seeking new forms of expression in computer music, a small number of laptop composers are braving the challenges of coding music on the fly. Not content to submit meekly to the rigid interfaces of performance software like Ableton Live or Reason, they work with programming languages, building their own custom software, tweaking or writing the programs themselves as they perform. Often this activity takes place within some established language for computer music like SuperCollider, but there is no reason to stop errant minds pursuing their innovations in general scripting languages like Perl. This paper presents an introduction to the field of live coding, of real-time scripting during laptop music performance, and the improvisatory power and risks involved. We look at two test cases, the command-line music of slub utilising, amongst a grab-bag of technologies, Perl and REALbasic, and Julian Rohrhuber's Just In Time library for SuperCollider. We try to give a flavour of an exciting but hazardous world at the forefront of live laptop performance.
Conference Paper
Full-text available
q3osc is a heavily modified version of the open-sourced ioquake3 gaming engine featuring an integrated Oscpack implementation of Open Sound Control for bi-directional communication between a game server and one or more external audio servers. By combining ioquake3's internal physics engine and robust multiplayer network code with a full-featured OSC packet manipulation library, the virtual actions and motions of game clients and previously one-dimensional in-game weapon projectiles can be re-purposed as independent and behavior-driven OSC-emitting sound-objects for real-time networked performance and spatialization within a multi-channel audio environment. This paper details the technical and aesthetic decisions made during the development and initial implementations of q3osc and introduces specific mapping and spatialization paradigms currently in use for sonification.
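The OSC traffic described here consists of binary messages with a padded address, a type-tag string, and big-endian arguments. As a minimal, self-contained sketch of the OSC 1.0 wire format (the address `/projectile/position` is a hypothetical example, not an address documented for q3osc):

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate a string and pad it to a 4-byte boundary, per OSC 1.0."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message whose arguments are all float32."""
    packet = osc_pad(address.encode("ascii"))           # padded address
    packet += osc_pad(("," + "f" * len(floats)).encode("ascii"))  # type tags
    for f in floats:
        packet += struct.pack(">f", f)                  # big-endian float32
    return packet

# A hypothetical projectile-position message, ready to send over UDP:
msg = osc_message("/projectile/position", 1.0, 2.5, -0.75)
```

A packet like this could be sent to an audio server with a plain UDP socket; libraries such as Oscpack perform the same encoding in C++.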
Article
Full-text available
The analysis of digital music systems has traditionally been characterized by an approach that can be defined as phenomenological. The focus has been on the body and its relationship to the machine, often neglecting the system's conceptual design. This paper brings into focus the epistemic features of digital systems, which implies emphasizing the cognitive, conceptual and music theoretical side of our musical instruments. An epistemic dimension space for the analysis of musical devices is proposed.
Article
Full-text available
This paper reports on a survey conducted in the autumn of 2006 with the objective to understand people's relationship to their musical tools. The survey focused on the question of embodiment and its different modalities in the fields of acoustic and digital instruments. The questions of control, instrumental entropy, limitations and creativity were addressed in relation to people's activities of playing, creating or modifying their instruments. The approach used in the survey was phenomenological, i.e. we were concerned with the experience of playing, composing for and designing digital or acoustic instruments. At the time of analysis, we had 209 replies from musicians, composers, engineers, designers, artists and others interested in this topic. The survey was mainly aimed at instrumentalists and people who create their own instruments or compositions in flexible audio programming environments such as SuperCollider, Pure Data, ChucK, Max/MSP, CSound, etc.
Book
This title offers insight into a range of art and performance practices that have emerged as a result of a more technological world. These practices are integral to alternative and mainstream performance culture, and the author explores their aesthetic theorisation and analyses other approaches, including those offered by research into neuroesthetics.
Conference Paper
The role of attention in timing was evaluated in 2 experiments. In Experiment 1, participants reproduced the durations of melodies with either a coherent or an incoherent structure. Participants were tested under control (timing only) and detection (timing plus target detection) workload conditions. Reproductions were shorter and more inaccurate under detection conditions, and incoherent event structure extended the effect to a wider range of durations. In Experiment 2, participants reproduced the durations of auditory prose passages that represented 3 levels of mental workload and 3 levels of event structure. Both increases in workload and the degradation of structure led to inaccurate reproductions. The results point to the central role of attention in temporal experience.
Book
Sounding New Media examines the long-neglected role of sound and audio in the development of new media theory and practice, including new technologies and performance art events, with particular emphasis on sound, embodiment, art, and technological interactions. Frances Dyson takes an historical approach, focusing on technologies that became available in the mid-twentieth century (electronics, imaging, and digital and computer processing) and analyzing the work of such artists as John Cage, Edgard Varèse, Antonin Artaud, and Char Davies. She utilizes sound's intangibility to study ideas about embodiment (or its lack) in art and technology as well as fears about technology and the so-called "post-human." Dyson argues that the concept of "immersion" has become a path leading away from aesthetic questions about meaning and toward questions about embodiment and the physical. The result is an insightful journey through the new technologies derived from electronics, imaging, and digital and computer processing, toward the creation of an aesthetic and philosophical framework for considering the least material element of an artwork, sound.
Book
Listening to Noise and Silence engages with the emerging practice of sound art and the concurrent development of a discourse and theory of sound. In this original and challenging work, Salomé Voegelin immerses the reader in concepts of listening to sound artwork and the everyday acoustic environment, establishing an aesthetics and philosophy of sound and promoting the notion of a sonic sensibility. A multitude of sound works are discussed, by lesser known contemporary artists and composers (for example Curgenven, Gasson and Federer), historical figures in the field (Artaud, Feldman and Cage), and that of contemporary canonic artists such as Janet Cardiff, Bill Fontana, Bernard Parmegiani, and Merzbow. Informed by the ideas of Adorno, Merleau-Ponty and others, the book aims to come to a critique of sound art from its soundings rather than in relation to abstracted themes and pre-existing categories. Listening to Noise and Silence broadens the discussion surrounding sound art and opens up the field for others to follow.
Conference Paper
Many performers of novel musical instruments find it difficult to engage audiences beyond those in the field. Previous research points to a failure to balance complexity with usability, and a loss of transparency due to the detachment of the controller and sound generator. The issue is often exacerbated by an audience’s lack of prior exposure to the instrument and its workings. However, we argue that there is a conflict underlying many novel musical instruments in that they are intended to be both a tool for creative expression and a creative work of art in themselves, resulting in incompatible requirements. By considering the instrument, the composition and the performance together as a whole with careful consideration of the rate of learning demanded of the audience, we propose that a lack of transparency can become an asset rather than a hindrance. Our approach calls for not only controller and sound generator to be designed in sympathy with each other, but composition, performance and physical form too. Identifying three design principles, we illustrate this approach with the Serendiptichord, a wearable instrument for dancers created by the authors.
Article
A theoretical and historical account of the main preoccupations of makers of abstract films is presented in this book. The book's scope includes discussion of nonrepresentational forms as well as examination of experiments in the manipulation of time in films. The ten chapters discuss the following topics: art and cinematography, the first abstract films, the direction of formal cinema in France in 1920, the premature decline of international avant-garde film, cinema produced by the Russian filmmaker Dziga Vertov, the link between European and United States experimental film provided by Oskar Fischinger, United States abstract film after World War II, the beginnings of the new formal film, avant-garde film in 1966, and current developments. A selected bibliography is included.
Article
In this paper we reflect on the performer–instrument relationship by turning towards the thinking practices of the French philosopher Maurice Merleau-Ponty (1908–1961). Merleau-Ponty’s phenomenological idea of the body as being at the centre of the world highlights an embodied position in the world and bestows significance onto the body as a whole, onto the body as a lived body. In order to better understand this two-way relationship of instrument and performer, we introduce the notion of the performative layer, which emerges through strategies for dealing with discontinuities, breakdowns and the unexpected in network performance.
Article
This study is an analysis of working memory capacity in the context of both visual and auditory information. Working memory storage capacity is important because cognitive tasks can be completed only with sufficient ability to hold information as it is processed. The ability to repeat information depends on task demands but can be distinguished from a more constant, underlying mechanism: a central memory store limited to 3 to 5 meaningful items in young adults. The purpose of this study is to examine strategies that can increase the efficiency of the use of a limited capacity, or that allow the maintenance of additional information separate from that limited capacity. The researcher discusses why this central limit is important, how it can be observed, how it differs among individuals, and why it may exist. The review focuses on the nature of capacity limits, storage capacity limits, researchers' views of working memory, and evidence of both visual and auditory working memory. The results suggest that a focus on central capacity limits is beneficial in predicting which thought processes individuals can carry out, and in understanding individual differences in cognitive maturity and intellectual aptitude.
Article
This paper presents an approach to practice-based research in new musical instrument design. At a high level, the process involves drawing on relevant theories and aesthetic approaches to design new instruments, attempting to identify relevant applied design criteria, and then examining the experiences of performers who use the instruments with particular reference to these criteria. Outcomes of this process include new instruments, theories relating to musician-instrument interaction and a set of design criteria informed by practice and research.
Article
In Gibson's theory of perception, an organism directly perceives the value of the environment through affordances. By affordance, Gibson means the opportunities or possibilities of nature, which require the act of information pickup. Within design theory, however, there is a strong tendency towards separating perceptual information about affordances from the affordances themselves. Combining theoretical discussion with an empirical case study of a medical device, we suggest there is untapped value in the notion of direct perception and argue that there is meaning through doing. Looking at the role of affordances over time, rather than at a person's first exposure to a product, necessitates sensitivity toward enskilment and how people create meaning through the use of products.
Article
Recent theories of telepresence or spatial presence in a virtual environment argue that it is a subjective experience of being in the virtual environment, and that it is the outcome of constructing a mental model of the self as being located in the virtual environment. However, current theories fail to explain how the subjective experience of spatial presence emerges from the unconscious spatial cognition processes. To fill this gap, spatial presence is conceptualized here as a cognitive feeling. From this perspective, spatial presence is a feedback from unconscious cognitive processes that informs conscious thought about the state of the spatial cognitive system. Current theorizing on the origins and properties of cognitive feelings is reviewed and applied to spatial presence. This new conception of presence draws attention to the functionality of spatial presence for judgments, decisions, and behavior. By highlighting the distinction between spatial cognitive processes and the subjective feeling of spatial presence, the use of questionnaires is theoretically grounded and legitimized as a method of presence research. Finally, embodied cognition theories are reviewed to identify cues that give rise to spatial presence.