Three-in-one: Levitation, Parametric Audio, and Mid-Air Haptic Feedback
Gözel Shakeri and Euan Freeman, Glasgow Interactive Systems Section, University of Glasgow (g.shakeri.1@research.gla.ac.uk, euan.freeman@glasgow.ac.uk)
William Frier, Michele Iodice, Benjamin Long, and Orestis Georgiou, Ultrahaptics Ltd. (first.last@ultrahaptics.com)
Carl Andersson, Chalmers University of Technology (carl.andersson@chalmers.se)
ABSTRACT
Ultrasound enables new types of human-computer interfaces, ranging from auditory and haptic
displays to levitation (visual). We demonstrate these capabilities with an ultrasonic phased array that
allows users to interactively manipulate levitating objects with mid-air hand gestures whilst also
receiving auditory feedback via highly directional parametric audio, and haptic feedback via focused
ultrasound onto their bare hands. Therefore, this demo presents the first ever ultrasound rig which
conveys information to three different sensory channels and levitates small objects simultaneously.
CCS CONCEPTS
• Human-centered computing → Haptic devices; Auditory feedback; Gestural input.
Figure 1: A portable and self-contained arrangement of ultrasonic transducers held together by laser-cut perspex and 3D-printed parts. This rig is used to demonstrate levitation, parametric audio and mid-air haptic feedback simultaneously, and can receive user input through a Leap Motion controller.
KEYWORDS
Levitation; Ultrasound; Gestural controllers; Interface design

CHI’19 Extended Abstracts, May 4–9, 2019, Glasgow, Scotland, UK
© 2019 Copyright held by the owner/author(s).
This is the author’s version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI’19 Extended Abstracts), May 4–9, 2019, Glasgow, Scotland, UK, https://doi.org/10.1145/3290607.3313264.
INTRODUCTION
Since the advent of computers, scientists and Sci-Fi enthusiasts have envisioned the fusion of the virtual and physical worlds in an “Ultimate Display” [7]. This display would ideally be “a room within which the computer can control the existence of matter”, shaping and reshaping matter to help us understand the shapes of objects, enable see-through and grasp-through objects, and provide multi-sensory feedback to enhance the experience. One possible approach towards such a display is facilitated by ultrasound.
Ultrasound enables levitation of many different particles (e.g. polystyrene beads, liquid drops), which can be computer-manipulated to display different shapes in mid-air [5]. Furthermore, using similar ultrasonic phased arrays, highly directional and steerable auditory feedback (i.e. parametric audio) [6] as well as haptic feedback [1] can be generated.
DEMO CONTRIBUTION
This demo paper presents, for the first time, a multimodal interaction (visual, auditory, and tactile) in which the user controls in-air levitating particles with hand gestures. Specifically, the ultrasonic levitation rig shown in Figure 1 provides directional auditory feedback on the gesture interaction and tactile feedback to enhance the sense of agency and improve the user experience, enabling a much broader range of applications than was possible until now.
Figure 2: A levitating bead traces a heart-shaped path along a user-defined hand gesture. The frames of a video were added together to produce this LeviPainting.

Figure 3: A levitating bead traces a letter-A-shaped path along a user-defined hand gesture. The frames of a video were added together to produce this LeviPainting.
BACKGROUND
Ultrasound can be used for a multitude of interactive applications and is becoming more accessible to
designers and researchers through projects like Ultraino [4] and companies like Ultrahaptics Ltd.
Levitation. Acoustic waves can levitate particles of a wide range of sizes and materials [2]. There are many ways of achieving this, including the generation of acoustic standing waves, such that particles can ‘sit’ on the nodes of the waves, or acoustic traps, which apply radiation forces to particles and enable them to levitate. The latter requires an electronically controlled phased array of ultrasonic transducers, which allows for more stable and advanced manipulation of the levitated particles [4].
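To make the phased-array idea concrete, the minimal Python sketch below (an illustration under assumed geometry, not the authors’ implementation) computes the per-transducer phase delays that bring all emissions into phase at a chosen focal point; such a focus is the building block on which acoustic traps add further structure. The array pitch and example focus position are assumptions.

```python
# Minimal sketch: per-transducer phases that focus a phased array at a
# point in space. Not the authors' implementation; the array pitch and
# example focus position are assumptions for illustration.
import numpy as np

SPEED_OF_SOUND = 343.0              # m/s in air at room temperature
FREQUENCY = 40_000.0                # Hz, the carrier used by these arrays
WAVENUMBER = 2 * np.pi * FREQUENCY / SPEED_OF_SOUND

def focal_phases(transducers: np.ndarray, focus: np.ndarray) -> np.ndarray:
    """Phase (radians) per transducer so all waves arrive in phase at
    `focus`. `transducers` has shape (N, 3), `focus` shape (3,)."""
    distances = np.linalg.norm(transducers - focus, axis=1)
    # Compensate each element's travel time: phi_i = -k * d_i (mod 2*pi).
    return (-WAVENUMBER * distances) % (2 * np.pi)

# Example: a 9 x 14 grid with an assumed 10.3 mm pitch, focused 10 cm
# above the array centre. An acoustic trap adds further structure (e.g.
# a phase signature) on top of such a focus.
cols, rows = np.meshgrid(np.arange(14), np.arange(9))
positions = np.stack([cols.ravel() * 0.0103,
                      rows.ravel() * 0.0103,
                      np.zeros(cols.size)], axis=1)
centre = positions.mean(axis=0)
phases = focal_phases(positions, centre + np.array([0.0, 0.0, 0.10]))
```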
Parametric Audio. This is achieved by appropriately pre-distorting and modulating an audio signal onto an ultrasonic carrier [6]. Propagation in air causes demodulation of the compound signal, which ‘spills out’ as audible sound along the ultrasound beam. It is possible to electronically steer the ultrasonic beam, and thus the audible signal, in a desired direction.
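As a hedged illustration of one common parametric-audio scheme (square-root amplitude modulation; the paper does not specify the demo’s actual signal chain), the following sketch pre-distorts an audio signal and modulates it onto a 40 kHz carrier:

```python
# Minimal sketch of one common parametric-audio scheme: square-root
# pre-distortion followed by amplitude modulation onto a 40 kHz carrier.
# Self-demodulation in air is roughly proportional to the square of the
# envelope, so the square root linearizes the audible output. Rates and
# depth are illustrative, not values from the paper.
import numpy as np

SAMPLE_RATE = 192_000     # Hz, high enough to represent the carrier
CARRIER_HZ = 40_000.0
MOD_INDEX = 0.8           # modulation depth m, 0 < m <= 1

def parametric_drive(audio: np.ndarray) -> np.ndarray:
    """`audio` is a mono signal in [-1, 1] at SAMPLE_RATE; returns the
    pre-distorted, modulated transducer drive signal."""
    t = np.arange(audio.size) / SAMPLE_RATE
    envelope = np.sqrt(np.clip(1.0 + MOD_INDEX * audio, 0.0, None))
    return envelope * np.sin(2 * np.pi * CARRIER_HZ * t)

# Example: a 1 kHz tone becomes audible along the beam after demodulation.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
drive = parametric_drive(0.5 * np.sin(2 * np.pi * 1_000.0 * t))
```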
Ultrasonic Haptics. Focused ultrasound can also exert pressure against the skin, enabling non-contact haptic feedback [1, 3]. This static pressure is too weak to be perceived, but it feels like vibration when the amplitude is modulated at a frequency within the range of vibrotactile perception (e.g., turning it on and off at 200 Hz), or when the focus is moved along a lateral or closed path at high speed.
Multimodal. There have been numerous example use cases of the above technologies in HCI, most often using 40 kHz ultrasound. Recent advancements in software and hardware have allowed combinations of the three, typically in pairs, to be demonstrated simultaneously. Here, we present an example of all three technologies working in parallel to create a multimodal interactive experience.
DEMO SET-UP
Two Ultrahaptics UHEV2 devices, each comprising 9 × 28 ultrasonic transducers, have been placed in a sandwich arrangement with their transducers facing each other, to create the desired acoustic fields and cancel any undesirable opposing pressure forces (Figure 1). Everything is held together by a laser-cut perspex and 3D-printed frame (L 39 × D 18 × H 31 cm) that includes additional USB-powered fans for cooling and has adjustable height.
for cooling and has adjustable height. The two transducer boards are spaced about 16.5 cm from one
another and are cabled together such that synchronization of top and boom boards is achieved,
which is necessary for stable levitation. The enclosing volume of about
L
28
.
8
×D
9
.
1
×H
16
.
5cm is
what we will refer to as the levitation space.
Figure 4: The ultrasonic rig is divided into different parts to support the three functionalities. The right half is the levitation area, where small objects can be manipulated in space via acoustic traps (orange). The top-left transducer array is used to project a beam of parametric audio (red) onto the table, such that it is reflected in the user’s direction. The bottom-left transducer array produces the haptic feedback (blue). The user’s hand gestures above the Leap Motion device (in front of the rig) and receives feedback about the interaction on the index finger.
We track the user’s gestures with a Leap Motion Controller (www.leapmotion.com). The path of the index finger within the Leap Motion’s interaction area is translated into the levitation space, and the input signal is smoothed by a moving-average filter. The sides of the rig are fitted with small platforms made of acoustically transparent material (Saati Acoustex B003HYD), enabling easy loading of the to-be-levitated polystyrene beads (approx. 2 mm diameter).
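A minimal sketch of this input pipeline is shown below, smoothing tracked finger positions with a moving-average filter and linearly mapping them into the levitation space; the window size and coordinate bounds are assumptions, not values from the paper:

```python
# Minimal sketch of the described input pipeline: smooth the tracked
# index-finger path with a moving-average filter, then map it linearly
# from the Leap Motion's interaction volume into the levitation space.
# Window size and coordinate bounds are assumptions, not paper values.
import numpy as np

WINDOW = 5   # moving-average length in samples (assumed)

def smooth_path(path: np.ndarray) -> np.ndarray:
    """`path` is (N, 3) finger positions; returns the moving average."""
    kernel = np.ones(WINDOW) / WINDOW
    return np.stack([np.convolve(path[:, axis], kernel, mode="valid")
                     for axis in range(3)], axis=1)

def map_to_levitation(p, leap_lo, leap_hi, lev_lo, lev_hi):
    """Linear interpolation between the two axis-aligned volumes."""
    alpha = (p - leap_lo) / (leap_hi - leap_lo)
    return lev_lo + alpha * (lev_hi - lev_lo)
```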
To achieve simultaneous levitation, parametric audio and haptic feedback, we dedicate different parts of the rig to particular purposes (Figure 4). While the right half (top and bottom) of the rig is dedicated to levitation (effectively halving the levitation space), the top left is used for parametric audio and the bottom left for haptic feedback. The top-left 9 × 14 transducer array projects a beam of parametric audio downwards, at an angle of 30 degrees, onto the table on which the rig stands. The beam then reflects and can be heard by the user if they stand at the right height and position, such that their head is in the reflected parametric audio beam. Meanwhile, the bottom-left 9 × 14 transducer array focuses ultrasound onto the user’s hand, which is tracked by the Leap Motion controller. In this way, we minimize interference and corruption between the different acoustic fields. The left and right functions can easily be swapped dynamically.
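As an illustration of this partitioning (our own reading of the layout above, not the authors’ code), the sketch below splits each 9 × 28 board into a right 9 × 14 levitation half and left 9 × 14 blocks for parametric audio (top board) and haptics (bottom board):

```python
# Illustrative partition of each 9 x 28 board into the functional regions
# described above (our reading of the layout, not the authors' code):
# right half levitation on both boards, top-left parametric audio,
# bottom-left haptics. Swapping left/right is just re-slicing.
import numpy as np

ROWS, COLS = 9, 28
HALF = COLS // 2

def board_regions() -> dict:
    idx = np.arange(ROWS * COLS).reshape(ROWS, COLS)   # transducer indices
    return {
        "levitation_top":    idx[:, HALF:],   # right half, top board
        "levitation_bottom": idx[:, HALF:],   # right half, bottom board
        "parametric_audio":  idx[:, :HALF],   # left half, top board
        "haptics":           idx[:, :HALF],   # left half, bottom board
    }

regions = board_regions()
assert regions["parametric_audio"].shape == (ROWS, HALF)   # 9 x 14
```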
INTERACTIVE DEMO
To exemplify the capabilities afforded by the multimodal levitation rig described above, we present an interactive function we call LeviPaint, which uses a levitating particle as a paint brush in the air. The levitating bead follows a user-defined path and, if this path is captured in a long-exposure image, a LeviPainting is created (see, for example, Figures 2 and 3).
To create a LeviPainting, users are guided through three phases of interaction. In phase 1, the user draws a shape in mid-air using their index finger above the Leap Motion hand tracker. When the hand enters the interaction area, the rig produces a short high-pitched beep, prompting the user that their hand is being tracked and that the system is ready for input. On hearing that tone, the user starts drawing and the system records their motion; the rig also projects haptic feedback onto the user’s index finger. Once the user’s hand exits the interaction region and the Leap Motion loses track of it, the system progresses to phase 2. In phase 2, the rig transports the bead from the loading platform towards the centre of the levitation space, then replays the recorded motion with the bead tracing out the same path as the user’s hand. The bead’s motion is then traced back in reverse and the bead is dropped. During the levitation flight, an audio track is played from the top-left part of the rig, which is perceived as if coming from under the table due to the reflection described previously. A DSLR camera takes a long-exposure photograph (approx. 30 seconds) of the moving levitating particle against a dark background to capture its trail. Finally, in phase 3, the LeviPainting is produced and displayed on a nearby LCD screen. The whole interaction takes about 1 minute and requires 1 more to reset for the next user.
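The three phases can be summarized as a small control flow. The sketch below is a hypothetical outline only: the tracker, rig, camera and screen objects and every method on them are placeholder names for subsystems whose interfaces the paper does not expose:

```python
# Hypothetical outline of the three-phase LeviPaint flow. All objects and
# methods here are placeholder names, not the authors' actual software.
def levipaint_session(tracker, rig, camera, screen):
    # Phase 1: record the gesture while giving audio and haptic feedback.
    rig.play_beep()                        # "ready for input" prompt
    path = []
    while tracker.hand_present():
        tip = tracker.index_tip()
        path.append(tip)
        rig.focus_haptics_at(tip)          # feedback on the index finger

    # Phase 2: replay the path with the bead under a long exposure.
    rig.load_bead()
    camera.start_long_exposure(seconds=30)
    rig.trace(path)                        # forward along the gesture...
    rig.trace(path[::-1])                  # ...then back in reverse
    rig.drop_bead()
    painting = camera.finish()

    # Phase 3: show the resulting LeviPainting.
    screen.show(painting)
```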
CONCLUSION
Ultrasound oers a wide range of multimodal (audio-visual-haptic) opportunities for information
displays. Not only can it levitate and manipulate small objects in real-time to draw dierent shapes
in mid-air, but it can also present haptic and audio feedback to the user through the same ultrasonic
hardware apparatus. Our demo presents the first ever compact and self-contained ultrasonic rig
capable of these three technologies (levitation, mid-air haptics, parametric audio) simultaneously and
therefore can encourage broad discussion about new HCI application areas. For instance, how can we
scale this technology up to room-sized deployments; what kind of immersive applications are possible;
can we communicate science in a new way; and can we create new and exciting art installations?
ACKNOWLEDGEMENTS
This research is funded by the European Union’s Horizon 2020 research and innovation programme
(#737087).
REFERENCES
[1] T. Carter, S. A. Seah, B. Long, B. Drinkwater, and S. Subramanian. 2013. UltraHaptics: Multi-Point Mid-Air Haptic Feedback for Touch Surfaces. UIST (2013), 505–514. https://doi.org/10.1145/2501988.2502018
[2] E. Freeman, J. Williamson, P. Kourtelos, and S. Brewster. 2018. Levitating Object Displays with Interactive Voxels. In PerDis ’18. ACM Press, Article 15. https://doi.org/10.1145/3205873.3205878
[3] T. Hoshi, M. Takahashi, T. Iwamoto, and H. Shinoda. 2010. Noncontact Tactile Display Based on Radiation Pressure of Airborne Ultrasound. IEEE Transactions on Haptics 3, 3 (July 2010), 155–165. https://doi.org/10.1109/TOH.2010.4
[4] A. Marzo, T. Corke, and B. W. Drinkwater. 2018. Ultraino: An Open Phased-Array System for Narrowband Airborne Ultrasound Transmission. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control (2018). https://doi.org/10.1109/TUFFC.2017.2769399
[5] Y. Ochiai, T. Hoshi, and J. Rekimoto. 2014. Pixie Dust: Graphics Generated by Levitated and Animated Objects in Computational Acoustic-potential Field. ACM Trans. Graph. 33, 4, Article 85 (July 2014), 13 pages. https://doi.org/10.1145/2601097.2601118
[6] F. J. Pompei. 1995. Sound From Ultrasound: The Parametric Array as an Audible Sound Source. Technical Report. http://sound.media.mit.edu/%7Ebv/
[7] I. Sutherland. 2001. The Ultimate Display. Proceedings of the IFIPS Congress 65(2):506–508. New York: IFIP 2 (01 2001).