Article

BYU-BYU-View: a wind communication interface

Abstract

BYU-BYU-View is a novel interface realized through the symbiosis of wind input/output and computer graphics. "BYU-BYU" is a Japanese onomatopoeia for a howling wind. By integrating graphic presentation with the input and output of wind on a special screen, the interface adds a new element, wind, both to direct interaction between a user and a virtual environment and to communication over a network.

... Their challenge was to create a suitable sensing arrangement to measure tone holes and airflow with low-cost sensors, high sensitivity, and integration on a simple printed circuit board (PCB) [12]. Sawada et al. (2007) demonstrated the possibility of using 'wind' (i.e. breath, airflow) to communicate and interface with the computer in a mixed reality in combination with a special wind-permeable screen [13]. ...
... Sawada et al. (2007) demonstrated the possibility of using 'wind' (i.e. breath, airflow) to communicate and interface with the computer in a mixed reality in combination with a special wind-permeable screen [13]. For people who are motor impaired (e.g. ...
Conference Paper
Full-text available
This paper describes the vision and development of a tangible user interface (TUI) that allows 'glassblowing-like' interaction (IA) with a computer. The premise is that human fidelity in exerting pressure and airflow (i.e. breathing, blowing) could stimulate intuition and creative processing, and afford unconventional human-computer interaction (UHCI). The ultimate goal is to find out how the potential of the human body can be used to design, develop and analyze new spatial interaction methods that surpass the performance or application possibilities of currently available techniques. Multi-modal interactions are essential to computational processing whereby the human and machine are interconnected and coupled to enhance skills (analogue and digital), support rich performance, facilitate learning and foster knowledge in design and engineering processing. This paper describes the key concept of the TUI, the graphical user interface (GUI) and the data visualizer system. We illustrate the concept with a prototype system – the Air-Flow-Interaction-Interface (AFIF), testing and experimentation – to identify underlying research issues.
... Action-based mappings have also been used in exergames and therapeutic games, where a game element moves in accordance with the user's breathing upwards and downwards to navigate through a world [5,52,67,68,73]. Sawada et al. [61] presented a stationary setting that could transfer breathing actions between two users using an array of ventilators and breathing sensors arranged in front of users. While these works focus on breathing as input modality, Hashimoto et al. [24] developed a straw-like breathing interface that allows an application to adapt inhalation resistance in order to simulate drinking through a straw. ...
... Some studies [14], [24], [28], [35], [36], [38] have used wearable breathing sensors, which are inconvenient and expensive to use and not common in everyday life. The specially customized devices in these studies [1], [2], [18], [22], [29], [31], [34], [37], [39] were likewise inconvenient and expensive, imposing considerable limitations on usage scenarios and methods. All of the above works generally relied on custom devices such as breathing sensors, breathing belts, or other special sensors, which are intricate, expensive, inconvenient to wear, lack portability in daily life, and may be insufficiently controllable, placing considerable limitations on usage modes and scenarios that have not been further explored. ...
Article
Full-text available
Breathing is a natural and directly controllable human activity, and some works have already used breath as a direct input mechanism. The equipment these works rely on is generally complicated, expensive, inconvenient to wear, and sometimes insufficiently controllable, and breathing interaction has been limited to particular scenes rather than being universal. This paper proposes an adaptive interaction method: a natural, directly controllable interaction based on blowing that uses only a headset microphone to obtain the sound waveform of the blowing action, without requiring expensive equipment, and that can be used conveniently anytime and anywhere. The blowing interaction uses a Siamese network to achieve "self-adaptation" – the first step adapts to noise interference, including environmental noise and the user's own speech, and the second step adapts to different users and devices, so that when the interaction is used by different people or on different equipment, the type of blowing can still be identified accurately. The paper also develops several applications of the blowing interaction method to test the algorithm. Tests showed that this interface not only expands the types of blowing that can be used for interaction but also effectively eliminates interference from speech at normal volume and addresses the problem of individual differences.
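As a rough illustration of microphone-based blow detection (a deliberately simplified stand-in for the Siamese-network classifier described above, not the authors' method), a blow can be flagged from the raw waveform by looking for a sustained run of high-energy frames. The function names, frame length, and threshold below are illustrative assumptions:

```python
import math

def blow_score(samples, frame_len=256):
    """Return per-frame RMS energy of a mono waveform (list of floats in [-1, 1]).

    A sustained blow into a headset microphone shows up as a run of
    high-energy, noise-like frames, whereas speech energy fluctuates more.
    """
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    return [math.sqrt(sum(x * x for x in f) / len(f)) for f in frames]

def detect_blow(samples, threshold=0.3, min_frames=4, frame_len=256):
    """Flag a blow when enough consecutive frames exceed the RMS threshold."""
    run = 0
    for energy in blow_score(samples, frame_len):
        run = run + 1 if energy > threshold else 0
        if run >= min_frames:
            return True
    return False
```

In a real system the samples would stream from the headset microphone, and the threshold would need per-user and per-device calibration — which is exactly the adaptation problem the paper's learned approach addresses.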
... Then, this also induces an immersive experience in VR. As a different approach, compressed air from jet nozzles [11,43,45] has been used to convey surface information of digital objects [31] or to communicate with people [40]. Airflow-based feedback is strongly dependent on its application scenario and is treated as a secondary element in multimodal feedback. ...
Conference Paper
Full-text available
People sometimes imagine and yearn for a "Super Power," an ability they do not have naturally. In this paper, we propose Virtual Super-Leaping (VSL) as an immersive virtual experience that provides the feeling of extreme jumping in the sky. First, we define the necessary feedback elements and classify the action sequence of Super-Leaping, including the design of the multimodal feedback for each action state. Then, we describe the design of the VSL system, which has two components: (i) visual feedback via a head-mounted display, and (ii) a VSL-enabling haptic device, which provides both kinesthesia and airflow using multiple synchronized propeller units. We end by reporting on our technical evaluation and public demonstrations. This work contributes to the enhancement of immersive virtual experiences and the development of devices for human augmentation.
... AIREAL [14] is a palm-sized air-cannon for simulating a user's gestures in a video game. BYU-BYU-View [13] focuses on the wind by displaying the remote user on a local mesh-type monitor and providing wind from the remote user through the monitor. Because our proposed air-cannon is larger than these systems, we can provide powerful airy feedback to the user during physical activity. ...
Conference Paper
Although the necessity and importance of exercise support for elderly people is largely recognized, the lack of skilled and adequate instructors often limits such activities. Remote exercise systems can be a solution to this problem because they may be able to support exercise activities even when instructors and participants are in separate locations. However, when simply using normal video-conferencing systems, instructors and participants have difficulty understanding each side's situation, particularly during guided physical actions. In addition, remote exercise systems cannot support the adjustment of each user's position, a task that is quite naturally performed in normal exercise activities. Our system, called CASPER, solves these problems by proposing a mirror-like image composition method in which all the participants and the instructor are shown on the same screen so that both sides can understand the situation clearly. We also introduce an airy haptic device to remotely send tactile feedback for further enhancing sensations. In this paper, we describe the system design and its evaluation. The evaluation confirms that our system could effectively allow users to perform exercise activities even at remote locations.
... Fan sets are most commonly used [6,10,14,24,39]. Other implementations include using an air compressor [32], a controllable vent [17], and an audio speaker [16]. Due to the noise produced, the bulkiness of the air compressor and vent, and the limited wind coverage generated by the audio speaker approach, we chose fan sets in our study. ...
Conference Paper
Multi-sensory feedback can potentially improve user experience and performance in virtual environments. As it is complicated to study the effect of multi-sensory feedback as a single factor, we created a design space with these diverse cues, categorizing them into an appropriate granularity based on their origin and use cases. To examine the effects of tactile cues during non-fatiguing walking in immersive virtual environments, we selected certain tactile cues from the design space, movement wind, directional wind and footstep vibration, and another cue, footstep sounds, and investigated their influence and interaction with each other in more detail. We developed a virtual reality system with non-fatiguing walking interaction and low-latency, multi-sensory feedback, and then used it to conduct two successive experiments measuring user experience and performance through a triangle-completion task. We noticed some effects due to the addition of footstep vibration on task performance, and saw significant improvement due to the added tactile cues in reported user experience.
... A more distributed and body-immersive wind environment created using the Wind Cubes [20], blow displays [21], and more recently a virtual sailing environment [22] involve use of fans in proximity to the user that add wind sensation but intrude on the environment due to visible and audible fan locations. In [23], a localized VR display that uses permeable screens and a fan arrangement that is directly behind the screens is used. Recent work in [24] shows that locality of wind distribution and fan misalignment can cause significant error in perception of wind direction considering just noticeable difference (JND [25,26]) as a metric. ...
Article
Full-text available
This paper presents the Treadport Active Wind Tunnel (TPAWT), a full-body immersive virtual environment for the Treadport locomotion interface designed for generating wind on a user from any frontal direction at speeds up to 20 kph. The goal is to simulate the experience of realistic wind while walking in an outdoor virtual environment. A recirculating-type wind tunnel was created around the pre-existing Treadport installation by adding a large fan, ducting, and enclosure walls. Two sheets of air in a non-intrusive design flow along the side screens of the back-projection CAVE-like visual display, where they impinge and mix at the front screen to redirect towards the user in a full-body cross-section. By varying the flow conditions of the air sheets, the direction and speed of wind at the user are controlled. Design challenges to fit the wind tunnel in the pre-existing facility, and to manage turbulence to achieve stable and steerable flow, were overcome. The controller performance for wind speed and direction is demonstrated experimentally.
... Turning to creative applications, breath controllers have augmented traditional wind and brass musical instruments (e.g., the Yamaha BC3A Breath Controller [11]) and been incorporated into new kinds of instrument [1]. Breath control has been put to other creative uses too: for navigating an immersive virtual world through the metaphor of diving [2]; and for enabling two-way "gust-based" communication [8]. Mainstream products have also entered the market, for example the Sensawaft breath controlled mouse [9]. ...
Conference Paper
Full-text available
This paper explores the potential for breath control as an interaction medium for gaming. In particular it examines the positioning of breath control within the stack of interface paradigms: As the only control, as a secondary control and as an ancillary or ambient control. It describes a technology developed using specially adapted gas masks to measure breath flow. By describing five simple games (or game modifications), each developed using breath in a somewhat different way, we show some of the possibilities of this unique interface paradigm. Crucially, the paper aims to demonstrate that breathing, though in principle a one dimensional interface medium, is actually a subtle and viable control mechanism that can be used either as a control mechanism in itself, or to enhance a more traditional game interface, ultimately leading to a satisfying and immersive game experience.
Article
Human activities can introduce variations in various environmental cues, such as light and sound, which can serve as inputs for interfaces. However, one often overlooked aspect is the airflow variation caused by these activities, which presents challenges in detection and utilization due to its intangible nature. In this paper, we have unveiled an approach using mist to capture invisible airflow variations, rendering them detectable by Time-of-Flight (ToF) sensors. We investigate the capability of this sensing technique under different types of mist or smoke, as well as the impact of airflow speed. To illustrate the feasibility of this concept, we created a prototype using a humidifier and demonstrated its capability to recognize motions. On this basis, we introduce potential applications, discuss inherent limitations, and provide design lessons grounded in mist-based airflow sensing.
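A minimal sketch of the sensing idea above, under assumed parameters (window size, jitter threshold, and function names are ours, not the paper's): still mist returns a roughly stable Time-of-Flight range, while passing airflow stirs the mist and makes consecutive readings jump, so an airflow event can be flagged from the spread within a short window of readings:

```python
def airflow_events(tof_mm, window=10, jitter_mm=5.0):
    """Detect airflow disturbances in a series of ToF range readings (mm)
    measured through mist.

    Returns the start indices of windows whose min-max spread exceeds the
    jitter threshold, i.e. where the mist cloud was visibly disturbed.
    """
    events = []
    for i in range(0, len(tof_mm) - window + 1, window):
        w = tof_mm[i:i + window]
        if max(w) - min(w) > jitter_mm:
            events.append(i)
    return events
```

A real implementation would also have to handle the slow drift of the mist plume and sensor noise through different mist densities, which is what the paper's characterization of mist types and airflow speeds investigates.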
Article
Full-text available
It is important for systems to recognize user actions using sensors and cameras when constructing interactive systems and artworks. Conventional systems have worked to recognize wide varieties of user actions and to install sensing devices under various environmental restrictions. However, because the recognition methods in conventional systems are specialized to their own work, they cannot be applied to other systems, and a specialist in activity recognition is required to construct them. In addition, conventional systems took a long time to select recognition algorithms and set recognition parameters, so they are not flexible enough to change the actions to be recognized or to adapt to changing environments. This paper proposes a method of adding interactivity to various surfaces and recognizing the positions and intensities of performed actions by using multiple accelerometers. Our method provides functions that enable easy setup and maintenance even by beginners in activity recognition. Participants in an experiment on constructing interactive surfaces built a system that could recognize two actions at two points in 51 minutes on average. Moreover, we confirmed the effectiveness of our approach with two actual artworks in long-term media-art exhibitions.
Article
Full-text available
We present SpiroSurface, a novel force display for interactive tabletops. SpiroSurface uses a pneumatic system to generate both repulsive and attractive forces. We develop a prototype with a 5x5 grid of holes on the surface connected to an air compressor and vacuum tanks through electromagnetic valves. The display can output a maximum of +1.0 and -0.08 megapascals (MPa) of pressure from a hole, generating 74 N and -6 N of force. We investigated the latency of the output pressure through pneumatics and an experiment, which indicated a minimum latency of 50 ms. The display allows the creation of three kinds of novel interactions: (1) enhancement of the GUI, (2) deformation of soft objects, and (3) three-degree-of-freedom rotation of objects. In the first application, users can feel the force from the display without holding or attaching additional devices. In the second and third applications, the shape and motion of an object on the surface can be manipulated without embedding additional active components in the objects. These aspects allow users to easily experience interaction and expand the freedom of interaction design. We introduce several examples combining video projection and motion tracking. These examples demonstrate the potential of the display.
Conference Paper
IVRC, the International collegiate Virtual Reality Contest, is a competition of student virtual reality works evaluated on originality, technical challenge, and impact on innovation; it has honored young talents since 1993. This workshop shares experiences from the past 23 years of IVRC, from which audiences can draw inspiration for innovative ideas.
Article
Scent is an important component of every individual's real life; it has many psychological and physical effects. Therefore, if a visual image is presented along with a matching scent, we expect it will possibly carry more detailed information and be perceived as a more realistic sensation than it would have on its own. In order to do so, it is necessary to solve some problems. First, the qualities required for a device to integrate a scent with an image are discussed. Second, a new method in which scents are emitted through a display screen in the direction of a viewer in order to enhance the reality of visual images is described. Third, the psychological effects on a viewer when scents are integrated with images are described. The authors investigated the viewing experiences and eye-catching effects when applied to movies, digital signage, and virtual reality.
Article
Full-text available
We developed a paperclip-shaped motion-actuating system, named "Move-it". With this system, users can add motions to sticky notes just by clipping them. The system is constructed from clip-shaped devices that have sensors and actuators. The recognition system uses a photo-interrupter module as a sensor: each clip device can recognize which note it has been attached to by calculating how much infrared light is reflected by the grayscale image printed on the flip side of each note. The actuation system is made of a coil-shaped shape-memory alloy combined with a polyolefin sheet embedded in the clip device, and can provide sufficient motion for actuating the note. By writing, for example, "meeting 16:00" on a sticky note using a digital pen and our device, the user can get a physical reminder without any complex setup. This paper presents the design and evaluation of the recognition and actuation systems of our clip-shaped device.
Conference Paper
In haptic interaction, friction caused by slip on the fingertip is a key factor for manual manipulation as well as exploration of texture and shape. From the moment of contact, the friction contains vertical and tangential skin deformations and vibrations, not all of which have been simultaneously supported by previous portable/wearable haptic devices. We propose a portable haptic device that has the ability to present skin deformation and vibration with two degrees of freedom by using two types of motors: a voice coil motor (VCM) for vertical motion and vibration, and direct current motors for tangential skin stretch. The VCM also achieves encounter-type haptic interactions. A combination of these motions encompasses most cutaneous cues for realistic friction.
Article
In our entertainment VR application, the user can move freely through a virtual city by using webs, like Spiderman™. In this application, the user wears a web shooter, a device for shooting webs, and takes aim at a target building. Then, when the user swings his/her arm forward, a web is launched and sticks to the target building on the screen. After the web sticks to the building, the user's arm is pulled in the direction of the target building by a pulling-force feedback system, which gives the feeling of pulling to the user directly and smoothly, as if he/she were attached to an elastic string. Finally, the user moves to the target building. In three exhibitions, we surveyed the effectiveness of the application by questionnaire and confirmed that many users enjoyed and were satisfied with our VR application.
Article
Full-text available
We present VacuumTouch, a novel haptic interface architecture for touch screens that provides attractive force feedback to the user's finger. VacuumTouch consists of an air pump and solenoid air valves that connect to the surface of the touch screen and suck the air above the surface where the user's finger makes contact. VacuumTouch does not require the user to hold or attach additional devices to provide the attractive force, which allows for easy interaction with the surface. This paper introduces the implementation of the VacuumTouch architecture and some applications for enhancement of the graphical user interface, namely a suction button, a suction slider, and a suction dial. The quantitative evaluation was conducted with the suction dial and showed that the attractive force provided by VacuumTouch improved the performance of the dial menu interface and its potential effects. At the end of this paper, we discuss the current prototype's advantages and limitations, as well as possible improvements and potential capabilities.
Conference Paper
This paper describes an approach to interpolate the wind direction and velocity using two fans. Firstly, maps of the angle and velocity of the wind in relation to the speeds of the two fans are created. It was found that changes in the speeds of the two fans in the interpolation were not necessarily linear to the directional angles from the fans. Next, in the generation of wind, the speeds of the fans are inversely referenced from the maps using the target direction and velocity. An experiment confirmed that wind can be generated from an intermediate direction between the two fans. Through an evaluation using subjects, it was proven that a change in the direction of the wind could be recognized to some extent, although great differences in accuracy were observed among individuals.
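The map-then-inverse-lookup procedure described above can be sketched as a nearest-neighbour search over calibration points; the data layout, weights, and calibration values below are illustrative assumptions, not measurements from the paper:

```python
def build_wind_map(measurements):
    """Store calibration points mapping fan-speed pairs to measured wind.

    measurements: list of ((s1, s2), (angle_deg, velocity_mps)) tuples,
    where s1, s2 are the two fan speeds (e.g. PWM duty fractions).
    """
    return list(measurements)

def fan_speeds_for(wind_map, target_angle, target_velocity,
                   w_angle=1.0, w_vel=10.0):
    """Inverse lookup: return the calibrated fan-speed pair whose measured
    wind is closest to the target, using a weighted squared distance.

    Because the fan-speed-to-angle relation is not linear, the lookup goes
    through the measured map rather than interpolating speeds directly.
    """
    def cost(entry):
        _, (angle, velocity) = entry
        return (w_angle * (angle - target_angle) ** 2
                + w_vel * (velocity - target_velocity) ** 2)

    (s1, s2), _ = min(wind_map, key=cost)
    return s1, s2
```

With a dense enough calibration grid, nearest-neighbour lookup approximates the continuous inverse map; interpolating between the closest calibration points would smooth the output further.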
Conference Paper
In this paper we present Squama, a programmable physical window or wall that can independently control the visibility of its elemental small square tiles. This is an example of programmable physical architecture, our vision for future architectures where the physical features of architectural elements and facades can be dynamically changed and reprogrammed according to people's needs. When Squama is used as a wall, it dynamically controls the transparency through its surface, and simultaneously satisfies the needs for openness and privacy. It can also control the amount of sunlight and create shadows, called programmable shadows, in order to afford indoor comfort without completely blocking the outer view. In this paper, we discuss how in future, architectural space can become dynamically changeable and introduce the Squama system as an initial instance for exemplifying this concept.
Article
Our surroundings are becoming infused with sensors measuring a variety of data streams about the environment, people and objects. Such data can be used to make the spaces that we inhabit responsive and interactive. Personal data in its different forms are one important data stream that such spaces are designed to respond to. In turn, one stream of personal data currently attracting high levels of interest in the HCI community is physiological data (e.g., heart rate, electrodermal activity), but this has seen little consideration in building architecture or the design of responsive environments. In this context, we developed a prototype mapping a single occupant’s respiration to its size and form, while it also sonifies their heartbeat. The result is a breathing building prototype, formative trials of which suggested that it triggers behavioral and physiological adaptations in inhabitants without giving them instructions and it is perceived as a relaxing experience. In this paper, we present and discuss the results of a controlled study of this prototype, comparing three conditions: the static prototype, regular movement and sonification and a biofeedback condition, where the occupant’s physiological data directly drives the prototype and presents this data back to them. The study confirmed that the biofeedback condition does indeed trigger behavioral changes and changes in participants’ physiology, resulting in lower respiration rates as well as higher respiration amplitudes, respiration to heart rate coherence and lower frequency heart rate variability. Self-reported state of relaxation is more dependent on inhabitant preferences, their knowledge of physiological data and whether they found space to ‘let go’. We conclude with a discussion of ExoBuilding as an immersive but also sharable biofeedback training interface and the wider potential of this approach to making buildings adapt to their inhabitants.
Conference Paper
This research has been carried out as part of a multi-sensory theater project that aims at establishing technology to integrate a range of sensations such as visual, audio, force, tactile, vestibular, and odor into passive and interactive media content and communications. This paper describes the approaches that are being examined and the current status of the project. As a platform for the experiments, a prototype version of a multi-sensory theater has been implemented. The theater is equipped with devices that present wind and olfactory sensations. The sensation of wind is generated by both computer-controlled fans and air nozzles connected to a source of compressed air, and the olfactory sensation is presented by emitting odorants into the air. To facilitate the creation of content, a framework for editing multi-sensory information was constructed, in which all of the devices were connected to and controlled by a sequencer based on a MIDI interface.
Conference Paper
We developed a floating avatar system that integrates a blimp with a virtual avatar to create a unique telepresence system. Our blimp works as an avatar and contains several pieces of equipment, including a projector and a speaker as the output functions. Users can communicate with others by transmitting their facial image through the projector and voice through the speaker. A camera and microphone attached to the blimp provide the input function and support the user's manipulation from a distance. The user's presence is dramatically enhanced compared to using conventional virtual avatars (e.g., CG and images) because the avatar is a physical object that can move freely in the real world. In addition, the user's senses are augmented because the blimp detects dynamic information in the real world. For example, the camera provides the user with a special floating view, and the microphone catches a wide variety of sounds such as conversations and environmental noises. This paper describes our floating avatar concept and its implementation.
Conference Paper
Full-text available
Emerging robotic technologies are enabling the control of individual seats on rollercoasters and other thrill rides. We explore the potential of breathing as an effective and engaging way of driving this. Observations and interviews from trials of an enhanced bucking bronco ride show that breath-control is fun, challenging and intelligible, and reveal riders' tactics as they battled the machine. We conclude that breath control is feasible and appropriate for controlling rides, unpack its important characteristics, and consider how it might be built into future ride systems. We argue that the combination of voluntary and involuntary factors in breathing is especially appealing for controlling rides as it balances game-like elements of skill and learning against the thrill of surrendering control to the machine.
Conference Paper
A new device has been developed for generating airflow field and odor-concentration distribution in a real environment for presenting to the user. This device is called a multi-sensorial field (MSF) display. When two fans are placed facing each other, the airflows generated by them collide with each other and are radially deflected on a plane perpendicular to the original airflow direction. By utilizing the deflected airflow, the MSF display can present the airflow blowing from the front to the user without placing fans in front of the user. The directivity of the airflow deflection can be controlled by placing nozzles on the fans to adjust the cross-sectional shape of the airflow jets coming from the fans. The MSF display can also generate odor-concentration distribution in a real environment by introducing odor vapors into the airflow generated by the fans. The user can freely move his/her head and sniff at various locations in the generated odor distribution. The results of preliminary sensory tests are presented to show the potential of the MSF display.
Conference Paper
We demonstrate a method in which gesture control is enabled through the use of a stereo camera. Hard-to-use keyboard commands are effectively emulated using positional gestures. We describe the implementation and some results from user surveys based on users trying the application at a public exposition. Along with outlining the implementation, we report the initial findings and show evidence as to why such a gesture interface is more effective than traditional gaming interfaces.
Conference Paper
This paper reports on the current progress in a project to develop a multi-sensory theater. The project is focused not only on the development of hardware devices for multi-sensory presentations but also on an investigation into the framework and method of expression for creating the content. Olfactory, wind, and pneumatic devices that present the sensation of odor, wind and gusts, respectively, were developed and integrated into an audio-visual theater environment. All the devices, including the video device, are controlled through a MIDI interface. Also, a framework for creating the multisensory content by programming the sequence of device operations was proposed and implemented.
Conference Paper
We study a wearable device for presenting the sensation of localized wind. Previous works on wind displays used an array of fans fixed around the visual display. The distance between the fans and the user was relatively large, making it difficult to give the impression of "local" wind, such as the virtual experience of a bullet passing close to the skin. We propose a local wind display in the form of a new type of wearable device. The presentation area is around the ears, which is the area of the body most sensitive to wind. In this paper, we evaluated the sensation threshold of wind at different parts of the body, focusing on the head. We also measured the two-point discrimination threshold of the most sensitive area on the head. Finally, we propose an application of the device.
Conference Paper
The super hero has overwhelming speed and power, and above all, special abilities. In our VR application, the user can jump from one building to another by shooting a web and holding onto it, like Spiderman™, the famous super hero. The aim of this application is to give the user the enjoyment of using a superpower. In this application, the user wears the web-shooter, a device for shooting a web, and takes aim at a target building with it. Then, when the user swings his arm forward, the web is launched and sticks to the target building on the screen. After the web sticks to the building, the user's arm is pulled in the direction of the target building by the pulling-force feedback system, which gives the feeling of pulling to the user directly and smoothly, as if he were attached to an elastic string. Finally, the user moves to the target building. At an exhibition, we surveyed guests by questionnaire, and the results confirmed that many users enjoyed "Spider Hero".
Article
Interaction with computers can make use of our whole physical and even emotional selves, as demonstrated by such emerging systems as HoloWall, SmartSkin, and PreSense.
Conference Paper
Full-text available
This paper proposes a novel system called "Kirifuki", which makes it possible to operate a computer by breathing in and blowing out. The principle of the system is as follows: the computer screen is projected onto a physical desk, and when the user blows onto it, the visual objects open on the screen can be manipulated by breathing, without using the hands or any physical pointing device. In this paper, we describe our prototype system and demonstrate its application sub-systems, especially those applied to the entertainment field.
Article
We introduce an untethered interface that eliminates the annoyance of wires etc. by using air-jets to establish force feedback. Attendees experience interaction with a virtual object that responds to being "touched". The sense of touch is provided by air-jets while visual clues are provided by a projection-based stereo display.
Conference Paper
This paper describes a mixed reality installation named Jellyfish Party, for enjoying playing with soap bubbles. A special feature of this installation is the use of a spirometer sensor to measure the amount and speed of expelled air used to blow virtual soap bubbles.
Wind-Surround System, Interaction
  • T Kosaka
  • S Hattori