Chapter

LIVEJACKET: Wearable Music Experience Device with Multiple Speakers


Abstract

There are two conventional ways to experience music: attending a live performance and listening through digital media. However, the quality of the music experience differs between these methods. To improve the quality and entertainment value of listening through digital media, we developed LIVEJACKET, a jacket capable of vibrotactile presentation and music playback. By simultaneously presenting vibration and sound from 22 speakers attached to the jacket, we created the sensation of being enveloped in sound. Wearers feel as if they are singing, which can improve the quality of the music experience. We defined five music listening methods, conducted experiments, and administered a questionnaire survey on the music experience. Based on the results, we found that the proposed system can provide a music experience that cannot be obtained by listening to music through traditional digital methods.
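The enveloping effect depends on distributing the music's parts across many body-worn speakers. A minimal sketch of one way such routing could work; the speaker count matches the paper's 22, but the round-robin assignment and all names are illustrative, not the authors' method:

```python
def mix_to_jacket(stems, n_speakers=22):
    """Distribute mono instrument stems across jacket speakers.

    stems: list of equal-length lists of float samples.
    Returns n_speakers channels (each a list of samples); each stem
    is routed to one speaker, round-robin (an assumed scheme).
    """
    if not stems:
        return [[] for _ in range(n_speakers)]
    n = len(stems[0])
    channels = [[0.0] * n for _ in range(n_speakers)]
    for i, stem in enumerate(stems):
        ch = i % n_speakers  # assign stem i to speaker i mod n_speakers
        for t, s in enumerate(stem):
            channels[ch][t] += s
    return channels
```

Playing each channel through its own speaker-and-vibration unit is what would produce the simultaneous sound and vibrotactile presentation the abstract describes.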


... Although loudspeakers are not usually thought of as actuators, they generate vibrations that can convey musical information through the skin, as demonstrated in [33,[38][39][40][41]. The frequency response of loudspeakers far exceeds the upper tactile perception threshold (reaching up to 20 kHz), so subwoofers are usually selected for haptic applications. ...
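The excerpt's point, that loudspeaker output must be reduced to the band the skin can actually feel, is commonly realized with a low-pass filter ahead of the tactile transducer. A minimal sketch under that assumption; the one-pole filter and the 250 Hz cutoff are illustrative choices, not taken from the cited works:

```python
import math

def lowpass(samples, fs, cutoff_hz=250.0):
    """One-pole low-pass filter: keeps roughly the band the skin can
    perceive (well below 1 kHz) and attenuates higher frequencies."""
    dt = 1.0 / fs
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    out = []
    y = 0.0
    for x in samples:
        y = y + alpha * (x - y)  # exponential smoothing step
        out.append(y)
    return out
```

In practice a steeper filter (or a subwoofer's natural roll-off, as the excerpt notes) would be used, but the principle is the same.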
... Consequently, most works described in this section were not necessarily portable at the moment of publication, but have clear opportunities to later become HMP-WDs. Starting from prototypes that cover a small skin surface, there are bracelets (Figure 4a), designed to be worn on the wrist (see, e.g., [19,21,59,60]); gloves and mobile device mockups (Figure 4b), designed to be worn or held on the hands (see, e.g., [37,[61][62][63][64]); belts (Figure 4c), designed to be worn surrounding the body from the chest to the abdomen (see, e.g., [8,35,41,42]); and jackets (Figure 4d), designed to be worn on the upper body with actuators usually located on the back, the front, and the upper limbs (e.g., [38,65]). Other variations are whole-body suits [15] and headphone-type displays [39], but instances are scarce. ...
... Although this model ignores the frequency content that defines the timbre of instruments, the researchers demonstrated that perception was evaluated better than with the FM. The TM offers opportunities for future work, as demonstrated by Hashizume et al. [38], who tested the method in a multi-modal experimental setup. Tactile metaphors are another resource that represents an opportunity for timbre rendering but, as mentioned in Section 4.2, this method remains unexplored. ...
Article
Full-text available
Tactile rendering has been implemented in digital musical instruments (DMIs) to offer the musician haptic feedback that enhances the music-playing experience. Recently, this implementation has expanded to the development of sensory substitution systems known as haptic music players (HMPs), which give the hearing impaired the opportunity to experience music through touch. These devices may also be conceived as vibrotactile music players that enrich music listening. In this review, technology and methods to render musical information by means of vibrotactile stimuli are systematically studied. The methodology used to identify relevant literature is first outlined, and a preliminary classification of musical haptics is proposed. Different technologies and methods for vibrotactile rendering are compared, and the information is then organized according to the type of HMP. Limitations and advantages are highlighted to identify opportunities for future research. Likewise, methods for music audio-tactile rendering (ATR) are analyzed and, finally, strategies for composing for the sense of touch are summarized. This review is intended for researchers in the fields of haptics, assistive technologies, music, psychology, and human–computer interaction, as well as artists who may use it as a reference for upcoming research on HMPs and ATR.
... As well as digital applications, there are various conceptual physical products, mostly focusing on haptic features or commands, such as gestures through wearable technologies. Hashizume et al. (2018) developed a wearable music player, LIVEJACKET, that delivers vibrations and sound through speakers located in the jacket. The aim is to support music listening experiences through the use of sensorial input. Although it is not a recent example, Nirjon et al. (2012) developed the 'MusicalHeart' application and earphones. By using sensors, ea ...
Thesis
Full-text available
Music has always been a key element for people to connect with themselves and others. The main focus of this study is to investigate pre-recorded music playing experiences through time with changing technology and design. As technology develops, many habits and behaviours of people change. Listening to music is an experience that has changed throughout the decades with regard to technological developments and social contexts. From radio and gramophones to mobile phones and online streaming, the means of listening to music has been through many great changes. Various products and interfaces have been used to organize and deliver recorded music, such as Walkmans, CD players, and iPods. In this sense, designers have always been involved in presenting the pre-recorded music playing experience to people. As the needs and expectations of users have evolved, so designers' contributions have also developed, especially in the transition from physical to digital music players. The history of this evolution will be explored in this study, plotting how pre-recorded music playing experiences have changed or remained the same alongside the changes in product design and means of delivering music. A design proposal for future music playing experiences will be presented at the end of the thesis.
... Another wearable system, the LIVEJACKET, a vest with 22 haptic stimulators attached to the arms and torso, has also been developed (Hashizume et al., 2018). Like the haptic suit, the LIVEJACKET presents different musical instruments through different haptic stimulators. ...
Article
Full-text available
Cochlear implants (CIs) have been remarkably successful at restoring hearing in severely-to-profoundly hearing-impaired individuals. However, users often struggle to deconstruct complex auditory scenes with multiple simultaneous sounds, which can result in reduced music enjoyment and impaired speech understanding in background noise. Hearing aid users often have similar issues, though these are typically less acute. Several recent studies have shown that haptic stimulation can enhance CI listening by giving access to sound features that are poorly transmitted through the electrical CI signal. This "electro-haptic stimulation" improves melody recognition and pitch discrimination, as well as speech-in-noise performance and sound localization. The success of this approach suggests it could also enhance auditory perception in hearing-aid users and other hearing-impaired listeners. This review focuses on the use of haptic stimulation to enhance music perception in hearing-impaired listeners. Music is prevalent throughout everyday life, being critical to media such as film and video games, and often being central to events such as weddings and funerals. It represents the biggest challenge for signal processing, as it is typically an extremely complex acoustic signal, containing multiple simultaneous harmonic and inharmonic sounds. Signal-processing approaches developed for enhancing music perception could therefore have significant utility for other key issues faced by hearing-impaired listeners, such as understanding speech in noisy environments. This review first discusses the limits of music perception in hearing-impaired listeners and the limits of the tactile system. It then discusses the evidence around integration of audio and haptic stimulation in the brain. Next, the features, suitability, and success of current haptic devices for enhancing music perception are reviewed, as well as the signal-processing approaches that could be deployed in future haptic devices. 
Finally, the cutting-edge technologies that could be exploited for enhancing music perception with haptics are discussed. These include the latest micro motor and driver technology, low-power wireless technology, machine learning, big data, and cloud computing. New approaches for enhancing music perception in hearing-impaired listeners could substantially improve quality of life. Furthermore, effective haptic techniques for providing complex sound information could offer a non-invasive, affordable means for enhancing listening more broadly in hearing-impaired individuals.
... Based on recent developments of vibrotactile technology, which promise to make music more inclusive and immersive [24], this study investigated the effectiveness of a commercially available wearable vibrotactile display in translating musical emotions into vibrations. Our study has shown how profoundly deaf individuals perceive musical emotions through vibrations and has outlined the potential and limitations of such technologies for conveying intended emotions to users. ...
Preprint
Advances in tactile-audio feedback technology have created new possibilities for deaf people to feel music. However, little is known about deaf individuals' perception of musical emotions through vibrotactile feedback. In this paper, we present the findings from a mixed-methods study with 16 profoundly deaf participants. The study protocol was designed to explore how users of a backpack-style vibrotactile display perceive intended emotions in twenty music excerpts. Quantitative analysis demonstrated that participants correctly identified happy and angry excerpts and rated them as more arousing than sad and peaceful excerpts. More positive emotions were experienced during happy compared to angry excerpts while peaceful and sad excerpts were hard to be differentiated. Based on qualitative data, we highlight the benefits and limitations of using vibrations to convey musical emotions to profoundly deaf users. Finally, we provide guidelines for designing accessible music experiences for the deaf community.
Article
We are studying music and color to create a new way of enjoying music, namely "wearing" it, and we are exploring its entertainment potential. Concretely, images of colorful costumes are generated from music in real time and projected onto fog generated around a person, realizing the "wearing" of music. In this paper, we propose a method to color music in three dimensions, a method to generate costume images from music, and a method to wear music images. The core ideas are the three-dimensional coloring of music, focusing on the spiral structure of the musical scale and synesthesia; the generation of costume images by generative-art techniques; and image projection onto the fog, the projection medium, from three directions. However, the fog proved difficult to control, and projection onto it ultimately did not work. Therefore, instead of fog, we also conducted experiments such as projecting costume images directly onto a white coat or a mannequin. Through this research, even ordinary people without special senses can experience a new way of enjoying music, namely "wearing" it, and we can expect the possibility of a new entertainment world in which music and fashion are fused.
Article
Full-text available
The development of the Internet of Things (IoT) accentuates the interweaving of the digital with the physical, raising multiple issues, including for learning and more specifically for the learning of the deaf. While emerging technologies may have given rise to various methods and modes of learning, the implications of the IoT for deaf learning are not yet understood. This study aims to propose the possible integration of the IoT and augmented reality (AR) to support deaf learning from three dimensions of analysis: data, interfaces and pervasiveness. The axes of applications identified suggest that the IoT promotes learning characterized by experimentation, adaptation (to the context and to the learner), the manipulation of objects and exploration without the constraints of time or space. In order to achieve this integration, preliminary survey is collected from deaf learners. Based on the results, a deaf intelligent learning model is proposed and discussed.
Article
Full-text available
Sound source localization is important for spatial awareness and immersive Virtual Reality (VR) experiences. Deaf and Hard-of-Hearing (DHH) persons have limitations in completing sound-related VR tasks efficiently because they perceive audio information differently. This paper presents and evaluates a special haptic VR suit that helps DHH persons efficiently complete sound-related VR tasks. Our proposed VR suit receives sound information from the VR environment wirelessly and indicates the direction of the sound source to the DHH user by using vibrotactile feedback. Our study suggests that using different setups of the VR suit can significantly improve VR task completion times compared to not using a VR suit. Additionally, the results of mounting haptic devices on different positions of users’ bodies indicate that DHH users can complete a VR task significantly faster when two vibro-motors are mounted on their arms and ears compared to their thighs. Our quantitative and qualitative analysis demonstrates that DHH persons prefer using the system without the VR suit and prefer mounting vibro-motors in their ears. In an additional study, we did not find a significant difference in task completion time when using four vibro-motors with the VR suit compared to using only two vibro-motors in users’ ears without the VR suit.
Article
Virtual Reality (VR) has a great potential to improve skills of Deaf and Hard-of-Hearing (DHH) people. Most VR applications and devices are designed for persons without hearing problems. Therefore, DHH persons have many limitations when using VR. Adding special features in a VR environment, such as subtitles, or haptic devices will help them. Previously, it was necessary to design a special VR environment for DHH persons. We introduce and evaluate a new prototype called "EarVR" that can be mounted on any desktop or mobile VR Head-Mounted Display (HMD). EarVR analyzes 3D sounds in a VR environment and locates the direction of the sound source that is closest to a user. It notifies the user about the sound direction using two vibro-motors placed on the user's ears. EarVR helps DHH persons to complete sound-based VR tasks in any VR application with 3D audio and a mute option for background music. Therefore, DHH persons can use all VR applications with 3D audio, not only those applications designed for them. Our user study shows that DHH participants were able to complete a simple VR task significantly faster with EarVR than without. The completion time of DHH participants was very close to participants without hearing problems. Also, it shows that DHH participants were able to finish a complex VR task with EarVR, while without it, they could not finish the task even once. Finally, our qualitative and quantitative evaluation among DHH participants indicates that they preferred to use EarVR and it encouraged them to use VR technology more.
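EarVR's core idea, mapping the direction of the nearest sound source to two ear-mounted vibro-motors, can be sketched as a simple constant-power panning law. This is a guessed scheme for illustration; the papers above do not specify this formula:

```python
import math

def ear_motor_levels(azimuth_deg):
    """Map a sound-source azimuth (0 = front, 90 = right, -90 = left)
    to intensities in [0, 1] for left and right ear vibro-motors.

    Constant-power panning: an assumed mapping, not EarVR's actual one.
    """
    a = math.radians(max(-90.0, min(90.0, azimuth_deg)))
    left = math.cos((a + math.pi / 2) / 2)
    right = math.sin((a + math.pi / 2) / 2)
    return left, right
```

A source directly to the right drives only the right motor; a frontal source drives both equally, so the user can interpolate direction from the intensity ratio.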
Conference Paper
In this paper, we present Synesthesia Wear, a full-body, customizable haptic interface, and demonstrate its capabilities with an untethered spatial computing experience. The wear not only looks as flexible as ordinary cloth; the attached modules are also powered and controlled through its conductive textile via two-dimensional signal transmission (2DST) technology. Each haptic module has a pin, badge-like connector and can be freely attached to the conductive textile (Fig. 1(b)), enabling users to customize the experience to their own liking. The modules can render variations of haptic feedback to the torso and all limbs of the body, and visualize representations using LED light patterns. We also designed a spatial computing experience (Fig. 1(c)) inspired by the perceptual phenomenon called synesthesia. The system provides a sensory-blended experience in which the user walks around freely in real space, interacting with real and virtual environments in a reality-overlapping way while wearing Synesthesia Wear.
Article
Full-text available
This is an exploratory work aimed at enhancing mood music in film entertainment. We present the design and implementation of a wearable haptic prototype system that aims to amplify mood music in film through haptic sensations (vibrotactile feedback). This approach could also have implications for hearing-impaired audiences, providing a newly enriched emotional experience while watching a movie. This paper reports on a set of three studies conducted to assess whether vibrotactile stimuli are able to enhance moods. Preliminary findings show that vibrotactile stimuli at low intensity and low frequency induce a sense of calmness in users, whereas vibrotactile stimuli at low intensity but higher frequency increase excitement. The combination of high-intensity and high-frequency vibrotactile stimuli, on the other hand, heightens tension. These findings support our position that vibrotactile feedback could be used to enrich the emotional aspects of the cinematic experience through haptic sensations.
Article
Full-text available
To enrich a movie-viewing experience with personalised tactile effects, the Touch Blanket was created. It is a flexible blanket for use on a seat or sofa, containing a 2D matrix of 176 small, individually controllable vibration motors attached to the fabric. One or two users sitting on the blanket can experience a range of tactile effects at their legs, arms, and back, synchronised to events in a movie.
Article
Full-text available
We present a model human cochlea (MHC), a sensory substitution technique and system that translates auditory information into vibrotactile stimuli using an ambient, tactile display. The model is used in the current study to translate music into discrete vibration signals displayed along the back of the body using a chair form factor. Voice coils facilitate the direct translation of auditory information onto the multiple discrete vibrotactile channels, which increases the potential to identify sections of the music that would otherwise be masked by the combined signal. One of the central goals of this work has been to improve accessibility to the emotional information expressed in music for users who are deaf or hard of hearing. To this end, we present our prototype of the MHC, two models of sensory substitution to support the translation of existing and new music, and some of the design challenges encountered throughout the development process. Results of a series of experiments conducted to assess the effectiveness of the MHC are discussed, followed by an overview of future directions for this research.
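Splitting music across discrete vibrotactile channels, as the MHC does, amounts to measuring how much energy falls in each frequency band and driving the corresponding actuator. A sketch using the Goertzel algorithm; the eight band centers and the normalization are illustrative assumptions, not the MHC's actual mapping:

```python
import math

def goertzel_power(samples, fs, freq):
    """Signal power at one frequency, via the Goertzel algorithm."""
    w = 2 * math.pi * freq / fs
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def channel_intensities(samples, fs,
                        centers=(50, 100, 200, 400, 800, 1600, 3200, 6400)):
    """Drive level per vibrotactile channel: power at each band-center
    frequency, normalized to the strongest band."""
    powers = [goertzel_power(samples, fs, f) for f in centers]
    peak = max(powers) or 1.0  # avoid division by zero on silence
    return [p / peak for p in powers]
```

With one voice coil per channel, a bass line would light up the low channels while a vocal or lead line appears on higher ones, which is what lets listeners separate parts that would otherwise mask each other.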
Article
Full-text available
Haptic technology has been widely employed in applications ranging from teleoperation and medical simulation to art and design, including entertainment, flight simulation, and virtual reality. Today there is a growing interest among researchers in integrating haptic feedback into audiovisual systems. A new medium emerges from this effort: haptic-audiovisual (HAV) content. This paper presents the techniques, formalisms, and key results pertinent to this medium. We first review the three main stages of the HAV workflow: the production, distribution, and rendering of haptic effects. We then highlight the pressing necessity for evaluation techniques in this context and discuss the key challenges in the field. By building on existing technologies and tackling the specific challenges of the enhancement of audiovisual experience with haptics, we believe the field presents exciting research perspectives whose financial and societal stakes are significant.
Article
Full-text available
Factor-analytic evidence has led most psychologists to describe affect as a set of dimensions, such as displeasure, distress, depression, excitement, and so on, with each dimension varying independently of the others. However, there is other evidence that rather than being independent, these affective dimensions are interrelated in a highly systematic fashion. The evidence suggests that these interrelationships can be represented by a spatial model in which affective concepts fall in a circle in the following order: pleasure (0°), excitement (45°), arousal (90°), distress (135°), displeasure (180°), depression (225°), sleepiness (270°), and relaxation (315°). This model was offered both as a way psychologists can represent the structure of affective experience, as assessed through self-report, and as a representation of the cognitive structure that laymen utilize in conceptualizing affect. Supportive evidence was obtained by scaling 28 emotion-denoting adjectives in 4 different ways: R. T. Ross's (1938) technique for a circular ordering of variables, a multidimensional scaling procedure based on perceived similarity among the terms, a unidimensional scaling on hypothesized pleasure–displeasure and degree-of-arousal dimensions, and a principal-components analysis of 343 Ss' self-reports of their current affective states.
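The circular ordering lends itself to a direct computation: each affect concept's angle yields pleasure–displeasure (valence) and arousal coordinates on a unit circle. A small sketch of that mapping; the function and dictionary names are invented:

```python
import math

# Angles (degrees) of Russell's eight affect concepts on the circumplex.
CIRCUMPLEX = {
    "pleasure": 0, "excitement": 45, "arousal": 90, "distress": 135,
    "displeasure": 180, "depression": 225, "sleepiness": 270,
    "relaxation": 315,
}

def affect_coords(term):
    """Return (valence, arousal) unit-circle coordinates for a concept."""
    a = math.radians(CIRCUMPLEX[term])
    return math.cos(a), math.sin(a)
```

For example, "excitement" lands at positive valence and positive arousal, while "depression" lands at negative values on both axes, matching the circle's geometry.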
Article
Full-text available
In this paper we outline the fundamentals for a tactile feedback system to be used in conjunction with open-air computer music performance devices. Some underlying physiological and perceptual mechanisms of haptics are examined, some currently available open-air controllers are reviewed, and previous technologies and experiments regarding haptic/tactile feedback are surveyed. Our VR/TX system is proposed as a solution for adding tactile feedback to open-air controllers; experiments show that the VR/TX vibrotactile stimulators provide invaluable perceptually-significant tactile feedback when used in conjunction with an open-air music controller. A typology of tactile sound events is also described, as well as the notion of a tactile simulation event (TSE).
Conference Paper
Full-text available
Music is a multi-dimensional experience informed by much more than hearing alone, and is thus accessible to people of all hearing abilities. In this paper we describe a prototype system designed to enrich the experience of music for the deaf by enhancing sensory input of information via channels other than in-air audio reception by the ear. The system has two main components: a vibrating 'Haptic Chair' and a computer display of informative visual effects that correspond to features of the music. The Haptic Chair provides sensory input of vibrations via touch. This system was developed based on an initial concept guided by information obtained from a background survey conducted with deaf people from multi-ethnic backgrounds and feedback received from two profoundly deaf musicians. A formal user study with 43 deaf participants suggested that the prototype system enhances the musical experience of a deaf person. All of the users preferred either the Haptic Chair alone (54%) or the Haptic Chair with the visual display (46%). The prototype system, especially the Haptic Chair, was so enthusiastically received by our subjects that it is possible this system might significantly change the way the deaf community experiences music.
Conference Paper
Full-text available
In this abstract, we present the Emoti-Chair, a sensory substitution system that brings a high resolution audio-tactile version of music to the body. The system can be used to improve music accessibility for deaf or hard of hearing people, while offering everyone the chance to experience sounds as tactile sensations. The model human cochlea (MHC) is the sensory substitution system that drives the Emoti-Chair. Music can be experienced as a tactile modality, revealing vibrations that originate from different instruments and sounds spanning the audio frequency spectrum along multiple points of the body. The system uses eight separate audio-tactile channels to deliver sound to the body, and provides an opportunity to experience a broad range of musical elements as physical vibrations.
Article
Full-text available
This paper presents a development history of a wearable, scalable vibrotactile stimulus delivery system. This history has followed a path from desktop-based, fully wired systems, through hybrid approaches consisting of a wireless connection from the host computer to a body-worn control box and wires to each tactor, to a completely wireless system employing Bluetooth technology to connect directly from the host to each individual tactor unit. Applications for such a system include delivering vibrotactile contact cues to users of virtual environments, providing directional cues in order to increase situational awareness in both real and virtual environments, and for general information display in wearable contexts. Through empirical study, we show that even a simple configuration, such as eight tactors arrayed around the torso, can be effective in increasing situational awareness in a building-clearing task, compared to users who perform the same task without the added cues.
Conference Paper
The Synesthesia Suit provides an immersive embodied experience in a Virtual Reality environment with vibro-tactile sensations across the entire body. Each vibro-tactile actuator provides not the simple vibration of a traditional game controller; instead, the haptic sensation is designed using the haptic design method we developed in the TECHTILE technology [Minamizawa et al. 2012]. In haptics research using multi-channel vibro-tactile feedback, Surround Haptics [Israr et al. 2012] proposed moving tactile strokes using multiple vibrators spaced on a gaming chair, and later Po2 [Israr et al. 2015], which creates illusory tactile sensations for gesture-based games by providing vibrations on the hand based on a psychophysical study.
Article
The coupled perception of sound and vibration is a well-known phenomenon during live pop or organ concerts. However, even during a symphonic concert in a concert hall, sound can excite perceivable vibrations on the surface of the body. This study analyzes the influence of audio-induced vibrations on the perceived quality of the concert experience. To this end, sound and seat vibrations are controlled separately in an audio reproduction scenario. Because the correlation between sound and vibration is naturally strong, vibrations are generated from audio recordings using various approaches. Different parameters during this process (frequency and intensity modifications) are examined in relation to their perceptual consequences using psychophysical experiments. It can be concluded that vibrations play a significant role in the perception of music.
Conference Paper
This paper describes the results of a survey conducted to involve consumers in the design of a haptic jacket as an HCI device. It also describes how the system was designed, implemented, and evaluated. The jacket contains an embedded computing system and haptic actuators programmed to affect its user emotionally and improve immersion in gaming and movie watching. The prototype is composed of six affective haptic components: chest and neck vibration, neck warmth, heartbeat simulation, arm vibration, and shivering. The paper shows that the system may be used in different applications with the proper interfacing software. Finally, a QoE experiment shows an improvement in user experience.
Article
In this paper we propose an architecture for rendering rich and high-resolution haptic feedback on the user's body while playing interactive games. The haptic architecture consists of three main elements, namely, haptic engine, haptic API/codec, and haptic display. The haptic engine extracts events from the game, assigns haptic feedback to these events, and sends coded packets to haptic API/codec. The haptic API/codec translates the coded packets and computes driving signals based on carefully evaluated algorithms derived from psychophysical modeling of tactile perception. The driving signals are then routed to the haptic display embedded with an array of vibratory transducers. A user feels high resolution and refined tactile sensations on the body through the display. We have integrated the Surround Haptics system with a driving simulation game to provide an enjoyable gaming experience.
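The three-stage architecture (haptic engine, haptic API/codec, haptic display) can be sketched as a pipeline of functions. All names, events, and mappings below are invented for illustration; the actual Surround Haptics implementation is not public:

```python
def haptic_engine(game_event):
    """Stage 1: map a game event to a coded haptic packet.

    The event names and effect assignments are hypothetical examples.
    """
    effects = {"collision": ("burst", 1.0), "engine_rev": ("rumble", 0.4)}
    name, intensity = effects.get(game_event, ("none", 0.0))
    return {"effect": name, "intensity": intensity}

def haptic_codec(packet, n_transducers=12):
    """Stage 2: translate a coded packet into per-transducer drive levels.

    A real codec would shape these with psychophysical models; here we
    simply broadcast the intensity to every transducer.
    """
    if packet["effect"] == "none":
        return [0.0] * n_transducers
    return [packet["intensity"]] * n_transducers

def haptic_display(levels):
    """Stage 3: clamp drive levels to the range the array accepts."""
    return [max(0.0, min(1.0, v)) for v in levels]
```

Chaining the three stages turns a single game event into signals for the whole vibratory-transducer array, mirroring the engine → codec → display flow the abstract describes.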
Conference Paper
Adding haptic stimulation to movies is a promising step in creating more emotionally immersive experiences. To explore the potential of this concept, we have created a wearable tactile jacket that is used to deliver movie-specific tactile stimuli to the viewer's body that are specifically targeted to influence the viewer's emotions. Immersion was evaluated in a user test using questionnaires and physiological measurements. The findings show promising effects of the haptic stimuli that need to be substantiated in further more refined user tests.
Conference Paper
In this paper, we present a web-based framework in which users can annotate a YouTube video with tactile feeling and experience that feeling by wearing a tactile device while watching/annotating the video. The tactile device is embedded into a wearable garment, a haptic jacket and a haptic arm band in this paper, and has a rectangular layout like a video screen. Therefore, the tactile information is represented as a sequence of rectangular arrays with time stamps and stored in XML format. Each element of the array represents a tactile intensity, i.e., a magnitude of actuation. In the framework, we provide a web-based authoring tool for adding tactile feeling while navigating a video and setting tactile intensity on a timeline. We also introduce a web browser with an embedded tactile device driver that activates the tactile device based on the annotated tactile information.
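The described storage format, rectangular intensity arrays with timestamps serialized as XML, can be sketched as follows. The element and attribute names are invented for illustration; the paper does not give its actual schema:

```python
import xml.etree.ElementTree as ET

def frames_to_xml(frames):
    """Serialize tactile annotation frames to an XML string.

    frames: list of (timestamp_ms, grid) pairs, where grid is a 2-D
    list of intensities (0-255) mirroring the rectangular actuator
    layout of the jacket or arm band.
    """
    root = ET.Element("tactile")
    for ts, grid in frames:
        frame = ET.SubElement(root, "frame", time=str(ts),
                              rows=str(len(grid)), cols=str(len(grid[0])))
        # Flatten the grid row-major; the driver can rebuild it
        # from the rows/cols attributes.
        frame.text = " ".join(str(v) for row in grid for v in row)
    return ET.tostring(root, encoding="unicode")
```

A browser-embedded driver could parse such a document and, at each timestamp, push the corresponding intensity array to the actuator grid.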