June 2021
June 2021
April 2020
January 2020
Advances in Intelligent Systems and Computing
This paper describes a human subject study that compared the limits at which humans could communicate information through pursuit tracking gestures versus pointing (i.e. tapping) gestures. These limits were measured by estimating the channel capacity of the human motor-control system for pursuit tracking versus pointing along a single axis. A human-computer interface was built for this purpose, consisting of a touch strip sensor co-located with a visual display. Bandwidth-limited Gaussian noise signals were used to create targets for subjects to follow, enabling estimation of the channel capacity at bandwidth limits ranging from 0.12 Hz to 12 Hz. Results indicate that for lower frequencies of movement (from 0.12 Hz to 1 Hz or 1.5 Hz), pointing gestures with such a sensor may tend to convey more information, whereas at higher frequencies (from 2.3 Hz or 2.9 Hz to as high as 12 Hz), pursuit tracking gestures will afford higher channel capacities. In this work, the direct comparison between pursuit tracking and pointing was made possible through application of the Nyquist sampling theorem. This study forms a methodological basis for comparing a wide range of continuous sensors and human capacities for controlling them. In this manner, the authors are aiming to eventually create knowledge useful for theorizing about and creating new kinds of computer-based musical instruments using diverse, ergonomic arrangements of continuous sensors.
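As a rough illustration of the comparison the abstract describes (not the paper's exact procedure), the Shannon-Hartley capacity of pursuit tracking can be set against a pointing rate derived from the Nyquist sampling theorem: a signal bandlimited to B Hz is fully determined by 2B samples per second, so 2B taps per second is the comparable pointing rate. A minimal sketch, in which the SNR and bits-per-tap values are hypothetical placeholders:

```python
import numpy as np

def tracking_capacity(bandwidth_hz, snr_linear):
    # Shannon-Hartley: capacity (bits/s) of a Gaussian channel
    return bandwidth_hz * np.log2(1.0 + snr_linear)

def pointing_capacity(bandwidth_hz, bits_per_tap):
    # By the Nyquist sampling theorem, a signal bandlimited to B Hz
    # is fully determined by 2B samples/s, so pointing at 2B taps/s
    # is comparable to pursuit tracking at bandwidth B.
    return 2.0 * bandwidth_hz * bits_per_tap

# Illustrative comparison at a low and a high movement bandwidth
# (the SNR and bits-per-tap figures below are made up, not measured):
for b, snr, bpt in [(0.5, 100.0, 3.0), (5.0, 2.0, 0.2)]:
    print(b, tracking_capacity(b, snr), pointing_capacity(b, bpt))
```

The crossover the study reports would correspond to the bandwidth at which the two quantities trade places, given empirically measured SNR and bits-per-tap values.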
March 2019
The Journal of the Acoustical Society of America
The design of a Spatially Distributed Vibrotactile Actuator Array (SDVAA) is presented. It employs a multitude of vibrotactile actuators in order to communicate a larger amount of information than is possible using a single actuator. The SDVAA is currently being used for applications in music-to-vibrotactile sensory augmentation. While prior related projects have focused more on sensory substitution, this project aims only to add to a person's experience by augmenting a multimedia presentation with vibrotactile feedback. Because haptic perception is fundamentally different from auditory perception, it makes sense to rearrange the information transmitted to the haptic senses. For example, while auditory perception is limited to approximately the range 20 Hz to 20 kHz, tactile perception is limited primarily to the range 0 Hz to 800 Hz (if not somewhat higher). Accordingly, one approach being considered for converting auditory signals to vibrotactile signals is pitch shifting. Project results relating to music composed specifically for the SDVAA, as well as general musical vibrotactile prototyping concepts, will be presented.
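One way to picture the pitch-shifting approach mentioned above is naive resampling, which lowers every frequency by a constant factor (and, unlike a phase-vocoder pitch shifter, also stretches duration). A minimal sketch, not the project's actual converter:

```python
import numpy as np

def pitch_shift_down(signal, factor):
    """Naive pitch shift by resampling: reading the signal 'factor'
    times more slowly lowers all frequencies by 'factor' (and
    stretches duration accordingly). factor > 1 shifts downward."""
    n = len(signal)
    # Fractional read positions into the original signal
    positions = np.arange(0, n - 1, 1.0 / factor)
    return np.interp(positions, np.arange(n), signal)
```

For example, shifting a 440 Hz tone down by a factor of 8 yields 55 Hz, well inside the tactile band the abstract cites; a duration-preserving converter would instead need something like a phase vocoder.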
March 2019
The Journal of the Acoustical Society of America
Actuated instruments can be created in a variety of ways. An interesting class of actuated instruments is feedback-controlled acoustic musical instruments. A robust way to create these is to employ pairs of collocated sensors and actuators, and then to use feedback control to simulate virtual physical systems. In the linear and time-invariant case, this means that the feedback control functions can be designed to be positive real. Accordingly, the physical properties of the instrument become adjustable via the feedback control. An interesting case arises when the numbers of actuators and sensors differ. The actuators and sensors can, however, still potentially be collocated, which results in the individual transfer functions from the actuators to the sensors (e.g., the mobilities) being positive real. In some special cases, stable feedback control can still be attained for a wide variety of feedback gains. The Feedback Guitar serves as an interesting case study. It has one actuator, which is approximately collocated with six piezoelectric sensors, one for each string. Using any non-negative linear combination of the sensor signals, an approximately positive real mobility can be obtained, which can enable stable feedback control for a wide variety of feedback gains.
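The claim that a non-negative linear combination of positive real mobilities remains positive real can be checked numerically: the driving-point mobility of a mass-spring-damper, s/(m s^2 + d s + k), has non-negative real part on the imaginary axis, and non-negative weighting preserves this. A sketch with purely illustrative string parameters (not measurements from the Feedback Guitar):

```python
import numpy as np

def msd_mobility(omega, m, d, k):
    # Driving-point mobility V(s)/F(s) = s / (m s^2 + d s + k),
    # evaluated on the imaginary axis s = j*omega.
    s = 1j * omega
    return s / (m * s**2 + d * s + k)

omega = np.linspace(0.1, 1000.0, 2000)
# Hypothetical per-string mobilities (illustrative parameters only)
mobilities = [msd_mobility(omega, m=0.005, d=0.02, k=(2*np.pi*f)**2 * 0.005)
              for f in (82, 110, 147, 196, 247, 330)]
weights = [0.5, 1.0, 0.2, 0.8, 0.3, 1.0]   # any non-negative gains
combined = sum(w * h for w, h in zip(weights, mobilities))
# Positive real on the j-axis: Re{H(j*omega)} >= 0 for all omega
assert np.all(combined.real >= -1e-12)
```

Each individual mobility has Re{H(jw)} = d*w^2 / |denominator|^2 >= 0, so any non-negative weighting of them keeps the real part non-negative, which is the property exploited for stable feedback control.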
May 2018
Digital musical instruments yielding force feedback were designed and employed in a case study with the Laptop Orchestra of Louisiana. The advantages of force feedback are illuminated through the creation of a series of musical compositions. Based on these and a small number of other prior music compositions, the following compositional approaches are recommended: providing performers with precise, physically intuitive, and reconfigurable controls, using traditional controls alongside force-feedback controls as appropriate, and designing timbres that sound uncannily familiar but are nonetheless novel. Video-recorded performances illustrate these approaches, which are discussed by the composers.
May 2017
This paper presents Invisible, a critical digital artwork, as a performance within a conceptual framework derived from performance studies. Invisible exemplifies how digital art can reflect and influence critical thinking by focusing on three key features of performance studies: the constitutive, the epistemic, and the critical. This work intersects with Human-Computer Interaction (HCI) in a digital art context, addressing the inspirational roles of digital art.
May 2017
The Journal of the Acoustical Society of America
Using a musical instrument that is augmented with haptic feedback, a performer can haptically interact with a modal synthesis-based sound synthesizer. This subject is explored using the resonators object in Synth-A-Modeler. For synthesizing a single mode of vibration, the resonators object should be configured with a single frequency in Hertz, a single T60 exponential decay time in seconds, and a single equivalent mass in kg. Changing the mass not only changes the level of the output sound; it also changes how the mode feels when touched using haptic feedback. The resonators object can further be configured to represent a driving-point admittance corresponding to arbitrarily many modes of vibration. In this case, each mode of vibration is specified by its own frequency, decay time, and equivalent mass. Since the modal parameters can be determined using an automated procedure, it is possible (within limits) to approximately calibrate modal models using recordings of sounds that decay approximately exponentially. Various model structures incorporating the resonators object are presented in a variety of contexts. The musical application of these models is demonstrated alongside presentation of compositions that use them.
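A single mode configured by frequency, T60, and equivalent mass can be sketched as a generic two-pole resonator. This is an illustrative stand-in, not the actual Synth-A-Modeler resonators implementation; in particular, the 1/mass output scaling is a simplifying assumption:

```python
import numpy as np

def modal_filter(x, fs, freq, t60, mass):
    """Single vibration mode as a two-pole resonator.
    freq: mode frequency (Hz); t60: 60 dB decay time (s);
    mass: equivalent mass (kg), scaling the output level as 1/mass."""
    r = 10.0 ** (-3.0 / (t60 * fs))        # pole radius: -60 dB after t60
    theta = 2.0 * np.pi * freq / fs        # pole angle sets the frequency
    a1, a2 = 2.0 * r * np.cos(theta), -r * r
    y = np.zeros(len(x))
    y1 = y2 = 0.0
    for n, xn in enumerate(x):
        yn = xn / mass + a1 * y1 + a2 * y2
        y[n] = yn
        y1, y2 = yn, y1
    return y
```

An impulse through this filter rings at freq and its envelope falls by 60 dB over t60 seconds; summing several such filters with their own (freq, t60, mass) triples approximates the multi-mode driving-point admittance described above.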
May 2017
The Journal of the Acoustical Society of America
A wide variety of electronic sensors can be used for designing new digital musical instruments and other human-computer interfaces. However, presently human abilities for continuously controlling such sensors are not well quantified. The field of information theory suggests that a human together with a user interface can be modeled as a communication channel. Previously, Fitts' Law used a discrete communications channel to model information conveyed by a human pointing at discrete targets. In contrast, the present work employs a continuous communications channel to model a human continuously controlling an analog-valued sensor. The Shannon-Hartley theorem implies that the channel capacity (e.g., HCI throughput) can be estimated by asking human subjects to perform gestures that match idealized, bandlimited Gaussian “target gestures” across a range of bandwidths. Then, the signal-to-noise ratio of the recorded gestures determines the channel capacity (e.g., HCI throughput). This approach is tested on human users alternately operating simple analog sensors. Suggestions are made for creating knowledge about user interfaces that could potentially transmit an enhanced amount of information to a computer.
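The estimation procedure described above can be sketched as follows, assuming the recorded gesture is modeled as a scaled copy of the target plus noise. This is a simplified illustration, not the study's exact estimator:

```python
import numpy as np

def throughput_bits_per_s(target, recorded, bandwidth_hz):
    """Estimate channel capacity (bits/s) from a bandlimited target
    gesture and the user's recorded attempt, via Shannon-Hartley.
    Model: recorded = alpha * target + noise."""
    x = target - np.mean(target)
    y = recorded - np.mean(recorded)
    alpha = np.dot(x, y) / np.dot(x, x)   # least-squares fit of y ~ alpha*x
    noise = y - alpha * x                 # residual treated as channel noise
    snr = np.dot(alpha * x, alpha * x) / np.dot(noise, noise)
    return bandwidth_hz * np.log2(1.0 + snr)
```

Repeating this across target bandwidths (0.12 Hz to 12 Hz in the studies above) yields a throughput curve for each sensor.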
... This study employs a model [8] developed in recent studies of pursuit tracking with continuous-control sensors as a comparison [9] and in relation to pointing [10], extending earlier work [11][12][13][14][15]. In the present model, it is assumed that a performer attempting to express a signal as a continuous input to a sensor apparatus will generate a signal with some difference between these two, which may be attributed to neuromotor noise, interference, sensor noise, or other causes of error (see Figure 1). The user's input signal is modeled as being attenuated by a constant factor, which represents the deterministic component of the user's performance. ...
April 2020
January 2020
Advances in Intelligent Systems and Computing
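The model described in the excerpt above, in which the user's input is the target signal attenuated by a constant factor plus a noise term, suggests a simple least-squares fit; a minimal sketch (illustrative, not the cited papers' exact method):

```python
import numpy as np

def fit_attenuation(x, y):
    """Least-squares estimate of the attenuation factor alpha in the
    model y = alpha * x + noise (x: target signal, y: user's input).
    Returns alpha and the residual (the modeled noise term)."""
    alpha = np.dot(x, y) / np.dot(x, x)
    return alpha, y - alpha * x
```

The residual's power relative to the attenuated signal's power then quantifies the error attributed to neuromotor noise, interference, or sensor noise.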
... The integration of force feedback into DMIs has also been shown to reduce reaction times and enable more precise control of musical gestures, further blurring the lines between physical and digital music-making [2]. Altukhaim et al. [3] demonstrated that synchronized haptic feedback significantly strengthens the sense of ownership over virtual body parts. ...
May 2018
... A third wave of mass-interaction tools has appeared in the last decade, driven by open-source initiatives: HSP (haptic signal processing) [6] provided a first means for audio-rate simulation in Max/MSP, whereas Synth-a-modeler [10] provides a Faust-based [58] engine allowing compilation for a variety of targets and platforms. It has since been extended with a modelling user interface and bridges allowing for interconnection between mass-interaction, waveguide, and modal synthesis elements [8]. Recent developments have yielded new prototypes for 3D mass-interaction frameworks with audio and haptic capabilities [47,73] (Fig. 5), the migen toolkit for efficient simulation in Max/MSP [48], as well as Ruratae [1], a system offering a novel approach to sound-producing 3D mass-interaction models (Fig. 6: screenshot of the Ruratae environment, allowing dynamic creation/playing of 3D sounding mass-interaction models). ...
October 2016
... HCI researchers and design practitioners discuss how interaction modalities in the digital arts intersect with HCI topics such as screen-based interactions, embodied interaction, virtual and augmented environments, games, and data visualization (Nam et al., 2017). Discussions about artistic design and its interaction solutions have taken place since 2022 during the international HCI event called CHI, promoted by the Association for Computing Machinery (ACM), specifically in an annual workshop called CHI(Art). ...
May 2017
... As discussed by Cantrell, the success of a hacked DMI should not be measured in terms of audience experience or techno-scientific advancements; instead, the more or less tacit goal of instrument hacking is "to promote and explore a communal egalitarian embrace of trial and error and an embodied focus on technical and artistic learning." [27] Given the NIME community's stance of self-designing the instrument that one will compose for and/or perform with, it is not surprising that the main efforts to promote a maker attitude among musicians come from members of the NIME community, who have developed a number of open-source platforms for creating musical instruments, the most notable examples being Satellite CCRMA [28] and Bela. In the remainder of the chapter, we introduce Bela and discuss the advantages of using open hardware and free software in electronic music pedagogy. ...
Reference:
Composing by Hacking
March 2017
... This study employs a model [8] developed in recent studies of pursuit tracking with continuous-control sensors as a comparison [9] and in relation to pointing [10], extending earlier work [11][12][13][14][15]. In the present model, it is assumed that a performer attempting to express a signal as a continuous input to a sensor apparatus will generate a signal with some difference between these two, which may be attributed to neuromotor noise, interference, sensor noise, or other causes of error (see Figure 1). The user's input signal is modeled as being attenuated by a constant factor, which represents the deterministic component of the user's performance. ...
October 2016
... Acoustic viability is a digital design principle that recognizes the importance of integrating nuance and expressive control into digital instruments, using traditional acoustic instruments as inspiration [4,5]. Traditional acoustic musical instruments have been refined over long periods, often spanning performers' lifetimes, whole centuries, or even longer. ...
October 2016
... However, as we are a community with these artefacts at the core of our research, this lack of record -and even more the lack of urgency around it -is bewildering. (Interestingly, a workshop in 2016 [43] pondered what a documentation system for NIME instruments might look like, but, perhaps ironically, no record of the workshop or its outcomes exists.) NIME's historical record only really reflects published proceedings. ...
July 2016
... Synth-A-Modeler (SaM) Compiler by Berdahl and Smith (2012) and Designer by Berdahl et al. (2016) together constitute an interactive development environment for designing forcefeedback interactions with physical models. With SaM, designers interconnect objects from various paradigms (mass interaction, digital waveguides, modal resonators) in a visual programming canvas reminiscent of electronic schematics and mechanical diagrams and compile applications generated with the Faust digital-signal-processing (DSP) framework. ...
July 2016
Lecture Notes in Computer Science