Michael Zbyszyński

Goldsmiths, University of London · Department of Computing

Doctor of Philosophy

About

19 Publications · 3,229 Reads · 100 Citations
Introduction
Michael Zbyszyński is a lecturer in the Department of Computing at Goldsmiths, University of London, where he is co-leader of the Electronic Music, Computing, and Technology program.

Publications (19)
Chapter
This chapter explores three systems for mapping embodied gesture, acquired with electromyography and motion sensing, to sound synthesis. A pilot study using granular synthesis is presented, followed by studies employing corpus-based concatenative synthesis, where small sound units are organized by derived timbral features. We use interactive machin...
Article
Full-text available
To better support creative software developers and music technologists' needs, and to empower them as machine learning users and innovators, the usability of and developer experience with machine learning tools must be considered and better understood. We review background research on the design and evaluation of application programming interfaces...
Conference Paper
Full-text available
This paper presents a method for mapping embodied gesture, acquired with electromyography and motion sensing, to a corpus of small sound units, organised by derived timbral features using concatenative synthesis. Gestures and sounds can be associated directly using individual units and static poses, or by using a sound tracing method that leverage...
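Unit selection in corpus-based concatenative synthesis, as described above, can be sketched as a nearest-neighbour lookup in a timbral-feature space. This is a minimal illustration with invented unit names and a two-dimensional feature vector, not the system from the paper:

```python
import math

# Each corpus unit: (unit_id, feature_vector), e.g. [spectral centroid, loudness],
# both normalised to 0..1. Values here are invented for illustration.
corpus = [
    ("u0", [0.2, 0.5]),
    ("u1", [0.8, 0.1]),
    ("u2", [0.4, 0.4]),
]

def nearest_unit(target, corpus):
    # Pick the unit whose derived timbral features are closest
    # (Euclidean distance) to the target feature vector.
    return min(corpus, key=lambda unit: math.dist(target, unit[1]))[0]

best = nearest_unit([0.35, 0.45], corpus)  # closest unit in feature space
```

In a full system, the incoming gesture features would be mapped (directly or via a trained model) to such a target vector on every analysis frame, and the selected units concatenated for playback.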
Conference Paper
Full-text available
We present a system that allows users to try different ways to train neural networks and temporal modelling to associate gestures with time-varying sound. We created a software framework for this and evaluated it in a workshop-based study. We build upon research in sound tracing and mapping-by-demonstration to ask participants to design gestures fo...
Conference Paper
Full-text available
In this paper, we consider the effect of co-adaptive learning on the training and evaluation of real-time, interactive machine learning systems, referring to specific examples in our work on action-perception loops, feedback for virtual tasks, and training of regression and temporal models. Through these studies we have encountered challenges when...
Conference Paper
Full-text available
User interaction with intelligent systems need not be limited to interaction where pre-trained software has intelligence "baked in." End-user training, including interactive machine learning (IML) approaches, can enable users to create and customise systems themselves. We propose that the user experience of these users is worth considering. Furth...
Conference Paper
Full-text available
fzero∼ is a monophonic, wavelet-based, real-time fundamental estimation object released as part of the standard Max 6 distribution. It was designed to provide usable results in a large variety of cases with a minimum of parameter modification by the user. It implements a Fast Lifting Wavelet Transform (FLWT) using the Haar Wavelet. The object prov...
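The core Haar lifting step behind such a transform is simple: each level halves the signal into pairwise averages (approximation) and differences (detail). This is a minimal sketch of that step, not the fzero∼ implementation itself:

```python
def haar_step(x):
    # One level of the Haar lifting transform. Assumes len(x) is even.
    # approx: pairwise averages (a downsampled, smoothed copy of the signal)
    # detail: pairwise differences (the high-frequency residue)
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [x[2 * i] - x[2 * i + 1] for i in range(len(x) // 2)]
    return approx, detail

a, d = haar_step([1.0, 3.0, 2.0, 2.0])
```

A wavelet-based pitch estimator applies this step repeatedly and looks for periodicity in the successively downsampled approximation signals.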
Article
Full-text available
We summarize a decade of musical projects and research employing Wacom digitizing tablets as musical controllers, discussing general implementation schemes using Max/MSP and OpenSoundControl, and specific implementations in musical improvisation, interactive sound installation, interactive multimedia performance, and as a compositional assistant. W...
Article
This paper outlines recent developments in pedagogical software resources at CNMAT. We describe the Max/MSP/Jitter Depot: an organized system where software can be stored and shared. The Depot offers a wide range of support and includes basic programming tips, modular programming units for copy and paste, interactive tutorials on all aspects of...
Conference Paper
Full-text available
Software and hardware enhancements to an electric 6-string cello are described with a focus on a new mechanical tuning device, a novel rotary sensor for bow interaction and control strategies to leverage a suite of polyphonic sound processing effects.
Article
We have measured the total system latencies of MacOS 10.2.8, Red Hat Linux (2.4.25 kernel with low-latency patches), and Windows XP from stimulus in to audio out, with stimuli including analog and digital audio, and the QWERTY keyboard. We tested with a variety of audio hardware interfaces, audio drivers, buffering and related configuration setting...
Article
Full-text available
This paper proposes the creation of a method book for tablet-based instruments, evaluating pedagogical materials for traditional instruments as well as research in human-computer interaction and tablet interfaces.
Article
Full-text available
Open Sound Control (OSC) is a protocol for communication among computers, sound synthesizers, and other multimedia devices that is optimized for modern networking technology. Compared to protocols such as MIDI, OSC's advantages include interoperability, accuracy, flexibility, and enhanced organization and documentation.
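The OSC 1.0 binary message layout can be sketched in a few lines: a null-terminated address pattern padded to a 4-byte boundary, a padded type-tag string, then big-endian arguments. The encoder below is a hypothetical illustration (the address and value are invented), not code from the paper:

```python
import struct

def osc_pad(s: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary.
    return s + b"\x00" * (4 - len(s) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    # Encode an OSC 1.0 message whose arguments are all float32.
    type_tags = "," + "f" * len(floats)  # e.g. ",f" for one float argument
    msg = osc_pad(address.encode()) + osc_pad(type_tags.encode())
    for value in floats:
        msg += struct.pack(">f", value)  # arguments are big-endian
    return msg

packet = osc_message("/synth/freq", 440.0)  # 20-byte OSC packet
```

Such a packet would typically be sent as a UDP datagram to a synthesizer listening for the `/synth/freq` address pattern.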

Projects (3)