Andrew McPherson’s research while affiliated with Imperial College London and other places


Publications (128)


Shifting Ambiguity, Collapsing Indeterminacy: Designing with Data as Baradian Apparatus
  • Article

December 2024 · 2 Reads · 1 Citation

ACM Transactions on Computer-Human Interaction

Courtney N. Reed · Adan L. Benito · Franco Caspe · Andrew P. McPherson

This article examines how digital systems designers distil the messiness and ambiguity of the world into concrete data that can be processed by computing systems. Using Karen Barad's agential realism as a guide, we explore how data is fundamentally entangled with the tools and theories of its measurement. We examine data-enabled artefacts acting as Baradian apparatuses: they do not exist independently of the phenomenon they seek to measure but rather collect and co-produce observations from within their entangled state; the phenomenon and the apparatus co-constitute one another. Connecting Barad's quantum view of indeterminacy to the prevailing HCI discourse on the opportunities and challenges of ambiguity, we suggest that the very act of trying to stabilise a conceptual interpretation of data within an artefact has the paradoxical effect of amplifying and shifting ambiguity in interaction. We illustrate these ideas through three case studies from our own practices of designing digital musical instruments (DMIs). DMIs necessarily encode symbolic and music-theoretical knowledge as part of their internal operation, even though conceptual knowledge is not their intended outcome. In each case, we explore the nature of the apparatus, what phenomena it co-produces, and where the ambiguity lies, suggesting approaches for design using these abstract theoretical frameworks.


Sonic Entanglements with Electromyography: Between Bodies, Signals, and Representations

September 2024 · 16 Reads

Courtney N Reed · Landon Morrison · Andrew P McPherson · [...]

This paper investigates sound and music interactions arising from the use of electromyography (EMG) to instrumentalise signals from muscle exertion of the human body. We situate EMG within a family of embodied interaction modalities, where it occupies a middle ground: it is considered a "signal from the inside" compared with external observations of the body (e.g., motion capture), but also seen as more volitional than neurological states recorded by electroencephalography (EEG). To understand the messiness of gestural interaction afforded by EMG, we revisit the phenomenological turn in HCI, reading Paul Dourish's work on the transparency of "ready-to-hand" technologies against the grain of recent posthumanist theories, which offer a performative interpretation of musical entanglements between bodies, signals, and representations. We take music performance as a use case, reporting on the opportunities and constraints posed by EMG in workshop-based studies of vocal, instrumental, and electronic practices. We observe that our diverse range of musical subjects consistently challenged notions of EMG as a transparent tool that directly registered the state of the body, reporting instead that it took on "present-at-hand" qualities, defamiliarising the performer's own sense of themselves and reconfiguring their embodied practice.


Figure captions recovered from the full-text preview:

Figure 1: An electrical equivalent of a moving coil transducer, modelling both the electrical and mechanical characteristics of the transducer. Adapted from Borwick [5].
Figure 2: Frequency response of the voice coil transducer's transfer function when driven with a current source, measuring the voltage across the transducer in both a damped (red) and undamped (blue) state. Note that the low-frequency roll-off seen in both plots is due to the AC coupling of the measurement interface.
Figure 5: System of two plates connected via springs (left) or rattles (right).
Figure 6: Example of the nonlinear potential (Eq. 2) and corresponding force (Eq. 1). Here, K = 10^3, γ = 1.2, β = 0.1. The shaded area, whose width is given by 2β, represents a dead zone (no force exerted), yielding intermittent contact between the plates.
A Self-Sensing Haptic Actuator for Tactile Interaction with Physical Modelling Synthesis
  • Conference Paper
  • Full-text available

September 2024 · 83 Reads

The use of transducers to excite physical modelling synthesisers with real-world audio signals is a well-established practice within the digital musical instrument design community, yet it is normally presented as a unidirectional process: energy is transferred into the system from human to instrument. In this paper, a novel approach to tactile interaction with physical modelling synthesis is presented, through the use of a self-sensing vibrotactile transducer. This enables simultaneous collocated sensing and haptic actuation with a single moving coil transducer. A current drive amplifier is used for haptic actuation, using signals derived from the physical modelling synthesiser. The varying impedance of the transducer (due to changes in the mechanical damping) enables the sensing of force applied upon the device whilst also acting as a pickup to excite the physical model, all with simultaneous haptic actuation. A digital filter equivalent of the transducer's impedance is used to prevent feedback in the system, allowing simultaneous excitation and haptic actuation without self-oscillation.
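As a rough illustration of the self-sensing principle in this abstract: model the transducer's nominal electrical impedance as a digital filter driven by the known current, and subtract the filter's predicted voltage from the measured voltage. The residual then tracks changes in mechanical damping (e.g. a finger pressing on the actuator) and can excite the physical model without the haptic drive signal feeding back. A minimal sketch, with placeholder filter coefficients rather than the filter the paper derives from the transducer's impedance:

```python
import numpy as np
from scipy.signal import lfilter

# Placeholder second-order model of the transducer's nominal (undamped)
# electrical impedance; the actual coefficients would be identified
# from measurements of the transducer.
b_imp = np.array([1.0, -1.8, 0.81])
a_imp = np.array([1.0, -1.7, 0.72])

def sense(i_drive, v_measured):
    """Residual voltage: measured minus the impedance-model prediction.

    i_drive: current sent to the transducer (haptic drive signal)
    v_measured: voltage observed across the transducer terminals
    """
    v_predicted = lfilter(b_imp, a_imp, i_drive)  # voltage the nominal model predicts
    return v_measured - v_predicted               # deviation reflects external contact/damping

# The residual can be fed to the physical modelling synthesiser as an
# excitation signal while the same transducer renders the haptic output.
```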

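The Figure 6 caption above suggests a contact law with a dead zone: no force inside a gap of width 2β, and a power-law force outside it. A minimal sketch under that assumption (the paper's actual equations (1) and (2) are not reproduced on this page, so the exact form below is a guess):

```python
import numpy as np

K, gamma, beta = 1e3, 1.2, 0.1  # stiffness, nonlinearity exponent, half-width of dead zone

def contact_force(eta):
    """Force between the plates as a function of their relative displacement eta."""
    compression = np.maximum(np.abs(eta) - beta, 0.0)  # zero inside the dead zone
    return K * np.sign(eta) * compression ** gamma

def contact_potential(eta):
    """Potential whose gradient gives contact_force, for energy-based analysis."""
    compression = np.maximum(np.abs(eta) - beta, 0.0)
    return K / (gamma + 1.0) * compression ** (gamma + 1.0)

# Inside the dead zone (|eta| < beta) the plates do not interact,
# which is what produces the intermittent, rattling contact.
assert contact_force(0.05) == 0.0
```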

Figure 3: Real-time timbre remapping experiment overview. We learn a mapping network m_ϕ(·) to predict synthesizer parameter modulations θ_mod to create timbre analogies. Feature differences y are measured between two input sounds (x_a, x_b), and m_ϕ(·) learns to modulate a synthesizer preset θ_pre to create a synthesized sound pair (x_c, x_d) with a feature difference ŷ. The feature difference loss measures the error between y and ŷ. Real-time remapping is enabled by onset features f0(·), which are measured on a short window of audio at a detected onset and are used as input to the mapping network.
Real-time Timbre Remapping with Differentiable DSP

July 2024 · 27 Reads

Timbre is a primary mode of expression in diverse musical contexts. However, prevalent audio-driven synthesis methods predominantly rely on pitch and loudness envelopes, effectively flattening timbral expression from the input. Our approach draws on the concept of timbre analogies and investigates how timbral expression from an input signal can be mapped onto controls for a synthesizer. Leveraging differentiable digital signal processing, our method facilitates direct optimization of synthesizer parameters through a novel feature difference loss. This loss function, designed to learn relative timbral differences between musical events, prioritizes the subtleties of graded timbre modulations within phrases, allowing for meaningful translations in a timbre space. Using snare drum performances as a case study, where timbral expression is central, we demonstrate real-time timbre remapping from acoustic snare drums to a differentiable synthesizer modeled after the Roland TR-808.
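The feature difference loss can be sketched compactly: it penalises the mismatch between the timbral change across the input pair and the timbral change across the synthesized pair. In the sketch below, `features` (a differentiable audio feature extractor) is a placeholder callable, not the paper's actual module:

```python
import torch

def feature_difference_loss(features, x_a, x_b, x_c, x_d):
    """Compare relative timbral change, not absolute feature targets.

    (x_a, x_b): pair of input sounds; (x_c, x_d): corresponding synthesized pair.
    """
    y = features(x_b) - features(x_a)      # measured timbral difference
    y_hat = features(x_d) - features(x_c)  # synthesized timbral difference
    return torch.mean((y - y_hat) ** 2)    # error between the two differences
```

Because the loss is defined on differences, it preserves graded timbre modulations within a phrase rather than pulling every event toward a fixed absolute target.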


Sonic Entanglements with Electromyography: Between Bodies, Signals, and Representations

July 2024 · 67 Reads · 1 Citation


Auditory imagery ability influences accuracy when singing with altered auditory feedback

February 2024 · 31 Reads

Musicae Scientiae

In this preliminary study, we explored the relationship between auditory imagery ability and the maintenance of tonal and temporal accuracy when singing and audiating with altered auditory feedback (AAF). Actively performing participants sang and audiated (sang mentally but not aloud) a self-selected piece in AAF conditions, including upward pitch-shifts and delayed auditory feedback (DAF), and with speech distraction. Participants with higher self-reported scores on the Bucknell Auditory Imagery Scale (BAIS) produced a tonal reference that was less disrupted by pitch shifts and speech distraction than musicians with lower scores. However, there was no observed effect of BAIS score on temporal deviation when singing with DAF. Auditory imagery ability was not related to the experience of having studied music theory formally, but was significantly related to the experience of performing. The significant effect of auditory imagery ability on tonal reference deviation remained even after partialling out the effect of experience of performing. The results indicate that auditory imagery ability plays a key role in maintaining an internal tonal center during singing but has at most a weak effect on temporal consistency. In this article, we outline future directions in understanding the multifaceted role of auditory imagery ability in singers’ accuracy and expression.
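The "partialling out" mentioned above corresponds to a first-order partial correlation between imagery ability and tonal deviation, controlling for performing experience. A minimal illustration using the standard formula (variable names are ours, not the authors' analysis code):

```python
import numpy as np

def partial_correlation(x, y, z):
    """Correlation between x and y after removing the linear effect of z."""
    r_xy = np.corrcoef(x, y)[0, 1]
    r_xz = np.corrcoef(x, z)[0, 1]
    r_yz = np.corrcoef(y, z)[0, 1]
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Example: x = BAIS scores, y = tonal reference deviation, z = performing experience.
```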


Music, Dementia, Technology: You Said, We Did. Project Report

December 2023 · 18 Reads · 1 Citation

Working with a participatory design approach, this report details how we (the Music, Dementia, Technology team at The University of Sheffield) have incorporated viewpoints from people living with dementia and those who provide care for them into thinking about the design of musical activities and new musical technologies. This work is funded by a UKRI Future Leaders Fellowship. The collection of data during this project has been approved by the University of Sheffield (ethics approval number 051300). We have express permission from the individuals involved to include their images in reports and web publications resulting from this research.



Citations (77)


... Our proposal accounts for the work of scholars who have discussed how people and IoT devices are increasingly intertwined, and who have warned about the threats that this entanglement poses [36]. Notably, the formulation of the IoMusTP vision is situated in the ongoing debates promoted by the "Entanglement" wave in HCI research [37], which is also impacting the field of music technology [38], [39], [40]. This wave deals with the increasingly blurry borders between humans and devices and the distribution of agency among these actors, positing that we may become what we build for ourselves and questioning whether our designs are who we want to be. ...

Reference:

Entangled Internet of Musical Things and People: A More-Than-Human Design Framework for Networked Musical Ecosystems
Entangling Entanglement: A Diffractive Dialogue on HCI and Musical Interactions
  • Citing Conference Paper
  • May 2024

... Involving stakeholders at the design stage, and feeding back to disseminate knowledge as well as to enable impact, are ways to make research better aligned with the questions, issues and concerns of a broader and more diverse population (e.g. MacRitchie et al., 2023). The studies in this special issue did not explicitly employ co-design. ...

Music, Dementia, Technology: You Said, We Did. Project Report
  • Citing Research
  • December 2023

... There is an opportunity for technology to facilitate more equitable group musical interactions that can better allow people with different experiences and abilities to succeed together (see for example, group music-making in Taylor, Milne, & MacRitchie, 2023; Favilla & Pedell, 2013; music-making in pairs in Houben et al., 2020). As much as the success of these technologies depends on the level of support and expertise required to operate them (Nicol, Loehr, Christensen, Lang, & Peacock, 2024), the perception of materials, sensors and controls can also guide how a device will be used (Pigrem, MacRitchie, & McPherson, 2023). ...

Instructions Not Included: Dementia-Friendly Approaches to DMI Design

... Since developing new expertise on an instrument can take years, digital musical instrument designers have turned to strategies to repurpose existing skills on new instruments (e.g. [17][18][19]), often through the augmentation of familiar instruments. In addition to building on existing sensorimotor skills, augmented instruments might connect to existing cultural references, though a new instrument need not be a literal augmentation of an existing instrument to achieve these goals. ...

Design for auditory imagery: altering instruments to explore performer fluency

... We firmly believe that fostering a sense of independence and belonging in the visually impaired community is not just a goal; it is a societal responsibility. With our pioneering method, we are dedicated to linking the physical challenges faced by the visually impaired with the limitless potential for an active and socially connected existence [9][10][11][12]. ...

Exploring the Opportunities of Haptic Technology in the Practice of Visually Impaired and Blind Sound Creatives

Arts

... Cognition links refer to visual schemas we begin to learn directly after birth, which fuel the creation of metaphors in adulthood (Hurtienne et al., 2015). In strategic meetings, for instance, cognition relies on selecting individual objects for use and rearranging them (Bakker et al., 2012; Reed et al., 2023). Other cognitive links in these meetings are, for example, orientational: up-down, near-far, front-back, centre-periphery, big-small, bright-dark, or light-heavy. ...

Negotiating Experience and Communicating Information Through Abstract Metaphor
  • Citing Conference Paper
  • April 2023

... This outcome might lead one to evaluate EMG as a bad tool for measuring technique; in interacting with it, technique changes: the phenomena of vocal practice cease to exist and can only be observed in the way they used to be. The body works in the background; not all vocal activity and muscular movement is heard, and many movements are chained together to produce what is. ...

The Body as Sound: Unpacking Vocal Embodiment through Auditory Biofeedback
  • Citing Conference Paper
  • February 2023

... For example, the digital music recording software Cubase can record the performer's voice for waveform analysis, and performers, aided by visualisation of the waveform, can master appropriate vocal performance skills [15][16]. Another example is the cell phone software Edius, which can integrate songs, videos, and images so that performers can fully grasp the design style of a song, which is conducive to a perfect interpretation of the entire vocal work [17][18]. With the continuous development of digital information technology, software such as CoolEdie Pro, Mw3, and CakeWalk 6.0 is applied as a vocal performance aid, shaping the performer's sense of rhythm and playing style [19][20][21]. ...

Embrace the Weirdness: Negotiating Values Inscribed into Music Technology
  • Citing Article
  • October 2022

Computer Music Journal

... The first of these endeavours was dedicated to analysing the "E": what is expression for NIME [24]? In recent years, the "M" was subject to a similar investigation: what is music for NIME [71]? Although it is not included in the title of their manuscript, Marquez-Borbon and Stapleton [64] reflected upon the "N" in their commentary on the re-edition of their NIME 2014 paper in A NIME Reader: ...

The M in NIME: Motivic analysis and the case for a musicology of NIME performances
  • Citing Conference Paper
  • June 2022

... The design sessions took place through weekly meetings over 12 weeks. Inspired by [4], we grouped them into three types: exploration, making, and performance & refinement sessions. Exploration sessions focused on discussing ideas and focusing them on a small number of features. ...

Dialogic Design of Accessible Digital Musical Instruments: Investigating Performer Experience
  • Citing Conference Paper
  • June 2022