SIIDS 2020 – Sound, Image and Interaction Design Symposium
Design Strategies for a Hybrid Video Synthesizer
Sourya Sen
Independent Artist
ayruos@gmail.com
Nuno N. Correia
University of Greenwich, ITI/LARSyS
n.correia@greenwich.ac.uk
AUDIOVISUAL PERFORMANCE AND VIDEO SYNTHESIS
Live audiovisual performance can be defined as an event of sound and image manipulation (Carvalho and Lund,
2015). Thanks to rapid advances in technology, a variety of strategies for composing and performing such
material exist today, ranging from analogue hardware instruments and digital tools to commercially available
solutions and artist-built custom systems incorporating various media.
While any intermedia work can be termed 'audiovisual' (Carvalho and Lund, 2015), Chion (1994) argues that
combining audio and visual elements creates a third, 'audiovisual' element, which in turn creates "added
value". Grierson (2005) defines audiovisual practice as the "process of composing … which exploit added
value". According to Grierson (2005), the main strategies for composing audiovisual material are
synchronization of audio and visual elements, audiovisual congruence, and binary opposition. These
relationships between audio and visual material in an audiovisual composition come together to create added
value, and Grierson's strategies provide a valuable framework for analysis.
The electronic analogue video synthesizer and computer-based approaches to image generation have
historically contributed to building visual tools for audiovisual performance. Collopy (2014) argues that
"designers modelled video synthesizers on audio synthesizers…", and this can be seen in the instruments built
by Eric Siegel, Bill Etra and Steve Rutt, Dan Sandin, Stephen Beck, and others. Siegel described his work as the
"video equivalent of a music synthesizer" (as quoted in Dunn, 1992), and Beck's synthesizer incorporated
"principles of control voltage and artificial signal production from audio synthesizers" (Collopy, 2014). As a
result of this influence, these video synthesizers could be controlled in the same way as audio synthesizers: in a
modular fashion, with control voltages (CV) that were often compatible with each other.
Parallel to the analogue world of video synthesis, digital advancements were also underway. On the one hand,
digital circuitry was added to analogue synthesizers for greater control (Collopy, 2014); on the other,
computer-based systems for graphics generation were being developed. The role of computers in the arts would
only increase over time as they became smaller, more portable and more powerful, and they play an important
role in the artistic audiovisual performance world of today (Salter, 2010).
Digital platforms and computers would also fundamentally change the landscape of electronic music. Modular
synthesizers lost favour to smaller digital synthesizers and, eventually, computers (Trocco and Pinch, 2009).
However, from the mid-nineties onwards, a resurgence of modular instruments has taken place with the
invention and popularity of the Eurorack format (Fantinatto, 2013). While still relying on the concept of
discrete modules with control voltage compatibility, the Eurorack format has also incorporated modern digital
microcontrollers and offers a truly modern modular environment for sound synthesis (Connor, no date). This is,
of course, in conjunction with the continued use of the computer and digital audio workstations (DAWs).
When it comes to video synthesis, however, this hybrid approach has not caught on. While some companies are
designing and releasing video synthesizers in both standalone and modular formats, these rely on all-analogue
circuitry. Digital platforms have been incorporated into some physical instruments, but one of the drawbacks of
these instruments is limited control over parameters for integration in an audiovisual setting. The lack of a
modular yet modern physical video synthesis platform is important to note here, considering the influence that
the pioneering modular audio synthesizers had on video synthesizers in their infancy.
CONTEMPORARY PRACTICES
Even with the overarching characteristics of an instrument suited to live performance discussed by Franco et al.
(2004) and Levin (2000), an exhaustive list and comparison of individual tools would be fairly demanding. In
practice, two distinct classes of instruments and platforms can be identified: 1. existing and commercial
solutions used to build a performance, and 2. artist-developed tools and systems for performance.
When analysing tools against the classifications by Franco et al. (2004) and Levin (2000), artist-developed tools
provide the highest flexibility, as the artist has fundamental control over every audio and visual event and their
interactions. However, this comes at the expense of an initial investment of development time. Existing
solutions, including visual jockey (VJ) software and standalone software or hardware platforms, on the other
hand, are more immediate in their output, being easier to use at the cost of flexibility.
Magnusson's (2010) analysis of instrument design through the concept of affordances and constraints can be
extended to audiovisual platforms as well as purely visual tools. On the topic of commercial solutions versus
artist-developed tools, he writes:
Problems with the former lie in the conceptual and compositional constraints imposed upon users by
software tools that clearly define the scope of available musical expressions. It is for this reason that
many musicians, determined to fight the fossilization of music into stylistic boxes, often choose to work
with programming environments that allow for more extensive experimentation. However, problems
here include the practically infinite expressive scope of the environment, sometimes resulting in a
creative paralysis or in the frequent symptom of a musician-turned-engineer.
The same phenomenon is also seen in purely visual performance systems, both in hardware and software.
Striking the right balance between an extensive development framework and the constraints built into it often
leads to more immediate output. To this end, Magnusson (2010) introduces the idea of mapping as design
constraints. This is proposed through a model in which the artist interacts with a controller through a mapping
engine connected to the sound engine for performance and composition. Through this interaction and its sonic
and haptic feedback (and, additionally, visual feedback), the musician can continue their process of composition
and performance.
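To make this model concrete, the following minimal sketch (in Python, with all class, control and parameter names chosen purely for illustration) shows how a mapping engine can sit between a controller and a sound or visual engine, deliberately constraining which parameters a given control may reach.

```python
# A minimal sketch of Magnusson's (2010) mapping-as-constraints model, extended
# to a visual engine. All names here are illustrative assumptions, not an
# existing API.

class MappingEngine:
    """Translates raw controller values into engine parameters,
    constraining the space of possible gestures."""

    def __init__(self):
        # Each mapping is a (target parameter, transfer function) pair.
        self.mappings = {}

    def assign(self, control_id, parameter, transfer=lambda x: x):
        self.mappings[control_id] = (parameter, transfer)

    def route(self, control_id, raw_value, engine):
        """Apply the constrained mapping and push the result to the engine."""
        if control_id in self.mappings:
            parameter, transfer = self.mappings[control_id]
            engine.set(parameter, transfer(raw_value))


class VisualEngine:
    """Stand-in for a sound or visual engine with exposed parameters."""

    def __init__(self):
        self.params = {"brightness": 0.0, "feedback": 0.0}

    def set(self, parameter, value):
        self.params[parameter] = value  # rendering itself is out of scope here


# Example: a knob (0..1) is constrained to drive only 'feedback', scaled down.
engine = VisualEngine()
mapper = MappingEngine()
mapper.assign("knob_1", "feedback", transfer=lambda x: 0.8 * x)
mapper.route("knob_1", 0.5, engine)   # engine.params["feedback"] == 0.4
```

The constraint lies in the mapping table itself: a control only reaches the parameters, and value ranges, that the designer or performer has assigned to it.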
PROPOSED DESIGN FOR A HYBRID VIDEO SYNTHESIZER
Based on the discussion above, the authors propose the following strategies for developing a modern hybrid
video synthesizer that is physical, tactile and works in a modular fashion.
Firstly, it would have to be compatible with the Eurorack format. As discussed earlier, modular video synthesis
platforms of the past have a rich history of inheriting from the modular sound synthesis platforms of their time,
and following a similar strategy would have several advantages. With compatible voltage sources, it would be
possible to drive both audio and video together, fulfilling the characteristics identified by Franco et al. (2004)
and Levin (2000) and corresponding to the strategies identified by Grierson (2005) for exploiting added value.
Working with established standards and having access to a variety of control voltage sources would be further
advantages.
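As an illustration of how such compatibility could be exploited, the sketch below shows a single Eurorack-style control voltage modulating an audio parameter and a visual parameter at the same time. The ±5 V modulation range reflects common Eurorack practice; the parameter names and scalings are assumptions made only for the example.

```python
# A hedged sketch of one shared control voltage driving sound and image
# together, in line with the synchronization strategies identified by
# Grierson (2005). Parameter names and ranges are illustrative assumptions.

def normalise_cv(volts, v_min=-5.0, v_max=5.0):
    """Map a bipolar CV reading (commonly +/-5 V in Eurorack) to 0..1."""
    volts = max(v_min, min(v_max, volts))
    return (volts - v_min) / (v_max - v_min)

def apply_shared_cv(volts, audio_params, video_params):
    """One LFO or envelope output modulates both engines at once."""
    level = normalise_cv(volts)
    audio_params["filter_cutoff"] = 200.0 + level * 8000.0      # Hz, illustrative
    video_params["oscillator_frequency"] = 0.5 + level * 30.0   # arbitrary visual units
    return audio_params, video_params

audio, video = apply_shared_cv(2.5, {}, {})   # 2.5 V -> level 0.75
```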
Secondly, it should be both easy to use and extensively customizable towards generating complexity. Following
the idea of mapping as design constraints (Magnusson, 2010), the instrument would map both physical inputs
and control voltage inputs to visual parameters, while retaining the advantages of artist-developed tools by
being customizable enough to go under the hood and fundamentally alter the programming paradigms.
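One possible realization of this second strategy, borrowing the offset-plus-attenuated-CV pattern from audio modular practice, is sketched below; the combination rule itself is exposed so that a user can replace it. Names and ranges are illustrative assumptions rather than a description of an existing implementation.

```python
# A panel knob sets a base offset for a visual parameter and an attenuator
# scales the patched CV around it -- a pattern borrowed from audio modulars.
# All names and ranges are hypothetical.

def combine_sources(knob, cv, attenuator=1.0):
    """knob and cv are normalised to 0..1; the result is clamped to 0..1."""
    value = knob + attenuator * (cv - 0.5)   # CV modulates around the knob offset
    return max(0.0, min(1.0, value))

# Users who want to go 'under the hood' can replace the combination rule
# itself, e.g. multiplying the sources instead of summing them:
custom_rule = lambda knob, cv: max(0.0, min(1.0, knob * cv))

print(combine_sources(0.6, 0.75, attenuator=0.5))  # 0.725
print(custom_rule(0.6, 0.75))                      # 0.45
```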
Finally, it would need to be compatible with contemporary display technologies to easily integrate with modern
studios and performance venues.
CONCLUSIONS
A prototype instrument following the strategies discussed above has already been developed, implemented and
used in live performances by the first author¹. The current version of the instrument, built around a Raspberry Pi
with custom hardware and software, allows both direct plug-and-play capability for visuals and CV control over
parameters using Eurorack standards. While it offers immediate output and follows a comprehensive mapping
engine across panel controls, CV inputs and parameters, it is also extensible for more complex operations on the
software side. Also supporting HDMI output, the instrument fulfills the requirements discussed in this paper. It
is already available as open source for those interested in implementing their own versions, and future iterations
are planned.

¹ https://github.com/sourya-sen/rPi_synth
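As an indication of how such an instrument can be structured, the following sketch shows one plausible CV-to-parameter path on a Raspberry Pi, assuming an MCP3008 ADC on the SPI bus and input conditioning that scales Eurorack-level voltages into the ADC's range. It illustrates the idea only; the implementation in the repository linked above may differ.

```python
# A hypothetical CV input path for a Raspberry Pi video synthesizer.
# Assumes: an MCP3008 10-bit ADC on SPI bus 0, and analogue conditioning that
# brings Eurorack CV into the ADC's 0-3.3 V range. Not the repository's code.

import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                 # SPI bus 0, chip-select 0
spi.max_speed_hz = 1_000_000

def read_cv(channel):
    """Return a 0..1 value from one conditioned CV input (MCP3008, 10-bit)."""
    reply = spi.xfer2([1, (8 + channel) << 4, 0])
    raw = ((reply[1] & 3) << 8) + reply[2]
    return raw / 1023.0

def update_visual_parameters(params):
    """Map two CV channels onto hypothetical render parameters each frame."""
    params["u_speed"] = read_cv(0) * 4.0        # illustrative scaling
    params["u_feedback"] = read_cv(1)
    return params
```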
REFERENCES
Carvalho, A. and Lund, C. (eds) (2015) The Audiovisual Breakthrough. Berlin: Fluctuating Images.
Chion, M. (1994) Audio-Vision: Sound on Screen. New York: Columbia University Press.
Collopy, P. S. (2014) ‘Video Synthesizers: From Analog Computing to Digital Art’, IEEE Annals of the History
of Computing, 36(4), pp. 74–86. doi: 10.1109/MAHC.2014.62.
Connor, N. O. (no date) ‘Reconnections: Electroacoustic Music & Modular Synthesis Revival’.
Dunn, D. (ed.) (1992) Eigenwelt der Apparate-Welt: Pioneers of Electronic Art. The Vasulkas.
Fantinatto, R. (2013) I Dream of Wires. DVD. Produced by Jason Amm.
Franco, E., Griffith, N. J. L. and Fernström, M. (2004) 'Issues for designing a flexible expressive audiovisual
system for real-time performance & composition', in Proceedings of the International Conference on New
Interfaces for Musical Expression (NIME).
Grierson, M. (2005) Audiovisual Composition. University of Kent.
Levin, G. (2000) Painterly Interfaces for Audiovisual Performance. M.Sc. Dissertation. Massachusetts Institute
of Technology.
Magnusson, T. (2010) ‘Designing Constraints: Composing and Performing with Digital Musical Systems’,
Computer Music Journal, 34(4), pp. 62–73. doi: 10.1162/COMJ_a_00026.
Salter, C. (2010) Entangled: Technology and the Transformation of Performance. Cambridge, MA: MIT Press.
Trocco, F. and Pinch, T. J. (2009) Analog Days: The Invention and Impact of the Moog Synthesizer. Cambridge,
MA: Harvard University Press.