Figure 4
Source publication
Max Mathews was last interviewed for Computer Music Journal in 1980 in an article by Curtis Roads. The present interview took place at Max Mathews's home in San Francisco, California, in late May 2008. (See Figure 1.) This project was an interesting one, as I had the opportunity to stay at his home and conduct the interview, which I video-recorded...
Context in source publication
Context 1
... yes. She was a close associate of mine; we did a number of research projects together (see Figure 4). She studied at [University of California] Berkeley and had an advanced degree in statistics, but she too liked the computer as a device, and she was a very good programmer, and she could use this [ability] in her research. ...
Citations
... There is a clear distinction between hearing an action or process and hearing the result of an action or process. It seems more obvious if I put this in the form: you do not always hear a cause, you hear its effect. With this in mind I have always doubted the very limited debate about 'hearing algorithms' or indeed any generative procedure whatsoever. ...
... It played a single-line tune. It was followed later by different enhanced versions [14]. As mentioned in [10] [15], one of the earliest examples of music composed using computers was the Illiac Suite for String Quartet by Hiller and Isaacson (1958). ...
The aim of this paper is to automatically compose new, pleasing music from randomly generated notes without human intervention. To achieve this goal, a Genetic Algorithm was implemented to generate random notes. A Neural Network was trained on a set of melodies to learn the regularity of their patterns, and it was then used as a fitness evaluator for the music generated by the Genetic Algorithm. Four Genetic Algorithms (using different combinations of tournament and roulette-wheel selection with one-point and two-point crossover) were compared to determine which is most suitable for music composition. The experiments show that tournament selection with two-point crossover produces better musical patterns than the other combinations by 57%, and that the generated music was good and the results promising. For evaluation, 10 music experts were asked to listen to and rate four samples of the generated music, two of which the Neural Network had rated high and two low. Comparing the experts' judgments with the Neural Network's gives an error rate of 16.7% and an accuracy of 83.3% for the Neural Network.
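To make the abstract's pipeline concrete, here is a minimal Python sketch of the winning configuration (tournament selection with two-point crossover). Everything here is illustrative: the note range, melody length, population size, and mutation rate are assumptions, and the trained Neural Network fitness evaluator is replaced by a stand-in `fitness` stub, since the abstract does not specify the network.

```python
import random

# Hypothetical sketch of the GA loop described in the abstract:
# tournament selection plus two-point crossover over fixed-length
# note sequences. All parameters below are assumed for illustration.

NOTE_RANGE = range(60, 72)   # assumed MIDI pitches C4..B4
MELODY_LEN = 16
POP_SIZE = 50

def random_melody():
    return [random.choice(NOTE_RANGE) for _ in range(MELODY_LEN)]

def fitness(melody):
    # Placeholder: the paper scores melodies with a neural network
    # trained on a melody corpus; this stub rewards stepwise motion.
    return -sum(abs(a - b) for a, b in zip(melody, melody[1:]))

def tournament(pop, k=3):
    # Pick k random candidates, keep the fittest.
    return max(random.sample(pop, k), key=fitness)

def two_point_crossover(a, b):
    # Swap the middle segment between two cut points.
    i, j = sorted(random.sample(range(1, MELODY_LEN), 2))
    return a[:i] + b[i:j] + a[j:]

def mutate(melody, rate=0.05):
    return [random.choice(NOTE_RANGE) if random.random() < rate else n
            for n in melody]

population = [random_melody() for _ in range(POP_SIZE)]
for _ in range(100):  # generations
    population = [mutate(two_point_crossover(tournament(population),
                                             tournament(population)))
                  for _ in range(POP_SIZE)]
print(max(population, key=fitness))
```

In the paper's setup, `fitness` would instead return the trained network's score for the candidate melody.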
... Pierce was a music enthusiast who realised that understanding how much information was in speech, and music, was useful for telephony research. Mathews, after going to a concert with Pierce, started writing the Music I program to perform music on the computer through digital waveform synthesis (Park 2009). The first people to use the program were engineers. ...
This article documents the early experiments in both Australia and England to make a computer play music. The experiments in England with the Ferranti Mark 1 and the Pilot ACE (practically undocumented at the time this article was written) and those in Australia with CSIRAC (Council for Scientific and Industrial Research Automatic Computer) are the oldest known examples of using a computer to play music. Significantly, they occurred some six years before the experiments at Bell Labs in the USA, and the computers played music in real time. Although these developments did not directly lead to later, highly significant work such as that done at Bell Labs under the direction of Max Mathews, these forward-thinking efforts in England and Australia show that computing machines have been used musically since their earliest development.
... While some may assume that the unit-generator concept was derived from, or inspired by, modular synthesizers, Mathews stated in an interview that the unit-generator concept and the modular synthesizers were developed roughly simultaneously (Roads and Mathews 1980). In fact, as noted by Park in another interview with Mathews (Park 2009), MUSIC-III, which introduced the unit-generator concept, was developed in 1960, clearly before Ketoff developed the Synket synthesizer in the early 1960s and before Buchla and Moog "independently developed the first voltage-controlled modular synthesizers" in 1964 (Pinch 2001). It is therefore fair to consider that the unit-generator concept was invented at least simultaneously with, and independently of, the modular synthesizers. (One can argue whether the Synket counts as an early example of the modular synthesizer, as it was not fully voltage-controlled like the Moog synthesizer; it was, however, at least a patchable synthesizer, though it was controlled mainly by many buttons and its patching flexibility was quite limited, according to Luigi Pizzaleo in an email exchange dated September 3, 2015.) ...
This chapter briefly overviews the history of computer music languages and related systems, focusing mainly on those developed in the research community (hence, less attention is paid to commercial computer music software such as digital audio workstation (DAW) software or sound editors). As is often seen in other surveys of computer music history, the historical development of computer music languages and systems is divided into several overlapping eras in this chapter. The division between the eras of non-real-time and real-time computer music systems is particularly emphasized, as it had a significant impact both on the creative practices of artists and musicians and on the design of computer music languages and systems by researchers and engineers.
While the evolution of computer music languages has been largely supported by advances in computer technology and by related research in computer science and audio engineering, it should also be noted that issues arising in creative practice have significantly influenced the development of computer music languages and systems throughout their history. Alongside the technical advances, the synergy between technology and creativity in computer music is highlighted where appropriate in this chapter, as such a perspective can help us reconsider the relationship between computer technology and artistic creativity in our time.
While the early experiments in Australia to get a computer to play music are relatively well documented, some previously unknown similar early experiments in England have recently been unearthed. The Ferranti Mark 1 played popular tunes, much like Australia's CSIRAC, and the UK Pilot ACE computer could algorithmically create sounds. This paper examines the activities in Australia and England in the early 1950s to make a computer play music, which occurred some six years before the celebrated activities in the USA and which produced music in real time. The experiments in England and Australia are significant, despite not leading to important further activity such as occurred at Bell Labs under the forward-thinking guidance of Max Mathews. Comparing these earliest developments in Australia and England shows a history of using computers to create music from the earliest existence of those machines, and shows how collaboration across disciplines was the key to success for the Bell Labs team in the USA.
... Computers are not new to musical composition (Laske, 1981; Park, 2009). The notion of computer musical composition has been given two different but complementary meanings. ...
Automatic melodic harmonization tackles the assignment of harmonic content (musical chords) over a given melody. Probabilistic approaches to melodic harmonization utilize statistical information derived from a training dataset, producing harmonies that encapsulate some of that dataset's harmonic characteristics; the training data usually consist of annotated symbolic musical notation. Beyond the obvious musicological interest, different machine learning approaches and algorithms have been proposed for this task, reinforcing the challenge of efficient and effective music information utilisation in probabilistic systems. The aim of this chapter is therefore to provide an overview of this research domain and to shed light on the subtasks that have arisen and evolved within it. Finally, new trends and future directions are discussed, along with the challenges that remain unsolved.
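As a concrete illustration of the probabilistic formulation this chapter surveys, the sketch below assigns chords to a toy melody with a hidden Markov model (chords as hidden states, melody pitch classes as observations) decoded by the Viterbi algorithm. The chord vocabulary, transition matrix, and emission probabilities are invented toy values, not those of any cited system; real systems estimate these quantities from an annotated training corpus.

```python
import numpy as np

# Toy HMM harmonizer: hidden states are chords, observations are
# melody pitch classes. All probabilities below are assumed values.

chords = ["C", "F", "G"]
trans = np.array([[0.5, 0.25, 0.25],   # P(next chord | current chord)
                  [0.3, 0.4,  0.3 ],
                  [0.6, 0.2,  0.2 ]])
start = np.array([0.6, 0.2, 0.2])      # initial chord distribution
chord_tones = {"C": {0, 4, 7}, "F": {5, 9, 0}, "G": {7, 11, 2}}

def emit(chord, pc):
    # Chord tones share 0.75 probability mass; the other nine
    # pitch classes share the remaining 0.25 (sums to 1).
    return 0.25 if pc in chord_tones[chord] else 0.25 / 9

def viterbi(melody_pcs):
    n, k = len(melody_pcs), len(chords)
    logp = np.full((n, k), -np.inf)    # best log-prob ending in chord j
    back = np.zeros((n, k), dtype=int) # backpointers for the best path
    for j in range(k):
        logp[0, j] = np.log(start[j]) + np.log(emit(chords[j], melody_pcs[0]))
    for t in range(1, n):
        for j in range(k):
            scores = logp[t - 1] + np.log(trans[:, j])
            back[t, j] = int(np.argmax(scores))
            logp[t, j] = scores[back[t, j]] + np.log(emit(chords[j], melody_pcs[t]))
    path = [int(np.argmax(logp[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [chords[i] for i in reversed(path)]

# Harmonize a toy melody given as pitch classes (C D E F G -> 0 2 4 5 7).
print(viterbi([0, 2, 4, 5, 7]))
```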
... The question which is going to dominate the future is now understanding what kinds of sounds we want to produce, rather than the means of usefully generating these sounds musically. [14] Here the notion that the computer is capable of producing "any sound you can imagine", echoing Varèse's desire for "undreamed-of timbres" in "any combination I choose to impose", continues a rhetoric of control and domination that I want to question for a moment. ...
... Although any perceivable sound can be synthesized by a digital computer [1], most sounds are generally considered not to be musically interesting, and many are even unpleasant to hear [2]. Hence, it can be argued that new music composers and performers face a complex control problem: out of the unimaginably large wealth of possible sounds, they must somehow specify or select the sounds they desire. ...
... Because the phase, as described by (3), evolves independently of the amplitude (see (2)), the output position of the Large oscillator tends to be approximately sinusoidal, even if the amplitude is changing relatively quickly. This characteristic is especially useful for our musical application as explained in Appendix C. In contrast, many other commonly employed neural oscillator models have a complex interaction between the magnitude and phase [19,25,28,29]. ...
A study on force-feedback interaction with a model of a neural oscillator provides insight into enhanced human-robot interactions for controlling musical sound. We provide differential equations and discrete-time computable equations for the core oscillator model developed by Edward Large for simulating rhythm perception. Using a mechanical analog parameterization, we derive a force-feedback model structure that enables a human to share control of a virtual percussion instrument with a "robotic" neural oscillator. A formal human subject test indicated that strong coupling (STRNG) between the force-feedback device and the neural oscillator provided subjects with the best control. Overall, the human subjects predominantly found the interaction to be "enjoyable" and "fun" or "entertaining." However, there were indications that some subjects preferred a medium-strength coupling (MED), presumably because they were unaccustomed to such strong force-feedback interaction with an external agent. With related models, test subjects performed better when they could synchronize their input in phase with a dominant sensory feedback modality. In contrast, subjects tended to perform worse when an optimal strategy was to move the force-feedback device with a 90° phase lag. Our results suggest an extension of dynamic pattern theory to force-feedback tasks. In closing, we provide an overview of how a similar force-feedback scenario could be used in a more complex musical robotics setting.
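The citing context above highlights one property of the oscillator model: written in polar form, the phase advances independently of the amplitude, so the output stays near-sinusoidal even while the amplitude is still settling. The sketch below illustrates that property with a generic Hopf-type oscillator integrated by forward Euler; the parameter values and the cosine coupling term for external (e.g. haptic) forcing are assumptions for illustration, not the paper's equations (2) and (3).

```python
import math

# Generic Hopf-type oscillator in polar form (assumed parameters):
#   dr/dt   = alpha*r + beta*r**3 + forcing coupling
#   dphi/dt = omega                (independent of r)

alpha, beta = 1.0, -1.0     # amplitude growth / saturation (assumed)
omega = 2 * math.pi * 2.0   # natural frequency, 2 Hz (assumed)
dt = 0.001                  # forward-Euler integration step

r, phi = 0.1, 0.0
for step in range(5000):    # simulate 5 seconds
    force = 0.0             # external (e.g. haptic) input would enter here
    r += dt * (alpha * r + beta * r**3 + force * math.cos(phi))
    # The phase update does not depend on r, so the output
    # x = r*cos(phi) remains near-sinusoidal as r settles.
    phi += dt * omega
    x = r * math.cos(phi)

# With these parameters the amplitude settles near sqrt(-alpha/beta) = 1.
print(f"settled amplitude ~ {r:.3f}")
```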