Is the model "rhythm, melody and harmony" (as seen in classical music theory) sufficient enough to grasp the language of electronic music composition?

The model of rhythm, melody and harmony is the basis of classical music-theoretical understanding. It is true that electronic music theory involves quite different concepts of sound synthesis and signal processing, along with basic encoding models for signals: the waveform (frequency, phase, amplitude). However, we can still recognize the applicability of classical music theory in electronic music composition.
The question, then, is whether, strictly within the scope of electronic music composition/writing, classical theory is sufficient. If not, what extra concepts emerge from the digital domain?
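
The waveform encoding mentioned above (frequency, phase, amplitude) can be sketched in a few lines. This is a minimal illustration, not part of the original question; the sample rate and parameter values are arbitrary choices.

```python
import numpy as np

# Minimal sketch of the waveform encoding model: a sampled sinusoid
# parameterized by amplitude, frequency and phase.
# (Illustrative only; sample rate and values are arbitrary.)

def sine_wave(amplitude, frequency_hz, phase_rad, duration_s=1.0, sample_rate=44100):
    """Return a sampled sinusoid x[n] = A * sin(2*pi*f*n/sr + phi)."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    return amplitude * np.sin(2 * np.pi * frequency_hz * t + phase_rad)

a440 = sine_wave(amplitude=0.5, frequency_hz=440.0, phase_rad=0.0)
```

Sound synthesis and signal processing build on exactly these three parameters, which is why they sit alongside rhythm, melody and harmony in the question.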


42 Answers  ·  1622 Views

All Answers (42)

  • Samrat Bee · Risk vs. Reward
    To answer your query in a simple way:
    Yes, electronic music and most of its popular sub-genres apply classical methods and rudiments to compose
    and give cohesive shape to the composition. Hence the use of terms like rhythm and melody, etc.
    Yet once outside these boundaries, as we step into ambience, glitch, noise and non-linear
    broken beats and sounds, the field of listening and grasping becomes rather open, or at times abstract.
    There is a whole group of artists, developers and sound buffs who have encouraged this stream or methodology
    for decades (even before computers and samplers arrived). Also, a whole group of sound artists who remain
    outside the commercial realms of music do not use terms like melody or harmony to explain their work.
  • Larry English · Georgia Institute of Technology
    It's easy to invent new sounds; you left that out. It is also part of all music, not just digital.

  • Paul Doornbusch · Australian College of Arts
    Well, you need to define "electronic music"... If you mean classical electronic music, rooted in musique concrète or elektronische Musik or something similar, as developed by Schaeffer, Varèse, Stockhausen, Koenig and Xenakis, then no, you do not need melody and rhythm, because this kind of electronic music expresses form through timbral manipulations and other techniques (density, texture, etc.). If by "electronic music" you mean some more traditional form of music produced electronically, then you may well be fine with rhythm, melody and harmony. "Electronic music" is such a broad term that you need to define what you are talking about.
  • Larry English · Georgia Institute of Technology
    Agree - the question is too vague

    even if it were precisely worded, i doubt there is any objective, provable answer to what you are trying to ask

  • Nadya Markovska · University of Southampton
    It will also be helpful to define this model of rhythm, melody and harmony. All three of these elements have independent weight in music, which means that one of the three always dominates in a piece of music. Think about harmony, for example. Also, do you consider the role of lyrics (poetic text)? If so, this changes the focus of a musical composition entirely.
  • Larry English · Georgia Institute of Technology
    "trying to sing or whistle some musique concrete and failing ;-)"

    how could you fail to do that :) ?

    "the structure of the compositions is not restricted to the normal musical rules of melody, harmony, rhythm, metre and so on.:" --wikipedia

  • Deleted
    You're probably going to talk more about sound envelopes, oscillators, and equalizers than about rhythm, melody and harmony. When you teach electronic music, you need to describe the "how" of electronics. This means diving into subjects like RTS chords, microphones, MIDI, and definitely MaxMSP.
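
The building blocks this answer names (oscillators, envelopes) can be illustrated with a small sketch. This is a hypothetical example, not drawn from the thread; all timing and level values are arbitrary.

```python
import numpy as np

# Illustrative sketch: an oscillator shaped by a piecewise-linear ADSR
# (attack-decay-sustain-release) envelope, the kind of building block the
# answer above refers to. All parameter values are arbitrary choices.

SR = 44100  # sample rate in Hz

def adsr(attack, decay, sustain_level, sustain_time, release):
    """Piecewise-linear ADSR envelope; times in seconds, level in [0, 1]."""
    a = np.linspace(0.0, 1.0, int(attack * SR), endpoint=False)
    d = np.linspace(1.0, sustain_level, int(decay * SR), endpoint=False)
    s = np.full(int(sustain_time * SR), sustain_level)
    r = np.linspace(sustain_level, 0.0, int(release * SR))
    return np.concatenate([a, d, s, r])

env = adsr(attack=0.01, decay=0.1, sustain_level=0.6, sustain_time=0.5, release=0.2)
t = np.arange(len(env)) / SR
note = env * np.sin(2 * np.pi * 220.0 * t)  # enveloped 220 Hz sine
```

Equalizers would be a filtering step applied to `note` afterwards; they are omitted here to keep the sketch minimal.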
  • Daniel Hernandez · Instituto de Educación Media Superior del DF
    I believe that model cannot be applied as such. Electronic music moves along other parameters, but I believe that first it is necessary to define what electronic music is. Grove's dictionary defines it as "Music in which electronic technology, now primarily computer-based, is used to access, generate, explore and configure sound materials, and in which loudspeakers are the prime medium of transmission (…). There are two main genres. Acousmatic music is intended for loudspeaker listening and exists only in recorded form (tape, compact disc, computer storage). In live electronic music the technology is used to generate, transform or trigger sounds (or a combination of these) in the act of performance; this may include generating sound with voices and traditional instruments, electro-acoustic instruments, or other devices and controls linked to computer-based systems. Both genres depend on loudspeaker transmission, and an electro-acoustic work can combine acousmatic and live elements." Taking this definition into account, we find that there are cases where these concepts can be applied.
    However, does all music have these three elements? Are they universal? If we analyze the musics of the world, we will find that not all of them support the application of these three concepts; only European music from the 16th to the 19th centuries, and part of the 20th, supports their strict application. It is an invention of that tradition.
  • Ravimal Bandara · University of Moratuwa
    As far as I know, rhythm, melody and harmony can all be specified in digital music. But even in classical music, rhythm, melody and harmony cannot describe everything in a piece of music. So I think the model that we currently use in electronic music should be extended.
    The major problem is that we don't have a clear model even in classical music. We use some notations and theories, but ultimately the music is output from a musical instrument or a vocalist. While performing, they add many expressions to the vocal and instrumental parts, and most of these fine expressions are still not modeled. The best example: we can feel the difference between an actual performance of a flute solo and the same music generated electronically. Precisely, the flute sound of each and every note may not have the same shape; it always changes according to the blowing direction, how hard the player blows, the starting and ending style of the blowing, and so on.
    Extending the current models to support these fine variations in classical music may help us reach perfect electronic music composition.
  • Tom Benjamin · The New South Wales Department of Education and Communities
    We have human-created electronic music and now machine-created electronic music. The latter now includes averaging of human tastes as part of composition. Recent experiments showed sounds could be 'shaped' by having humans rate them; after each round they became more like what humans take to be music. So there are cultural factors beyond 'rhythm, melody and harmony' that define what humans will accept and like as music. Even 'windshield wipers tappin' time' can be perceived as musical.
  • James Batcho · Kyungsung University
    I don't see how electronic music changes the definitions at all, just the applications. What you say about frequency, phase and amplitude (the physics of generating/manipulating waveforms) could also apply in classical music.

    There are two differences between electronic music and classical music as far as I see it:
    1. The instruments are different. Instead of a violin (or string section) for example, you have a computer or some kind of generated, filtered signal or signals. These signals can still present notes (or events) on the basis of rhythm, melody and harmony. They may not use one or two of those elements (harmony, in particular, is expendable), but the elements are used regardless of the instrumentation.
    2. The act of composition/performance/audience changed with experimental music and Cage, Wolff, et al. Still, rhythm/melody/harmony applied (and still apply), but the only thing that radically altered was an emphasis, namely an emphasis on rhythm. This is because music cannot escape duration, the time aspect of what sound is. This really shook things up, and one could make the argument that it isn't music anymore.

    And on that last point... One can get really far out there with stretching the definitions of what music is. This is the cultural/aesthetic value of postmodern challenges to forms... we begin to perceive what we thought we knew in different ways. But just because experimental music practitioners call themselves composers doesn't (necessarily!) make it true. They put sound into action, and in that regard I don't really consider them composers, personally, but enablers. Here, we have something more like sound art rather than music.

    But that's just me.
  • Muruganandan K. · Pondicherry University
    The precise factor for consideration here is the assessment of harmony, rhythm and melody. To some extent, the first two are quantitative, and hence can be produced electronically with perfection (perhaps too perfectly). But melody is more subjective, and it necessarily includes the nuances of human intervention/artistry. In machine-aided music, I think it emerges that these three classical theories/models are not sufficient for any clear measure/definition of music. It should then involve several cultural and other scales.

    The need is to extend as well as expand the classical models into new spectrums, without either sanctifying or condemning them.
  • Renaud Bougueng · University of Ottawa
    I want to thank everybody for your inputs. I have to admit the question is indeed quite vague :) But your inputs give me a starting point and quite a lot of information to work from. I will keep you updated. Thanks for the feedback!
  • Dilip Apte
    About 20 years ago I had written a full Pascal program which could play Indian classical ragas. However,
    today it should certainly be possible to generate even the Western classical music you have mentioned; what is required is a strong interdisciplinary education beforehand, especially in math, music and electronics. Musicology is quite feasible as a new science, but perhaps a specially designed 12-year course would be a prerequisite.
  • Fernando Nicknich · Universidade Federal do Paraná
    "just because experimental music practitioners call themselves composers doesn't (necessarily!) make it true. They put sound into action, and in that regard I don't really consider them composers, personally, but enablers."

    James Batcho, thanks for that! I totally agree. I just won't comment further since this would lead us to a whole new conversation here. Maybe in another topic.
  • Larry English · Georgia Institute of Technology
    Freya Vass-Rhee (Hochschule für Musik und Darstellende Kunst, Frankfurt) wrote, "@Larry English: Have a go at it:"

    ok i tried to listen
    now my mind is erased

    what was the original question?

    that stuff is like drinking battery acid

  • Y.S.Kumara Swamy · Dayananda Sagar Institutions
    The question is perfectly correct, because you want a model (a model is nearer to the truth, but not the truth). The answer is yes, because rhythm and harmony are periodic functions, but melody is a nonlinear function. For your clarification: music has no language, but language is a feeling of a particular area, processed century after century.

    A nonlinear function can be made linear, like a wild elephant can be converted into a city-trained elephant and vice versa. Use the Fourier transform for rhythm and harmony, and use the wavelet transform for melody, to retrieve the language of electronic music components. Because the above is monitored by man alone, the language can be identified.
    This is my personal feeling and way of answering.
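
The Fourier side of the suggestion above can be illustrated with a minimal sketch: recovering the dominant periodicity of a signal with an FFT. This is a toy example under arbitrary assumptions; the wavelet side would need a library such as PyWavelets and is omitted here.

```python
import numpy as np

# Toy illustration of the Fourier suggestion: recover the dominant
# frequency of a periodic (e.g. rhythmic/harmonic) signal via an FFT.
# Signal parameters are arbitrary choices for the example.

SR = 1000                                   # samples per second
t = np.arange(SR) / SR                      # one second of samples
signal = np.sin(2 * np.pi * 8.0 * t)        # 8 Hz periodic "pulse"

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / SR)
dominant_hz = freqs[np.argmax(spectrum)]    # the 8 Hz periodicity is recovered
```

Whether such decompositions amount to "retrieving the language" of the music is, of course, the contested claim in this thread.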
  • Kurt Gorman · University of Tennessee
    I played a progressive electronic piece for my appreciation class last semester, and the discussion focused on melody, timbre and texture.
  • Paul Doornbusch · Australian College of Arts
    I played Xenakis' Gendy 3 for my students yesterday and no one mentioned melody, rhythm or harmony in the discussion...
  • Gordon Harvey · Independent Researcher
    I would like some clarification of the question. What exactly do you mean by 'grasp the language'? The answer to your question may depend on how you want to use the language. Do you want to notate electronic music? Do you want to use theory to explain how electronic compositions are constructed? Do you want to teach how to compose electronic music through a knowledge of theory?
  • Renaud Bougueng · University of Ottawa
    Actually, I am looking for a theory to explain music composition in general (so also supporting electronic music): a way to give understanding and creative control over music composition. By that I mean a theory which would encompass and explain music more from the pragmatic perspective of a listener's appreciation/interpretation mechanism. I guess it will probably have to be quite abstract and have a minimalistic formalism. I believe such a system is possible to a certain extent, given observable patterns in listeners' appreciation.
  • Larry English · Georgia Institute of Technology
    What would your theory do?

    How would someone use it,
    who would use it,
    what question would it answer,
    what problem would it solve,
    what outputs and inputs would it have?

  • Renaud Bougueng · University of Ottawa
    It will be primarily intended for composers, to hold knowledge that bridges the gap between the creative idea and its intended realization in an effective and more "deterministic" way. It is about a more complete and "sound" understanding of how to create effective music. By effective music, I mean that the resulting piece fully satisfies the vision of the composer when the piece is observed in the experience of a listener. And I wish to make a distinction here between the experience of a music piece (human processing of the sound pressure) and the
    appreciation (judgment) of it (the mental work of the human evaluation of the piece based on individual factors and knowledge).
    And do not get me wrong: most great composers "naturally" understand this, in the sense that they have a confident and precise way of knowing how to get a piece the way they want. And in that, I am not excluding the conscious use of improvisation or random creative sub-processes in the creative process.
    But that knowledge is not always physically captured. I believe it would be interesting and beneficial from this point of view, to the point where theoretical understanding could be abstracted from it.

    One can say the actual knowledge of rhythm, melody, harmony, etc is what I am talking about. But as mentioned in some earlier comments, it is quite incomplete.

    Of course, such an approach has some identifiable limitations, primarily the evolution of music interpretation in society (given the subjective character of music interpretation).
  • Paul Doornbusch · Australian College of Arts
    "Actually, I am looking for a theory to explain music composition in general (so also supporting electronic music). A way to give understanding and creative control over music composition. By that I mean a theory which would encompass and explain music more from a pragmatic perspective"
    - well, this made me think that you should check out more of Koenig's writings and maybe Xenakis'...

    "of a listener appreciation/interpreation mechanism. I guess it will probably be necessarily quite abstract and have a minimalistic formalism. I believe such a system is possible to certain extent given observable patterns in listener's appreciation"
    - but this part looks more like you want a neurological approach...

    To me, you are asking the wrong question altogether. Why would classical instrumental theory have anything to do with electronic music, except in the most rudimentary way? How can theory used for music based on melody, harmony and rhythm be useful for music based on other concepts, such as timbral, density and other changes? And what use is theory anyway? Usually only to musicologists. Composers don't use "music theory"; they make systems and rules as they see fit to complete a piece. The creation of music theory is a reductive practice, and it is the *antithesis* of composition, which is an expansive activity.
  • David Chilashvili
    There's no such thing as "classical music theory". Music theory is universal for all styles; styles don't come with their own unique theories. But some styles don't require in-depth knowledge of the theory. The same concept applies to electronic music as well, as long as it is about music and not just sound. The electronic signal waveforms and other attributes are just the physical properties of the sound, which have nothing to do with music theory itself. Yes, music makes a sound, but not every sound makes music! Melody, harmony and rhythm are the universal components of music of all styles.
  • Gordon Harvey · Independent Researcher
    Right now I'm listening to 'Rembihnútur' by Sigur Ros, and I'm wondering how established music theory would satisfy Renaud's needs in discussing this piece. Although rhythm, melody and harmony occur in the song, the seemingly loose structure, the heavy use of samples, and the way effects like delay, reverb, compression and distortion are integral to the design of the piece seem to make music theory too limited to be of much use. I've heard plenty of other compositions where rhythm, melody and harmony play an even smaller role, although I'd still describe them as music.
  • To David Chilashvili

    "Yes, music makes a sound but not every sound makes music!"

    John Cage is mad at you right now
  • Paul Doornbusch · Australian College of Arts
    "Melody, harmony and rhythm are the universal components for music of all styles."

    That statement is simply untrue and demonstrably so. How much melody is there in Varese's "Ionisation"?

    How does this "universal music theory" treat Cage's "Water Music", William's Mix", or "Imaginary Landscapes"? Or Schaeffer's output? Varese's "Poeme Electronique"? Stockhausen's "Studie II"? Koenig's "Klangfiguren"? Berio's "Thema"? Xenakis' "S709", "Concrete PH" and "Gendy 3"? Dick Raaijmakers' "Canons" or "Funktions 1-4"? David Tudor's "Rainforests"? Tenney's "Analog #1"? Much of the output of Ivo Malec and Horatio Vaggione? My own works make NO use of harmony and they have been quite successful thanks. I could add over a hundred more pieces to this list, and there must be at least THOUSANDS more from highly respected composers, and thousands more after that.

    Also, it is demonstrable that some music uses "melody, harmony and rhythm" in vastly different ways, for example The Grateful Dead's "Anthem of the Sun" or Frank Zappa's "Uncle Meat".

    How does the "universal music theory" treat Punk music? Or Rap? or Thrash Metal?

    The essence is, of course, one's definition of music. If it is narrow, then perhaps you can say that all music has melody etc, but there are many composers of music who would disagree with this. The only definition of music which made sense to me is, "Something which someone wishes to perceive or interact with as music." Putting the onus of definition onto the listener seems to be the only sensible way to cover all aspects of human activity which is "music". Sorry, I do not want to hijack this thread, but I could not let the statement at the top of this post go uncorrected.
  • Scott Mc Laughlin · University of Leeds
    I agree with Paul above, and would like to add that the closest I think you could get to the universality you (Renaud) seem to be after is Information Theory, even better with the nuance of Gregory Bateson's cybernetics. He described a theory of perception where all perceptual knowledge is simply "news of difference": when an input goes over a certain threshold, we experience this "news of difference" and interpret it according to context. In music there are materials/sounds/timbres/gestures etc. moving through time, and there are differences that occur at every perceivable level of resolution: difference and repetition at varying levels. That's about as universal as you can get, for what it's worth.
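
Bateson's "news of difference" idea, as described in the answer above, can be caricatured in a few lines of code: an event registers only when the change between successive inputs crosses a threshold. This is a toy sketch with arbitrary values, not anything from Bateson's own formalism.

```python
# Toy sketch of threshold-based "news of difference": perception registers
# an event only when the change between successive inputs crosses a
# threshold. Purely illustrative; signal and threshold are arbitrary.

def news_of_difference(samples, threshold):
    """Return indices where |x[i] - x[i-1]| exceeds the threshold."""
    return [i for i in range(1, len(samples))
            if abs(samples[i] - samples[i - 1]) > threshold]

# A mostly-static signal with two perceivable changes:
signal = [0.0, 0.01, 0.02, 0.5, 0.51, 0.5, 1.2, 1.21]
print(news_of_difference(signal, threshold=0.1))  # -> [3, 6]
```

Repetition and difference at multiple time scales would then amount to running such a detector at several levels of resolution.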
  • Larry English · Georgia Institute of Technology
    How is that ("rhythm, melody and harmony") even a 'model', anyway?
    It leaves out about 1000 aspects of music.
  • Sarah Farmer · Birmingham City University
    Denis Smalley has some interesting ideas about a new taxonomy to describe the new sounds of much acousmatic music; he's written lots of stuff on spectromorphology - worth a look as a composer or performer/technician. I'd argue it can apply to some (particularly very contemporary) acoustic music too and is particularly focused on the movement of sound through space and time.
  • Tom Benjamin · The New South Wales Department of Education and Communities
    Also are we talkin' humans composing music using electronic tools or decision rules that allow electronic tools to compose music? And you can have a combination of both, whereby the electronic tool composes music played by electronic tools. One day electronic devices could even be the audience that decides whether it is music or noise. Certainly we have plenty of electronic composing software these days so theories of composition are probably relevant to that genre and can be described as decision rules since they are programmed as such. Electronic tools are commonly used to emulate traditional instruments but their unique new sounds have also gained a following.
  • I think that we must distinguish between "normal" music, from pre-historic times until the 20th century (from our continent), and "new" music, which is mostly electronic music. But think also about the music of, for instance, the Chinese, or music from Africa or Papua. Most of this music has its own theory, or perhaps no theory, as we understand it, at all. During my lessons in analysis at the university, my teacher gave an example: the analysis by Erhard Karkoschka of a piece by Dieter Schnebel, "MO-NO, Musik zum Lesen". It was completely different from what we did with the music of the classical and romantic composers: mostly complicated graphs, which I still have, a thick parcel, mostly in A3 format.
  • Danna Waldman · University of Victoria
    I would like to share some thoughts on this discussion.

    I am learning a lot more than I thought I would following academic discussions in this site. I am a successful (for Canada) pro musician and have been since I was a teenager, with a minimum of technical training and in-depth research into the academic aspects of the field. Fortunately, my natural abilities have made up enough ground to be able to work well in music, although I do wish that I had the deeper understandings of it to be able to be more intellectually involved.

    That said, I find this particular thread fascinating in the ways that it includes views on the popular reception of what is, to me, atonal, arrhythmic sound classified by a large group of people as music. In my experience with listeners, measured 3/4 and 4/4 rhythm and tonal melody within a western scale of up to seven sharps or flats have been of prime importance.

    Can elements of electronic music be included in western classic and traditional forms? Obviously. Would it "work"? Questionably.

    Could I integrate elements of electronic music into my compositions and performances? Would it work, or would it leave the audience cold? I do not think it would work, considering that, just as opera listeners have a connection not only to the music but also to the culture of opera, which increases their enjoyment of it, electronic music listeners also share their own common understandings.

    However, that is not to say that by comparison there is little or no sophistication to other kinds of music.

    It seems that what the considerations come down to is that there can be great comfort and communion in the socialised listening to music when the audience is not only an auditory component of the experience but also a social and cultural part, and that aspects of rhythm, melody, and key are relative to the expectations, experiences, and purposes of the listeners.

    Can it be simply said that what defines music is what listeners decide it is, considering that a performer without an audience can be compared to a tree falling in the forest?
  • Renaud Bougueng · University of Ottawa
    I understand. And I really like your input! Because of the different origins, culture and evolution of those musical domains, they indeed have developed different expression of music.
  • Renaud Bougueng · University of Ottawa
    Regarding your last question, I believe the expression of music cannot exist without a connection. We can argue that a performer without an audience is nothing; interestingly, he is at least his own first listener (which is a "self-connection"). I believe we experience music in terms of communication, where the producer of a musical piece projects a creative intention into his material, and the listener experiences his interpretation of the material based on his own culture, his understanding of the particular piece of music, and his personal preferences.
    Ultimately, the intention of the musician to have his audience "understand" his music can be achieved to the extent that he and his audience interpret the piece of music the same way (common understanding of the piece). This is made possible by culture. A common understanding of music allows the musician to project the idea that he wants to project, and he is only limited by his technical abilities (as a musician) in doing so.
    On the same note, I think, for example, that a cat, if it experiences some kind of music at all, probably does so differently from a human, and only God knows how it really experiences it, lol.
  • Paul Doornbusch · Australian College of Arts
    "Ultimately, the intention of the musician to have his audience "understand" his music can be achieved to the extent that he and its audience interpret the piece of music the same way (common understanding of the piece). This is made possible because of culture. Common understanding of music allows the musician to project the idea that he wants to project and he is only limited by his technical abilities (as a musician) to do so. "

    This is not necessarily so. There is no _need_ for understanding to have enjoyment. Perhaps it helps, perhaps not.

    A case in point: a rather excellent and famous British composer premiered a piece on the same night as I did, and her mother was in the audience. At drinks afterwards, the other composer complained that her mother liked my piece more than hers. The mother said about the end of my work, which is lots of whispering with ethereal radio sounds, "It reminded me of being in church and trying to hear what was being said in the confessional." I was well chuffed. The lady did not understand the various degrees of the "travesty" algorithm which had been used to distort the text (Lewis Carroll) from weird sounds to nonsense English. Nor did she understand the spectral mutations the short-wave radio sounds had been put through, and how these related to the vocal sounds. She just enjoyed it for the meaning she could find. I know that many listeners of the standard repertoire do not understand what sonata form or a 2/5/1 progression is, yet they still enjoy the music for what they can get out of it; their enjoyment is perfectly valid, even without understanding.
  • Renaud Bougueng · University of Ottawa
    I perfectly agree! I think we are on the same page. My statement assumed that the musician intends his work to be understood by others to the extent that he understands it himself. But this is not necessarily the case, and it is usually not the achieved purpose in practice (as your example illustrated). I think I am basically opening the discussion about what the intention of the musician is, or what it can be, given that art is a naturally subjective matter.
    By the way, your example is an interesting story.

Question Followers (32)