Improvisation in Stockhausen’s Solo

Years ago I wrote a paper on a piece by Stockhausen called Solo. The paper itself was long and boring, so I’ll spare you a reproduction of it here. I recently suffered through a rereading of it, though, and discovered some interesting thoughts about improvisation that are worth exploring a bit. One of the most interesting things about Solo is the methodology of improvisation it asks of the player, which I believe is a very rare kind of improvisation.

It’s a bit difficult to describe Solo briefly since it is such a complex work. Solo is an electroacoustic work for a single player and feedback delay. The delay times are much longer than those we usually associate with delay as an effect, which are typically measured in milliseconds. The delay in Solo instead uses times of multiple seconds, so a whole phrase, or several phrases, can be repeated by the delay after the performer has played them.
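
For the technically inclined, the mechanism is nothing more than one very long feedback delay line. Here is a minimal sketch in Python; the delay time, feedback gain, and sample rate are illustrative assumptions on my part, not values taken from the score.

```python
import numpy as np

def feedback_delay(dry, sample_rate=44100, delay_seconds=12.0, feedback=0.6):
    """One long feedback delay line, like a tape loop.

    The parameter values are illustrative, not taken from Solo's score.
    A slow but clear per-sample loop.
    """
    d = int(delay_seconds * sample_rate)
    out = np.zeros(len(dry) + 4 * d)  # leave room for a few echoes to ring out
    out[:len(dry)] = dry
    for n in range(d, len(out)):
        # Whatever sounded delay_seconds ago re-enters, attenuated, so
        # whole phrases return as gradually fading repetitions.
        out[n] += feedback * out[n - d]
    return out
```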

[Image: a page of notation from Solo]

The notation consists of six form schemes and six pages of notated music. An example of a page of notation is shown above, and a form scheme is shown below. The player is instructed to letter the pages of notation A-F and place them in order. Since the lettering is left up to the player, the order of the pages ends up being more or less arbitrary. Stockhausen then refers the player to different divisions of the material on each page: pages, systems, parts, and elements. Pages and systems have the same definitions that they would in other notated music. Stockhausen defines a “part” as any group of notes contained within a pair of bar lines. This is not called a “bar” or a “measure” simply because the printed music contains both proportional and mensural notation. An “element” is any single normally printed note, any grace note by itself, any group of grace notes, or any single grace note and its associated normally printed note.

[Image: one of the six form schemes]

The form schemes represent the way in which the player will interpret the notated music. For a performance, only one form scheme is selected to be played. Each form scheme is broken into smaller sections made up of cycles and periods. A cycle is the group of periods between two letters as determined in the form scheme. Each form scheme has six cycles, which are lettered to correspond generally to the similarly lettered page of notation. So, cycle A is the first cycle of periods on all of the form schemes and will generally contain material from page A of the notation. Periods are smaller groupings within cycles which have time values in seconds assigned to them based on the delay time of the electronics for the corresponding cycle. So, as we can see in the image taken from form scheme II below, in cycle A there are nine periods of twelve seconds each; within cycle B there are seven periods of twenty-four seconds each, and so on.
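
For readers who think better in code, the nesting of scheme, cycle, and period can be sketched as a simple data structure. This is a hypothetical encoding; only the values for cycles A and B are taken from form scheme II as described above, and the remaining cycles are omitted.

```python
from dataclasses import dataclass

@dataclass
class Cycle:
    letter: str            # corresponds, roughly, to a page of notation
    periods: int           # number of periods in the cycle
    period_seconds: float  # length of each period = the delay time in that cycle

# The first two cycles of form scheme II, as described above;
# cycles C-F exist in the score but are omitted here.
form_scheme_ii = [
    Cycle("A", periods=9, period_seconds=12.0),
    Cycle("B", periods=7, period_seconds=24.0),
]

total = sum(c.periods * c.period_seconds for c in form_scheme_ii)
print(f"Cycles A and B alone account for {total:.0f} seconds")  # 108 + 168 = 276
```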

[Image: the top of form scheme II, showing the periods of cycles A and B]

A performance of Solo is never a “start at measure one and play to the end” kind of endeavor. Rather, the player is at liberty to select portions of each page to play in a given cycle. Below each cycle there is a group of symbols that tells the player relatively loosely how they should perform the music for that cycle. Stockhausen calls these “what,” “where,” and “how” symbols. A “what” symbol tells a player what size of gesture they should select (systems, parts, or elements); a “where” symbol tells a player from where they should select these gestures (from the current page, the current and the following page, the current and the previous page, or all three); a “how” symbol tells the player how the gestures they select should relate to each other (different, the same, or opposite). The criteria for the “how” symbol are up to the player. So, the player might decide that the “how” symbol relates to pitch. In this case, the “same” symbol would indicate that the gestures within a cycle should all have more or less the same pitch range.
Two additional symbols indicate the length of time a player may pause between periods, and how the player should attempt to relate to the electronics part within a cycle.
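
One hypothetical way to model these instructions, for readers who find code clearer than prose, is as a small set of enumerations. The wraparound at the first and last pages is my own assumption for illustration, not something the score specifies here.

```python
from enum import Enum

class What(Enum):          # the size of gesture to select
    SYSTEM = "system"
    PART = "part"
    ELEMENT = "element"

class Where(Enum):         # page offsets relative to the current page
    CURRENT = (0,)
    CURRENT_AND_NEXT = (0, 1)
    CURRENT_AND_PREVIOUS = (-1, 0)
    ALL_THREE = (-1, 0, 1)

class How(Enum):           # how the selected gestures should relate
    SAME = "same"
    DIFFERENT = "different"
    OPPOSITE = "opposite"

def eligible_pages(current_page, where, n_pages=6):
    """Pages (0 = A ... 5 = F) a player may draw from in one cycle.

    Wrapping past the first or last page is an assumption made
    purely for illustration.
    """
    return sorted({(current_page + offset) % n_pages for offset in where.value})

print(eligible_pages(current_page=1, where=Where.ALL_THREE))  # [0, 1, 2] -> pages A, B, C
```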

The image below is from cycle B of form scheme V. These particular symbols indicate that, within this cycle, the player must draw musical material made up of parts, from pages A, B, and C, which are either the same or different, with medium pauses following each part, and entrances staggered so as to create a polyphonic texture with the electronics.

[Image: the symbols for cycle B of form scheme V]

So, in actual performance, the player might play a part from page B, then one from page C, another from page A, another from page B, and so on until they had filled a 45-second period of the cycle. Then the player can take a medium pause before continuing the same process, trying to create a polyphonic texture as the electronics play back what they played during the previous period.
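
A toy simulation of that process might look like the sketch below. The part durations are invented for illustration, and the real selections are of course musical decisions rather than random ones.

```python
import random

# Invented durations (in seconds) for a handful of "parts" on each page.
parts = {"A": [4.0, 7.5, 3.0], "B": [6.0, 5.5], "C": [8.0, 4.5]}

def fill_period(period_seconds=45.0, pages=("A", "B", "C")):
    """Chain chosen parts together until one period of the cycle is filled."""
    timeline, elapsed = [], 0.0
    while elapsed < period_seconds:
        page = random.choice(pages)
        duration = random.choice(parts[page])
        timeline.append((page, duration))
        elapsed += duration
    # Everything played here returns via the delay during the next period.
    return timeline

print(fill_period())
```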

Whew! Remember when I said it was difficult to describe this piece simply? There’s actually quite a bit more to the performance of the piece (for example, we haven’t really discussed the electronics at all!), but I think that’s all you’ll need to know for now.

Solo represents an excellent example of what I would call “composed improvisation.” The term itself seems like an oxymoron, but the concept is actually much more common than one might think. For example, virtually all ‘traditional’ jazz is composed improvisation. Jazz players are generally given, or have learned, some kind of chart or lead sheet which contains the chord changes and melody of a piece, and then improvise based on that information.

In fact, it’s fairly common for this same kind of controlled improvisation based on notation to occur in contemporary classical music as well. What I have seen most commonly, and have used the most in my own music, is a section wherein only pitches are notated and everything else is left to the player to decide. An example from my music is shown below. Note that the given pitches can be used in any order, in any octave, with any rhythm, dynamic, articulation and so on.

[Image: an excerpt from my music in which only pitches are notated]

These are by no means the only ways that notated improvisation can occur. There are probably as many different ways to utilize these kinds of ideas as there are composers using them. But Solo is actually an example of something very rare in the world of composed improvisation. To work out what that is, we have to take a quick step back.

Music is fundamentally organized into a series of impulses. A note begins on an impulse. That note can be combined with other notes into a larger phrase, which has its own larger impulse. That phrase is then grouped with other phrases to form a section, which has its own, still larger impulse. Sections can be grouped into a large form which we might call a movement, or a complete work, each of which also has its own much larger impulse. Sometimes people refer to this concept of grouping things into larger and larger impulses as “the big beats” of music. I’m deliberately avoiding the word “beat” here because it can be misleading.

This concept is actually alluded to in a TED talk by Benjamin Zander, which you can watch below, and is stated more scientifically by Stockhausen himself in an essay that appears in Perspectives on Contemporary Music Theory, edited by Benjamin Boretz and Edward T. Cone.

Composed improvisation can generally be organized into three levels, based on which level of impulse the player is allowed to improvise and which levels have been predetermined. In the first level, the form and the phrases are both predetermined, but the specific notes which are played are up to the performer. In the second level, the form and the specific notes are determined, but the phrases constructed out of those notes are up to the performer. In the final level, specific notes and phrases are determined, but the form of the piece is left to the performer.

So, the two forms of composed improvisation that we have discussed thus far are both level-one improvisation. Consider jazz improvisation: the form of the piece and the phrase structure are already given based on the notation within the chart, but exactly which notes are played when is up to the player to decide. Specific notes are undetermined, but the larger impulses are predetermined.

An example of third-level improvisation is the “open form” music found in some of the works of Pierre Boulez, as well as numerous works by Stockhausen (Zyklus and Licht, for example). In this kind of improvisation, while entire sections of notes and phrases are specifically notated, the order in which those sections occur is determined by the performers.

Solo is a rare example of level-two improvisation, in which specific notes and gestures are determined, as is the overarching form, but the way those notes and gestures are organized into phrases is left to the player. I have not yet encountered another piece of composed improvised music that contains large-scale, level-two improvisation, even among Stockhausen’s works. What’s more, the performer’s understanding that this work functions as level-two improvisation is absolutely imperative if a performance is to faithfully represent Stockhausen’s intentions for Solo.

For those interested in hearing Solo, here is a recording of me and horn player Briay Condit playing this piece.

The fact that this work is, as far as I am aware, unique in the world of improvised music makes it more meaningful to the canon, and likely explains why the work is so notationally involved and difficult for performers to meaningfully understand. And, frankly, this only begins to deal with the things about this work that are fascinating and misunderstood, which probably explains why my previous paper was so long and boring… perhaps more on this another day.

For more from Stephen Bailey, you can visit his website here.

“Classical” music: Ah, you’re Indians!

Dinner parties with strangers are notoriously dangerous ground for me, and, I think, for most composers. Inevitably, as the group deals with the appropriate small talk, someone asks “what kind of music do you write?” This question seems innocuous to them; they really only mean it as a way of getting to know me better. They really don’t understand how difficult something like that is to answer. When answering that question, one has to judge not only how much or how little that person knows about music in general, but also how much or how little they actually want to learn about MY music.

My answer should probably be something like this: “I write texture-based chamber, choral, band, and orchestral music that often equally integrates both electronic instruments and acoustic instruments and which is informed by all of the compositional techniques and languages from the last century; the goal of which is to capture a moment, express an idea or emotion, and generally to cause an audience member or listener to have an experience of some kind.”

But that’s a lot.

Maybe I’m underestimating the strangers with whom I attend dinner parties, but I’ve always assumed that’s more than someone wants to hear as an answer to that question. My real answer is this: “I write avant-garde classical music.” It’s short, it’s to the point, and it does, in some way, actually give a person an idea of what my music is like. Moreover, it leaves some openness for more questioning, if someone is actually interested in going down that rabbit hole with me.

Some people would have a problem with my usage of the term “classical” to describe my music. The technical definition of “classical music” is music that was written in Western Europe from about 1750 to 1850. That’s not my music. In fact, that’s not the music of anyone who has been alive in the last 150 years. But this means that there are several generations of composers who have no words to describe their music. The music that we write isn’t pop music, it isn’t jazz, it’s not rock, and if it isn’t “classical,” then what the hell is it? How should we describe it to potential listeners? What can we say that will give them some idea of what we do and also allow them the option of learning more without feeling intellectually alienated by an incomprehensible stream of music-specific terminology?

Several terms have been proposed or used over the years in an effort to remedy this situation. Some call this music “art music,” some “serious music,” even “legitimate music.” The rather offensive implication of these terms is that other genres are “not art,” “not serious,” or “not legitimate.” Some call it “concert music,” which, of course, absurdly means that no other music has ever or will ever be performed in a concert. “Orchestral music” is an attractive candidate, but implies a specific ensemble and excludes others. Can one really say that a piece written for string quartet is “orchestral”? Furthermore, the term “orchestral” tells us very little about what the music sounds like. Composers like Philip Glass and Arnold Schoenberg have written for, recorded with, and performed with orchestras, but so have Ray Charles and Metallica.

The two most recent candidate terms that I have seen are “notated music” and “composed music.” These two terms came to me via blogs mentioned by colleagues. They certainly seem attractive at first, but I believe that, just like all the other terms mentioned above, neither actually does an effective job of telling us about the music it is attempting to describe.

“Composed music” comes from music journalist and radio producer Craig Havighurst. You can read his blog on the subject here. “Notated music” ultimately comes from Steve Reich, but is brought up again by Ethan Hein whose blog you should read here.

For those of you who are too lazy to do that (no judgement), here’s the abridged version: Havighurst likes “composed music” because it venerates the composer again. He says it implies music that comes from “a singular mind, fixed and promulgated in written form” as well as a particular restraint and “composure” that is expected of us when we listen to this music. Hein, whose blog is actually an excellent critique of Havighurst’s term, points out the reek of exclusionist privilege that permeates Havighurst’s concept of “composed” music. He also draws attention to the fact that, really, all music is composed in one way or another. Lastly, Hein proposes Reich’s “notated music” as an alternative. There’s actually a lot more to be said here, but it’s not entirely pertinent to this particular conversation, so it will have to wait until another time.

The creators behind these two terms are forgetting, or perhaps ignoring, two extremely important things about genre terminology. The first really has to do with the nature of language. Language is a means of expressing or describing something in the absence of that thing. In other words, the only reason that we use the word “chair” is because at some point in time someone had to refer to a chair without being able to point to one and say “this.” The word “chair” creates in us a series of definitions that we understand about chairs. Probably “a place for sitting” is number one on that list for most of us. But those definitions aren’t inherent to the word itself; they had to be taught to us over time. This is why if I say “chair” to someone who doesn’t speak English, it doesn’t mean anything to them, and similarly why if I say “get off the chair” to my cat, he does absolutely nothing.

This same concept should be applied to genre terminology. We create words to define the differences between different kinds of music. But the terms we create only have meaning if there is a common understanding of their definition. “Composed music” is meaningless to the layperson, as is “notated music.” If I have to explain the definition of the terminology I’m using, then I’m back to square one. Why would I waste time doing that when I could just as easily explain my music itself to them? In fact, the only people to whom “classical music” is not an effective descriptor are those with enough musical knowledge that other preexisting terminology, like “minimalist” or “post-serial,” is already meaningful and serves as a better descriptor. These are academic words that only academics are arguing over.

To the layperson, the word “classical” doesn’t mean “music written by Western European men between 1750 and 1850.” It means “music typically composed for acoustic instruments from the orchestral families and/or voices and performed in a particular kind of concert setting.” The proof of this is the fact that the vast majority of people consider contemporary film scores to be “classical” music. Frankly, that description is pretty close to what I do. Adding the words “chamber,” or “electroacoustic,” or “avant-garde” gets the definition close enough that someone will actually know what I’m describing to them, and that’s the only point of having words to explain genre.

The second point that those focused on creating new terms for music are forgetting is a product of the first. It is this: we don’t actually get to decide what our music is called. Debussy famously railed against the idea that his music would be classified as “impressionism,” yet every music history textbook that I have ever seen places him in that movement. In fact, John Adams, Arnold Schoenberg and Steve Reich have all attempted to reject the genre labels that have ended up being applied to them. Yet three quick searches for these composers’ names on iTunes reveal this gem:
[Image: iTunes search results for these three composers]

It’s probably also worth mentioning that Josquin des Prez and Gérard Grisey both come up under this same genre in iTunes.

Louis C.K. makes this point well as he discusses how white people ruined America.

C.K.’s remark, “Ah! You’re Indians!” has come to be my mantra when discussing new terminology for “classical” music. No matter what terms we invent to try to better define what we do, people are still going to call it classical music. People aren’t concerned with the start and end dates of a particular aesthetic movement when they ask what kind of music you write. To correct their terminology, or to try to teach them some new definition, is fundamentally disrespectful to the fact that someone just expressed an interest in what you do! If we ever want to make our music relevant to the world at large, we need to meet people where they are by describing what we do in ways that actually mean something to them. We have enough battles to fight as living composers without fighting people over the name they call our music.

I don’t care if people call it classical music, as long as they call it something.

For more from Stephen Bailey, you can visit his website here.

Musical Repetition (featuring a video from TED-Ed)

As a composer whose music frequently features a strong emphasis on stasis or repetition, I often butt heads with the notion that repetition without transformation is non-music, or “boring.” When we allow ourselves to let go of our judgments about whether we like the music, or whether we’re bored, we tend to be more open to the experience that the music is facilitating. In many ways, music that contains strong repetition allows us to experience the depth of the musical event more fully, because we are allowed to engage with the music several times over, each time learning something new about what it is saying. This concept has fascinated me since the first time I heard a minimalist work, and it has strongly driven my recent conceptualization of what I want my music to be like.

This brief video from TED-Ed explains some of the science behind this idea.

Intellect, Intuition, and Inspiration and Their Link to Compositional Process

When I was first studying composition, I watched an episode of 60 Minutes about a very young composer named Jay Greenberg. At the age of twelve, this young man had already written several symphonies and was studying music theory and composition at Juilliard. Jay mentioned during this program that he often “heard” his music in his head, sometimes several complete pieces at once, and only then needed to write them down. Because of this statement, one of the major subjects of the episode was a discussion of “where” Jay’s music came from. Jay himself answered this question by casually smiling and explaining that he didn’t know. This was followed by a series of experts, including Jay’s teacher Samuel Adler, explaining how dangerous this is. I recall in particular Adler’s statement that the most important thing for Jay was to keep questioning where his inspiration came from, and never take it for granted, lest it leave him. Jay, now 20, is still writing music and is currently published by G. Schirmer.

Inspiration itself is a tough nut to crack and a dangerous tool on which to rely. Recently, while attempting to write a new work, I discovered that a new idea, unrelated to what I was actually intending to write, had crept into my head. I certainly would never claim to hear “fully formed” music in my head, but this experience is a related one. It is more like understanding how a piece works, how it moves from one moment to another, and the sensation one has when listening to it, without actually hearing it. The actual task of composing then becomes a process of working out how to recreate those sensations in a way that makes musical sense. I will admit that this has happened to me before, but this particular episode was significant because I became aware that it was happening at a strange and inopportune time, while working on another piece. In a way, I had the sense that my own inspiration was dictating when I should work on what.

Compositional process and inspiration, I believe, are intrinsically related. Much of the path a composer takes from the inception of an idea to a completed work is determined by the way that composer fosters and reacts to their own inspiration. This year, while studying at DU, I had the pleasure of participating in a number of discussions about compositional process moderated by my colleague, Sarah Perske. Sarah was going through an evaluative and analytical journey with her own compositional process and was kind enough to share that journey with the rest of us. For me, the most important result of Sarah’s sharing was that it caused me to analyze my own process. I found that Sarah’s process and my own are strikingly different, and that these differences shed a remarkable light on the role of inspiration in our two compositional methods.

I believe that Sarah’s process is largely driven by her musical intellect. She begins with what she calls “a topic.” This can be something extra-musical, or it can just be a sound or technique she wishes to explore. This is followed by “messing around at the piano” in an effort to find a sound that will fit the topic. From here, Sarah moves into a sketching process that I know to be very in-depth. She explains: “I start thinking in terms of creating ‘pillars;’ these could be important events in the piece, or textures that I want to create that have some sort of goal. Once I reach this point, I tend to alternate between ‘zooming in’ on detail, and ‘zooming out’ to look at the overall form.” This zooming in and out seems to necessitate a non-linear composition process that jumps around the piece and, interestingly, is also how Sarah describes her method of dealing with writer’s block. Another thing I found interesting about this stage in Sarah’s process is that composition and engraving are largely separate for her; she typically doesn’t start on the engraved score until the composition itself is nearly complete in her sketches, or until she has a complete, handwritten score.

I believe that my process is driven largely by intuition. For me, almost all works begin with extra-musical concepts and a series of decisions regarding instrumentation, pitch content, and form based on the “feeling” I wish to express about the given concept. Once I have these ideas formed, I listen to music for the ensemble I am writing for, or to music that is related to my ideas in some other way. It is usually during this time that actual musical ideas form in my head. This is typically followed by brief sketches, usually consisting of line drawings and verbal notes to myself regarding textures and important events. To give an example of the brevity of these sketches: a current work in progress is made up of three movements; sketches for the entire piece take up only about half a page in total. I then immediately begin composing directly into Finale, feeling my way through the piece and deciding what happens next based on intuition. This process is almost always linear, though it may include vaguely fleshed-out ideas or written notes between sections of fully notated music. If I get stuck, I stop working and wait for a solution to present itself. Oftentimes this means returning to the listening portion of my process.

Since I am not a third party to this comparison, I’m not really able to present an accurate discussion of the quality of the results of these methods. I can say subjectively that I like both Sarah’s music and my own. I believe that I can discuss the efficiency of these two processes, though. To me, Sarah’s process seems highly active and proactive. Her preparation time is spent researching and improvising, and when she gets stuck, her reaction is to work her way out of it. I see my own process as much more passive. I prepare by listening and waiting for something to occur to me, and when I get stuck, I wait my way out. This, I believe, is the fundamental difference between the two processes, and it leads to the fundamental danger that I frequently encounter in my own writing. On several occasions the end result has been me staring at a blank page waiting for something to happen, sometimes for months on end.

Inspiration, intuition, and intellect are resources that all composers share. We all develop methods of creating that utilize our different strengths in each area and attempt to compensate for our perceived weaknesses. No one process actually yields “better” results than any other, but, as Samuel Adler explains, relying too heavily on one tool over the others can be dangerous. I don’t actually know if Adler is right; maybe we can’t rely on the tools we use if we use them too much, but maybe we can. I have the sense that composers like Mozart, and perhaps Jay Greenberg, create solely based on inspiration. I have spent a great deal of time trying to foster my own intuition and learn to listen to myself while creating, yet I remain a slave to my own inspiration. I don’t think I know how to change my process and still be authentic to the music that I feel compelled to create, regardless of the danger that implies. For me at least, I suppose staring at a blank page is as important to my method as anything else.

Some Thoughts Regarding Electronic and Electroacoustic Music (Part I)

Last year I had several experiences with electronic music that caused me to think a great deal about its composition, presentation, and performance. In this blog, I’d like to address two issues related to the decision-making process composers face when writing electronic music, and the ways these decisions tend to shape how audiences perceive that music.

The first issue occurred to me last year when I attended a performance by the Colorado Symphony Orchestra as part of their “Inside the Score” concert series. While all of the works programmed were fantastically well performed and conducted, two works in particular stayed with me because of their treatment of prerecorded elements. These were a new work by conductor Scott O’Neil, which incorporated both the recorded sounds of crickets chirping* and whale sounds, and a work by Respighi entitled Gli uccelli, which used the recorded call of a nightingale. Here is why these works stuck with me, and why they have prompted enough thought on my part that I am writing about them almost five months later: both of the recorded elements for these pieces were played back through the house PA system.

I know that doesn’t seem so interesting at the moment, but let’s talk about what that actually means. The PA system in Boettcher Concert Hall is many, many feet above the orchestra and generally operates using speakers dispersed around the room. By contrast, the orchestra is set up in one location, and each seat in the hall will actually have a slightly different experience of the sound as a result of the relative positioning of the individual players. So here’s the problem with that: in terms of how we hear, physical separation equates to psychological separation. Our brains are wired so that when we hear two sounds coming from two separate places, we assume they are two separate and unrelated entities. This is great for navigating primordial savannahs, but it also means that it is very difficult for us to correlate the sounds from a PA system with those of an acoustic orchestra. This probably doesn’t always pose a huge problem, but for these two pieces, especially O’Neil’s, which involved a great deal of interaction between the orchestra and the recording, it was a major drawback.

The solution to this problem is very simple: the CSO should have put a speaker or two in the orchestra, probably back with the percussion. This would not only have merged the two sounds into one, but would also likely have been more in line with the early performances of the Respighi piece, since it was written in the infancy of recording and amplification technologies and the likelihood that a PA system would have been available in the hall at its premiere seems very slim.**

This brings me to my first point about how composers need to think differently than we are used to when we write electronic music. Putting the electronic element of a piece through the PA is not always wrong, but it will have different effects on the listener. This is especially true if the work also calls for acoustic instruments. We as composers have to think about the way the two elements will interact musically and make decisions about how they will interact physically. If the acoustic instrument is supposed to be set apart, “surrounded” by the electronic element, the PA is certainly the best choice, as this is the effect the audience will perceive. If the two parts are supposed to interact, and to have equal footing, then a speaker on the stage, probably as close to the other performer as possible, should be used. Most importantly, we need to state these decisions explicitly in the score and ensure that they are honored when we attend performances. Our job as composers is to make and defend musical decisions about the kind of experience we want our listeners to have. Allowing these decisions to be made by others countermands the work that we do in other, more obvious areas of our music.

The second issue occurred to me when I read a journal article by Miller Puckette detailing a new algorithm for more accurate score following. For those who aren’t familiar with the terminology, score following is essentially a way to remove the necessity for human intervention in the performance of electronic elements alongside acoustic instruments. By sensing the pitches played by the acoustic performer, the computer follows along a preprogrammed score of the performance and reacts to what the performer is doing in specific ways at predetermined times. In truth, my experience has been that a great deal of research and effort has gone into this particular topic, and I have to wonder why. Rarely, if ever, is the computer actually making its own “decisions” about what happens. It is almost always the case that the computer merely triggers certain events to occur, a task that a human has demonstrably and repeatedly performed more easily and accurately. So why are we trying to remove the human from the equation?
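
To make the idea concrete, here is a toy score follower in Python. This is a deliberately naive illustration of the concept, not Puckette’s algorithm; the note numbers and the cue are invented.

```python
def follow_score(expected_pitches, detected_pitches, cues):
    """Advance through an expected pitch list as matching pitches arrive,
    firing a cue when its position in the score is reached.

    expected_pitches: MIDI note numbers in score order
    cues: maps a score position -> a callable (some electronic event)
    A toy illustration of score following, not Puckette's algorithm.
    """
    position = 0
    for pitch in detected_pitches:
        if position < len(expected_pitches) and pitch == expected_pitches[position]:
            position += 1              # the player has reached the next note
            if position in cues:
                cues[position]()       # trigger the event tied to this point
        # Wrong notes are simply ignored in this naive version; real score
        # followers must handle errors, tempo changes, and octave slips.
    return position

# Hypothetical usage: start delay playback after the performer's third note.
score = [60, 62, 64, 65]               # C, D, E, F
cues = {3: lambda: print("cue: start delay playback")}
follow_score(score, [60, 61, 62, 64, 65], cues)
```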

The conclusion here is that we shouldn’t remove the human from the equation. Keeping a person at the controls actually has more benefits than just being easier and more accurate. When a computer is on stage without a person operating it, there is a kind of disembodiment that occurs. The idea, particularly in a concert setting, that a sound is occurring without a visible point of origin seems to catch people off guard. What’s more, audiences tend to take certain visual cues from the people on stage regarding when the piece has begun or finished, and a lack of these cues will often leave audience members disoriented and uncomfortable.

The point is that we as composers need to make deliberate choices about the way our music is performed. It is very common for composers to include stage diagrams for their works, and these diagrams could easily include the location of a speaker and a laptop. The problem is that we aren’t used to making these kinds of decisions so explicitly, because a tradition already exists regarding where instrumentalists should sit on stage in relation to each other. Really, the only decisions we’ve had to make in the past are those of musical content. But since electronics have so drastically changed the face of the music world, we have to change the way we control the experiences our audiences have as well.

*A recording of crickets slowed and pitched down has circulated around the internet for some time. The sound is quite beautiful and is worth hearing. https://soundcloud.com/acornavi/jim-wilson-crickets-audio

**I actually know very little about the history of this piece and am only making educated assumptions based on the time period when it was written and my understanding of the history of recording technology. If anyone knows more about this, I’d be fascinated to hear about it.