Improvisation in Stockhausen’s Solo

Years ago I wrote a paper on a piece by Stockhausen called Solo. The paper itself was long and boring, so I’ll spare you a reproduction of it here. I recently suffered through a rereading of it and discovered that it contains some interesting thoughts about improvisation that are worth exploring a bit. One of the most interesting things about Solo is the methodology of improvisation it asks the player to use, which I believe is a very rare kind of improvisation.

It’s a bit difficult to describe Solo briefly since it is such a complex work. Solo is an electroacoustic work for a single player and feedback delay. The delay times are much longer than those we usually associate with delay as an effect, which tend to be measured in milliseconds. The delay in Solo instead uses times of multiple seconds, so a whole phrase, or several phrases, can be repeated by the delay after the performer has played them.
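
To make that concrete, here is a minimal sketch of a feedback delay line whose delay time is measured in whole seconds rather than milliseconds. This is only an illustration of the general principle, not Stockhausen’s actual setup (the original electronics were realized with analog tape, not software); the sample rate, feedback amount, and twelve-second delay time are assumptions chosen for the example.

```python
import numpy as np

def feedback_delay(dry, sample_rate=44100, delay_seconds=12.0, feedback=0.6):
    """A single feedback delay line with a delay time of whole seconds,
    long enough that entire phrases return rather than individual notes."""
    delay_samples = int(delay_seconds * sample_rate)
    tail = delay_samples * 4                      # room for the echoes to die away
    out = np.zeros(len(dry) + tail)
    buffer = np.zeros(delay_samples)              # circular delay buffer
    idx = 0
    for n in range(len(out)):
        x = dry[n] if n < len(dry) else 0.0       # input, then silence after the phrase
        delayed = buffer[idx]                     # what went into the loop delay_seconds ago
        out[n] = x + delayed
        buffer[idx] = x + delayed * feedback      # feed part of the echo back into the loop
        idx = (idx + 1) % delay_samples
    return out

# A one-second 440 Hz "phrase": it returns every 12 seconds, quieter each time.
sr = 44100
phrase = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
echoed = feedback_delay(phrase, sample_rate=sr, delay_seconds=12.0)
```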

[Image: a page of notation from Solo]

The notation consists of six form schemes and six pages of notated music. An example of a page of notation is shown above, and a form scheme is shown below. The player is instructed to letter the pages of notation A-F and place them in order. Since the lettering is left up to the player, the order of the pages ends up being more or less arbitrary. Stockhausen then refers the player to different divisions of the material on each page: pages, systems, parts, and elements. Pages and systems have the same definitions that they would in other notated music. Stockhausen defines a “part” as any group of notes contained within a pair of bar lines. This is not called a “bar” or a “measure” simply because the printed music contains both proportional and mensural notation. An “element” is any single normally printed note, any grace note by itself, any group of grace notes, or any single grace note and its associated normally printed note.

[Image: a form scheme from Solo]

The form schemes represent the way in which the player will interpret the notated music. For a performance, only one form scheme is selected to be played. Each of the form schemes is broken into smaller sections made up of cycles and periods. A cycle is the group of periods between two letters as determined in the form scheme. Each form scheme has six cycles, which are lettered to correspond generally to the similarly lettered page of notation. So, cycle A is the first cycle of periods on all of the form schemes and will generally contain material from page A of the notation. Periods are smaller groupings within cycles which have time values in seconds assigned to them based on the delay time of the electronics for the corresponding cycle. So, as we can see in the image taken from form scheme II below, in cycle A there are nine periods of twelve seconds each. Within cycle B there are seven periods of twenty-four seconds each, and so on.

[Image: the opening cycles of form scheme II]
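
For anyone who finds the nesting hard to hold in their head, the hierarchy of a form scheme can be sketched as a simple data structure. The period counts and durations below are just the two cycles of form scheme II described above; the other four cycles follow the same pattern with their own values, which I haven’t reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Cycle:
    letter: str          # corresponds loosely to the similarly lettered page of notation
    periods: int         # number of periods in the cycle
    period_seconds: int  # length of one period = delay time of the electronics here

    def duration(self) -> int:
        return self.periods * self.period_seconds

# Only the two cycles of form scheme II mentioned above; the remaining
# cycles (C-F) are omitted rather than guessed at.
form_scheme_ii = [
    Cycle("A", periods=9, period_seconds=12),
    Cycle("B", periods=7, period_seconds=24),
]

for cycle in form_scheme_ii:
    print(f"Cycle {cycle.letter}: {cycle.periods} periods x {cycle.period_seconds}s = {cycle.duration()}s")
# Cycle A: 9 periods x 12s = 108s
# Cycle B: 7 periods x 24s = 168s
```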

A performance of Solo is never a “start at measure one and play to the end” kind of endeavor. Rather, the player is at liberty to select portions of each page to play in a given cycle. Below each cycle there is a group of symbols that tells the player relatively loosely how they should perform the music for that cycle. Stockhausen calls these “what,” “where,” and “how” symbols. A “what” symbol tells a player what size of gesture they should select (systems, parts, or elements); a “where” symbol tells a player from where they should select these gestures (from the current page, the current and the following page, the current and the previous page, or all three); a “how” symbol tells the player how the gestures they select should relate to each other (different, the same, or opposite). The criterion for the “how” relation is left up to the player. So, the player might decide that the “how” symbol relates to pitch. In this case, the “same” symbol would indicate that the gestures within a cycle should all have more or less the same pitch range.
Two additional symbols indicate the length of time a player may pause between periods, and how the player should attempt to relate to the electronics part within a cycle.

The image below is from cycle B of form scheme V. These particular symbols indicate that, within this cycle, the player must draw musical material made up of parts from pages A, B, and C, which are either the same or different, with medium pauses following each part, and entrances staggered so as to create a polyphonic texture with the electronics.

[Image: the symbols below cycle B of form scheme V]

So, in actual performance, the player might play a part from page B, then one from page C, one from page A, another from page B, and so on until they had filled a 45-second period from the cycle. Then the player can take a medium pause before continuing the same process, trying to create a polyphonic texture as the electronics play back what they played during the previous period.
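
To make the selection process concrete, here is a rough sketch of how a cycle like this one might be filled in, assuming the “what” symbol asks for parts, the “where” symbol allows pages A, B, and C, and each period lasts 45 seconds. The part names and durations are invented placeholders, and the same/different/opposite relation is left out for simplicity; a performer, of course, chooses by musical judgment rather than at random.

```python
import random

# Hypothetical pool of parts, keyed by page, each with a rough duration in seconds.
# Real parts come from the notated pages; these values are placeholders.
parts = {
    "A": [("A1", 6), ("A2", 9), ("A3", 4)],
    "B": [("B1", 7), ("B2", 5), ("B3", 11)],
    "C": [("C1", 8), ("C2", 6), ("C3", 10)],
}

def fill_period(allowed_pages, period_seconds):
    """Pick parts from the allowed pages until one period's worth of time is filled."""
    chosen, elapsed = [], 0
    while elapsed < period_seconds:
        page = random.choice(allowed_pages)      # "where": which pages are permitted
        name, dur = random.choice(parts[page])   # "what": a part (notes between bar lines)
        chosen.append(name)
        elapsed += dur
    return chosen, elapsed

selection, total = fill_period(["A", "B", "C"], period_seconds=45)
print(selection, total)   # e.g. ['B3', 'A1', 'C2', 'B1', ...] and roughly 45-55 seconds
```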

Whew! Remember when I said it was difficult to describe this piece simply? There’s actually quite a bit more to the performance of the piece (for example, we haven’t really discussed the electronics at all!), but I think that’s all you’ll need to know for now.

Solo represents an excellent example of what I would call “composed improvisation.” The term itself seems like an oxymoron, but the concept is actually much more common than one might think. For example, virtually all ‘traditional’ jazz is composed improvisation. Jazz players are generally given, or have learned, some kind of chart or lead sheet which contains the chord changes and melody of a piece, and then improvise based on that information.


In fact, it’s fairly common for this same kind of controlled improvisation based on notation to occur in contemporary classical music as well. What I have seen most commonly, and have used the most in my own music, is a section wherein only pitches are notated and everything else is left to the player to decide. An example from my music is shown below. Note that the given pitches can be used in any order, in any octave, with any rhythm, dynamic, articulation and so on.

[Image: a pitch-only passage from the author’s own music]

These are by no means the only ways that notated improvisation can occur. There are probably as many different ways to utilize these kinds of ideas as there are composers using them. But Solo is actually an example of something very rare in the world of composed improvisation. To work out what that is, we have to take a quick step back.

Music is fundamentally organized into a series of impulses. A note begins on an impulse. That note can be combined with other notes into a larger phrase, which has its own larger impulse. That phrase is then grouped with other phrases to form a section, which has its own, still larger impulse. Sections can be grouped into a large form which we might call a movement, or a complete work, each of which also has its own much larger impulse. Sometimes people refer to this concept of grouping things into larger and larger impulses as “the big beats” of music. I’m deliberately avoiding the word “beat” here because it can be misleading.

This concept is actually alluded to in a TED Talk by Benjamin Zander, which you can watch below, and is more scientifically stated by Stockhausen himself in an essay which appears in Perspectives on Contemporary Music Theory, edited by Benjamin Boretz and Edward T. Cone.

Composed improvisation can generally be organized into three levels, based on which level of impulse the player is allowed to improvise with and which levels of impulse have been predetermined. In the first level, the form and the phrases are both predetermined, but the specific notes which are played are up to the performer. In the second level, the form and the specific notes are determined, but the phrases which are constructed out of those notes are up to the performer. In the final level, specific notes and phrases are determined, but the form of the piece is left to the performer.

So, the two forms of composed improvisation that we have discussed thus far are both level-one improvisation. Consider jazz improvisation: the form of the piece and the phrase structure are already given based on the notation within the chart, but exactly which notes are played when is up to the player to decide. Specific notes are undetermined, but the larger impulses are predetermined.

An example of third-level improvisation is the “open form” music found in some of the works of Pierre Boulez, as well as numerous works by Stockhausen (Zyklus and Licht, for example). In this kind of improvisation, while entire sections of notes and phrases are specifically notated, the order in which those sections occur is determined by the performers.

Solo is a rare example of level-two improvisation, in which specific notes and gestures are determined, as is the overarching form, but the way those notes and gestures are organized into phrases is left to the player. I have not yet encountered another piece of composed improvised music that contains large-scale, level-two improvisation, even among Stockhausen’s works. What’s more, the performer’s understanding that this work functions as level-two improvisation is absolutely imperative if a performance is to faithfully represent Stockhausen’s intentions for Solo.

For those interested in hearing Solo, here is a recording of me and horn player Briay Condit playing this piece.

The fact that this work is, as far as I am aware, unique in the world of improvised music makes it more meaningful to the canon, and likely explains why the work is so notationally involved and difficult for performers to meaningfully understand. And, frankly, this only begins to deal with the things about this work that are fascinating and misunderstood, which probably explains why my previous paper was so long and boring… perhaps more on this another day.

For more from Stephen Bailey, you can visit his website here.

The Power of Recurrence: Further Thoughts on Form


I recently had the pleasure of seeing a live performance of Mario Davidovsky’s Synchronisms No. 10 for guitar and tape. Davidovsky is an Argentine-born composer who has spent most of his career in the US, especially at Columbia and Harvard universities. As with much American music from the more “academic” strain (Davidovsky’s biggest mentor was Milton Babbitt), Synchronisms No. 10 does not follow any traditional form. Instead, the piece appears to be through-composed, with a number of distinct sections following organically upon each other, creating an interesting and colorful variety of sound worlds. Somewhat surprisingly, the piece begins with several minutes of solo guitar before the electronic part enters. However, near the end of the piece, the guitar’s opening gestures recur exactly as at the beginning, but with an electronic accompaniment this time. As I listened to the performance, the obvious recurrence of this passage gave the whole a much more defined shape in my mind, causing me to smile and nod in approval almost involuntarily. Suddenly, it seemed as if I liked the piece a whole lot more, even though it had done nothing new.

Even though Davidovsky studiously avoided using any classical forms, I realized that this recurrence of the opening material was actually functioning in a way analogous to a recapitulation in traditional form, even apart from the return to a home tonality which is traditionally associated with it. This suggested to me that perhaps the main purpose of traditional musical forms such as sonata and rondo is not to provide a tonal structure, but simply a framework for recurrence. In a piece of any substantial length, some element of recurrence is necessary to create a satisfying listening experience, whether the language is tonal or atonal. In fact, I would argue that the longer a piece is, the more essential repetition or recurrence is to maintaining a coherent construction of form.

Similarly, in the visual arts, the larger a work’s physical dimensions, the more important its form or composition is. The painter, potter, sculptor, or architect constructs these forms out of elements dealing with the distribution of materials across space, such as shape, color, balance, and proportion. However, while a work of visual art can be grasped instantaneously, in a single glance, a work of music must be experienced through time. Therefore, its structure must be articulated through elements dealing with the disposition of materials across time, such as repetition, variation, recurrence, expansion, or contrast.

Because of this principle, I would argue that truly through-composed music (that is, forms relying exclusively on variation or contrast instead of repetition or recurrence) can only work on small scales. One significant exception to this might be so-called process music, in which certain musical parameters follow a clearly-defined trajectory over the course of the piece, so that the character of the music is constantly in flux and thus never literally repeating. These large-scale trajectories provide a way for the listener to conceptualize the entire piece in a single glance, so to speak, without needing to recognize material they heard earlier. Even so, most examples of process pieces use either repetition or recurrence as well to help construct the form. Composers may even construct processes that undo or spiral back upon themselves, so that the end of the piece is the same as the beginning—a sort of terminal recurrence that signals the piece’s completion. (For a brilliant example of process music, see Thomas Adès’ In Seven Days.)

In an earlier post, I reflected on how minimalist art showed me that the ideal balance between repetition and variation in a work often tilts much more towards repetition than I think. After my experience listening to the Davidovsky, I now wonder if this principle applies to all musical styles, not just minimalism. For example, one of the most stimulating experiences I’ve had as a composer was taking a seminar in Schenkerian analysis, a music theory paradigm which attempts to show that tonal music uses the same basic patterns at all levels of its structure, from phrases to sections to entire pieces. As a theorist, I don’t necessarily buy all the assertions of Schenkerian philosophy, but as a composer, it opened my eyes to the potential to expand any musical idea without adding any new material, by simply replicating the pattern of the whole in each of the parts, much like a fractal.

To take a completely opposite example, serial music also relies heavily on repetitions of a basic tone row, albeit transformed through processes such as retrograde and inversion (not to mention extreme contrasts in rhythm, timbre, or texture). While serial music is notoriously difficult for listeners to comprehend, I wonder if this is not due to its lack of tonality but rather to the fact that the repetition and recurrence in its structure are not apparent to listeners, having been buried by the radical variation of other musical parameters. The same sort of structure is still there, but it fails to create a sense of cohesion for listeners if they are unable to perceive it.

In my opinion, the difficulty for composers in writing long pieces is not in coming up with enough ideas to fill the piece, but in stretching out a single idea to fill the appropriate amount of time, like blowing up a balloon or throwing a pot on the wheel. Much as novice potters tend to leave the walls of their pots too thick because they don’t realize how far they can stretch the clay to enclose a larger volume, aspiring composers tend to leave their musical materials underdeveloped, moving on from an idea before it has grown to its full potential.

So the next time I’m stuck searching for inspiration in a piece, I intend to check what I’ve already written and consider if it might just be time for some repetition, or at least a little more stretching of an idea. After all, if you never pop a balloon, you’re not blowing it up big enough, right?

Some Thoughts Regarding Electronic and Electroacoustic Music (Part I)

Last year I had several experiences with electronic music that have caused me to think a great deal about its composition, presentation, and performance.  In this blog, I’d like to address two issues related to the decision making process that composers face when writing electronic music and the ways that these decisions tend to shape the perception of that music by audiences.

The first issue occurred to me last year when I attended a performance by the Colorado Symphony Orchestra as part of their “Inside the Score” concert series. While all of the works that were programmed were fantastically well performed and conducted, two works specifically stayed with me because of their treatment of prerecorded elements. These were a new work by conductor Scott O’Neil which incorporated both the recorded sounds of crickets chirping* and whale sounds; and a work by Respighi entitled Gli uccelli which used the recorded call of a nightingale. Here is why these works stuck with me, and why they have caused enough thought on my part that I am writing about them almost five months later: both of the recorded elements for these pieces were played back through the house PA system.

I know that doesn’t seem so interesting at the moment, but let’s talk about what that actually means. The PA system in Boettcher Concert Hall is many, many feet above the orchestra, and generally operates using a speaker system that is dispersed around the room. By contrast, the orchestra is set up in one location, and each seat in the hall will actually have a slightly different experience of the sound as a result of the relative positioning of the individual players. So here’s the problem with that: in terms of how we hear, physical separation equates to psychological separation. Our brains are wired so that when we hear two sounds coming from two separate places, we assume that they are two separate and unrelated entities. This is great for navigating primordial savannahs, but it also means that it is very difficult for us to correlate the sounds from a PA system with those of an acoustic orchestra. This probably doesn’t always pose a huge problem, but for these two pieces, especially O’Neil’s piece, which involved a great deal of interaction between the orchestra and the recording, it was a major drawback.

The solution to this problem is very simple: the CSO should have put a speaker or two in the orchestra, probably back with the percussion. This would not only have merged the two sounds into one, but would also likely have been more in line with the early performances of the Respighi piece. That work was written in the infancy of recording and amplification technologies, and the likelihood that a PA system would have been available in the hall at its premiere seems very slim.**

This brings me to my first point about how composers need to think differently than we are used to when we write electronic music. Putting the electronic element of a piece through the PA is not always wrong, but it will have different effects on the listener. This is especially true if the work also calls for acoustic instruments. We as composers have to think about the way the two elements will interact musically and make decisions about how they will interact physically. If the acoustic instrument is supposed to be set apart, “surrounded” by the electronic element, the PA is certainly the best choice, as this is the effect that the audience will perceive. If the two parts are supposed to interact, and to have equal footing, then a speaker on the stage, probably as close to the other performer as possible, should be used. Most importantly, we need to state these decisions explicitly in the score and ensure that they happen when we attend performances. Our job as composers is to make and defend musical decisions about the kind of experience we want our listeners to have. Allowing these decisions to be made by others countermands the work that we do in other, more obvious areas of our music.

The second issue occurred to me when I read a journal article by Miller Puckette in which he detailed a new algorithm developed for more accurate score following. For those who aren’t familiar with the terminology, score following is essentially a way to remove the necessity for human intervention in the performance of electronic elements alongside acoustic instruments. By sensing the pitch played by the acoustic performer, the computer follows along a preprogrammed score of the performance and reacts to what the performer is doing in specific ways at predetermined times in the score. In truth, my experience has been that a great deal of research and effort has gone into this particular topic, and I have to wonder why. Rarely, if ever, is the computer actually making its own “decisions” about what happens. It is almost always the case that the computer merely triggers certain events to occur, a task that has repeatedly proven easier and more accurate when performed by a human. So why are we trying to remove a human from the equation?
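
For readers who have never seen score following in action, here is a toy sketch of the basic idea only; it is nothing like Puckette’s actual algorithm, which handles tempo, wrong notes, and ambiguous pitch detection far more robustly. The pitch names and cue labels are invented for the example.

```python
# Expected pitches, in order, and electronic events tied to positions in that list.
score = ["C4", "E4", "G4", "B4", "C5"]
cues = {2: "start_drone", 4: "stop_drone"}   # fire after the 2nd and 4th score notes

def follow(detected_pitches):
    """Advance through the score as detected pitches match, triggering cues."""
    position = 0
    for pitch in detected_pitches:
        if position < len(score) and pitch == score[position]:
            position += 1                                # the performer has advanced
            if position in cues:
                print(f"trigger: {cues[position]}")      # predetermined event
        # A mismatched pitch is simply ignored here; a real score
        # follower must decide whether it was an error or an insertion.

# The performer plays an extra A4 that is not in the score:
follow(["C4", "E4", "G4", "A4", "B4", "C5"])
# trigger: start_drone
# trigger: stop_drone
```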

The conclusion here is that we shouldn’t remove the human from the equation. This actually has more benefits than just being easier and more accurate. When a computer is on stage without a person operating it, there is a kind of disembodiment that occurs. The idea, particularly in a concert setting, that a sound is occurring without a point of origin seems to catch people off guard. What’s more, audiences tend to take certain visual cues regarding when the piece has begun or finished from the people on stage and a lack of these visual cues will often make audience members disoriented and uncomfortable.

The point is that we as composers need to make deliberate choices about the way our music is performed. It is very common for composers to include stage diagrams for their works and these diagrams could easily include the location of a speaker and a laptop. The problem is that we aren’t used to making these kinds of decisions so explicitly because a tradition already exists regarding where instrumentalists should sit on stage in relation to each other. Really, the only decisions we’ve had to make in the past are those of musical content.  But since electronics have so drastically changed the face of the music world, we have to also change the way that we control the experiences our audiences have.

*A recording of crickets slowed and pitched down has circulated around the internet for some time. The sound is quite beautiful and is worth hearing. https://soundcloud.com/acornavi/jim-wilson-crickets-audio

**I actually know very little about the history of this piece and am only making educated assumptions based on the time period when it was written and my understanding of the history of recording technology. If anyone knows more about this, I’d be fascinated to hear about it.

A few words about Stephen Bailey’s “The Uncurling Nautilus”

posted by: Sarah Perske

One of my goals for this blog is to periodically say a few words about works posted on the “Listening” page in the hope of initiating conversation about those works. This week I’d like to highlight Stephen Bailey’s “The Uncurling Nautilus” for cello and laptop. Please take a moment to listen to the piece if you haven’t already done so (click the image on the left). For that matter, take a moment to listen even if you’re already familiar with the piece! I’ve heard the version for horn and laptop in live performance several times now, and new details have emerged upon each hearing.

Both versions of the piece strike me as particularly compelling integrations of electronics and an acoustic instrument. The electronic element functions both as a virtual space in which the cello resonates (I think this is most audible in the outer sections of the work), and as an “instrument” in its own right (this can be heard in the inner sections, starting at 2:37 in this recording). It is also notable that the laptop performer’s role is truly performative, with a high degree of interaction between the laptop performer and the cellist. At 2:37, for example, the laptop performer must trigger groups of notes in response to the cellist’s pacing. I appreciate the sonic depth and richness that the electronic element creates in this piece, and the timbral variety created by the use of vocalization and percussive sounds in the cello part.

Stephen Bailey has interesting things to say about the structure of the piece:

“Many notable composers have had a fascination with the Fibonacci sequence. This is a series of numbers where the next number is reached through the addition of the previous two. The order of these numbers is 0, 1, 1, 2, 3, 5, 8, 13, 21 and so on. Another important element of this sequence of numbers is the ratio between each consecutive number after the third. This ratio is about 62% and has for many years been known as the golden ratio. This ratio also describes the spiral curling of the shell of a nautilus, a sea-dwelling cephalopod related to, but far more ancient than, the squid and the octopus.

The Uncurling Nautilus is not me expressing my own fascination with the Fibonacci sequence, though I do use the sequence as a compositional tool. The initial concept behind this work was one of gradual accumulation of elements over time and the Fibonacci sequence stuck out as a significant and interesting pattern through which to accumulate elements that wasn’t simply 1, 2, 3, 4, 5 etc. The work is split into three main sections: in the first, the cello plays brief gestures which are played back by the computer as microtonal clusters through a delay. The Fibonacci sequence governs the accumulation of attacks in this section. So first the cello plays one note, then one again, then two, three, five, eight and so on. This creates micro-level call and response periods of growth and decay which, together, create a macro-level accumulation of sound. In the second main section, the cello plays a lyrical, rhythmically free melody and is accompanied by chords played by the computer. The accumulation of texture within the accompaniment is governed by the Fibonacci sequence: first the cello is accompanied by one note, then two, three, five and so on. The third section is a shortened recapitulation of the first. Each of these sections is separated from the next by a cadenza, first improvised by the cello, and then played by the computer based on recorded and highly altered material from the cello’s cadenza.”
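
As a small aside, the counting pattern Bailey describes is easy to write down, and it also shows where the “about 62%” figure comes from; this is just a sketch of the arithmetic, not of anything in the piece itself.

```python
def fibonacci(n):
    """First n Fibonacci numbers starting from 1, 1 (the attack counts described above)."""
    seq, a, b = [], 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

attacks = fibonacci(8)
print(attacks)                          # [1, 1, 2, 3, 5, 8, 13, 21]

# The ratio of each term to the next settles toward ~0.618, the reciprocal of the golden ratio.
ratios = [x / y for x, y in zip(attacks, attacks[1:])]
print([round(r, 3) for r in ratios])    # [1.0, 0.5, 0.667, 0.6, 0.625, 0.615, 0.619]
```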

About Stephen Bailey:

A fierce experimentalist, Stephen Bailey is a Colorado-based composer of chamber, choral, and electronic music. Stephen’s music embodies a language in which the primary concern is expression, and the primary tool is texture. This language borrows techniques from composers of minimalism, sound mass, and post-serialism. The result can be both ecstatically serene and forcefully chaotic, both sumptuously beautiful and disturbingly ugly.

Because of a strong background in audio engineering and music production, Stephen fully embraces the incorporation of technology into music, while also respecting the beauty and expression of classical forms, genres and instruments.

Stephen’s music has been featured twice on the Playground Ensemble’s annual Colorado Composer’s Concert, as well as their 2013 New Creations concert. Stephen was also one of three composers to have their music performed at The Classical Salon at Dazzle Nightclub. His devotion to modern music has garnered him commissions from the Metropolitan State University of Denver Men’s Choir, Our Lady of Fatima Catholic Church, and a number of Denver-area musicians and chamber groups. He has studied composition with composers such as Conrad Kehn, Leanna Kirchoff, Fred Hess, Cherise Leiter, Abbie Betinis, Brian Johanson, and Chris Malloy. He holds an Associate of Arts degree and a Bachelor of Music degree in music composition from Arapahoe Community College and Metropolitan State University of Denver, respectively, and is currently pursuing a Master of Music in composition at the University of Denver.

Raphaël Cendo, “Registre des lumières”

by Nathan Cornelius

I recently had the opportunity to hear French composer Raphaël Cendo’s Registre des lumières performed live at the Cité de la Musique in Paris. In his work, Cendo aims for what he calls “saturation,” that is, overloading the sonic environment so that unforeseen qualities emerge in it, like overloading a microphone with a signal so strong that it generates distortion or feedback. This process can take many forms, such as dense unsynchronized textures, radical extended techniques, or the buildup of contrasting timbres (tone colors). Although I am not yet a fan of all of Cendo’s music, Registre des lumières left a deep impression on me, radically reshaping my conception of musical timbre.

This piece particularly explores timbral saturation, stretching listeners’ ability to perceive many diverse sound qualities at once. At any given moment, one’s attention might rapidly be shifting from the violinists playing col legno, to the pianist hammering on the strings with felt beaters, to the trombonist playing multiphonics with a double reed, to the choir stage-whispering into their microphones.

In this context, Cendo achieves a radical reversal of the qualities of “normal” and “unusual” timbres, so that a simple piano note or plucked cello string seems a fresh and almost alien sound.  I find a parallel here to certain works by composers such as Penderecki and Rochberg, where the prevailing atonal harmonies are suddenly interrupted by pure tonal triads, which seem to break in from another world.  Now that our ears have been violently awoken from their well-worn habits of hearing, we can hear Penderecki’s traditional chords, or Cendo’s traditional timbres, for what they really are—and what they really were all along.