Last year I had several experiences with electronic music that have caused me to think a great deal about its composition, presentation, and performance. In this blog, I’d like to address two issues related to the decision making process that composers face when writing electronic music and the ways that these decisions tend to shape the perception of that music by audiences.
The first issue occurred to me last year when I attended a performance by the Colorado Symphony Orchestra as part of their “Inside the Score” concert series. While all of the programmed works were fantastically performed and conducted, two stayed with me specifically because of their treatment of prerecorded elements: a new work by conductor Scott O’Neil, which incorporated both recorded cricket chirps* and whale sounds, and Respighi’s Gli uccelli, which used the recorded call of a nightingale. Here is why these works stuck with me, and why they have prompted enough thought on my part that I am writing about them almost five months later: in both pieces, the recorded elements were played back through the house PA system.
I know that doesn’t seem so interesting at the moment, but let’s talk about what it actually means. The PA system in Boettcher Concert Hall hangs many, many feet above the orchestra and disperses sound through speakers positioned around the room. By contrast, the orchestra occupies one location, and each seat in the hall gets a slightly different experience of the sound depending on its position relative to the individual players. Here’s the problem with that: in terms of how we hear, physical separation equates to psychological separation. Our brains are wired so that when we hear two sounds coming from two separate places, we assume they are two separate, unrelated entities. This is great for navigating primordial savannahs, but it also means that it is very difficult for us to fuse the sounds from a PA system with those of an acoustic orchestra. That may not always pose a huge problem, but for these two pieces, especially O’Neil’s, which involved a great deal of interaction between the orchestra and the recording, it was a major drawback.
The solution to this problem is very simple: the CSO should have put a speaker or two in the orchestra, probably back with the percussion. This would not only have merged the two sounds into one; it would also likely have been more in line with early performances of the Respighi. The piece was written in the infancy of recording and amplification technology, and the likelihood that a PA system was available in the hall at its premiere seems very slim.**
This brings me to my first point about how composers writing electronic music need to think differently than we are used to. Putting the electronic element of a piece through the PA is not always wrong, but it has a distinct effect on the listener, especially if the work also calls for acoustic instruments. We as composers have to think about how the two elements will interact musically and make decisions about how they will interact physically. If the acoustic instrument is supposed to be set apart, “surrounded” by the electronic element, the PA is certainly the best choice, as this is the effect the audience will perceive. If the two parts are supposed to interact on equal footing, then a speaker on the stage, as close to the other performer as possible, should be used. Most importantly, we need to state these decisions explicitly in the score and ensure they are carried out when we attend performances. Our job as composers is to make and defend musical decisions about the kind of experience we want our listeners to have. Allowing these decisions to be made by others countermands the work we do in other, more obvious areas of our music.
The second issue occurred to me when I read a journal article by Miller Puckette detailing a new algorithm for more accurate score following. For those unfamiliar with the terminology, score following is essentially a way to remove the need for human intervention when electronic elements are performed alongside acoustic instruments. By sensing the pitches the acoustic performer plays, the computer follows along in a preprogrammed score of the performance and reacts to what the performer is doing in specific ways at predetermined points. In truth, my experience has been that a great deal of research and effort has gone into this particular topic, and I have to wonder why. Rarely, if ever, is the computer actually making its own “decisions” about what happens. It almost always merely triggers certain events, a task that has demonstrably and repeatedly been performed more easily and accurately by a human. So why are we trying to remove the human from the equation?
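To make the idea concrete, here is a minimal sketch of score following in Python. This is emphatically not Puckette’s algorithm (his work handles tempo, performer errors, and continuous audio analysis); it is only a toy illustration of the “trigger certain events at predetermined points” behavior described above, and all of the note numbers and cue names are hypothetical.

```python
# Toy score follower: advance through an expected sequence of pitches
# and fire a cue when the performance reaches a predetermined point.

EXPECTED = [60, 64, 67, 72]  # MIDI note numbers the score expects, in order

# Cues keyed to score positions: fire after the 1st and 3rd expected notes.
CUES = {1: "start drone", 3: "trigger playback"}


def follow(detected_pitches):
    """Step through EXPECTED as detected pitches match; return fired cues.

    Non-matching pitches (performer ornaments, detection noise) are ignored,
    which is the crudest possible error handling.
    """
    position = 0
    fired = []
    for pitch in detected_pitches:
        if position < len(EXPECTED) and pitch == EXPECTED[position]:
            position += 1
            if position in CUES:
                fired.append(CUES[position])
    return fired


# A performance with one extra note still fires both cues in order:
print(follow([60, 62, 64, 67]))  # → ['start drone', 'trigger playback']
```

Even this toy version shows why the research problem is hard: real pitch detection is noisy, performers deviate from the score, and the follower must decide in real time whether a mismatch is an ornament, a mistake, or a lost place. A human operator with a score and a trigger button sidesteps all of that.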
My conclusion is that we shouldn’t. Keeping a person in the loop has more benefits than just ease and accuracy. When a computer is on stage without an operator, a kind of disembodiment occurs: the idea, particularly in a concert setting, that a sound is happening without a visible point of origin seems to catch people off guard. What’s more, audiences take certain visual cues about when a piece has begun or finished from the people on stage, and the absence of those cues often leaves audience members disoriented and uncomfortable.
The point is that we as composers need to make deliberate choices about the way our music is performed. It is already common for composers to include stage diagrams with their works, and those diagrams could easily include the location of a speaker and a laptop. The problem is that we aren’t used to making these kinds of decisions so explicitly, because a tradition already exists governing where instrumentalists sit on stage in relation to each other. Really, the only decisions we’ve had to make in the past are those of musical content. But since electronics have so drastically changed the face of the music world, we also have to change the way we shape the experiences our audiences have.
*A recording of crickets slowed and pitched down has circulated around the internet for some time. The sound is quite beautiful and is worth hearing. https://soundcloud.com/acornavi/jim-wilson-crickets-audio
**I actually know very little about the history of this piece and am only making educated assumptions based on the time period when it was written and my understanding of the history of recording technology. If anyone knows more about this, I’d be fascinated to hear about it.