Even after the performers stop, the music will play on

Musicians are eager to perform at the Dan Harpole Cistern, which is nearly 200 feet in diameter and 14 feet deep, giving it a 45-second reverberation time. By comparison, Benaroya Hall’s reverberation time is about 4 seconds. The cistern is at Fort Worden State Park in Port Townsend.

By R.M. CAMPBELL

P-I MUSIC CRITIC


Summer often is full of unusual happenings, but it would be hard to rival tonight’s performance at Fort Worden State Park of John Cage’s “Atlas eclipticalis” at the bottom of the Dan Harpole Cistern by trombonist Stuart Dempster, no stranger to various musical wonders, and assorted colleagues.

A leftover of the park’s military days, the cistern, on the upper hill at Fort Worden, is nearly 200 feet in diameter and 14 feet deep. It was built as a water supply system. For reasons of safety it is entirely covered except for a trap door that allows people, mostly musicians eager to perform or record, to enter the space. One of its principal attractions to musicians is the extremely long reverberation time — about 45 seconds.

“It’s a big echo chamber,” said Dempster.

By comparison, Benaroya Hall’s reverberation time is only about four seconds, St. James Cathedral’s six seconds and St. Mark’s Cathedral’s a little less. Grace Cathedral in San Francisco, Dempster said, is about 10 seconds.

Since only performing musicians will be in the cistern, loudspeakers will take the sound up to the audience seated on the grass above.

Dempster, who was on the University of Washington music faculty for 30 years, is not only familiar with Cage’s music, he also is an old hand with the cistern’s novel acoustical properties and ambience. While concerts in the space may be relatively rare, recordings are not.

His first recording there was with Pauline Oliveros, once a strong presence in the Seattle new music scene as a composer and performer and member of the UW and Cornish College faculties. Other recordings followed, including Cage’s music.

“The cistern is like an instrument you have to learn how to play,” Dempster said. “You whisper along the wall and your voice can be heard 186 feet away. But if you talk loudly, nothing is understood.”

Cage’s “Atlas eclipticalis” was commissioned by the Montreal Festival Society in 1961. It reached New York three years later when Leonard Bernstein and the New York Philharmonic did a performance at Lincoln Center. Microphones were attached to each instrument, the sounds of which were then channeled into six speakers placed throughout the hall. Many in the audience left during the performance. In subsequent concerts, orchestra musicians rebelled against the composer and the piece.

The work itself was inspired by a kind of celestial atlas published in 1958 by Czech astronomer Antonin Becvar. It is scored, according to the New Grove Dictionary of American Music, for “any ensemble from 86 instruments.”

There will not be 86 musicians in the cistern for tonight’s performance. There will be Dempster playing the trombone, conch shell and “assorted musical mayhem,” along with a handful of colleagues: Matt Kocmieroski, percussion; Walter Gray, cello; Seth Krimsky, bassoon.

Because Cage was an iconoclast and a revolutionary, such a disparity is not earth-shattering — it comes with the territory. Throughout his long, fruitful life, Cage was always doing the unexpected, improvising and exploring the world around him. Rules either were simply discarded or rewritten. Chance was an important element.

“He composed for every imaginable kind of instrument,” according to his 1992 obituary in The New York Times, “from standard orchestral strings to ‘prepared’ pianos (which originated at Cornish). … He wrote electronic and tape works, and works that involved only spoken texts. His often impish scoring, in fact, might include radios, toys, the sounds of water being sipped or vegetables being chopped. … And one of his most famous and provocative pieces, ‘4′33″,’ is 4 minutes and 33 seconds of silence, divided into three movements. Indeed, Mr. Cage considered virtually every kind of sound potentially musical.”

Arnold Schoenberg, with whom Cage studied in California before coming to Seattle, called him “not a composer but an inventor of genius.”

Dempster said there often is flexibility to Cage’s music, and the “Atlas,” one of his more performed works, is no exception.

“I did it a few times in the 1960s with friends at the Zen Center in San Francisco, and it has been done at Cornish. Matt was hoping we could record this version.”

The trombonist has spent nearly a lifetime in the world of avant-garde art. He grew up in Berkeley, Calif., and attended San Francisco State University. Quickly he found a home in avant-garde circles, an association that continues today. It was that interest that propelled his move in the late ’60s to Seattle, where he quickly became an integral part of the musical life of the city.

He played in traditional ensembles and ventured far beyond as well, experimenting with performances in unusual places and with a huge variety of sounds — some from the trombone and some from related instruments. Behind his considerable experimentation was a highly sophisticated technique and musical sensibility. He provided an informed and intelligent, often evocative, taste of contemporary music.

Maximize Information Flow: How to Make Successful Live Electronic Music

Sam Pluta is a composer and improviser living in New York City. He plays laptop both as a solo performer and with his two bands, Glissando Bin Laden, an improvising quintet, and exclusiveOr, an analog synth/laptop duo. He is technical coordinator for Wet Ink, a New York-based ensemble dedicated to performances of new music, and a faculty member at The Walden School Young Musicians Program, where he is computer music coordinator. His music has been commissioned and performed by groups and performers such as Wet Ink, Dave Eggar, ICE, Prism Quartet, RIOT, and Ha-Yang Kim. Pluta is a doctoral candidate at Columbia University, where he studies composition with George Lewis.

By Sam Pluta

Published: June 18, 2008


After taking over the video game market in 1985 with the NES, Nintendo revolutionized the video game industry once again in recent years with their Wii gaming console and its controller, the Wiimote. They decided early in the development process that rather than trying to make the most powerful, graphics-intensive gaming system they could, they would spend their energy thinking hard about the controller for the system: specifically, how to maximize information flow from the gamer to the game. At the same time, Apple Computer basically owns the portable MP3 market with their iPod and has gained a large share of the cell phone market with their iPhone. What makes these products different from similar products on the market? The answer is interface. The users are more easily able to communicate with their device, get done what they want to do, and look sexy doing it.

Tomes already exist that lay bare the techniques associated with making electronic music—Roads, Dodge, Rowe, etc.—but little has been theorized about what actually makes an effective live performance. This is most likely because the field of computer music, which dominates discussion in electronic circles, is so inherently technical and focused on new technology. This leaves little room for the discussion of performance, which, to be fair, has only recently become a mainstream element in the field.

In any performance, there is an information network that exists between the performer, the instrument, and the audience. By maximizing information exchange between objects in the performer/instrument/audience network—as Apple and Nintendo have done in their commercial products by maximizing user interaction—and creating interactions between the separate information streams of that network, an electronic composer/performer is more likely to create a compelling performance (what I can’t guarantee is that they will look sexy doing it).

Maximizing information flow does not necessarily mean maximum information flow. As Edward Tufte explains in his wonderful books on information display, more information does not necessarily mean better information. It is the subtle and thoughtful interweaving and synchronization of streams that makes for effective performance.

Information Exchange In Live Performance

In any performance situation, there are a number of possible data streams over which information can flow. These streams connect performer to instrument, instrument to performer, instrument to audience, performer to audience, audience to performer, and performer to performer. The most obvious stream of information in a music network is the audio stream. We will consider a focus on the audio stream to be the factor that defines our performance as music. Many streams exist, but keep in mind that a performance does not need to include every possible information stream nor have the maximum amount of information flowing over each stream. However, recognizing that these streams exist and thinking about how they interact can only help a composition.

In instrumental music, the framework for data flow through the performer/instrument/audience network is inherent in the setup. In other words, when we write acoustic music, we do not necessarily need to think about information exchange between objects in the system, because people have been thinking about this for thousands of years and have come up with a setup that works for this music. Live electronic music is in an infant and experimental state (similar to the state of the sound film in the 1920s and 1930s), and therefore the setup does not yet automatically work as performance. The common result of its failure is the “is he playing music, or did he just press play in iTunes and is now checking his email” syndrome. We have all seen a solo laptop performer up on stage just sitting there, looking blankly at the screen and twiddling the mouse every now and then. Focus on the audio stream is not enough, and a bad performance can simply be the result of the composer/performer being ignorant of the system in which they have contracted themselves to take part.

Performer/Sound Interface

A violinist naturally must interact physically with their instrument in order to make music. The information exchange between a violinist and their violin is enormous. The fingers of the left hand control the length of the string, passing information about pitch and vibrato to the instrument. The right hand holds the bow, which crosses over the strings, pulling on them and forcing them to vibrate. Depending on the quickness of the bow stroke, the bow pressure, where the bow crosses the string between the bridge and the fingerboard, and at what angle the bow strikes the string, the sound will be made up of different pitched and unpitched elements. Due to this high rate of information exchange, a good performer has unbelievable control over their instrument. More importantly, it is the interaction of multiple, separate, analog streams of information (bow pressure, speed, etc.), combining to make a single sonority, that makes the violin’s sound so dynamic.

In live electronic music, the performer must interact similarly with their instrument. In order for any electronic instrument to make dynamic, changing sounds, the performer must have some kind of interface through which he/she tells the machine what to do. The early inventors of electronic instruments understood how to make an expressive interface. The theremin is still one of the best electronic musical devices, giving each hand expressive control over the pitch and amplitude of a single oscillator. Don Buchla’s interfaces for his wonderful analog synthesizers maximized user activity by giving the user a series of knobs to twiddle and turn (mmm, and those deliciously noisy circuits!). These are physical devices, and generally, the more physical the device, the more information it can transfer (obviously you could argue with me on this, but let’s just let it fly for the moment). On the theremin, the user has analog control over two separate but interacting data streams: pitch and volume. On the Buchla, any one user can simultaneously be turning two or three knobs. Add a second player and that number is four to six. That means control over four to six pitched, timbral, or temporal elements, in all their analog glory, interacting to make a single sound. Compare this to our once-a-minute mouse clicker and you can start to see how this could be a problem in terms of performance.
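To make the point concrete, here is a minimal numpy sketch (the function name and the breakpoint-list format are mine, not from any instrument described above) of two independent control streams, pitch and amplitude, driving a single oscillator the way a theremin’s two antennae do:

```python
import numpy as np

SR = 44_100  # sample rate in Hz

def theremin_like(pitch_hz, amp, dur=2.0, sr=SR):
    """Render one oscillator driven by two independent continuous
    control streams: pitch (Hz) and amplitude (0..1). Each control
    is a coarse breakpoint list interpolated up to audio rate,
    loosely mimicking the theremin's two antennae."""
    n = int(dur * sr)
    t = np.linspace(0.0, dur, n, endpoint=False)
    # Interpolate each coarse control stream to audio rate.
    freq = np.interp(t, np.linspace(0.0, dur, len(pitch_hz)), pitch_hz)
    gain = np.interp(t, np.linspace(0.0, dur, len(amp)), amp)
    # Integrate frequency to phase so pitch glides stay click-free.
    phase = 2 * np.pi * np.cumsum(freq) / sr
    return gain * np.sin(phase)

# A glide from A3 up to A4 and back down while the volume hand
# fades in and out: two streams shaping one sound at once.
audio = theremin_like(pitch_hz=[220, 440, 330], amp=[0.0, 1.0, 0.0])
```

Because the two streams are interpolated and integrated separately, a pitch glide and an amplitude swell can overlap freely, which is exactly the kind of multi-stream interaction the theremin offers.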

More physicality also equals greater ability for the audience to relate to what is going on. In the brain, there is a set of neurons called mirror neurons. What is crazy about these neurons is that they fire in exactly the same manner when a subject is performing an action as when a subject is observing the same action. This means that watching a piano performance stimulates not only the regions of the brain associated with sound, but also the mirror neurons that would fire if the viewer were actually playing the piano. A lack of physical action in an electronic performance means that these mirror neurons are not firing, and the brain has less to do. But giving the brain less to do is not usually what a live performance is supposed to achieve, right? It is supposed to make the brain light up and start tossing ideas around like a middle-school food fight. In other words, physical action in performance is incredibly important to an audience member. Physical action is a data stream over which large quantities of information can travel, and by ignoring this, a performer is ignoring their audience.

Historically, there have been many approaches to gestural performance in electronic music. Jonty Harrison and many of his contemporaries perform their electronic tape compositions by diffusing them in space. Generally, the performer sits at a mixing console in the middle of a concert hall and moves the sound around the hall by manipulating a set of sliders. This can be quite a visceral experience, especially with a performer experienced in diffusion, and it adds a live dimension to an otherwise static medium. La Monte Young’s Dream House takes a completely different approach. Here, the audience is the performer. A series of high-pitched sine waves (2,000 to 4,000 Hz), tuned as very high partials of a very low fundamental, fill the room. By moving around the room, or just moving their head very slightly, the performer (audience member) changes the sound world in which they are engulfed. Interestingly, both these examples bring into view the spatial stream, an important information stream in much electronic music. In Karlheinz Stockhausen’s Mikrophonie I, six players interact with a tam-tam via microphones, filters, and potentiometers, constantly maintaining a physical interaction with the sound. In John Cage’s Imaginary Landscape No. 4, twenty-four players play twelve radios, giving each player physical control over either the volume and timbre of the sound or the tuning of the station.

The computer, the most popular electronic instrument today, was not designed to take large quantities of gestural information and convert it into data, yet 27 years after the first commercial mouse became available, we are finally at a point where processors and bus speeds are fast enough to deal with multiple continuous streams of data. Furthermore, people seem to be thinking about the problem, and this is why commercial products like the Wiimote and iPhone are appearing now. We are at the beginning of the age of the high-bit-rate continuous controller, and performers are able to get more information into the computer than ever before. A few examples of new continuous controllers are David Wessel’s SLAB, Sergi Jordà’s Reactable, Jeff Snyder’s MEAPBook, and, of course, Nintendo’s Wiimote. Aside from market solutions, live electronic performers have also gained more access in recent years to cheap hardware micro-controllers, like the Arduino, which allow users to more easily connect custom-built controllers to their computer, thus putting custom-built data flow literally in the palm of the performers’ hands.
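As an illustration of what such a controller pipeline involves, here is a hypothetical sketch (the names `scale` and `Smoother` and the parameter ranges are mine) of turning raw 10-bit sensor readings, like those an Arduino’s analog input produces, into a usable synthesis parameter:

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a raw controller reading into a synthesis
    parameter range, clamping to the input bounds first."""
    value = min(max(value, in_lo), in_hi)
    norm = (value - in_lo) / (in_hi - in_lo)
    return out_lo + norm * (out_hi - out_lo)

class Smoother:
    """One-pole lowpass over successive readings, so jittery
    sensor data becomes a steady continuous control stream.
    The first reading passes through unchanged."""
    def __init__(self, coeff=0.9):
        self.coeff = coeff   # closer to 1.0 = smoother, slower
        self.state = None
    def __call__(self, x):
        if self.state is None:
            self.state = x
        self.state = self.coeff * self.state + (1 - self.coeff) * x
        return self.state

# e.g. a 10-bit analog read (0..1023) driving a filter cutoff in Hz
smooth = Smoother(coeff=0.8)
cutoff = scale(smooth(512), 0, 1023, 20.0, 8000.0)
```

The two stages mirror what most micro-controller rigs do in practice: clean up the raw stream, then map it onto whatever musical range the patch needs.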

All of this talk of continuous controllers and hardware micro-controllers is exciting, but still by far the best way to get loads of data flowing from the performers to the electronics is through an audio signal. Once it enters the computer, an audio signal is a 16-bit pathway of information that can be used in a thousand different ways. It can be used as the sound that the setup will be making, as well as a controller to manipulate the sound that the setup is making. Mic’ing up an instrument is still the best way to get information into the performance system, and we will see below that this is also a double whammy, adding information flow in the visual realm. With all of these options available, the electronic performer of the future has no right to look bored or even have a moment to check their email while performing.
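One common way to use the audio signal as a controller is an envelope follower; the sketch below (a standard rectify-and-smooth design, not any specific system described in the article) tracks a signal’s amplitude so it can modulate some other parameter:

```python
import numpy as np

def envelope_follower(signal, sr=44_100, attack_ms=5.0, release_ms=50.0):
    """Track the amplitude envelope of an audio signal so it can be
    reused as a control stream (e.g. driving a gain or a filter
    cutoff). Full-wave rectification followed by an asymmetric
    one-pole filter: fast attack, slow release."""
    a = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    r = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(signal)
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        coeff = a if x > level else r
        level = coeff * level + (1 - coeff) * x
        env[i] = level
    return env

# A burst of noise followed by silence: the envelope rises during
# the burst and decays smoothly afterward, ready to be mapped onto
# any parameter of the electronics.
sig = np.concatenate([np.random.uniform(-1, 1, 4410), np.zeros(4410)])
env = envelope_follower(sig)
```

This is one small instance of the “double” use described above: the same microphone signal is both the sound being processed and a stream of control data extracted from it.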

1+1=3 (Fulfilling the Audio-Visual Contract)

What are the correlations between what the audience sees and what the audience hears? In Audio-Vision, Michel Chion’s seminal book on film sound (and required reading for anyone interested in the audio-visual arts), Chion defines “added value” as “the expressive and informative value with which a sound enriches a given image so as to create the definite impression, in the immediate or remembered experience one has of it, that this information or expression ‘naturally’ comes from what is seen, and is already contained in the image itself.” Film is, of course, a primarily visual medium and music is a primarily sonic medium, yet vision has the same effect on sound in live music as sound does in film. By syncing up elements of what the audience sees with what the audience hears, we enhance our sonic experience. Furthermore, with the correct balance of sound/image interaction, the composer is able to transcend audio and vision as separate elements, creating a third and distinct element that we will call the “audio-visual.”

Adding a visual element like a video to an electronic set significantly changes how the viewer/listener views/hears what is going on. Once a visual element is present, the viewer tries to make a connection between what is being heard and what is being seen. If there is no connection, the viewer will probably make one and start seeing connections that are not there (try watching Saturday morning cartoons and listening to Gesang der Jünglinge). If there is some connection, the viewer will think that the two elements perfectly interact (how many people do you know who have watched The Wizard of Oz with Pink Floyd’s album The Dark Side of the Moon serving as substitute soundtrack?). If there is nothing but connection between audio and visual elements (in film this is known as Mickey Mousing, a term referencing Walt Disney’s early animated sound films), the viewer will be left with only the audio-visual and will lose the separate dimensions of audio and visual. Usually this is humorous, and we associate this kind of audio-visual connection with The Three Stooges, Looney Tunes, Chaplin talkies, and modern classics like Dumb and Dumber. The human brain wants to see connections between audio and visual elements. Our survival as a species has relied on this being the case. However, in art, we are most impressed when we see a connection between elements, but are left with some level of mystery about what is going on, and are required to constantly try to seek that exact connection. It is at this point, with some connection and some level of mystery, where the audio-visual dimension exists.

And who can argue that the creation of a whole new dimension doesn’t add a, uh…whole new dimension to a performance? Furthermore, let’s take a step back from sound and image for a moment, and notice that what is at play here is the intertwining and interaction of data streams in general. In other words, it is the interaction of any number of separate data streams in the performer/instrument/audience network that creates an exponential increase in perception as experienced by the audience member.

Though video is one way to create the audio-visual dimension, it is not the only way. The historic solution is to add dancers. There is nothing more exciting than watching the human body interact with sound. (Perhaps this is once again due to stimulated mirror neurons?) Another solution is to add more musical performers. To understand this, let us return to our violin performer. Yes, the Bach Chaconne for solo violin is amazing, but there is something about watching the interaction of a string quartet that makes the experience of live music all the more exciting. Watching the players communicate adds a whole new dimension to the musical experience. The addition of another player (or many) adds n streams over which information can flow and interact, adding n streams of information for the audience to interpret, and this can do wonders for a live performance. The multi-laptop approach, as advocated by such groups as The Princeton Laptop Orchestra (PLOrk), is one way to increase performer interaction. With up to fifteen players on stage, PLOrk is as much a visual spectacle as it is a sonic one. As stated above, adding an instrumental performer to a setup is another way to increase gestural interaction while at the same time adding an audio-visual connection. Watching and listening to an instrumental performer play and then hearing how their sound interacts with the electronics is one of the most exciting ways to add visual data to electronic performance.

*
All this talk of information flow is not just a call for more, more, more information. There is such a thing as too much, and in fact the conscious brain can only handle so much. Of the millions of bits of information that the senses send to the brain per second, the conscious brain can only handle about 40. This is not to say that the audience member should be perfectly conscious of what is going on. As stated earlier, a certain level of mystery must be involved to keep the observer’s interest. Therefore, we must each use our own judgment in knowing where the saturation point is. By understanding information streams and their interaction, a composer/performer can maximize information flow in the performance network, creating a more interesting and effective composition. The subtle intertwining, mixing, separation, and interaction of streams is different in each performance situation, and it is the role of the composer/performer to find the correct mix of these elements for their desired results.

References/Further Reading


Michel Chion: Audio-Vision: Sound on Screen (Columbia University Press)
Tor Nørretranders: “The Bandwidth of Consciousness,” Chapter 6 from The User Illusion: Cutting Consciousness Down to Size (Penguin Press Science)
Charles Dodge and Thomas Jerse: Computer Music (Schirmer Books)
Curtis Roads: The Computer Music Tutorial (MIT Press)
Robert Rowe: Machine Musicianship (MIT Press)
Giacomo Rizzolatti: “Mirrors in the Mind” (Scientific American, November 2006)